Models Fail: Greenland and Iceland Land Surface Air Temperature Anomalies

I’m always amazed when global warming enthusiasts announce that surface temperatures in some part of the globe are warming faster than simulated by climate models. Do they realize they’re advertising that the models don’t work well for the area in question? And that their shouting it from the hilltops only helps to highlight the limited value of the models?

Greenland is a hotspot for climate change alarmism in more ways than one. A chunk of glacial ice sits atop Greenland, and as it melts, it contributes to the rise in sea levels. If surface temperatures in Greenland warm in the future, the warming rate will impact how quickly Greenland ice melts and how much it contributes to future sea levels. Greenland is also one of the locations around the globe where land surface air temperatures in recent decades have been warming faster than simulated by models. See Figure 1, which is a model-data comparison of the surface air temperature anomalies of Greenland and its close neighbor Iceland. Somehow, that modeling failure turns into proclamations of doom, with the Chicken Littles of the anthropogenic global warming movement proclaiming we’re going to drown because of rising sea levels.

Figure 1

A more detailed discussion of Figure 1: It compares the new and improved UK Met Office CRUTEM4 land surface air temperature anomalies for Greenland and Iceland (60N-85N, 75W-10W), for the period of January 1970 to February 2013, with the multi-model ensemble-member mean of the models stored in the CMIP5 archive, based on the scenario RCP6.0. As you’ll recall, the models in the CMIP5 archive are being used by the IPCC for its upcoming 5th Assessment Report. Based on the linear trends, since 1970, Greenland and Iceland surface air temperatures have been warming at a rate that’s about 65% faster than predicted by the models. That’s not a very good showing for the models. The disparity between the models and observations is even greater if we start the comparison in 1995. See Figure 2. During the last 18 years, Greenland and Iceland land surface temperatures have been warming at a rate that’s more than 2.5 times faster than simulated by the models. Obviously the modelers haven’t a clue about what causes land surface temperatures to warm there.
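For readers who want to check the trend numbers themselves, here’s a minimal sketch of the calculation, assuming the observed and model-mean anomalies have already been downloaded as monthly series. The file names below are placeholders, not the actual CRUTEM4 or CMIP5 downloads.

```python
# A minimal sketch of the trend comparison. The arrays/file names are
# placeholders, not the actual CRUTEM4 or CMIP5 data sources.
import numpy as np

def decadal_trend(monthly_anomalies):
    """Least-squares linear trend, converted to deg C per decade."""
    months = np.arange(len(monthly_anomalies))
    slope_per_month, _intercept = np.polyfit(months, monthly_anomalies, 1)
    return slope_per_month * 120.0  # 120 months in a decade

# Hypothetical usage:
# obs = np.loadtxt("crutem4_greenland_iceland_197001_201302.txt")
# mod = np.loadtxt("cmip5_rcp60_model_mean_197001_201302.txt")
# print("Observed trend :", round(decadal_trend(obs), 3), "deg C/decade")
# print("Modeled trend  :", round(decadal_trend(mod), 3), "deg C/decade")
# print("Ratio (obs/mod):", round(decadal_trend(obs) / decadal_trend(mod), 2))
```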

Figure 2

LOOKING AT THE RECENT WARMING PERIOD DOESN’T TELL THE WHOLE STORY

The data in Figure 1 cover a period of a little more than 40 years. Let’s look at a model-data comparison for the 40-year period before that, January 1930 to December 1969. Refer to Figure 3. During that multidecadal period, land surface air temperature anomalies in Greenland and Iceland actually cooled, and they cooled at a significant rate. The models, on the other hand, show only a minuscule long-term cooling from 1930 to 1969; the trend is basically flat. The models fail again.

Figure 3

In Figure 2, we looked at the trends from 1995 to present, so Figure 4 compares the models and data for the complementary earlier period, January 1930 to December 1994. The data show cooling at a significant rate, about 0.25 deg C per decade, but now the models show warming.

Figure 4

TWO MORE REASONS FOR THIS EXERCISE

In addition to showing you how poorly the models simulate the land surface temperatures of Greenland and Iceland, another point I wanted to make is that you have to be wary of the start year of any study of Greenland surface temperatures. Figure 5 compares the models and data for Greenland and Iceland from 1930 to present. In it, the data and model output have been smoothed with 13-month running-average filters to minimize the monthly variations. Greenland and Iceland obviously cooled for much of the period since 1930. The break point between cooling and warming is probably debatable. But the most outstanding feature in the data is the extreme dip and rebound in the early 1980s. That dip appears about the time of the eruption of El Chichón in Mexico, and there’s another dip in 1991, which is when Mount Pinatubo erupted. Mount Pinatubo was the stronger eruption, yet the 1982 dip is the deeper of the two, which makes it appear unusual. Bottom line: keep in mind that any study of the recent warming of Greenland and Iceland surface temperatures will be greatly impacted by the start year.
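For reference, a 13-month running-average filter is nothing exotic. Here’s a minimal sketch of a centered version; how the endpoints are handled here (left as NaN) is my assumption, not a description of the exact filter used for the graphs.

```python
# A minimal sketch of a centered 13-month running-average filter, the kind of
# smoothing applied to the monthly data and model output in Figure 5.
import numpy as np

def running_mean_13(monthly_series):
    x = np.asarray(monthly_series, dtype=float)
    smoothed = np.full_like(x, np.nan)
    half = 6  # 13-month window = the center month plus 6 months on each side
    for i in range(half, len(x) - half):
        smoothed[i] = x[i - half:i + half + 1].mean()
    return smoothed
```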

Figure 5

The other point: Based on the linear trends (of the monthly data, not the illustrated smoothed versions), land surface air temperature anomalies for Greenland and Iceland have not warmed since 1930. See Figure 6. Phrased another way, Greenland and Iceland surface temperatures have not warmed in 80+ years. But the models show they should have warmed about 1.3 deg C during that time. Granted, land surface temperatures now are warmer than they were in the 1930s and ’40s, but the models can’t simulate the cooling that took place from the 1930s to the latter part of the 20th century, and they can’t be used to explain the recent warming.

Figure 6

Again, the models show no skill at being able to simulate surface temperatures. No skill at all. Even for critical locations like Greenland.

STANDARD BLURB ABOUT THE USE OF THE MODEL MEAN

We’ve published numerous posts that include model-data comparisons. If history repeats itself, proponents of manmade global warming will complain in comments that I’ve only presented the model mean in the above graphs and not the full ensemble. In an effort to suppress their need to complain once again, I’ve borrowed parts of the discussion from the post Blog Memo to John Hockenberry Regarding PBS Report “Climate of Doubt”.

The model mean provides the best representation of the manmade greenhouse gas-driven scenario—not the individual model runs, which contain noise created by the models. For this, I’ll provide two references:

The first is a comment made by Gavin Schmidt (climatologist and climate modeler at the NASA Goddard Institute for Space Studies—GISS). He is one of the contributors to the website RealClimate. The following quotes are from the thread of the RealClimate post Decadal predictions. At comment 49, dated 30 Sep 2009 at 6:18 AM, a blogger posed this question:

If a single simulation is not a good predictor of reality how can the average of many simulations, each of which is a poor predictor of reality, be a better predictor, or indeed claim to have any residual of reality?

Gavin Schmidt replied with a general discussion of models:

Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will [be] uncorrelated across different realisations and when you average together many examples you get the forced component (i.e. the ensemble mean).

To paraphrase Gavin Schmidt, we’re not interested in the random component (noise) inherent in the individual simulations; we’re interested in the forced component, which represents the modeler’s best guess of the effects of manmade greenhouse gases on the variable being simulated.

The quote by Gavin Schmidt is supported by a similar statement from the National Center for Atmospheric Research (NCAR). I’ve quoted the following in numerous blog posts and in my recently published ebook. Sometime over the past few months, NCAR elected to remove that educational webpage from its website. Luckily, the Wayback Machine has a copy. NCAR wrote the following on that FAQ webpage, which had been part of an introductory discussion about climate models (my boldface):

Averaging over a multi-member ensemble of model climate runs gives a measure of the average model response to the forcings imposed on the model. Unless you are interested in a particular ensemble member where the initial conditions make a difference in your work, averaging of several ensemble members will give you best representation of a scenario.

In summary, we are definitely not interested in the models’ internally created noise, and we are not interested in the results of individual responses of ensemble members to initial conditions. So, in the graphs, we exclude the visual noise of the individual ensemble members and present only the model mean, because the model mean is the best representation of how the models are programmed and tuned to respond to manmade greenhouse gases.
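To illustrate the point with synthetic numbers (not CMIP5 output), here’s a toy sketch: build many realizations that share one forced signal but carry independent noise, and the ensemble mean lands much closer to the forced signal than any single run does.

```python
# A toy illustration with made-up numbers, not CMIP5 output: many realizations
# share one forced signal but carry independent noise, so averaging them
# recovers the forced component far better than any single run.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(480)                       # 40 years of monthly values
forced = 0.002 * months                       # an assumed linear forced signal
runs = np.array([forced + rng.normal(0.0, 0.3, months.size) for _ in range(50)])
ensemble_mean = runs.mean(axis=0)

rms = lambda err: float(np.sqrt(np.mean(err ** 2)))
print("RMS error of one run      :", round(rms(runs[0] - forced), 3))
print("RMS error of ensemble mean:", round(rms(ensemble_mean - forced), 3))
```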

CLOSING

Will the warming continue in Greenland and Iceland? If so, for how long? Or will the surface temperatures in Greenland and Iceland undergo another multidecadal period of cooling in the near future? The models show no skill at being able to simulate land surface air temperatures in Greenland and Iceland, so we can’t rely on them for predictions of the future.

We can add the surface temperatures of Greenland and Iceland to the growing list of climate model failures. The others included:

Scandinavian Land Surface Air Temperature Anomalies

Alaska Land Surface Air Temperatures

Daily Maximum and Minimum Temperatures and the Diurnal Temperature Range

Hemispheric Sea Ice Area

Global Precipitation

Satellite-Era Sea Surface Temperatures

Global Surface Temperatures (Land+Ocean) Since 1880

And we recently illustrated and discussed in the post Meehl et al (2013) Are Also Looking for Trenberth’s Missing Heat that the climate models used in that study show no evidence that they are capable of simulating how warm water is transported from the tropics to the mid-latitudes at the surface of the Pacific Ocean. So why should we believe they can simulate warm water being transported to depths below 700 meters without warming the waters above 700 meters?

I’ve got at least one more model-data post, and it’s about the land surface temperatures of another continent. There, the models show nearly double the observed rate of warming.

Looks like I’ve got a lot of ammunition for my upcoming show and tell book. It presently has the working title Climate Models are Crap with the subtitle An Illustrated Overview of IPCC Climate Model Incompetence.

It’s unfortunate that the IPCC and the government funding agencies were only interested in studies about human-induced global warming. They created their consensus the old-fashioned way: they paid for it. Yet the climate science community is still not able to differentiate between manmade warming and natural variability. They’re no closer to that goal than they were when they formed the IPCC. Decades of research efforts and their costs have been wasted by the single-mindedness of the IPCC and those doling out research funds.

They got what they paid for.


78 Comments
Theo Goodwin
July 7, 2013 11:02 am

John Day says:
July 7, 2013 at 9:28 am
“Are you not aware that noise removal by signal averaging is a standard signal-processing technique? It works quite well, to the extent that the variations are truly random (i.e. not systemic bias etc).”
How much background knowledge, context, are you willing to take for granted? If I gave you the raw output of 100 computer models, strings of numbers, would you be willing to say that the average has some meaning? Of course not, because you do not know what the models represent or any differences among them. Yet in reference to his spaghetti graphs of many models Schmidt does not tell us the differences among the models. And he assumes, without explanation, that all the models represent world climate. So, how are we supposed to separate noise from the model differences?
Climate science cannot be just computer models and statistical magic. At some point, some part of it has to connect to our experiential knowledge of climate.

John Day
July 7, 2013 11:24 am

@TheoGoodwin
How much background knowledge, context, are you willing to take for granted? If I gave you the raw output of 100 computer models, strings of numbers, would you be willing to say that the average has some meaning?
I confess that I know nothing about the particular models that Schmidt was averaging, so perhaps he didn’t do this correctly.
But I was specifically reacting to Bob Tisdale’s comment that implied that merely trying to separate signal from noise is wrong because it means you are “not interested” in the noise. That seemed “wrong headed” to me, but perhaps I am misunderstanding what Tisdale wrote.

Theo Goodwin
July 7, 2013 12:05 pm

John Day says:
July 7, 2013 at 11:24 am
“But I was specifically reacting to Bob Tisdale’s comment that implied that merely trying to separate signal from noise is wrong because it means you are “not interested” in the noise. That seemed “wrong headed” to me, but perhaps I am misunderstanding what Tisdale wrote.”
As I wrote earlier, would you be willing to take the raw data from 100 unidentified models and apply your formula to separate signal from noise? Of course not. That is what Schmidt asks us to do. He cares nothing for the differences among the models. He cares nothing for the relative validation of models. He makes no effort to present information about validation or differences.
What he offers us is a perfectly circular argument. Assume the models are validated with regard to climate and assume that the differences among them are not important then average the models to separate climate signal from noise. Obviously, he has assumed his conclusion, namely, that the models are validated and in the same ballpark. Typical Alarmist reasoning.

John Day
July 7, 2013 2:43 pm

@TheoGoodwin
As I wrote earlier, would you be willing to take the raw data from 100 unidentified models and apply your formula to separate signal from noise? Of course not. That is what Schmidt asks us to do. He cares nothing for the differences among the models. He cares nothing for the relative validation of models. He makes no effort to present information about validation or differences.
Q: What’s difference between “raw data” taken from a _model_ and “raw data” taken from _direct measurement_?
A: None. No reliable scientific measurements can be done without using a model.
Some models are so simple and reliable that we take them for granted and don’t realize that they are indeed “models”. So, measurements made using a common yardstick, for example, are only as reliable as the graduation markings inscribed on them, which are subject to error in design/manufacturing, or later through warping or expansion/contraction of the stick itself. The set of instructions which determines the design of the graduation spacings and the choices of construction methods and materials comprises the “model”.
Even if I could somehow manufacture a perfect yardstick, perfectly inscribed and guaranteed not to change its dimensions, the “readings” from it would still be subject to human errors of observation. E.g. I could misread “6” as a “9”.
And, even if I could somehow validate the readings (perhaps by averaging a series of measurements), my “validated yardstick” would certainly return bogus results if I tried to use it for measuring the diameter of a human hair, or the distance between New York and Paris.
The same arguments can be applied to any other instrument readings, e.g. thermometers and clocks used in climate studies. They all rely on “internal models” based on the thermal expansion or conductivity properties of materials, and simulation of time using “click and tick” emulations. No such instrument, made by humans, can be validated for all possible ranges of physical measurements. Some of these models (“proxies”) are more reliable than others. But they all return incorrect data when not “used as directed”.
George Box was right: all models are wrong. Some are useful.
So, yes Theo, I actually would be willing to take raw data from Schmidt’s different models, and try to use them for predictive modeling. And I wouldn’t be too concerned about any ‘validation’ that he may or may not have performed. That’s because validations are rather limited in scope, carrying no rigid guarantees for future applicability. I also would not be too concerned, up front, about “differences among the models”, because that might just be ‘noise’. (Which I will be ‘highly interested’ in, for the purpose of segmenting and eliminating it.)
I _would_ be concerned about “harmonizing” the data from the different models, in terms of temporal and spatial synchronization, physical units of measure, etc., so that the data can be consistently interpreted as an ensemble of data.
From my “data modeler’s” perspective, the only important attribute of data, ultimately, is the accuracy of explanations and predictions made by models using this data.
So does Schmidt’s (or anybody’s) “ensemble model” work better than the individual models it is composed of? We can test this hypothesis by observing the ensemble model’s skill at predicting the future (no data available) by making it predict the past (tons of data available). If a model consistently scores high on predicting the past, we can be somewhat confident that it will continue to perform well at least into the near future (assuming boundary conditions don’t change too much etc).
My understanding is that the current crop of NOAA/IPCC climate models has had rather poor performance in this regard. But we should not bash research just because an experiment (or two) has failed. That’s the nature of science.
Preliminary models are often completely wrong and need to be retuned or replaced until they work reliably at predicting/explaining the past. Then we might finally have one of those so-called “useful models”.
😐
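A minimal sketch of the hindcast test John Day describes above: score each individual run and the ensemble mean against observations over a past period and compare the errors. The arrays here are hypothetical placeholders, not the CMIP5 archive.

```python
# A minimal sketch of the hindcast scoring described in the comment above.
# `runs` (one row per model run) and `obs` are hypothetical placeholder arrays.
import numpy as np

def rmse(simulated, observed):
    return float(np.sqrt(np.mean((np.asarray(simulated) - np.asarray(observed)) ** 2)))

def hindcast_scores(runs, obs):
    """Return each run's RMSE and the ensemble mean's RMSE against observations."""
    per_run = [rmse(run, obs) for run in runs]
    ensemble = rmse(np.mean(runs, axis=0), obs)
    return per_run, ensemble

# Hypothetical usage:
# per_run, ensemble = hindcast_scores(runs, obs)
# print("best individual run RMSE:", round(min(per_run), 3))
# print("ensemble-mean RMSE      :", round(ensemble, 3))
```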

Terry Oldberg
July 7, 2013 4:07 pm

John Day:
Contrary to your assertion, the CMIP5 ensemble model does not make predictions. It makes projections. The word “prediction” has a distinct meaning and this meaning differs from the meaning of the word “projection.”
When the two words are treated as synonyms, the result is to create a polysemic term, that is, a term with more than one meaning. When such a term is used in making an argument, this argument is an example of an “equivocation.” By logical rule, a proper conclusion may not be drawn from an equivocation. To draw a conclusion from an equivocation is the deceptive argument that is known to philosophers as the “equivocation fallacy.” Participating climatologists use the equivocation fallacy in creating the misimpression that their pseudoscience is a science ( http://wmbriggs.com/blog/?p=7923 ).
By the way, George Box’s claim is incorrect. Using available technology, it is possible to build a model that is not wrong.

Theo Goodwin
July 7, 2013 4:21 pm

John Day says:
July 7, 2013 at 2:43 pm
‘@TheoGoodwin
“As I wrote earlier, would you be willing to take the raw data from 100 unidentified models and apply your formula to separate signal from noise? Of course not. That is what Schmidt asks us to do. He cares nothing for the differences among the models. He cares nothing for the relative validation of models. He makes no effort to present information about validation or differences. ”
Q: What’s difference between “raw data” taken from a _model_ and “raw data” taken from _direct measurement_?
A: None. No reliable scientific measurements can be done without using a model.’
I should not have used the word ‘data’. I meant to use the word ‘output’. By “raw,” I meant only that the datat cannot be identified by you. The source of two sets of output might be a model of climate and a model of erosion throughout the world. The point being that Schmidt refuses to discuss the differences among the models averaged. How can we identify the noise if we do not know the differences?
As regards the remainder of your reply, I will cut to the chase and address the following:
“So does Schmidt’s (or anybody’s) “ensemble model” work better than the individual models it is composed of? We can test this hypothesis by observing the ensemble model’s skill at predicting the future (no data available) by making it predict the past (tons of data available). If a model consistently scores high on predicting the past, we can be somewhat confident that it will continue to perform well at least into the near future (assuming boundary conditions don’t change too much etc).”
The evidence for the failure of his “ensemble model” is clear as a bell. In his spaghetti graph of the ensemble, the model closest to the bottom of the spaghetti reads higher than observed temperatures but is closer than all other models to observed temperatures. The model that is second from the bottom does second best and so on for all the models. The “ensemble average” is in the middle. The paradoxes that follow from this are endless. I will leave you with just one.
How can it be that the model at the bottom of the spaghetti is closest to observed temperatures yet contains as much or more noise than all the other models? In other words, how can the model on the bottom of the spaghetti graph be both closest to observed temperatures yet farthest from the “ensemble mean” that, according to Schmidt, most accurately shows the true signal?

Theo Goodwin
July 7, 2013 4:24 pm

Correction:
“By “raw,” I meant only that the datat cannot be identified by you.”
should read:
By “raw,” I meant only that the output cannot be identified by you.
I have a glitchy keyboard. Pardon me.

Dan Pangburn
July 7, 2013 4:52 pm

Terry O – It’s not a theory, it’s a calculation. No one else has done anywhere near that well.

Paul Penrose
July 7, 2013 5:17 pm

Gavin can prattle on about “random noise” which “cancels out when averaged” all he wants. But until someone proves that this “noise” has a normal distribution, averaging is completely inappropriate. This is just stats 101, people.

John Day
July 7, 2013 5:44 pm

@TheoGoodwin
> The evidence for the failure of his “ensemble model” is clear as a bell.
Ok, I have no problem with knowing that a particular ensemble model has failed. I wasn’t trying to argue that it must always succeed, only that researchers shouldn’t be demonized for the failure of their experiments.
However, a government researcher’s failure to provide sufficient information about the experiments, such that other researchers can attempt to duplicate (or even fix) them, is another matter, and is hard to justify unless the work is classified or proprietary (which I don’t think would apply to climate research).
What I was really trying to understand was Tisdale’s paraphrasing of Schmidt, implying that attempting to separate the noise component from a signal should be considered being “not interested” in the noise. I think if you’re concerned enough to try to remove noise, then that qualifies as “being interested”.
Oldberg
>Contrary to your assertion, the CMIP5 ensemble model does not make predictions. It makes projections.
That’s sarcasm, right? If not, can you explain the difference between a “prediction” and a “projection”, and why this is important for modeling purposes? (For extra credit, contrast both of these terms with “forecast”, and justify these distinctions.)
>By the way, George Box’s claim is incorrect. Using available technology, it is possible to build a model that is not wrong.
More trollish humor? If not, please tell me where I can get this technology. I’ve been modeling for several decades, and have yet to find a model that is never wrong.
What’s that? Oh, you meant to say “a model that is always right, some of the time”. Like a stopped clock, for example?
😐

Terry Oldberg
Reply to  John Day
July 7, 2013 9:20 pm

John Day:
I explore the distinction between a “projection” and a “prediction” and the logical necessity for maintenance of this distinction in the peer reviewed article at http://wmbriggs.com/blog/?p=7923 .
In implying that I am a troll, you lower yourself to making an ad hominem argument. Such an argument is illogical and misleading.
Box’s claim that all models are wrong is refuted by a single example of a model that is non-wrong; by “non-wrong” I mean that the claims of this model have been tested without being refuted. Modus Ponens is one example. Thermodynamics is a second example. Quantum theory is a third example. Shannon’s theory of communication is a fourth example. For a tutorial on a technology that is particularly adept in generating these and other non-wrong models, see the peer reviewed articles at http://judithcurry.com/2010/11/22/principles-of-reasoning-part-i-abstraction/ , http://judithcurry.com/2010/11/25/the-principles-of-reasoning-part-ii-solving-the-problem-of-induction/ and http://judithcurry.com/2011/02/15/the-principles-of-reasoning-part-iii-logic-and-climatology/ .

Barton Paul Levenson
July 7, 2013 5:47 pm

BT: the planet is greening not browning, Barton Paul Levenson. Haven’t you been paying attention?
BPL: Yes. In fact, I’m studying just that. The planet is not “greening.” The fraction of Earth’s land surface in “severe drought” (PDSI -3.0 or below) has doubled since 1970, to about 20%. What’s more, the increase is accelerating.

milodonharlani
July 7, 2013 6:37 pm

Barton Paul Levenson says:
July 7, 2013 at 5:47 pm
Why would you tell such a blatantly outrageous lie, so easily checked?
http://drought.wcrp-climate.org/workshop/Talks/Shrier.pdf
Even the shamelessly cooked books of CRU & IPCC show you up.

John Day
July 7, 2013 7:57 pm

Bob Tisdale says:
July 7, 2013 at 6:53 pm
John Day says: “So, I guess I don’t understand how wanting to get rid of noise equates to being ‘not interested’ in noise.”
And I don’t understand your statement.

Perhaps I misunderstood your paraphrasing. I thought you were disagreeing with Schmidt’s assertion that averaging removes the random noise component, thus improving the estimate of the deterministic component of the model.
If you were merely agreeing with that then I was mistaken. Sorry for confusing your intent.

Terry Oldberg
July 7, 2013 9:32 pm

Dan Pangburn:
When you say “It’s not a theory, it’s a calculation,” I don’t know what you mean. Please amplify.

Barton Paul Levenson
July 8, 2013 3:11 am

My statement is based on time-series analysis of the PDSI, as revised to use the Penman-Monteith equation for evapotranspiration rather than the older Thornthwaite equation. “Severe drought” averaged about 10% of Earth’s land surface from 1948 to 1970, and since then has risen, irregularly, to about 20%. The trend is up, statistically significant, and accelerating. What’s more, I can explain 86% of the variance using air temperatures and past drought. I’m currently writing a paper on the subject.

John Day
July 8, 2013 5:28 am

@TerryOldberg
Box’s claim that all models are wrong is refuted by a single example of a model that is non-wrong; by “non-wrong” I mean that the claims of this model have been tested without being refuted. Modus Ponens is one example.
You said one could “build a model that was not wrong”. That’s not the same as saying a model has been “tested without being refuted” so far. The next test may falsify it. So you haven’t falsified Box’s dictum.
Box was referring to models that make predictions based on observations or measurements. All such models are necessarily “idealized” approximations, therefore not always correct.
For example, the notion of a “circle” which perfectly obeys the model “r²=x²+y²” is an idealized concept. No such object exists in the real world. The orbits of physical bodies or shapes of floating globs of mercury are always perturbed slightly by other objects (including the observer) such that a “perfect circle” can’t really exist, except in our minds. But this simple formula for a circle is still very useful, nevertheless. Close enough for most real-world applications.
Modus Ponens is a law of logic which states that if we know that “A logically implies B”, then knowing “A is true” proves that “B is true”. Problem is, in the real world, we don’t have access to such infallible truths. So applying Modus Ponens to propositions like “Smoke implies Fire” will not produce absolute truth.
The closest proposition to Absolute Truth that I’ve found so far is: “There are unused icons on your desktop”. But the logical implications of this truth are still unclear to me.
😐

Terry Oldberg
Reply to  John Day
July 8, 2013 11:32 am

John Day:
Box claims to know that “all models are wrong.” Science, though, contains numerous models not known to be wrong. These models are the scientific theories. As the set of scientific theories is not empty it may be concluded that Box’s claim is incorrect. It is incorrect even though one or more of today’s scientific theories may be found wrong in future testing.
Thermodynamics is an example of a scientific theory. Abstraction is a generally useful idea in theorizing and is one of the ideas that yields thermodynamics. Let A1, A2,… represent descriptions of a system that provide microscopic detail. In thermodynamics, these states are called “microstates.” Let B represent a description of a system that provides macroscopic detail. In thermodynamics, this state is called the “macrostate.” The macrostate is formed from the microstates by abstracting (removing) some of the details from the description. An abstraction may be formed by placement of the microstates in an inclusive disjunction. The resulting macrostate description is: A1 OR A2 OR…
Usually, in theorizing, more than one abstraction is a logical possibility. In this circumstance, the principles of entropy minimization and maximization distinguish the one correct abstraction from the many incorrect ones. Thermodynamics has entropy maximization embedded in it as the second law of thermodynamics. The second law states that the entropy of the inferred microstate is maximized under the constraint of energy conservation. The entropy of the inferred microstate is the missing information about the microstate, per event. Entropy minimization and maximization are principles of reasoning.

barry
July 8, 2013 9:20 am

I fully support the scientific logic of this thread. Increases in atmospheric CO2 are evenly distributed in the atmosphere. The potential for CO2 forcing due to the increase in atmospheric CO2 should therefore be more or less equal by latitude.

The tropics have much more water vapour than the poles, so the impact of a CO2 increase varies by latitude: CO2 has more impact where concentrations of other greenhouse gases are lower (less WV at the poles). Polar amplification, especially at the North Pole in the shorter term, is what is anticipated. Polar data should be comprehensive to test this, not just from one region.
The South Pole is relatively thermally isolated from the rest of the planet by circumpolar winds and ocean currents. Amplified warming can be seen at this time outside that zone, but not within it.

Johanus
July 8, 2013 12:43 pm

Terry Oldberg said:
“Science, though, contains numerous models not known to be wrong. These models are the scientific theories.”
You are either a troll, or a dunce. Take your pick.
All scientific “theories” are just that: theories.
http://en.wikipedia.org/wiki/Scientific_method#Properties_of_scientific_inquiry
“Scientific knowledge is closely tied to empirical findings, and always remains subject to falsification if new experimental observation incompatible with it is found. That is, no theory can ever be considered completely certain, since new evidence falsifying it might be discovered.”

Terry Oldberg
Reply to  Johanus
July 8, 2013 2:20 pm

Johanus:
By attacking my character, you’ve made an ad hominem argument. An argument of this kind is illogical, irrelevant and illegal.
The material that you quote from Wikipedia is consistent with my understanding and with the quote that you attribute to me. Thus, aside from your inaccurate and defamatory characterizations of me, there seem to be no areas of disagreement between us.

Dan Pangburn
July 8, 2013 6:34 pm

Terry O – I mean what I said. Perhaps you should try reading the papers at all of the links all the way through. Maybe http://lowaltitudeclouds.blogspot.com/ will help.

Brian H
July 14, 2013 7:07 pm

Christopher Essex, professor of Applied Mathematics at the University of Western Ontario,… discusses the folly of attempting to find scientific meaning in an ensemble of un-validated climate models.

“Ensemble averaging does not cleanse models of their fundamental, enormously challenging deficiencies no matter how many realisations are included. As more and more model realisations are rolled into some ad hoc averaging process, there is no mathematical reason whatsoever why the result should converge to the right answer, let alone [or even] converge at all in the limit. Why ever would anyone but the most desperate of minds dare to hope otherwise?”