Models Fail: Greenland and Iceland Land Surface Air Temperature Anomalies

I’m always amazed when global warming enthusiasts announce that surface temperatures in some part of the globe are warming faster than simulated by climate models. Do they realize they’re advertising that the models don’t work well for the area in question? And that their shouting it from the hilltops only helps to highlight the limited value of the models?

Greenland is a hotspot for climate change alarmism in more ways than one. A chunk of glacial ice sits atop Greenland, and as it melts, it contributes to the rise in sea levels. If surface temperatures in Greenland warm in the future, the warming rate will impact how quickly the Greenland ice melts and how much it contributes to future sea levels. Greenland is also one of the locations around the globe where land surface air temperatures in recent decades have been warming faster than simulated by models. See Figure 1, which is a model-data comparison of the surface air temperature anomalies of Greenland and its close neighbor Iceland. Somehow, that modeling failure turns into proclamations of doom, with the Chicken Littles of the anthropogenic global warming movement proclaiming we’re going to drown because of rising sea levels.

Figure 1

A more detailed discussion of Figure 1: it compares the new and improved UK Met Office CRUTEM4 land surface air temperature anomalies for Greenland and Iceland (60N-85N, 75W-10W) for the period of January 1970 to February 2013 with the multi-model ensemble-member mean of the models stored in the CMIP5 archive, based on scenario RCP6.0. As you’ll recall, the models in the CMIP5 archive are being used by the IPCC for its upcoming 5th Assessment Report. Based on the linear trends since 1970, Greenland and Iceland surface air temperatures are warming at a rate that’s about 65% faster than predicted by the models. That’s not a very good showing for the models. And the disparity between the models and observations is even greater if we start the comparison in 1995; see Figure 2. During the last 18 years, Greenland and Iceland land surface temperatures have been warming at a rate that’s more than 2.5 times faster than simulated by the models. Obviously the modelers haven’t a clue about what causes land surface temperatures to warm there.

Figure 2
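
For readers who want to check that sort of trend arithmetic for themselves, here is a minimal sketch of the calculation. The arrays are placeholders (assumptions standing in for the CRUTEM4 regional average and the CMIP5 multi-model mean), not the actual series behind Figures 1 and 2; the method is simply an ordinary least-squares fit converted to deg C per decade.

```python
import numpy as np

def decadal_trend(monthly_anomalies):
    """Least-squares linear trend of a monthly series, in deg C per decade."""
    t_years = np.arange(len(monthly_anomalies)) / 12.0
    slope_per_year = np.polyfit(t_years, monthly_anomalies, 1)[0]
    return slope_per_year * 10.0

# Hypothetical monthly anomaly series (Jan 1970 to Feb 2013 is 518 months).
rng = np.random.default_rng(0)
months = 518
obs = 0.004 * np.arange(months) + rng.normal(0.0, 0.8, months)     # stand-in for CRUTEM4
model = 0.0024 * np.arange(months) + rng.normal(0.0, 0.2, months)  # stand-in for the model mean

obs_trend = decadal_trend(obs)
model_trend = decadal_trend(model)
print(f"Observations: {obs_trend:.2f} deg C/decade, model mean: {model_trend:.2f} deg C/decade")
print(f"Observed trend is {obs_trend / model_trend:.1f} times the modeled trend")
```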

LOOKING AT THE RECENT WARMING PERIOD DOESN’T TELL THE WHOLE STORY

The data in Figure 1 cover a period of a little more than 40 years. Let’s look at a model-data comparison for the 40-year period before that, January 1930 to December 1969. Refer to Figure 3. During that multidecadal period, land surface air temperature anomalies in Greenland and Iceland actually cooled, and they cooled at a significant rate. The models, on the other hand, show only a minuscule long-term cooling from 1930 to 1969; their trend is basically flat. The models fail again.

Figure 3

In our example in Figure 2, we looked at the trends from 1995 to present, so Figure 4 compares the models and data for the complementary earlier period, January 1930 to December 1994. The data show cooling at a significant rate, about 0.25 deg C per decade, but now the models show warming.

Figure 4

TWO MORE REASONS FOR THIS EXERCISE

In addition to showing you how poorly the models simulate the land surface temperatures of Greenland and Iceland, I wanted to make another point: you have to be wary of the start year of any study of Greenland surface temperatures. Figure 5 compares the models and data for Greenland and Iceland from 1930 to present. In it, the data and model output have been smoothed with 13-month running-average filters to minimize the monthly variations. Greenland and Iceland obviously cooled for much of the period since 1930. The break point between cooling and warming is probably debatable. But the most outstanding feature in the data is the extreme dip and rebound in the early 1980s. That dip appears at about the time of the eruption of El Chichon in Mexico, and there’s another dip in 1991, which is when Mount Pinatubo erupted. Mount Pinatubo was the stronger eruption, so the size of the 1982 dip appears unusual. Bottom line: keep in mind that any study of the recent warming of Greenland and Iceland surface temperatures will be greatly impacted by the start year.

Figure 5
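
For anyone who wants to reproduce that smoothing, a centered 13-month running mean is straightforward. Here is a minimal sketch; the input series is a made-up stand-in, not the CRUTEM4 or CMIP5 data.

```python
import numpy as np

def running_mean_13(monthly_anomalies):
    """Centered 13-month running mean; the first and last 6 months are trimmed."""
    window = np.ones(13) / 13.0
    return np.convolve(monthly_anomalies, window, mode="valid")

# Hypothetical noisy monthly series, 20 years long.
rng = np.random.default_rng(1)
raw = rng.normal(0.0, 1.0, 240)
smoothed = running_mean_13(raw)
print(len(raw), len(smoothed))  # 240 -> 228, since 6 months are lost at each end
```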

The other point: based on the linear trends (of the monthly data, not the illustrated smoothed versions), land surface air temperature anomalies for Greenland and Iceland have not warmed since 1930. See Figure 6. Phrased another way, Greenland and Iceland surface temperatures have not warmed in 80+ years. But the models show they should have warmed about 1.3 deg C during that time. Granted, land surface temperatures now are warmer than they were in the 1930s and ’40s, but the models can’t simulate the cooling that took place from the 1930s to the latter part of the 20th century, and they can’t be used to explain the recent warming.

Figure 6

Again, the models show no skill at being able to simulate surface temperatures. No skill at all. Even for critical locations like Greenland.

STANDARD BLURB ABOUT THE USE OF THE MODEL MEAN

We’ve published numerous posts that include model-data comparisons. If history repeats itself, proponents of manmade global warming will complain in comments that I’ve only presented the model mean in the above graphs and not the full ensemble. In an effort to suppress their need to complain once again, I’ve borrowed parts of the discussion from the post Blog Memo to John Hockenberry Regarding PBS Report “Climate of Doubt”.

The model mean provides the best representation of the manmade greenhouse gas-driven scenario—not the individual model runs, which contain noise created by the models. For this, I’ll provide two references:

The first is a comment made by Gavin Schmidt (climatologist and climate modeler at the NASA Goddard Institute for Space Studies—GISS). He is one of the contributors to the website RealClimate. The following quotes are from the thread of the RealClimate post Decadal predictions. At comment 49, dated 30 Sep 2009 at 6:18 AM, a blogger posed this question:

If a single simulation is not a good predictor of reality how can the average of many simulations, each of which is a poor predictor of reality, be a better predictor, or indeed claim to have any residual of reality?

Gavin Schmidt replied with a general discussion of models:

Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will uncorrelated across different realisations and when you average together many examples you get the forced component (i.e. the ensemble mean).

To paraphrase Gavin Schmidt, we’re not interested in the random component (noise) inherent in the individual simulations; we’re interested in the forced component, which represents the modeler’s best guess of the effects of manmade greenhouse gases on the variable being simulated.

The quote by Gavin Schmidt is supported by a similar statement from the National Center for Atmospheric Research (NCAR). I’ve quoted the following in numerous blog posts and in my recently published ebook. Sometime over the past few months, NCAR elected to remove that educational webpage from its website. Luckily the Wayback Machine has a copy. NCAR wrote on that FAQ webpage that had been part of an introductory discussion about climate models (my boldface):

Averaging over a multi-member ensemble of model climate runs gives a measure of the average model response to the forcings imposed on the model. Unless you are interested in a particular ensemble member where the initial conditions make a difference in your work, averaging of several ensemble members will give you best representation of a scenario.

In summary, we are definitely not interested in the models’ internally created noise, and we are not interested in the results of individual responses of ensemble members to initial conditions. So, in the graphs, we exclude the visual noise of the individual ensemble members and present only the model mean, because the model mean is the best representation of how the models are programmed and tuned to respond to manmade greenhouse gases.
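
To make that point concrete, here is a minimal synthetic sketch of why the ensemble mean is used: if each run is a shared forced signal plus independent noise, averaging the runs suppresses the noise and leaves the forced component. The shapes and values below are illustrative assumptions, not CMIP5 output.

```python
import numpy as np

rng = np.random.default_rng(2)
n_runs, n_months = 30, 516
forced = np.linspace(0.0, 1.0, n_months)               # forced signal shared by all runs
noise = rng.normal(0.0, 0.5, size=(n_runs, n_months))  # run-specific internal variability
ensemble = forced + noise                               # each row is one model run

ensemble_mean = ensemble.mean(axis=0)

# Residual noise in the mean shrinks roughly as 1/sqrt(n_runs).
print(np.std(ensemble[0] - forced), np.std(ensemble_mean - forced))
```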

CLOSING

Will the warming continue in Greenland and Iceland? If so, for how long? Or will the surface temperatures in Greenland and Iceland undergo another multidecadal period of cooling in the near future? The models show no skill at being able to simulate land surface air temperatures in Greenland and Iceland, so we can’t rely on them for predictions of the future.

We can add the surface temperatures of Greenland and Iceland to the growing list of climate model failures. The others included:

Scandinavian Land Surface Air Temperature Anomalies

Alaska Land Surface Air Temperatures

Daily Maximum and Minimum Temperatures and the Diurnal Temperature Range

Hemispheric Sea Ice Area

Global Precipitation

Satellite-Era Sea Surface Temperatures

Global Surface Temperatures (Land+Ocean) Since 1880

And we recently illustrated and discussed in the post Meehl et al (2013) Are Also Looking for Trenberth’s Missing Heat that the climate models used in that study show no evidence that they are capable of simulating how warm water is transported from the tropics to the mid-latitudes at the surface of the Pacific Ocean. So why should we believe they can simulate warm water being transported to depths below 700 meters without warming the waters above 700 meters?

I’ve got at least one more model-data post, and it’s about the land surface temperatures of another continent. There, the models show nearly double the observed rate of warming.

Looks like I’ve got a lot of ammunition for my upcoming show and tell book. It presently has the working title Climate Models are Crap with the subtitle An Illustrated Overview of IPCC Climate Model Incompetence.

It’s unfortunate that the IPCC and the government funding agencies were only interested in studies about human-induced global warming. They created their consensus the old-fashioned way: they paid for it. Now the climate science community is still not able to differentiate between manmade warming and natural variability. They’re no closer to that goal than they were when they formed the IPCC. Decades of research efforts and their costs have been wasted by the single-mindedness of the IPCC and those doling out research funds.

They got what they paid for.


Gavin Schmidt replied with a general discussion of models:
Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will uncorrelated across different realisations and when you average together many examples you get the forced component (i.e. the ensemble mean).
===============
Climate models represent weather over time. The variability in weather is not noise, it is chaos. While chaos appears random, it is not. You cannot average chaos to get the forced component.
This is why the ensemble mean is worthless as a prediction of future climate. As time increases the chaotic component of the signal does not average to zero as does random noise. Rather, it causes the temperature to vary unpredictably, making it impossible to separate the forced components from the chaotic components.

Roberto

In most statistics, one of the most important things you have to publish is the significance of the measurements. Nobody says “this is significant” without saying HOW significant. How was that statement validated?
The same with normal computer models. You validate them again and again. You don’t go around saying they passed or they failed without having the data to show HOW they passed or failed. The test cases. Precisely what did the model say, and how well was that supposed to match reality? What were the criteria?
I can’t imagine a computer professional who doesn’t know that and live it.
But the alarmist fans have a clearly non-normal approach. “These models are marvelous”, with little clue how they know that or how marvelous they are supposed to be before we trust them.

The climate models do, however, have value when interpreted correctly. What the spaghetti graph of climate models does demonstrate is the natural variability of the system. The models are telling us that due to natural variability alone, any one of the many, many different results is possible, and there is no way to know which one will actually occur.
What the spaghetti graph of climate models shows is that due to natural variability alone, we may get wide increases or decreases in temperature, with the identical forcings. This variability, however, is not “noise”; it is chaos. Thus, the reliability of the mean does not improve over time.

Roberto says:
July 6, 2013 at 7:26 am
I can’t imagine a computer professional who doesn’t know that and live it.
============
Commercial software lives and dies depending on whether its results match the NEEDS of its customers. Academic software however lives and dies depending on how well its results match the BELIEFS of those controlling the grant monies.
Thus, commercial software requires validation to ensure that it is working correctly. However, in academic software such validation will work against you if the beliefs of those controlling the grant monies do not match reality. Thus, there is no value in validating academic software and good reasons not to.

ferd berple says:
July 6, 2013 at 7:32 am

The climate models do, however, have value when interpreted correctly. What the spaghetti graph of climate models does demonstrate is the natural variability of the system. The models are telling us that due to natural variability alone, any one of the many, many different results is possible, and there is no way to know which one will actually occur.

Unless there’s code in the models that tries to model that natural variability, I’d argue that the models are merely showing the chaotic behavior of what they do model.
Once they are better at modeling things like the PDO, the NAO, and water vapor to cloud transitions, and everything else that makes up “natural variability” then I might agree they’re modeling that. Until then, I see that phrase more as a screen for things the modelers don’t understand than something visible in model output.

Jim G

“And that their shouting it from the hilltops only helps to highlight the limited value of the models?”
Mr. Berple has put his finger upon the real issue. Climate is a multivariate chaotic system, and “limited value” is a vast overstatement with regard to the value of existing models of it. Their abuse of statistics is monumental.

Eyeballing figure 6 tells me that the model results are completely different to the data.
The models and data are just two wiggly lines laid one on top of the other – they don’t match at all.
Bob’s right: the models are cr*p

Alaska Mike

Bob Tisdale:
Figures 1 and 2 show data through 2013. Why was Figure 4 cut off at 1995? If you have the data, why not show it through the present date?

noaaprogrammer

The core algorithm for simulating future climate should be the random walk of a drunk.

William Astley

I fully support the scientific logic of this thread. Increases in atmospheric CO2 are evenly distributed in the atmosphere. The potential for CO2 forcing due to the increase in atmospheric CO2 should therefore be more or less equal by latitude. As the actual amount of warming due to the CO2 increase should be proportional to the amount of emitted radiation at the latitude in question, the greatest amount of warming due to the CO2 mechanism should be in the tropics. That is not observed. The latitude pattern of warming does not match the signature of the CO2 forcing mechanism.
To explain the observed latitudinal pattern of warming using the CO2 forcing mechanism – ignoring the second anomaly, that there has been no warming for 16 years, which requires a smart mechanism to turn off the CO2 forcing or suddenly hide the CO2 forcing energy imbalance – there would need to be a smart natural forcing mechanism that inhibits warming in the tropics and amplifies warming at other latitudes.
The observed warming in the last 70 years is not globally even. What is observed is that the most warming is in the Northern Hemisphere at high latitudes (the Northern Hemisphere has warmed twice as much as the globe as a whole and four times as much as the tropics). The idiotic warmists ignore the fact that the latitudinal pattern of warming disproves their hypothesis, and the lack of warming for 16 years also invalidates the modeling of the CO2 mechanism and their assertion that the majority of the 20th century warming was caused by CO2. The warmists are not interested in solving a scientific problem, therefore any warming is good enough to support CO2 warming. AGW is being used to push the green parties’ and green NGOs’ political agenda, which is a set of very expensive scams.
Fortunately or unfortunately CO2 is not the primary driver of climate. Clouds in the tropics increase or decrease to resist forcing change by reflecting more or less sunlight off into space. Super cycle changes to the solar magnetic cycle are the driver of both the 1500 year warming/cooling cycle and the Heinrich abrupt climate events that initiate and terminate interglacials.
The warmists also ignore the fact that the same warming pattern we are now observing has occurred cyclically in the past, and the past cycles correlate with solar magnetic cycle changes. Ironically (if significant global cooling and crop failures ending the climate wars is considered ironic rather than tragic), the sun is rapidly heading toward a deep Maunder-like minimum. The planet is going to cool. The physics of past and future climate change is independent of the climate wars and incorrect theories. Greenland ice temperature for the last 11,000 years, determined from ice core analysis, is shown in Richard Alley’s paper:
http://www.climate4you.com/images/GISP2%20TemperatureSince10700%20BP%20with%20CO2%20from%20EPICA%20DomeC.gif
http://www.solen.info/solar/images/comparison_recent_cycles.png
Gavin Schmidt also stated that if the planet did not amplify forcing changes and CO2 were not the primary driver of climate, climatologists could not explain the glacial/interglacial cycle.
Gavin Schmidt needs to read through the following books. (Unstoppable warming every 1500 years or so is of course followed by unstoppable cooling when the sun enters a deep Maunder-like minimum.)
http://www.amazon.com/Unstoppable-Global-Warming-Updated-Expanded/dp/0742551245
http://www.amazon.com/dp/1840468661
Limits on CO2 Climate Forcing from Recent Temperature Data of Earth
The global atmospheric temperature anomalies of Earth reached a maximum in 1998 which has not been exceeded during the subsequent 10 years (William: 16 years and counting).
…1) The atmospheric CO2 is slowly increasing with time [Keeling et al. (2004)]. The climate forcing according to the IPCC varies as ln (CO2) [IPCC (2001)] (The mathematical expression is given in section 4 below). The ΔT response would be expected to follow this function. A plot of ln (CO2) is found to be nearly linear in time over the interval 1979-2004. Thus ΔT from CO2 forcing should be nearly linear in time also. … ….2) The atmospheric CO2 is well mixed and shows a variation with latitude which is less than 4% from pole to pole [Earth System Research Laboratory. 2008]. Thus one would expect that the latitude variation of ΔT from CO2 forcing to be also small. It is noted that low variability of trends with latitude is a result in some coupled atmosphere-ocean models. For example, the zonal-mean profiles of atmospheric temperature changes in models subject to “20CEN” forcing ( includes CO2 forcing) over 1979-1999 are discussed in Chap 5 of the U.S. Climate Change Science Program [Karl et al.2006]. The PCM model in Fig 5.7 shows little pole to pole variation in trends below altitudes corresponding to atmospheric pressures of 500hPa.
If the climate forcing were only from CO2 one would expect from property #2 a small variation with latitude. However, it is noted that NoExtropics is 2 times that of the global and 4 times that of the Tropics. Thus one concludes that the climate forcing in the NoExtropics includes more than CO2 forcing… ….Models giving values of greater than 1 would need a negative climate forcing to partially cancel that from CO2. This negative forcing cannot be from aerosols. …
These conclusions are contrary to the IPCC [2007] statement: “[M]ost of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.”
http://www-eaps.mit.edu/faculty/lindzen/236-Lindzen-Choi-2011.pdf
On the Observational Determination of Climate Sensitivity and Its Implications Richard S. Lindzen1 and Yong-Sang Choi2
http://icecap.us/images/uploads/DOUGLASPAPER.pdf
A comparison of tropical temperature trends with model predictions
We examine tropospheric temperature trends of 67 runs from 22 ‘Climate of the 20th Century’ model simulations and try to reconcile them with the best available updated observations (in the tropics during the satellite era). Model results and observed temperature trends are in disagreement in most of the tropical troposphere, being separated by more than twice the uncertainty of the model mean. In layers near 5 km, the modelled trend is 100 to 300% higher than observed, and, above 8 km, modelled and observed trends have opposite signs. These conclusions contrast strongly with those of recent publications based on essentially the same data.
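
For reference, the excerpted material above mentions that forcing varies as ln(CO2) but does not reproduce the expression. The widely cited simplified formula attributed to Myhre et al. (1998), which may or may not be the exact form used in that paper’s section 4, is:

\Delta F = 5.35 \,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}}

where C is the CO2 concentration and C0 is the reference concentration.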

LT

Without removing El-Chichon and Pinatubo effects you cannot say for sure what the real trend is

D. J. Hawkins

Bob Tisdale;
avast! is flagging your e-mail as carrying some malware. Don’t know if it’s something hinky with avast!, but you might want to double check.

D. J. Hawkins

Bob Tisdale;
More specifically:
Infection Details
URL: http://bobtisdale.files.wordpress.com/20
Process: C:\PROGRA~1\Google\GOOGLE~1\GOEC62~1.DLL
Infection: URL:Mal

Alan S. Blue

Stop comparing ‘trendline to trendline’, smoothed or unsmoothed. The model (or ensemble of models) is a prediction. Compare the least-squares error month-by-month against the actual observations.
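
A minimal sketch of that kind of month-by-month scoring, with placeholder arrays standing in for aligned monthly observations and model output (the names and values here are assumptions, not the CRUTEM4 or CMIP5 series):

```python
import numpy as np

def monthly_error_stats(obs, model):
    """Mean error (bias) and root-mean-square error of model minus observations."""
    resid = np.asarray(model) - np.asarray(obs)
    return resid.mean(), np.sqrt(np.mean(resid ** 2))

rng = np.random.default_rng(3)
obs = rng.normal(0.0, 1.0, 516)                # stand-in for observed monthly anomalies
model = 0.4 * obs + rng.normal(0.0, 0.3, 516)  # stand-in for a model that tracks obs poorly
bias, rmse = monthly_error_stats(obs, model)
print(f"bias = {bias:+.2f} deg C, RMSE = {rmse:.2f} deg C")
```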

Iceland’s (Reykjavik) temperatures closely follow the ocean currents’ response to the geo-tectonics: in the winter along the Reykjanes Ridge (to the south), and in the summer along the Kolbeinsey Ridge (to the north), giving a decadal predictive or forecasting opportunity. An annual temperature forecast is obtained by averaging the above two:
http://www.vukcevic.talktalk.net/RF.htm

Robert Austin

ferd berple says:
July 6, 2013 at 7:32 am

The climate models do, however, have value when interpreted correctly. What the spaghetti graph of climate models does demonstrate is the natural variability of the system.

I can’t say that I agree that climate models have value as predictors of future climate. Perhaps they have some kind of academic value as a kind of “Sim City” exercise in producing the climate of an artificial and fictitious world. The spaghetti graph of multiple climate models does not demonstrate natural variability; it simply shows that the output of different models produces notably differing results. It is the variation in the runs of an individual model that is alleged to show natural variability. So the mean of a number of runs of an individual model perhaps has some meaning as filtering out what the designer thinks of as climate noise or natural climate variability. This is totally different from the practice of ensemble-averaging multiple models. Since there is no way of knowing which, if any, of the climate models is closest to projecting the actual trajectory of the empirical data, averaging the results of different models has no actual validity. I guess at best the average of the model ensemble might be construed as the “consensus” of the modelers!

Leo Morgan

Could somebody ask a professional on my behalf, what is the expected effect on temperature of Earth’s declining magnetic field?
Just how much energy did the strong field deflect? Is it nothing or is it a significant amount? Did the effect warm the earth’s interior? Are all those charged particles now going into the atmosphere to warm it?
Could the decline in the Earth’s magnetic field be a factor in the underestimate of the warmth of the atmosphere above Greenland and Iceland?
P.S. can anyone refer me to a site that analyses the amount we’ve spent on not changing the world’s temperature?

Yes, the models don’t match the data in Iceland.
But what is the DATA? How has it been adjusted?
GHCN’s Dodgy Adjustments In Iceland, or what I call The Smoking Shotgun in Iceland. Iceland is the test case that exposes the shenanigans going on in the adjustment of temperature records.
So the real question in Iceland and Greenland is, “Are the ‘Data’ from CRUTEM4 worth anything?” Are they any better than GHCN/GISS? With the initials “CRU…”, I’m skeptical.

Jimbo

Here are some paper abstracts showing more rapid or similar rates of warming in Greenland covering the period between 1920 and 1940, as well as one paper which says:

…we conclude that the current decadal mean temperature in Greenland has not exceeded the envelope of natural variability over the past 4000 years,….

All occurred under the ‘safe’ level of CO2 of 350 ppm.

Jimbo

How will the Greenland ice sheet respond to the speculated global mean temperature at the end of the century? I don’t have a clue, but we can look at the Eemian, a very warm interglacial (warmer than the hottest decade evaaaaah).

(Nature – 2013) “…a modest ice-sheet response to the strong warming in the early Eemian…”
http://www.nature.com/nature/journal/v493/n7433/full/nature11789.html

Paul Penrose

I have to agree with Fred here; the ensemble means are meaningless. It is obvious that the individual runs of DIFFERENT models are not single realizations of the same thing, so it is meaningless to average them together. Even averaging individual runs of the same model is questionable. Has anyone even tried to characterize the “noise”? I’ll bet it’s far from a normal distribution. So the whole thing is just an exercise in statistical shenanigans.

ferd berple: What the spaghetti graph of climate models does demonstrate is the natural variability of the system.
Robert Austin: The spaghetti graph of multiple climate models does not demonstrate natural variability, it simply shows that the output of different models produces notably differing results.
I side with Robert. The spaghetti graph at best shows the uncertainty in the mean signal arising from uncertainty in the alleged global physical parameters, with attempts to calibrate to the historical data. Show me where calibration to natural variability exists in that process. We only have one run of the real data, if you discount the various revisions of the historical record.

rogerknights

Looks like I’ve got a lot of ammunition for my upcoming show and tell book. It presently has the working title Climate Models are Crap with the subtitle An Illustrated Overview of IPCC Climate Model Incompetence.

How about “Climola”? (Followed by one or more explanatory subtitles.) This lets you avoid using the word “crap” explicitly, while suggesting it by way of “climola’s” subconscious chiming with “shinola” and “shinola’s” association with “sh*t”.

rogerknights

Or how about “Garbage Out: The Fruit of the IPCC’s Climate Models”?
Whatever the subtitle, I think that “Garbage Out” is a really “strong” title–it’s a “grabber”–and a winner.
“Fruit” brings to mind the advice to judge the models by their fruits–i.e., the outcome of their predictions (which are garbage).
By further subconscious association, “Garbage Out” suggests “take out the garbage”–i.e., get rid of the models.

Chad Wozniak

More proof that models are worse than useless – they are nothing but a priori constructs fully intended to deceive rather than enlighten, a positive evil.

rogerknights

PS: One last (hah!) tweak:
Garbage Out: The Wormy “Fruit” of the IPCC’s Climate Models
The sneer quotes around Fruit more strongly link it to Garbage and also imply that the word is being used figuratively, as in the Gospels and bringing to mind their advice.
Or maybe here’s another tweak, one that brings to mind the phrase, “Gospel in, Garbage out.”
Garbage Out: The Wormy “Fruit” of the IPCC’s Climate Gospel
The downside of that one is that the word “model” is omitted explicitly. But it may not be necessary to include it, since the whole title suggests it. (But probably this is one tweak too far.)

rogerknights

PPS: Someone has probably used “Garbage Out” as a title already. But book titles aren’t copyrighted. Some titles have been used on as many as eight different books.

Alaska Mike

Bob Tisdale:
Thank you for answering my curiosity of “Why was figure four cutoff at 1995? If you have the data why not show it through the present date?” and steering my attention to your point. I had to re-read your article a second time. I freely admit I’m not the “Sharpest Statistical Tool in the shed,” but I do try. Again, I appreciate your personal note, and everything all contributors do for Anthony and WUWT.
Alaska Mike

I like “Gospel In, Garbage Out.” “Gospel” may or may not be the Truth, depending upon your point of view and upbringing, but it is definitely the foundational consensus of the sect. It also brings in the religious fervor of CAGW believers. Now, if there were only a way to work in: “Drinking the Climate Change Kool-Aid.”

BT: with the Chicken Littles of the anthropogenic global warming movement proclaiming we’re going to drown because rising sea levels.
BPL: I think you underestimate the danger. A city doesn’t have to be underwater to be threatened by rising sea levels. The water only has to get high enough to back up sewers and seep into aquifers. Without sewage disposal, and without fresh water, a city becomes a death trap.
And the rise won’t be perfectly even. It will come in a series of stochastic events–storm surges. Some will flood cities and some won’t.
But you’re right that rising sea level is not the most immediate danger from global warming. That would be agricultural failure due to rising drought in continental interiors.

Scott

If there is localised heating above the model projection, then for the models to be valid, given the models’ main driver, there must be higher CO2 levels in Greenland.
Are there any CO2 measurements in Greenland? Because without them, the models just have an epic fail regardless of what else is happening.

A usually overlooked feature of these IPCC climate models is that they provide no information to a policy maker about the outcomes of his or her policy decisions, making them useless for the purpose of making policy. That they seem to policy makers to be useful for this purpose suggests these policy makers have been deceived through applications of the equivocation fallacy on the part of IPCC climatologists (see http://wmbriggs.com/blog/?p=7923 for proof).

That they seem to policy makers to be useful for this purpose suggests these policy makers have been deceived
Who has been deceived?
Policy makers aren’t deceived, but will embrace a model that supports the decision they want.
CBO: Obamacare Will Spend More, Tax More, and Reduce the Deficit Less Than We Previously Thought Forbes: 7/27/’12
No one believed the CBO estimates. But Democrats used them as cover and to shift responsibility for busting the budget.
The same thing is happening with the use of climate models. Policy makers don’t believe the models, but they use the models as a “Get out of Jail” card.

Terry Oldberg

Stephen Rasey:
As you suggest, policy makers may not have been deceived by IPCC climatologists. Instead, they may have been a party to this deception. If they did it for money, they were guilty of civil and criminal fraud under the laws of the U.S.

Dan Pangburn

Some of the mistakes made by the IPCC and the Consensus are revealed at http://consensusmistakes.blogspot.com/

Theo Goodwin

Ric Werme says:
July 6, 2013 at 7:52 am
Ric Werme has this right. Let us contrast actual reasoning and Schmidt’s reasoning. In actual reasoning, one might create several models of the flight pattern of migrating geese, average the models, and come up with a “Vee” shape. By contrast, consider Schmidt’s reasoning:
“Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will uncorrelated across different realisations and when you average together many examples you get the forced component (i.e. the ensemble mean).”
All this says is that the several scatter diagrams will reveal the “correct shape” after they are averaged. The models have to be about something. And the average of the scatter diagrams has to reveal something that was hypothesized in all the models. In other words, we need some geese, some pictures of them migrating, and so on. As Ric says:
“Once they are better at modeling things like the PDO, the NAO, and water vapor to cloud transitions, and everything else that makes up “natural variability” then I might agree they’re modeling that. Until then, I see that phrase more as a screen for things the modelers don’t understand than something visible in model output.”
Finally, as ferd berple says:
“This variability however is not “noise”, it is chaos. Thus, the reliability of the mean does not improve over time.”
I do not know that it is chaos but I do know that it is indistinguishable from chaos.

Theo Goodwin

Paul Penrose says:
July 6, 2013 at 11:28 am
“I have to agree with Fred here; the ensemble means are meaningless. It is obvious that the individual runs of DIFFERENT models are not single realizations of the same thing, so it is meaningless to average them together. Even averaging individual runs of the same model is questionable. Has anyone even tried to characterize the “noise”? I’ll bet it’s far from a normal distribution. So the whole thing is just an exercise in statistical shenanigans.”
Right. In this context, only Schmidt has a clue what the characteristics of “the noise” are. And he has been unable to enlighten the rest of us. Until he does enlighten the rest of us, I strongly suggest that we stop talking about “the noise.”

Dan Pangburn

A simple equation at http://climatechange90.blogspot.com/2013/05/natural-climate-change-has-been.html calculates average global temperatures since they have been accurately measured world wide (about 1895) with an accuracy of 90%, irrespective of whether the influence of CO2 is included or not. The equation uses a single external forcing, a proxy which is the time-integral of sunspot numbers. A graph in that paper, shows the calculated temperature anomaly trajectory overlaid on measurements.
‘The End of Global Warming’ at http://endofgw.blogspot.com/ expands recent (since 1996) temperature anomaly measurements by the five reporting agencies and includes a graph showing the growing separation between the rising CO2 and not-rising average global temperature.

Terry Oldberg

Dan Pangburn:
That the theory which you reference ( http://climatechange90.blogspot.com/2013/05/natural-climate-change-has-been.html) has an accuracy of less than 100% signifies that it is invalidated by the evidence. Hence, this theory is logically rejected.

nevket240

http://www.adriankweb.pwp.blueyonder.co.uk/Climate_Change.htm
Leo Morgan on July 6, 2013 at 10:57 am
Climate Change and the Earth’s Magnetic Poles,
A Possible Connection
Cheers Adrian

@BobTisdale: “If a single simulation is not a good predictor of reality how can the average of many simulations, each of which is a poor predictor of reality, be a better predictor, or indeed claim to have any residual of reality?
A valid question to which Gavin Schmidt replied:
Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will uncorrelated across different realisations and when you average together many examples you get the forced component (i.e. the ensemble mean).”
To which you responded “To paraphrase Gavin Schmidt, we’re not interested in the random component (noise) inherent in the individual simulations; we’re interested in the forced component, which represents the modeler’s best guess of the effects of manmade greenhouse gases on the variable being simulated.
Are you not aware that noise removal by signal averaging is a standard signal-processing technique? It works quite well, to the extent that the variations are truly random (i.e. not systematic bias etc.). https://en.wikipedia.org/wiki/Signal_averaging
So, I don’t think it’s the case that he’s “not interested” in the noise, but that he intends to improve the signal-to-noise ratio of the ensemble model output by averaging out the noise orthogonal to the true signals.

Brian H

The multi-model KISS principle:
The average of a suite of incorrect assumptions is the average incorrect assumption.
Duh.

@BobTisdale
It sure seems as though, in very plain terms, that if he was pursuing the forced component, which is the better predictor, then he was not interested in the random component or noise.
You seem to agree that he’s trying to eliminate noise by separating the ‘forced component’ from the “random component”, assumed to be uncorrelated, by averaging. So, I guess I don’t understand how wanting to get rid of noise equates to being “not interested” in noise.
When you solve a problem, you often have to eliminate distracting or irrelevant details. Don’t you agree that some ‘interest’ in these details is required to characterize them and then eliminate them?
First of all, I’m not making any claims that his models have any skill in predicting climate, but if they did have some skill, even a little, this technique should boost the SNR, provided the random noise was truly uncorrelated (orthogonal in a vector sense) with the “real” signal (i.e. some predictions). (The principle behind this is simply that the expected value of centered random noise is zero.)
So are you claiming that these “random” components are actually correlated with the true climate signal, and that there are no uncorrelated signals to process like this? To paraphrase Leif Svalgaard, “The Earth is a noisy place”, so I’m inclined to believe that averaging should work to some extent by eliminating some random components in the signals.
Do you think that’s wrong? You seem to be surprised that time-series signals can be decomposed into deterministic (“forced”) and stochastic (“random”) components (http://en.wikipedia.org/wiki/Wold's_theorem). That assumes stationarity, of course, but if a series is not stationary it can be further partitioned into a piecewise approximation of stationarity.
I think that’s all Schmidt was trying to do. It won’t perform miracles (i.e. resurrect a dead model), but might produce some useful SNR enhancements.
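
A minimal synthetic sketch of the signal-averaging argument in this exchange: if each realization really is signal plus independent zero-mean noise, averaging N realizations cuts the noise standard deviation by roughly sqrt(N). The data below are made up; whether model "noise" is actually uncorrelated across runs is exactly the point in dispute upthread.

```python
import numpy as np

rng = np.random.default_rng(4)
n_realizations, n_samples = 50, 1000
signal = np.sin(np.linspace(0.0, 20.0, n_samples))
realizations = signal + rng.normal(0.0, 1.0, size=(n_realizations, n_samples))

single_run_noise = np.std(realizations[0] - signal)
averaged_noise = np.std(realizations.mean(axis=0) - signal)

# Expect roughly a factor of sqrt(50), about 7, reduction in the noise level.
print(f"single run noise ~ {single_run_noise:.2f}, ensemble-average noise ~ {averaged_noise:.2f}")
```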