AR5 Climate Forecasts: What to Believe

MIT’s “wheel of climate” forecaster – image courtesy Donna Coveney/MIT

Guest post by Pat Frank

The summary of results from version 5 of the Coupled Model Intercomparison Project (CMIP5) has come out. [1] CMIP5 is evaluating the state-of-the-art general circulation climate models (GCMs) that will be used in the IPCC’s forthcoming 5th Assessment Report (AR5) to project climate futures. The fidelity of these models will tell us what we may believe when climate futures are projected. This essay presents a small step in evaluating the CMIP5 GCMs, and the reliability of the IPCC conclusions based on them.

I also wanted to see how the new GCMs did compared to the CMIP3 models described in 2003. [2] The CMIP3 models were used in IPCC AR4. Those models produced an estimated 10.1% error in cloudiness, which translated into an uncertainty in cloud feedback of (+/-)2.8 Watts/m^2. [3] That uncertainty is equivalent to (+/-)100% of the excess forcing due to all the GHGs humans have released into the atmosphere since 1900. Cloudiness uncertainty, all by itself, is as large as the entire effect the IPCC is trying to resolve.

If we’re to know how reliable model projections are, the energetic uncertainty, e.g., due to cloudiness error, must be propagated into GCM calculations of future climate. From there the uncertainty should appear as error bars that condition the projected future air temperature. However, the inclusion of true physical error bars never seems to happen.

Climate modelers invariably publish projections of temperature futures without any sort of physical uncertainty bars at all. Here is a very standard example, showing several GCM projections of future arctic surface air temperatures. There’s not an uncertainty bar among them.

The IPCC does the same thing, brought to you here courtesy of an uncritical US EPA. The shaded regions in the IPCC SRES graphic refer to the numerical variability of individual GCM model runs. They have nothing to do with physical uncertainty or with the reliability of the projections.

Figure S1 in Jiang et al.’s Auxiliary Material [1] summarized how accurately the CMIP5 GCMs hindcast global cloudiness. Here it is:

[Figure S1, from Jiang et al.’s Auxiliary Material: multi-year global, tropical, mid-latitude, and high-latitude mean total cloud fraction from the CMIP3 and CMIP5 models and from the MODIS and ISCCP observations]

Here’s what Jiang et al. say about their results: “Figure S1 shows the multi-year global, tropical, mid-latitude and high-latitude mean TCFs from CMIP3 and CMIP5 models, and from the MODIS and ISCCP observations. The differences between the MODIS and ISCCP are within 3%, while the spread among the models is as large as ~15%.” “TCF” is Total Cloud Fraction.

Including the CMIP3 model results with the new CMIP5 outputs conveniently allows a check for improvement over the last 9 years. It also allows a comparison of the official CMIP3 cloudiness error with the 10.1% average cloudiness error I estimated earlier from the outputs of equivalent GCMs.

First, here are tables of the CMIP3 and CMIP5 cloud projections and their associated errors. Observed cloudiness was taken as the average of the ISCCP and MODIS Aqua satellite measurements (the last two bar groupings in Jiang Figure S1), which was 67.7% cloud cover. GCM abbreviations follow the references below.

Table 1: CMIP3 GCM Global Fractional Cloudiness Predictions and Error

Model Source | CMIP3 GCM | Global Average Cloudiness Fraction (%) | Fractional Global Cloudiness Error
NCC | bcm2 | 67.7 | 0.00
CCCMA | cgcm3.1 | 60.7 | -0.10
CNRM | cm3 | 73.8 | 0.09
CSIRO | mk3 | 65.8 | -0.03
GFDL | cm2 | 66.3 | -0.02
GISS | e-h | 57.9 | -0.14
GISS | e-r | 59.8 | -0.12
INM | cm3 | 67.3 | -0.01
IPSL | cm4 | 62.6 | -0.08
MIROC | miroc3.2 | 54.2 | -0.20
NCAR | ccsm3 | 55.6 | -0.18
UKMO | hadgem1 | 54.2 | -0.20
Avg. | | 62.1 | R.M.S. Avg. ±12.1%

Table 2: CMIP5 GCM Global Fractional Cloudiness Predictions and Error

Model Source | CMIP5 GCM | Global Average Cloudiness Fraction (%) | Fractional Global Cloudiness Error
NCC | noresm | 54.2 | -0.20
CCCMA | canesm2 | 61.6 | -0.09
CNRM | cm5 | 57.9 | -0.14
CSIRO | mk3.6 | 69.1 | 0.02
GFDL | cm3 | 71.9 | 0.06
GISS | e2-h | 61.2 | -0.10
GISS | e2-r | 61.6 | -0.09
INM | cm4 | 64.0 | -0.06
IPSL | cm5a | 57.9 | -0.14
MIROC | miroc5 | 57.0 | -0.16
NCAR | cam5 | 63.5 | -0.06
UKMO | hadgem2-a | 54.2 | -0.20
Avg. | | 61.2 | R.M.S. Avg. ±12.4%
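
The R.M.S. averages in both tables can be reproduced directly from the tabulated fractional errors. A minimal check in Python (the result differs from the quoted ±12.1% and ±12.4% in the third decimal place because the tabulated fractions are rounded to two places):

```python
import math

# Fractional global cloudiness errors, read from Tables 1 and 2 above.
cmip3_errors = [0.00, -0.10, 0.09, -0.03, -0.02, -0.14, -0.12,
                -0.01, -0.08, -0.20, -0.18, -0.20]
cmip5_errors = [-0.20, -0.09, -0.14, 0.02, 0.06, -0.10, -0.09,
                -0.06, -0.14, -0.16, -0.06, -0.20]

def rms(errors):
    """Root-mean-square average of the fractional errors."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

print(f"CMIP3: {rms(cmip3_errors):.3f}")  # 0.120
print(f"CMIP5: {rms(cmip5_errors):.3f}")  # 0.123
```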

Looking at these results, some models improved between 2003 and 2012, some did not, and some apparently became less accurate. The error fractions are one-dimensional numbers condensed from errors in three dimensions. They do not mean that a given GCM predicts a constant fraction of cloudiness above or below the observed cloudiness. GCM cloud predictions weave about the observed cloudiness in all three dimensions, predicting more cloudiness here and less there. [4, 5] The GCM fractional errors are the positive and negative cloudiness errors integrated over the entire globe and collapsed into single numbers. The total average error of all the GCMs is represented as the root-mean-square average of the individual GCM fractional errors.

CMIP5 cloud cover was averaged over 25 model years (1980-2004), while the CMIP3 averages represent 20 model years (1980-1999). GCM average cloudiness was calculated from “monthly mean grid-box averages.” So a 20-year global average included 12×20 = 240 monthly global cloudiness realizations, reducing GCM random calculational error by at least 15x.
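
The 15x figure is just the usual 1/sqrt(N) shrinkage of random (not systematic) error under averaging, applied to the 240 monthly realizations:

```python
import math

# 20 model years of monthly means = 240 realizations; averaging shrinks
# random calculational error by roughly the square root of that count.
n_months = 12 * 20
print(round(math.sqrt(n_months), 1))  # 15.5
```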

It’s safe to surmise, therefore, that the residual errors recorded in Tables 1 and 2 represent systematic GCM cloudiness error, similar to the cloudiness error found previously. The average GCM cloudiness systematic error has not diminished between 2003 and 2012. The CMIP3 models averaged 12.1% error, which is not significantly different from my estimate of 10.1% cloudiness error for GCMs of similar sophistication.

We can now proceed to propagate GCM cloudiness error into a global air temperature projection, to see how uncertainty grows over GCM projection time. GCMs typically calculate future climate in a step-wise fashion, month-by-month or year-by-year. In each step, the climate variables calculated in the previous time period are extrapolated into the next time period. Any errors residing in those calculated variables must propagate forward with them. When cloudiness is calculationally extrapolated from year to year in a climate projection, the 12.4% average cloudiness error in CMIP5 GCMs represents an uncertainty of (+/-)3.4 Watts/m^2 in climate energy. [3] That (+/-)3.4 Watts/m^2 of energetic uncertainty must propagate forward in any calculation of future climate.
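
The conversion from a cloudiness error to an energetic uncertainty can be checked by linearly scaling Stephens’ figures [3] (10.1% cloudiness error corresponding to (+/-)2.8 W/m^2) to the CMIP5 error; treating the conversion as linear is my assumption here, for illustration only:

```python
# Stephens [3] associates a 10.1% cloudiness error with (+/-)2.8 W/m^2
# of cloud-feedback uncertainty. Scaling linearly (an assumption made
# here for illustration) gives the CMIP5-equivalent uncertainty:
w_per_percent = 2.8 / 10.1                 # W/m^2 per percent cloudiness error
cmip5_uncertainty = 12.4 * w_per_percent   # for the 12.4% CMIP5 RMS error
print(round(cmip5_uncertainty, 1))         # 3.4
```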

Climate models represent bounded systems. Bounded variables are constrained to remain within certain limits set by the physics of the system. However, systematic uncertainty is not bounded. In a step-wise calculation, any systematic error or uncertainty in input variables must propagate as (+/-)sqrt(sum((per-step error)^2)) into output variables, so the accumulated uncertainty grows with each new step. This condition is summarized in the next Figure.

[Figure: left, a schematic of step-wise GCM climate projection, after Saitoh and Wakashima [6]; right, the widening high and low uncertainty bounds (e_T1, etc.) produced as per-step errors propagate through a projected temperature series]

The left picture illustrates the way a GCM projects future climate in a step-wise fashion. [6] The white rectangles represent a climate propagated forward through a series of time steps. Each prior climate includes the variables that calculationally evolve into the climate of the next step. Errors in the prior conditions, uncertainties in parameterized quantities, and limitations of theory all produce errors in projected climates. The errors in each preceding step of the evolving climate calculation propagate into the following step.

This propagation produces the growth of error shown in the right side of the figure, illustrated with temperature. The initial conditions produce the first temperature. But errors in the calculation produce high and low uncertainty bounds (e_T1, etc.) on the first projected temperature. The next calculational step produces its own errors, which add in and expand the high and low uncertainty bounds around the next temperature.

When the error is systematic, prior uncertainties do not diminish like random error. Instead, uncertainty propagates forward, producing a widening spread of uncertainty around time-stepped climate calculations. The wedge on the far right summarizes the way propagated systematic error increases as a step-wise calculation is projected across time.

When the uncertainty due to the growth of systematic error becomes larger than the physical bound, the calculated variable no longer has any physical meaning. In the Figure above, the projected temperature would become no more meaningful than a random guess.
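
That root-sum-square growth is easy to sketch numerically. A minimal example, treating the (+/-)3.4 W/m^2 cloudiness uncertainty, for illustration, as a constant (+/-)3.4 C per annual step (per the linear emulator discussed below):

```python
import math

def propagated(per_step, n_steps):
    """Root-sum-square growth of a constant per-step systematic uncertainty."""
    return math.sqrt(sum(per_step ** 2 for _ in range(n_steps)))
    # for a constant per-step term this reduces to per_step * sqrt(n_steps)

print(round(propagated(3.4, 1), 1))    # 3.4 after one annual step
print(round(propagated(3.4, 62), 1))   # ~26.8 after 62 steps (1958-2020)
```

A constant-per-step sketch like this gives the same order as the quoted (+/-)25 C at 2020; the exact figure depends on the year-by-year forcing increments in the propagation equation given below.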

With that in mind, the CMIP5 average error in global cloudiness feedback can be propagated into a time-wise temperature projection. This will show the average uncertainty in projected future air temperature due to the errors GCMs make in cloud feedback.

Here, then, is the CMIP5 12.4% average systematic cloudiness error propagated into everyone’s favorite global temperature projection: Jim Hansen’s famous doomsday figure, the one he presented in his 1988 Congressional testimony. The error bars were calculated using the insight provided by my high-fidelity GCM temperature projection emulator, as previously described in detail (892 kB pdf download).

The GCM emulator showed that, within GCMs, global surface air temperature is merely a linear function of net GHG W/m^2. The uncertainty in calculated air temperature is then just a linear function of the uncertainty in W/m^2. The average GCM cloudiness error was propagated as:

[Equation: root-sum-square propagation of the GCM cloudiness-error uncertainty, as a fraction of total forcing, into the uncertainty of projected air temperature over the “i” projection years]

“Air temperature” is the surface air temperature in Celsius. Air temperature increments annually with increasing GHG forcing. “Total forcing” is the forcing in W/m^2 produced by CO2, nitrous oxide, and methane in each of the “i” projection years. Forcing increments annually with the levels of GHGs. The equation is the standard propagation of systematic errors (very nice lecture notes here (197 kb pdf)). And now, the doomsday prediction:

[Figure: (a) Jim Hansen’s 1988 scenario A, B, and C global temperature projections; (b) the same projections with uncertainty bars propagated from the CMIP5 12.4% average systematic cloudiness error]

The Figure is self-explanatory. Lines A, B, and C in part “a” are the projections of future temperature as Jim Hansen presented them in 1988. Part “b” shows the identical lines, but now with uncertainty bars propagated from the CMIP5 12.4% average systematic global cloudiness error. The uncertainty increases with each annual step. It is here safely assumed that Jim Hansen’s 1988 GISS Model E was not up to CMIP5 standards. So, the uncertainties produced by CMIP5 model inaccuracies are a true minimum estimate of the uncertainty bars around Jim Hansen’s 1988 temperature projections.

The large uncertainty by year 2020, about (+/-)25 C, has compressed the three scenarios so that they all appear to nearly run along the baseline. The uncertainty of (+/-)25 C does not mean I’m suggesting that the model predicts air temperatures could warm or cool by 25 C between 1958 and 2020. Instead, the uncertainty represents the pixel size resolvable by a CMIP5 GCM.

Each year, the pixel gets larger, meaning the resolution of the GCM gets worse. After only a few years, the view of a GCM is so pixelated that nothing important can be resolved. That’s the meaning of the final (+/-)25 C uncertainty bars. They mean that, at a distance of 62 years, the CMIP5 GCMs cannot resolve any air temperature change smaller than about 25 C (on average).

Under Jim Hansen’s forcing scenario, after one year the (+/-)3.4 Watts/m^2 cloudiness error already produces a single-step pixel size of (+/-)3.4 C. In short, CMIP5-level GCMs are completely unable to resolve any change in global surface air temperature that might occur at any time, due to the presence of human-produced carbon dioxide.

Likewise, Jim Hansen’s 1988 version of GISS Model E was unable to predict anything about future climate. Neither can his 2012 version. The huge uncertainty bars make his 1988 projections physically meaningless. Arguing about which of them tracks the global anomaly trend is like arguing about how many angels can dance on the head of a pin. Neither argument has any factual resolution.

Similar physical uncertainty bars should surround any GCM ensemble projection of global temperature. However, the likelihood of seeing them in the AR5 (or in any peer-reviewed climate journal) is vanishingly small.

The bottom line: CMIP5 climate models are unable to predict global surface air temperature even one year in advance. Their projections have no particular physical meaning and are entirely unreliable.

Following on from the title of this piece: there will be no reason to believe any, repeat, any, CMIP5-level future climate projection in the AR5.

Climate projections that may appear in the IPCC AR5 will be completely vacant of physical credibility. Attribution studies focusing on CO2 will necessarily be meaningless. This failure is at the center of AGW Climate Promotions, Inc.

Anyone, in short, who claims that human-produced CO2 has caused the climate to warm since 1900 (or 1950, or 1976) is implicitly asserting the physical reliability of climate models. Anyone seriously asserting the physical reliability of climate models literally does not know what they are talking about. Recent pronouncements about human culpability should be seen in that light.

The verdict against the IPCC is identical: they do not know what they’re talking about.

Does anyone think that will stop them talking?

References:

1. Jiang, J.H., et al., Evaluation of cloud and water vapor simulations in CMIP5 climate models using NASA “A-Train” satellite observations. J. Geophys. Res., 2012. 117, D14105.

2. Covey, C., et al., An overview of results from the Coupled Model Intercomparison Project. Global Planet. Change, 2003. 37, 103-133.

3. Stephens, G.L., Cloud Feedbacks in the Climate System: A Critical Review. J. Climate, 2005. 18, 237-273.

4. AchutaRao, K., et al., Report UCRL-TR-202550. An Appraisal of Coupled Climate Model Simulations, D. Bader, Editor. 2004, Lawrence Livermore National Laboratory: Livermore.

5. Gates, W.L., et al., An Overview of the Results of the Atmospheric Model Intercomparison Project (AMIP I). Bull. Amer. Met. Soc., 1999. 80(1), 29-55.

6. Saitoh, T.S. and S. Wakashima, An efficient time-space numerical solver for global warming, in Energy Conversion Engineering Conference and Exhibit (IECEC) 35th Intersociety. 2000, IECEC: Las Vegas. p. 1026-1031.

Climate Model Abbreviations

BCC: Beijing Climate Center, China.

CCCMA: Canadian Centre for Climate Modeling and Analysis, Canada.

CNRM: Centre National de Recherches Météorologiques, France.

CSIRO: Commonwealth Scientific and Industrial Research Organization, Queensland Australia.

GFDL: Geophysical Fluid Dynamics Laboratory, USA.

GISS: Goddard Institute for Space Studies, USA.

INM: Institute for Numerical Mathematics, Russia.

IPSL: Institut Pierre Simon Laplace, France.

MIROC: U. Tokyo/Nat. Ins. Env. Std./Japan Agency for Marine-Earth Sci.&Tech., Japan.

NCAR: National Center for Atmospheric Research, USA.

NCC: Norwegian Climate Centre, Norway.

UKMO: Met Office Hadley Centre, UK.

Maus

Interesting but misguided. You were speaking about rounding errors when you should have been speaking of rounding accuracies. When viewed appropriately a lack of solid measurement and estimation allows certain knowledge of the instantaneous anthropogenic temperature anomaly induced on the wing of a bumble bee failing to fly in a computer simulation.

Stephen Wilde

A suggestion as to how to get the models to more accurately reflect cloudiness changes:
Measure the length of the lines of air mass mixing along the world’s frontal zones, which appears to vary by thousands of miles when the jets become more meridional as compared to zonal, and/or shift equatorward as compared to taking a more poleward position.
I think that is the ‘canary in the coalmine’ as regards net changes in the global energy budget because it links directly to albedo and the amount of solar energy able to enter the oceans.

Er, unbelievable. Here is presented, with pseudo-precise numerical content, the result of millions of dollars of computer code massaging, and, well, dick-all to do with the real world. Presented by an organization corrupt to the gneiss below its roots, and promoting a pre-concluded agenda.

This article, like many published in WUWT, conflates the idea that is referenced by the term “projection” with the term with the idea that is referenced by the word “prediction.” Actually, the two ideas differ and in ways that are logically important.
Predictions state empirically falsifiable claims about the unobserved but observable outcomes of events. Projections do not. Being unfalsifiable, projections lie outside science.

Anyone, in short, who claims that human-produced CO2 has caused the climate to warm since 1900 (or 1950, or 1976) is implicitly asserting the physical reliability of climate models. Anyone seriously asserting the physical reliability of climate models literally does not know what they are talking about. Recent pronouncements about human culpability should be seen in that light.

Quick! Somebody tell the New York Times! Global warming called off! Stop the presses!
Oh wait. There’s no one listening. . .
/Mr Lynn

In my post of August 23, 2012 at 8:35 pm, please strike the phrase “with the term” and replace it with “with the idea.”

Mark Luhman

The amount of money wasted on this fool’s errand is astounding. It is a simple fact that no amount of computer power or tweaking of the program will ever allow us to forecast, predict, or project what is going to happen with the climate. It is a fact that a chaotic system cannot be forecast with any accuracy beyond a few days. Why anyone should pay attention to these expensive toys is beyond me. Beyond that, why is the taxpayer being bilked for the tab on this idiocy?

Theo Goodwin

Another excellent post! Thanks again, Pat Frank.

gregole

Pat Frank,
Thank you for this fine post. It really got to me and is bookmarked.
I am somewhat of a late-comer to CAGW criticism – it took Climategate 1.0 to get my attention.
First technical question I had was “what do the thermometers actually say?” Stumbled onto something called surfacestations.org – promptly barfed just considering measurement uncertainty – and began marveling at just how radiant physics would work considering CO2 concentration contribution; practically instantaneously considering that earth atmosphere heat content is heavily dependent upon the hydrological cycle; hmmm…clouds…
Wouldn’t an insulating component of our atmosphere (Man-Made CO2) influence cloud formation, which in turn would influence heating as well as atmospheric heat content? In my mental picture/thought experiment it just makes sense that that process feedback is (highly) negative. No need to panic. But let’s get some better measurements; I mean really!
Clouds…
http://www.clouds-online.com/
High clouds, low clouds, wispy ones and rainy ones, storm clouds, white clouds set against brilliant blue skies, sun crowding grey sky clouds – all kinds of clouds.
My poetic point? Perhaps it isn’t just clouds as in quantitative but also clouds as in qualitative. Point here posted? Models don’t even come close to modeling these wispy, beautiful, billowing, dangerous at times, essential-to-our understanding of our atmosphere, things we call clouds. And the science is settled? What! Pure foolishness to claim that.

Allan MacRae

[Excerpt from the above article]
Climate projections that may appear in the IPCC AR5 will be completely vacant of physical credibility. Attribution studies focusing on CO2 will necessarily be meaningless. This failure is at the center of AGW Climate Promotions, Inc.
Anyone, in short, who claims that human-produced CO2 has caused the climate to warm since 1900 (or 1950, or 1976) is implicitly asserting the physical reliability of climate models. Anyone seriously asserting the physical reliability of climate models literally does not know what they are talking about.
Recent pronouncements about human culpability should be seen in that light.
The verdict against the IPCC is identical: they do not know what they’re talking about.
[end of excerpt]
Comparison to a recent excerpt from wattsup:
http://wattsupwiththat.com/2012/08/15/more-hype-on-greenlands-summer-melt/#comment-1058781
It should now be obvious that global warming alarmists and institutions like the IPCC have NO predictive track record – every one of their major scientific predictions has proven false, and their energy and economic recommendations have proven costly and counterproductive.
____________
Close enough.

Allan MacRae
It sounds as though you are under the misimpression that IPCC climate models make “predictions.” As the climatologist Kevin Trenberth emphatically stated in the blog of the journal Nature in 2007, they make only “projections.” Though predictions are falsifiable, projections are not. The models are not wrong but rather are not scientific.

zefal

They will adjust them after the fact as needed. Then show you how spot on their predictions were. No need for no stinkin’ physical uncertainty bars.
Physical uncertainty bars are for sissies! I used to have a license plate that read this in bold lettering on my pink bike with banana seat, sissy bars, and basket.

Terry Oldberg says that if “Predictions” are not falsifiable they cannot be regarded as science. They should not even be called predictions.
He is right. When Einstein predicted the deflection a light beam would suffer at grazing incidence to the sun he initially got the answer wrong by a factor of 2. Back then when “Science” still cared about “Integrity” his theory would have been discredited if he had failed to correct his error before Eddington published his observations. That was “Science”. CMIP3 and CMIP5 are “Non-Science” (aka “Nonsense”).

Jim D

The statisticians here should have a field day deconstructing this one. If he wanted a worse error, he could have propagated it monthly instead of yearly. That would have been more impressive. How about daily? Can anyone figure out what he did wrong? Or, if the error is only 12% for a year, it must be tiny for a day to only grow to 12% for a year. Just needs some more thought.

AlaskaHound

The true responses seen here are lacking the basics.
Diurnal cloud cover, height, opacity to the wavelengths that matter and where they are during dayside and night-time throughout each and every day, is key.
A guess is always a guess and if you cannot differentiate between day (incoming solar attenuation) and night (OLR attenuation) you have not produced anything of meaningful value…

pat

They are on the cutting edge of more government money.
(Most do not realize that as much as 60% of science grants go to the applicants, personally)

Not sure I agree with your comment about error bars. Error bars are measures of statistical uncertainty or variance. The output of climate models isn’t data, in the sense of measurements subject to statistical rules. Gavin Schmidt is clear about this.
What you are talking about is predictive accuracy. And I agree with Terry Oldberg. Until the climate modellers concede the model outputs are predictions, their outputs are just projections, and therefore not science.
I wish rather more people understood this.

Philip Bradley:
Thanks for engaging on the projections vs predictions issue. Before their models can make predictions, climatologists must define the underlying statistical population, for a prediction is an extrapolation to the outcome of an event in such a population. A predictive model makes claims about the relative frequencies of the various possible outcomes; these claims are checked by observing the relative frequencies in a sampling of observed events that is drawn for this purpose from the population. In modern climatology, though, there is no population and thus is no opportunity to make a check. This is why the research cannot truly be said to be “scientific.”

george e smith

How do you calculate a 10.1% error in cloudiness with a model? And how do you know that error is correct, especially since, as far as I can tell, it isn’t even possible to measure global cloudiness with that kind of precision. It’s bad enough to claim they know the global temperature, given the poor sampling regimen, but the number of temperature recording sites must be orders of magnitude greater than the number of cloud measuring sites, especially for the 70% that is ocean.
Sorry, satellites don’t cut the mustard. A single satellite can’t return to a given point to take another sample in less than 84 minutes, and when the satellite does get back to the same place, it isn’t the same place; that’s about 22 degrees away. And clouds can appear out of nowhere, and then disappear in a few minutes. And how many satellites do we have that can simultaneously observe different points on the surface?
Well I forgot, you can tell a temperature by boring one hole in one tree, so maybe you can get a cloudiness number from a single photograph.
Totally ridiculous to believe they can sample cloud cover of the whole globe in conformance to the Nyquist sampling criterion.
In this instance, what I mean by cloud cover is the total solar spectrum Joules per square metre at the surface under the cloud, divided by the Joules per square metre above the clouds (TSI x time)

Bob K

Anthony, the link “(very nice lecture notes here (197 kb pdf))” doesn’t work, it has an extraneous slash at the end. The correct URL is http://www.phas.ubc.ca/~oser/p509/Lec_12.pdf

What an excellent article! Thanks to Pat Frank. Clouds are the elephant in the room of climate science. They come, they go, and are the Venetian blinds of the earth. If you cannot model the clouds you cannot predict weather or climate. The IPCC is wasting both our money and their time.

Allan MacRae

Terry Oldberg says: August 23, 2012 at 8:35 pm
This article, like many published in WUWT, conflates the idea that is referenced by the term “projection” with the idea that is referenced by the word “prediction.” Actually, the two ideas differ and in ways that are logically important.
Predictions state empirically falsifiable claims about the unobserved but observable outcomes of events. Projections do not. Being unfalsifiable, projections lie outside science.
___________
Fair point Terry, but either way (prediction or projection). the ultimate conclusion is the same:
The climate models produce nonsense, and yet they are being used in attribution studies to falsely suggest that humanity is the primary cause of the global warming, and to justify the squandering of trillions of dollars on worthless measures to fight this fictitious catastrophic human-made global warming.
In summary the climate models are mere “projections” when one correctly points out their dismal predictive track record, but they are also portrayed as robust “predictions” when they serve the cause of global warming alarmism.
In fact, it is the warming alarmists themselves who have actively engaged in this duplicity.

Allan MacRae:
The authors of the IPCC’s periodic assessment reports have been repeatedly guilty of using the ambiguity of reference by terms to the associated ideas as a weapon in attempting to achieve political or financial ends that are favorable to them. Their technique is to insert fallacies based on ambiguity of reference into their arguments. Using ambiguity of reference in this way, one can compose an argument that appears to be true when it is false or unproved. Confusion of “projection” with “prediction” serves this end. So does confusion of “evaluation” with “validation.” (Though IPCC climate models are not statistically “validated,” they are “evaluated.”)
Ambiguity of reference by the word “science” plays a similar role. In the English vernacular, the word references the idea of “demonstrable knowledge” and the different idea of the process that is operated by people calling themselves “scientists.” The IPCC’s “scientists” do not produce demonstrable knowledge but do call themselves “scientists” and operate a process.
Climate “science” is not “demonstrable knowledge,” IPCC “projections” are not “predictions” and IPCC “evaluation” is not “validation” but the IPCC has led its dupes to believe the opposite. Through this technique, IPCC authors have duped the people of the world into parting with a great deal of money; currently, the IPCC is preparing them to part with even more.

Well it is obvious from this post that clouds are still not understood, so how can any model produced work without this understanding? There are many other inputs to climate, many extraterrestrial, that are little understood and some we have yet to discover. If we knew all this, and had a computer big enough to handle the billions of bits of information fast enough, then a model might work.
My experience of cloud is that they have a cooling effect and negative feedback. Unfortunately belief in the GHG theory forces the opposite line of thought.

The CMIP5 models for the upcoming AR5 have gone from bad to worse at simulating global sea surface temperatures.
Here’s an excerpt from Chapter 5.5 of my upcoming book Who Turned on the Heat? The Unsuspected Global Warming Culprit, El Niño-Southern Oscillation:
We’ve seen that sea surface temperatures for a major portion (33%) of the global oceans have not warmed in 30 years. This contradicts the hypothesis of anthropogenic global warming. The models show the East Pacific Ocean should have warmed 0.42 to 0.45 deg C during the satellite era, but they haven’t warmed at all.
On the other hand, if we look at the global sea surface temperature record for the past 30 years, we can see that they have warmed. The IPCC’s climate models indicate sea surface temperatures globally should have warmed and they have. See Figure 5-5. Note, however, based on the linear trends the older climate models have overestimated the warming by 73% and the newer models used in the upcoming IPCC AR5 have overshot the mark by 94%. In simple words, the IPCC’s models have gone from terrible to worse-than-that. The recent models estimate a warming rate for the global oceans that’s almost twice the actual warming. In other words, the sea surface temperatures of the global oceans are not responding to the increase in manmade greenhouse gases as the climate models say they should have. Not a good sign for any projections of future climate the IPCC may make.
http://i45.tinypic.com/hs9bbr.jpg

Ian W

Jim D says:
August 23, 2012 at 10:10 pm
The statisticians here should have a field day deconstructing this one. If he wanted a worse error, he could have propagated it monthly instead of yearly. That would have been more impressive. How about daily? Can anyone figure out what he did wrong? Or, if the error is only 12% for a year, it must be tiny for a day to only grow to 12% for a year. Just needs some more thought.

I am sure that ‘statisticians would have a field day’, but then there has to be an understanding of what the figures actually mean. A tiny variance in the cloud cover at the tropics will have a far larger effect than a similar variance in temperate or polar latitudes. A tiny variance during the day will have a different effect than a tiny variance during the night. High level cirrus has a different effect than low level stratus. Convective clouds are an indication of heat being carried out of the atmosphere; stratus clouds are not. But statisticians would only deal in numbers, not understanding the impact of the qualitative differences on the quantitative. Indeed, by reducing the errors to just a single percentage, vast amounts of information have been discarded. It would be interesting to know which clouds the models fail to ‘predict/project/extrapolate’, and where on the globe those models fail, as that has a huge effect on the probability of a closer-to-reality result. But statisticians playing with averaged numbers will not have that information, as it has been discarded. So yes, it needs more thought, and less just playing with numbers.
Personally, I would like to see strict validation of these models, with repeated runs from different starting points and no change in parameterization between those runs: starting at 1850, then at 1900, then at 1950, then at 2000, comparing their results to the actual analyzed atmosphere (including actual cloud cover and type), maximum and minimum temperatures, humidities and sea temperatures, perhaps resolved at one degree of latitude and longitude. If a model fails to meet the required skill level in its prediction, it should be discarded and further funding to that modeling group should cease. If the various research groups knew they had to cross a sufficiently challenging meteorological skill/validation bar or lose funding, perhaps they would improve more than this article suggests they have.
Remember the results of these models are trumpeted throughout the world by the UN and used to kill major industries and keep 3rd world countries in fuel poverty – we aren’t talking about a simple post-doc project.
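For what it’s worth, Jim D’s step-size question above has a standard answer under the usual assumption that per-step errors are uncorrelated and add in quadrature (root-sum-square): the propagated total is then independent of how finely you step, provided the per-step error is rescaled consistently. A minimal numerical sketch — the 12%/year figure is taken from the comment above; the quadrature assumption is mine:

```python
import math

annual_u = 0.12  # the +/-12%-per-year figure quoted above

# Under the quadrature (root-sum-square) rule for uncorrelated per-step
# errors, N steps of size u_step accumulate to u_step * sqrt(N).
# Refining the step only changes the answer if the per-step error is
# NOT rescaled consistently with the step length.
for steps_per_year, label in [(1, "yearly"), (12, "monthly"), (365, "daily")]:
    u_step = annual_u / math.sqrt(steps_per_year)  # consistent per-step error
    total = u_step * math.sqrt(steps_per_year)     # recovers the annual figure
    print(f"{label:8s} per-step {u_step:.4f} -> one-year total {total:.4f}")

# Over a century the same rule gives 0.12 * sqrt(100) = 1.2, i.e. +/-120%.
century = annual_u * math.sqrt(100)
```

Whether the per-year cloud-forcing error really is uncorrelated year to year is exactly the kind of question the statisticians would need to settle; the sketch only shows that step size per se is not the problem.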

Robert of Ottawa

Thank you for spelling this out.

Roger Longstaff

I think that the problems with GCMs extend beyond those of albedo, or cloudiness. Albedo can be measured directly and continuously using geostationary satellites – presumably the models have been “tuned” to hindcast the known data, but this says nothing about predictive capability.
In reading about the mechanics of GCMs a few months ago, I found references to low-pass filtering between integration time steps. Filtering is only valid when there is a priori knowledge of “the signal” (e.g. a radio signal); however, GCM “experiments” assume that the signal is produced by GHGs, therefore this is all that they can ever hope to detect, as all other mechanisms are filtered out.
Even worse, GCMs use pause/reset/restart techniques in integration when physical laws are violated (e.g. conservation of mass, momentum and energy), or boundary conditions are exceeded, in order to preserve “stability”. Such models can produce no useful information, by definition.
The UK Met Office’s climate models lose fidelity after just a few days of simulated elapsed time, producing results that are shown to be hopeless when compared with reality. To believe that these models can produce any sensible data AT ALL over simulated elapsed times of decades is laughable to anybody with even the meanest knowledge of mathematics and physics.

While I agree with Terry Oldberg on the predictions vs projections thing, either way the modellers have zero stake in accuracy. A bookmaker taking bets on future climate would be much more likely to get it right, because being wrong costs real dollars.
So all we need to know the real answer is a reliable independently verifiable measure of climate and real climate gambling.
So who is up to running the Great Climate Casino – place your bets now.

michael hart

Thanks, Pat. I like to see someone who takes the trouble to be specific about what they are NOT saying when explaining their work. It helps crystallize genuine understanding and is the mark of an honest educator.
Regarding “The verdict against the IPCC is identical: they do not know what they’re talking about. Does anyone think that will stop them talking?”: This is where the so called “Precautionary Principle” is often invoked, essentially as an argument from ignorance.
More honestly put, the precautionary principle usually amounts to no more than: “I don’t know what the answer is, but I do know that I can frighten you into doing what I say.”
Well, I for one, am not frightened by climate models [only by some of the people that use them].

wsbriggs

I read this and thought that Willis had it all covered, still does. His hypothesis should be the touchstone for every serious research program in climate. He’s not alone, but his explanations are the clearest of all of them.

OssQss

http://www.stuff.co.nz/science/7400061/Global-warming-science-tackled
Next up, lets talk about how well we understand the relationship between cosmic rays and cloud formation 🙂

Wiki Freeman Dyson to see what he says about GCMs.
From my lay viewpoint I’d say that GCMs are astounding intellectual achievements, but they seem to be pushed beyond their capabilities and misused to promote political agendas. Sort of like quantum chromodynamics being used to select the next US President.
In fairness to the GCM’s, please remember that all parameter values are not equally probable. Any particular electron could be here in this room with me or it could be orbiting the planet Zog, but the wave function might say that the second case has negligible probability. I use this technique to build fairly robust economic models.
In order to understand GHG forcing better I’ve been reading up on atmospherics. One of the key variables is naturally the Bond albedo. Google Earth’s albedo and you’ll get values ranging from 0.28 to 0.31. Ditto for Mars, where the spread is 0.16 to 0.25!! This makes quite a big difference, and I’d suggest they spend more time nailing it down before any climate modelers ask us to trust their projections.
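The quoted albedo spread can be put in temperature terms with the standard effective-temperature formula T = ((1 − a)·S / (4σ))^(1/4). Taking the solar constant as roughly 1361 W/m², a 0.28-to-0.31 spread in Earth’s Bond albedo is worth nearly 3 K of effective temperature — large compared with the warming being debated:

```python
# Effective (no-greenhouse) temperature for the two quoted albedo values,
# using the standard formula T = ((1 - a) * S / (4 * sigma)) ** 0.25.
S     = 1361.0     # W/m^2, solar constant (approximate)
sigma = 5.67e-8    # W/m^2/K^4, Stefan-Boltzmann constant

def t_eff(albedo):
    """Effective temperature for a given Bond albedo."""
    return ((1.0 - albedo) * S / (4.0 * sigma)) ** 0.25

for a in (0.28, 0.31):
    print(f"albedo {a}: T_eff = {t_eff(a):.1f} K")
print(f"spread: {t_eff(0.28) - t_eff(0.31):.1f} K")
```

This is only the bare radiative-balance number, with no greenhouse effect or feedbacks, but it shows why a three-point uncertainty in albedo is not a detail.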

Thank you, Pat.
I wondered lonely as a cloud
Why no one else said this aloud.
Ian W says: August 24, 2012 at 3:40 am
Quite so. One gains an impression that modellers reject runs that give large errors by their criteria, while refusing to use time-honoured criteria.

Tom in Florida

Philip Bradley says:
August 23, 2012 at 11:22 pm
“What you are talking about is predictive accuracy. And I agree with Terry Oldberg. Until the climate modellers concede the model outputs are predictions, their outputs are just projections, and therefore not science.
I wish rather more people understood this.”
Unfortunately the average voter, and politician for that matter, does not understand the difference. I believe this lack of understanding is being used by modelers to advance the gravy train of monetary grants while allowing them to have an escape clause when they are eventually proven wrong.
george e smith says:
August 23, 2012 at 11:43 pm
“How do you calculate a 10.1% error in cloudiness with a model. And how do you know that error is correct; ”
Yes, what are the error bars associated with the error bars?

observa

Sorry teach but my eyes glazed over long before “version 5 of the Coupled Model Intercomparison Project”.
I must learn to pay attention in class
I must learn to pay attention in class
I must..

Bill Illis

Nice article Pat Frank.
Water Vapour and Clouds are the make-or-break factors in the theory. If these factors are not the positive feedback assumed in the theory, then the entire proposition falls apart. In fact, if they are just 50% of the positive feedback assumed, the theory still falls apart, because there are not enough feedbacks on the feedbacks to get anywhere near 3.0C per CO2 doubling.
Clouds are a major uncertainty (they could be positive or negative) but the biggest assumed feedback is Water Vapour. Water vapour is supposed to be +6.0% already and +22% by 2100.
So far, it is Zero. The IPCC AR5 Water Vapour forecast versus actual observations to date. The climate models are way off already.
http://s8.postimage.org/epbk2gtlh/ENSO_TCWVIPCC_July12.png
And here is all the climate model forecasts over time going back to Hansen and the first IPCC forecast versus actual temperature observations to date.
http://s17.postimage.org/jobns27n3/IPCC_Forecasts_Obs_July2012.png

Venter

Terry Oldberg
We all can agree on 2 areas
1.] Climate models make projections and not predictions.
2.] The IPCC approach is not scientific as there is no way to have a statistically and empirically validated control and trial population for testing any hypothesis.
Nobody disagrees with both of these points. Can you please now stop posting these same points over and over in every thread for years?

Venter:
While you claim that “nobody disagrees with both of these points” the evidence is sadly to the contrary. As of this moment, professional climatologists have yet to identify the statistical population that underlies the claims of their climate models yet they claim to be “scientists” pursuing “science.” In AR4, IPCC authors present “evaluations” of models that are designed to convince the statistically naive that statistical validation is taking place when it is not. Here in WUWT, article after article presents a new model plus a scientifically meaningless IPCC-style “evaluation” of this model. Bloggers routinely weigh in on the issue of whether a given “evaluation” falsifies the model or does not falsify it when an “evaluation” does neither. Skeptics such as Lord Monckton appeal to IPCC-style “evaluations” in presenting logically flawed arguments for the exaggeration of the magnitude of the equilibrium climate sensitivity by the IPCC.
The truth is that climate models are currently insusceptible to being either falsified or validated by the evidence. They will remain in this state until: a) the IPCC identifies the statistical population and b) models are constructed for the purpose of predicting the outcomes of the events in this population. Very few WUWT bloggers exhibit an understanding of this state of affairs. The editors of WUWT do not exhibit an understanding of it in selecting articles for publication. Aside from you and me and a few others, amateur climatologists and professionals are alike in acting as though they don’t have a clue to how scientific research is actually performed.

A great Aussie saying I came across a minute ago, pertinent to this debate: “The hamster’s dead, but the wheel’s still turning.”

Sadly, although I’ve done very similar analyses of the doomsday curves from Hansen et al. that presume different forcing/feedback scenarios, I do have to take issue with the methodology used to forward-propagate the errors. The GCMs are numerical simulations — they are at heart systems of parametric coupled nonlinear differential equations. The solutions they generate are not, nor are they expected to be, normal. Hence simple forward propagation of errors on the basis of an essentially multivariate normal assumption (e.g. the one underlying Pearson’s \chi^2) is almost certainly egregiously incorrect, for precisely the reasons warned about in the Oser lecture — the system is more, or at least differently, constrained than the simple linear addition of Gaussian errors would suggest.
With a complex system of this sort, the “right” thing to do (and the thing that is in fact done, in many of the GCMs) is to concede that one has no bloody idea how the DISTRIBUTION of final states will vary given a DISTRIBUTION of input parameters around the best-guess inputs, and hence use Monte Carlo to generate a simulated distribution of final states using inputs that are in turn selected from the presumed distributions of input parameters. Ensemble models do this all the time to predict hurricane trajectories, for example — they run the hurricane model a few dozen or hundred times and form a shotgun-blast future locus of possible hurricane trajectories to determine the most probable range of values.
To put it in more physical terms, because there are feedbacks in the system that can be negative or positive, the variations in cloudiness are almost certainly not Gaussian, they are very probably strongly correlated with other variables in the GCMs. Their uncertainty in part derives from uncertainty in the surface temperatures, the SSTs, the presence or absence of ENSO positive, and far more. Some of these “uncertainties” — feedbacks — amplify error/variation, but most of them do the opposite or the climate would be unstable, far more unstable than it is observed to be. In other words, while the butterfly effect works on weather state, it tends to be suppressed for climate; the climate doesn’t seem enormously unstable toward hothouse Earth or snowball Earth even on a geological time scale, and the things that cause instability on this scale do not appear to be any of the parameters included in the GCMs anyway because we don’t really know what they are or how they work.
However, the basic message of this is sound enough — there should be error estimates on the “projections” of GCMs — at the very least a Monte Carlo of the spread of outcomes given noise on the parameters.
rgb
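rgb’s Monte Carlo suggestion can be sketched in a few lines: draw each uncertain input from an assumed distribution, run the model once per draw, and summarize the spread of outcomes. The toy “model” and all of its numbers below are invented for illustration — no actual GCM is involved:

```python
import random
import statistics

random.seed(42)

def toy_model(forcing, sensitivity):
    """Toy 'climate model': equilibrium warming = forcing * sensitivity."""
    return forcing * sensitivity

# Sample the uncertain inputs from assumed distributions, run the toy
# model once per draw, and look at the distribution of final states.
draws = sorted(
    toy_model(random.gauss(3.7, 0.4),   # forcing, W/m^2 (assumed spread)
              random.gauss(0.8, 0.2))   # sensitivity, degC per W/m^2 (assumed)
    for _ in range(10_000)
)

median = statistics.median(draws)
p5, p95 = draws[500], draws[9500]
print(f"median {median:.2f} degC, 5-95% range {p5:.2f} to {p95:.2f} degC")
```

The resulting 5–95% band is the kind of “shotgun-blast” envelope rgb describes: it characterizes the spread of outcomes given the input uncertainty, rather than assuming linear addition of Gaussian errors.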

Gail Combs

pat says:
August 23, 2012 at 10:52 pm
They are on the cutting edge of more government money.
(Most do not realize that as much as 60% of science grants go to the applicants, personally)
_______________________
Actually, even the scientists are complaining; they are saying the grant money generally gets wasted in bureaucratic red tape.

Scientific American(tm) Dr. No Money: The Broken Science Funding System
….Most scientists finance their laboratories (and often even their own salaries) by applying to government agencies and private foundations for grants. The process has become a major time sink. In 2007 a U.S. government study found that university faculty members spend about 40 percent of their research time navigating the bureaucratic labyrinth, and the situation is no better in Europe…
…An experimental physicist at Columbia University says he once calculated that some grants he was seeking had a net negative value: they would not even pay for the time that applicants and peer reviewers spent on them…..
In 2009 two Canadian academics calculated that the country’s Natural Sciences and Engineering Research Council spent more than C$40 million administering its basic “discovery” grants. It would have been cheaper simply to award each applicant the average grant of C$30,000. Yet another idea is a lottery system…..

The comments are the really interesting part of the article.
I would rather see a lottery system. This would hopefully do away with the entrenched old boy system that keeps innovative ideas from being considered.
This commenter expresses how I feel about the situation.

… I have also found academia to be grossly inefficient. In my experience, federal dollars would be better spent backing high risk technologies at small startups where every dollar is dear and treated accordingly. … I could not disagree more strongly that federal dollars should back certain researchers with “proven track records”. The system is already an enormous “old-boys club” to the detriment of new and innovative ideas, as evidenced by the NSF first-time funding rate referenced in the article. Until patent laws are changed in such a way that venture capital begins to back riskier, longer range technologies at small companies, there is a important role to be played here by the federal government.

An honest lottery system would have allowed “Denier” Scientists a bit more of a fair shake and perhaps would have kept the IPCC – CO2 mania more under control.

Retired senior NASA atmospheric scientist Dr. John S. Theon, the former supervisor of James Hansen, has now publicly declared himself a skeptic and said that Hansen “embarrassed NASA”. Hansen violated NASA’s official agency position on climate forecasting (“we did not know enough to forecast climate change or mankind’s effect on it”), and thus embarrassed NASA by coming out with his claims of global warming in 1988 in his testimony before Congress. [January 15, 2009]
Theon declared: “Climate models are useless”.
See: James Hansen’s Former NASA Supervisor Declares Himself a Skeptic – Says Hansen ‘Embarrassed NASA’, ‘Was Never Muzzled’, & Models ‘Useless’ (Watts Up With That?, January 27, 2009) http://wattsupwiththat.com/2009/01/27/james-hansens-former-nasa-supervisor-declares-himself-a-skeptic-says-hansen-embarrassed-nasa-was-never-muzzled/
It is difficult to believe that energy poverty is being imposed on Earth to prevent very flawed predictions from becoming true.
This article again shows how useless GCMs are. Thanks, Pat!

Pamela Gray

This is easily fixed. Add three conditions under each CO2 scenario: El Nino, La Nina, and neutral (we know what the jet stream and clouds kinda do under each condition). Run several trials with the dials turned up to different settings for El Nino, turned up to different settings for La Nina, and several runs between the margins for neutral, and then average for each CO2 scenario. Wonder if one of the models does this already?

Pamela Gray

And just for giggles, get rid of the CO2 fudge factor and run that under the three ENSO conditions (so you will have three conditions with, and three conditions without CO2 FF). Just to compare. Include accuracy bars of course.

Tim Clark

SO WHAT if you have determined that GCM projections are worthless!!!!
Us warmers can take solace knowing we STILL can cling to the universally proven, physiologically compatible, unprecedented, data substantiated, untainted, statistically validated, globe-encompassing, court approved, and PEER reviewed tree ring data showing MASSIVE species extinction by 2030.
/sarc

BillC

Robert Brown – thanks for that comment.

Sorry, satellites don’t cut the mustard. A single satellite can’t return to a given point to take another sample in less than 84 minutes, and when the satellite does get back to the same place, it isn’t the same place; it’s about 22 degrees away. And clouds can appear out of nowhere, and then disappear in a few minutes. And how many satellites do we have that can simultaneously observe different points on the surface?
http://en.wikipedia.org/wiki/List_of_Earth_observation_satellites
Actually, satellites do cut the mustard, they even cut it quite finely. Geostationary satellites in particular can easily resolve and correct for geometry of almost the whole globe, and the one place they are weak is tightly sampled by polar orbit satellites. They are the one source of reliable global data.
rgb

Eric Steig

How about a little skepticism from Anthony Watts before posting such things? On what possible grounds do the error bars grow with time in the way shown, and how on earth do the mean A, B, C projections collapse into one another? Neither makes any sense to me.
REPLY: Well, I’m sure Pat can answer, but from my view as a forecaster, forecasts grow more uncertain the further they are in the future, just as temperature and paleo data grow more uncertain back in time. We all know that a weather forecast for two weeks ahead is far less certain than one for two days ahead. Conversely, unless properly recorded/measured, the past is less certain than the present.
For example due to errors in siting/exposure (like on the north side of buildings) we know the temperatures measured in 1850 -vs- 1950 have wider error bars. See the recent BEST graph to see what I mean:

If you think error bars should get smaller into the future for a forecast, I have a big chunk of ice I can sell you in a warming Antarctica. – Anthony

Bill Illis

There is a new paper by Troy Masters of UCLA which indicates that the cloud feedback has been negative from 2000 to 2010, using all satellite and temperature data sources. There is much uncertainty depending on the dataset and timeframe used, but generally the feedback is negative. [The paper doesn’t really say that, but that is what the data show — it got quite a going-over by Andrew Dessler, who had recommended rejection, and other referees recommended it be less certain about the negative cloud feedback — so to get it published, vagueness is always required if you are going against the theory/gospel.]
http://www.earth-syst-dynam.net/3/97/2012/esd-3-97-2012.pdf
There is a supplemental to the paper which contains all the data (and more) which would be immensely helpful to those who are looking at this issue in more depth (zip file so I won’t link to it).
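Feedback estimates of this general kind typically come down to regressing top-of-atmosphere flux anomalies on surface-temperature anomalies and reading the feedback off the slope. A toy sketch with invented numbers — not the satellite data or method of the Masters paper — which also shows why such estimates carry wide uncertainty:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy version of a satellite-era feedback estimate: regress TOA
# radiative-flux anomalies on surface-temperature anomalies; the slope
# (W/m^2 per degC) is the feedback. All numbers here are invented.
true_feedback = -1.0                                   # assumed, W/m^2/degC
dT = rng.normal(0.0, 0.15, 240)                        # monthly T anomalies, degC
dR = true_feedback * dT + rng.normal(0.0, 0.3, 240)    # flux anomalies + noise

slope = np.polyfit(dT, dR, 1)[0]
print(f"estimated feedback: {slope:+.2f} W/m^2 per degC (true value {true_feedback:+.1f})")
```

The noise on the flux term is large relative to the signal, so the fitted slope scatters around the true value; that is one reason published feedback estimates can flip sign depending on the dataset and period chosen.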

Pamela Gray

Don’t know if the link will work, but Oregon and many other states use this method for agricultural forecasting. Why? State after state has determined that analogue model forecasting outperforms climatological (i.e. dynamical) model methods. Farmers get testy when the forecast results in wilted or frozen crops and dead or worthless animals. That state agricultural departments are ignoring CO2/climate change models is telling.
cms.oregon.gov/oda/nrd/docs/forecast_method.ppt

Arno Arrak

Why waste time on climate models? Miskolczi has shown that water vapor feedback is negative, while these bozos have it positive. Furthermore, their cloudiness error is huge. It has been suggested that cloudiness actually has a negative feedback, and no one has been able to dispute that.
According to Miskolczi’s work with NOAA weather balloon data, the infrared absorbance of the entire atmosphere remained constant for 61 years while carbon dioxide over the same period increased by 21.6 percent. This is a substantial addition of carbon dioxide, but it had no effect whatsoever on the absorption of IR by the atmosphere. And no added absorption means no added greenhouse effect, case closed.
You might ask, of course, how it is possible for carbon dioxide not to absorb IR and thereby violate the laws of physics. That is not what is happening. According to Miskolczi’s theory, the existence of a stable climate requires that the IR optical thickness of the atmosphere remain constant at a value of about 1.86. If deviations occur, the feedbacks among the greenhouse gases present will restore it to that value. For practical purposes it boils down to carbon dioxide and water vapor. Carbon dioxide is not adjustable, but water vapor has an infinite source in the oceans and can vary. If more carbon dioxide is added to the atmosphere, the amount of water vapor changes to compensate for its absorption, and the result is that the optical thickness does not change. This means that water vapor feedback is negative — the exact opposite of what the IPCC says and what these models incorporate. If you believe in positive water vapor feedback, the amount of water vapor in the atmosphere should increase in step with the increase in carbon dioxide as determined by Mauna Loa observatory. Satellites just don’t see it. And Miskolczi has shown that the infrared optical thickness of the atmosphere has remained constant for the period of observation recorded in the NOAA weather balloon database, which goes back to 1948.
He used seven subsets of the NOAA database covering this time period and reported to the EGU meeting in Vienna last year that the observed value of the optical thickness in all cases approached 1.87 to three significant figures. This is a purely empirical observation, not dependent upon any theory, and it overrides any calculations from theory that do not agree with it. It specifically overrides all those GCMs, like CMIP5, that this article is boosting.
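For readers unfamiliar with the quantity being argued over here: an IR optical thickness tau implies a directly transmitted fraction exp(−tau) of surface infrared, by Beer-Lambert. At the tau of about 1.87 reported above, roughly 15% of surface IR escapes directly. The sketch below only illustrates the definition; it neither tests nor endorses the constancy claim:

```python
import math

# Beer-Lambert sketch of what a total IR optical thickness "tau" means:
# the directly transmitted fraction of surface IR is exp(-tau).
tau = 1.87                       # value reported in the comment above
transmitted = math.exp(-tau)
print(f"directly transmitted fraction of surface IR: {transmitted:.3f}")
```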

Pamela Gray

The above link is a powerpoint so you have to cut and paste into your browser in order to see the slides that form the basis of the following regularly updated forecast. For Oregon in the next three months we could be seeing some cold and some warm temperatures, and slightly damp conditions, followed by warmer and dryer conditions, followed by cold and variably damp/dry conditions. Put in extra wood for the winter. I think it will be colder than usual this winter and the drought may continue. It’s a good thing I put in a new well a few years back on the ranch. Under these conditions, the ranch used to run out of water in the 50’s. Our well was only 25 feet deep and depended on shallow ground water coming out of the mountains. It now goes 300 plus feet down into an aquifer. There are places here with wells that go 900 plus ft down.
http://cms.oregon.gov/ODA/NRD/docs/pdf/dlongrange.pdf?ga=t