
Guest post by Pat Frank
The summary of results from phase 5 of the Coupled Model Intercomparison Project (CMIP5) has come out. [1] CMIP5 is evaluating the state-of-the-art general circulation climate models (GCMs) that will be used in the IPCC’s forthcoming 5th Assessment Report (AR5) to project climate futures. The fidelity of these models will tell us what we may believe when climate futures are projected. This essay presents a small step in evaluating the CMIP5 GCMs, and the reliability of the IPCC conclusions based on them.
I also wanted to see how the new GCMs did compared to the CMIP3 models described in 2003. [2] The CMIP3 were used in IPCC AR4. Those models produced an estimated 10.1% error in cloudiness, which translated into an uncertainty in cloud feedback of (+/-)2.8 Watts/m^2. [3] That uncertainty is equivalent to (+/-)100% of the excess forcing due to all the GHGs humans have released into the atmosphere since 1900. Cloudiness uncertainty, all by itself, is as large as the entire effect the IPCC is trying to resolve.
If we’re to know how reliable model projections are, the energetic uncertainty, e.g., due to cloudiness error, must be propagated into GCM calculations of future climate. From there the uncertainty should appear as error bars that condition the projected future air temperature. However inclusion of true physical error bars never seems to happen.
Climate modelers invariably publish projections of temperature futures without any sort of physical uncertainty bars at all. Here is a very standard example, showing several GCM projections of future Arctic surface air temperatures. There’s not an uncertainty bar among them.
The IPCC does the same thing, brought to you here courtesy of an uncritical US EPA. The shaded regions in the IPCC SRES graphic refer to the numerical variability of individual GCM model runs. They have nothing to do with physical uncertainty or with the reliability of the projections.
Figure S1 in Jiang, et al.’s Auxiliary Material [1] summarized how accurately the CMIP5 GCMs hindcasted global cloudiness. Here it is:
Here’s what Jiang, et al., say about their results, “Figure S1 shows the multi-year global, tropical, mid-latitude and high-latitude mean TCFs from CMIP3 and CMIP5 models, and from the MODIS and ISCCP observations. The differences between the MODIS and ISCCP are within 3%, while the spread among the models is as large as ~15%.” “TCF” is Total Cloud Fraction.
Including the CMIP3 model results with the new CMIP5 outputs conveniently allows a check for improvement over the last 9 years. It also allows a comparison of the official CMIP3 cloudiness error with the 10.1% average cloudiness error I estimated earlier from the outputs of equivalent GCMs.
First, here are Tables of the CMIP3 and CMIP5 cloud projections and their associated errors. Observed cloudiness was taken as the average of the ISCCP and MODIS Aqua satellite measurements (the last two bar groupings in Jiang Figure S1), which was 67.7% cloud cover. GCM abbreviations follow the references below.
Table 1: CMIP3 GCM Global Fractional Cloudiness Predictions and Error
| Model Source | CMIP3 GCM | Global Average Cloudiness Fraction (%) | Fractional Global Cloudiness Error |
|---|---|---|---|
| NCC | bcm2 | 67.7 | 0.00 |
| CCCMA | cgcm3.1 | 60.7 | -0.10 |
| CNRM | cm3 | 73.8 | 0.09 |
| CSIRO | mk3 | 65.8 | -0.03 |
| GFDL | cm2 | 66.3 | -0.02 |
| GISS | e-h | 57.9 | -0.14 |
| GISS | e-r | 59.8 | -0.12 |
| INM | cm3 | 67.3 | -0.01 |
| IPSL | cm4 | 62.6 | -0.08 |
| MIROC | miroc3.2 | 54.2 | -0.20 |
| NCAR | ccsm3 | 55.6 | -0.18 |
| UKMO | hadgem1 | 54.2 | -0.20 |
| Avg. | | 62.1 | R.M.S. Avg. ±12.1% |
Table 2: CMIP5 GCM Global Fractional Cloudiness Predictions and Error
| Model Source | CMIP5 GCM | Global Average Cloudiness Fraction (%) | Fractional Global Cloudiness Error |
|---|---|---|---|
| NCC | noresm | 54.2 | -0.20 |
| CCCMA | canesm2 | 61.6 | -0.09 |
| CNRM | cm5 | 57.9 | -0.14 |
| CSIRO | mk3.6 | 69.1 | 0.02 |
| GFDL | cm3 | 71.9 | 0.06 |
| GISS | e2-h | 61.2 | -0.10 |
| GISS | e2-r | 61.6 | -0.09 |
| INM | cm4 | 64.0 | -0.06 |
| IPSL | cm5a | 57.9 | -0.14 |
| MIROC | miroc5 | 57.0 | -0.16 |
| NCAR | cam5 | 63.5 | -0.06 |
| UKMO | hadgem2-a | 54.2 | -0.20 |
| Avg. | | 61.2 | R.M.S. Avg. ±12.4% |
Looking at these results, some models improved between 2003 and 2012, some did not, and some apparently became less accurate. The error fractions are one-dimensional numbers condensed from errors in three dimensions. They do not mean that a given GCM predicts a constant fraction of cloudiness above or below the observed cloudiness. GCM cloud predictions weave about the observed cloudiness in all three dimensions, predicting more cloudiness here and less there. [4, 5] The GCM fractional errors are the positive and negative cloudiness errors integrated over the entire globe, compressed into single numbers. The total average error of all the GCMs is represented as the root-mean-square (R.M.S.) average of the individual GCM fractional errors.
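As a check, the R.M.S. averages in Tables 1 and 2 can be recomputed directly from the tabulated cloud fractions (my sketch, not the author’s code; small differences from the quoted 12.1%/12.4% come from rounding in the tables):

```python
import math

# Observed cloudiness: mean of the ISCCP and MODIS Aqua values (per the text)
observed = 67.7

# Global average cloudiness fractions from Table 1 (CMIP3) and Table 2 (CMIP5)
cmip3 = [67.7, 60.7, 73.8, 65.8, 66.3, 57.9, 59.8, 67.3, 62.6, 54.2, 55.6, 54.2]
cmip5 = [54.2, 61.6, 57.9, 69.1, 71.9, 61.2, 61.6, 64.0, 57.9, 57.0, 63.5, 54.2]

def rms_fractional_error(models, obs):
    """Root-mean-square of the per-model fractional errors (model - obs)/obs."""
    errs = [(m - obs) / obs for m in models]
    return math.sqrt(sum(e * e for e in errs) / len(errs))

print(f"CMIP3 R.M.S. error: {rms_fractional_error(cmip3, observed):.1%}")  # ~12%
print(f"CMIP5 R.M.S. error: {rms_fractional_error(cmip5, observed):.1%}")  # ~12%
```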
CMIP5 cloud cover was averaged over 25 model years (1980-2004), while the CMIP3 averages represent 20 model years (1980-1999). GCM average cloudiness was calculated from “monthly mean grid-box averages.” A 20-year global average therefore included 12×20 = 240 monthly global cloudiness realizations, reducing GCM random calculational error by a factor of at least 15.
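The factor of ~15 follows from the usual 1/sqrt(N) reduction of random error over N averaged realizations (a sketch; it assumes the monthly realizations are effectively independent):

```python
import math

# 12 months x 20 years of monthly global-mean cloudiness realizations
n_realizations = 12 * 20
reduction = math.sqrt(n_realizations)  # random error shrinks ~ 1/sqrt(N)
print(round(reduction, 1))  # ~15.5, hence "at least 15x"
```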
It’s safe to surmise, therefore, that the residual errors recorded in Tables 1 and 2 represent systematic GCM cloudiness error, similar to the cloudiness error found previously. The average GCM cloudiness systematic error has not diminished between 2003 and 2012. The CMIP3 models averaged 12.1% error, which is not significantly different from my estimate of 10.1% cloudiness error for GCMs of similar sophistication.
We can now proceed to propagate GCM cloudiness error into a global air temperature projection, to see how uncertainty grows over GCM projection time. GCMs typically calculate future climate in a step-wise fashion, month-by-month to year-by-year. In each step, the climate variables calculated in the previous time period are extrapolated into the next time period. Any errors residing in those calculated variables must propagate forward with them. When cloudiness is calculationally extrapolated from year-to-year in a climate projection, the 12.4% average cloudiness error in CMIP5 GCMs represents an uncertainty of (+/-)3.4 Watts/m^2 in climate energy. [3] That (+/-)3.4 Watts/m^2 of energetic uncertainty must propagate forward in any calculation of future climate.
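The two quoted energy uncertainties are mutually consistent with a single conversion factor. A quick sketch (the ~27.6 W/m^2 net cloud forcing magnitude is my inference from the text’s own numbers, via Stephens [3]; it is not stated explicitly here):

```python
# Inferred conversion: both quoted figures follow from one net cloud-forcing
# magnitude of ~27.6 W/m^2 (an assumption inferred from the text's numbers).
cloud_forcing = 27.6  # W/m^2, assumed

print(round(0.101 * cloud_forcing, 1))  # 2.8 W/m^2 -> the CMIP3-era figure
print(round(0.124 * cloud_forcing, 1))  # 3.4 W/m^2 -> the CMIP5 figure
```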
Climate models represent bounded systems: the physics constrains each variable to remain within certain limits. Systematic uncertainty, however, is not bounded. In a step-wise calculation, any systematic error or uncertainty in the input variables must propagate into the output variables as ±sqrt[Σ(per-step error)²], so the total uncertainty grows with every new step. This condition is summarized in the next Figure.
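As a numeric sketch of that rule (my code, not the author’s): with a constant per-step systematic uncertainty e, the accumulated uncertainty after n steps is ±e·sqrt(n), which produces the widening wedge shown in the Figure:

```python
import math

def accumulated_uncertainty(per_step_error, n_steps):
    """Per-step systematic uncertainty accumulated in quadrature:
    +/- sqrt(sum of squared per-step errors) = per_step_error * sqrt(n)."""
    return math.sqrt(sum(per_step_error ** 2 for _ in range(n_steps)))

# A constant +/-1 unit per-step error widens step by step, never shrinking:
for n in (1, 4, 9, 16, 25):
    print(n, round(accumulated_uncertainty(1.0, n), 2))  # sqrt(n): 1, 2, 3, 4, 5
```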
The left picture illustrates the way a GCM projects future climate in a step-wise fashion. [6] The white rectangles represent a climate propagated forward through a series of time steps. Each prior climate includes the variables that calculationally evolve into the climate of the next step. Errors in the prior conditions, uncertainties in parameterized quantities, and limitations of theory all produce errors in projected climates. The errors in each preceding step of the evolving climate calculation propagate into the following step.
This propagation produces the growth of error shown in the right side of the Figure, illustrated with temperature. The initial conditions produce the first temperature, but errors in the calculation produce high and low uncertainty bounds (e_T1, etc.) on the first projected temperature. The next calculational step produces its own errors, which add in and expand the high and low uncertainty bounds around the next temperature.
When the error is systematic, prior uncertainties do not diminish like random error. Instead, uncertainty propagates forward, producing a widening spread of uncertainty around time-stepped climate calculations. The wedge on the far right summarizes the way propagated systematic error increases as a step-wise calculation is projected across time.
When the uncertainty due to the growth of systematic error becomes larger than the physical bound, the calculated variable no longer has any physical meaning. In the Figure above, the projected temperature would become no more meaningful than a random guess.
With that in mind, the CMIP5 average error in global cloudiness feedback can be propagated into a time-wise temperature projection. This will show the average uncertainty in projected future air temperature due to the errors GCMs make in cloud feedback.
Here, then, is the CMIP5 12.4% average systematic cloudiness error propagated into everyone’s favorite global temperature projection: Jim Hansen’s famous doomsday Figure; the one he presented in his 1988 Congressional testimony. The error bars were calculated using the insight provided by my high-fidelity GCM temperature projection emulator, as previously described in detail (892 kB pdf download).
The GCM emulator showed that, within GCMs, global surface air temperature is merely a linear function of net GHG W/m^2. The uncertainty in calculated air temperature is then just a linear function of the uncertainty in W/m^2. The average GCM cloudiness error was propagated as:
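The equation image did not survive reproduction here. From the term-by-term description just below, and the ±sqrt[Σ(per-step error)²] rule stated earlier, a plausible reconstruction (my notation, offered as a sketch rather than the original symbols) is:

```latex
\sigma_{T}(n) \;=\; \pm\sqrt{\sum_{i=1}^{n}
  \left(\frac{\partial(\text{Air temperature})}{\partial(\text{Total forcing})}
  \times \sigma_{F,i}\right)^{2}},
\qquad \sigma_{F,i} = \pm 3.4~\mathrm{W/m^2}
```

where the partial derivative is the emulator’s linear coefficient and σ_F,i is the annual cloudiness-error forcing uncertainty carried into projection year i.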
“Air temperature” is the surface air temperature in Celsius. Air temperature increments annually with increasing GHG forcing. “Total forcing” is the forcing in W/m^2 produced by CO2, nitrous oxide, and methane in each of the “i” projection years. Forcing increments annually with the levels of GHGs. The equation is the standard propagation of systematic errors (very nice lecture notes here (197 kb pdf)). And now, the doomsday prediction:
The Figure is self-explanatory. Lines A, B, and C in part “a” are the projections of future temperature as Jim Hansen presented them in 1988. Part “b” shows the identical lines, but now with uncertainty bars propagated from the CMIP5 12.4% average systematic global cloudiness error. The uncertainty increases with each annual step. It is here safely assumed that Jim Hansen’s 1988 GISS Model E was not up to CMIP5 standards. So, the uncertainties produced by CMIP5 model inaccuracies are a true minimum estimate of the uncertainty bars around Jim Hansen’s 1988 temperature projections.
The large uncertainty by year 2020, about (+/-)25 C, has compressed the three scenarios so that they all appear to nearly run along the baseline. The uncertainty of (+/-)25 C does not mean I’m suggesting that the model predicts air temperatures could warm or cool by 25 C between 1958 and 2020. Instead, the uncertainty represents the pixel size resolvable by a CMIP5 GCM.
Each year, the pixel gets larger, meaning the resolution of the GCM gets worse. After only a few years, the view of a GCM is so pixelated that nothing important can be resolved. That’s the meaning of the final (+/-)25 C uncertainty bars. They mean that, at a distance of 62 years, the CMIP5 GCMs cannot resolve any air temperature change smaller than about 25 C (on average).
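The 62-year figure can be sketched in two lines (my code; it takes roughly 1 C of projected temperature per W/m^2 of forcing, as the ±3.4 C single-step figure implies):

```python
import math

per_step = 3.4            # deg C per annual step, from the +/-3.4 W/m^2
                          # cloudiness uncertainty (assumes ~1 C per W/m^2)
years = 2020 - 1958       # 62 annual steps in the projection window

envelope = per_step * math.sqrt(years)
print(round(envelope))    # ~27, i.e. the "about 25 C" quoted above
```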
Under Jim Hansen’s forcing scenario, after one year the (+/-)3.4 Watts/m^2 cloudiness error already produces a single-step pixel size of (+/-)3.4 C. In short, CMIP5-level GCMs are completely unable to resolve any change in global surface air temperature that might occur at any time, due to the presence of human-produced carbon dioxide.
Likewise, Jim Hansen’s 1988 version of GISS Model E was unable to predict anything about future climate. Neither can his 2012 version. The huge uncertainty bars make his 1988 projections physically meaningless. Arguing about which of them tracks the global anomaly trend is like arguing about angels on the heads of pins. Neither argument has any factual resolution.
Similar physical uncertainty bars should surround any GCM ensemble projection of global temperature. However, the likelihood of seeing them in the AR5 (or in any peer-reviewed climate journal) is vanishingly small.
Nevertheless, CMIP5 climate models are unable to predict global surface air temperature even one year in advance. Their projections have no particular physical meaning and are entirely unreliable.
Following on from the title of this piece: there will be no reason to believe any – repeat, any – CMIP5-level future climate projection in the AR5.
Climate projections that may appear in the IPCC AR5 will be completely vacant of physical credibility. Attribution studies focusing on CO2 will necessarily be meaningless. This failure is at the center of AGW Climate Promotions, Inc.
Anyone, in short, who claims that human-produced CO2 has caused the climate to warm since 1900 (or 1950, or 1976) is implicitly asserting the physical reliability of climate models. Anyone seriously asserting the physical reliability of climate models literally does not know what they are talking about. Recent pronouncements about human culpability should be seen in that light.
The verdict against the IPCC is identical: they do not know what they’re talking about.
Does anyone think that will stop them talking?
References:
1. Jiang, J.H., et al., Evaluation of cloud and water vapor simulations in CMIP5 climate models using NASA “A-Train” satellite observations. J. Geophys. Res., 2012. 117, D14105.
2. Covey, C., et al., An overview of results from the Coupled Model Intercomparison Project. Global Planet. Change, 2003. 37, 103-133.
3. Stephens, G.L., Cloud Feedbacks in the Climate System: A Critical Review. J. Climate, 2005. 18, 237-273.
4. AchutaRao, K., et al., Report UCRL-TR-202550. An Appraisal of Coupled Climate Model Simulations, D. Bader, Editor. 2004, Lawrence Livermore National Laboratory: Livermore.
5. Gates, W.L., et al., An Overview of the Results of the Atmospheric Model Intercomparison Project (AMIP I). Bull. Amer. Met. Soc., 1999. 80(1), 29-55.
6. Saitoh, T.S. and S. Wakashima, An efficient time-space numerical solver for global warming, in Energy Conversion Engineering Conference and Exhibit (IECEC) 35th Intersociety. 2000, IECEC: Las Vegas. p. 1026-1031.
Climate Model Abbreviations
BCC: Beijing Climate Center, China.
CCCMA: Canadian Centre for Climate Modeling and Analysis, Canada.
CNRM: Centre National de Recherches Météorologiques, France.
CSIRO: Commonwealth Scientific and Industrial Research Organization, Queensland Australia.
GFDL: Geophysical Fluid Dynamics Laboratory, USA.
GISS: Goddard Institute for Space Studies, USA.
INM: Institute for Numerical Mathematics, Russia.
IPSL: Institut Pierre Simon Laplace, France.
MIROC: U. Tokyo/Nat. Ins. Env. Std./Japan Agency for Marine-Earth Sci.&Tech., Japan.
NCAR: National Center for Atmospheric Research, USA.
NCC: Norwegian Climate Centre, Norway.
UKMO: Met Office Hadley Centre, UK.
I think that the problems with GCMs extend beyond those of albedo, or cloudiness. Albedo can be measured directly and continuously using geostationary satellites – presumably the models have been “tuned” to hindcast the known data, but this says nothing about predictive capability.
In reading about the mechanics of GCMs a few months ago I found references to low pass filtering between integration time steps. Filtering is only valid when there is an a priori knowledge of “the signal” (eg a radio signal), however GCM “experiments” assume that the signal is produced by GHGs, therefore this is all that they can ever hope to detect, as all other mechanisms are filtered out.
Even worse, GCMs use pause/reset/restart techniques in integration when physical laws are violated (e.g. conservation of mass, momentum and energy), or boundary conditions are exceeded, in order to preserve “stability”. Such models can produce no useful information, by definition.
The UK Met Office’s climate models lose fidelity after just a few days of simulated elapsed time, and produce results that are shown to be hopeless after a few days of reality. To believe that these models can produce any sensible data AT ALL with simulated elapsed times of decades is laughable to anybody with the meanest knowledge of mathematics and physics.
While I agree with Terry Oldberg on the predictions vs projections thing, either way the modellers have zero stake in accuracy. A bookmaker taking bets on future climate would be much more likely to get it right, because being wrong costs real dollars.
So all we need to know the real answer is a reliable independently verifiable measure of climate and real climate gambling.
So who is up to running the Great Climate Casino – place your bets now.
Thanks, Pat. I like to see someone who takes the trouble to be specific about what they are NOT saying when explaining their work. It helps crystallize genuine understanding and is the mark of an honest educator.
Regarding “The verdict against the IPCC is identical: they do not know what they’re talking about. Does anyone think that will stop them talking?”: This is where the so called “Precautionary Principle” is often invoked, essentially as an argument from ignorance.
More honestly put, the precautionary principle usually amounts to no more than: “I don’t know what the answer is, but I do know that I can frighten you into doing what I say.”
Well, I for one, am not frightened by climate models [only by some of the people that use them].
I read this and thought that Willis had it all covered, still does. His hypothesis should be the touchstone for every serious research program in climate. He’s not alone, but his explanations are the clearest of all of them.
http://www.stuff.co.nz/science/7400061/Global-warming-science-tackled
Next up, lets talk about how well we understand the relationship between cosmic rays and cloud formation 🙂
wiki Freeman Dyson to see what he says about GCM’s.
From my lay viewpoint I’d say that GCM’s are astounding intellectual achievements, but they seem to be pushed beyond their capabilities and misused to promote political agendas. Sort of like quantum chromodynamics being used to select the next US President.
In fairness to the GCM’s, please remember that not all parameter values are equally probable. Any particular electron could be here in this room with me or it could be orbiting the planet Zog, but the wave function might say that the second case has negligible probability. I use this technique to build fairly robust economic models.
In order to understand GHG forcing better I’ve been reading up on atmospherics. One of the key variables is naturally the Bond albedo. Google Earth’s albedo and you’ll get values ranging from 0.28 to 0.31. Ditto for Mars, where the spread is 0.16 to 0.25! This makes quite a big difference, and I’d suggest they spend more time nailing it down before any climate modelers ask us to trust their projections.
Thank you, Pat.
I wondered lonely as a cloud
Why no one else said this aloud.
Ian W says: August 24, 2012 at 3:40 am
Quite so. One gains an impression that modellers reject runs that give large errors by their criteria, while refusing to use time-honoured criteria.
Philip Bradley says:
August 23, 2012 at 11:22 pm
“What you are talking about is predictive accuracy. And I agree with Terry Oldberg. Until the climate modellers concede the model outputs are predictions, their outputs are just projections, and therefore not science.
I wish rather more people understood this.”
Unfortunately the average voter, and politician for that matter, does not understand the difference. I believe this lack of understanding is being used by modelers to advance the gravy train of monetary grants while allowing them to have an escape clause when they are eventually proven wrong.
george e smith says:
August 23, 2012 at 11:43 pm
“How do you calculate a 10.1% error in cloudiness with a model. And how do you know that error is correct; ”
Yes, what are the error bars associated with the error bars?
Sorry teach but my eyes glazed over long before “version 5 of the Coupled Model Intercomparison Project “.
I must learn to pay attention in class
I must learn to pay attention in class
I must..
Nice article Pat Frank.
Water Vapour and Clouds are the make or break factors in the theory. If these factors are not the positive feedback assumed in the theory, then the entire proposition falls apart. In fact, if they are just 50% of the assumed positive feedback, the theory still falls apart, because there are not enough feedbacks on the feedbacks to get anywhere near 3.0 C per CO2 doubling.
Clouds are a major uncertainty (they could be positive or negative) but the biggest assumed feedback is Water Vapour. Water vapour is supposed to be +6.0% already and +22% by 2100.
So far, it is Zero. The IPCC AR5 Water Vapour forecast versus actual observations to date. The climate models are way off already.
http://s8.postimage.org/epbk2gtlh/ENSO_TCWVIPCC_July12.png
And here is all the climate model forecasts over time going back to Hansen and the first IPCC forecast versus actual temperature observations to date.
http://s17.postimage.org/jobns27n3/IPCC_Forecasts_Obs_July2012.png
Terry Oldberg
We all can agree on 2 areas
1.] Climate models make projections and not predictions.
2.] The IPCC approach is not scientific as there is no way to have a statistically and empirically validated control and trial population for testing any hypothesis.
Nobody disagrees with either of these points. Can you please now stop posting these same points over and over in every thread for years?
Venter:
While you claim that “nobody disagrees with both of these points” the evidence is sadly to the contrary. As of this moment, professional climatologists have yet to identify the statistical population that underlies the claims of their climate models yet they claim to be “scientists” pursuing “science.” In AR4, IPCC authors present “evaluations” of models that are designed to convince the statistically naive that statistical validation is taking place when it is not. Here in WUWT, article after article presents a new model plus a scientifically meaningless IPCC-style “evaluation” of this model. Bloggers routinely weigh in on the issue of whether a given “evaluation” falsifies the model or does not falsify it when an “evaluation” does neither. Skeptics such as Lord Monckton appeal to IPCC-style “evaluations” in presenting logically flawed arguments for the exaggeration of the magnitude of the equilibrium climate sensitivity by the IPCC.
The truth is that climate models are currently insusceptible to being either falsified or validated by the evidence. They will remain in this state until: a) the IPCC identifies the statistical population and b) models are constructed for the purpose of predicting the outcomes of the events in this population. Very few WUWT bloggers exhibit an understanding of this state of affairs. The editors of WUWT do not exhibit an understanding of it in selecting articles for publication. Aside from you and me and a few others, amateur climatologists and professionals are alike in acting as though they don’t have a clue to how scientific research is actually performed.
A great Aussie saying I came across a minute ago, pertinent to this debate: “The hamster’s dead, but the wheel’s still turning.”
Sadly, although I’ve done very similar analyses of the doomsday curves from Hansen et al. that presume different forcing/feedback scenarios, I do have to take issue with the methodology used to forward propagate the errors. The GCMs are numerical simulations — they are at heart systems of parametric coupled nonlinear differential equations. The solutions they generate are not, nor are they expected to be, normal. Hence simple forward propagation of errors on the basis of an essentially multivariate normal assumption (e.g. the one underlying Pearson’s χ²) is almost certainly egregiously incorrect for precisely the reasons warned about in the Oser lecture — the system is more, or at least differently, constrained than the simple linear addition of Gaussian errors would suggest.
With a complex system of this sort, the “right” thing to do (and the thing that is in fact done, in many of the GCMs) is to concede that one has no bloody idea how the DISTRIBUTION of final states will vary given a DISTRIBUTION of input parameters around the best-guess inputs, and hence use Monte Carlo to generate a simulated distribution of final states using input that are in turn selected from the presumed distributions of input parameters. Ensemble models do this all the time to predict hurricane trajectories, for example — they run the hurricane model a few dozen or hundred times and form a shotgun-blast future locus of possible hurricane trajectories to determine the most probable range of values.
To put it in more physical terms, because there are feedbacks in the system that can be negative or positive, the variations in cloudiness are almost certainly not Gaussian; they are very probably strongly correlated with other variables in the GCMs. Their uncertainty in part derives from uncertainty in the surface temperatures, the SSTs, the presence or absence of ENSO positive, and far more. Some of these “uncertainties” — feedbacks — amplify error/variation, but most of them do the opposite or the climate would be unstable, far more unstable than it is observed to be. In other words, while the butterfly effect works on weather state, it tends to be suppressed for climate; the climate doesn’t seem enormously unstable toward hothouse Earth or snowball Earth even on a geological time scale, and the things that cause instability on this scale do not appear to be any of the parameters included in the GCMs anyway because we don’t really know what they are or how they work.
However, the basic message of this is sound enough — there should be error estimates on the “projections” of GCMs — at the very least a Monte Carlo of the spread of outcomes given noise on the parameters.
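The Monte Carlo point can be illustrated in miniature (my toy code, not any GCM): draw each annual forcing error from a distribution, run the step-wise “model” many times, and read the spread of final states. Under independent Gaussian per-step errors the ensemble spread simply reproduces the quadrature rule; the value of the Monte Carlo machinery is that it still works when the errors are correlated or non-Gaussian, where quadrature does not.

```python
import math
import random
import statistics

def toy_projection(per_step_sigma=3.4, n_years=62):
    """One toy step-wise run: each annual step picks up an independent
    Gaussian forcing error (in deg C equivalent)."""
    return sum(random.gauss(0.0, per_step_sigma) for _ in range(n_years))

random.seed(1)
finals = [toy_projection() for _ in range(20000)]  # ensemble of runs
spread = statistics.stdev(finals)

print(round(spread, 1))               # ensemble spread of final states
print(round(3.4 * math.sqrt(62), 1))  # quadrature value, ~26.8
```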
rgb
pat says:
August 23, 2012 at 10:52 pm
They are on the cutting edge of more government money.
(Most do not realize that as much as 60% of science grants go to the applicants, personally)
_______________________
Actually even the scientists are complaining. The grant money generally gets wasted in bureaucratic red tape they are saying.
The comments are the really interesting part of the article.
I would rather see a lottery system. This would hopefully do away with the entrenched old boy system that keeps innovative ideas from being considered.
This commenter expresses how I feel about the situation.
An honest lottery system would have allowed “Denier” Scientists a bit more of a fair shake and perhaps would have kept the IPCC – CO2 mania more under control.
Retired senior NASA atmospheric scientist, Dr. John S. Theon, the former supervisor of James Hansen, has now publicly declared himself a skeptic and declared that Hansen “embarrassed NASA”. He violated NASA’s official agency position on climate forecasting (“we did not know enough to forecast climate change or mankind’s effect on it”). Hansen thus embarrassed NASA by coming out with his claims of global warming in 1988 in his testimony before Congress. [January 15, 2009]
Theon declared: “Climate models are useless”.
See: James Hansen’s Former NASA Supervisor Declares Himself a Skeptic – Says Hansen ‘Embarrassed NASA’, ‘Was Never Muzzled’, & Models ‘Useless’ (Watts Up With That?, January 27, 2009) http://wattsupwiththat.com/2009/01/27/james-hansens-former-nasa-supervisor-declares-himself-a-skeptic-says-hansen-embarrassed-nasa-was-never-muzzled/
It is difficult to believe that energy poverty is being imposed on Earth to prevent very flawed predictions from becoming true.
This article again shows how useless GCMs are. Thanks, Pat!
This is easily fixed. Add three conditions under each CO2 scenario: El Nino, La Nina, and neutral (we know what the jet stream and clouds kinda do under each condition). Run several trials with the dials turned up to different settings for El Nino, turned up for different settings for La Nina, and several runs between the margins for neutral, and then average for each CO2 scenario. Wonder if one of the models does this already?
And just for giggles, get rid of the CO2 fudge factor and run that under the three ENSO conditions (so you will have three conditions with, and three conditions without CO2 FF). Just to compare. Include accuracy bars of course.
SO WHAT if you have determined that GCM projections are worthless!!!!
Us warmers can take solace knowing we STILL can cling to the universally proven, physiologically compatible, unprecedented, data substantiated, untainted, statistically validated, global encompassing, court approved, and PEER reviewed tree ring data showing MASSIVE species extinction by 2030.
/sarc
Robert Brown – thanks for that comment.
Sorry, satellites don’t cut the mustard. A single satellite can’t return to a given point to take another sample in less than 84 minutes, and when the satellite does get back to the same place, it isn’t the same place; it’s about 22 degrees away. Clouds can appear out of nowhere and then disappear in a few minutes. And how many satellites do we have that can simultaneously observe different points on the surface?
http://en.wikipedia.org/wiki/List_of_Earth_observation_satellites
Actually, satellites do cut the mustard, they even cut it quite finely. Geostationary satellites in particular can easily resolve and correct for geometry of almost the whole globe, and the one place they are weak is tightly sampled by polar orbit satellites. They are the one source of reliable global data.
rgb
How about a little skepticism from Anthony Watts before posting such things? On what possible grounds do the error bars grow with time in the way shown, and how on earth do the mean A, B, C projections collapse into one another? Neither makes any sense to me.

REPLY: Well I’m sure Pat can answer, but from my view as a forecaster, forecasts grow more uncertain the further they are in the future, just as temperature and paleo data grow more uncertain back in time. We all know that a weather forecast for two weeks ahead is far less certain than one for two days ahead. Conversely, unless properly recorded/measured, the past is less certain than the present.
For example due to errors in siting/exposure (like on the north side of buildings) we know the temperatures measured in 1850 -vs- 1950 have wider error bars. See the recent BEST graph to see what I mean:
If you think error bars should get smaller into the future for a forecast, I have a big chunk of ice I can sell you in a warming Antarctica. – Anthony

There is a new paper by Troy Masters of UCLA which indicates that the cloud feedback has been negative from 2000 to 2010, using all satellite and temperature data sources. There is much uncertainty depending on the dataset and timeframes used, but generally the feedback is negative. [The paper doesn’t really say that, but that is what the data show – it got quite a going-over by Andrew Dessler, who had recommended rejection, and other referees had recommended it be less certain about the negative cloud feedback – so to get it published, vagueness is always required if you are going against the theory/gospel.]
http://www.earth-syst-dynam.net/3/97/2012/esd-3-97-2012.pdf
There is a supplemental to the paper which contains all the data (and more) which would be immensely helpful to those who are looking at this issue in more depth (zip file so I won’t link to it).
Don’t know if the link will work, but Oregon and many other states use this method for agricultural forecasting. Why? State after state has determined that analogue model forecasting outperforms climatological (IE dynamical) model methods. Farmers get testy when the forecast results in wilted or frozen crops and dead or worthless animals. That state agricultural departments are ignoring CO2/climate change models is telling.
cms.oregon.gov/oda/nrd/docs/forecast_method.ppt
Why waste time on climate models? Miskolczi has shown that water vapor feedback is negative while these bozos have it positive. Furthermore, their cloudiness error is huge. It has been suggested that cloudiness actually has a negative feedback, and no one has been able to dispute that. According to Miskolczi’s work with NOAA weather balloon data, the infrared absorbance of the entire atmosphere remained constant for 61 years while carbon dioxide over the same period increased by 21.6 percent. This is a substantial addition of carbon dioxide, but it had no effect whatsoever on the absorption of IR by the atmosphere. And no absorption means no greenhouse effect, case closed.

You might ask, of course, how it is possible for carbon dioxide not to absorb IR and thereby violate the laws of physics. This of course is not what is happening. According to Miskolczi’s theory, the existence of a stable climate requires that the IR optical thickness of the atmosphere remain constant at a value of about 1.86. If deviations occur, feedbacks among the greenhouse gases present will restore it to that value. For practical purposes it boils down to carbon dioxide and water vapor. Carbon dioxide is not adjustable, but water vapor has an infinite source in the oceans and can vary. If more carbon dioxide is added to the atmosphere, the amount of water vapor changes to compensate for its absorption, and the result is that the optical thickness does not change. This means that water vapor feedback is negative, the exact opposite of what the IPCC says and what these models incorporate.

If you believe in positive water vapor feedback, the amount of water vapor in the atmosphere should increase in step with the increase in carbon dioxide as determined by the Mauna Loa observatory. Satellites just don’t see it. And Miskolczi has shown that the infrared optical thickness of the atmosphere has remained constant for the period of observation recorded in the NOAA weather balloon database, which goes back to 1948. He used seven subsets of the NOAA database covering this time period and reported to the EGU meeting in Vienna last year that the observed value of the optical thickness in all cases approached 1.87 to within three significant figures. This is a purely empirical observation, not dependent upon any theory, and it overrides any calculations from theory that do not agree with it. It specifically overrides all those GCMs like CMIP5 that this article is boosting.
The above link is a powerpoint, so you have to cut and paste it into your browser to see the slides that form the basis of the following regularly updated forecast. For Oregon in the next three months we could be seeing some cold and some warm temperatures and slightly damp conditions, followed by warmer and drier conditions, followed by cold and variably damp/dry conditions. Put in extra wood for the winter. I think it will be colder than usual this winter and the drought may continue. It’s a good thing I put in a new well a few years back on the ranch. Under these conditions, the ranch used to run out of water in the 1950s. Our well was only 25 feet deep and depended on shallow ground water coming out of the mountains. It now goes 300-plus feet down into an aquifer. There are places here with wells that go 900-plus feet down.
http://cms.oregon.gov/ODA/NRD/docs/pdf/dlongrange.pdf?ga=t