
Guest post by Pat Frank
The summary of results from version 5 of the Coupled Model Intercomparison Project (CMIP5) has come out. [1] CMIP5 is evaluating the state-of-the-art general circulation climate models (GCMs) that will be used in the IPCC’s forthcoming 5th Assessment Report (AR5) to project climate futures. The fidelity of these models will tell us what we may believe when climate futures are projected. This essay presents a small step in evaluating the CMIP5 GCMs, and the reliability of the IPCC conclusions based on them.
I also wanted to see how the new GCMs did compared to the CMIP3 models described in 2003. [2] The CMIP3 models were used in the IPCC AR4. Those models produced an estimated 10.1% error in cloudiness, which translated into an uncertainty in cloud feedback of (+/-)2.8 Watts/m^2. [3] That uncertainty is equivalent to (+/-)100% of the excess forcing due to all the GHGs humans have released into the atmosphere since 1900. Cloudiness uncertainty, all by itself, is as large as the entire effect the IPCC is trying to resolve.
If we're to know how reliable model projections are, the energetic uncertainty due to, for example, cloudiness error must be propagated into GCM calculations of future climate. From there, the uncertainty should appear as error bars that condition the projected future air temperature. However, inclusion of true physical error bars never seems to happen.
Climate modelers invariably publish projections of temperature futures without any sort of physical uncertainty bars at all. Here is a very standard example, showing several GCM projections of future arctic surface air temperatures. There’s not an uncertainty bar among them.
The IPCC does the same thing, brought to you here courtesy of an uncritical US EPA. The shaded regions in the IPCC SRES graphic refer to the numerical variability of individual GCM model runs. They have nothing to do with physical uncertainty or with the reliability of the projections.
Figure S1 in Jiang, et al.'s Auxiliary Material [1] summarized how accurately the CMIP5 GCMs hindcast global cloudiness. Here it is:
Here’s what Jiang, et al., say about their results, “Figure S1 shows the multi-year global, tropical, mid-latitude and high-latitude mean TCFs from CMIP3 and CMIP5 models, and from the MODIS and ISCCP observations. The differences between the MODIS and ISCCP are within 3%, while the spread among the models is as large as ~15%.” “TCF” is Total Cloud Fraction.
Including the CMIP3 model results with the new CMIP5 outputs conveniently allows a check for improvement over the last 9 years. It also allows a comparison of the official CMIP3 cloudiness error with the 10.1% average cloudiness error I estimated earlier from the outputs of equivalent GCMs.
First, here are tables of the CMIP3 and CMIP5 cloud projections and their associated errors. Observed cloudiness was taken as the average of the ISCCP and MODIS Aqua satellite measurements (the last two bar groupings in Jiang Figure S1), which was 67.7% cloud cover. GCM abbreviations follow the references below.
Table 1: CMIP3 GCM Global Fractional Cloudiness Predictions and Error
| Model Source | CMIP3 GCM | Global Average Cloudiness (%) | Fractional Global Cloudiness Error |
|---|---|---|---|
| NCC | bcm2 | 67.7 | 0.00 |
| CCCMA | cgcm3.1 | 60.7 | -0.10 |
| CNRM | cm3 | 73.8 | 0.09 |
| CSIRO | mk3 | 65.8 | -0.03 |
| GFDL | cm2 | 66.3 | -0.02 |
| GISS | e-h | 57.9 | -0.14 |
| GISS | e-r | 59.8 | -0.12 |
| INM | cm3 | 67.3 | -0.01 |
| IPSL | cm4 | 62.6 | -0.08 |
| MIROC | miroc3.2 | 54.2 | -0.20 |
| NCAR | ccsm3 | 55.6 | -0.18 |
| UKMO | hadgem1 | 54.2 | -0.20 |
| | Avg. | 62.1 | R.M.S. Avg. ±12.1% |
Table 2: CMIP5 GCM Global Fractional Cloudiness Predictions and Error
| Model Source | CMIP5 GCM | Global Average Cloudiness (%) | Fractional Global Cloudiness Error |
|---|---|---|---|
| NCC | noresm | 54.2 | -0.20 |
| CCCMA | canesm2 | 61.6 | -0.09 |
| CNRM | cm5 | 57.9 | -0.14 |
| CSIRO | mk3.6 | 69.1 | 0.02 |
| GFDL | cm3 | 71.9 | 0.06 |
| GISS | e2-h | 61.2 | -0.10 |
| GISS | e2-r | 61.6 | -0.09 |
| INM | cm4 | 64.0 | -0.06 |
| IPSL | cm5a | 57.9 | -0.14 |
| MIROC | miroc5 | 57.0 | -0.16 |
| NCAR | cam5 | 63.5 | -0.06 |
| UKMO | hadgem2-a | 54.2 | -0.20 |
| | Avg. | 61.2 | R.M.S. Avg. ±12.4% |
Looking at these results, some models improved between 2003 and 2012, some did not, and some apparently became less accurate. The error fractions are one-dimensional numbers condensed from errors in three dimensions. They do not mean that a given GCM predicts a constant fraction of cloudiness above or below the observed cloudiness. GCM cloud predictions weave about the observed cloudiness in all three dimensions, predicting more cloudiness here and less there. [4, 5] The GCM fractional errors are the positive and negative cloudiness errors integrated over the entire globe, compressed into single numbers. The total average error of all the GCMs is represented as the root-mean-square average of the individual GCM fractional errors.
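For anyone who wants to check the summary rows, here is a minimal sketch of the arithmetic (Python, with the Table 2 values typed in by hand; the same procedure reproduces Table 1):

```python
# Sketch of the Table 2 summary arithmetic: each model's fractional error
# against the observed cloudiness, then the root-mean-square average.
import math

OBSERVED = 67.7  # average of the ISCCP and MODIS Aqua observations, % cloud cover

cmip5 = {  # CMIP5 GCM: global average cloudiness, % (Table 2)
    "noresm": 54.2, "canesm2": 61.6, "cm5": 57.9, "mk3.6": 69.1,
    "cm3": 71.9, "e2-h": 61.2, "e2-r": 61.6, "cm4": 64.0,
    "cm5a": 57.9, "miroc5": 57.0, "cam5": 63.5, "hadgem2-a": 54.2,
}

errors = [(cloud - OBSERVED) / OBSERVED for cloud in cmip5.values()]
rms = math.sqrt(sum(e * e for e in errors) / len(errors))

print(f"average cloudiness: {sum(cmip5.values()) / len(cmip5):.1f}%")  # 61.2%
print(f"RMS fractional error: +/-{rms:.1%}")  # ~12.3%, the quoted 12.4% within rounding
```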
CMIP5 cloud cover was averaged over 25 model years (1980-2004), while the CMIP3 averages represent 20 model years (1980-1999). GCM average cloudiness was calculated from "monthly mean grid-box averages." A 20-year global average therefore includes 12×20 = 240 monthly global cloudiness realizations, reducing GCM random calculational error by at least a factor of sqrt(240) ≈ 15.
It’s safe to surmise, therefore, that the residual errors recorded in Tables 1 and 2 represent systematic GCM cloudiness error, similar to the cloudiness error found previously. The average GCM cloudiness systematic error has not diminished between 2003 and 2012. The CMIP3 models averaged 12.1% error, which is not significantly different from my estimate of 10.1% cloudiness error for GCMs of similar sophistication.
We can now proceed to propagate GCM cloudiness error into a global air temperature projection, to see how uncertainty grows over GCM projection time. GCMs typically calculate future climate in a step-wise fashion, month-by-month to year-by-year. In each step, the climate variables calculated in the previous time period are extrapolated into the next time period. Any errors residing in those calculated variables must propagate forward with them. When cloudiness is calculationally extrapolated from year to year in a climate projection, the 12.4% average cloudiness error in the CMIP5 GCMs represents an uncertainty of (+/-)3.4 Watts/m^2 in climate energy [3] (consistent with scaling the earlier estimate: (12.4/10.1) × (+/-)2.8 Watts/m^2 ≈ (+/-)3.4 Watts/m^2). That (+/-)3.4 Watts/m^2 of energetic uncertainty must propagate forward in any calculation of future climate.
Climate models represent bounded systems. Bounded variables are constrained to remain within certain limits set by the physics of the system. Systematic uncertainty, however, is not bounded. In a step-wise calculation, any systematic error or uncertainty in the input variables must propagate into the output variables as (+/-)sqrt[sum of (per-step error)^2]. The uncertainty therefore increments with each new step. This condition is summarized in the next Figure.
The left picture illustrates the way a GCM projects future climate in a step-wise fashion. [6] The white rectangles represent a climate propagated forward through a series of time steps. Each prior climate includes the variables that calculationally evolve into the climate of the next step. Errors in the prior conditions, uncertainties in parameterized quantities, and limitations of theory all produce errors in projected climates. The errors in each preceding step of the evolving climate calculation propagate into the following step.
This propagation produces the growth of error shown in the right side of the figure, illustrated with temperature. The initial conditions produce the first temperature. But errors in the calculation produce high and low uncertainty bounds (e_T1, etc.) on the first projected temperature. The next calculational step produces its own errors, which add in and expand the high and low uncertainty bounds around the next temperature.
When the error is systematic, prior uncertainties do not diminish like random error. Instead, uncertainty propagates forward, producing a widening spread of uncertainty around time-stepped climate calculations. The wedge on the far right summarizes the way propagated systematic error increases as a step-wise calculation is projected across time.
When the uncertainty due to the growth of systematic error becomes larger than the physical bound, the calculated variable no longer has any physical meaning. In the Figure above, the projected temperature would become no more meaningful than a random guess.
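To put numbers on that growth, here is an illustrative sketch (not a GCM computation) of how a constant (+/-)3.4 Watts/m^2 per-step systematic uncertainty compounds in root-sum-square fashion:

```python
# Root-sum-square growth of a constant per-step systematic uncertainty:
# after n steps the envelope is sqrt(n) times the single-step value.
import math

PER_STEP = 3.4  # +/- W/m^2 per annual step, from the 12.4% cloudiness error

for n in (1, 5, 10, 30, 62):
    envelope = math.sqrt(sum(PER_STEP**2 for _ in range(n)))  # = PER_STEP * sqrt(n)
    print(f"after {n:2d} steps: +/-{envelope:4.1f} W/m^2")
```

By 62 annual steps (1958-2020, the span of the Hansen projection below), the envelope is sqrt(62) ≈ 7.9 times the single-step uncertainty.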
With that in mind, the CMIP5 average error in global cloudiness feedback can be propagated into a time-wise temperature projection. This will show the average uncertainty in projected future air temperature due to the errors GCMs make in cloud feedback.
Here, then, is the CMIP5 12.4% average systematic cloudiness error propagated into everyone's favorite global temperature projection: Jim Hansen's famous doomsday Figure, the one he presented in his 1988 Congressional testimony. The error bars were calculated using the insight provided by my high-fidelity GCM temperature projection emulator, as previously described in detail (892 kB pdf download).
The GCM emulator showed that, within GCMs, global surface air temperature is merely a linear function of net GHG forcing in W/m^2. The uncertainty in calculated air temperature is then just a linear function of the uncertainty in W/m^2. The average GCM cloudiness error was propagated as:
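(The equation itself appeared as an image in the original post; the form below is a reconstruction from the surrounding description, using the standard root-sum-square propagation.)

$$
\pm u_{T}(n) \;=\; \sqrt{\sum_{i=1}^{n}\left[\frac{\partial(\mathrm{Air\ temperature})}{\partial(\mathrm{Total\ forcing})}\times u_{F,i}\right]^{2}},
\qquad u_{F,i} = \pm 3.4~\mathrm{W/m^{2}},
$$

where $u_{T}(n)$ is the uncertainty in projected air temperature after $n$ annual steps, and $u_{F,i}$ is the forcing uncertainty from cloudiness error entering projection year $i$.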
“Air temperature” is the surface air temperature in Celsius. Air temperature increments annually with increasing GHG forcing. “Total forcing” is the forcing in W/m^2 produced by CO2, nitrous oxide, and methane in each of the “i” projection years. Forcing increments annually with the levels of GHGs. The equation is the standard propagation of systematic errors (very nice lecture notes here (197 kb pdf)). And now, the doomsday prediction:
The Figure is self-explanatory. Lines A, B, and C in part "a" are the projections of future temperature as Jim Hansen presented them in 1988. Part "b" shows the identical lines, but now with uncertainty bars propagated from the CMIP5 12.4% average systematic global cloudiness error. The uncertainty increases with each annual step. It is safely assumed here that Jim Hansen's 1988 GISS Model E was not up to CMIP5 standards. So, the uncertainties produced by CMIP5 model inaccuracies are a true minimum estimate of the uncertainty bars around Jim Hansen's 1988 temperature projections.
The large uncertainty by year 2020, about (+/-)25 C, has compressed the three scenarios so that they all appear to nearly run along the baseline. The uncertainty of (+/-)25 C does not mean I’m suggesting that the model predicts air temperatures could warm or cool by 25 C between 1958 and 2020. Instead, the uncertainty represents the pixel size resolvable by a CMIP5 GCM.
Each year, the pixel gets larger, meaning the resolution of the GCM gets worse. After only a few years, the view of a GCM is so pixelated that nothing important can be resolved. That's the meaning of the final (+/-)25 C uncertainty bars. They mean that, at a distance of 62 years, the CMIP5 GCMs cannot resolve any air temperature change smaller than about 25 C (on average).
Under Jim Hansen’s forcing scenario, after one year the (+/-)3.4 Watts/m^2 cloudiness error already produces a single-step pixel size of (+/-)3.4 C. In short, CMIP5-level GCMs are completely unable to resolve any change in global surface air temperature that might occur at any time, due to the presence of human-produced carbon dioxide.
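As a rough consistency check, assuming the root-sum-square growth sketched above and the one-to-one mapping of the (+/-)3.4 Watts/m^2 forcing uncertainty into (+/-)3.4 C: $\pm 3.4~\mathrm{C} \times \sqrt{62} \approx \pm 27~\mathrm{C}$, the same order as the roughly (+/-)25 C read from the figure at 2020.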
Likewise, Jim Hansen's 1988 version of GISS Model E was unable to predict anything about future climate. Neither can his 2012 version. The huge uncertainty bars make his 1988 projections physically meaningless. Arguing about which of them tracks the global anomaly trend is like arguing about angels on the heads of pins. Neither argument has any factual resolution.
Similar physical uncertainty bars should surround any GCM ensemble projection of global temperature. However, the likelihood of seeing them in the AR5 (or in any peer-reviewed climate journal) is vanishingly small.
Nevertheless, CMIP5 climate models are unable to predict global surface air temperature even one year in advance. Their projections have no particular physical meaning and are entirely unreliable.
Following on from the title of this piece: there will be no reason to believe any (repeat, any) CMIP5-level future climate projection in the AR5.
Climate projections that may appear in the IPCC AR5 will be completely vacant of physical credibility. Attribution studies focusing on CO2 will necessarily be meaningless. This failure is at the center of AGW Climate Promotions, Inc.
Anyone, in short, who claims that human-produced CO2 has caused the climate to warm since 1900 (or 1950, or 1976) is implicitly asserting the physical reliability of climate models. Anyone seriously asserting the physical reliability of climate models literally does not know what they are talking about. Recent pronouncements about human culpability should be seen in that light.
The verdict against the IPCC is identical: they do not know what they’re talking about.
Does anyone think that will stop them talking?
References:
1. Jiang, J.H., et al., Evaluation of cloud and water vapor simulations in CMIP5 climate models using NASA “A-Train” satellite observations. J. Geophys. Res., 2012. 117, D14105.
2. Covey, C., et al., An overview of results from the Coupled Model Intercomparison Project. Global Planet. Change, 2003. 37, 103-133.
3. Stephens, G.L., Cloud Feedbacks in the Climate System: A Critical Review. J. Climate, 2005. 18, 237-273.
4. AchutaRao, K., et al., Report UCRL-TR-202550. An Appraisal of Coupled Climate Model Simulations, D. Bader, Editor. 2004, Lawrence Livermore National Laboratory: Livermore.
5. Gates, W.L., et al., An Overview of the Results of the Atmospheric Model Intercomparison Project (AMIP I). Bull. Amer. Met. Soc., 1999. 80(1), 29-55.
6. Saitoh, T.S. and S. Wakashima, An efficient time-space numerical solver for global warming, in 35th Intersociety Energy Conversion Engineering Conference and Exhibit (IECEC). 2000, IECEC: Las Vegas. p. 1026-1031.
Climate Model Abbreviations
BCC: Beijing Climate Center, China.
CCCMA: Canadian Centre for Climate Modeling and Analysis, Canada.
CNRM: Centre National de Recherches Météorologiques, France.
CSIRO: Commonwealth Scientific and Industrial Research Organization, Queensland Australia.
GFDL: Geophysical Fluid Dynamics Laboratory, USA.
GISS: Goddard Institute for Space Studies, USA.
INM: Institute for Numerical Mathematics, Russia.
IPSL: Institut Pierre Simon Laplace, France.
MIROC: U. Tokyo/Nat. Ins. Env. Std./Japan Agency for Marine-Earth Sci.&Tech., Japan.
NCAR: National Center for Atmospheric Research, USA.
NCC: Norwegian Climate Centre, Norway.
UKMO: Met Office Hadley Centre, UK.
Interesting but misguided. You were speaking about rounding errors when you should have been speaking of rounding accuracies. When viewed appropriately a lack of solid measurement and estimation allows certain knowledge of the instantaneous anthropogenic temperature anomaly induced on the wing of a bumble bee failing to fly in a computer simulation.
A suggestion as to how to get the models to more accurately reflect cloudiness changes:
Measure the length of the lines of air-mass mixing along the world's frontal zones, which appears to vary by thousands of miles when the jets become more meridional as compared to zonal, and/or shift equatorward as compared to taking a more poleward position.
I think that is the ‘canary in the coalmine’ as regards net changes in the global energy budget because it links directly to albedo and the amount of solar energy able to enter the oceans.
Er, unbelievable. Here is presented, with pseudo-precise numerical content, the result of millions of dollars of computer code massaging, and, well, dick-all to do with the real world. Presented by an organization corrupt to the gneiss below its roots, and promoting a pre-concluded agenda.
This article, like many published in WUWT, conflates the idea that is referenced by the term "projection" with the term that is referenced by the word "prediction." Actually, the two ideas differ, and in ways that are logically important.
Predictions state empirically falsifiable claims about the unobserved but observable outcomes of events. Projections do not. Being unfalsifiable, projections lie outside science.
Quick! Somebody tell the New York Times! Global warming called off! Stop the presses!
Oh wait. There’s no one listening. . .
/Mr Lynn
In my post of August 23, 2012 at 8:35 pm, please strike the phrase “with the term” and replace it with “with the idea.”
The amount of money wasted on this fool's errand is astounding. It is a simple fact that no amount of computer power or tweaking of the program will ever allow us to forecast, predict, or project what is going to happen with the climate. It is a fact that a chaotic system cannot be forecast with any accuracy beyond a few days. Why anyone should pay attention to these expensive toys is beyond me. Beyond that, why is the taxpayer being bilked for the tab on this idiocy?
Another excellent post! Thanks again, Pat Frank.
Pat Frank,
Thank you for this fine post. It really got to me and is bookmarked.
I am somewhat of a late-comer to CAGW criticism – it took Climategate 1.0 to get my attention.
First technical question I had was “what do the thermometers actually say?” Stumbled onto something called surfacestations.org – promptly barfed just considering measurement uncertainty – and began marveling at just how radiant physics would work considering CO2 concentration contribution; practically instantaneously considering that earth atmosphere heat content is heavily dependent upon the hydrological cycle; hmmm…clouds…
Wouldn’t an insulating component of our atmosphere (Man-Made CO2) influence cloud formation, which in turn would influence heating as well as atmospheric heat content? In my mental picture/thought experiment it just makes sense that that process feedback is (highly) negative. No need to panic. But let’s get some better measurements; I mean really!
Clouds…
http://www.clouds-online.com/
High clouds, low clouds, wispy ones and rainy ones, storm clouds, white clouds set against brilliant blue skies, sun crowding grey sky clouds – all kinds of clouds.
My poetic point? Perhaps it isn’t just clouds as in quantitative but also clouds as in qualitative. Point here posted? Models don’t even come close to modeling these wispy, beautiful, billowing, dangerous at times, essential-to-our understanding of our atmosphere, things we call clouds. And the science is settled? What! Pure foolishness to claim that.
[Excerpt from the above article]
Climate projections that may appear in the IPCC AR5 will be completely vacant of physical credibility. Attribution studies focusing on CO2 will necessarily be meaningless. This failure is at the center of AGW Climate Promotions, Inc.
Anyone, in short, who claims that human-produced CO2 has caused the climate to warm since 1900 (or 1950, or 1976) is implicitly asserting the physical reliability of climate models. Anyone seriously asserting the physical reliability of climate models literally does not know what they are talking about.
Recent pronouncements about human culpability should be seen in that light.
The verdict against the IPCC is identical: they do not know what they’re talking about.
[end of excerpt]
Comparison to a recent excerpt from wattsup:
http://wattsupwiththat.com/2012/08/15/more-hype-on-greenlands-summer-melt/#comment-1058781
It should now be obvious that global warming alarmists and institutions like the IPCC have NO predictive track record – every one of their major scientific predictions has proven false, and their energy and economic recommendations have proven costly and counterproductive.
____________
Close enough.
Allan MacRae
It sounds as though you are under the misimpression that IPCC climate models make “predictions.” As the climatologist Kevin Trenberth emphatically stated in the blog of the journal Nature in 2007, they make only “projections.” Though predictions are falsifiable, projections are not. The models are not wrong but rather are not scientific.
They will adjust them after the fact as needed. Then show you how spot on their predictions were. No need for no stinkin’ physical uncertainty bars.
Physical uncertainty bars are for sissies! I used to have a license plate that read this in bold lettering on my pink bike with banana seat, sissy bars, and basket.
Terry Oldberg says that if “Predictions” are not falsifiable they cannot be regarded as science. They should not even be called predictions.
He is right. When Einstein predicted the deflection a light beam would suffer at grazing incidence to the sun, he initially got the answer wrong by a factor of 2. Back then, when "Science" still cared about "Integrity," his theory would have been discredited if he had failed to correct his error before Eddington published his observations. That was "Science". CMIP3 and CMIP5 are "Non-Science" (aka "Nonsense").
The statisticians here should have a field day deconstructing this one. If he wanted a worse error, he could have propagated it monthly instead of yearly. That would have been more impressive. How about daily? Can anyone figure out what he did wrong? Or, if the error is only 12% for a year, it must be tiny for a day to only grow to 12% for a year. Just needs some more thought.
The true responses seen here are lacking the basics.
Diurnal cloud cover, height, opacity to the wavelengths that matter, and where the clouds are during the dayside and at night throughout each and every day: all of this is key.
A guess is always a guess, and if you cannot differentiate between day (incoming solar attenuation) and night (OLR attenuation), you have not produced anything of meaningful value…
They are on the cutting edge of more government money.
(Most do not realize that as much as 60% of science grants go to the applicants, personally)
Not sure I agree with your comment about error bars. Error bars are measures of statistical uncertainty or variance. The output of climate models isn't data, in the sense of measurements subject to statistical rules. Gavin Schmidt is clear about this.
What you are talking about is predictive accuracy. And I agree with Terry Oldberg. Until the climate modellers concede the model outputs are predictions, their outputs are just projections, and therefore not science.
I wish rather more people understood this.
Philip Bradley:
Thanks for engaging on the projections vs predictions issue. Before their models can make predictions, climatologists must define the underlying statistical population, for a prediction is an extrapolation to the outcome of an event in such a population. A predictive model makes claims about the relative frequencies of the various possible outcomes; these claims are checked by observing the relative frequencies in a sampling of observed events that is drawn for this purpose from the population. In modern climatology, though, there is no population and thus is no opportunity to make a check. This is why the research cannot truly be said to be “scientific.”
How do you calculate a 10.1% error in cloudiness with a model? And how do you know that error is correct, especially since, as far as I can tell, it isn't even possible to measure global cloudiness with that kind of precision? It's bad enough to claim they know the global temperature, given the poor sampling regimen, but the number of temperature recording sites must be orders of magnitude greater than the number of cloud measuring sites, especially for the 70% that is ocean.
Sorry, satellites don't cut the mustard. A single satellite can't return to a given point to take another sample in less than 84 minutes, and when the satellite does get back to the same place, it isn't the same place; that's about 22 degrees away. Clouds can appear out of nowhere and then disappear in a few minutes. And how many satellites do we have that can simultaneously observe different points on the surface?
Well I forgot, you can tell a temperature by boring one hole in one tree, so maybe you can get a cloudiness number from a single photograph.
Totally ridiculous to believe they can sample cloud cover of the whole globe in conformance to the Nyquist sampling criterion.
In this instance, what I mean by cloud cover is the total solar spectrum Joules per square metre at the surface under the cloud, divided by the Joules per square metre above the clouds (TSI x time)
Anthony, the link “(very nice lecture notes here (197 kb pdf))” doesn’t work, it has an extraneous slash at the end. The correct URL is http://www.phas.ubc.ca/~oser/p509/Lec_12.pdf
What an excellent article! Thanks to Pat Frank. Clouds are the elephant in the room of climate science. They come, they go, and they are the Venetian blinds of the earth. If you cannot model the clouds you cannot predict weather or climate. The IPCC is wasting both our money and their time.
Terry Oldberg says: August 23, 2012 at 8:35 pm
This article, like many published in WUWT, conflates the idea that is referenced by the term “projection” with the idea that is referenced by the word “prediction.” Actually, the two ideas differ and in ways that are logically important.
Predictions state empirically falsifiable claims about the unobserved but observable outcomes of events. Projections do not. Being unfalsifiable, projections lie outside science.
___________
Fair point Terry, but either way (prediction or projection), the ultimate conclusion is the same:
The climate models produce nonsense, and yet they are being used in attribution studies to falsely suggest that humanity is the primary cause of the global warming, and to justify the squandering of trillions of dollars on worthless measures to fight this fictitious catastrophic human-made global warming.
In summary the climate models are mere “projections” when one correctly points out their dismal predictive track record, but they are also portrayed as robust “predictions” when they serve the cause of global warming alarmism.
In fact, it is the warming alarmists themselves who have actively engaged in this duplicity.
Allan MacRae:
The authors of the IPCC's periodic assessment reports have been repeatedly guilty of using the ambiguity of reference by terms to the associated ideas as a weapon in attempting to achieve political or financial ends that are favorable to them. Their technique is to insert fallacies based on ambiguity of reference into their arguments. Using ambiguity of reference in this way, one can compose an argument that appears to be true when it is false or unproved. Confusion of "projection" with "prediction" serves this end. So does confusion of "evaluation" with "validation." (Though IPCC climate models are not statistically "validated," they are "evaluated.")
Ambiguity of reference by the word "science" plays a similar role. In the English vernacular, the word references the idea of "demonstrable knowledge" and the different idea of "the process that is operated by people calling themselves scientists." The IPCC's "scientists" do not produce demonstrable knowledge, but they do call themselves "scientists" and operate a process.
Climate “science” is not “demonstrable knowledge,” IPCC “projections” are not “predictions” and IPCC “evaluation” is not “validation” but the IPCC has led its dupes to believe the opposite. Through this technique, IPCC authors have duped the people of the world into parting with a great deal of money; currently, the IPCC is preparing them to part with even more.
Well, it is obvious from this post that clouds are still not understood, so how can any model produced work without this understanding? There are many other inputs to climate, many extraterrestrial, that are little understood, and some we have yet to discover. If we knew all this, and had a computer big enough to handle the billions of bits of information fast enough, then a model might work.
My experience of clouds is that they have a cooling effect and negative feedback. Unfortunately, belief in the GHG theory forces the opposite line of thought.
The CMIP5 models for the upcoming AR5 have gone from bad to worse at simulating global sea surface temperatures.
Here’s an excerpt from Chapter 5.5 of my upcoming book Who Turned on the Heat? The Unsuspected Global Warming Culprit, El Niño-Southern Oscillation:
We've seen that sea surface temperatures for a major portion (33%) of the global oceans have not warmed in 30 years. This contradicts the hypothesis of anthropogenic global warming. The models show the East Pacific Ocean should have warmed 0.42 to 0.45 deg C during the satellite era, but it hasn't warmed at all.
On the other hand, if we look at the global sea surface temperature record for the past 30 years, we can see that they have warmed. The IPCC’s climate models indicate sea surface temperatures globally should have warmed and they have. See Figure 5-5. Note, however, based on the linear trends the older climate models have overestimated the warming by 73% and the newer models used in the upcoming IPCC AR5 have overshot the mark by 94%. In simple words, the IPCC’s models have gone from terrible to worse-than-that. The recent models estimate a warming rate for the global oceans that’s almost twice the actual warming. In other words, the sea surface temperatures of the global oceans are not responding to the increase in manmade greenhouse gases as the climate models say they should have. Not a good sign for any projections of future climate the IPCC may make.
http://i45.tinypic.com/hs9bbr.jpg
We know it’s all about the money. Not Science.
http://www.bloomberg.com/news/2011-08-08/climatecare-buys-itself-out-from-jpmorgan-targets-u-n-offsets.html
JP Morgan also ran the Oil for Food Program and was involved in this:
http://dailybail.com/home/report-senior-atf-agent-in-charge-of-fast-n-furious-gun-runn.html
http://www.telegraph.co.uk/news/politics/tony-blair/8772418/Tony-Blair-visited-Libya-to-lobby-for-JP-Morgan.html
NOTE: Tony Blair was partners with Al Gore in the UK based Carbon Exchange.
Jim D says:
August 23, 2012 at 10:10 pm
The statisticians here should have a field day deconstructing this one. If he wanted a worse error, he could have propagated it monthly instead of yearly. That would have been more impressive. How about daily? Can anyone figure out what he did wrong? Or, if the error is only 12% for a year, it must be tiny for a day to only grow to 12% for a year. Just needs some more thought.
I am sure that 'statisticians would have a field day', but then there has to be understanding of what the figures actually mean. A tiny variance in the cloud cover at the tropics will have a far larger effect than a similar variance in temperate or polar latitudes. A tiny variance during the day will have a different effect than a tiny variance during the night. High-level cirrus has a different effect than low-level stratus. Convective clouds are an indication of heat being carried out of the atmosphere; stratus are not. But statisticians would deal only in numbers, not understanding the impact of the qualitative differences on the quantitative. Indeed, by reducing the errors to just a single percentage, vast amounts of information have been discarded. It would be interesting to know which clouds the models fail to 'predict/project/extrapolate' and where on the globe those models fail, as that has a huge effect on the probability of a closer-to-reality result. But statisticians playing with averaged numbers will not have that information, as it has been discarded. So yes, it needs more thought, and less just playing with numbers.
Personally, I would like to see strict validation of these models with repeated runs from different starting points and no change in parameterization between those runs: starting at 1850, then at 1900, then at 1950, then at 2000, and comparing their results to the actual analyzed atmosphere (including actual cloud cover and type), maximum and minimum temperatures and humidities, and sea temperatures, perhaps resolved at one degree of latitude and longitude. If a model fails to meet the required 'skill' level in its prediction, then it should be discarded and further funding to that modeling group should cease. If the various research groups knew that they had a sufficiently challenging meteorological skill/validation bar to cross, with funding lost on failure, perhaps they would improve more than this article suggests they have.
Remember the results of these models are trumpeted throughout the world by the UN and used to kill major industries and keep 3rd world countries in fuel poverty – we aren’t talking about a simple post-doc project.
Thank you for spelling this out.