Global Warming Slowdown: The View from Space
by Roy W. Spencer, Ph.D.
Since the slowdown in surface warming over the last 15 years has been a popular topic recently, I thought I would show results for the lower tropospheric temperature (LT) compared to climate models calculated over the same atmospheric layers the satellites sense.
Courtesy of John Christy, and based upon data from the KNMI Climate Explorer, below is a comparison of 44 climate models versus the UAH and RSS satellite observations for global lower tropospheric temperature variations, for the period 1979–2012 from the satellites, and for 1975–2025 for the models:
Clearly, there is increasing divergence over the years between the satellite observations (UAH, RSS) and the models. The reasons for the disagreement are not obvious, since there are at least a few possibilities:
Read the rest here at Dr. Spencer’s blog: http://www.drroyspencer.com/2013/04/global-warming-slowdown-the-view-from-space/

steinarmidtskogen says:
April 18, 2013 at 1:00 am
The standard response to the recent lack of atmospheric warming seems to be that global warming hasn’t paused, because the ocean heat content continues to rise. That may be true, but what about the possibility that the signal in the ocean lags behind the air, because change in the oceans is slower?
Surely the idea of ‘heat going into the oceans’ is at odds with the ‘Anthropogenic Global Warming’ hypothesis and in any case not possible?
AGW Hypothesis: The raised level of CO2 in the troposphere will lead to outgoing longwave radiation being ‘trapped’ (actually scattered) and thus raise the heat of the troposphere. This in itself is not enough to lead to warming, but the extra warmth in the atmosphere will lead to accelerated evaporation from the oceans, and the extra water vapor will trap more heat, leading to more temperature rise in the lower troposphere, and so on. So the entire AGW hypothesis rests on evaporating more water from the oceans.
The effect of more evaporation from the oceans will be to cool the oceans, not warm them, due to the latent heat of evaporation being taken up by each departing molecule. If the AGW claim is that there is NO extra evaporation of water vapor, then the base AGW hypothesis is no longer supportable.
Downwelling Infrared: Another claim of AGW is that some of the infrared ‘scattered’ by the CO2 and clouds comes back down to the surface (so-called downwelling infrared). This is supposed to ‘warm’ the surface, or reduce the rate of surface cooling. Again, with any damp surface – probably 90% of the Earth’s surface – this is not true. With any water surface or wet surface, infrared cannot penetrate more than a few molecules deep. These top surface molecules will be excited and will depart, taking the latent heat of evaporation with them. If there is a warm atmosphere, this will increase the rate of evaporation, since warm air can hold more water vapor than cool air; if there is a wind, the rate of evaporation will increase further, as the locally high water vapor pressure will be reduced by the incoming drier air. This is NOT rocket science. Everyone blows over a hot drink to cool it. Or, if you want, get your hands wet, shake the spare water off so your hands are just damp, and put them under a hot-air drier – your hands feel cold until they are dry, due to the latent heat of evaporation of the water on your skin. This is the case even with a hair drier set at max heat, with red-hot warming coils, held just above your wet palm.
The effect of infrared and hot air on damp surfaces, such as most land and the oceans, will be to cool the surfaces, not warm them, due to the latent heat of evaporation being taken up by each departing molecule. Only dry surfaces will get hotter from infrared and warm winds.
There are only two ways to warm the ocean.
1. By sensible heat conduction, for example from hot shoreline rocks or from geothermal and volcanic action; this is minor.
2. By solar shortwave radiation penetrating down into the top tens of meters of the oceans and being converted to heat. This is the major heat source.
Infrared and warm winds cool the oceans and wet surfaces; this is simply demonstrable kindergarten physics.
Dr Trenberth, if there is missing heat in the oceans, then the only thing that put it there in sufficient quantity to matter is the shortwave radiation from the Sun.
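To put rough numbers on the latent-heat point, here is a minimal back-of-envelope sketch in Python; the global-mean evaporation rate of roughly 1 m of water per year is an assumed round figure, not a value taken from the post:

```python
# Back-of-envelope: how much heat does evaporation carry away from the ocean surface?
# ASSUMPTION: a global-mean evaporation rate of roughly 1 m of water per year.

RHO_WATER = 1000.0          # kg/m^3, density of liquid water
L_VAP = 2.45e6              # J/kg, latent heat of vaporization near sea-surface temperatures
SECONDS_PER_YEAR = 3.156e7

evap_rate_m_per_yr = 1.0    # assumed global-mean evaporation rate (order of magnitude)

# Mass of water leaving each square metre of surface per second, and the heat it takes with it.
mass_flux = evap_rate_m_per_yr * RHO_WATER / SECONDS_PER_YEAR   # kg m^-2 s^-1
latent_heat_flux = mass_flux * L_VAP                            # W m^-2

print(f"Latent heat flux: {latent_heat_flux:.0f} W/m^2")        # roughly 80 W/m^2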
Moderator, sorry, looks like I missed closing a bold there.
[Well, yes, but where do you want the “unbold” to be placed? Mod]
I see some comments to the effect that some of the models may be forecasting from the 1990’s. I believe it is the case that these models only forecast from 2004–2006 onwards. Everything before that is hindcast, and even much of that appears to be wrong in the early 2000’s.
All of the model runs must have been done after Mt Pinatubo. So everything up to 1995 or so is hindcasting.
What a clever marketing trick by the warmist tricksters.
Building upon what others have said, 39 of the 44 models very accurately capture the sharp downward spike in the early 1990s, and a similar number capture the smaller downward spike in the early 1980s.
Either the modellers were very skilful or very lucky, or they calibrated their models against these events which had already occurred. My guess is that 39 of the models are hindcasting the event in the early 1990s.
Friends:
Several posters have commented on model training and the modelled climate sensitivity. This comment from richard verney (at April 18, 2013 at 2:19 am) is typical.
Hence, it seems sensible for me to yet again post the following information on WUWT.
None of the models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al.
http://www.pnas.org/content/early/2011/07/25/1018526108.full.pdf?with-ds=yes )
would make every climate model provide a mismatch of the global warming it hindcasts and the observed global warming for the twentieth century.
This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on
1. the assumed degree of forcings resulting from human activity that produce warming, and
2. the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.
More than a decade ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.
The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which was greater than was observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.
And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
(ref. Courtney RS, ‘An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre’, Energy & Environment, Volume 10, Number 5, pp. 491–502, September 1999).
More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
(ref. Kiehl JT, ‘Twentieth century climate model response and climate sensitivity’, GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.
He says in his paper:
And, importantly, Kiehl’s paper says:
And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.
Thanks to Bill Illis, Kiehl’s Figure 2 can be seen at
http://img36.imageshack.us/img36/8167/kiehl2007figure2.png
Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:
It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m^2 to 2.02 W/m^2
but
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^2 to -0.60 W/m^2.
In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.
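To see how that compensation works, here is a toy zero-dimensional energy-balance sketch in Python. It is not any published GCM or Kiehl’s method; every number in it (the forcing ramps, feedback parameters, mixed-layer heat capacity, and the 0.8 K target) is an illustrative assumption. Two ‘models’ with quite different sensitivities each get their own aerosol scaling tuned so that both reproduce the same century of warming:

```python
import numpy as np

# Toy zero-dimensional energy balance model:  C * dT/dt = F_ghg(t) + a * F_aero(t) - lam * T
# All numbers below are illustrative assumptions, not values from any published model.

C = 8.4e8                      # J m^-2 K^-1, roughly a 200 m ocean mixed layer
YEARS = 100
SEC_PER_YEAR = 3.156e7
t = np.arange(YEARS)

F_ghg = 2.5 * t / YEARS        # assumed greenhouse forcing ramping up to 2.5 W m^-2
F_aero = -1.0 * t / YEARS      # assumed aerosol forcing shape, scaled per "model" by factor a

def run(lam, a):
    """Integrate the toy model forward one year at a time; returns final warming in K."""
    T = 0.0
    for i in range(YEARS):
        dTdt = (F_ghg[i] + a * F_aero[i] - lam * T) / C
        T += dTdt * SEC_PER_YEAR
    return T

def tune_aerosol(lam, target=0.8):
    """Find the aerosol scaling that makes this 'model' match the target century warming."""
    scales = np.linspace(0.0, 2.0, 2001)
    warmings = np.array([run(lam, a) for a in scales])
    return scales[np.argmin(np.abs(warmings - target))]

for lam in (1.3, 0.8):         # two quite different feedback parameters (sensitivities)
    a = tune_aerosol(lam)
    print(f"lambda = {lam:.1f} W/m^2/K -> aerosol scaling {a:.2f}, "
          f"century warming {run(lam, a):.2f} K")
```

Both toy ‘models’ end up reporting essentially the same 0.8 K of warming, reached with very different sensitivities offset by very different aerosol scalings, which is the compensation Kiehl documents.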
So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.
This is demonstrated by the above graph which compares model projections with reality.
And it demolishes all claims concerning the use of model “ensembles” because average wrong is wrong.
Richard
All they have to do to correct their models to follow reality is to lower their assumed fudge factors, “CO2 sensitivity” and percentage anthropogenic contribution. Why not try zero sensitivity and 5% for a start? In this process they will likely find their missing heat.
For decades I listened to and read people blasting NASA and space exploration as a waste of money better spent on social programs, etc. I wonder if they are the same people that (in the EU alone, in about a decade) have spent 280 billion $$, and those in the UK that just keep’n on keep’n on digging the same hole and filling it with more money?
There are 44 climate models paid for by tax payers and the number correct is 0. 0 out of 44. This says a lot about probability science – the direction in which all science is heading.
@Golden – The lack of model fidelity likely says very little about probability science. In my opinion, it says a boat load more about probability practitioners.
Now what that graph tells me is this – natural selection. As the system they are trying to model is chaotic, any model contains a number of fudge factors, and can be made to show warming (catastrophic or not), a stable climate, or cooling, depending on the wish of the modeller.
So any model not conforming to the political views at the time will not be accepted or funded. Hence we have the “consensus”, and all of the “scientists” who followed each other produced – surprise, similar looking, but entirely wrong results.
What I anticipate happening, is someone at the very top of the Church of Global Warming, “coming out” with a confession as to how wrong they were, and how corrupt and unscientific the process was.
This will be deeply embarrassing for the “policy makers”, but the collapse of the movement is inevitable, and rats will run.
steinarmidtskogen says: “The standard response to the recent lack of atmospheric warming seems to be that global warming hasn’t paused, because the ocean heat content continues to rise. That may be true…”
And it may also be false. Ocean heat content data has to be adjusted in order for it to show continued warming:
http://bobtisdale.files.wordpress.com/2013/04/figure-12.png
Without the adjustments to the ARGO-era (2003 to present) data, the ocean heat content is also flat since 2003:
http://bobtisdale.files.wordpress.com/2013/04/figure-23.png
The above graphs are from the following post:
http://bobtisdale.wordpress.com/2013/04/17/a-different-perspective-on-trenberths-missing-heat-the-warming-of-the-global-oceans-0-to-2000-meters-in-deg-c/
In this modern age, true scientific research has been replaced by “statisticians forming consensuses”. For instance, the sixth or seventh downward projection on this Sunspot Cycle (24).
One can only hope that real, verifiable models will be produced after this debacle has been terminated. This will advance our understanding. As is obvious, “statistical models” only verify the data that created the statistics.
Real models are based, not on statistical data, but on underlying physical principles! The statistical data is used to verify the accuracy of the model. That is, hard data in, hard results out, verify to statistical data.
Having done brainwave studies along the auditory pathway, I’ve looked at thousands of runs. Wriggles I know.
This chart looks like two different parameters have been stuck together. Here is what I mean by that. Remove the actual temperature data and just look at the model runs. Notice the difference in the spread between earlier and later points on the chart. At a certain point the spread suddenly enlarges.
Here is the crux: That earlier section is tuned TO the actual observed (and yes adjusted) temperatures. Different models tune different dials, but the process is to monkey with the dials till you get a match to the known temperature. At some point in time you stop tuning and you let the model run under the tuned condition. You can’t touch the dials anymore to continue tuning to the known observation. In other words, you change the model at that point.
What this means is that the earlier part of the run uses one set of parameters (tuned to the observation), but the later part uses another (not tuned). Funny that the models don’t include that little gem on the graphs. That knee point is vital but it is not labeled. It should be. It is not included because?
It is not included because sticking two different constructions together, and not labeling it, is the norm these days. No wonder Micky did it too.
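To make that knee point concrete, here is a purely synthetic sketch in Python. Nothing in it comes from any real model output or temperature dataset; the trends, noise levels, and the 2005 cutoff are invented for illustration. Ten toy ‘models’ are made to hug the same observations up to the cutoff and then run on their own built-in trends, and the spread opens up immediately afterwards:

```python
import numpy as np

rng = np.random.default_rng(0)

# Purely synthetic sketch of "tune to observations, then run free".
years = np.arange(1979, 2036)
knee = 2005                                      # tuning stops here
hind_years = years[years <= knee]
fore_years = years[years > knee]

# Synthetic "observations" over the tuning period (illustrative numbers only).
obs = 0.012 * (hind_years - 1979) + rng.normal(0.0, 0.08, hind_years.size)

# Ten toy "models", each with its own built-in warming rate (its "sensitivity").
builtin_trends = np.linspace(0.015, 0.045, 10)   # K per year, assumed

end_of_hindcast, end_of_forecast = [], []
for trend in builtin_trends:
    # During tuning the model is adjusted until it hugs the observations,
    # so the hindcast differs from them only by a small residual.
    hindcast = obs + rng.normal(0.0, 0.03, hind_years.size)
    # After the knee the dials are frozen: the model's own trend takes over.
    forecast = hindcast[-1] + trend * (fore_years - knee)
    end_of_hindcast.append(hindcast[-1])
    end_of_forecast.append(forecast[-1])

print(f"spread across models at the knee (2005): {np.ptp(end_of_hindcast):.2f} K")
print(f"spread across models in 2035:            {np.ptp(end_of_forecast):.2f} K")
```

The point of the sketch is only that tight agreement before the knee tells you about the tuning, not about the skill of the models after it.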
Bottom Line: We have 2 sets of data:
Consensus Science, i.e. the 44 models, and
Actual satellite measurements (RSS & UAH).
Which data set we trust as representing reality tells us all we need to know about who we are.
BTW – excellent responses to this post!
Again and again, the GCMs prove themselves to be worthless, costing not only a lot of money but also a lot of societal regression.
richard verney says:
April 18, 2013 at 2:13 am
To put this in perspective, Dr Spencer needs to provide us with more information on the 44 models.
Richard,
To put your comment in perspective, go do the work for yourself. Dr. Spencer has given us an overview. If that isn’t good enough, invest your own time and give us your conclusions (with pertinent information, of course) when you have finished.
Too many commenters want everyone else to do the work. Google, Bing, et al. are still free. Have at it and be sure to share.
pbh
Moderator — Unbold after the “AGW Hypothesis” heading should fix it – thanks
I read from Spencer’s post that the black line is the average of the 44 models and is approximately what the IPCC projects. Observations are diverging from model projections at an alarming rate. When will honest scientists call this charade out? The longer they wait, the more embarrassing it will be for them when they finally have to concede that the theory is a dog’s dinner.
A number of the commenters could have dug a bit more deeply into the facts underlying Dr. Spencer’s graphic attributed to Dr. Christy. It is an update to that provided the House Energy and Power Committee in congressional testimony on 9/20/12, readily available via Google. HEPC figure 2.1 only included 38 ‘models’ from CMIP5, as opposed to this new figure with 44.
Without further publication of details, it is impossible to say what was chosen. It is possible to say some things generally. Details on the publicly archived CMIP5 runs are available at cmip-pcmdi.llnl.gov/cmip5. Google takes you there immediately.
From the summary information and the accompanying mandatory ‘design of experiments’, there are 29 submitting modeling groups, who have provided data from 61 models on 101 ‘experiments’. For example, NASA GISS provided 4 variants and NOAA GFDL provided 6.
Since the graph contains information out through 2035, it is possible to say the following with certainty from the published protocols. All the ‘experiments’ come from ‘group 1’, decadal hindcasts and predictions to 2035. These runs are intended to test the skill of the models. Group 1 initialization is expressly 2005 ‘actual’, although the choice of initial input condition details is still left to each modeling group (i.e. pick any aerosol fudge factor you want). These ‘experiments’ were obviously provided after hindcast tuning optimization at the discretion of the modelling group. So all these models were, for example, ‘tuned’ to Pinatubo and the 1998 ENSO. The modeled scenario is specifically RCP4.5, so rather than variations in CO2, the model differences reflect variations in sensitivity and other model factors.
It is readily apparent that all the models graphed fail the ‘skill’ test of temperature prediction through YE2012, less than one decade out from supposed ‘actual’ 2005 initial conditions. Dr. Christy made the same point in his Congressional testimony last September. It would seem that with appropriate future disclosure and attribution, Dr. Christy is fixing to falsify the entire GCM basis for anything in AR5.
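For anyone who wants to run that skill check themselves, here is a minimal sketch of the comparison. The file names and the two-column layout (decimal year, anomaly in K) are assumptions, stand-ins for whatever you export from the KNMI Climate Explorer, not files that actually exist:

```python
import numpy as np

def linear_trend_per_decade(years, values):
    """Ordinary least-squares trend, returned in K per decade."""
    slope, _ = np.polyfit(years, values, 1)
    return slope * 10.0

def load_series(path):
    """Hypothetical layout: column 0 = decimal year, column 1 = temperature anomaly (K)."""
    data = np.loadtxt(path)
    return data[:, 0], data[:, 1]

# Placeholder file names -- substitute your own exports from the KNMI Climate Explorer.
years_obs, uah = load_series("uah_lt_global.txt")
years_mod, ensemble_mean = load_series("cmip5_rcp45_lt_ensemble_mean.txt")

# Compare trends over the post-initialization period (2005 onward).
mask_obs = years_obs >= 2005.0
mask_mod = years_mod >= 2005.0
print(f"UAH trend since 2005:        {linear_trend_per_decade(years_obs[mask_obs], uah[mask_obs]):+.2f} K/decade")
print(f"Model-mean trend since 2005: {linear_trend_per_decade(years_mod[mask_mod], ensemble_mean[mask_mod]):+.2f} K/decade")
```

Whether the ensemble passes or fails is then just a matter of comparing the two printed trends against their uncertainties.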
Rud Istvan:
At April 18, 2013 at 9:09 am – in your useful and informative post – you say
As you say, they “could have dug a bit more deeply”. But it would have been wasted effort.
All they needed to do to obtain the information they wanted for themselves and for others was to say what they wanted to know. After that, you or some other knowledgeable person would probably tell them. And you did.
Also, if the information thus obtained were debatable then other knowledgeable people would debate it. Personal searches would be unlikely to obtain such a range of views.
Thus, those who stated what they wanted to know obtained more reliable information – and with less effort – than having “dug more deeply” for themselves.
This sharing of knowledge and opinions is why WUWT threads exist, is it not?
Richard
And Hansen/Schmidt still say that the models and reality are a good match.
Hmmm. Like my opinion of myself and that of my ex, I suppose: they match where they cross.
Pamela Gray says: April 18, 2013 at 6:51 am
You raise an interesting point. Are you suggesting that (the temperature profile patterns look like) the historical part comes from “model” runs that were tweaked for observation, but then the predictive parts are run on models WITHOUT the observation-induced tweaks?
Most of us thought that the models were MODIFIED to account for observation, and then the MODIFIED model was run for post-observation time. Is this not the case?
It does not matter how many models are used, they are all garbage because they don’t understand the biological nature of the atmosphere. Physical models come from physics majors, mathematical models come from math majors. If you want to understand the atmosphere, ask a biologist. The biological carbon cycle has been transforming the planet for over a billion years. Biology controls the oceans, biology controls the land surface. Biology controls the atmosphere.
I’d like to see a physicist produce a model of an elephant.
44 physical models by physicists are no more useful than 4000 models by 4000 monkeys.
Carbon dioxide does not accumulate in the atmosphere, it is a gas phase component of biological origin. The geologic/abiotic side is only 1/2 the story, add biology and you will have the first sane model.
You want a realistic model of the planet?? Add biology.
Nice posts by Peter Miller, Courtney, IanW, Astley and Istvan. The entire AGW story has no chance of being fully understood without biology.
Can we start saving some spending by immediately defunding every model that is consistently above the average?
How can these modelers not see and compare their results and make the needed parameter changes? We recently saw NASA data on real water vapor, which dropped during rising temperatures in the 1990’s. Have they not made the needed changes?
Would anyone here get on the initial flight on a plane only designed by one of these models?