Temperature models vs temperature reality in the lower troposphere

Global Warming Slowdown: The View from Space

by Roy W. Spencer, Ph.D.

Since the slowdown in surface warming over the last 15 years has been a popular topic recently, I thought I would show results for the lower tropospheric temperature (LT) compared to climate models calculated over the same atmospheric layers the satellites sense.

Courtesy of John Christy, and based upon data from the KNMI Climate Explorer, below is a comparison of 44 climate models versus the UAH and RSS satellite observations for global lower tropospheric temperature variations, for the period 1979–2012 from the satellites and 1975–2025 for the models:


Clearly, there is increasing divergence over the years between the satellite observations (UAH, RSS) and the models. The reasons for the disagreement are not obvious, since there are at least a few possibilities:

Read the rest here at Dr. Spencer’s blog: http://www.drroyspencer.com/2013/04/global-warming-slowdown-the-view-from-space/

April 17, 2013 10:28 pm

“If Trenberth is correct …..”. With no sarc tag. Ha ha, you rascal.

April 17, 2013 10:31 pm

What’s the lonely model there at the bottom? The gee-all-this-is-nonsense-we-admit-it model?
(Yes, I clicked the link.)

April 17, 2013 10:34 pm

The models may range from “BAU” cases (upper curves) to “drastic emission curbing” cases (e.g., emission rate fixed at the level of 2000; lower curves). Am I right?

April 17, 2013 10:36 pm

There are 44 climate models? Don’t these people have anything better to do?

April 17, 2013 10:42 pm

Roha: They have to spend all that money on something. There’s only a limited number of conferences in exotic locations to go to.

April 17, 2013 10:43 pm

Could it be that the models are calibrated to pre-1975 data that has been massively adjusted downwards, thus creating a false trend? This would almost certainly produce just the effect we are seeing, even if the models were reasonable in other ways.


April 17, 2013 10:47 pm

Roy Spencer says,
Additional evidence for lower climate sensitivity in the above plot is the observed response to the 1991 Pinatubo eruption: the temporary temperature dip in 1992-93, and subsequent recovery, is weaker in the observations than in the models. This is exactly what would be predicted with lower climate sensitivity.
The Pinatubo response is IMO the best single piece of evidence we have that the CO2 sensitivity in the models is 2 to 3 times too high.
I await the next major volcanic eruption with interest.

April 17, 2013 11:47 pm

“What if climate change appears to be just mainly a multidecadal natural fluctuation? They’ll kill us probably!” /from Climategate e-mails/


April 18, 2013 12:34 am

How come a model attributing most late 20th century warming to the sun isn’t included? It’s the only one that would work.

April 18, 2013 12:58 am

“The reasons for the disagreement are not obvious, since there are at least a few possibilities … ”
Dr Spencer is being too kind; we know why there is extreme divergence and "worse than we thought": it’s the bl88dy cheating, isn’t it? 😉

April 18, 2013 1:00 am

The standard response to the recent lack of atmospheric warming seems to be that global warming hasn’t paused, because the ocean heat content continues to rise. That may be true, but what about the possibility that the signal in the ocean lags behind the air, because change in the oceans is slower?

April 18, 2013 1:20 am

“Clearly, there is increasing divergence over the years”
That’s a bit of an understatement really, isn’t it? The only reason these models show any point of convergence with reality is that the climate modellers are at least forced to start off from the same point as reality.
Truth is that as soon as one year went by, the models diverged from measurements. Most of those models have never been on trend from year 1. It has simply become more obvious over time that even the longer-term trends in the models are wrong.
I’m guessing that the two models that do seem to be on trend are part of a group of models based on the function f(whatever the satellites say the climate is + a hockey stick after today’s date).

April 18, 2013 1:30 am

RoHa says:
April 17, 2013 at 10:36 pm
There are 44 climate models? Don’t these people have anything better to do?

Correction: There are 44 failed climate models. And no, their full-time job is money grubbing.

April 18, 2013 1:31 am

Can someone put a vertical line on this graph, delineating the forecasting from the hindcasting? I am presuming that where the graphs mimic those ups and downs in the ’80s is all hindcasting.


April 18, 2013 1:36 am

If the models are diverging from reality then CHANGE THE MODELS. If this means changing the theory of the GHE then all well and good. That theory was rubbish anyway; it violates the laws of thermodynamics, and that is enough for me.

Peter Miller
April 18, 2013 1:44 am

The Earth/Gaia must have its own natural thermostat. Whether it be white cloud tops, Trenberth’s explanation of deep ocean burial, or whatever, it does not matter.
The impact of CO2 on our planet’s climate has been grossly exaggerated by alarmists. Whether this was done deliberately or not will be up to history to judge. Computer climate models clearly suffer from a mix of one or more of the following: a) GIGO, b) pre-determined conclusions, c) an inability to model either climate ‘chaos’ or as yet unknown factors affecting climate, d) input data manipulation, and e) Mannian-style statistical analysis.
The problem is the Global Warming Gravy Train has now become so huge and far too many politicians have signed up to solving the non-problem of CAGW that it will take many years to dismantle this pointless, expensive industry of public deception.
To paraphrase a great man’s comments: “Never in the field of human scientific endeavour have so many been duped by so few.”

Bob Highland
April 18, 2013 1:44 am

In normal science, when your prognostications don’t match your observations it is surely traditional to go back to the drawing board and revise your assumptions about the processes in play until there is a match, which reconciles observed data and model up to the present and then subsequently has skill in predicting the future.
How can these people, 44 different groups of people for chrissake, sleep at night knowing that their efforts continue to bear rotten fruit? When on average they are wrong by a factor of at least 2, and in some cases up to 4 or 5? In private enterprise, that’s enough to get you fired: but it seems in the peculiar post-normal world of climatology it only signals the need for more funds to be thrown at the problem and Nobel prizes to be awarded for commitment to erroneous science in the face of overwhelming evidence.
If one is allowed to paraphrase, it’s a travesty, that’s what it is!

richard verney
April 18, 2013 2:13 am

To put this in perspective, Dr Spencer needs to provide us with more information on the 44 models.
In particular, we need to know what assumptions (especially those relating to climate sensitivity, and those pertaining to future emission scenarios) were programmed into the models showing the most divergence, and those showing the least divergence from reality.
Whilst all models diverged from reality at an early stage, there are 2 models which are closer to reality, namely the bottom red plot and the bottom greeny/grey plot. The greeny/grey plot as from 2012 shows a sharp upward trend compared to that of the bottom red plot, although by about 2025, these again meet.
I am particularly intrigued to know the assumptions behind those 2 models. If they both assume the scenario C position (i.e., an immediate halt to further CO2 emissions), then since this did not occur, it would appear that CO2 has little effect on temperature, i.e., whether man emits it or not.
If the models which are running hot have a high sensitivity to CO2, then given the divergence, it would appear that CO2 sensitivity is small.
It is clear from the above that we do not understand the physics of the atmosphere and the oceans, and/or we are unable to model the physics, such that all models are, for all practical purposes, useless. All 44 models are wrong, and their only use is to demonstrate that our lack of knowledge and understanding is such that we are very far from being in a position to predict what will occur in the coming decades, let alone a century out.
As others have said, it should be back to the drawing board both with the underlying GHE conjecture, and with the modelling thereof. In any other scientific discipline it would be.

richard verney
April 18, 2013 2:19 am

ralfellis says:
April 18, 2013 at 1:31 am
None of these models came into existence before 1980, so it follows that pre-1980 plots are hindcast. Whether that was part of the validation process, I do not know.
It would be interesting to know the date when each model first saw the light of day, since I suspect that it was not before the late 1980s (at the earliest, with some seeing the light of day for the first time in the 1990s). In which case, right from the get-go, the hindcasting was poor and nearly all models were demonstrating that they were programmed hot. This should have been sufficient to have strangled them at birth.

April 18, 2013 2:47 am

In approx. 30 years, about 0.3 deg C??

William Astley
April 18, 2013 2:49 am

The lack of warming for 16 years is only one of the fundamental issues with the extreme AGW hypothesis. If an idea, a theory, is repeated often enough, it is natural to assume that the theory is fundamentally correct (i.e., in the case of this problem, that the scientific question is what the sensitivity to the forcing is, as opposed to the magnitude of the forcing itself without feedbacks). What has been ignored is that there are multiple periods in the paleo record when planetary temperature does not correlate with atmospheric CO2. How can that be physically possible? What is the physical explanation for past periods when there was no correlation between atmospheric CO2 levels and planetary temperature?
For example, the following is the Greenland Ice Sheet temperature Vs atmospheric CO2 for the last 11,000 years, determined by the analysis of ice cores. The analysis shows the Greenland Ice sheet gradually becomes colder and experiences the Dansgaard-Oeschger (D-O) warming and cooling cycles (1450 year cycle plus or minus 500 years) and atmospheric CO2 gradually increases.
Greenland ice temperature, last 11,000 years determined from ice core analysis, Richard Alley’s paper.
As the Greenland Ice Sheet observed temperature is over 1000s of years, it appears that it is not possible to explain the lack of correlation with changes in prevailing winds or ocean currents. The Greenland Ice Sheet temperature is not disconnected from planetary temperature. Greenland is a large land mass, and the ice cores were taken in the center of Greenland, far from the coast.
The complete lack of correlation between atmospheric CO2 levels and temperature is not limited to the current interglacial, the Holocene. There are ice epochs of millions of years in the geological record when atmospheric CO2 was high, and periods of millions of years when atmospheric CO2 was low and the planet was warm.
The observational data and basic high-level analysis indicate that there may be something fundamental that has been missed, perhaps an assumption about atmospheric processes/conditions that is incorrect.
The AGW theory predicted – this is a fundamental logical pillar of the theory; if the ‘prediction’ does not occur, the theory is invalid at a mechanism level – that the AGW warming should be greatest in the tropics, as this is the region of the planet where the largest amount of long-wave radiation is emitted into space. As the lower atmosphere is saturated due to the overlap of water vapour and CO2, the ‘theory’ predicted that there would be tropical tropospheric warming at roughly 10 km above the surface of the planet. The tropospheric warming at roughly 10 km would in turn warm the tropics by long-wave radiation.
There has been no tropical warming in the 20th century, and there has been no tropical tropospheric warming at roughly 10 km. These two observations are logically connected: for there to be warming in the tropics, there would have needed to be tropical tropospheric warming. The fact that there was no warming in the tropics supports the assertion that the lack of tropical tropospheric warming is not a measurement issue. The warming that has occurred in the 20th century is in high-latitude regions.
It is interesting that that there are cycles of past warming in high latitude regions (1450 years plus or minus 500 years, Dansgaard-Oeschger cycle) followed by cooling in high latitude regions in the paleo record. The D-O cycle is clearly evident in the Greenland sheet core temperature analysis.
A comparison of tropical temperature trends with model predictions
We examine tropospheric temperature trends of 67 runs from 22 ‘Climate of the 20th Century’ model simulations and try to reconcile them with the best available updated observations (in the tropics during the satellite era). Model results and observed temperature trends are in disagreement in most of the tropical troposphere, being separated by more than twice the uncertainty of the model mean. In layers near 5 km, the modelled trend is 100 to 300% higher than observed, and, above 8 km, modelled and observed trends have opposite signs. These conclusions contrast strongly with those of recent publications based on essentially the same data.
On the Observational Determination of Climate Sensitivity and Its Implications
Richard S. Lindzen and Yong-Sang Choi
We estimate climate sensitivity from observations, using the deseasonalized fluctuations in sea surface temperatures (SSTs) and the concurrent fluctuations in the top-of-atmosphere (TOA) outgoing radiation from the ERBE (1985–1999) and CERES (2000–2008) satellite instruments. Distinct periods of warming and cooling in the SSTs were used to evaluate feedbacks. An earlier study (Lindzen and Choi, 2009) was subject to significant criticisms. The present paper is an expansion of the earlier paper where the various criticisms are taken into account. … We argue that feedbacks are largely concentrated in the tropics, and the tropical feedbacks can be adjusted to account for their impact on the globe as a whole. Indeed, we show that including all CERES data (not just from the tropics) leads to results similar to what are obtained for the tropics alone – though with more noise. We again find that the outgoing radiation resulting from SST fluctuations exceeds the zero-feedback response, thus implying negative feedback. In contrast to this, the calculated TOA outgoing radiation fluxes from 11 atmospheric models forced by the observed SST are less than the zero-feedback response, consistent with the positive feedbacks that characterize these models. … The heart of the global warming issue is so-called greenhouse warming. This refers to the fact that the earth balances the heat received from the sun (mostly in the visible spectrum) by radiating in the infrared portion of the spectrum back to space. Gases that are relatively transparent to visible light but strongly absorbent in the infrared (greenhouse gases) interfere with the cooling of the planet, forcing it to become warmer in order to emit sufficient infrared radiation to balance the net incoming sunlight (Lindzen, 1999). By net incoming sunlight, we mean that portion of the sun’s radiation that is not reflected back to space by clouds, aerosols and the earth’s surface. …
However, warming from a doubling of CO2 would only be about 1C (based on simple calculations where the radiation altitude and the Planck temperature depend on wavelength in accordance with the attenuation coefficients of well mixed CO2 molecules; a doubling of any concentration in ppmv produces the same warming because of the logarithmic dependence of CO2’s absorption on the amount of CO2) (IPCC, 2007). This modest warming is much less than current climate models suggest for a doubling of CO2. Models predict warming of from 1.5C to 5C and even more for a doubling of CO2. Model predictions depend on the ‘feedback’ within models from the more important greenhouse substances, water vapor and clouds. Within all current climate models, water vapor increases with increasing temperature so as to further inhibit infrared cooling. Clouds also change so that their visible reflectivity decreases, causing increased solar absorption and warming of the earth….
1. A lack of correlation between atmospheric CO2 and planetary temperature does not validate the Dragon Slayers’ train of thought. That line of thought is off base; the ‘laws’ of physics still apply. It appears (or let’s assume, to develop a straw-man hypothesis) that there is a fundamental assumption about the atmosphere which is not correct, and which explains what appears to be observed saturation of the mechanism. Perhaps a clue to what is incorrect is that there needs to be a mechanism explanation for the post-1996 reduction in planetary clouds. (The cloud anomaly also needs an explanation; perhaps the mechanism explanation for the reduction in planetary clouds can explain both anomalies.)
2. It appears negative feedback primarily in the tropics cannot explain the lack of warming in the tropics and the lack of tropical tropospheric warming: the forcing should still have occurred, with the warming merely reduced by an increase in clouds in the tropics. There is, however, no warming in the tropics. It appears the warming in the 20th century was caused by a reduction in planetary clouds rather than forcing from CO2.
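[The logarithmic CO2 relation quoted from Lindzen above can be sketched numerically. A minimal illustration, assuming the textbook simplified forcing formula: the 5.35 coefficient is the standard Myhre et al. (1998) fit, and the 0.3 K per W/m^2 no-feedback sensitivity is an approximate Planck-only response; neither number comes from this thread.

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing (W/m^2) for CO2 rising from c0_ppm to c_ppm,
    using the simplified logarithmic fit dF = 5.35 * ln(C / C0)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

NO_FEEDBACK_SENSITIVITY = 0.3  # K per (W/m^2); rough Planck-only value

f2x = co2_forcing(560.0)  # doubling from the pre-industrial ~280 ppm
print(f"forcing for one doubling: {f2x:.2f} W/m^2")                   # ~3.7
print(f"no-feedback warming: {NO_FEEDBACK_SENSITIVITY * f2x:.1f} K")  # ~1.1

# The logarithm is why a doubling from any starting level yields the
# same forcing, as the quoted passage notes:
assert abs(co2_forcing(800.0, 400.0) - f2x) < 1e-9
```

The ~1.1 K result matches the “about 1C” no-feedback figure quoted above; everything beyond that depends on the disputed water-vapour and cloud feedbacks. – Ed.]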

just some guy
April 18, 2013 2:51 am

All 44 models are actually dead on accurate. Michael Mann found a skillful tree in Siberia to prove it.

Ian W
April 18, 2013 3:31 am

steinarmidtskogen says:
April 18, 2013 at 1:00 am
The standard response to the recent lack of atmospheric warming seems to be that global warming hasn’t paused, because the ocean heat content continues to rise. That may be true, but what about the possibility that the signal in the ocean lags behind the air, because change in the oceans is slower?

Surely the idea of ‘heat going into the oceans’ is at odds with the ‘Anthropogenic Global Warming’ hypothesis, and in any case not possible?
AGW hypothesis: the raised level of CO2 in the troposphere will lead to outgoing longwave radiation being ‘trapped’ (actually scattered) and thus raise the heat of the troposphere. This in itself is not enough to lead to warming, but the extra warmth in the atmosphere will lead to accelerated evaporation from the oceans, and the extra water vapor will trap more heat, leading to more temperature rise in the lower troposphere, etc. So the entire AGW hypothesis rests on evaporating more water from the oceans.
The effect of more evaporation from the oceans will be to cool the oceans not warm them due to the latent heat of evaporation being taken up by each departing molecule. If the AGW claim is that there is NO extra evaporation of water vapor the base AGW hypothesis is no longer supportable.
‘Downwelling infrared’. Another claim of AGW is that some of the infrared ‘scattered’ by the CO2 and clouds comes back down to the surface (so-called downwelling infrared). This is supposed to ‘warm’ the surface or reduce the rate of surface cooling. Again, with any damp surface – probably 90% of the Earth’s surface – this is not true. With any water surface or wet surface, infrared cannot penetrate more than a few molecules. These top surface molecules will be excited and depart, taking the latent heat of evaporation with them. If there is a warm atmosphere then this will increase the rate of evaporation, as more water vapor can be held by warm air than by cool air; if there is a wind then the rate of evaporation will increase further, as the locally high water vapor pressure will be reduced by the incoming drier air. This is NOT rocket science. Everyone blows over a hot drink to cool it. Or, if you want: get your hands wet, shake the spare water off so your hands are just damp, now put them under a hot-air drier – your hands feel cold until they are dry, due to the latent heat of evaporation of water on your skin. This is the case even with a hair drier set at max heat, with red-hot warming coils, held just above your wet palm.
The effect of infrared and hot air on damp surfaces such as most land and the oceans will be to cool the surfaces, not warm them, due to the latent heat of evaporation being taken up by each departing molecule. Only dry surfaces will get hotter from infrared and warm winds.
There are only two ways to warm the ocean:
1. By sensible heat conduction, for example from hot shoreline rocks or geothermal and volcanic action; this is minor.
2. By solar shortwave radiation penetrating down into the top tens of meters of the oceans and being converted to heat. This is the major heat source.
Infrared and warm winds cool the oceans and wet surfaces; this is simply demonstrable kindergarten physics.
Dr Trenberth, if there is missing heat in the oceans then the only thing that put it there in sufficient quantity to matter is the short wave radiation from the Sun.


John B
April 18, 2013 3:39 am

I see some comments to the effect that some of the models may be forecasting from the 1990s. I believe it is the case that these models only forecast from 2004–2006 onwards. Everything before that is hindcast, and even much of that appears to be wrong in the early 2000s.

April 18, 2013 4:19 am

All of the model runs must have been done after Mt Pinatubo. So everything up to 1995 or so is hindcasting.
What a clever marketing trick by the warmist tricksters.

Bill Thomson
April 18, 2013 4:21 am

Building upon what others have said, 39 of the 45 models very accurately capture the sharp downward spike in the early 1990s, and a similar number capture the smaller downward spike in the early 1980s.
Either the modellers were very skilful or very lucky, or they calibrated their models against these events which had already occurred. My guess is that 39 of the models are hindcasting the event in the early 1990s.

April 18, 2013 4:28 am

Several posters have commented on model training and the modelled climate sensitivity. This comment from richard verney (at April 18, 2013 at 2:19 am) is typical.

None of these models came into existence before 1980, so it follows that pre-1980 plots are hindcast. Whether that was part of the validation process, I do not know.

Hence, it seems sensible for me to yet again post the following information on WUWT.
None of the models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al.
http://www.pnas.org/content/early/2011/07/25/1018526108.full.pdf?with-ds=yes )
would make every climate model provide a mismatch of the global warming it hindcasts and the observed global warming for the twentieth century.
This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on:
(a) the assumed degree of forcings resulting from human activity that produce warming, and
(b) the assumed degree of anthropogenic aerosol cooling, input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.
More than a decade ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.
The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which was greater than was observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.
And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
(ref. Courtney RS An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).
More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
(ref. Kiehl JT, Twentieth century climate model response and climate sensitivity. GRL, vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.
He says in his paper:

One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 deg C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.
The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy.
Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available at http://www.nature.com/reports/climatechange, 2007) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.

And, importantly, Kiehl’s paper says:

These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.

And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.
Thanks to Bill Illis, Kiehl’s Figure 2 can be seen at
Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:

Figure 2. Total anthropogenic forcing (W/m^2) versus aerosol forcing (W/m^2) from nine fully coupled climate models and two energy balance models used to simulate the 20th century.

It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m^2 to 2.02 W/m^2;
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^2 to -0.60 W/m^2.
In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.
So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.
This is demonstrated by the above graph which compares model projections with reality.
And it demolishes all claims concerning the use of model “ensembles” because average wrong is wrong.
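[The compensation Kiehl describes can be illustrated with a toy calculation. All numbers below are hypothetical, chosen only to fall inside the ranges quoted above from Figure 2; they are not values from any actual model.

```python
# Toy illustration of the Kiehl (2007) compensation: two "models" whose
# sensitivities differ by a factor of two reproduce the same 20th-century
# warming, because each pairs its sensitivity with a different assumed
# aerosol cooling. All values are hypothetical, in W/m^2.

F_GHG = 2.20  # greenhouse-gas forcing, taken as similar across models

models = [
    # (label, sensitivity in K per (W/m^2), assumed aerosol forcing)
    ("high-sensitivity model", 1.0, -1.40),
    ("low-sensitivity model",  0.5, -0.60),
]

for label, sensitivity, f_aerosol in models:
    f_total = F_GHG + f_aerosol      # total anthropogenic forcing (as in Fig. 2)
    warming = sensitivity * f_total  # simulated 20th-century warming
    print(f"{label}: total forcing {f_total:.2f} W/m^2 -> warming {warming:.2f} K")

# Both lines report 0.80 K: diverse sensitivities, identical hindcast.
```

The stronger the assumed aerosol cooling, the higher the sensitivity a model can carry while still matching the observed record — which is the “fiddle factor” argument in a nutshell. – Ed.]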

April 18, 2013 4:56 am

All they have to do to correct their models to follow reality is to lower their assumed fudge factors, “CO2 sensitivity” and percentage anthropogenic contribution. Why not try zero sensitivity and 5% for a start? In this process they will likely find their missing heat.

tobias smit
April 18, 2013 5:18 am

For decades I listened to and read people blasting NASA and space exploration as a waste of money better spent on social programs, etc. I wonder if they are the same people that (in the EU alone, in about a decade) have spent 280 billion $$, and those in the UK that just keep’n on keep’n on digging the same hole and filling it with more money?

April 18, 2013 5:32 am

There are 44 climate models paid for by taxpayers, and the number correct is 0. 0 out of 44. This says a lot about probability science – the direction in which all science is heading.

Andy Wehrle
April 18, 2013 5:45 am

@Golden – The lack of model fidelity likely says very little about probability science. In my opinion, it says a boatload more about probability practitioners.

April 18, 2013 6:00 am

Now what that graph tells me is this: natural selection. As the system they are trying to model is chaotic, any model contains a number of fudge factors, and can be made to show warming (catastrophic or not), a stable climate, or cooling, depending on the wish of the modeller.
So any model not conforming to the political views of the time will not be accepted or funded. Hence we have the “consensus”, and all of the “scientists” who followed each other produced – surprise – similar-looking, but entirely wrong, results.
What I anticipate happening, is someone at the very top of the Church of Global Warming, “coming out” with a confession as to how wrong they were, and how corrupt and unscientific the process was.
This will be deeply embarrassing for the “policy makers”, but the collapse of the movement is inevitable, and rats will run.

April 18, 2013 6:14 am

steinarmidtskogen says: “The standard response to the recent lack of atmospheric warming seems to be that global warming hasn’t paused, because the ocean heat content continues to rise. That may be true…”
And it may also be false. Ocean heat content data has to be adjusted in order for it to show continued warming:
Without the adjustments to the ARGO-era (2003 to present) data, the ocean heat content is also flat since 2003:
The above graphs are from the following post:

Dr. Lurtz
April 18, 2013 6:31 am

In this modern age, true scientific research has been replaced by “statisticians forming consensuses”. For instance, the sixth or seventh downward projection on this Sunspot Cycle (24).
One can only hope that real, verifiable models will be produced after this debacle has been terminated. This will advance our understanding. As is obvious, “statistical models” only verify the data that created the statistics.
Real models are based, not on statistical data, but on underlying physical principles! The statistical data is used to verify the accuracy of the model. That is, hard data in, hard results out, verify to statistical data.

Pamela Gray
April 18, 2013 6:51 am

Having done brainwave studies along the auditory pathway, I’ve looked at thousands of runs. Wiggles I know.
This chart looks like two different parameters have been stuck together. Here is what I mean by that. Remove the actual temperature data and just look at the model runs. Notice the difference in the spread between earlier and later points on the chart. At a certain point the spread suddenly enlarges.
Here is the crux: That earlier section is tuned TO the actual observed (and yes adjusted) temperatures. Different models tune different dials, but the process is to monkey with the dials till you get a match to the known temperature. At some point in time you stop tuning and you let the model run under the tuned condition. You can’t touch the dials anymore to continue tuning to the known observation. In other words, you change the model at that point.
What this means is that the earlier part of the run uses one set of parameters (tuned to the observation), but the later part uses another (not tuned). Funny that the models don’t include that little gem on the graphs. That knee point is vital but it is not labeled. It should be. It is not included because?
It is not included because sticking two different constructions together, and not labeling it, is the norm these days. No wonder Micky did it too.
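Pamela’s tune-then-freeze description can be sketched as a toy Python script. Everything below is invented for illustration: a one-dial pretend “model”, fake “observations”, and a crude grid search standing in for real tuning – it is not how any actual GCM works, only a cartoon of the two-phase process she describes.

```python
# Toy illustration only: a one-dial "model" is tuned against hindcast
# "observations", then the dial is frozen and the model runs forward.
# Every number here is invented for demonstration.

def run_model(years, sensitivity):
    # Pretend model: warming is a linear trend scaled by one tunable dial.
    return [sensitivity * (y - years[0]) for y in years]

def tune(hindcast_years, observations):
    # Crude grid search standing in for real tuning: pick the dial
    # setting that minimizes squared error against the observations.
    best, best_err = 0.0, float("inf")
    for x in range(0, 3001):
        s = x / 1000
        sim = run_model(hindcast_years, s)
        err = sum((a - b) ** 2 for a, b in zip(sim, observations))
        if err < best_err:
            best, best_err = s, err
    return best

hindcast_years = list(range(1979, 2006))
obs = [0.012 * (y - 1979) for y in hindcast_years]   # fake "observations"

dial = tune(hindcast_years, obs)                     # phase 1: tuning allowed
forecast = run_model(list(range(2006, 2036)), dial)  # phase 2: dial glued in place
print(round(dial, 3))  # 0.012
```

The unlabeled “knee point” she mentions is the hand-off between the `tune(...)` call and the frozen-dial `run_model(...)` call.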

Barry Cullen
April 18, 2013 7:52 am

Bottom Line: We have 2 sets of data:
Consensus Science, i.e. the 44 models, and
Actual satellite measurements, (RSS & UAH).
Which data set we trust as representing reality tells us all we need to know about who we are.
BTW – excellent responses to this post!

April 18, 2013 8:39 am

Again and again, the GCMs prove themselves worthless, costing not only a lot of money but also causing a lot of societal regression.

McComber Boy
April 18, 2013 8:40 am

richard verney says:
April 18, 2013 at 2:13 am
To put this in perspective, Dr Spencer needs to provide us with more information on the 44 models.
To put your comment in perspective, go do the work for yourself. Dr. Spencer has given us an overview. If that isn’t good enough, invest your own time and give us your conclusions (with pertinent information, of course) when you have finished.
Too many commenters want everyone else to do the work. Google, Bing, et al. are still free. Have at it and be sure to share.

Ian W
April 18, 2013 8:52 am

Moderator — Unbold after the “AGW Hypothesis” heading should fix it – thanks

April 18, 2013 8:58 am

I read from Spencer’s post that the black line is the average of the 44 models and is approximately what the IPCC projects. Observations are diverging from model projections at an alarming rate. When will honest scientists call this charade out? The longer they wait, the more embarrassing it will be for them when they have to finally concede that the theory is a dog’s dinner.

Rud Istvan
April 18, 2013 9:09 am

A number of the commenters could have dug a bit more deeply into the facts underlying Dr. Spencer’s graphic attributed to Dr. Christy. It is an update to that provided to the House Energy and Power Committee in congressional testimony on 9/20/12, readily available via Google. HEPC figure 2.1 only included 38 ‘models’ from CMIP5, as opposed to this new figure with 44.
Without further publication of details, it is impossible to say what was chosen. It is possible to say some things generally. Details on publicly archived CMIP5 are available at cmip-pcmdi.llnl.gov/cmip5. Google takes you there immediately.
From the summary information and the accompanying mandatory ‘design of experiments’, there are 29 submitting modeling groups, who have provided data from 61 models on 101 ‘experiments’. For example, NASA GISS provided 4 variants and NOAA GFDL provided 6.
Since the graph contains information out through 2035, it is possible to say the following with certainty from the published protocols. All the ‘experiments’ come from ‘group 1’, decadal hindcasts and predictions to 2035. These runs are intended to test the skill of the models. Group 1 initialization is expressly 2005 ‘actual’, although the choice of initial input condition details is still left to each modeling group (i.e. pick any aerosol fudge factor you want). These ‘experiments’ were obviously provided after hindcast tuning optimization at the discretion of the modeling group. So all these models were, for example, ‘tuned’ to Pinatubo and the 1998 ENSO. The modeled scenario is specifically RCP4.5, so rather than variations in CO2, the model differences reflect variations in sensitivity and other model factors.
It is readily apparent that all the models graphed fail the ‘skill’ test of temperature prediction through YE2012, less than one decade out from supposed ‘actual’ 2005 initial conditions. Dr. Christy made the same point in his Congressional testimony last September. It would seem that with appropriate future disclosure and attribution, Dr. Christy is fixing to falsify the entire GCM basis for anything in AR5.
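As a back-of-envelope version of the skill test Rud describes, one can compare least-squares decadal trends for an ensemble mean and for observations over the same post-initialization window. The anomaly series below are invented placeholders, not actual CMIP5 or satellite values; only the trend-comparison mechanics are the point.

```python
# Sketch of the skill comparison described above: least-squares decadal
# trends for an ensemble mean versus observations over the same window.
# The anomaly series are invented placeholders, not real CMIP5 or
# satellite data.

def trend_per_decade(years, anomalies):
    # Ordinary least-squares slope, scaled to degrees C per decade.
    n = len(years)
    mean_y = sum(years) / n
    mean_a = sum(anomalies) / n
    num = sum((y - mean_y) * (a - mean_a) for y, a in zip(years, anomalies))
    den = sum((y - mean_y) ** 2 for y in years)
    return 10 * num / den

years = list(range(2005, 2013))                      # 2005 start, per the protocol
ensemble_mean = [0.025 * (y - 2005) for y in years]  # fake ~0.25 C/decade ensemble
observed = [0.005 * (y - 2005) for y in years]       # fake ~0.05 C/decade obs

print(round(trend_per_decade(years, ensemble_mean), 2))  # 0.25
print(round(trend_per_decade(years, observed), 2))       # 0.05
```

A gap of that size between predicted and observed trend, less than a decade out from the initialization date, is the kind of failed skill test the comment refers to.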

April 18, 2013 10:04 am

Rud Istvan:
At April 18, 2013 at 9:09 am – in your useful and informative post – you say

A number of the commenters could have dug a bit more deeply into the facts underlying Dr. Spencer’s graphic attributed to Dr. Christy.

As you say, they “could have dug a bit more deeply”. But it would have been wasted effort.
All they needed to do to obtain the information they wanted for themselves and for others was to say what they wanted to know. After that, you or some other knowledgeable person would probably tell them. And you did.
Also, if the information thus obtained were debatable then other knowledgeable people would debate it. Personal searches would be unlikely to obtain such a range of views.
Thus, those who stated what they wanted to know obtained more reliable information, with less effort, than if they had “dug more deeply” for themselves.
This sharing of knowledge and opinions is why WUWT threads exist, is it not?

April 18, 2013 10:40 am

And Hansen/Schmidt still say that the models and reality are a good match.
Hmmm. Like my opinion of myself and that of my ex, I suppose: they match where they cross.

April 18, 2013 10:47 am

Pamela Gray says: April 18, 2013 at 6:51 am
You raise an interesting point. Are you suggesting that (the temperature profile patterns look like) the historical part comes from “model” runs that were tweaked for observation, but the predictive parts are run on models WITHOUT the observation-induced tweaks?
Most of us thought that the models were MODIFIED to account for observation, and then the MODIFIED model was run for post-observation time. Is this not the case?

April 18, 2013 11:27 am

It does not matter how many models are used, they are all garbage because they don’t understand the biological nature of the atmosphere. Physical models come from physics majors, mathematical models come from math majors. If you want to understand the atmosphere, ask a biologist. The biological carbon cycle has been transforming the planet for over a billion years. Biology controls the oceans, biology controls the land surface. Biology controls the atmosphere.
I’d like to see a physicist produce a model of an elephant.
44 physical models by physicists are no more useful than 4000 models by 4000 monkeys.
Carbon dioxide does not accumulate in the atmosphere, it is a gas phase component of biological origin. The geologic/abiotic side is only 1/2 the story, add biology and you will have the first sane model.
You want a realistic model of the planet?? Add biology.
Nice posts by Peter Miller, Courtney, IanW, Astley and Istvan. The entire AGW story has no chance of being fully understood without biology.

DD More
April 18, 2013 11:43 am

Can we start saving some spending by immediately defunding every model that is consistently above the average?
How can these modelers not see and compare their results and make the needed parameter changes? We recently saw NASA data on real water vapor, which dropped during rising temperatures in the 1990s. Have they not made the needed changes?
Would anyone here get on the initial flight on a plane only designed by one of these models?

Rud Istvan
April 18, 2013 1:17 pm

Of course, and glad to oblige.
But you agree with my conclusions only because we agree in general. If I had been a warmist instead of only a lukewarmist, posting pretend data with opposite conclusions, there would have been a kerfuffle. That is why I strongly believe it obligatory to go check everything yourself, since in climate science no one can be trusted. Heck, I don’t even trust me…sometimes.
That is why the WUWT crowd-sourced weather stations project is so very important. Crowd-sourced facts – observations and pictures – put the lie to homogenization and show there is a definite UHI (or local HI, as some have advocated for the name) bias. By itself, it says little other than that the science isn’t settled. Add it to all the other ‘little’ observations – missing heat, no troposphere hotspot, sensitivity recalibration, this thread, ice at both poles – and one builds a strong case that CAGW is nonsense. But each ‘fact’ should be checked by everyone for themselves, which is easy these days.
Nullius in Verba is the motto of Newton’s Royal Society. Was good then, is good now.
Regards and thanks

Rud Istvan
April 18, 2013 1:25 pm

BW, some of the newest GCMs do attempt to incorporate the entire carbon cycle, which as you point out has a significant biological component (but not exclusively so on time scales of about 800 to 1000 years, thanks to Henry’s Law and Le Chatelier’s principle, both rooted in physical chemistry and ocean CO2 – and the reason Al Gore was so wrong about the Vostok ice core).
That said, any model not including biogenic sequestration is obviously flawed. That is (to the best of my knowledge and belief) all of AR4 and most of AR5.
Very good point, Sir. I merely attempt to make it more precise.
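For what it is worth, the Henry’s Law point can be put in numbers: the Henry’s law constant for CO2 in water falls as temperature rises, so a warming ocean holds less dissolved CO2 and tends to outgas it (CO2 lagging temperature in the ice cores rather than leading it). The coefficients below are approximate fresh-water literature values used for illustration only; real seawater carbonate chemistry is more involved.

```python
# Rough numerical illustration of the Henry's Law point above: the
# Henry's law constant for CO2 in water falls as temperature rises,
# so warmer water holds less dissolved CO2 at a given partial pressure.
# Coefficients are approximate fresh-water values, for illustration only.
import math

def henry_kH(T_kelvin):
    # van 't Hoff form: kH(T) = kH(298.15) * exp(C * (1/T - 1/298.15))
    kH_298 = 0.034  # mol/(L*atm) at 25 C, approximate literature value
    C = 2400        # K, approximate temperature coefficient for CO2
    return kH_298 * math.exp(C * (1.0 / T_kelvin - 1.0 / 298.15))

cold = henry_kH(278.15)   # a 5 C ocean
warm = henry_kH(288.15)   # a 15 C ocean
print(cold > warm)        # True: warmer water holds less dissolved CO2
```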

April 18, 2013 2:09 pm

Rud Istvan:
At April 18, 2013 at 1:17 pm you say to me

But you agree with my conclusions only because we agree in general. If I had been a warmist instead of only a lukewarmist, posting pretend data with opposite conclusions, there would have been a kerfuffle.

That is absolutely untrue and is deeply offensive!
For example, on another thread earlier today I made a post
which began saying

I write to ask a genuine question. This post is NOT ‘knocking copy’.
I argue that climate sensitivity is low and, therefore, if I were being prejudiced then I would be supporting this study by Troy Masters.
But my question is
Does anybody think the paper by Troy Masters has any credibility and, if so, why?
The reason for my question is simple and is as follows.


April 18, 2013 2:57 pm

Obviously the 44 models do not have much of a handle on the Earth’s climate. A model is simply a calculation made with certain assumptions. So who in fact did these 44 calculations, as a group, and did anybody make more accurate calculations with different assumptions, in which we could then place more faith?

Brian H
April 18, 2013 3:29 pm

These are 44 model runs; the actual number of models is around 17, IIRC.

April 18, 2013 6:19 pm

I think the reason for the discrepancy is obvious. The theory behind the models is wrong.

April 18, 2013 7:30 pm

kretchetov says:
April 18, 2013 at 6:00 am
Now what that graph tells me is this – natural selection. As the system they are trying to model is chaotic, any model contains a number of fudge factors and can be made to show warming (catastrophic or not), a stable climate, or cooling, depending on the wish of the modeller.
So any model not conforming to the political views of the time will not be accepted or funded. Hence we have the “consensus”, and all of the “scientists” who followed each other produced – surprise – similar-looking, but entirely wrong, results.

Gavin Schmidt has admitted the output of climate models is basically just a quantification of the opinions of the modellers.

Pamela Gray
April 18, 2013 9:44 pm

Once the models are tuned to match historical data, at a certain point the tweaking stops and the dials are glued in place. While tuning, some models put more CO2 fudge factor in, some put more ENSO fudge factor in, some put more dust (or lack thereof) in, some put more or less water vapor in, and some make other adjustments. They keep tweaking their favorite dials till they have a match they can live with. That’s why there are separate models: each group of scientists has its own favorite dials. Once they have a hindcasted match they glue the dials in place, so to speak, and let ’er run forward. So what I am saying is that there are actually two models hooked together at a certain point. One model allows tweaking, the other does not. But the two models appear to be the same model because there is no label identifying the point in time when the tweaking/tuning stopped.
What bothers me is that, first, it leaves one with the impression that they got it right from the beginning because they were so knowledgeable about how climate works. Second, it serves to make the discrepancy that builds the fault of the actual observations, not the models. We see that kind of thinking when someone decides that a reconstruction was not done right, so they re-measure and come up with something more to their liking.

April 19, 2013 7:43 am

Good point by Ian W on April 18, 2013 at 3:31 am.
Here is my layman take on CO2 molecules in the atmosphere and IR radiation in the 15 micron band. EVERY molecule is ‘warmer’ than 194K (-79C) due to collisions with surrounding N2, O2 and Ar molecules. 194K is the peak emission temperature for 15 microns; below that temperature, 15 micron radiation drops off rapidly. Not a problem for the atmosphere. While the CO2 molecules are being battered about, statistics show they can, at some time, emit a 15 micron photon. Only in the short period before the next collision will a CO2 molecule be able to absorb a passing 15 micron photon (from any direction, not necessarily from the surface only). I fail to see any ‘heat’ transfer in that reaction. To me it seems CO2 molecules can radiate more than they absorb, especially in the dense lower levels of the atmosphere. I see this as a cooling effect.
There is no doubt that 15 micron radiation is reaching the surface from CO2 in the atmosphere, but the surface is much warmer than 194K (-79C), so it is unlikely to absorb much of that radiation. Whatever is absorbed will leave the surface almost instantly, as the below-surface molecules carry much higher levels of energy up to the surface. I cannot see how ‘back radiation’ from CO2 15 micron radiation can warm anything.
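The 194 K figure quoted in the comment above can be checked against Wien’s displacement law (peak emission wavelength = b / T); this says nothing about the rest of the argument, only that the number itself is consistent:

```python
# Checking the ~194 K figure with Wien's displacement law: a blackbody
# whose emission peaks at 15 microns has temperature T = b / lambda_max.
WIEN_B = 2.898e-3   # Wien displacement constant, m*K
lam = 15e-6         # 15 micron CO2 band, in metres

T_peak = WIEN_B / lam
print(round(T_peak))  # 193, consistent with the ~194 K quoted above
```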

Fran Manns
April 20, 2013 7:39 pm

Factor analysis of the curves yielded two strong components:
1) The minute you begin to believe your own hypothesis, you are a dead duck as a scientist; and
2) Garbage in = garbage out.
