Guest “how can he write this with a straight face?” by David Middleton
Even 50-year-old climate models correctly predicted global warming
By Warren Cornwall, Dec. 4, 2019

Climate change doubters have a favorite target: climate models. They claim that computer simulations conducted decades ago didn’t accurately predict current warming, so the public should be wary of the predictive power of newer models. Now, the most sweeping evaluation of these older models—some half a century old—shows most of them were indeed accurate.
“How much warming we are having today is pretty much right on where models have predicted,” says the study’s lead author, Zeke Hausfather, a graduate student at the University of California, Berkeley.
[…]
Most of the models accurately predicted recent global surface temperatures, which have risen approximately 0.9°C since 1970. For 10 forecasts, there was no statistically significant difference between their output and historic observations, the team reports today in Geophysical Research Letters.
[…]
Seven older models missed the mark by as much as 0.1°C per decade. But the accuracy of five of those forecasts improved enough to match observations when the scientists adjusted a key input to the models: how much climate-changing pollution humans have emitted over the years.
[…]
To take one example, Hausfather points to a famous 1988 model overseen by then–NASA scientist James Hansen. The model predicted that if climate pollution kept rising at an even pace, average global temperatures today would be approximately 0.3°C warmer than they actually are. That has helped make Hansen’s work a popular target for critics of climate science.
[…]
Science! (As in “She blinded me with…”)
The accuracy of the failed models improved when they adjusted them to fit the observations… Shocking.
The AGU and Wiley currently allow limited access to Hausfather et al., 2019. Of particular note are Figures 2 and 3. I won’t post the images here because it is a protected, limited-access document.
Figure 2: Model Failure
Figure 2 has two panels. The upper panel compares the rates of temperature change of the observations vs. the models, with error bars that presumably represent 2σ (two standard deviations). According to my Mark I Eyeball Analysis, of the 17 model scenarios depicted: 6 were above the observations’ 2σ (off the chart, too much warming); 4 were near the top of the observations’ 2σ (too much warming); 2 were below the observations’ 2σ (off the chart, too little warming); 2 were near the bottom of the observations’ 2σ (too little warming); and 3 were within 1σ of the observations (in the ballpark).

The lower panel depicted the implied transient climate response (TCR) of the observations and the models. TCR is the direct warming that can be expected from a doubling of atmospheric carbon dioxide. It is an effectively instantaneous response. It is the only relevant climate sensitivity.

In the 3.5 °C ECS case, about 2.0 °C of warming occurs by the time of the doubling of atmospheric CO2. The remaining 1.5 °C of warming supposedly will occur over the subsequent 500 years. We’re constantly being told that we must hold warming by 2100 to no more than 2 °C relative to pre-industrial temperatures (the coldest climate of the Holocene).

I digitized the lower panel to get the TCR values. Of the 14 sets of observations, the implied TCR ranged from 1.5 to 2.0 °C, averaging 1.79 °C, with a very small σ of 0.13 °C. Of the 17 model scenarios, 9 exceeded the observed TCR by more than 1σ, 6 fell more than 1σ below it, and only 2 were within 1σ of the observed mean (1.79 °C).
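The binning above is simple to reproduce. A minimal sketch, using the observation statistics from the digitization (mean 1.79 °C, σ 0.13 °C) but with made-up model TCR values, since the individual digitized numbers are not given in the post:

```python
# Sketch of the classification applied to digitized TCR values.
# The observation statistics (mean 1.79 °C, sigma 0.13 °C) come from the post;
# the model TCRs below are hypothetical placeholders for illustration only.

obs_mean = 1.79   # °C per 2xCO2, mean of the 14 digitized observational TCRs
obs_sigma = 0.13  # °C, standard deviation of the observational TCRs

# Hypothetical model-scenario TCRs (not the actual digitized values):
model_tcrs = [2.4, 2.1, 1.5, 1.8, 2.6, 1.2, 2.0, 1.7]

def classify(tcr, mean=obs_mean, sigma=obs_sigma):
    """Bin a model TCR relative to the observed mean +/- 1 sigma."""
    if tcr > mean + sigma:
        return "too warm (>1σ above)"
    if tcr < mean - sigma:
        return "too cool (>1σ below)"
    return "within 1σ"

for tcr in model_tcrs:
    print(f"{tcr:.2f} °C -> {classify(tcr)}")
```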

A cross plot of the model TCR vs. observed TCR yields a random scatter…

Atmospheric CO2 is on track to reach that doubling around the end of this century.

An exponential trend function applied to the MLO data indicates that the doubling will occur around the year 2100. If the TCR is 1.79 °C, we will stay below 2 °C of warming and be barely above the “extremely low emissions” scenario on the Vox graph (Figure 3). However, most recent observation-based estimates place the TCR below 1.79 °C. Christy & McNider, 2017 concluded that the TCR was only about 1.1 °C, less than half of the model-derived value.
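The post’s ~2100 doubling date comes from an exponential fit to the MLO record. A back-of-envelope sketch of the same arithmetic, using round-number assumptions (roughly 410 ppm at Mauna Loa in 2019, a 280 ppm pre-industrial baseline) instead of an actual fit, shows how sensitive the implied year is to the assumed growth rate:

```python
import math

# Back-of-envelope check of the doubling estimate. The inputs are round-number
# assumptions, not values from the post: ~410 ppm at Mauna Loa in 2019 and a
# ~280 ppm pre-industrial baseline, so a "doubling" means 560 ppm.

C_2019 = 410.0         # ppm, approximate 2019 Mauna Loa annual mean (assumption)
BASELINE = 280.0       # ppm, approximate pre-industrial level (assumption)
TARGET = 2 * BASELINE  # 560 ppm

def doubling_year(rate, c0=C_2019, start=2019, target=TARGET):
    """Year when CO2 reaches `target` ppm under constant exponential growth."""
    return start + math.log(target / c0) / math.log(1.0 + rate)

# The doubling year moves by decades for small changes in the assumed rate:
for r in (0.004, 0.005, 0.006):
    print(f"growth {r:.1%}/yr -> doubling around {doubling_year(r):.0f}")
```

A growth rate near 0.4 %/yr lands the doubling near 2100, consistent with the post; faster assumed growth pulls it decades earlier.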
Putting Climate Change Claims to the Test
Date: 18/06/19

Dr John Christy
This is a full transcript of a talk given by Dr John Christy to the GWPF on Wednesday 8th May.

When I grew up in the world of science, science was understood as a method of finding information. You would make a claim or a hypothesis, and then test that claim against independent data. If it failed, you rejected your claim and you went back and started over again. What I’ve found today is that if someone makes a claim about the climate, and someone like me falsifies that claim, rather than rejecting it, that person tends to just yell louder that their claim is right. They don’t look at what the contrary information might say.
OK, so what are we talking about? We’re talking about how the climate responds to the emission of additional greenhouse gases caused by our combustion of fossil fuels.
[…]
So here’s the deal. We have a change in temperature from the deep atmosphere over 37.5 years, we know how much forcing there was upon the atmosphere, so we can relate these two with this little ratio, and multiply it by the ratio of the 2xCO2 forcing. So the transient climate response is to say, what will the temperature be like if you double CO2 – if you increase at 1% per year, which is roughly what the whole greenhouse effect is, and which is achieved in about 70 years. Our result is that the transient climate response in the troposphere is 1.1 °C. Not a very alarming number at all for a doubling of CO2. When we performed the same calculation using the climate models, the number was 2.31 °C. Clearly and significantly different. The models’ response to the forcing – their ∆T here – was over 2 times greater than what has happened in the real world.
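The ratio Christy describes can be sketched in a few lines. The 3.7 W/m² forcing for a CO2 doubling is the standard IPCC value; the warming and period forcing below are hypothetical placeholders chosen only so the arithmetic reproduces the ~1.1 °C figure quoted in the talk, not numbers taken from the paper:

```python
import math

# Sketch of Christy's ratio: scale the observed tropospheric warming by
# (2xCO2 forcing) / (forcing actually applied over the period).
# F_2X = 3.7 W/m^2 is the canonical value; delta_t and period_forcing below
# are hypothetical placeholders, not values from Christy & McNider (2017).

F_2X = 3.7  # W/m^2, forcing from a doubling of CO2

def transient_response(delta_t, period_forcing, f_2x=F_2X):
    """TCR estimate: observed warming scaled up to a full doubling's forcing."""
    return delta_t * (f_2x / period_forcing)

# The "about 70 years" for a 1%/yr CO2 increase to reach a doubling:
years_to_double = math.log(2) / math.log(1.01)  # just under 70 years

print(f"Implied TCR: {transient_response(0.6, 2.0):.2f} °C")
print(f"1%/yr doubling time: {years_to_double:.1f} years")
```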
[…]
There is one model that’s not too bad, it’s the Russian model. You don’t go to the White House today and say, “the Russian model works best”. You don’t say that at all! But the fact is they have a very low sensitivity to their climate model. When you look at the Russian model integrated out to 2100, you don’t see anything to get worried about. When you look at 120 years out from 1980, we already have 1/3 of the period done – if you’re looking out to 2100. These models are already falsified, you can’t trust them out to 2100, no way in the world would a legitimate scientist do that. If an engineer built an aeroplane and said it could fly 600 miles and the thing ran out of fuel at 200 and crashed, he might say: “I was only off by a factor of three”. No, we don’t do that in engineering and real science! A factor of three is huge in the energy balance system. Yet that’s what we see in the climate models.
[…]
I have three conclusions for my talk:
Theoretical climate modelling is deficient for describing past variations. Climate models fail for past variations, where we already know the answer. They’ve failed hypothesis tests and that means they’re highly questionable for giving us accurate information about how the relatively tiny forcing, and that’s that little guy right there, will affect the climate of the future.
The weather we really care about isn’t changing, and Mother Nature has many ways on her own to cause her climate to experience considerable variations in cycles. If you think about how many degrees of freedom are in the climate system, what a chaotic nonlinear, dynamical system can do with all those degrees of freedom, you will always have record highs, record lows, tremendous storms and so on. That’s the way that system is.
And lastly, carbon is the world’s dominant source of energy today, because it is affordable and directly leads to poverty eradication as well as the lengthening and quality enhancement of human life. Because of these massive benefits, usage is rising around the world, despite calls for its limitation.
And with that I thank you very much for having me.
GWPF
Dr. Christy’s presentation is well-worth reading in its entirety. This is from the presentation:

Figure 3: Hansen Revisionism
Figure 3 was yet another feeble effort to resuscitate Hansen et al., 1988.

Hansen’s own temperature data, GISTEMP, tracked Scenario C (the one in which we un-discovered fire) up until 2010, only crossing paths with Scenario B during the recent El Niño…

According to Hausfather et al., 2019, Scenario B was actually “business as usual”…
H88’s “most plausible” scenario B overestimated warming experienced subsequent to publication by around 54% (Figure 3). However, much of this mismatch was due to overestimating future external forcing – particularly from CH4 and halocarbons.
Hausfather et al., 2019
I think it might be impossible not to overestimate the warming effect of CH4, because such an effect doesn’t seem to be present in the geologic record. The highest atmospheric CH4 concentrations of the entire Phanerozoic Eon occurred during the Late Carboniferous (Pennsylvanian) and Early Permian Periods, the only time that Earth has been as cold as the Quaternary Period.

The fact is that the observations are behaving as if we have already enacted much of the Green New Deal Cultural Revolution (¡viva Che AOC!)…

Models Have Not Improved in 50 Years
This is one of the alleged #ExxonKnew models…

“Same as it ever was”…


If the oil & gas industry defined accurate predictions in the same manner as climate “scientists,” Macondo (Deepwater Horizon) would have been the only failed prediction in the past 30 years… because the rig blowing up and sinking wasn’t within the 5% to 95% range of outcomes in the pre-drill prognosis.
References
Bartdorff, O., Wallmann, K., Latif, M., and Semenov, V. (2008). “Phanerozoic evolution of atmospheric methane”. Global Biogeochem. Cycles, 22, GB1008. doi:10.1029/2007GB002985
Berner, R.A. and Z. Kothavala, 2001. “GEOCARB III: A Revised Model of Atmospheric CO2 over Phanerozoic Time”. American Journal of Science, v. 301, pp. 182–204, February 2001.
Christy, J. R., & McNider, R. T. (2017). “Satellite bulk tropospheric temperatures as a metric for climate sensitivity”. Asia-Pacific Journal of Atmospheric Sciences, 53(4), 511–518. https://doi.org/10.1007/s13143-017-0070-z
Hansen, J., I. Fung, A. Lacis, D. Rind, S. Lebedeff, R. Ruedy, G. Russell, and P. Stone, 1988. “Global climate changes as forecast by Goddard Institute for Space Studies three-dimensional model”. J. Geophys. Res., 93, 9341–9364. doi:10.1029/JD093iD08p09341
Hausfather, Z., Drake, H. F., Abbott, T., & Schmidt, G. A. (2019). “Evaluating the performance of past climate model projections”. Geophysical Research Letters, 46. https://doi.org/10.1029/2019GL085378
Royer, D. L., R. A. Berner, I. P. Montanez, N. J. Tabor and D. J. Beerling, 2004. “CO2 as a primary driver of Phanerozoic climate”. GSA Today, Vol. 14, No. 3, pp. 4–10.
First, they fumble their models to fit the past, then claim they can predict the future with them!
A nit-pick: in your “Figure 4. Implied TCR (°C/2xCO2), observations vs models,” it seems that you’re testing if each model’s mean prediction fits in the σ uncertainty/variation in the *observations*. That’s irrelevant — it’s the reverse comparison that matters: whether the observed data fits in the prediction range (uncertainty) of the models.
David,
If you will send me your address, I’ll send you a copy of my new book “The solar-magnet cause of climate changes and origin of the Ice Ages.” It contains data for all climate changes (for which there are data) over the past 800,000 years. The data are rather astonishing.
Don
Even IF the models did a good job with global temperature – and they DIDN’T and DON’T – they are garbage with temperature at smaller scales, garbage with cloud cover, garbage with precipitation (especially on scales smaller than global), etc.
But if you take care with your parameter tuning, you can take a bunch of inaccuracies all over the globe and add them up and get a fair representation of global temperature anomalies. Have to use that g-word again…garbage. Not much different than saying, “Well I derived an answer of 9.8 meters per second squared for acceleration due to gravity. I came up with 7.8 for half the globe and 11.8 for the other half, so it averages to 9.8. Perfection.”
The notion that today’s climate models “predict” is one of the fallacies that prevent us from having a sane public policy on carbon dioxide emissions.
As a retired computer modeller, can I ask why the models are even considered worthy of debate? The day it became “climate change” rather than “global warming”, they ceased to be the way to determine the cause or causes of changes.
With one world you have to model the difference between fossil fuel and not. With regional changes it is a pure heat generation and heat transfer problem immeasurably simpler to deal with by comparing fossil fuel use areas with low fossil fuel areas.
Followers of the climate cult will now say movement of emissions to high-anomaly areas, but emissions are highest at the source and disperse rather than concentrate at the north pole. Even if they did, we would only need to examine the rate of heat changes to see if a greenhouse effect is plausible, given the temperature rises and the fact that the Arctic is hardly a sunshine-and-beachwear destination of choice.
Can anyone point me to the flaw in this argument please?
Thank God the climate did improve the last 50 years.
Am I understanding correctly here?
The proponents of climate models claim that the models are reasonably accurate, when the models can miss by amounts that, in other contextual discussions, are alarming?
So, if a model is off by a few tenths of a degree, this is close to reality, but if reality shows an increase of a few tenths of a degree (within a historic range where this has happened before), then the few tenths of a degree signal catastrophe ahead?
Models are not alarmingly inaccurate by those amounts, yet climate is alarmingly warmer by those amounts?
Or am I missing something?
Nope. You definitely don’t miss much, Robert. You’re exactly right.
“Climate Models Have Not Improved in 50 Years”
– and scepticism won’t help due to new censorship aka “why giv’em a platform, a tribune”
By Guardian, BBC, ABC, CBC, NBC …
https://www.google.com/search?client=ms-android-huawei&sxsrf=ACYBGNTB12aAh-BXoFNhAKnQ8rajTzzQ6w:1576377324709&q=censorship+in+uk+universities&sa=X&ved=2ahUKEwidnOKTz7bmAhVPLewKHSxUB34Q1QIwCHoECAoQBg&biw=360&bih=574&dpr=3
https://www.inc.com/matthew-jones/5-things-all-millennials-want-gen-xers-to-know-abo.html
5 Things Millennials Want Everyone To Know About Political Correctness (That Older Generations Don’t Understand)
Being PC isn’t about restricting free speech, it’s about creating space for meaningful conversations
The heated debate about political correctness is often misunderstood. While many individuals across generations dislike the pejorative use of political correctness to represent censorship, a closer investigation reveals generational differences in the desire to use inclusive language.
Millennials know that using appropriate language invites rather than restricts productive conversation. Creating a supportive environment makes space for all individuals to feel welcome in sharing their opinions, rather than fearing that people will demonize their personhood and attack their character based on their identities. Thanks to the internet, Millennials are citizens of the globe and ambassadors of social justice. Unfortunately, not all generations understand how using certain words or phrases prohibits dialogue and hurts other people.
To discover five things that all millennials want older generations to know about political correctness that they don’t understand, read the list below.
1. There is a major difference between “being honest” and spewing prejudice.
2. Political correctness is not about censorship, it’s about showing respect.
3. Millennials feel more connected to global citizenship and human rights than nationalism.
4. Inclusive language creates space for meaningful conversations to take place, offensive language makes people feel unsafe.
5. Millennials are not being sensitive, they’re being morally minded and ethically informed global citizens.
:: and He that believeth shall be saved; but he that believeth not shall be damned.
:: damned!
OMG: Down there ´s america:
Figure 13. The models haven’t improved. RSS V4.0 MSU/AMSU atmosperhic temperature dataset vs. CMIP-5 climate models. → Figure 13. The models haven’t improved. RSS V4.0 MSU/AMSU atmospheric temperature dataset vs. CMIP-5 climate models.
Down there ´s america:
Image: https://www.education.com › article
Layers | Science project | Education.com
“atmosperhic” from www.education.com
Fourth Grade Science projects: “Atmosperhic Layers. Every night before you go to sleep, you probably cover up with a blanket.”
https://www.google.com/search?q=fury+in+the+slaughterhouse+Down+there+%C2%B4s+america&oq=fury+in+the+slaughterhouse+Down+there+%C2%B4s+america&aqs=chrome.
https://www.google.com/search?q=fury+in+the+slaughterhouse&oq=fury+in&aqs=chrome.
Zeke Hausfather’s study is nothing but deliberately deceptive “spin.” To excuse the extreme inaccuracies of modeled projections that failed to anticipate the negative feedbacks that would mitigate GHG emissions, he substituted GHG level increases (i.e., after the effects of negative feedbacks) in place of emissions.
I fired off a 16-part tweetstorm about it, starting here:
https://twitter.com/ncdave4life/status/1205543695338131461
it’s “unrolled” here:
https://threadreaderapp.com/thread/1205543695338131461.html
or here:
http://sealevel.info/threadreaderapp_1205543695338131461_hausfaths_study_is_nothing_but_spin-unrolled_tweetstorm_2019-12-13f.html
As part of the effort to build support for creation of the IPCC, on a sweltering June 23, 1988, James Hansen testified to Congress about the temperature projections from
GISS’s state-of-the-art “GCM Model II” climate model. He described three “scenarios,” dubbed A, B and C. He told Congress that “scenario C” represented “draconian emission cuts,” “scenario A” represented “business as usual,” and “scenario B” was in-between. Here’s the transcript:
https://sealevel.info/1988_Hansen_Senate_Testimony.html
Their paper on the same topic, Hansen et al 1988, Global climate changes as forecast by the GISS 3-D model, J. Geophys. Res., was
published a couple of months later. It filled in the details.
The three scenarios, it said, “represented the response of a 3D global climate model to realistic rates of change of radiative forcing mechanisms.” Its discussion focused mostly on scenario A, which they said “goes approximately through the middle of the range of likely climate forcing estimated for the year 2030 by Ramanathan et al. [1985],” though they acknowledged that, “Scenario A, since it is exponential, must eventually be on the high side of reality in view of finite resource constraints and environmental concerns, even though the growth of emissions in scenario A (≈1.5% yr⁻¹) is less than the rate typical of the past century (≈4% yr⁻¹)… [so] Scenario B is perhaps the most plausible of the three cases.”
Hansen, in both his testimony and the paper, strongly conveyed the impression that “scenario A” was the realistic one, except in the very long term, when “finite resource constraints” must “eventually” limit emissions, making scenario B more plausible. But Hausfather deliberately distorted Hansen’s meaning, by omitting Hansen’s “eventually” qualifier, to make it appear that Hansen had presented scenario B as the most realistic, even in the near term.
There were many problems with GISS’s “scenario A.” Perhaps most obviously, by 1988, CFC emissions were already slated to decline, because of the 1985 Vienna Convention for the Protection of the Ozone Layer and 1987 Montreal Protocol. So, building an exponential increase of those emissions into any of their scenarios was shockingly dishonest. But they did it anyhow.
Other than that, Scenario A’s emission projections actually were conservative. In fact, it under-projected CO2 emissions. Over the next 26 years CO2 emissions actually increased by an average of 1.97%/yr, rather than scenario A’s 1.5%.
Yet scenario A still projected 200% to 300% too much warming.
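The growth-rate comparison in this comment is straightforward compound arithmetic. A quick sketch, using the 1.97%/yr and 1.5%/yr rates stated above over the 26-year period:

```python
# Quick check of the comment's growth-rate comparison: cumulative CO2
# emissions growth over 26 years at the observed ~1.97%/yr vs. scenario A's
# assumed 1.5%/yr. The rates and period come from the comment; the rest is
# plain compound-growth arithmetic.

def cumulative_growth(rate, years=26):
    """Total growth factor after `years` of constant annual growth `rate`."""
    return (1.0 + rate) ** years

observed = cumulative_growth(0.0197)   # ~1.66x: emissions up ~66%
scenario_a = cumulative_growth(0.015)  # ~1.47x: emissions up ~47%
print(f"Observed: {observed:.2f}x, Scenario A: {scenario_a:.2f}x")
```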
Graph:
http://sealevel.info/HADCRUT4_1988-2019_woodfortrees18.png
Details:
http://sealevel.info/hansen1988_retrospective.html
There were two main reasons for the extreme inaccuracy of their “business as usual” scenario A:
One reason was that it inexcusably projected exponential growth in CFC emissions, which were already slated to decline.
The other reason was that they completely failed to anticipate that powerful negative feedbacks, like “greening” and ocean processes, would remove CO2 from the atmosphere at an accelerating rate, thus mitigating CO2 emissions. That’s why CO2 levels have increased so slowly, while CO2 emissions increased so rapidly. In fact, their paper conflates “emissions” with GHG level increases, using the terms interchangeably, because they didn’t realize that higher GHG levels would accelerate the natural processes which remove those GHGs from the atmosphere.
That’s why the climate models did such a terrible job of projecting future temperatures.