By Andy May
In my last post, I explained how the IPCC attempts to use climate models to show humans have caused the recent global warming. Models are useful for testing scientific ideas, but they are not proof that an idea is correct unless they successfully and accurately predict future events. See the story of Arthur Eddington’s test of Einstein’s theory of relativity here. In the computer modeling world, a world I worked in for 42 years, choosing the one model that matches observations best is normal best practice. I have not seen a good explanation for why CMIP5 and CMIP6 produce ensemble model means. It seems to be a political solution to a scientific problem. This is addressed in AR6 in Chapter 1,[1] where they refer to averaging multiple models, without considering their accuracy or mutual independence, as “model democracy.” It is unclear if they are being sarcastic.
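As a minimal sketch of that kind of skill-based selection, here is how one might score a handful of model runs against observations and keep the single best performer. The model names and anomaly values are invented for illustration only; they are not real CMIP output.

```python
# Minimal sketch: pick the single model that best matches observations,
# here by root-mean-square error over a common period. All values are
# hypothetical stand-ins for annual temperature anomalies (deg C).
import numpy as np

obs = np.array([0.10, 0.15, 0.12, 0.20, 0.25, 0.22])            # observed anomalies
models = {
    "model_A": np.array([0.12, 0.18, 0.15, 0.22, 0.28, 0.26]),
    "model_B": np.array([0.30, 0.45, 0.50, 0.65, 0.80, 0.90]),  # runs hot
    "model_C": np.array([0.05, 0.08, 0.06, 0.10, 0.12, 0.11]),  # runs cold
}

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

scores = {name: rmse(run, obs) for name, run in models.items()}
best = min(scores, key=scores.get)
print(scores)
print("Best-matching model:", best)
```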

Figure 1 shows the CMIP6 (IPCC, 2021), or IPCC AR6, models, their mean in yellow and red boxes, and observations in green. In this region of the tropical troposphere, often called the climate model “hot spot,” climate models have always overestimated warming.
AR6 discusses weighting the models according to their performance and their dependence upon other models, since many models share code and logic, but could not find a robust method for determining the weights. In the end, they classified the models based first on observations prior to 2014 and second on their modeled ECS (Equilibrium Climate Sensitivity to a doubling of CO2) and TCR (the Transient Climate Response to a doubling of CO2),[2] as discussed in AR6, Chapters 1 and 4.[3] These latter two values, as computed by the ensemble mean and ensemble members, were compared to ECS and TCR values determined independently of the models. The AR6 modeling process resulted in higher projected future warming than the already hot AR5. In AR6 Chapter 4 they admit that much of the increase was due to the higher ECS and TCR values used in the AR6 assessment.
The IPCC, in AR4, AR5, and AR6, often conflates models and the real world, so constraining their model results to an independently predetermined range of climate sensitivity is especially worrisome. Models are the primary source for ECS and TCR, which are model-based estimates of climate sensitivity to CO2. They are artificial model constructs that cannot be measured in the real world; they can only be approximated. This makes their technique partially circular. Further, models are used to forecast future temperatures. Since the models run hot compared to observed warming, and have done so for over 30 years, model forecasts can be expected to be too high.
One reason they give in both AR5 and AR6 for using an ensemble mean is that they think large ensembles allow them to separate “natural variability,” which they conflate with “noise,” from model uncertainty.[4] Thus, they use models to compute natural variability, with all the biases therein. Another reason is that if two models come up with similar results using the same scenario, the result should be “more robust.” Gavin Schmidt gives us his take at realclimate.org:
“In the international coordinated efforts to assess climate model skill (such as the Coupled Model Intercomparison Project), multiple groups from around the world submit their model results from specified experiments to a joint archive. The basic idea is that if different models from different groups agree on a result, then that result is likely to be robust based on the (shared) fundamental understanding of the climate system despite the structural uncertainty in modeling the climate. But there are two very obvious ways in which this ideal is not met in practice.
1. If the models are actually the same [this happened in CMIP5], then it’s totally unsurprising that a result might be common between them. One of the two models would be redundant and add nothing to our knowledge of structural uncertainties.
2. The models might well be totally independent in formulation, history, and usage, but the two models share a common, but fallacious, assumption about the real world. Then a common result might reflect that shared error, and not reflect anything about the real world at all.”
Gavin Schmidt, 2018
In AR6, they acknowledge it is difficult to separate natural variability from model uncertainty. They tried separating them by duration, that is, by assuming that short-term changes are natural variability and longer-term changes are model uncertainty. But they found that some natural variability is multi-decadal.[5] Internal natural variability via ocean oscillations, such as the AMO[6] or the PDO,[7] has a long-term (>60 years) effect on global and regional climate.[8] These very long natural oscillations make it difficult to back out the effect of human greenhouse gas emissions.
Conflating natural variability with short-term noise is a mistake, as is assuming natural variability is short term. It is not clear that CMIP6 model uncertainty is properly understood. Further, using unvalidated models to “measure” natural variability, even when an attempt is made to separate out model uncertainty, assumes that the models are capturing natural variability, which is unlikely. Long-term variability in both the Sun and the oceans is explicitly ignored by the models.[9]
The CMIP models have a tough time simulating the AMO and PDO. They produce features that approximate these natural oscillations in time and magnitude, but they are out of phase with observed temperature records and each other. A careful look at the projected portions of Figures 1 (post 2014) and 2 (post 2005) will confirm this timing problem. Thus, when the model output is averaged into a multi-model mean, natural ocean oscillations are probably “averaged” out.

The model results shown in Figures 1 and 2 resemble a plate of spaghetti. Natural climate variability is cyclical,[10] so this odd practice of averaging multiple models erroneously makes it appear nature plays a small role in climate. Once you average out nature, you manufacture a large climate sensitivity to CO2 or any other factor you wish, and erroneously assign nearly all observed warming to human activities.
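A toy illustration of the point, with invented numbers: if several synthetic “model” runs share the same long-term trend but carry a ~60-year oscillation with random phases, the ensemble mean keeps the trend and largely cancels the oscillation.

```python
# Minimal sketch: average synthetic "model" series that share one trend but
# have a ~60-year oscillation with random phase. The multi-model mean keeps
# the trend and mostly averages out the oscillation. All numbers invented.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2021)
trend = 0.008 * (years - 1900)                       # shared warming trend (illustrative)

runs = []
for _ in range(30):
    phase = rng.uniform(0, 2 * np.pi)                # each run's oscillation is out of phase
    osc = 0.15 * np.sin(2 * np.pi * (years - 1900) / 60 + phase)
    runs.append(trend + osc + rng.normal(0, 0.05, years.size))

ensemble_mean = np.mean(runs, axis=0)
print(f"typical single-run departure from trend: {np.std(runs[0] - trend):.3f} deg C")
print(f"ensemble-mean departure from trend:      {np.std(ensemble_mean - trend):.3f} deg C")
```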
The IPCC included many models in their AR5 ensemble that they admit are inferior. Some of the models failed a residual test, indicating a poor fit with observations.[11] The inclusion of models with a poor fit to observations corrupts the ensemble mean. In fact, as admitted by Gavin Schmidt in his blog post, two of the models in CMIP5 were the same model with different names, which inadvertently doubled the weight of that model, violating “model democracy.” He also admits that just because different models agree on a result, the result is not necessarily more “robust.” I think we can all agree he got that right.
It seems that they are attempting to do “consensus science” and, for political reasons, are including results from as many models as possible. This is an admission that they have no idea how climate works; if they did, they would only have one model. As Michael Crichton famously said:
“I regard consensus science as an extremely pernicious development that ought to be stopped cold in its tracks. Historically, the claim of consensus has been the first refuge of scoundrels; it is a way to avoid debate by claiming that the matter is already settled.”
Michael Crichton, January 17, 2003, at the California Institute of Technology
In Professor William Happer’s words:
“A single, incisive experiment is sufficient to falsify a theory, even if the theory accurately explains many other experiments. Climate models have been falsified because they have predicted much more warming than has been observed. … Other failures include the absence of the predicted hot spot in the upper troposphere of tropical latitudes.”
(Happer, 2021d, p. 6)
The “hot spot” that Happer refers to is the source of the temperatures plotted in Figures 1 and 2. McKitrick and Christy provide the details of the statistical climate model falsification Happer refers to in their 2018 paper. In summary, if the IPCC cannot choose one best model to use to forecast future climate, it is an admission that they do not know what drives climate. Averaging multiple inferior models does not allow them to estimate natural variability or the human influence on climate more accurately, it only produces a better-looking forecast. It is a “cosmetic,” as we say in the computer modeling world. They will only be able to properly estimate natural variability with observations, at least in my opinion. They knew this in the IPCC first assessment report (FAR), but forgot it in later reports. In FAR they concluded:
“The size of this [global] warming is broadly consistent with predictions of climate models, but it is also of the same magnitude as natural climate variability. … The unequivocal detection of the enhanced greenhouse effect from observations is not likely for a decade or more.”
(IPCC, 1992, p. 6)
Most readers will remember that the famous “Pause” in warming started less than ten years later.
The bulk of this post is an excerpt from my latest book, The Great Climate Debate, Karoly v Happer.
The bibliography can be downloaded here.
[1] AR6, page 1-96
[2] Transient Climate Response
[3] AR6, pages 1-96, 1-97, 4-22 to 4-23, and 4-4.
[4] (Mitchell, Lo, Seviour, Haimberger, & Polvani, 2020) explain a methodology for separating natural variability from model differences. See also Box 4.1 in AR6, pages 4-21 to 4-24, for a complete discussion of the problem.
[5] AR6, pages 4-19 to 4-24
[6] Atlantic Multi-decadal Oscillation
[7] Pacific Decadal Oscillation
[8] (Wyatt & Curry, 2014)
[9] (Connolly et al., 2021)
[10] (Wyatt & Curry, 2014), (Scafetta, 2021), and (Scafetta, 2013)
[11] (IPCC, 2013, p. 882)
Hi Andy – I am re-posting my fundamental objection to the entire CAGW scam:
Best regards, Allan
https://wattsupwiththat.com/2022/01/16/how-much-manmade-co2-is-in-the-atmosphere-really/#comment-3433172
“…warming does precede CO2 level rises”
Correct, as proved in my January 2008 paper – maligned and ignored.
CARBON DIOXIDE IS NOT THE PRIMARY CAUSE OF GLOBAL WARMING, THE FUTURE CAN NOT CAUSE THE PAST
By Allan M.R. MacRae, January 2008
http://icecap.us/images/uploads/CO2vsTMacRae.pdf
https://wattsupwiththat.com/2021/06/07/carbon-cycle/#comment-3264363
[excerpt]
Atmospheric CO2 changes lag temperature changes at all measured time scales. (MacRae, 2008). Humlum et al (2013) confirmed this conclusion.
Kuo et al (1990) made similar observations in the journal Nature, but have been studiously ignored.
IF CO2 is a significant driver of global temperature, CO2 changes would lead temperature changes but they do NOT – CO2 changes lag temperature changes.
Think about that:
Kuo was correct in 1990, and for 31 years climate science has ignored that conclusion and has been going backwards!
Climate Sensitivity (CS) to CO2 is a fiction – so small, if it even exists, it is practically irrelevant.
“The future cannot cause the past.” Here is the proof, from my 2008 paper:
https://www.woodfortrees.org/plot/esrl-co2/from:1979/mean:12/derivative/plot/uah6/from:1979/scale:0.18/offset:0.17
In the modern data record, the lag of atmospheric CO2 changes after atmospheric temperature changes is ~9 months. This is an absolute disproof of the CAGW hypothesis, which states that increasing CO2 drives temperature.
“The future cannot cause the past.”
In my 2019 paper below, I explained why the lag is ~9 months – it is basic calculus: the 90 degree (1/4 cycle) lag between the derivative and its integral, applied to the ~3 year ENSO period.
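A minimal sketch, not MacRae’s actual method, of how such a lag might be estimated: find the peak of the cross-correlation between a temperature series and the rate of change of CO2. Both series below are synthetic placeholders, not the UAH or Mauna Loa data.

```python
# Minimal sketch: estimate the lag between a temperature anomaly series and
# the month-to-month change in CO2 from the peak of their cross-correlation.
# Both series are synthetic; the "true" 9-month lag is built in for the demo.
import numpy as np

rng = np.random.default_rng(1)
n = 500                                        # months
temp = np.convolve(rng.normal(0, 1, n), np.ones(12) / 12, mode="same")
true_lag = 9                                   # months (illustrative)
dco2 = np.roll(temp, true_lag) + rng.normal(0, 0.1, n)   # dCO2/dt trails temperature

def best_lag(x, y, max_lag=24):
    lags = range(-max_lag, max_lag + 1)
    corrs = [np.corrcoef(x[max_lag:-max_lag],
                         np.roll(y, -k)[max_lag:-max_lag])[0, 1] for k in lags]
    return list(lags)[int(np.argmax(corrs))]

print("dCO2/dt lags temperature by", best_lag(temp, dco2), "months")
```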
My 2008 paper remains very important. My 2008 conclusion was confirmed and expanded by Humlum et al in 2013, for which I am grateful.
All warmists and most skeptics argue about the magnitude of climate sensitivity to increasing CO2, and whether the resulting CO2-driven global warming will be hot and dangerous or warm and beneficial. Both groups are probably wrong.
There is a high probability that the mainstream climate debate about the magnitude of CS is wrong – a waste of decades of vital time, tens of trillions of dollars of green energy nonsense and millions of lives. Vital energy systems have been compromised, damaged with intermittent, unreliable wind and solar generation – a debacle.
It is important to note that Global Cooling is happening now, even as CO2 concentration increases – another disproof of the global warming fraud.
Cheap abundant reliable energy is the lifeblood of humanity – it IS that simple. The green sabotage of our vital energy systems, whether innocent or deliberate, has cost lives and could cost very many more.
Good analysis. Just eyeballing it, it looks like the 95% confidence interval for the combination of models would be about +/- 3 C at around 2020.
If they were honest about the land and sea based measurements that give us the fraudulent world temperature, it would be about plus or minus 5 C. Certainly nothing resolvable to the hundredth of a degree.
You cannot use formal statistics on numbers that are invented. Full stop. Geoff S
If the ‘climatologists’ were honest about everything, they would be out of a job, and the world would be less fearful of the catastrophic forecasts that the politicians, MSM, and social media are gorging on.
Ja. Same conclusion here. It ain’t the CO2 who dunnit
https://breadonthewater.co.za/2022/03/08/who-or-what-turned-up-the-heat/
Wot. ‘wot dunnit’
Proper English, please. 🙂
“The Future Cannot Cause The Past”
In Climate “Science” it can and does!
IF the future COULD cause the past, the UPPER BOUND* of climate sensitivity to increasing atmospheric CO2 would be ~1C/doubling – too low for dangerous global warming to occur.
Let’s stop this false CAGW nonsense before we waste many trillions of dollars and many millions of lives… Oh! We already did… Shame!
*Note: The Upper Bound of Climate Sensitivity is calculated by ASSUMING that ALL the observed increase in global temperature in the satellite data record or the surface temperature data record is attributable to increasing atmospheric CO2. Still a false crisis!
Completely contrary to observations and the actual science.
you can’t just make this stuff up to suit your political opinions, you know…
(Argument from lack of authority doesn’t fix the fallacy, if that’s what you’re thinkin’ ; )
What ‘actual science’ would that be? I’m sure the climate modellers would love to know, as it’s clear that they don’t understand how the climate works.
Climate Observations are not your friend, Griff.
They refute your CO2 crisis narrative.
You didn’t even try to make up a real counterpoint to anything.
LOL
“you can’t just make this stuff up to suit your political opinions”
Yet it seems to be all griff is capable of.
nary a fact in sight!
1) Models are not observation
2) Models are not science
3) Making up stuff is the core of climate science
4) Politics is why climate science was invented
stop making sense
Allan,
You are correct in your several papers, as are Kuo (1990) and Humlum.
The Alarmists seek to overturn the fact that CO2 lags temperature by pointing to the paper Shakun et al (2012).
I find the conclusions of that paper entirely speculative.
It is asserted that while the tilt of the earth’s axis and known phenomena initiated all of the 5 ice ages over the last 800,000 years, in the last deglaciation 20,000 years ago, some ninety per cent of the consequent warming was caused by CO2 or greenhouse gases.
This “proves” that CO2 does not lag temperature, at least not in the current experience!
In the paper you will see the phrase “Transient Modelling” which says a lot about the reliability of the findings.
You need to do some serious refutation of this paper, which is widely quoted.
The IPCC seems to be following the notion that multiple measurements of the same thing increase the accuracy of the mean. As the models are “measuring” different things, this is just adding noise.
It’s crazy.
I hadn’t thought of this before. They do appear to be trying to find the “true value” by averaging multiple measurements.
Two problems though: one, they are using different “devices” to measure with, and two, that process never eliminates systematic error. That means their “true value” probably isn’t accurate. Obviously!
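A quick numerical illustration of that point, with made-up numbers: averaging shrinks the random error roughly as 1/sqrt(N), but a shared systematic bias survives no matter how many values go into the mean.

```python
# Minimal sketch: averaging many noisy measurements of one quantity reduces
# the random error, but a common systematic bias is untouched by averaging.
# All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)
true_value = 15.0        # the quantity being "measured"
bias = 0.5               # systematic error shared by every instrument
noise_sd = 1.0           # random error of a single measurement

for n in (1, 10, 100, 1000):
    measurements = true_value + bias + rng.normal(0, noise_sd, n)
    print(f"N={n:5d}  mean={measurements.mean():7.3f}  "
          f"error vs truth={measurements.mean() - true_value:+.3f}")
# The error never falls much below the bias, no matter how many values are averaged.
```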
Another Q—what causes the seemingly random variations in the year-to-year output of any given model?
They didn’t account for butterflies.
Thank you for the great review of the latest IPCC agenda. And none of those models can replicate past climatic changes — and therefore their forecasts cannot be trusted.
Actually, one model does replicate temperature observations, INM-CM4 and INM-CM5, as seen in Figure 2 above. But it does not forecast alarming warming, so is ignored or disparaged.
https://rclutz.com/2020/01/26/climate-models-good-bad-and-ugly/
Correct. I discuss this all the time during my public talks about climate change and my videos — like this one … https://www.youtube.com/watch?v=5OubfvWAJ4c
When calculating fluid flow in pipes, there are two widely used MODELS: Manning’s and Colebrook-White.
Engineers select the one most appropriate for their particular problem.
They don’t use both and average.
Averaging climate models is unscientific and simply wrong.
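For readers unfamiliar with the two relations mentioned above, here is a rough sketch of each: the Manning formula for mean velocity and the Colebrook-White equation solved iteratively for the Darcy friction factor. The pipe diameter, slope, roughness, and Manning n below are illustrative values, not anything from the comment.

```python
# Minimal sketch of the two pipe-flow relations: Manning's formula and the
# Colebrook-White equation (fixed-point iteration for the Darcy friction factor).
import math

def colebrook_white_f(reynolds, rel_roughness, iters=50):
    """Darcy friction factor from Colebrook-White, by fixed-point iteration."""
    f = 0.02                                    # initial guess
    for _ in range(iters):
        f = (-2.0 * math.log10(rel_roughness / 3.7
                               + 2.51 / (reynolds * math.sqrt(f)))) ** -2
    return f

def manning_velocity(n, hydraulic_radius, slope):
    """Mean velocity (m/s) from Manning's formula, SI units."""
    return (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * math.sqrt(slope)

# Illustrative full-flowing 0.3 m pipe, slope 0.002, water at ~20 C
D, slope, n_manning, eps = 0.3, 0.002, 0.013, 0.00026    # m, -, s/m^(1/3), m
v_manning = manning_velocity(n_manning, D / 4.0, slope)  # hydraulic radius = D/4 when full
reynolds = v_manning * D / 1.0e-6                        # kinematic viscosity ~1e-6 m^2/s
f = colebrook_white_f(reynolds, eps / D)
print(f"Manning velocity: {v_manning:.2f} m/s, Colebrook-White friction factor: {f:.4f}")
```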
Yep, it’s a very strange concept that averaging a whole heap of obviously farcical models somehow gives them something that is even remotely real.
Take that big bold red “model mean” line off the graph, and it becomes more apparent that the models are simply wrong. That’s why they show it.
GIGO …
What I find amazing is that instead of fixing model errors known for decades, they simply repeat them.
The tropical troposphere hotspot model error is easy to explain now that we have a decade of ARGO data on upper ocean salinity, one of its three ultimate design goals. ARGO says that tropical ocean rainfall is about twice what the models estimate. So the WVP is about half of reality in the models, so model feedback ECS is about twice observed.
They not only repeat the same error, report after report, they move farther from observations each time! The models are getting worse with time. If I had done that in my modeling days, I would have been out on the street.
From about 2005 the model average turns linear, just like Dr. Frank predicted. In addition, his error range appears correct too, since the model average is getting further and further away from actual observations.
Good observations, Jim. Do you know if the transition delineates a change from where the models are tuned, excuse me, parameterized, to where they are running without training wheels, so to speak? Also, as Dr. Frank’s error analysis utilizes the difference between observed and modeled cloud fraction, I wonder if modeled cloud fraction lines out around the same time.
I am no expert on models, but as I understand it, they are run iteratively, i.e., one year after another. That prevents them from running off the rails, but I suspect it also minimizes any massive changes, which means after a number of runs they converge to a common increase. Basically like a series with a common factor.
It is something I have never seen a paper discussing. However, I believe Dr. Frank didn’t get into the details, just the fact that when run they turn into a linear projection.
I suspect clouds just have a set value all the way through, by using parameters, which again voids any changes that occur over time.
“so model feedback ECS”
To get feedback, you first need a signal. !
CO2 does not have a warming signal.
Rainfall is presumably at lower altitudes than the hot-spot and therefore of minor effect on the model result, if any. I am curious about the mauve(?) plot in Fig 2, surely a reasonable fit? Or is this the notorious Russian plot that fits?
That is INM-CM4, the Russian model that fits observations. I noticed INM-CM5 and INM-CM4-8, in CMIP6 fixed that problem, now they don’t fit either. See Figure 1.
If this was real science, the INM-CM4 model would have been declared the winner, the climate “crisis” would have been cancelled and AGW would be treated as, at worst, a very minor long term problem that could easily be addressed by relatively small adjustments in “business as usual.”
Great post Andy… 👍👍
Models have failed, or rather the use thereof, in many disciplines. Don’t know where I got this, maybe here, but it seems to be a useful guide. Designed for physics, it should be required reading for the fisheries modelers, for example, who greatly underestimated populations of red snapper in the Gulf of Mexico. Divers, oil platform workers, and shrimpers, among others, knew better. Such models have been used for several decades unsuccessfully without proper understanding of natural fluctuations, a necessity pointed out by one Martin Burkenroad six decades ago.
Burkenroad, M. D. 1948. Fluctuation in abundance of Pacific halibut. In A Symposium on Fish Populations, Bulletin Bingham Oceanographic Collection 11(4):81-129.
Burkenroad, M. D. 1951. Some principles of marine fishery biology. Publications Institute Marine Science University Texas 2(1):171-212. III, p. 195, “The Effects of Environmental Change Independent of Fishing.”
Oberkampf, W. L., T. G. Trucano and C. Hirsch. 2003. Verification, validation, and predictive capability in computational engineering and physics. Sand Rept. Sand 2003-3769S. Sandia National Laboratories. 92 pp.
Imagine you want to predict the value of Bitcoin in 5 years to inform your investment decisions. Now you wouldn’t consult economists, because you know how useless they are. So instead you choose an astrologer.
But because the reputation of astrologers is only slightly better than economists, you decide to use 100 astrologers. This results in a large range of predictions. You don’t know which to choose, and so you decide to do what the IPCC does. You average them.
Now you’re ready to invest in Bitcoin!
Or write an IPCC report. 😉
Climastrology
Let’s not forget that real observed temperatures are now back down to 2002-2010 levels.
Oh, they will homogenize (adjust) that out soon.
[IF] man was the cause of CAGW, then we would have seen some degree of change relative to the global environmental laws and regulations since 1890-1900, when the first State-oriented environmental management programs began, and which have in plurality traversed the dozen+ decades to now. Yet, since the 1970’s and all the environmental law and regulation, all that we have seen from our ‘scientific community’ on climate is world-ending hysteria. Pardon any entrenched climate change skepticism on my part. Both the politics and the science have been horrendously unscientific.
Hypothesis with model augmentation…
Science will be that philosophy. Better than it was before. Better, stronger, faster.
Able to divine space and time, near, far, and ludicrous.
“The God Parody”
That said, the democratic/dictatorial duality.
A key error in climate models is that clouds are uncoupled from surface temperature. They will never get that right while there is a political agenda that sides with CO2 induced warming.
Once clouds become surface temperature controlled, it is impossible to get runaway warming, as Earth has demonstrated over literally billions of years with all sorts of natural variables in the mix.
Oceans are temperature limited. The sun could double its output and all Earth would see is more widespread monsoon as a result of increasing area of convective instability. The sun output could halve and there would be more sea ice to limit heat loss but oceans would still persist.
Ice on water and in the atmosphere regulate Earth’s energy balance not some trivial absorption spectra of various gasses.
It could be very soon that there is a political pivot back to Trump’s world. It is already happening with Europe using Russian gas. Soon enough it will be less woke to believe the ridiculous notion that CO2 in trace amounts can alter Earth’s energy balance – so laughably naive.
At the Permian-Triassic boundary the temperature reconstructions show a big spike. To achieve the same thing now would be possible: lower ocean albedo, suppress cloud condensation nuclei production, and reduce dimethyl sulphide-producing phytoplankton populations. You could do all that by spreading light oil everywhere.
Here’s a P/T extinction Feynman guess: Pangean orogeny, high winds blowing dust and long-term volcanic eruptions fed the oceans enabling a massive diatom bloom. Huge quantities of released lipids… It’s a guess. Feynman teaches us we are allowed to guess.
What would it look like? The Sea of Marmara.
JF
Just how many days did this tipping point take?
Rick
Great question!
From my copy of Science Vol 338 Oct 19, 2012, page 368 of “Lethally Hot Temperatures During the Early Triassic Greenhouse”:
“Calculation of seawater temperatures from O18 values reveals rapid warming across the Permian-Triassic boundary [21C to 36C, over ~ 0.8 million years(My)]; reaching a temperature maximum within the Griesbachian (~252.1 Ma) followed by cooling in the Dienerian.” [my bold]
Geologists have a unique take on the word “rapid” !
As a retired structural engineer, from my experiences with geologists a modest 1.1 degree Celsius rise in temperature over 150 years is to all intents & purposes………..instantaneous!!!
Averaging models makes even less sense than averaging the geographically variable ocean/lake surface temperature warming rates. Roy Spencer does sterling work, but no-one seems to be at all interested that some areas of Earth’s water surface are warming two or three times faster than the average.
One can expect models to be all over the place, but what does climate science have to say about the measured warming acceleration of, for example, the Black Sea? Or the real canary in the coal mine, the Sea of Marmara?
And there must be someone in Michigan watching the Lake and wondering if pollution of the air/water boundary layer has a role.
JF
Averaging the models simply cancels out the randomness in them and reveals only their common components.
Their only common components are the input forcing profiles. So the model ensemble mean is simply the low frequency input forcing. This is why the model mean matches temps no better than the input forcings do.
This is a classic forward model misunderstanding.
The same principles apply in seismic inversion when attempting to go from relative impedance (which is constrained by the seismic trace) to absolute impedance (which depends only on the low frequency model input + the relative part).
Using the analogy of conditional simulation in geostatistics, the simple test is to subtract the model ensemble mean from each model and inspect the residuals. They are uncorrelated random noise, demonstrating the models are nothing more than input forcing + noise.
Precisely!
This chart is for the AR5 set of 39 models. It is the residual for each model after subtracting the ensemble mean from each. I subtracted the absolute mean model (remember the climate model output is absolute in degK whereas observed temps are baselined anomalies – this gives the modelers at least 1 extra degree of freedom to match the observations post simulation).
Note (a) the spread and (b) the lack of structure
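A minimal sketch of that residual test, using a synthetic stand-in for the CMIP archive rather than real model output: subtract the ensemble mean from each run and check the residuals for structure, here via lag-1 autocorrelation.

```python
# Minimal sketch: residual test on a hypothetical (n_models x n_years) block
# of runs built as shared forcing + independent noise. Subtracting the
# ensemble mean leaves residuals with essentially no structure.
import numpy as np

rng = np.random.default_rng(3)
n_models, n_years = 39, 150
forcing = np.linspace(0.0, 1.2, n_years)                # shared low-frequency input
runs = forcing + rng.normal(0.0, 0.15, (n_models, n_years))

ensemble_mean = runs.mean(axis=0)
residuals = runs - ensemble_mean                        # one residual series per model

def lag1_autocorr(x):
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

acs = [lag1_autocorr(r) for r in residuals]
print(f"mean lag-1 autocorrelation of residuals: {np.mean(acs):+.3f}")
# Values near zero indicate little structure beyond noise about the shared forcing.
```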
You feed in one forcing and all you get is the forcing plus noise. We spent billions for this? I could have written these models for a lot less and it would not have taken 30 years.
Great insight!
Sorry.
Typical winter weather in the US. Frost in Texas and snowstorm in the east.
Stratospheric intrusion in the east will bring a strong drop in surface temperatures in the eastern US.
Record Cold with 40 °F below normal Temperatures to impact Southeast U.S. on Sunday, some areas even colder than Alaska
If anyone doesn’t understand how thin the troposphere is in winter at mid-latitudes, they should be in the southeastern US right now. Then they will understand how a total freeze can occur in a few hours.
How do the people who believe the models are doing a good job refute these explanations of what others find wrong with the models? Do they even try (apart from name-calling)?
Chris,
Judging from modeler Gavin Schmidt’s quote in the post, even they don’t believe the models. The text in AR6 is also very critical. But then they turn around and use them to predict future weather. I detect that many in the “consensus” are losing their enthusiasm. One would think that trying to predict doom for 30 years, and still no doom or any sign of doom, would wear on anyone. Most of the alarmists are approaching retirement and they are probably ready to get off the merry-go-round.
From the article: “2. The models might well be totally independent in formulation, history, and usage, but the two models share a common, but fallacious, assumption about the real world.”
I believe this is the case with all of Alarmist Climate Science. They make the assumption that CO2 is the driver of Earth’s temperatures.
The evidence shows otherwise.
Natural variability (Mother Nature) is the driver of Earth’s temperatures until proven otherwise, and it hasn’t been proven otherwise, and in fact, the written temperature record refutes the idea that CO2 is the driver of Earth’s temperatures.
The Earth’s real temperature profile shows warming for a few decades (about 2.0C) and then cooling for a few decades (about 2.0C), and this pattern has repeated since the end of the Little Ice Age.
The CO2 profile shows a steady increase in CO2 since World War II (ending in 1945), and shows no correlation with written, historical, unmodified temperatures, which do not show a steady increase in temperatures.
The only thing that shows a steady increase in temperatures are the bogus, bastardized, instrument-era, computer-generated, “hotter and hotter” Hockey Stick Charts. And, as should be obvious to anyone who looks, the Hockey Stick Charts are frauds meant to sell the Human-caused Climate Change narrative.
All the Alarmists have are the bastardized Hockey Stick Charts. The ones generated in a computer by Dishonest Data Mannipulators. That’s all they have.
Here’s a link to a chart showing the *real* temperature profile of the Earth, next to a bastardized Hockey Stick Chart. The way to identify a bastardized Hockey Stick Chart is to look at the Early Twentieth Century. If the Early Twentieth Century does not show to be just as warm as it is today, then you are looking at a bastardized Hockey Stick Chart.
The U.S. regional chart (Hansen 1999) on the left has the same temperature profile as the other regional temperature charts from around the world where they show the Early Twentieth Century is just as warm as today, meaning there is no unprecedented warming today caused by CO2, since CO2 has been rising all during this time, yet the temperatures are no warmer now than back in the recent past.
The Dishonest Data Mannipulators knew they couldn’t sell a CO2 crisis if current temperatures were not unprecedented, so they proceeded to change the temperature profile of the planet to make it appear that the temperatures have been getting “hotter and hotter” for decade after decade and are now at the hottest temperatures in 1,000 years. And it’s all a Big Lie created in Alarmist computers. This Big Lie is ALL they have, and it’s an obvious lie to anyone who looks objectively.
The written, historical record refutes this CO2 crisis garbage.
Currently, the temperatures have cooled 0.7C since the 2016 highpoint, while at the same time CO2 amounts have increased. Another decade of cooling and the Alarmists will be on the run.
I like this UAH graph that shows very well how temps move around a baseline but at least so far, always come back to zero. This graph belies any trending because of the return to zero. What it means is that you can’t make a guess as to where the future is going to go. Think of it showing variances and not absolute values that you can trend. It means the variance, both + and – is limited.
Another decade of cooling and the Alarmists will be on the run.
sadly clouds are most likely a random walk in the relevant ranges
as the old Psych 101 lesson teaches us, random reinforcement is the longest-lasting association our human brains make, and therefore the most difficult to accurately assess, particularly in large groups
hence the prevalence of cultural superstitions that make no sense outside their cultural context
so like all high priests before them, they’ll just shrug, make another model, and change none of their demands on the rest of us
tldr
another failed harvest and those priests of Demeter will be on the run!
“so like all high priests before them, they’ll just shrug, make another model, and change none of their demands on the rest of us”
I don’t know about that. The alarmists get pretty touchy when we mention that temperatures have cooled 0.7C. Another 0.7C or so and I would imagine they would be even more touchy. I know if we have another 0.7C or more of cooling in the offing, I’m going to be having a lot of fun agitating the alarmists. 🙂
Can someone help me understand the difference between Figure 1 and the attached chart image from the AR6 SPM?
Not much difference is there!
Perhaps it is just the scale of the two charts, but the AR6 chart appears to me to show the actual observations to be at or above the model predicted value on the right hand edge of the chart. This is not the case for Figure 1, so what am I not understanding?
Both charts are anomalies, but since they do not have the same reference (zero point), they are hard to compare quantitatively. However, if you simply mean that the CMIP6 models vastly overestimate GHG warming, then I’m with you.
The AR6 model mean basically fits the period pre-1900, runs below the observed temps for much of the 20th Century and then kicks back up in line to agree with the observations in the 21st Century.
Here’s the AR6 data exactly as Tom.1 posted, except I have baselined the data to the mean 1961-1990 (as used to be done) in the right hand panel of my attached image
The AR6 model mean (brown line) doesn’t look so clever now does it?
The black line in both is HadCRUT4 (as used in AR6). In my right panel I have also added UAH. All are baselined the same (1961-1990)
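For anyone wanting to reproduce this kind of re-baselining, a minimal sketch with a placeholder series (not the actual HadCRUT4, UAH, or AR6 data): shift each anomaly series so its 1961-1990 mean is zero, which puts series with different reference periods on a common footing.

```python
# Minimal sketch: re-baseline an anomaly series to a 1961-1990 reference.
# The series here is a synthetic placeholder.
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1900, 2021)
series = 0.01 * (years - 1900) + rng.normal(0, 0.1, years.size)   # placeholder anomalies

def rebaseline(anoms, yrs, start=1961, end=1990):
    """Shift an anomaly series so its mean over start..end is zero."""
    mask = (yrs >= start) & (yrs <= end)
    return anoms - anoms[mask].mean()

rebased = rebaseline(series, years)
print(f"1961-1990 mean after re-baselining: {rebased[(years >= 1961) & (years <= 1990)].mean():.2e}")
```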
ThinkingScientist,
Shouldn’t the curves cross zero between 1961-1990? I see your point that between roughly 1965 and the 1990s the predictions should be OK; it is before 1965 and after 2000 that the match is not good.
I just used the same data for HadCRUT4 as in the AR6 figure (my left panel) and baselined UAH and AR6 model mean to the HadCRUT4 curve over the period 1961-1990 (my right hand panel).
I left HadCRUT4 the same as per the AR6 report figure for consistency. I suppose in the AR6 figure they show the pre-warming period baselined to average zero so you can read off how much warming since then.
I would add it is my view that the “no warming” temps until around 1900 are likely wrong. This is directly contradicted by both the glacial retreat and sea level rise datasets.
In the following graph I have HadCRUT4, AR6 models and UAH as before, but I have added sea level (Jevrejeva 2014) and Glacier (my set of the 18 long records). The sea level and the glacial retreat data are shifted in time (lag from peak cross-correlation over C20th) and converted to temps by OLS (against HadCRUT4, again C20th).
Note how the SL and Glacial data fit very well in the C20th and clearly start their long linear trend (when calibrated to a temp proxy as here) in the early part of the C19th, and certainly imply a warming driver pre-1850.
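A rough sketch of the lag-and-regression calibration described above, using synthetic placeholder series rather than the Jevrejeva sea level or glacier records: find the lag at peak cross-correlation, shift the proxy, then convert it to temperature units by OLS.

```python
# Minimal sketch: align a proxy to temperature by its cross-correlation lag,
# then calibrate it to temperature units with ordinary least squares.
# All series are synthetic placeholders with a built-in 5-year lag.
import numpy as np

rng = np.random.default_rng(5)
n = 120                                                          # years
t = np.arange(n)
temp = 0.008 * t + 0.3 * np.sin(2 * np.pi * t / 30.0)            # placeholder temperature anomalies
proxy = 3.0 * np.roll(temp, 5) + 1.0 + rng.normal(0, 0.1, n)     # lags temp by 5 yr, different units

# 1) lag from peak cross-correlation
lags = range(0, 21)
corrs = [np.corrcoef(temp[20:-20], np.roll(proxy, -k)[20:-20])[0, 1] for k in lags]
lag = list(lags)[int(np.argmax(corrs))]

# 2) OLS calibration of the lag-aligned proxy to temperature
aligned = np.roll(proxy, -lag)
slope, intercept = np.polyfit(aligned[20:-20], temp[20:-20], 1)
print(f"estimated lag: {lag} years, calibration: T ~ {slope:.3f}*proxy + {intercept:.3f}")
```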
Finally for good measure here are 30 year trailing slope graphs for HadCRUT4 and for AR6 model.
The model is clearly under-predicting warming in the first half of the C20th (model peaks around 1932, obs. around 1945).
The model is clearly over-predicting warming as we get to the C21st.
So the AR6 model only gets the warming rates about right 1960 – 2000 or so.
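A 30-year trailing slope is just an ordinary least squares trend fitted to the most recent 30 years at each step; a minimal sketch with a synthetic placeholder series, not the HadCRUT4 or AR6 data:

```python
# Minimal sketch: 30-year trailing trend as a rolling OLS slope.
# The temperature series is a synthetic placeholder (trend + oscillation + noise).
import numpy as np

rng = np.random.default_rng(6)
years = np.arange(1850, 2021)
temps = (0.005 * (years - 1850)
         + 0.2 * np.sin(2 * np.pi * (years - 1850) / 65.0)
         + rng.normal(0, 0.05, years.size))

window = 30
trailing_slope = np.full(years.size, np.nan)
for i in range(window - 1, years.size):
    x = years[i - window + 1 : i + 1]
    y = temps[i - window + 1 : i + 1]
    trailing_slope[i] = np.polyfit(x, y, 1)[0]          # deg C per year

print(f"trailing 30-yr slope in 2020: {trailing_slope[-1] * 10:.3f} deg C/decade")
```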
Whoops! Posted too quick for image to load!
Correct, thanks for the graph.
Fig. 1 is about the ‘hotspot’.
The one you show is about global surface temperature.
Thanks Chris.
Oh, that was the confusing bit, now I get the question.
To my inexpert eye, that chart of models vs observations for global temperatures seems to show that the models actually do a reasonable job (the cynical part of me wonders what shenanigans were employed to achieve that).
Has there been much comment from non-alarmists about the simulated values shown here?
Is the trendline for simulated values still an ensemble mean?
Something I notice right away is that the graph ends at 2020, so doesn’t actually show any predictions about future temperatures. I guess this means that they’re showing us output from a model (or ensemble mean) that has been ‘tuned’ to follow observed values.
Chris,
The observations are the green lines. Why do you think the models, or their mean (the red lines) are doing a good job? The error in Figures 1 and 2 is a degree or more. The projected part of Figure 2 is post 2005 and the projected part of Figure 1 is post 2014.
I was referring to the graph of global temperatures posted by Tom, not fig. 1.
The biggest problem with climate models is the fact they don’t know how to deal with clouds.
They don’t know how to predict future global cloud levels and therefore ignore any energy change this imposes on the oceans and land surfaces. This energy difference is in turn blamed on CO2 as climate change, because the cloud influence is simply ignored.
Internal natural variability via ocean oscillations like ENSO, the AMO, and the PDO contributes to changes in global cloud levels that cause increases or declines and lead to warming or cooling.
Global cloud projects in the past showed a 4% decrease in global cloud levels between the 1980’s and 2000’s. This behaviour in low-level clouds increases shortwave radiation penetration into, especially, the ocean surface, which warms it.
The UK has annual sunshine levels that have increased 9% from the 1980’s until now. These are fairly significant changes that warm the surface, whether sea/ocean or land. The UK is only one region, but many around the world are also seeing similar changes.
Yet the annual rainfall for the UK hasn’t changed to any significant extent in the 150 years since records began. The average over that period is a flat line, with some years being wetter than others, & some years being drier than others. I am also extremely sceptical about claims that this event or that event is the worst in 30/40/50 years (take your pick), but it simply shows that these events have happened before, & are therefore likely to happen again in the future!!! Why does anyone need a degree in climate science, it looks pretty easy to me!!! (sarc).
Good article, Andy May! “In summary, if the IPCC cannot choose one best model to use to forecast future climate, it is an admission that they do not know what drives climate.” Agree.
But I would also say, even if IPCC chose the one “best” large-grid, discrete-layer, step-iterated, parameter-tuned model, it would still have no diagnostic or predictive authority concerning the effect of increases in concentration of non-condensing GHGs. None. The buildup of uncertainty from any and all of both the equation-resolved and the parameterized processes would still render its outputs completely unreliable for the stated purpose of evaluating single-digit increments of static radiative coupling (in W/m^2) between the surface and the atmosphere.
Andy, I always enjoy your articles, and this is no exception. As to why climate modelers average model results, I’ve always assumed that there are two reasons. First, I think these folks are basically weather forecasters with limited experience in the wider world of computer modelling. What they are trying to do is, I think, expand their pretty good short-term weather forecasting technology to a broader scale and vastly greater time scale. In the world of weather forecasting, averaging models — e.g. projected storm tracks — actually does seem to produce better results than arbitrarily selecting a single “best” model. Never mind that current weather forecasting falls apart after about ten days due to accumulating errors and that their seasonal forecasts tend to be awful. That’s not what they want to hear.
Second, I think they sort of vaguely think that averaging models is sort of like Monte Carlo modelling of processes with some random elements. Run the same basic model a lot of times with random elements allowed to vary and average the results. And maybe there is something to be said for that. IF their models were producing results that resembled observations. Which they aren’t.
My feeling. Their models need a lot of improvement before they are fit for any purpose. Continuing to work on them probably has some value. Long term weather/climate models that work would probably be useful. But in their current state, they look to be pretty much useless.
Agreed.
the graph is much too kind to CMIP due to the missing observations at the end
look at the UAH LT anomaly — the 2021/2022 temps are a death knell for every ECS>2 model run 1979 to present
none of them even get close
and the CERES data is pretty clear that clouds dominate the 21st
trillions of dollars wasted on a mistake no one will admit
Yes, 2021 global HadCRUT5 data, in red, already lies on the low end of the model runs.