By Andy May
The new IPCC report, abbreviated “AR6,” is due to come out between April 2021 (the Physical Science Basis) and June of 2022 (the Synthesis Report). I’ve purchased some very strong hip waders to prepare for the events. For those who don’t already know, sturdy hip waders are required when wading into sewage. I’ve also taken a quick look at the CMIP6 model output that has been posted to the KNMI Climate Explorer to date. I thought I’d share some of what I found.

Figure 1. The CMIP6 13-member model ensemble from the KNMI Climate Explorer. The global temperature anomaly from the 1981-2010 average is plotted for all 13 model runs. Notice there are two runs plotted for three of the models.
There are currently two model ensembles on the website: one contains 68 model runs and the other contains 13. Figure 1 shows all 13 runs of the smaller ensemble. Two of the runs are from the Canadian Centre for Climate Modelling and Analysis (CanESM5 P1 and P2), two are from NCAR (CESM2-1 and CESM2-2), and two are from the JAMSTEC model (MIROC). So, Figure 1 represents ten distinct models. The models use historical forcings before 2014 and projected forcings after.
All curves are global average temperature anomalies from the 1981-2010 average temperature. Notice the 19th-century spread of results is over one degree C. The current spread is not much tighter, and the spread at 2100 is over two degrees. All these model runs use the ssp245 emissions scenario, which, as far as I can tell, is the CMIP6 version of RCP 4.5. Thus, it is the middle scenario.
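The baselining described above is simple to reproduce. The sketch below uses synthetic stand-ins for the 13 runs (not real CMIP6 output) to show how anomalies relative to a run's own 1981-2010 mean are computed, and how the year-by-year ensemble spread (max minus min) is measured:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2101)

# Synthetic stand-ins for 13 model runs: a shared warming trend plus
# run-specific offsets and noise (illustrative only, not CMIP6 data).
trend = 0.008 * (years - 1900)
runs = np.array([trend + rng.normal(0, 0.1) + rng.normal(0, 0.15, years.size)
                 for _ in range(13)])

# Anomalies are taken relative to each run's own 1981-2010 mean,
# which is how the KNMI Climate Explorer baselines its output.
base = (years >= 1981) & (years <= 2010)
anoms = runs - runs[:, base].mean(axis=1, keepdims=True)

# The ensemble spread is the max-minus-min across the runs each year.
spread = anoms.max(axis=0) - anoms.min(axis=0)
print(f"mean spread: {spread.mean():.2f} degrees C")
```

Note that because each run is referenced to its own baseline, a large spread in anomaly space reflects disagreement about the *shape* of the warming, not about absolute temperature.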
Two of the runs are pretty wild. The pale blue curve that is very low in 1960 and very high in 2100 is the Met Office Hadley Centre model run, UKESM1.0-LL (2018). The Canadian CanESM5 P1 run follows the same path but is hidden behind UKESM1.0. The other runs are bunched up in a one-degree scrum.
In Figure 2 we show three of the model runs. Both Canadian model runs are shown, in blue and orange, along with one of the NCAR models in gray. The black curve is the 13-run ensemble mean as computed by the KNMI Climate Explorer. Both the ensemble and its mean come from the website; I did not construct or calculate either myself.

Figure 2. Three individual model runs are compared to the 13-member ensemble mean. The vertical axis is the anomaly from the 1981-2010 average in degrees C.
Historical forcings are used prior to 2014 and projected values after. The blue and orange curves are two runs from a single Canadian model. The two runs differ by more than 0.2°C in 2010 and 2011, and in some months by more than 0.5°C. There are multiple periods where the runs are clearly out of phase for several years; examples are 2001-2003 and 2014-2017. The period from 2015 to 2019 is a mess.
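The "out of phase" claim can be made quantitative with a rolling correlation between the two runs after detrending. This sketch uses synthetic monthly series (real CanESM5 runs differ only in their initial conditions; here the anti-phased oscillation is built in for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(240)  # 20 years of monthly anomalies

# Two synthetic "runs" sharing a trend but carrying an internal
# oscillation in opposite phase (illustrative, not CanESM5 output).
trend = 0.0015 * months
run1 = trend + 0.2 * np.sin(2 * np.pi * months / 40) + rng.normal(0, 0.05, 240)
run2 = trend + 0.2 * np.sin(2 * np.pi * months / 40 + np.pi) + rng.normal(0, 0.05, 240)

def rolling_corr(a, b, w=36):
    """Correlation of two series in a sliding w-month window."""
    out = np.full(a.size, np.nan)
    for i in range(a.size - w):
        out[i + w // 2] = np.corrcoef(a[i:i + w], b[i:i + w])[0, 1]
    return out

# Detrend first, then correlate: negative values flag out-of-phase spans.
corr = rolling_corr(run1 - trend, run2 - trend)
valid = ~np.isnan(corr)
frac_out_of_phase = np.mean(corr[valid] < 0)
print(f"fraction of windows out of phase: {frac_out_of_phase:.0%}")
```

A window of 36 months is an arbitrary choice here; any window shorter than the oscillation period would show the same sign disagreement.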
Figure 3 compares the same ensemble mean shown in Figure 2 to three weather reanalysis datasets, also downloaded from the KNMI Climate Explorer. The weather reanalysis datasets are shown in the fainter lines.

Figure 3. Three weather reanalysis data sets are compared to the model ensemble mean from Figure 2.
Weather reanalysis is done after the weather data are recorded, using a weather computer model to interpolate between the observations. A reanalysis incorporates many thousands of observations, so the output is generally quite reliable, at least in my opinion. Notice all three weather reanalysis datasets, NOAA, NCEP and ERA5 (European Centre for Medium-Range Weather Forecasts), are in phase and track each other. Over periods of up to three years, the ensemble model mean is hopelessly out of phase with the reanalyses. This occurs both before 2014, when the models were fed historical forcings, and after.
Conclusions
I’m unimpressed with the CMIP6 models. The total warming since 1900 is less than one degree, but the spread of model results in Figure 1 is never less than one degree; it is often more, especially in the 1960s. The models are obviously not reproducing the natural climate cycles or oscillations, like the AMO, PDO, and ENSO. As can be seen in Figure 2, they are often completely out of phase for years, even when they are just two runs of the same model. I used the Canadian model as an example, but the two NCAR runs (CESM2) are no better. In fact, in the 2010-2011 and 2015-2019 periods they are worse, as you can see in Figure 4.

Figure 4. Comparing the two runs of the NCAR CESM2 model to the ensemble mean and a Canadian model run.
The AR5 report was an expensive redo of AR4. Both abandoned any hope of finding solid evidence of human influence on climate and instead tried to use climate models to show that humans somehow control the climate through our greenhouse gas emissions. They tried to use solid evidence in SAR, the second report, and TAR, the third report, but they were shown to be wrong both times. You can read about their SAR humiliation here and the TAR “hockey stick” humiliation here.
Forty years of work and billions of dollars spent, and still no evidence that humans control the climate. Models are all they have, and these plots do not inspire confidence in the models. As we discussed in our previous post, averaging models does not make the results better. Ensembles are no better than their member model runs and can be worse: if the individual runs are out of phase, as these certainly are, averaging them destroys data and the natural cycles. See the evidence presented by Wyatt and Curry and by Javier, here and here. If they are going to convince this observer, they need to do much better than this. And a word to taxpayers: why are we paying these huge sums of money to simply do the same thing over and over? Bottom line: is AR6 going to be any different from AR4 or AR5? Are any of these documents worth the paper and ink?
The IPCC is fighting on the back foot now, and we can expect a raft of cognitive-dissonance papers out of the coming COP 26 boondoggle.
It is all just politics now. The science has gone with the wind.
The UN and its acolytes have morphed into a very dangerous entity, in that there is no global constitutional mechanism to challenge its activities. It thus pursues agendas above its station, with eyes on getting control of the levers of power.
Yes – the fact that Taiwan is not recognised as a sovereign state by the UN shows its true colours. The would-be autocrats at the UN are beholden to sovereign states for their funding. They have been continually hatching plans to establish a guaranteed source of revenue. Back in the late 90s they proposed an email tax. If that had got past the starter’s gun, they would now have a UN tax on every electronic transaction. It would only need to be a fraction of a cent to provide the independent income they crave.
Imagine the audacity of Trump to cut the funding to the UN-WHO. No place in leadership for a sovereign head who does not bow to the UN; unless they are backed by the CCP, of course.
Climate “ambition” is the golden egg that the UN is trying to hatch right now. They only need to cream a 5% administration fee to get a healthy permanent income stream if they get the “ambition” they seek.
Fig. 1 looks like a hockey stick to me.
All these model runs use the ssp245 emissions scenario, which is the CMIP6 version of RCP 4.5, as far as I can tell. Thus, it is the middle scenario.
I would suggest that since RCP 8.5 has been completely debunked, that leaves RCP 4.5 as the high scenario, not the middle.
There is still RCP6, which is now SSP4-6.0, and they added a new SSP3-7.0. RCP8.5 is falsified, but they still keep it as SSP5-8.5. They won’t give up on it; they need it for bogus press releases.
What’s the point of the models?
The models, IMHO, are just there to scare people. They want us to give up our freedom and our money to a global government. That way global businesses can get richer since their global legal compliance costs and hassles will go down.
Answer: No. GIGO.
The conclusion that the ignorant will draw from figures such as these is that the climate (weather) was predictable before 1990. It will help sell the ‘extreme weather’ meme.
No anomalies for me. My model comes straight out with the forecast Global Average Surface Temperature for the next 80 years. It also excludes the natural cycles. I consider the ocean cycles just noise in the context of climate. The orbital changes matter but there is nothing of consequence there yet.
In the longer term, the tropical Atlantic is the location to monitor. If it fails to achieve the 30C controlled maximum temperature in an annual cycle then that will be a hint for the start of the next glaciation.
I know I will not be around to see it, but I will make certain my grandchildren are aware of my forecast. Maybe I can convince a few of the new generation of true scientists to take a fresh look at reality rather than the highly manipulated view presently on offer.
Andy,
A little exercise since you have the data. Rather than comparing anomalies, compare a few of the extreme examples showing the actual temperature forecast from say 2020 to 2030.
That will give more significance to the actual variation between models.
Then do the same thing over the Nino4 region, where the temperature cannot exceed 30C. That makes it very clear if the models are unphysical.
Reducing models to anomalies avoids any debate on what the models are using as the current surface temperature. Looking at the next decade is a time frame of interest.
Just to sample the nonsense purveyed as science. The attached chart is from BCC-CSM2.
By inspection the average for 2020 is 289K.
Then take a look at AWI-CM-1 showing an average of 288K for 2020.
So a “whopping” 1 degree difference between the two models for last year.
Isn’t that the entire warming that is going to send us all to hell in 12 years?
That is only the output from two of these computational turds. I bet it would get worse if I sampled them all.
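The complaint above about anomalies hiding absolute disagreements can be demonstrated with a toy calculation: two series that disagree by a full kelvin in absolute terms, like the ~289 K and ~288 K averages cited for the two models, can produce identical anomalies (the trend used below is invented purely for illustration):

```python
import numpy as np

years = np.arange(1981, 2021)
warming = 0.01 * (years - 1981)   # identical invented trend in both "models"

model_a = 289.0 + warming         # runs near 289 K, like the first model above
model_b = 288.0 + warming         # runs near 288 K, like the second

# Baseline each series against its own 1981-2010 mean, as anomaly
# plots do, and the 1 K absolute disagreement vanishes entirely.
base = (years >= 1981) & (years <= 2010)
anom_a = model_a - model_a[base].mean()
anom_b = model_b - model_b[base].mean()

print(f"absolute gap: {np.mean(model_a - model_b):.2f} K")
print(f"anomaly gap:  {np.max(np.abs(anom_a - anom_b)):.2e} K")
```

This is why comparing the models in absolute kelvins, as suggested above, exposes a disagreement that anomaly plots cannot show.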
The CMIP6 data on KNMI no longer include prw (column water vapour), which enabled the trend in water vapour to be integrated. They now offer only pr (precipitation); just one part of the story.
The average of FGOALS-g3 is not far from my prediction of 287K (14C).
This model is already 2K cooler than the BCC model – I pick this one.
Those interested in watching a 45-minute January 2021 presentation on climate can go to this site and find the password provided for the video, which likewise demolishes the CMIP model runs, sea-level claims, and forest-fire claims. Good charts and graphs. https://clintel.org/new-presentation-by-john-christy-models-for-ar6-still-fail-to-reproduce-trends-in-tropical-troposphere/?mc_cid=1f85683f49&mc_eid=8edf2b0091
The method of averaging model outputs may look like a plausible approach to people who don’t really understand statistics. It is based on a false analogy with repeated sampling from a distribution with unknown parameters. We know that averaging polls tends to give a more accurate estimate, but that is thanks to the central limit theorem, which depends on a number of assumptions. One of these is that you continue sampling from the same population. This setting has no meaningful counterpart when it comes to modeling. There is no central limit theorem for models. At the very least, modelers should try to prove such a theorem before applying it.
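The difference from poll averaging is easy to show. Averaging out-of-phase runs does not converge on a better estimate of an internal cycle; it cancels the cycle. A minimal sketch with two idealized anti-phased runs:

```python
import numpy as np

t = np.arange(0, 120, 1.0)                # years
cycle = np.sin(2 * np.pi * t / 60)        # a 60-year "AMO-like" oscillation

run1 = 0.5 * cycle                        # same cycle, opposite phase
run2 = -0.5 * cycle
ensemble_mean = (run1 + run2) / 2

# The individual runs swing +/-0.5 degrees; their mean is flat.
print(f"run amplitude:  {np.ptp(run1):.2f}")
print(f"mean amplitude: {np.ptp(ensemble_mean):.2f}")
```

With independent noise instead of a cycle, averaging would genuinely reduce variance (the poll case); with phase-incoherent oscillations it simply attenuates them toward zero, which is the point made above.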
Basic failure to communicate?
The statement “Actual vapor pressure is a measurement of the amount of water vapor in a volume of air” is from University of Illinois meteorology web site at http://ww2010.atmos.uiuc.edu/(Gh)/guides/mtr/cld/dvlp/rh.rxml It is incorrect. What they are referring to is the ‘partial pressure’ of the WV in the total pressure of the atmosphere. Vapor pressure is a property of (in this case) liquid water that depends only on its temperature. The correct description of vapor pressure, as commonly used is given here: https://tinyurl.com/yjqy7r5x
I wonder how widespread this mistake is and whether it is contributing to the failure of the GCMs.
DP,
There are more errors.
For example, in general climate research, you often see pH defined as the negative logarithm of the hydrogen ion concentration.
The proper definition uses “activity”, not “concentration”.
Activity is related to concentration by factors dominated by other species in the solution. For example, the Na+ and Cl- ions in salt water influence the relationship, expressed in part by the Debye-Hückel equations. Sadly, the presence of suspended solids also affects some methods for determining pH, in ways that were unsolved the last time I looked at the topic.
Yet, using a wrong definition, they (some get it right, to be fair) go on to express pH in tiny increments, measuring sea water to 2 or 3 significant figures when you are lucky to do better than 1. Sea water at, say, pH 8.1 is moderately alkaline and not at all acidic. Geoff S, Chemist
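The gap between concentration and activity is not small at seawater ionic strength. As a rough illustration, the Güntelberg form of the extended Debye-Hückel equation can be evaluated (note it is itself only an approximation at I ≈ 0.7 mol/kg, so treat these numbers as indicative, not as a seawater pH calculation):

```python
import math

def activity_coeff(z, ionic_strength, A=0.509):
    """Guentelberg approximation to the Debye-Hueckel activity coefficient
    at 25 C: log10(gamma) = -A * z^2 * sqrt(I) / (1 + sqrt(I))."""
    s = math.sqrt(ionic_strength)
    return 10 ** (-A * z * z * s / (1 + s))

I_seawater = 0.7                      # approximate ionic strength, mol/kg
gamma_H = activity_coeff(1, I_seawater)

# pH defined via activity differs from -log10(concentration) by -log10(gamma).
shift = -math.log10(gamma_H)
print(f"gamma_H ~ {gamma_H:.2f}, pH shift ~ {shift:.2f}")
```

A shift of roughly 0.2 pH units between the two definitions dwarfs the second decimal place, which supports the point about reporting sea water pH to 2 or 3 figures.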
Glad you show a sense of humor (and an accurate one), as this “event” isn’t funny or of any real scientific importance except to the warmarxists. Bottom line: do they narrow the ECS or just maintain the same nonsense range? Rhetorical question, of course.
Andy what do you think about Dr John Christy’s lecture at the GWPF in London in 2019?
He put nearly all their claims about their so-called CAGW to the test and found only the Russian model was close. Your thoughts please?
He also found very little evidence for their HOT SPOT when compared to data or observations since 1979. BTW, what will happen when the AMO moves to its cool phase, perhaps by 2030? Just asking.
Here’s that link to Dr Christy’s lecture, where he puts so many of their claims (or wishful thinking) to the test. Any thoughts?
https://www.thegwpf.com/putting-climate-change-claims-to-the-test/
Neville, thanks for the link! I had not seen this talk before. I don’t like Christy’s energy budget, but I don’t like any of them, so that is no big deal. Definitions of the surface differ; what happens in daytime is different from what happens at night, what happens over the ocean is different from what happens on land, etc. Making a global average energy budget will always be confusing.
I love the way he computes the TCS using the bulk troposphere, which is so much more reliable than using surface temperatures, and the value he gets, 1.1 degrees per 2xCO2, makes so much sense. Will Happer computed the same value, but in a different way. Notice part of that 1.1 could be a natural oscillation.
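A value near 1.1 degrees per doubling is also roughly what a textbook no-feedback calculation gives. The sketch below is that standard arithmetic, not necessarily Christy’s or Happer’s actual method; the forcing coefficient 5.35 W/m² per unit log-CO2 and the Planck-only response of 0.30 K per W/m² are commonly cited round numbers, assumed here:

```python
import math

# Radiative forcing from doubled CO2 using the common approximation
# F = 5.35 * ln(C/C0) W/m^2, so F_2x = 5.35 * ln(2).
F_2x = 5.35 * math.log(2)     # about 3.7 W/m^2

# No-feedback (Planck-only) climate response, K per W/m^2 (assumed value).
lambda_planck = 0.30

dT_2x = F_2x * lambda_planck
print(f"no-feedback warming per 2xCO2: {dT_2x:.1f} K")
```

That the no-feedback number lands near 1.1 K is why estimates in that range are often read as implying small or offsetting net feedbacks.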
He also discusses his more recent paper with Ross McKitrick that I have read and written about: The problem with climate models | Andy May Petrophysicist
It looks like it was a great talk, I wish I had been there.
If they really knew what they were doing, they would by now be running only one model: the one that best fits what has actually happened. Parameterization should not be allowed, because all it does is cover up modeling errors. The average of errant results from errant models is also errant, and is really nonsense.