Guest Post by Willis Eschenbach [See update at end]
Well, for my usual unfathomable reasons and motives, I decided to take a look at individual model runs from the Coupled Model Intercomparison Project Phase 6 (CMIP6).
And again, for no particular reason, I took a look at the three NOAA GFDL GFDL-ESM4 climate model runs prepared for the CMIP6 all-forcing simulation of the recent past. These are all available at the marvelous KNMI website. And here’s what those three runs look like.

Figure 1. Three model runs of the NOAA GFDL GFDL-ESM4 climate model. These are of the SSP245 scenario. The vertical black line shows the 2014 end of the hindcast period and the start of the following forecast period.
Umm … err … seriously? Three runs of the same climate model using the same forcings and starting conditions and inputs are that far apart when trying to hindcast the past? I mean, not even trying to forecast the future, just trying to hindcast the past?
And the climate establishment wants us to believe that these are anything more than a pathetic joke?
But wait, as they say on TV … there’s more!
Here is the actual Berkeley Earth historical record, compared to the three model runs.

Figure 2. Three model runs of the NOAA GFDL GFDL-ESM4 climate model, plus the Berkeley Earth historical temperature Jan 1850 to Dec 2023.
Not much else to say about that … except that anyone depending on these climate models to tell us what’s going to happen in the future should know that they can’t even tell us what happened in the past …
[Update]: As Rud Istvan pointed out in the comments, Berkeley Earth is neither the only nor necessarily the best historical record. I’ve added the HadCRUT5, Japan Meteorological Agency (JMA), and UAH MSU lower troposphere temperature records.
This highlights another problem with the field of climate science. Not only do the models differ as to the historical temperature record … there’s also no agreement between observational reconstructions.

Figure 3. The same 3 model runs as above, plus three observationally-based historical surface temperature reconstructions and the UAH MSU lower troposphere temperature.
If I ran the zoo, the first thing I’d do is get the scientists together and at least get an agreed-upon historical record … [Update End]
My best to all, take a walk, enjoy your lives,
w.
Yeah, you’ve heard it before: When you comment please quote the exact words you are referring to. And if you want to prove me wrong, you might want to read this first.
From the article: “Not much else to say about that … except that anyone depending on these climate models to tell us what’s going to happen in the future should know that they can’t even tell us what happened in the past …”
Yes, and the Berkeley Earth fraud doesn’t tell us what happened in the past, either.
We have CMIP6 models that can’t even hindcast a bogus temperature record like Berkeley Earth.
It was just as warm in the Early Twentieth Century as it is today. Berkeley Earth does not show this. Berkeley Earth is wrong. Comparing computer models to bogus temperature records tells us nothing about the Earth’s past or future temperatures.
“It was just as warm in the Early Twentieth Century as it is today. Berkeley Earth does not show this.”
Do you have a global graph that supports this, Tom? Not your US one that stops at 2000?
Of course Berkeley don’t show the warmer early 20th century.
THEY DON’T WANT TO !!
Haha…. 12 downvotes, but not a single data set that proves I’m wrong… because I’m not. It’s a common falsehood that is spread here (the 30’s were warmer), all on a site that supposedly celebrates and embraces facts and data.
Truth is, it is not even close …now.
https://www.climate.gov/news-features/understanding-climate/climate-change-global-temperature
https://flexbooks.ck12.org/cbook/ck-12-advanced-biology/section/18.50/primary/lesson/global-warming-advanced-bio-adv/
The 1930s were warm in the US
Globally the data are very rough with a lot of infilling prior to WWII. I do not trust NASA-GISS, NOAA or HadCRUT
1800s global numbers are more like wild guesses
I would say post 1979 GAT using UAH is reasonable .. but a change of less than 1 degree C. means almost nothing.
Any warming before 1975 had to be natural.
Any manmade warming was post-1975
“1800s global numbers are more like wild guesses”
All global averages are wild guesses.
hence the endless efforts to smooth the past on every scale
The Arctic was certainly warmer in the 1930s than earlier in that century, as witnessed physically by the fact that the open season at the coal port in Spitsbergen went from 3 months of the year before 1920 to over 7 months by the late 1930s.
Do you have a reference for your port anecdote?
Tom Abbott has given you the proof otherwise multiple times. You continue to ignore it.
Who is going to believe anything from your references? From the second one: “The causes of Ice Ages are not completely understood, but greenhouse gases, especially CO2 levels, often correlate with temperature changes (Figure below). Rapid buildup of greenhouse gases in the Jurassic Period 180 million years ago correlates with a rise in temperature of 5°C (9°F).” The problem is that the temperature rise occurred BEFORE CO2 increased.
“Tom Abbott has given you the proof otherwise multiple times. You continue to ignore it.”
Well I don’t ignore it, but I wonder why he refuses to post the updated version of the data he posts, that clearly shows that as we have headed into the 21st century there has been a severe uptick in temperatures. And those temps are way past the 1930’s. Yet he keeps doing it. And to make things worse, people here uptick his efforts. It’s like they have no filter for truth. I mean that is pretty basic stuff.
There has been an uptick in the anomalies calculated from average mid-range temperatures around the globe.
Mid-range temperatures ARE NOT SUFFICIENT TO DESCRIBE CLIMATE. Two different locations with different climates can have the same mid-range temperatures. Since absolute temperatures are not a good proxy for climate, neither are the anomalies calculated from them.
Proof: What is the uptick you are speaking of? Is it mid-range temps? Minimum temps? Maximum temps?
“Blah blah blah absolute temperature…. blah blah blah anomalies… blah blah blah mid range temps.”
Any way you cut it, if you use one of the recognised international data sets (NOAA’s MLOST, NASA’s GISTEMP, or the UK’s HadCRUT, even Berkeley Earth) you will not be able to show that at any point last century global temperatures were as warm as today. If you think you can, let’s see it. Trying to prove otherwise just makes you look silly (or like a “climate science denier,” which incidentally is the only term that accurately describes someone who denies the reality of the current state of the science…. it has nothing to do with the holocaust).
I didn’t figure you would answer.
Define what you are classifying as warm – Tmax, Tmin, or Tmid-range?
If you can’t define what you are talking about then you are just blowing smoke out your butt!
“Tmax, Tmin, or Tmid-range?”
Huh? It doesn’t matter. You could choose TWizard of OZ…. the early twentieth century was nowhere near the temps we are experiencing today. Now if you think I am wrong, it is on you to offer data that says otherwise.
OF COURSE IT MATTERS!
What early twentieth century temps are nowhere near what we are experiencing today? Tmax temps? Tmin temps? Tmid-range temps?
My guess is that you don’t have the faintest clue.
You’ve been given the data multiple times on global temps in the 20’s and 30’s.
go here: https://www.c3headlines.com/2012/07/extreme-global-warming-noaa-confirms-modern-us-warming-not-as-hot-vs-1930s.html
There are none so blind as those who will not see.
I’m sorry but you really are wasting my time. Find someone else to foist your denier websites on. In the meantime I’ll go with the people who are experts in the field.
https://www.climate.gov/news-features/understanding-climate/climate-change-global-temperature
Bye….
Simon believes the organisation that lowered the average temperature for 1997 by 2.52C to hide the decline.
The original 1997 NOAA report provided the temperature as 62.45F or 16.92C, but now they show it as 57.92F or 14.4C.
Don’t believe it Simon?
See it for yourself.
https://www.ncei.noaa.gov/access/monitoring/monthly-report/global/199713
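For anyone checking the arithmetic in that claim, the quoted Fahrenheit-to-Celsius conversions are at least internally consistent; a quick sketch (just the standard conversion formula, nothing from NOAA’s own code):

```python
# Sanity check of the quoted conversions: C = (F - 32) * 5/9
for f in (62.45, 57.92):
    print(f"{f} F = {(f - 32) * 5 / 9:.2f} C")
# 62.45 F = 16.92 C and 57.92 F = 14.40 C, a difference of 2.52 C as claimed
```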
‘Expert’: Ex = has-been, spert = drip under pressure
Yup, sums you up just right
I never claimed to be an expert. I, like almost every poster here, am a keen amateur. You need to learn to read.
Such arrogance. I read your comment. So toddle off and join your ‘experts’.
You may have read them, but that is only part of the process. There is this thing called “understanding.”
Which you lack. You obviously love to rub shoulders with your ‘Experts’.
It appears that my initial comment was too subtle for your young mind.
Your initial comment was a childish quip implying that because I thought “I was an expert” I was a drip. It missed the point I was making, which is that I trust people who study in this field over climate denier websites. And so would you if you knew how to filter BS.
You’re not very astute; I wasn’t saying you were an expert, but at least you realised your standing. The one who is childish is you, for your original comment implying that you consider this site as one for deniers, when in fact there are many clever people here who obviously understand more than you. So don’t insult them.
There are clever people here, that is true. But (often) their skill is in twisting the facts, offering half truths cloaked as reality. They are credible to the gullible and I think they just found a new disciple.
You are one arrogant and incredibly stupid and naive young boy. It’s time for you to go and sulk with the rest of your kind, because this site is too mature for the likes of you.
Simon, anyone calling someone a “denier” is a damn coward who is trying to discredit an opponent by attacking them WITHOUT dealing with their scientific claims.
The only valuable use of the term is to reliably identify people like you whom anyone can ignore without remorse or error.
Next, you seem to think that science is about “trust” … miss the point much?
Finally, you totally misunderstand what WUWT is doing. Here’s what is going on that you have missed.
https://wattsupwiththat.com/2020/12/30/a-new-years-look-at-wuwt/
w.
“Simon, anyone calling someone a “denier” is a damn coward who is trying to discredit an opponent by attacking them WITHOUT dealing with their scientific claims.”
So Willis, I wonder if you use the same set of standards for anyone who uses the term “alarmist?” If the answer to that is “yes,” then I think you have a valid point.
And to be honest, I don’t know a more accurate term to describe someone who repeatedly says “there is zero evidence that CO2 is causing warming.” (something that happens here on a daily basis)
That’s not entirely true. Before the tricky accounting of temperature readings took hold, the temperature data showed temps rising from the 1800s into the early part of the 1900s, then taking a bit of a dip in the period before the 1910s (which had me wondering if it had something to do with WW1), then a fairly steady rise during the 20s and 30s (the famous Dust Bowl era) until about 1941, when a pronounced decline in temperatures took hold, reaching the basement in 1975-ish. Then we had the quasi-steady rise until about 1997 and the era of the El Nino step-function temperature increases.
So if you don’t care about a degree here or there – which is my thinking 🤔 most days – then yes, it’s basically the same.
But having seen 1st hand the 2 feet of snow in the 70s turn into the occasional 20cm (and also having seen the advent of the metric system here in Canada), I am forced to admit that a fraction of a degree might matter.
And back to history – if “early 20th century” meant something like 1930s (Dust Bowl) or 1908 (RCMP boat sails through the Northwest Passage with little ice to be seen) – then no, it’s not like now, but in fact warmer.
Most other times, it was colder than now, but that’s ok because it was abnormally cold back then, and our current warm-ish period is just a respite from the return of the glaciation.
I hope and pray for that extra 3 to 5 °C promised by governments, but I won’t hold my breath waiting for it, because governments lie.
They don’t necessarily lie – they’re just very good at being wrong!
“It was just as warm in the Early Twentieth Century as it is today.”
If you are talking about GAT, that claim is total BS.
GAT is meaningless.
There is no global temperature. All of these are wrong, not even useful.
Presumably there is a Monte Carlo process running in there somewhere? There would have to be some insertion of a random element to do this, surely?
In the chaotic output of a nonlinear dynamic system, things can look random despite being fully deterministic. Weather—and by extension climate—is by definition a nonlinear dynamic system, therefore mathematically chaotic. Nonlinear just means there are feedbacks. And dynamic just means the feedbacks do not act instantaneously.
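A toy illustration of that determinism-without-predictability (my own sketch in plain Python; it assumes nothing about any actual GCM): the classic Lorenz-63 system, integrated twice from starting points that differ by one part in a billion.

```python
# Lorenz-63: a fully deterministic nonlinear system whose trajectories
# diverge from nearly identical starting points (sensitive dependence).
def lorenz_step(x, y, z, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    # Crude forward-Euler step; good enough to show the behaviour.
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

run1 = (1.0, 1.0, 1.0)
run2 = (1.0, 1.0, 1.0 + 1e-9)   # initial state differs by one part in a billion
for i in range(1, 3001):        # 30 model-time units at dt = 0.01
    run1, run2 = lorenz_step(*run1), lorenz_step(*run2)
    if i % 1000 == 0:
        print(f"t={i * 0.01:4.0f}  x1={run1[0]:8.3f}  x2={run2[0]:8.3f}")
# By t ~ 30 the two x-values typically bear no resemblance to each other,
# with no random number generator anywhere in sight.
```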
I recall another post by the inimitable Willis, a year or two back, where he had looked at the code (Fortran IIRC) from one of the models. He mentioned multiple random number generators throughout the program. So the models aren’t fully deterministic, just “deterministic with a dash of randomness”. There was also mention of multiple “guardrails” to stop the output of each time-step drifting off into absurdity.
The use of “guardrails” basically tells us that the modellers don’t trust their models to actually simulate any kind of reality; they have to be constrained to approximate the results that the modellers have determined in advance. Which – of course – is exactly what climate science is all about.
And let’s not forget the parameters that they use to tweak the hindcasts so that they don’t look outrageous (didn’t do a great job in the above examples though, did they?).
And my favourite beef about all the nonsense: temperature is a physical quantity that can be measured, and it’s theoretically possible to estimate a global average temperature at any instant (although, as far as I can tell, global average temperature has only one use – to track climatic changes over time). But “temperature anomaly” is not a physical quantity, according to my 1960s-vintage high school physics (and, as usual, I welcome correction if I’m wrong). Models must (surely) use actual temperatures in Kelvin in their internal workings, so presenting model results as anomalies must serve a purpose – I presume that forecasting a global average temperature of (say) 18.2C in 2100 wouldn’t be alarming enough (“I have my thermostat set higher than that” says Joe Public).
The models don’t like showing actual temperatures because many are off by 5°C hot or cold. They hide this by only revealing the anomaly from a predetermined value, basically an offset from some point in the past that the model also could not reproduce. It is hard to trust a model that is consistently hot or cold. How can they claim to have the physics correct if they can’t reproduce the actual temperature in the past?
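A minimal sketch of how the anomaly presentation hides a constant bias (made-up numbers, mine, not any model’s output): two series that disagree by a constant 5 °C in absolute temperature yield identical anomaly curves once each is referenced to its own baseline.

```python
# Anomalies subtract each series' own baseline mean, so a constant 5 C
# bias between two "model" series vanishes from the anomaly curves.
run_a = [14.0, 14.1, 14.3, 14.2, 14.6]     # absolute temps in C (made up)
run_b = [t + 5.0 for t in run_a]           # same shape, but 5 C too hot

def anomalies(series, base_n=3):
    base = sum(series[:base_n]) / base_n   # baseline = mean of first 3 values
    return [round(t - base, 2) for t in series]

print(anomalies(run_a))   # [-0.13, -0.03, 0.17, 0.07, 0.47]
print(anomalies(run_b))   # identical list: the 5 C offset is invisible
```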
And one of the problems with random number generators is they are not truly random. They should really be called Pseudo-random.
Plus, as I showed here before, they all use a Gaussian distribution as the default for their numbers. Most allow other distributions, exponential or binomial, but those don’t “guarantee” that the numbers fit nicely into a distribution with small uncertainty of the mean.
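The “pseudo” part is easy to demonstrate (generic Python below, not any particular GCM’s generator): seed the generator and the “random” Gaussian draws repeat exactly, run after run.

```python
import random

# A seeded pseudo-random generator is fully deterministic:
# same seed, same "random" Gaussian sequence, every time.
for attempt in range(2):
    rng = random.Random(42)   # fixed seed
    print(attempt, [round(rng.gauss(0.0, 1.0), 4) for _ in range(3)])
# Both lines print the identical three draws.
```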
“So the models aren’t fully deterministic, just “deterministic with a dash of randomness”. “
I don’t believe even this is true, at least as far as randomness is concerned. As Pat Frank has shown, the output of the climate models is nothing more than a simple linear equation. For all of their complexity and “fudge factors” they still output nothing more than a simple linear equation. It’s why each new set of models gets further and further from reality with its outputs; the “global” climate is not a linear equation.
You are correct. That is why you never see an uncertainty number with anomalies. It would destroy the statistical significance of the milli-Kelvin they arrive at.
It is also why they can’t provide an actual global temperature each time they calculate an anomaly. As you say, 13, 14, or 15 degrees Celsius would alarm no one.
Modelers claim that taking the ensemble mean smooths out offsetting errors.
I would observe that using the ensemble mean of this single model’s 3 runs would NOT improve an obviously bad situation. It would only hide it. Well done, WE.
The projection uncertainty bounds from the suppressed model errors would extend off the page. None of those temperature projections have any physical meaning.
And the B.E. air temperature anomaly record is not better than ±1.5 C.
It’s false precision all the way down.
They would look even more absurd if plotted as real temperatures instead of delta-Ts.
A decade or two ago here at WUWT, there was an article that discussed the problems with combining models that all share the same underlying model dynamics. Their estimates and error bands are not independent of each other, so they are not subject to simple combinations like averaging; rather, the error bands are additive, widening to the point that the combined models’ error bands are so large that the ensemble models or means lack any predictive value.
I didn’t work through the statistics of the arguments in detail, and I only saw this type of argument once, but the article was reasonable enough to convince me of its validity. I think the validity of obtaining ‘ensemble means’ to smooth out the results across multiple models is worth addressing whenever we discuss combining models.
Uncertainties in the initial conditions as well as uncertainties in the modeling itself grow with each iteration of the model. An uncertainty of u0 in the first iteration becomes u0 + u1 for the second iteration. Thus the uncertainties grow.
The modelers ignore this by using the typical climate science meme of “all uncertainty is random, Gaussian, and cancels”. I’ve been told more than once that in the “long run” all the uncertainties in the models cancel out.
As far as the ensemble is concerned the old adage of “two wrongs don’t make a right” applies. Multiple wrong models can’t give you a “right” answer, only an average of the wrong answers.
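The numerical gap between the two assumptions is worth seeing (toy numbers of my own, not a claim about any specific model): straight accumulation of a per-step uncertainty u grows as N*u, while treating every step as independent and Gaussian shrinks the growth to sqrt(N)*u.

```python
import math

u = 0.1   # assumed per-step uncertainty (toy value)
for n in (10, 100, 1000):
    accumulated = n * u            # uncertainties simply add, step after step
    cancelling = math.sqrt(n) * u  # root-sum-square: the "independent, Gaussian" assumption
    print(f"steps={n:5d}  accumulated={accumulated:6.1f}  quasi-cancelling={cancelling:5.2f}")
# The gap between the two widens with every iteration, which is the disputed point.
```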
Hindcast: all over the place.
Forecast: linear way up.
And to be clearer: the hindcast from the present back 10-15 years is model matching, and the fact that the models start diverging radically beyond that shows the model is broken.
The written CMIP protocol calls for just a 30 year hindcast after best tuning of model parameters. In WE’s charts, that’s back to 1984. Even in that ‘brief’ specified interval, the match isn’t very good between 3 runs of one model.
Models are like serious viruses: too many fields have adopted them without understanding them. I learned about them in the early 1990s when “blackened redfish” from a Louisiana chef put pressure on a large fish population. Models ranged from predicting extinction to recommending a bounty. Now all sorts of fish dishes are blackened, the good carbon. Some models left out natural fluctuations that were significant.
Ecosystem marine ecology models, many on oysters, are proving more complex than hoped, but the hope persists. Any bets?
Rose, K. A. 2012. End-to-end models for marine ecosystems: Are we on the precipice of a significant advance or just putting lipstick on a pig? Scientia Marina 76(1):195-201.
doi: 10.3989/scimar.03574.20B
Thanks again Willis.
I’ll add your sage advice to my standard responses to the climate catastrophe worriers in my circles –
“you’re getting yourself all bent out of shape over nonsense numbers that have no application in the real world in which we live.”
Past performance is not indicative of future performance. Let’s gamble.
Sure it is. That’s why we have increasing penalties for repeating criminal activities, otherwise known as “three strikes and you’re out”.
As shown in Fig 2, the models agree with each other better when projecting the future than when matching actual historical data. What’s the problem?
BTW Willis, thanks for your always insightful posts!
Modelers know that global warming from CO2 is about ever-increasing future warming. So they make sure their models show that. Wouldn’t do otherwise.
The CO2 connection is taken as a given. It underpins their models. Take CO2 out of the equation and what do you get? Putin… I mean Russian models. Quite accurate.
If you can’t accurately show the past, which involves known values, how can you say that the future prognostications are accurate representations of what will actually happen?
You can’t say what is going to happen. The future is a cloudy crystal ball. CAGW advocates are just like the hucksters at the carnival who want to tell your future by looking in a cloudy crystal ball.
The models will eventually show the earth turning into a burning ball from their linear projection. And that is all the models are when netted out, linear projections growing forever.
The minor little problem is that Berkeley Earth is extensively stepped on, in the sense of “adjusted”, to the point of unreality.
TH, Berkeley Earth (BEST) is even worse than you say. I gave an example in footnote 25 to essay ‘When Data Isn’t’ in ebook Blowing Smoke. BEST turned a trendless raw record into a warming trend at BEST station 166900. It did this when its ‘regional expectations’ QC algorithm automatically excluded 26 extreme cold months. 166900 is Amundsen Scott at the South Pole, arguably the best maintained and certainly the most expensive weather station on Earth. The nearest ‘regional expectation’ station is 1700 km away and 2700 meters lower, at McMurdo on the Antarctic sea coast. Automated absurdity.
Yes, BEST is misnamed, and evidently was so years ago.
Thanks, Rud. I’ve added some other temperature records (HadCRUT5, JMA, and UAH MSU) to an update at the end of the head post.
w.
Yes I remember that discussion and Mosher’s response.
Here it is in full.
“However, after adjustments done by BEST Amundsen shows a rising trend of 0.1C/decade.
Amundsen is a smoking gun as far as I’m concerned. Follow the satellite data and eschew the non-satellite instrument record before 1979.”
BEST does no ADJUSTMENT to the data.
All the data is used to create an ESTIMATE, a PREDICTION
“At the end of the analysis process,
% the “adjusted” data is created as an estimate of what the weather at
% this location might have looked like after removing apparent biases.
% This “adjusted” data will generally to be free from quality control
% issues and be regionally homogeneous. Some users may find this
% “adjusted” data that attempts to remove apparent biases more
% suitable for their needs, while other users may prefer to work
% with raw values.”
With Amundsen if your interest is looking at the exact conditions recorded, USE THE RAW DATA.
If your interest is creating the best PREDICTION for that site given ALL the data and the given model of climate, then use “adjusted” data.
See the scare quotes?
The approach is fundamentally different from adjusting series and then calculating an average of adjusted series.
Instead we use all raw data. And then we build a model to predict the temperature.
Mr. E: Thanks for this article, I guess I can stop asking if they have a model that shows, in hindcasts of 5k years or so, climate conditions consistent with trees growing up north (you know, where stumps emerge from shrinking ice) 4,500 years ago. I might be impressed.
“Umm … err … seriously? Three runs of the same climate model using the same forcings and starting conditions and inputs are that far apart when trying to hindcast the past? I mean, not even trying to forecast the future, just trying to hindcast the past?”
Willis
You might want to read the wikipedia article on Edward Lorenz https://en.wikipedia.org/wiki/Edward_Norton_Lorenz (skip down to the section titled “chaos rgeory”) because it was just this sort of thing — the inability to get consistent results from a computer model — that prompted the development of chaos theory. I suppose it’s possible that the inconsistent hindcast results are the result of some sort of problem in setting up the initial conditions for each run.
Of course it’s also possible that the models are unmitigated junk.
I sort of favor the latter theory myself.
rgoery == theory when trying to type on a Chromebook and proofreading without the reading glasses the dog disassembled.
DK, when you get your reading glasses reassembled you might enjoy reading James Gleick’s delightful book CHAOS: making a new science. It isn’t deeply mathematical, rather it tells the stories of the mathematical pioneers that first developed the field’s basics—things like sensitive dependence, bifurcation, strange attractors, Mandelbrot set. Of course starting with Lorenz.
I have (or had) that book. Second the recommendation. I lent it to a friend years ago and never got it back.
Friend should be in quotes, maybe.
I’m not a chaos expert (though my wife might argue differently), but your comment made me think: if the chaotic nature of the hindcasts resulted in such a wide variation of results when they’re modeling on “known” data, how do they get such consistent, tight, non-chaotic results in the forecasts?
They actually don’t. WE’s chart hindcasts cover a much longer period than his chart forecasts, because that was the point of his post. ‘If you can’t hindcast, why do you think you can forecast?’
The forecasts also diverge as you go out longer in forecast time. Same model, three runs, all junk both ways.
Rud, Good point, thanks for the clarification. I didn’t think about the difference in time periods between the hindcast and forecast.
It is a conundrum. If the models increase in variance when hindcasting due to chaos (a modelers’ excuse I’ve seen on Twitter), then the model runs should do the same going forward. Oh no, they don’t, and that means they are correct! You can’t have chaos going one way and not the other.
Why is anyone surprised about this? From Donohoe et al (PNAS, 2014):
‘The greenhouse effect is well-established. Increased concentrations of greenhouse gases, such as CO2, reduce the amount of outgoing long-wave radiation (OLR) to space; thus, energy accumulates in the climate system, and the planet warms. However, climate models forced with CO2 reveal that global energy accumulation is, instead, primarily caused by an increase in absorbed solar radiation (ASR). This study resolves this apparent paradox. The solution is in the climate feedbacks that increase ASR with warming—the moistening of the atmosphere and the reduction of snow and sea ice cover. Observations and model simulations suggest that even though global warming is set into motion by greenhouse gases that reduce OLR, it is ultimately sustained by the climate feedbacks that enhance ASR.’
‘Trenberth and Fasullo considered global energy accumulation within the ensemble of coupled general circulation models (GCMs) participating in phase 3 of the Coupled Model Intercomparison (CMIP3). They report that, under the Special Report on Emission Scenarios A1B emissions scenario, wherein increasing radiative forcing is driven principally by increasing GHG concentrations, OLR changes little over the 21st century and global energy accumulation is caused nearly entirely by enhanced ASR – seemingly at odds with the canonical view of global warming by reduced LW emission to space.’
All that stuff they told you about CO2 ‘blocking’ OLR to space? Apparently, in model-land it doesn’t really work that way, which leaves us with the unfalsifiable theory that our emissions of CO2 lead to CAGW regardless of what satellite or any other observations may tell us.
The canonical theory is not reduced OLR emission to space in any case. It’s OLR emission to space at a slightly higher altitude.
The constant lapse rate means that emission at higher altitude makes the surface warmer.
Covered in ebook Blowing Smoke. Higher in the troposphere means colder, so less energy is removed per IR emission, since longer wavelengths carry less energy per photon. It is also why CO2 will never saturate. Its effect just declines logarithmically, as first explained in 1938 by Guy Callendar.
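For the record, the usual simplified expression for that logarithmic decline is the Myhre et al. (1998) fit, not anything unique to Callendar’s 1938 paper; a quick sketch of what it implies:

```python
import math

# Simplified CO2 forcing fit (Myhre et al. 1998): dF = 5.35 * ln(C / C0) W/m^2
def co2_forcing(c_ppm, c0_ppm=280.0):
    return 5.35 * math.log(c_ppm / c0_ppm)

for c in (280, 420, 560, 1120):
    print(f"{c:5d} ppm -> {co2_forcing(c):5.2f} W/m^2")
# One doubling (560 ppm) gives ~3.71 W/m^2 and two doublings (1120 ppm) ~7.42:
# equal increments per doubling, i.e. logarithmic decline, never saturating.
```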
Pat,
First of all, your work on the propagation of error through climate models (or lack thereof) is sufficient reason for any rational person to reject any claims re. the efficacy of GCMs.
With all due respect, however, I would defer to the modelers themselves as to what they consider ‘canonical theory’. What you describe seems more in line with what some, including Wijngaarden & Happer (W&H, 2023), describe as a ‘gray atmosphere’. Very useful for scaring the kiddies taking ‘Introduction to Climate Change’ at PoMo University, but hopefully, even the modelers don’t consider this to be a realistic mechanism.
As for my earlier comment, I was just trying to convey, as evidenced by the quoted article, that at least some CAGW proponents seem to be conveniently flexible as regards what observations constitute proof of an enhanced greenhouse effect. Specifically, several recent articles at WUWT have noted that the currently observed warming has been accompanied by higher OLR AND lower cloud cover, which itself would imply higher ASR. While some commenters, myself included, take this to be a negation of a CO2-forced greenhouse effect, others, including those who originally cited the article, sincerely believe that such observations, which are presumably consistent with the output of CO2-forced GCMs, prove the efficacy of the models.
To me, it just looks way too much like special pleading.
Yes, that is the view. However, it violates Kirchhoff’s Law of Radiation. With more CO2, more energy absorbed higher in the atmosphere seems reasonable, but what about emissions? They must also increase. That means there are more photons heading to space, and even if they are individually weaker, the total energy remains the same.
Makes sense to me. More CO2 means more CO2 everywhere, including at altitude. More CO2, more radiation to space, albeit lower energy per unit of radiation. The big question is whether it is being measured properly or not.
And who says lapse rate must stay constant?
That’s what you get when the prosecution is trying to get a guilty verdict. Deflect and prove the guilt some other way. Rinse and repeat.
Figure 2 makes me wonder about the accuracy of the Berkeley “historical” record. Is it based on thermometer readings or proxies?
According to the graph, the Berkeley “anomaly” went up from about -0.5 C in 1965 to about +0.4 C at the dividing line between “hindcast” and “forecast”, which is probably circa 2020. This would be an increase of 0.9 C in 55 years (0.164 C per decade), which is much higher than we see with UAH or GISS temperature records.
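That rate is just the eyeballed anomaly change divided by the elapsed decades; a quick check using the values above (including the commenter’s circa-2020 guess for the dividing line):

```python
# Using the eyeballed values above: -0.5 C in 1965 to +0.4 C circa 2020
rise_c = 0.4 - (-0.5)              # 0.9 C
decades = (2020 - 1965) / 10       # 5.5 decades
print(f"{rise_c / decades:.3f} C/decade")   # ~0.164 C/decade, as stated
```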
The Berkeley “historical” record shows temperatures rising nearly linearly after 1965. But I’m old enough to remember the magazine articles from 1975 predicting an ice age, meaning that temperatures were likely decreasing between 1965 and 1975.
BEST is based on thermometer records.
But they did three dubious things.
BEST is not best.
The one detailed look I took at Berkeley Earth records showed some temperatures in the late 1700s and the 1800s with a measurement uncertainty in the tenths digit, and a few in the hundredths digit. When I asked about this obvious problem, the answer I got was that no one uses those uncertainties; they use corrected ones. Corrected ones?
I took this to mean they used the typical climate science meme of “all measurement uncertainty is random, Gaussian, and cancels”. So the corrected uncertainty intervals all became 0 (zero).
Now I know that CMIP stands for Coupled Model Intercomparison Project.
I’ve always just read it as chimp.
Started with 3 chimps, then 5 .. now 6..
The models are basically just chimp scat. !!
or could that be “chump”?
The fifth one looks like “CHIMPS”.
Anyone who believes CMIP
is a chump
I wonder… how much of the inability to hindcast is related to the use of ‘corrected’ data to build a forecast but the use of ‘actual’ temperature data to check the hindcast?
Willios,
My suspicions about CMIP started 15 years ago when I found this rather close agreement.
Geoff S
……
“We examine tropospheric temperature trends of 67 runs from 22 ‘Climate of the 20th Century’ model simulations and try to reconcile them with the best available updated observations (in the tropics during the satellite era). Model results and observed temperature trends are in disagreement in most of the tropical troposphere, being separated by more than twice the uncertainty of the model mean. In layers near 5 km, the modelled trend is 100 to 300% higher than observed, and, above 8 km, modelled and observed trends have opposite signs.”
Here is the modelled trend of temperatures at various altitudes above the Earth’s surface, expressed in milli°C per decade.

Pressure (hPa)    Model #15 (milli°C/decade)    Average of all 67 (milli°C/decade)
1000              163                           156
925               213                           198
850               174                           166
700               181                           177
600               199                           191
500               204                           203
400               226                           227
300               271                           272
250               307                           314
200               299                           320
Model # 15 is Australia’s CSIRO MK3.0.
Attention is drawn to the exceptionally good model results at mid-altitude, 300 to 500 hPa. They agree with the average of the other runs/models to one milli°C per decade. Accurate estimates to one thousandth of a degree per decade?
I believe most modelers are mathematicians and not trained in the physical sciences. To take real physical measurements made to the nearest degree and torture them into providing milli-anything is not reasonable. Why would one need a micrometer if that were the case?
Willios, lol.
This article needs a theme song: “Once in a Lifetime” by Talking Heads. “Same as it ever was.”
I’m thinking more like Twist and Shout.
The climate confuser games are perfectly accurate for their intended purpose: scaring people about the future climate. Except for the Russian INM model, which is not scary enough, with an ECS outside the IPCC preferred range.
The confuser games are only used for predictions of the future. Everyone assumes they can “predict” the past.
Of course we all know the global average temperature has only changed 0.2% in Kelvin degrees over a century. No one could possibly notice a 0.2% change, even in one day. Anyone who thinks their local climate has warmed since 1975 must be delusional.
There is no global average temperature…
My climate model, developed in 1997, and used for one simulation every year, has made the same forecast every year:
The climate will get warmer,
unless it gets colder.
In 1997, I renamed the other models
“Climate Confuser Games”. They are used to confuse people by promoting climate scaremongering. Accurate predictions were never a goal.
“Three runs of the same climate model using the same forcings and starting conditions and inputs are that far apart when trying to hindcast the past? I mean, not even trying to forecast the future, just trying to hindcast the past?”
Willis, can you explain in simple terms for a layman like me how model runs using the same forcings, starting conditions and inputs can yield anything other than identical results? I thought the computers simply ran mathematical calculations and I was inferring from that that the same starting conditions would produce the same results, just like when I put 2 plus 2 in my calculator I always get four. I appreciate I am very likely embarrassing myself with this question, but I ask anyway.
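A partial answer (my reading, offered with the caveat that practice varies between modelling centres): ensemble members are generally not bit-for-bit identical runs; each one typically starts from a slightly perturbed initial state or a different initialisation date, and in a chaotic system that tiny difference is enough. A toy demonstration with the logistic map:

```python
# Two "runs" of the same deterministic rule, differing only by 1e-12 in
# the starting value; in the chaotic regime (r = 3.9) they decouple.
r = 3.9
x1, x2 = 0.5, 0.5 + 1e-12
for step in range(1, 61):
    x1, x2 = r * x1 * (1 - x1), r * x2 * (1 - x2)
    if step % 20 == 0:
        print(f"step {step:2d}: run1={x1:.6f}  run2={x2:.6f}")
# At step 20 the runs still agree to six decimals; by step ~60 they are
# unrelated. 2 + 2 still makes 4, but here tiny input differences are
# amplified exponentially at every step.
```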
Why would we need to “hindcast” a historical record? It’s “historical” — meaning that there exists a written record that can, one would think, simply be looked at…..
Training wheels on the tricycle…
Kip, better late than never. The reason that the 30 year hindcast is the second required model run submission is to ensure that the models’ tuned parameters reasonably accurately reproduce the past. The presumption is that if they do, then the forecast is reasonably accurate also.
The problem is, CMIP allows the submission in the form of reproducing the 30 year anomaly. In reality, the actual model hindcast C temp varies by about +/-5C worst case between models. Nothing like the real past temperature. I gave the example for CMIP5 borrowed from a Judith Curry presentation.
Rud ==> Yep, that’s what I mean….the hindcast can’t cast correctly, differs wildly from the historical record, they settle for the “same shape” not “numerically accurate”.
I am not a fan.
the argument has been for 30 years that if the models can hindcast, the forecast must be accurate too
never mind that for virtually any series of values imaginable there are an infinite number of wrong models that will correctly hindcast the series
The tricky bit is the feedback parameter, which is described as the radiative response per unit change of GMST.
It’s not totally well defined, as the Planck response (for instance) includes change in emission from the entire depth of atmosphere (not just the surface). Instead of the expected ~ -4 W/m2 per K GMST according to SB, it’s down at -3.3 W/m2. That’s where a large part of the net positive feedback is coming from, i.e. around 0.7 W/m2 per K GMST of missing Planck response because it’s chilly aloft. That is rarely checked for logical consistency. Namely, the emission properties of the surface, surface air adjacent, suspended condensate, free tropospheric air, and stratospheric air are much different from a simple 4th power of T Planck perspective.
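To put rough numbers on that (a back-of-envelope sketch using nothing but the Stefan-Boltzmann law, not any GCM diagnostic): the derivative of sigma*T^4 is 4*sigma*T^3, and its value depends strongly on which emission temperature you evaluate it at, which is exactly the “chilly aloft” point.

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4

# Planck response magnitude: d(sigma * T^4)/dT = 4 * sigma * T^3
for label, t in (("surface, ~288 K", 288.0),
                 ("effective emission level, ~255 K", 255.0),
                 ("upper troposphere, ~225 K", 225.0)):
    print(f"{label:33s} 4*sigma*T^3 = {4 * SIGMA * t ** 3:.2f} W/m^2 per K")
# ~5.42 at the surface, ~3.76 at 255 K, ~2.58 at 225 K: where the emission
# originates makes a large difference to the assumed Planck response.
```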
Well, like many official state bodies, the KNMI is infiltrated by political appointments in the form of directors who push the climate alarm narrative. These directors often do not have the necessary scientific skills but are very good at, em, ‘directing’ people and/or the narrative in a particular, em, direction. Same with the climate ‘reporters’ of various publications. The not so funny thing is that the same people were on the ‘anti-vaxxer’ hunt not so long ago. Expect them to pile on the garbage as soon as the weather improves. Here in Ireland we had a glorious warm weekend and, right enough, articles about heat stress popped up.
The two largest variable energy inputs to the Earth’s surface are the Sun and clouds and the models don’t include either.
Their models are like models of horse races that don’t include the horses.
As I’ve written before in WUWT comments, I’m dyslexic, and whenever I see the abbreviation “CMIP”, the word I get out of it is “chimp.” It takes me thirty seconds or so of concentration to unsee it. But I think it’s appropriate, because it reminds me of the classic trope of “given an infinite number of monkeys (read: chimps) equipped with an infinite number of typewriters, eventually they would write all of the great works of literature.” Averaging the output of any number of different climate model runs seems like a similar endeavor – seeking the right answer from the average of a huge number of wrong answers. The funniest part of CMIP is the inherent – but unstated – admission of its authors that averaging is valid only if the numbers you are averaging are random – and therefore, so are the results of climate model runs.
I’m not dyslexic, and I also see chimp.