WUWT regular Alan Tomalty writes:
This new study seems to be one of the first to acknowledge internal climate variability in the 20th century, something the IPCC is always loath to admit. However, since the study's conclusions are built on computer climate model simulations, I contend this is still junk science, even though it bolsters the skeptic position somewhat. I draw your attention to the following quote:
“We have for example noted that the temporal variance of the majority of ensemble members is larger than what can be inferred from available observations. The result of the study must be assessed with that in mind. We have no simple explanation to this but it might be that the model projects the variance on larger scales than nature as a consequence of limited resolution. We would consequently encourage other modeling groups to undertake similar studies which will hopefully make use of the latest high resolution models coupled models (Haarsma et al. 2016). Intuitively we might have expected the opposite and that reality might expose a higher level of variance than the climate model.”
Why the modellers would expect the climate model to exhibit lower variability than the real-world situation, I have no idea. I guess this is another case of the climate scientist falling in love with his model.
So it seems that the climate scientists are working their way back in time so that they will have something to do when their CO2-alarmist study of the future climate falls apart. However, I don't think that modelling the past climate will capture the public's imagination quite the way that modelling the doomsday future climate has.
It seems that we will indeed be inundated with climate studies like the one above (I predict one per week), all stemming from the following:
https://climatedataguide.ucar.edu/climate-data/noaa-20th-century-reanalysis-version-2-and-2c
I will quote from the NCAR/UCAR website even though it is NOAA’s project.
“The Twentieth Century Reanalysis (20CR) provides a comprehensive global atmospheric circulation data set spanning 1850-2014. Its chief motivation is to provide an observational validation data set, with quantified uncertainties, for assessing climate model simulations of the 20th century, with emphasis on the statistics of daily weather. The analyses are generated by assimilating only surface pressures and using monthly SST and sea ice distributions as boundary conditions within a ‘deterministic’ Ensemble Kalman Filter (EKF). A unique feature of the 20CR is that estimates of uncertainty are derived using a 56 member ensemble. Overall, the quality is approximately that of current three-day NWP forecasts.”
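For readers who have never met an Ensemble Kalman Filter, here is a minimal sketch, in Python, of the basic update step that quote is describing: a 56-member ensemble of model states is nudged toward a handful of surface-pressure observations, and the spread of the updated ensemble supplies the "quantified uncertainty". To be clear, this is the textbook stochastic (perturbed-observation) variant with invented numbers, not the 20CR code, which uses a deterministic square-root formulation and runs a full forecast model between updates.

```python
import numpy as np

# Toy ensemble Kalman filter update (stochastic, perturbed-observation form).
# Everything here -- grid size, observation locations, numbers -- is invented
# purely to illustrate the mechanics.

rng = np.random.default_rng(0)

n_members = 56          # ensemble size, as in 20CR
n_state   = 100         # toy "atmospheric state" (e.g. a gridded pressure field)
n_obs     = 5           # only a handful of surface-pressure observations

# Prior (forecast) ensemble: each column is one member's state vector (hPa).
ensemble = rng.normal(loc=1000.0, scale=5.0, size=(n_state, n_members))

# Observation operator: here simply picking out 5 grid points.
obs_idx   = np.array([3, 17, 42, 64, 91])
obs       = np.array([1002.0, 998.5, 1001.2, 997.8, 1003.4])  # "observed" pressures
obs_error = 1.0                                               # obs error std dev (hPa)
R = (obs_error ** 2) * np.eye(n_obs)

# Ensemble statistics in state space and observation space.
x_mean = ensemble.mean(axis=1, keepdims=True)
X      = ensemble - x_mean                  # state anomalies
Y      = ensemble[obs_idx, :]               # ensemble mapped to observation space
Yp     = Y - Y.mean(axis=1, keepdims=True)  # obs-space anomalies

# Sample covariances and Kalman gain K = P H^T (H P H^T + R)^-1,
# estimated directly from the 56 members.
PHt  = X @ Yp.T / (n_members - 1)
HPHt = Yp @ Yp.T / (n_members - 1)
K    = PHt @ np.linalg.inv(HPHt + R)

# Update every member against observations perturbed by the obs error
# (the stochastic-EnKF trick that keeps the posterior spread honest).
perturbed_obs = obs[:, None] + rng.normal(0.0, obs_error, size=(n_obs, n_members))
analysis = ensemble + K @ (perturbed_obs - Y)

# The spread of the analysis ensemble is the "quantified uncertainty".
print("prior spread:   ", X.std())
print("analysis spread:", (analysis - analysis.mean(axis=1, keepdims=True)).std())
```

The real 20CR system does this with millions of state variables and a full atmospheric model stepping each member forward between updates; the point here is only how 56 members yield both a best estimate and an uncertainty.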
So they are saying that the quality is as good as a 3 day weather forecast. Hmmmmm. So does that mean that 3 days backward is as good as 3 days forward, or that the hindcast for June 21, 1852 is as good as a 3 day weather forecast? If the latter, that would be very good quality indeed.

So it is obviously the former, which must mean that hindcasting the climate of 1850 is about as accurate as forecasting the climate of the year 2185. Come to think of it, then, I don't understand the statement “Overall, the quality is approximately that of current three-day NWP forecasts.”
Their other statement:
“A unique feature of the 20CR is that estimates of uncertainty are derived using a 56 member ensemble.”
This certainly sounds like an ensemble of different computer climate models, but in the new language of climate scientists it means 56 simulations run on the same climate model, each started from different initial conditions. So they are running 56 simulations of a model that does not fully capture the underlying science of the planet, averaging the spread across them to give one estimate of uncertainty, and calling this a strength of their project!
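For what it's worth, here is a minimal sketch of what “deriving uncertainty from a 56-member ensemble” amounts to in practice. Everything below is a toy stand-in invented for illustration, not the actual model: the same simple “model” is run 56 times from slightly different initial conditions, the ensemble mean is read as the forced climate-change signal, and the member-to-member spread is reported as the internal (unforced) variability.

```python
import numpy as np

rng = np.random.default_rng(1)

n_members = 56
n_years   = 100
years     = np.arange(1900, 1900 + n_years)

# A toy "model": a prescribed forced trend plus internally generated red noise,
# started from slightly different initial conditions each run.
forced_signal = 0.008 * (years - years[0])   # 0.8 C of forced warming per century

def run_member(initial_offset):
    internal = np.zeros(n_years)
    internal[0] = initial_offset
    for t in range(1, n_years):
        # AR(1) "internal variability": memory of last year plus fresh noise
        internal[t] = 0.6 * internal[t - 1] + rng.normal(0.0, 0.15)
    return forced_signal + internal

ensemble = np.array([run_member(rng.normal(0.0, 0.1)) for _ in range(n_members)])

# Ensemble mean = the "climate change signal"; spread = the "internal variability".
ens_mean   = ensemble.mean(axis=0)
ens_spread = ensemble.std(axis=0)

print("trend of ensemble mean (C/decade):",
      round(np.polyfit(years, ens_mean, 1)[0] * 10, 3))
print("typical member-to-member spread (C):", round(ens_spread.mean(), 3))
```

Whether averaging 56 runs of the same imperfect model deserves to be called a quantified uncertainty is, of course, exactly the question being raised above.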
Is the objective here to tell you at what hours of the day it rained on June 21, 1852, in Mobile, Alabama? Or is it something grander than that? Your tax dollars, folks, being spent here.
What national security, national economic, or national pride reasons would we have to fund studies of past weather/climate that go only as far back as 1850? Oh, I can certainly see the long-range goal here: to wipe out any warming that ever appeared without massive amounts of CO2 in the atmosphere. Once they have fiddled their way back to 1850, why stop there? The next target will be the Medieval Warm Period. It certainly looks like the climate scientists want to put the paleoclimatologists out of business. Computer models are always “sexier” than proxies for climate, and so much faster at generating data. Whenever you read the word “Reanalysis”, always remember that at some point it is computer-generated data, even if some real-world data is mixed in with it. On another page of the site I found this under Key Limitations:
“Does not provide the best estimate of the atmospheric state since ~1979, when more complete observations and more comprehensive reanalyses are available”
Duhhhhh, 1979 was the year when the UAH satellite record, led by Christy, started to provide real data.
PS: I obtained a graph of one of their (UAH) temperature reanalysis products for the US average annual temperature, in degrees Celsius at 2 metres above the surface, from 1870 to 2010, covering 25°N–50°N and 55°W–114°W.
The graph looked like a long, gently sloping sine/cosine curve, with variability from 12.5 °C to 15 °C and no upward trend. Interestingly, the highest values were in the 1930s. I guess NOAA hadn't gotten around to adjusting this computer-generated reanalysis data yet.
For the open-access study online, see the link below.
https://link.springer.com/article/10.1007/s00382-018-4343-8
“Whenever you read the word “Reanalysis”, always remember that at some point it is computer-generated data, even if some real-world data is mixed in with it.”
Computers do not generate data, not even un-real world data. Data are collected from instruments; after that, it is all estimates.
Technically you are correct, Ed, but in practice it amounts to the same thing.
EX: There are no global data for soil moisture. So what the modellers do is apply a complex formula to satellite data to come up with a world map of soil moisture. These data, which are subject to huge errors (because the satellites are not actually sampling the water content of the soil), are then fed into the models as reanalysis data. I will quote from the following website:
https://www.ecmwf.int/en/about
“A climate reanalysis gives a numerical description of the recent climate, produced by combining models with observations. It contains estimates of atmospheric parameters such as air temperature, pressure and wind at different altitudes, and surface parameters such as rainfall, soil moisture content, and sea-surface temperature. The estimates are produced for all locations on earth, and they span a long time period that can extend back by decades or more. Climate reanalyses generate large DATASETS that can take up many terabytes of space. “
Perhaps it would be better to call adjusted data “faux-data,” or “simulated-data.”
I only read part of the article, and it sounds to me like a bunch of modellers want to get paid for doing nothing meaningful.
In the Adjustocene, the changes to the Climate Data and Record are so numerous and so complex that NOAA and the rest of the adjusters simply cannot keep up with the task, and occasionally the Truth creeps through the Maelstrom.
Oh this will be fun… Modeling so we can predict the past. This means…we need to fudge the data of the future to be in line with our predictions of the past… right? I mean, to predict the past you need corrected data from the future, that just makes sense! (Make the stupid stop! Argh!)
“Overall, the quality is approximately that of current three-day NWP forecasts.”
So does that mean that 3 days backward is as good as 3 days forward, or that the hindcast for June 21, 1852 is as good as a 3 day weather forecast?
Think of it this way: It’s like someone claiming they have a model that can predict the outcome of any NFL game with 100% accuracy, 3 days after the game is played. But they’re quick to point out that their model sucks at predicting the outcome of a game 3 days before it’s played.
It’s easy for climate witch doctors to tweak their models to spit out a desired result when they know what the answer is. But when they don’t, it’s like Yogi Berra said: “It’s tough to make predictions, especially about the future.”
As the amazingly intelligent Brad Keyes once wrote: “Oscar Wilde once quipped that people “can’t predict crap-all—especially in advance.” In these increasingly unforeseeable times his words are looking more and more prescient.”
I’ve heard of 20/20 hindsight.
This is legally-blind hindsight.
OK, how does one get a higher-resolution computer model without more and better data? Just running the same model over and over again cannot increase resolution, no matter how much one wishes. We know what the basic data look like today, and all their shortcomings. We have a pretty good idea that we had a lot less data, and less precise data, in the past, most especially in geographical coverage.
These folks are just making it up as they go. Sadly, they have convinced themselves that because a computer model is being used, it somehow reflects reality.
Re: “We have no simple explanation to this”
The obvious answer is:
1) Very large Type A (statistical) errors, and
2) Very large Type B (systematic bias, etc.) errors. E.g., Christy shows the model-mean warming rate error (Type B) is about 260% of the actual warming rate since 1980! (A generic sketch of how the two types are combined under the GUM follows after the link below.)
See BIPM’s international GUM standard:
Guide for the Expression of Uncertainty in Measurement JCGM 100:2008
https://www.bipm.org/en/publications/guides/gum.html
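For readers unfamiliar with the GUM's vocabulary, the sketch below is a generic, hypothetical illustration (the numbers are invented, not Christy's or anyone else's): the Type A component is evaluated statistically from the scatter of repeated estimates, the Type B component comes from an assumed systematic-bias bound treated as a uniform distribution, and the two are combined in quadrature.

```python
import math

# Hypothetical repeated trend estimates (C/decade) -- invented numbers,
# purely to illustrate the GUM recipe, not actual model output.
trend_estimates = [0.21, 0.19, 0.24, 0.22, 0.18, 0.23, 0.20, 0.25]

n    = len(trend_estimates)
mean = sum(trend_estimates) / n

# Type A: standard uncertainty evaluated statistically from the scatter
# of repeated determinations (standard deviation of the mean).
s = math.sqrt(sum((x - mean) ** 2 for x in trend_estimates) / (n - 1))
u_type_a = s / math.sqrt(n)

# Type B: standard uncertainty evaluated by other means, e.g. an assumed
# systematic-bias bound of +/-0.10 C/decade treated as a uniform
# distribution (GUM: divide the half-width by sqrt(3)).
bias_half_width = 0.10
u_type_b = bias_half_width / math.sqrt(3)

# Combined standard uncertainty: root-sum-of-squares of the components.
u_combined = math.sqrt(u_type_a ** 2 + u_type_b ** 2)

print(f"mean trend        : {mean:.3f} C/decade")
print(f"Type A uncertainty: {u_type_a:.3f}")
print(f"Type B uncertainty: {u_type_b:.3f}")
print(f"combined (k=1)    : {u_combined:.3f}")
```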
One word only – technobabble.
Robert G. Brown at Duke addressed the internal variability of climate models back in 2014:
https://wattsupwiththat.com/2014/10/06/real-science-debates-are-not-rare/
Note, also, his reply comments to others.
As somebody said, an unvalidated model is no more than an illustration of someone's hypothesis. (And all climate models are unvalidated, inherently, because of the lack of adequate real data to validate them against.)
Just a bit of fun
I have been playing with the England Temperature and Sun Hours data from: https://www.metoffice.gov.uk/climate/uk/summaries/datasets.
The temperature-to-sun-hours correlation is 0.72.
The average 10-year ratio between the two is approximately 0.009, so I assumed that if I multiply the sun hours by 0.009 for each year, I should get the temperature due to the Sun.
I subtracted 0.009 × sun hours from the temperature data and plotted the residual. The trend line shows an increase of 0.15 °C since 1929, which I am assuming is all due to CO2. (A rough sketch of this calculation follows below.)
Why?
I thought you might like to know the IPCC forcing for CO2 is around 10 times what it should be.
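Going back to the sun-hours exercise above, here is a rough sketch of that calculation, assuming the two Met Office series have already been saved locally as plain two-column year/value files (the filenames and layout below are invented for illustration; the real files on the Met Office page are laid out by month and season and need a little reformatting first):

```python
import numpy as np

# Assumed local files, each with two columns: year, annual value.
# These filenames and this layout are hypothetical.
temp = np.loadtxt("england_tmean_annual.txt")      # year, mean temperature (C)
sun  = np.loadtxt("england_sunshine_annual.txt")   # year, sunshine hours

years   = temp[:, 0]
t_mean  = temp[:, 1]
s_hours = sun[:, 1]

# Step 1: correlation between annual temperature and sunshine hours.
corr = np.corrcoef(t_mean, s_hours)[0, 1]
print("correlation:", round(corr, 2))              # the commenter reports ~0.72

# Step 2: the long-run ratio of temperature to sunshine hours (~0.009),
# used as a crude scaling of sunshine into "temperature due to the Sun".
ratio = (t_mean / s_hours).mean()
print("mean temperature/sun-hours ratio:", round(ratio, 4))

# Step 3: subtract the scaled sunshine series and fit a linear trend to
# the residual; the commenter attributes that residual trend to CO2.
residual = t_mean - ratio * s_hours
slope, intercept = np.polyfit(years, residual, 1)
print("residual trend over the record:",
      round(slope * (years[-1] - years[0]), 2), "C")
```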
An engineering rule for models is that they can't be used to forecast until they can accurately replicate the past. Climate scientists have ignored this rule. Also, if you don't have a handle on the natural variations, you can't identify an anthropogenic signal. This purposeful ignorance in climate science has caused modellers to chase their tails.
Suppose their simplistic and partial models can't match reality, as should be the case given what they ignore by assertion and what they over-amplify based on guesswork... sorry, consensus assumptions. We will have to pass laws to change reality, and subsidise the effort. Once they have a version that tracks reality, perhaps they could extrapolate that?
PS: I still want to know how the increasing water vapour that ends the fastest natural change of an interglacial while CO2 is still rising fast, complete with clouds and precipitation, suddenly does the opposite in their models in response to a little bit of CO2 warming, simply because they say it does. J'accuse!
AND we already knew from horticulture that the plant/life response to more CO2 is almost instant; 1,000 ppm seems to be the benchmark. The IPCC denied that as well, assuming the response was too slow to reduce temperature, which of course was utter BS (Bad Science), but heresy against the inconvenient truth as far as the IPCC is concerned. Plants respond FAST in three separate ways: growth, respiration rate and reproduction, just as we do, e.g. the human birth-rate response to longer life and lower infant mortality. Lovelock had the response of life to climate nailed, I suggest.
The result is a pseudo-random noise generator with an accelerating warming drift. It could be done with a simple recursion AlGore rhythm.
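Taken literally, the quip really is only a few lines of code. The sketch below is purely a toy illustration of the caricature being described, not anyone's actual model: red noise from an AR(1) recursion riding on a drift term whose increment grows with time, so the warming accelerates.

```python
import numpy as np

rng = np.random.default_rng(42)

n_years = 150
temps   = np.zeros(n_years)
drift   = 0.0

for t in range(1, n_years):
    drift += 0.00002 * t    # drift increment grows with time -> accelerating warming
    # red "pseudo-random" noise around the drift, plus fresh noise each year
    noise = 0.7 * (temps[t - 1] - drift) + rng.normal(0.0, 0.1)
    temps[t] = drift + noise

print("first 30-year trend (C/decade):",
      round(np.polyfit(np.arange(30), temps[:30], 1)[0] * 10, 3))
print("last 30-year trend (C/decade): ",
      round(np.polyfit(np.arange(30), temps[-30:], 1)[0] * 10, 3))
```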
How can they predict the past when all the past temperatures have been bastardised?
The modellers are getting caught in their own lies.
ALAN: “I guess this is another case of the climate scientist falling in love with his model.”
WHY NOT… lots of “creative” artists do it!
Although they are usually referred to as “their muse”!
The future climate is known with great certainty, according to the global warmunists: hot enough to melt every living creature on our planet into an ugly blob on the ground... unless we all do as the warmunists say, without question! Isn't that what runaway warming really means?
It's a tough task to make predictions like that without bursting into laughter. Leftists can do it because they have no sense of humor!
It's the past climate that's impossible to predict, with all the government bureaucrat adjustin' and re-adjustin' and re-re-adjustin'.
Are they laying the groundwork for getting rid of all those troublesome satellite and ground station measurements? I can see the headlines now: “Studies show there were no hurricanes before 1950.”
The July 2018 study “Can an ensemble climate simulation be used to separate climate change signals from internal unforced variability?” that is referenced at the end of my article is a pure demonstration of a climate modeller who has actually ventured inside his virtual climate world and believes totally in his model.
Witness the following doozy quotes:
“With relevance to climate change studies it should be pointed out that this range of uncertainty is only a consequence of unpredictable processes and not to additional uncertainties caused by model deficiencies such as related to physical parameterizations and model resolution.”
“We have for example noted that the temporal variance of the majority of ensemble members is larger than what can be inferred from available observations. The result of the study must be assessed with that in mind. We have no simple explanation to this but it might be that the model projects the variance on larger scales than nature as a consequence of limited resolution.”
“Intuitively we might have expected the opposite and that reality might expose a higher level of variance than the climate model.”
My head just shakes in astonishment at the above quotes.