By Renee Hannon
Christian Freuer has translated this post into German here.
This post examines how present global surface temperatures compare to the past 12,000 years during the Holocene interglacial. The AR6 IPCC climate assessment report, Climate Change 2021: The Physical Science Basis, by Working Group 1 states in their Summary for Policymakers section A.2.2:
“Global surface temperature has increased faster since 1970 than in any other 50-year period over at least the last 2000 years (high confidence). Temperatures during the most recent decade (2011–2020) exceed those of the most recent multi-century warm period, around 6500 years ago [0.2°C to 1°C relative to 1850–1900] (medium confidence). Prior to that, the next most recent warm period was about 125,000 years ago, when the multi-century temperature [0.5°C to 1.5°C relative to 1850–1900] overlaps the observations of the most recent decade (medium confidence).” (AR6)
Paleoclimate proxy data records have low temporal resolution.
Comparing present instrumental data to the past is no small task. Temperature data during the Holocene and older are indirect measurements based on proxies. Scientists have compiled and extensively analyzed these proxy data covering the past 10,000 years. The datasets contain hundreds of records and include terrestrial, marine, lake, and glacial ice proxy data, to name a few.
Unfortunately, lake and marine proxy data are smoothed due to sediment mixing and uncertain age control. Smoothing of paleoclimate proxy data also occurs due to averaging of multiple data types together, which destroys higher-frequency decadal data (Kaufman and McKay, 2022). Hence, proxy data during the Holocene are multi-century at best, representing an average temperature smoothed over a couple hundred years.
The IPCC statement above is correct but can be misleading. They compare decadal average temperatures to multi-century average proxy data. To better understand how modern temperatures compare to the past, one can either deconvolve past proxy data or smooth present instrumental temperatures to produce a similar temporal resolution comparison.
Kaufman and McKay, 2022, wrote a technical note comparing multi-century present and future temperatures to the past. They used the average of instrumental data plus AR6 model projections to show global mean temperatures of about 1 deg C during the 200-year period from 1900 to 2100, as shown in Figure 1. These averages include 120 years of present instrumental data and 80 years of future modeled projections. The pre-industrial baseline is defined by the IPCC as the average global temperature during 1850-1900.
Instrumental temperature data have been around since 1850, about 170 years. These data are closely approaching a bicentennial timescale. To note, pre-1950 HadCRUT instrumental data are considered lower quality due to sparser data coverage and increased noise (McLean, 2018). Since IPCC scientists use simple averages for comparison to the past, averaging instrumental data should also be considered as a present base case. Using the IPCC’s instrumental dataset, a simple average for the last 170 years shows a global temperature anomaly of a whopping 0.3 deg C (uncertainty range of 0.1 deg C) above the pre-industrial baseline, as shown in Figure 1.
Smoothing instrumental temperature during the last century and a half allows for a truer comparison to smoothed multi-century proxy data. This smoothed instrumental average is 70% lower than the 1 deg C represented by present-plus-future temperature means over a 200-year period. Annual global instrumental temperatures have only been at or slightly above 1 deg C for about one decade. That’s not even close to being a multi-century comparison to the past.
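To see how much difference the averaging window makes, here is a toy calculation. The series below is a synthetic stand-in loosely shaped like the instrumental record (flat before 1950, then warming to roughly 1 deg C), not the actual HadCRUT data:

```python
import numpy as np

# Synthetic annual anomalies, 1850-2019 (NOT the HadCRUT series): flat
# before 1950, then linear warming reaching roughly 1 deg C by 2019.
years = np.arange(1850, 2020)                      # 170 "years"
anom = np.where(years < 1950, 0.0, (years - 1950) * 0.014)

decadal_mean = anom[-10:].mean()   # the kind of decadal average the IPCC quotes
full_mean = anom.mean()            # a 170-year, proxy-like average

print(f"last-decade mean: {decadal_mean:.2f} C, 170-year mean: {full_mean:.2f} C")
```

The exact numbers depend on the synthetic shape chosen; the point is only that a multi-century mean of a recently warming series sits far below the mean of its last decade.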
A More Valid Comparison of the Present to the Past
Using instrumental data without adding in uncertain future modeled projections seems a better way to compare present temperatures to the past. Nobody knows how accurate model projections are, especially considering the debates about their track record of not matching observed temperatures and past proxy data. A smoothed instrumental average for comparison to the past is absent from the AR6 report and is never established, mentioned, or recognized by the IPCC. Figure 2 shows the 170-year instrumental temperature average (small black square) compared to past proxy data during the Holocene.
The Holocene climatic optimum (HCO) occurred 6000-7000 years ago, with the warmest 200-year-long interval estimated at 0.7 deg C (uncertainty range 0.3 to 1.8 deg C) according to extensive proxy data compiled by Kaufman, 2020. An earlier proxy study by Marcott, 2013, shows an HCO temperature mean of 0.8 deg C, with a two-standard-deviation uncertainty of 0.3 deg C, above the pre-industrial period. Marcott also confirms that proxy records completely remove centennial variability, and no variability is preserved at periods shorter than 300 years in his reconstruction. Andy May also performed a Holocene global reconstruction using proxy data here. His reconstruction shows an HCO of 0.85 deg C above the pre-industrial baseline and over 1 deg C warmer than the coldest time of the Little Ice Age. Figure 3 shows the 170-year instrumental temperature average compared to the HCO temperature of these reconstructions.
Chemical, biological, and physical data support a warmer Holocene past. A mid-Holocene climatic optimum is supported by pollen records, which show expanded grass and shrub vegetation in the African Sahara, increased temperate forest cover in Northern Hemisphere mid-latitudes, and boreal forest instead of tundra in the Arctic (Thompson, 2022). Glacier and ice cap fluctuations from lake studies in the Arctic were smaller than present or absent during the early and mid-Holocene (Larocca, 2022). Both Javier Vinos, 2022, and Kaufman, 2023, have thorough discussions of empirical evidence at different latitudes supporting a warmer past mid-Holocene.
Even the IPCC states that around 6500 years ago temperatures ranged from 0.2°C to 1°C warmer relative to the 1850–1900 pre-industrial period. Therefore, the present global temperature 170-yr average is mostly cooler than the past Holocene climatic optimum 6500 years ago. As a matter of fact, the present average temperature barely hits the 5% minimum error bar on one of the reconstructions and is just over the IPCC minimum range.
In the IPCC technical justification note, Kaufman and McKay, 2022, conclude that recent global warming plus the modeled upcoming warming reaches a level unprecedented in more than 100,000 years. My emphasis is on the word plus. Without including future modeled temperatures, present instrumental temperature, averaged over 170 years, does not exceed the warmest multi-century period of the Holocene based on proxy data. And it’s not even close to the last interglacial period, when multi-century temperatures were almost 1.5 deg C warmer than the pre-industrial period. If, big IF, the climate models are considered reliable, then perhaps, 80 to 100 years from now, global temperatures might be as warm as the past Holocene Climatic Optimum.
Download the bibliography here.
Thank you, Renee.
Can you provide any reasons why the IPCC’s inclusion of modeled, projected data out to 2100 cannot be regarded as deliberate scientific misconduct?
Like many other scientists, I think it is way past time that the IPCC faced a Spanish Inquisition. You just do not try to pull shifty math like that in the scientific circles that I have moved in. Geoff S
There can be no doubt that it is deliberate; the objective is to generate the scariest numbers that they can get away with. In the same way, they define their temperature anomaly metric in terms of a 30-year average, but then give a headline comparison between the current temperature and the 1850-1900 baseline using a 10-year average.
As recorded here. The IPCC has already tried to defend what they did, but the fact that they did it multiple times demonstrates beyond any reasonable doubt that it was deliberate.
The previous video from the same channel introduces the issue.
Extremely interesting. Thank you
You didn’t say how you did the smoothing, so I presume just a moving average. The problem is that the result is then lagged, with the central point 85 years ago. So you are comparing, in effect, the temperature of the Holocene with the temperature of 1938, i.e. before most of the AGW.
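A quick numerical check of that lag point (a toy 170-year series; a simple trailing mean is assumed, as the comment presumes):

```python
import numpy as np

years = np.arange(1850, 2020)   # 170 years of "data"
window = 170

# A trailing mean reported "at" 2019 actually characterizes the midpoint
# of its averaging window, not the endpoint.
center_year = years[-window:].mean()
print(center_year)   # the epoch the 170-year mean really represents
```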
AGW is irrelevant, the article is comparing the supposed instrumental period average to the supposed Holocene Optimum average irrespective of climate factors.
OK, so we may not know so much about the Holocene Optimum period (so why “Optimum”?). But we know what is happening now.
Nick Stokes wrote: “But we know what is happening now.”
Ha ha ha ha ha ha ha… Thanks for the great laugh, Nick. You’re ready for open mic night at the local comedy club.
better yet, he should do a Netflix Special 🙂
Nick, because we need to know if “now” is unusual. When the global average surface temperature varies almost four degrees every single year, and local temperatures often vary more than that in any 24-hour period, it seems likely that one degree of warming in 170 years is pretty normal.
Optimum because the Sahara was a grassland, and what is tundra now was forest then. As a species that evolved in equatorial Africa, a warmer climate is much more optimal for us, for food growth and life in general, than colder climates.
Besides – you know that before your cult took over the institutions the Holocene thermal optimum was coined. It was not until the cult that people started pretending 1850 was some kind of utopian climate age.
“a warmer climate is much more optimal for us”
Here in frigid, damp Woke-achusetts, I’ve been desperate for a day over 60 F for 5 months! And since all my ancestors for centuries were in Italy, my genetics desperately need WARMER temperatures.
Yup. Stokes fell into his own elephant trap with that one.
Then how can we know that now is worse?
We? You are right that many of us do know that humans are not affecting the climate. You on the other hand keep claiming the opposite.
Rubbish Nick, she is correctly comparing measures at similar resolution. You will need another 100 years of temperature DATA to do what you suggest.
So let’s put all the climate crisis policy nonsense on hold until we actually have sufficient data to make a valid comparison. And call out bogus comparisons of rates of warming at resolutions differing by at least 2 orders of magnitude.
“Rubbish Nick, she is correctly comparing measures at similar resolution.”
Yes, if you go back far enough you’ll always find data of lower resolution that you can’t compare with. So? We know what is happening now.
We know it’s warming now.
We know it’s warmed (and cooled) before.
To show current warming is “unprecedented” or somehow exceptional you have to be able to compare to previous historical warming at the same resolution.
You can’t unless you reduce the resolution of the modern measurements to that of the paleo data. And that’s a one way street.
So claims of “unprecedented” or “climate crisis” are therefore unproven. Any other claim about modern versus paleo warming is without scientific merit, and anyone persisting in comparing rates of warming at differing resolutions (2 orders of magnitude different at least) should be called out. It’s not science, it’s recklessly misleading.
Thank you, Renee, for a fresh look at data
It is recklessly misleading, but the IPCC gets away with such lying, because there are plenty of brainwashed sheep among the masses.
It takes just one dog to control such a flock. The elites know this, so they collaborate with their lapdog media to rule as they like.
The IPCC has been getting away with its 100-plus bogus computer models since well before 1979, but the increasing gap between “prediction” and satellite measurements has become more and more obscene over the past 44 years.
The IPCC knows about it, but nevertheless spouts its high temperature nonsense, supported with fantasy graphs.
The IPCC knows it can count on the central command/control Biden posse fanatics, who are willing to wipe out the whale population to build dysfunctional, 900-ft-tall wind turbines.
But due to the low temporal resolution of paleoclimate proxies from ice cores, sediment, etc., especially as you go further back in time, we have no way to know how quickly it warmed so any statements proclaiming the supposedly “unprecedented” pace of warming in the last 50 or 100 or 150 years are complete nonsense. If it warms another 1 or 2 °C in the next hundred years (far from certain based on our limited data), we’ll have enough data to say “unprecedented warming” with a high degree of certainty when compared to the low temporal resolution data from thousands of years ago, yet we still won’t be able to say that when compared to data over the last million years where we see repeated cycles of natural warming and cooling by up to 14 °C. It still remains unknown if the ~1 °C warming since the mid-1800s is natural, mostly natural, or mostly caused by human greenhouse gas emissions.
Yes indeed. We can see that the climate is getting milder, deaths from extreme weather events are down 90%, food production is booming. It’s a catastrophe!
doomsayers will always be with us
Yes, but the big difference is that we have thermometer observations for the recent record. We know that the current global warming trend didn’t really start in the thermometer record until the 1950s, so, as Nick says, the entire period of observed warming is masked when it is averaged alongside the earlier part of the record.
You might argue that the same could have been true of earlier periods and that might be right; but we have no way of knowing at present. We do know about the past ~170 years though.
It’s irrelevant. Resolution is resolution. You want to get two data points of modern temperature data on a graph with support (resolution) of 200 years? You need 400 years of temperature measurements that you can divide into two samples, averaging 200 years of data into each. Then measure the rate of warming between the first and the second block of data. That’s the rate to compare.
And Marcott notes resolution of 300 years, so you would need 600 years of modern temperature data to compare a warming rate.
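That block arithmetic can be sketched with made-up numbers (a hypothetical 400-year noise series, purely illustrative):

```python
import numpy as np

# Hypothetical 400-year instrumental series: to get TWO data points at
# 200-year resolution you need two complete 200-year blocks.
rng = np.random.default_rng(0)
series = rng.normal(0.0, 0.2, 400)

block1 = series[:200].mean()                 # centered on "year" 100
block2 = series[200:].mean()                 # centered on "year" 300
rate_per_century = (block2 - block1) / 2.0   # block centers are 2 centuries apart

print(f"rate at 200-yr resolution: {rate_per_century:+.3f} per century")
```

With pure noise the rate comes out near zero; the structural point is that one 170-year average yields a single data point at that resolution, and a rate needs two.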
Change of variance and rates of change in time series under change of scale are very significant and can only be done in one direction – High Res to Low Res. Renee’s point here is very valid and anyone claiming otherwise does not understand resolution and its impact on rates of change and variance in time series.
Far too many in the climate alarmist clique do not understand resolution at all. They think you can increase resolution by averaging multiple data points – i.e. you can measure the diameter of a crankshaft journal using a yardstick if you just take enough measurements and average them together.
And variance? Forget it. Do you *ever* see any variance quoted with the global average temperature?
Worse than that, and not just restricted to climate scientists, there are those who think you can “downscale” from low resolution measurements to high resolution.
Just where they think the unmeasured additional bandwidth comes from is beyond me…
They’ve never heard of the rules for significant digits.
heck, even I know that and I only took a Mickey Mouse one-credit course in statistics 52 years ago- I only wish now that I had taken a better statistics course- it’s a very powerful tool- without which you don’t have real science
Never forget, statistics is only a descriptive tool for collected data. It is the data that is the science, not the statistics.
For instance, both multiple measurements of the same thing using the same device under conditions of repeatability and multiple single measurements of different things using different devices lacking repeatability conditions *both* can be described by the same statistical descriptors such as average and variance.
Those statistical descriptors can be useful in one situation and not so useful in the other. And even then it depends on the calibration status of the measurement devices as to whether the statistical descriptors give *accurate* information.
“It is the data that is the science, not the statistics.”
I think it’s both. The data is like the pixels in an image and statistics helps us see the image- though people will debate what they see in the image. Then comes theory building, testing, etc. I’m no scientist but I get the basics.
The data is the science. Science is more about making an hypothesis, creating a mathematical description, and designing experiments capable of obtaining the necessary data to prove or not prove the hypothesis. The experiments must capture the data to a resolution that is appropriate for making a decision.
The largest part of current temperature data does not meet the required resolution to adequately calculate the small changes that are occurring. Few climate scientists appear to have the experimental science education needed to deal with actual, measurable physical quantities. Few experimental scientists, or for that matter mechanics, machinists, and engineers, would have the chutzpah to average disparate measurements and, through “arithmetic averages,” extend resolution by simple mathematical calculations.
One can use statistics as a tool to make inferences about the data, but careful attention must be given to the assumptions behind the tool. Climate science in general does not do this, and therefore the inferences they arrive at are questionable!
I certainly doubt tree rings as proxies for temperature. As a forester for 50 years- I know that tree ring widths have more to do with rain than temperature – along with age of tree, competition with other vegetation, injuries and diseases, etc. Sides of the tree with more sun will be larger. Lots of information there but little about temperature. Of course there are other “temperature proxies” of which I know nothing- but I have little confidence in them. I’m waiting for the aliens to land and tell us precisely what’s happened over the past few million years as they’ve likely been monitoring the Earth. I say this last part half in jest and half with the hope they exist and will enlighten us. Having seen a UFO once I have more confidence in aliens than I do in climate alarmists.
I propose an experiment around the idea of resolution:
Measure the temperature once daily at 7am for a month (the historical record), then one day measure it once per hour from 7am-2pm. Chart the results. I bet you see changes at an “unprecedented” rate in the hourly data.
Given the resolution differences, I don’t see how any comparison of the modern record vs. the historical can make any claim about the speed of any change.
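For what it's worth, the thought experiment runs fine with synthetic numbers (a toy sinusoidal diurnal cycle and made-up noise; none of this is real station data):

```python
import numpy as np

rng = np.random.default_rng(1)

# A month of 7 a.m. readings: same phase of the diurnal cycle each day,
# so only weather noise separates them (toy numbers, deg C).
daily_7am = 10.0 + rng.normal(0, 0.5, 30)

# One day sampled hourly, 7 a.m. to 2 p.m.: walks up the diurnal cycle.
hours = np.arange(7, 15)
hourly = 10.0 + 8.0 * np.sin(np.pi * (hours - 6) / 12)

# Compare on the SAME units: deg C per hour.
daily_rate = np.abs(np.diff(daily_7am)).max() / 24.0
hourly_rate = np.abs(np.diff(hourly)).max()

print(f"daily-sampled: {daily_rate:.2f} C/hr, hourly-sampled: {hourly_rate:.2f} C/hr")
```

The hourly steps dwarf the day-to-day 7 a.m. steps: the same underlying weather looks "unprecedentedly fast" only because the sampling rate changed.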
You are missing the point as you are still comparing instrumental temperature to instrumental temperature in your experiment. A better experiment would be to measure the air temperature once monthly (the present), and then measure the temperature of groundwater once a decade (the historical proxy data).
Deep (10-20 m) groundwater or subsurface sediment temperatures have been successfully related to mean annual air temperatures [Todd, 1980]. In groundwater, seasonal variations in heat fluxes are dampened out and subsurface sediment temperatures are constant throughout a year. With time and during burial, the annual sediment layers are mixed by bioturbation and storms and become averaged over multiple annual layers.
Renee, I don’t think I missed your point, but the only thing I was addressing was sampling rate (i.e. “resolution”). If you have more frequent samples, you can see finer detail. Annual or even decadal samples, whatever the means, cannot be reasonably compared to centennial or longer samples. I was ONLY addressing that, with a ridiculously simple thought experiment.
Proxy vs. instrument makes the comparisons, IMO, even less valid, which I believe was what you were saying.
the only way I’d be convinced about that past 170 years would be if we had a million thermometers across the planet then and now- we have more than that now but not uniformly everywhere
Even a million thermometers wouldn’t help if they didn’t have the resolution to detect the very small changes that climate science says are occurring.
You simply can not add resolution by averaging. Too many mathematicians like to quote the Central Limit Theorem as allowing one to achieve the information that provides additional resolution. This is so far off base it is pathetic.
The CLT only tells one how closely a series of sample means can predict the actual true mean. It describes an interval surrounding the estimated mean (calculated from samples), within which the true mean may lie. It is known as the Standard Error of the sample Means (SEM). That interval has nothing to do with the resolution of the data, nor the resolution of the calculated mean. Those resolutions are determined by using Significant Digit Rules.
To summarize, those who claim that the SEM is a measure of the applicable resolution of the true mean are sadly mistaken.
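A toy simulation makes the distinction concrete (synthetic readings with a known spread; the SD = SEM * √n relation is the one quoted above):

```python
import numpy as np

rng = np.random.default_rng(42)
true_sd = 2.0
readings = rng.normal(15.0, true_sd, 5_000)   # toy "temperature" readings

n = 100                                       # sample size
sample_means = readings.reshape(50, n).mean(axis=1)

sem = sample_means.std(ddof=1)                # spread of the sample means
sd_back = sem * np.sqrt(n)                    # SD = SEM * sqrt(n)

# The SEM shrinks with n, but each individual reading is still only as
# precise as the instrument that produced it: averaging narrows the
# estimate of the mean, it does not sharpen the measurements.
print(f"SEM ~ {sem:.2f}, recovered SD ~ {sd_back:.2f}")
```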
and here I thought all those PhD climate scientists had to take advanced statistics- is that not true? so why don’t they use it correctly and why isn’t that failure caught in “peer review”? Other than the fact that the peer review system is in failure mode?
Most of the statistics taught today simply denies the fact that uncertainty exists. I have five introductory college statistics textbooks I have purchased over the years. Not a single one has even one example where the data is given as “stated value +/- uncertainty”. None of the several physics textbooks I have do either. I learned about uncertainty and significant digits in my EE and Chemistry labs many, many moons ago – when nothing ever came out exactly matching what I calculated from the “rules”.
One anecdote. My youngest son, when starting in microbiology, was told to not worry about taking anything more than basic math requirements and that included not taking any statistics courses. He was told “if you need a statistical analysis done go find a math major!”. Thankfully he listened to me and took several stat courses so he *knows* how to analyze data. He’s also a very hands on experimenter and knows the difference between data and statistical descriptions.
I suspect that is the problem in so many fields today. The PhD understands the data but knows very little in how to analyze it using statistics – so they have no base to judge the efficacy of the statistical analysis. The stat majors doing the statistical analysis know how to analyze the data but understand nothing about it and so have no base to judge the efficacy of the statistical analysis. You wind up with the blind leading the blind (pardon my lack of wokeness) down the road to perdition.
though I claim to know almost nothing about statistics- all this time I thought it was ALL about uncertainty- I should think the lack of appreciating the topic might be fine in a world dominated by dogma but not one which wants to advance science and engineering and public policy
regarding the climate thing- it makes perfect sense to me that given the size of the Earth, its complexity, the added complexity of 8B people and what they’ve done, to claim that it’s settled that we have a climate emergency is 1% science and 99% hard leftist politics- it just doesn’t smell right- then when I heard Gore saying the oceans are boiling I realized they aren’t just wrong- they’re totally crazy
“they’re totally crazy”
For those with a little bit of training in statistics, I recommend Statistics done Wrong, by Alex Reinhart, who admits that “I still take obsessive pleasure in finding ways to do statistics wrong.”
If you are a scientist, I defy you to read the book without finding at least one mistake you’ve made in your career!
Don’t ask me which one(s) I’ve made, I will deny it!
The first thing people, including climate scientists, need to ask themselves is “are we dealing with samples or with the entire population”. No one has ever done that where I could find it.
What they end up doing is dividing the standard deviation they find by the √n, where they claim that the number of stations = “n”. In so doing, they are claiming that they have the entire population of temperatures making up the Global Average Temperature.
If that is true, then the Standard Deviation of the population is what should be quoted and not divided by anything.
I’ve had folks claim individual stations are samples. If that is true then “n” is the size of each sample, i.e. 12 (months) and still not the number of stations. In order to calculate the population Standard Deviation from a sample means distribution, you MULTIPLY the standard deviation of the sample means distribution by the size of the samples. The equation is all over the internet, there is no excuse for anyone to not know it. SEM = SD/√n. If you know the SEM, i.e., the standard deviation of the sample means, then it becomes SD = SEM * √n.
It is enlightening that no one ever discusses how they calculate the variance of an anomaly. That should be Var(X-Y) = Var(X) + Var(Y). In other words, the variance of a month’s average plus the variance of the baseline. Wanna bet that it is calculated that way?
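For independent series, that identity is easy to check by simulation (toy numbers, not real station data):

```python
import numpy as np

rng = np.random.default_rng(7)
month = rng.normal(10.0, 1.5, 200_000)     # toy monthly means, Var = 2.25
baseline = rng.normal(9.0, 1.0, 200_000)   # toy baseline means, Var = 1.00

# Variances of independent series ADD under subtraction: the anomaly is
# noisier than either input, not less noisy.
anomaly = month - baseline
print(f"Var(anomaly) ~ {anomaly.var():.2f}")   # close to 2.25 + 1.00 = 3.25
```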
Michael Mann, and most of his cohorts, are self-described ‘climatologists.’ His academic qualifications are :
A.B. applied mathematics and physics (1989),
MS physics (1991), MPhil physics (1991),
MPhil geology (1993),
PhD geology & geophysics (1998)
Climatology is noticeable by its absence.
His academic background is not too different from my own, or others commenting here.
What is different, is that I don’t represent myself as a climatologist. I make it a point to try to let the facts and logic speak for themselves.
Do you mean the Mann of Mann, Bradley and Hughes, who made a graph nicknamed hockey stick, which pinned instrumental T data on the end of a long period of proxy data? Two very different resolutions on one graph?
That ended well, didn’t it. Sceptics descended on that verboten technique in force, attracting attention about how bad science was part of the global warming story. That was additional to “Hide the decline”.
Joseph, that’s why we believe the satellite data, which regularly make millions of observations of the atmosphere, so they are comparable and cover the whole planet, not just a few thousand convenient stations on land.
OK- too bad we didn’t have them centuries ago so we could have better “climate science” now.
“Yes, but the big difference is that we have thermometer observations for the recent record. We know that the current global warming trend didn’t really start in the thermometer record until the 1950s,”
My thermometer record shows warming starting long before the 1950s. We had warming from the 1910s to the 1940s, then cooling from the 1940s to the 1970s, and then warming from the 1980s into the 2000s, where the temperatures have peaked, as they did in the 1930s, and a cooling trend has appeared.
Here’s the U.S. chart (Hansen 1999) showing warming and cooling before 1979, and then the UAH satellite chart showing the warming and cooling from 1979 to present.
As you can see, there is no unprecedented warming in North America comparing today with the past. You don’t have to go back hundreds or thousands of years to find a period that was just as warm as today. The Early Twentieth Century was just as warm as today as recorded in numerous written temperature charts from all over the world.
Combined, these two charts represent the real temperature profile of the Earth, where it was just as warm in the recent past as it is today, and CO2 has had no visible effect on temperatures because much more CO2 is in the air today than was in the air in the Early Twentieth Century, yet it is no warmer today than it was then.
This is the BIG LIE the climate change alarmists tell, and it is refuted by the written temperature record, which is available to just about anyone who cares to look, so one has to wonder why all the alarmist experts, and some of those on the skeptic side, continue to ignore the Early Twentieth Century and the bastardization that has taken place to erase the Early Twentieth Century from memory.
The written temperature record and the Early Twentieth Century temperatures repudiate the Catastrophic Anthropogenic Global Warming (CAGW) claims of the Alarmists. That’s all you need as proof that CO2 has no discernible effect on the Earth’s atmosphere. Almost 100 years of increased CO2 going into the atmosphere, yet it is no warmer today than it was then. What’s left to say?
Actually, more like after the 1970s.
It started right about the time humans started dumping plastic pollution into the oceans. Almost every alarmist argument is based on correlation. They would apply just as well to plastic pollution.
TFN, you seem to forget (ignore?) the approximately 1910 to 1945 warming trend that is greater than the post-1950 trend. It is also essentially the same as the post-1975 warming trend that got the Leftists’ panties in a wad. The globe is now cooling, doancha know? As much as the CliSciFi practitioners like to play around with the surface temperature record, weather balloons and satellites tell the true story in the atmosphere, where the greenhouse effect occurs.
We have a temperature record showing similar warming before greenhouse gas warming was substantial (actually more, because ice melt was greater), despite low coverage in the Arctic (so lacking representation of Arctic amplification). So we know the modern greenhouse warming isn’t a lot different from decadal variability.
It’s interesting to note that over its first 80 years (1850-1930), HadCRUT has no warming trend. The current warming trend doesn’t really start until ~1950s in any of the global temperature data sets.
And, the 1950s just happened to be when the modern solar maximum peaked.
I’ve actually been looking at the HadCRUT5 (Analysis / Infilled) dataset for something else, trying to find “trend channels” more than 30 years long (so they count as “climate”).
If I squint a bit I can get a “current warming trend” starting in 1964, but not in the 1950s.
Hansen’s data suggest a start about 1964, but other data sets suggest a start after the concern about another Ice Age waned.
The warming started in 1976-77 when the PDO moved into its warm phase.
My first selection of trend channels had a “clean break” between “1937 to 1977” and “1978 to 2022” (/ 2030).
This had the advantage that the separate “Cumulative CO2 emissions” sections were all “anchored” to the start-date of that channel’s trend line (1978 for the last one initially).
Extending the last channel from 1978 to 1964 had me changing the “anchor” to the end-date (2022) instead for “personal preference / aesthetic” reasons … to have overlaps everywhere without “breaks”, a case of “beauty is in the eye of the beholder” and all that …
You may well not be the only person who prefers the attached graph instead of the version given in my OP.
Phil Jones says three periods are equal in warming magnitude.
Note the dates.
lol – the 1930s were hotter, in the place in the world with the most dense temperature records (the USHCN), than it is now, by quite a lot. There were more 90-degree days in the US in the 1930s than there are now. I get that you pretend the entire Southern Hemisphere can be averaged from 2 stations, but that the USHCN is not reliable because it tells a story you don’t like. But what makes you think the 1930s, during the Dust Bowl, were cooler than now?
There is a significant difference between the thermometer readings in the USHCN and the so-called “global temperature” data sets. One can certainly argue that if the earth is experiencing “Catastrophic Global Warming”, it would show up in thermometer readings in the US. I’m making a distinct difference between what the thermometer readings are, and what the “homogenized global temperature” data sets are. The USHCN thermometer records also show that there were warmer decades than the last decade, and colder decades.
Willis has shown here: The US Blows Hot And Cold | Watts Up With That?, that the daily high thermometer readings in the US have increased by only about 1.26 degrees F over the last century (based on the USHCN actual thermometer readings). I doubt that any sane person could call that “catastrophic”.
There are roughly a million thermometer readings in the data sets, so if anyone doesn’t like his analysis, he has included links to the data for anyone who wants to analyze it differently. In the meantime, I’ll defer to his always insightful analyses.
Your claim that Renee is comparing “the temperature of the Holocene with the temperature of 1938” is patent nonsense. She is doing no such thing. She is comparing the mean temperature of a period that includes ALL the claimed AGW to date to the Holocene.
Where you choose to report or plot the date of the sample is irrelevant in this context; the averaging into a single value at the correct resolution is the same. She could plot the point as a trailing mean from 2022 or centred on 1938. Either way, it’s the same value at the same resolution. And moving the point back or forth by 85 years on a graph representing 12,000 years at a resolution of 200 years makes no difference at all to the comparison. Her point still stands.
Your claim is without merit and should be ignored.
As usual, you are being disingenuous. The average temperature represents the set of the entire 170-year collection of instrumental temperatures, not the temperature for 1938.
If she had used a shorter, more recent time period, I would expect you to complain that she didn’t use all the instrumental data available. You are looking for anything to complain about if it doesn’t agree with your catechism.
The most rapid 30-year increase in the US heat index occurred during the period 1925 to 1954, almost twice the rate of the last 30 years. 48% of US states’ current high-temperature records were set in the 1930s. If not AGW, then what?
Comparing the supposed proxy-derived global average temperature over a 4,000 year period (9,500 – 5,500 yrs BP) to the thermometer average over the past decade being described as “misleading” is somewhat an understatement IMO.
I did a similar analysis some years ago, but using a 50 year average. Even with that, current temperature are not shockingly higher than past temperatures. Attaching the thermometer record to the proxy record is lying with statistics unless the thermometer values are averaged over a similar time period as the proxies.
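The resolution point here can be sketched numerically. A minimal toy example (the series, spike size, and 200-year window below are invented purely for illustration, not real data): a modern-style warming ramp almost disappears once averaged into proxy-resolution bins.

```python
import numpy as np

# Synthetic annual "temperature" series: 12,000 flat years followed by a
# modern-style spike of +1.2 C over the final 150 years (values invented).
temps = np.zeros(12000)
temps[-150:] = np.linspace(0.0, 1.2, 150)  # ramp of 1.2 C over 150 years

# Average into non-overlapping 200-year bins, mimicking proxy resolution.
bin_width = 200
binned = temps.reshape(-1, bin_width).mean(axis=1)

print(f"Peak of annual series:     {temps.max():.2f} C")
print(f"Peak of 200-yr binned:     {binned.max():.2f} C")
```

The final bin averages the 150-year ramp together with 50 flat years, so the 1.2 C spike shrinks to under half a degree in the binned series – the same value a future proxy analyst would see.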
Very good Renee. Correctly comparing estimates at the same resolution as far as possible.
The claims of other scientists based on comparison of modern temperature data at high resolution to 200-300 year resolution of smooth proxies is very close to scientific fraud, at the least it is recklessly misleading.
This one simple analysis demonstrates how unremarkable modern temperature changes are.
“This one simple analysis demonstrates how unremarkable modern temperature changes are.”
No, it doesn’t demonstrate that. It can’t. It demonstrates that time resolution is such that it is possible that a temperature spike occurred way back that we can’t now detect. But it doesn’t show that one occurred. We do know that one is happening now.
And exactly what is happening now? The Antarctic has been cooling or unchanged for 70+ years. The Japanese Met office shows cooling around Tokyo and offshore islands for 40 years. The SE US has experienced cooling for more than 50 years. Yes. There are places that have warmed.
The world temperature graphs shown in the article are adjusted or just plain made up. Anyone who tells you the world was warmer in the 70s than say 1921 isn’t telling the truth. We need a complete audit grid cell by grid cell of the data being presented as the world temperature data.
Didn’t Berkeley Earth do that a few years back and come up with more or less the same results as everybody else? (Including the Japanese Met Office, since you mentioned them).
You mean that Berkeley Earth that shows the uncertainty in temperature records from the 1800s in the tenths digit?
Not sure why that would be an issue? The precision is the result of the processing required (averaging, standard deviations, etc).
Ha, ha, ha. You don’t deal in physical measurements, do you?
Averaging, standard deviation, etc. DO NOT determine the information available in a measurement! Only the resolution of the original measurements determines the ultimate resolution of calculations. Anyone who has taken higher-level lab classes would know this.
From Johns Hopkins University:
“9. When determining the mean and standard deviation based on repeated measurements:
o The mean cannot be more accurate than the original measurements. For example, when averaging measurements with 3 digits after the decimal point, the mean should have a maximum of 3 digits after the decimal point.
o The standard deviation provides a measurement of experimental uncertainty and should almost always be rounded to one significant figure.”
If you need additional references from other university lab courses, let me know. If you find a reference that disputes, please post here so all can see it.
You cannot add precision by averaging. That violates significant digit rules for metrology. A repeating decimal as an average is *NOT* infinitely precise.
If you have a standard deviation then how do you know what the true value is? For any specific measurand the true value is considered to be somewhere in the standard deviation interval. How does that increase precision?
If you have measured the same thing multiple times with the same device under repeatability conditions and all measurement error is totally random and Gaussian (or at least symmetric) then you can assume the average value as the true value – BUT you cannot extend the precision of that true value beyond the precision of the measuring device.
If you want to know something out to the hundredths digit then your measuring device better have a resolution in the hundredths digit at least and in the thousandths digit would be better. The temperature record simply doesn’t meet that requirement – not even the newest measuring devices (please note, the resolution of the sensor does not determine the resolution of the total device).
The temperature record simply isn’t fit for the purpose to which it is being used. You simply cannot know temperature differences in the hundredths digit. It is far more likely that *all* temperatures should have the units digit as the last significant digit. And the use of anomalies fixes nothing; their last significant digit will still be in the units digit.
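For what it’s worth, the two positions being argued here can be separated numerically. Under the idealised repeatability conditions described above (same device, same measurand, purely random Gaussian error), the spread of the computed mean does shrink as 1/sqrt(N) – but a systematic offset does not average away. A toy sketch, with all numbers invented:

```python
import numpy as np

rng = np.random.default_rng(42)
true_value = 20.0   # hypothetical true temperature, deg C (invented)
sigma = 0.5         # random instrument error, deg C (invented)

# Repeated measurements of the SAME quantity with purely random Gaussian
# error: the spread of the computed mean shrinks like sigma / sqrt(N).
for n in (10, 100, 1000):
    means = [rng.normal(true_value, sigma, n).mean() for _ in range(2000)]
    print(f"N={n:5d}  spread of mean ~ {np.std(means):.3f}  "
          f"(theory {sigma / np.sqrt(n):.3f})")

# A systematic offset (calibration bias, quantisation) is untouched by
# averaging, no matter how many readings are taken:
biased_mean = rng.normal(true_value + 0.3, sigma, 100000).mean()
print(f"mean with +0.3 C bias: {biased_mean:.2f} (bias does not average away)")
```

Whether field temperature records actually satisfy those repeatability assumptions is exactly what is in dispute in this thread.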
Null hypothesis rules.
Extraordinary claims of “climate crisis” can never be demonstrated by comparing modern high resolution temperature to paleo data. Even at 200 year resolution you would need 400 years of modern temperature data to get two points on a graph to compare warming rates.
Let me know when you get to 400 years of modern temperature observations and I’ll get worried. Meanwhile any claims to the contrary are recklessly misleading and not science.
But, the “science” is settled!
“It demonstrates that time resolution is such that it is possible that a temperature spike occurred way back that we can’t now detect”
But we are not being beaten about the face over a “temperature spike” but over “the earth is going to turn into a cinder”. Temperature growing forever till we all die – and the tipping point is nigh!
Yes, but we don’t know that the current temperature spike is unusual. This analysis shows that it probably is not. More precisely it shows that the current spike cannot be compared to the past proxies to support the conclusion that the current spike is unusual. In other words, it shows that all those papers that compared the recent thermometer record to past proxies to conclude that the current spike is unusual cannot be trusted because they are based on an incorrect statistical analysis.
Nick Stokes reverses the null hypothesis. Probably not for the first time either.
OK, I’ll give you this one. But, the point that is further implied is that there is no justification for claiming that the current warming is “unprecedented for the last 100,000 years,” or some similar statement. We simply don’t know. However, I’d suggest that there is high probability, knowing what the current variability looks like, that there were similar spikes in the pre-instrumental days.
“We do know that one is happening now.”
Nope. No temperature spike in the United States. It’s cooler now than in the past. No unprecedented warming here. CO2 warming is missing in action.
If we don’t have sufficient resolution in the historical data to identify if there is or is not a “spike”, then we don’t have sufficient grounds to assume any current “spike” is in any way unusual.
It’s on a par with comparing modern GPS navigation with pre-Harrison navigation, or navigating by place names. Cape Wrath comes from an Old Norse word meaning “turning point” – where you turned to get to the Western Isles, or Suðreyar, the “southern islands”, known to the Scots as Na h-Innse Gall, “islands of the strangers”. Going a bit further south they got to Earra-Ghàidheal, the border region of the Gaels, or Argyll. We still have a Bishop of Sodor and Man, Sodor being Suðreyar.
Sorry, I got sidetracked, but modern navigation gets you to within a few feet of a front door; the old methods got you to an island or a county.
We’re now comparing high precision to rough guidance.
There is a method in geostatistics that uses the change (reduction) in variance with decreasing resolution to correct measures and their variance to different resolution. The method is usually referred to as change of support and there is a large body of literature on it.
The change of variance due to averaging was in fact an original observation in Danie Krige’s thesis.
Might be worth finding out about the relation between the smoothing and the change of variance as described in the change-of-support geostatistics literature.
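The change-of-support effect described above can be illustrated with a minimal numerical sketch (the AR(1) series and block sizes are invented stand-ins for correlated point-support data, not real measurements): the dispersion variance of block averages falls steadily as the support grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic correlated "point support" data: an AR(1) series standing in
# for spatially correlated measurements (phi is an invented parameter).
n, phi = 200_000, 0.9
x = np.empty(n)
x[0] = rng.normal()
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal()

# Change of support: average into ever larger blocks and watch the
# dispersion variance of the block means fall.
for block in (1, 10, 100, 1000):
    means = x[: n - n % block].reshape(-1, block).mean(axis=1)
    print(f"block size {block:5d}: dispersion variance {means.var():.2f}")
```

This is the same behaviour Krige observed in gold assays: the bigger the support over which you average, the smoother (lower variance) the resulting values – which is why point-resolution instrumental data and multi-century proxy averages cannot be compared directly.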
You might find that there are too many large exogenous variables in the instrumental T record, like site moves and changes to shelters and thermometer types and UHI. They will dominate the variance. The chemical analyses used by Krige et al. have less variance from other causes, so the variance approach has a better chance of success.
For weeks now I have been struggling with 50 Australian stations from “pristine” locations that should have less UHI than usual. The month-to-month variance is so large that it is hard to derive useful systematics. Geoff S
Perhaps. But the general principles of change of support are true, notwithstanding its origin in gold assay and averaging. I have studied and applied it comparing seismic attributes to well data.
Also interesting would be to compare different proxy measures of temperature and their variance and compare to the supposed resolution.
Don’t forget that notwithstanding variance change caused by change of support (ie scale or measurement), the fact that a measure is a proxy (and therefore not correlated with R=1 to the actual measurement) also reduces the variance. So the proxy temperature to modern temperature variance comparison suffers from (at least) a double whammy of variance reduction (smoothing) – once from resolution and once from it being a proxy (ie R<1).
“There is a method in geostatistics that uses the change (reduction) in variance with decreasing resolution to correct measures and their variance to different resolution. The method is usually referred to as change of support and there is a large body of literature on it.”
Thanks for the information. It’s surprising that climate scientists have not tried such an option or at least used a smoothing algorithm on instrumental data for comparison to the past.
There is a simple introduction in An Introduction to Applied Geostatistics by Isaaks and Srivastava. I have some nice training notes on it too, and how change of scale can be related to a variogram too. If moderators can pass my email to you and you email me I can send some material. Andy May also has my email somewhere I suspect so that might be another route.
As one whose colleagues did a deep dive into geostatistics starting about 1974, then applied it with success to ore resource calculations on some major new mines, I thought it was a major advance of scientific method. IIRC, I was advocating a geostatistics approach to temperature analysis to people like the Phil Jones group way before Climategate, but there were no takers. A decade ago on WUWT I was asking geoscientists then using geostatistics to come forward and help, but none did. Geoff S
I have been involved in geostatistics for 30 years, in petroleum geoscience. I have run many projects, designed and developed software and present numerous training courses. I taught geostatistics to MSc students at Imperial for 14 years (sadly wokeism has destroyed that course now).
I have been on WUWT for around 15 years, possibly longer.
I am an absolute novice at geostatistics. What I have read appears to be useful for relatively static and localized objects. I guess I’m not sure how a globally dynamic atmosphere with multivariable influences such as sun, rain, wind, humidity, lapse rate, etc. that vary with time frames of minutes or hours would be able to be analyzed using geostatistics methods. Do I have a misconception here?
Just for fun here’s a comparison of figure 2 above with the IPCC FAR figure 7.1
That graph is hugely different from IPCC graphs
It would be great to compare the data sets.
Remember, a few years back, the IPCC claimed the Little Ice Age was just a European event. But folks from around the world said their areas had also recorded a cold period.
That claim was buried
One needs to take care with the depth of research behind that Holocene temperature sketch, attributed to Lamb, used in FAR.
Steve McIntyre wrote several articles about it on Climate Audit. This link will get you to others.
The entire Climate Audit article is relevant to Renee’s article here and is recommended reading, a reminder that not all is new under the sun.
Very interesting. I think the exact same analysis could be done with CO2, considering the Antarctic ice-core data is smoothed with a multi-centennial averaging function, guaranteeing that if one went a few hundred years into the future and collected and processed the Antarctic data with the same process, today’s 400 ppm would not be visible.
Joos (2008) models how atmospheric greenhouse gas variations are attenuated during the enclosure of air into firn and ice, using a firn diffusion model. The current CO2 atmospheric spike would be portrayed as a broad 30-40 ppm excursion in the ice record. However, it is not smoothed a second time to match proxy sample spacing.
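As a crude numerical stand-in for that attenuation (a boxcar average, not a real firn diffusion kernel; all widths and values below are invented for illustration), smoothing a modern-style CO2 ramp over ~250 years flattens it into a broad excursion of a few tens of ppm:

```python
import numpy as np

# Invented annual CO2 series: flat at 280 ppm, then a ~140 ppm ramp over
# the final 170 years as a caricature of the modern rise.
co2 = np.full(2000, 280.0)
co2[-170:] += np.linspace(0.0, 140.0, 170)

# Crude stand-in for firn smoothing: a 250-year boxcar. Real firn
# diffusion kernels are not boxcars; this is illustrative only.
window = 250
smoothed = np.convolve(co2, np.ones(window) / window, mode="valid")

print(f"annual peak:   {co2.max():.0f} ppm")
print(f"smoothed peak: {smoothed.max():.0f} ppm")
```

The 140 ppm annual spike survives as an excursion of under 50 ppm – the same order as the broad excursion the firn model predicts.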
I wonder if it would be possible to derive an anomaly time series from the proxy data, apply spectral balancing to the anomalies to account for the smoothing, and then convert back to derive a proxy that is a more realistic comparison to the instrumental data?
To deconvolve past proxy data, especially global averages, would be very difficult. A simple start would be evaluating individual proxy data with higher resolution instead of a global average. Andy has a good post on using proxy data at different latitudes. https://andymaypetrophysicist.com/2021/06/23/how-to-compare-today-to-the-past/
Thanks, I am curious because in geophysical analysis of seismic data we have a similar problem of trying to match seismic data to well bore data, because of the lack of highs and lows in the seismic data relative to the borehole well log data. We use deconvolution and match filters to bring the seismic wavelets closer to the bandwidth of the well bore data, so volumetric seismic reflectivity estimation has a chance of working, and this seems like a situation where that workflow could be applicable.
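A minimal sketch of that deconvolution workflow, under the (unrealistic for proxies) assumption that the smoothing kernel is known exactly: a water-level regularised inverse filter in the frequency domain. The series, kernel, and water level are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4096

# Invented red-noise "climate" series and a known 64-sample smoother.
signal = np.cumsum(rng.normal(size=n)) * 0.01
kernel = np.ones(64) / 64

# Smooth by circular convolution in the frequency domain.
K = np.fft.rfft(kernel, n)
smoothed = np.fft.irfft(np.fft.rfft(signal) * K, n)

# Water-level regularised inverse filter: divide by K where |K| is large,
# clamp where |K| is near zero so noise is not blown up at the notches.
water = 0.1 * np.abs(K).max()
inv = np.conj(K) / np.maximum(np.abs(K) ** 2, water ** 2)
restored = np.fft.irfft(np.fft.rfft(smoothed) * inv, n)

print(f"rms error, smoothed vs true: {np.std(smoothed - signal):.4f}")
print(f"rms error, restored vs true: {np.std(restored - signal):.4f}")
```

For real proxies the kernel is uncertain and the noise substantial, so the water level (or an equivalent spectral-balancing operator) would have to be chosen far more conservatively – but the structure of the workflow is the same as the seismic-to-well matching described above.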
There is no chance that the world today is warmer than 6,500 years ago – or than anywhere near the oceans 12k years ago. The northern hemisphere changes were abrupt and big. 12k years ago warm oceans invaded the poles, warming the planet. Marine proxies show a peak about 12k years ago, declining from there. Most glaciers near the oceans disappeared from about 12k years ago, and it took about 5k years to melt the giant ice cubes that were the ice sheets covering the continents. Antarctica’s minimum, like the Ross Ice Shelf, was about 6k years ago. There was less permafrost, there were fewer glaciers, and alpine and arctic tree lines were higher – it all shows the same thing: warmer. There is zero chance.
A common error in speech and text. Geoff S
You say that instrumental temperature data has been around since 1850, about 170 years.
The Central England Temperature dataset, an instrumental record, began in 1659 and has continued to the present – about 360 years.
Would its inclusion change any of your conclusions?
The HadCET data is more representative of the Northern Hemisphere. For a 200-year duration, it appears there would be 30 more years of cooler data included within the mean, and the recent warming would still be smoothed to a lower average than its current decadal mean. Therefore, I do not believe it would change any of my observations.
In a reply to ThinkingScientist above, Tim Gorman says, “They’ve never heard of the rules for significant digits.”
Tim makes a very astute observation here. Let’s expand upon his remarks and take a hands-on approach to working with significant digits, a.k.a. significant figures.
My right hand has five fingers. The finger count for my right hand, which is 5, has only one significant digit.
My left hand also has five fingers. The finger count for my left hand, which is also 5, also has only one significant digit.
Taken together, my two hands have a total of ten fingers. But the total finger count, which is 10, has only one significant digit.
Among all these various counts, only figures containing one significant digit appear.
And so a question naturally arises ….. which of the digits among my ten fingers is the most significant digit?
I am right handed, so the digit which is the most likely candidate for being ‘most significant’ is probably associated with that hand.
Is it No 1, my thumb? Is it No.2, my index finger? Is it No. 3, my middle finger? Is it No. 4, my ring finger? Is it No. 5, my pinky?
Here is a list of arguments for each candidate competing for the title of Most Significant Digit.
— Digit No. 1, the thumb, because we as humans could not use tools without it.
— Digit No. 2, the index finger, because it is exceptionally useful for pushing buttons and for pointing fingers at guilty parties.
— Digit No. 3, the middle finger, because it is very effective for expressing anger and/or disdain at those whom we disagree with.
— Digit No. 4, the ring finger, because it indicates our marital status to those who might be interested, for whatever reason.
— Digit No. 5, the pinky, because it complements the use of the other four fingers in a variety of different circumstances.
In answering this vitally important question, a complication arises; i.e., which digit is most significant at any given point in time is contextually and situationally dependent.
For myself, I can only say that Digit No. 2, the index finger is most often my most significant digit.
Why is that?
Because I spend a lot of time pointing fingers at those in the nuclear industry who refuse to take responsibility for bringing their projects in on budget and on schedule.
More properly, the number of fingers that a person has are exact integers, with as many significant figures as necessary.
In general it is only the trigger finger that is significant.
Historical records show that the Medieval Warm Period (ca 1000 – 1300) was warmer than now (raising sheep in Greenland, etc.), while the Little Ice Age (ca. 1600 – 1750) was colder than now. There aren’t many instrumental records going back that far, but could the current slow warming be part of a natural cycle–the same cycle that caused the cooling from the Medieval Warm Period to the Little Ice Age, that has nothing to do with human CO2 emissions?
Besides, every average temperature includes a few extremely high and a few extremely low local temperatures, and the average for even this month includes blizzards in southern California, which are considered rare by people who have lived there for decades.
I don’t have the reference, but I do recall reading that some proxies had been found (ocean cores?) that showed large temperature shifts in time periods as short as 20 years at the boundaries between glacials and interglacials. These shifts were both plus and minus. Before then it was assumed such shifts would be gradual.
As such there is nothing unusual about current temperature changes as a warning sign that our interglacial is approaching the end.
Perhaps someone has a reference as I have seen it referenced at least twice.
What seems insane to me is that we do not have a value for Natural Climate Variability
We have the historical data and a ton of people getting paid for climate work. It seems to me that statistical methods should be able to solve for historical variance and Natural Climate Variability.
Looking at Paleo climate at different times scales it sure looks to me like some sort of self-similar fractal distribution. It is only when we look at the more recent reconstructions (5-10k years) that this distribution appears to break down.
This suggests to me that climate science may have improperly accounted for natural variability. However, I’m not on the list of people funded to find the problems. If I were, for a mere $500 million I would almost guarantee a result.
“It is hard to get someone to find something when their job depends on not finding it”
Historical variance? Climate Science won’t even properly handle the variance of daily temperatures, monthly averages, or annual averages – be they absolute temps or anomalies!
Lamb assessed more recent temperature fluctuations by looking at agricultural records, particularly viticulture. British Isles viticulture started in Roman times, disappeared in the subsequent cooling, came back in the MWP, then disappeared again in the LIA.
Viticulture is back but has not extended as far as the 2 previous northern limits.
Expansion then retraction of glaciers with carbon dating of ancient forests buried in moraine are frequently referred to by glaciologists and good indicators of more recent climate fluctuations.
“the IPCC states that around 6500 years ago temperatures ranged from 0.2°C to 1°C warmer relative to 1850–1900 pre-industrial period.”
In my opinion the IPCC is far from the truth. Recent studies (published 2020 and later) point towards much warmer periods during the early Holocene. The IPCC would be well advised to update its data:
I have taken a look at the 50 highest resolution proxies behind PAGES12k. The standard deviation across these proxies for a typical decade is around 0.7C. Obviously this does not take into account the smoothing effect of using linear interpolation between temperature data points, the accuracy of each method (reported as +/-1C) or the fact that all the other proxies behind major reconstructions have lower resolution.
This means that the actual standard deviation behind any claim based on the assumption of a stable past is >1C for any decade. Consequently we simply do not have the accuracy or resolution to claim either that anything unusual is happening or that the past was actually stable.
The IPCC’s currently claimed human-caused warming is 1.1 C, so we are within one standard deviation of the long-term trend.
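For what that combination is worth, if the inter-proxy spread and the stated method accuracy were independent 1-sigma terms (both of those are assumptions, not established facts), they would add in quadrature:

```python
import math

proxy_spread = 0.7   # C, inter-proxy standard deviation quoted above
method_sigma = 1.0   # C, stated proxy method accuracy, assumed to be 1-sigma

# Independent error terms combine in quadrature (root-sum-square).
combined = math.sqrt(proxy_spread**2 + method_sigma**2)
print(f"combined sigma ~ {combined:.2f} C")
```

That root-sum-square gives roughly 1.2 C, consistent with the claim that the effective decadal standard deviation exceeds 1 C.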