Temperature Feedback Follies

Is it really the hottest in 125,000 years, and if so, what does that imply?

By Chris Hall

The motivation for this article came from claims that this summer was the hottest in 125,000 years and the breathless fear surrounding them. Just skimming the news reports suggested to me that this claim rests on two main points: the assumption that climate is very stable and has not varied before recent anthropogenic forcing, and the assumption that the present deviation above normal temperature is so many standard deviations (sigma) above what is expected that it could not possibly have been matched or exceeded for 125,000 years.

The first assumption aligns with a “Hockey Stick” style paleotemperature reconstruction, in which natural temperature variability over the last millennium is tiny. There are several reconstructions like this, e.g. some of the flatter Temp12k records (Kaufman et al., 2020), along with the classic Hockey Stick (Figs. 1 and 2). The second assumption rests on the faith that the statistical properties of the paleoclimate temperature record have not changed at all over a very protracted time period.

Although I will not argue one way or the other about any particular paleotemperature reconstruction, I will point out that the 125,000 years quoted for our record-breaking temperatures comes from a little bit of sleight of hand. If you look at the Vostok ice core temperature record on the paleoclimate page of wattsupwiththat (Fig. 3), as soon as you go back about 12,000 years to the beginning of the Holocene, the temperature drops sharply into the depths of a severe glacial period, and you only get back to “normal” after you travel roughly 125,000 years back in time to the toasty Eemian. So, in reality, it’s not much of an achievement to be hotter than the vast canyon of the glacial freeze. That said, the question becomes: was 2023 the hottest year, and was August of 2023 the hottest month, in 12,000 years?

For the rest of this article, I will assume the unlikely case that temperature was extremely stable throughout the Holocene. Then, what statistical properties does the present-day instrumental temperature record possess, and what do they imply for claims of record temperatures? This led me to look at what they imply for climate feedback mechanisms, so stay tuned.

HadCRUT5 Global Monthly Temperature Anomalies: It’s what we have

I decided to look at the official temperature record for a century of instrumental data that precedes the bulk of the rise in CO2 from anthropogenic sources, i.e., 1850 to 1950. For this, the HadCRUT5 global monthly analyzed record seemed a reasonable pick. There are others out there, but they are highly correlated with each other and are based on the same raw data, such as it is. This data set is plotted in Fig. 4.

The mean of this part of the record is -0.3078 C, expressed as an anomaly with respect to a later part of the record, and the standard deviation is 0.2066 C. The maximum temperature of the entire global monthly record is from August of 2023, with an anomaly value of 1.3520 C, so it turns out that August was over 8 sigma above my 1850 to 1950 mean baseline. Wow! I’m guessing that a simple-minded extrapolation back in time would suggest that we would not have exceeded this scorching temperature during the Holocene.
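For anyone who wants to check that arithmetic, here is a minimal R sketch using only the summary numbers quoted above (the variable names are mine, not part of any package):

```r
# Sanity check of the "over 8 sigma" claim from the 1850-1950 summary statistics
baseline_mean <- -0.3078   # deg C anomaly, mean of the 1850-1950 HadCRUT5 record
baseline_sd   <-  0.2066   # deg C, standard deviation of the same period
aug_2023      <-  1.3520   # deg C anomaly, August 2023 (record maximum)

(aug_2023 - baseline_mean) / baseline_sd
#> [1] 8.033882
```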

SARIMA Land

This next section gets a bit heavy and can be skipped by anyone not wanting to get into the weeds of how I created simulated temperature records based on the statistical properties of the existing 1850-1950 temperature record. The approach fits a model that allows for autocorrelation within the record. The techniques used are popular with stock traders, and most of the machinery lives in the R library “forecast”. If this sort of thing isn’t very interesting, skip to the next section.

I wanted to see how autocorrelated my baseline temperature record is by using a Seasonal Auto Regressive Integrated Moving Average (SARIMA) model. The parameters for this type of model are usually given as (p,d,q) x (P,D,Q)m. Here p is the number of previous points in the series that a given data point is “regressed” on (i.e., correlated with), d is the number of differences to take to try to make the series resemble white noise (trust me, this is where the “integrated” part comes in), q is the number of previous model deviations (i.e., errors) to average, and m is the seasonal spacing, in this case 12 months. The capital letters mean the same things, but for points shifted by whole seasons rather than single data points. There are some very cool routines in R that let you find optimal factors and generate synthetic models, where one can do the fitting either manually or automatically.
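As a concrete illustration of the notation, here is a minimal sketch of specifying a seasonal model by hand with the forecast package. It assumes the 1850-1950 monthly anomalies are already loaded into a ts object, which I’ll call hadcrut (my name, nothing official), with frequency = 12:

```r
# Manually specify a (2,0,0) x (2,0,0)[12] SARIMA: p = 2, P = 2,
# no differencing, no moving-average terms, monthly seasonality
library(forecast)

fit_manual <- Arima(hadcrut,
                    order    = c(2, 0, 0),
                    seasonal = list(order = c(2, 0, 0), period = 12))
summary(fit_manual)   # coefficients, sigma^2, AIC, etc.
```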

One might be tempted to ask why a so-called global temperature record should have a seasonal component. Isn’t summer in the Northern Hemisphere winter in the Southern Hemisphere? Shouldn’t these cancel out any seasonality? I can think of at least two reasons why the two hemispheres don’t exactly cancel. First, the Northern Hemisphere has a lot more land than the Southern, which gives it a much larger seasonal temperature variation. Second, the Earth’s orbit is slightly elliptical, and in fact Northern Hemisphere summer occurs near aphelion (farthest from the Sun) while winter occurs near perihelion (closest to the Sun). This configuration is one of the main reasons why we are currently in an interglacial, because, thanks to a quirk of orbital mechanics, Northern Hemisphere summers are actually of longer duration than winters.

An important tool for teasing out the amount and type of autocorrelation that exists in a time series is the Partial Auto Correlation Function (PACF). The HadCRUT5 PACF plot is in Fig. 5a, and it shows that there is significant autocorrelation, along with a seasonal signal. The whole business of making a SARIMA model is to find factors for (p,d,q) x (P,D,Q)m that allow you to extract the model from the original signal, so that the residual left over is just an uncorrelated series of “white noise”. I played around with manually fitting the SARIMA parameters, but wound up using an automated fitting procedure for two different cases. Fig. 5b shows the PACF plot of the residual for the case where the “d” parameter was constrained to be zero, and the automated routine came up with (2,0,0) x (2,0,0). The standard deviation of the residual with this model was 0.128 degrees C. A slightly better fit was achieved when “d” was not constrained, and its residual standard deviation was 0.126 degrees C (Fig. 5c). Both models give residuals that reasonably mimic white noise.
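A sketch of how those two automated fits might be produced with forecast’s auto.arima routine (again assuming the hypothetical hadcrut ts object from above; the exact orders it selects can vary slightly with package version and search settings):

```r
library(forecast)

# Case 1: constrain d = 0 (and D = 0) so the model stays anchored to its mean
fit_anchored <- auto.arima(hadcrut, d = 0, D = 0)

# Case 2: let the routine choose the differencing order for itself
fit_free <- auto.arima(hadcrut)

# White-noise check: the residual PACF should show no significant spikes,
# and the residual standard deviations should be roughly 0.13 deg C
Pacf(residuals(fit_anchored))
sd(residuals(fit_anchored))
sd(residuals(fit_free))
```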

Control Knobs: to “d” or not to “d”, that is the question

[Image: Spinal Tap amplifier with knobs that go up to eleven]

The white noise residuals that result from modeling the temperature time series are the random, chaotic background noise of the climate. They are likely the result of volcanoes, oceanic eddies, solar activity, rice paddy belches, and the chaotic flapping of manic butterflies. Whatever the cause, it seems that the Earth’s temperature record chaotically bounces up and down by roughly 1/8 of a degree Celsius each month, and that variability is not autocorrelated and does not depend on the season. The important question is how the two statistical models derived above behave over a protracted period of time.

In Fig. 6, I show the results of two simulations that each run for 1,000 years. In the case of the model shown in Fig. 5c, we have a classic “random walk” time series. For a random walk, the series is not tied to a specific “set point” (SP), and it can blithely wander up or down or oscillate back and forth. This sort of behavior is very closely related to the physical process of diffusion, and the average distance from the original starting point, here assumed to be a temperature anomaly of zero, increases as the square root of time. In essence, this kind of time series lacks any sort of negative feedback that tethers the temperature to a particular SP. This behavior is incompatible with proxy temperature records that purport to show no significant change in temperature for centuries or millennia.

The model shown in Fig. 5b, however, is perfect for those who claim that the global temperature has not varied significantly over a protracted period of time. In this case, although the temperature oscillates about zero, its average deviation from that SP does not increase with time. This indicates that there is a built-in set of negative feedbacks that keeps the series close to the SP. It is this type of time series that I will examine in more detail.
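For readers who want to reproduce something like Fig. 6, here is a sketch of a 1,000-year (12,000-month) simulation from each fitted model, using forecast’s simulate() method on the hypothetical fit_anchored and fit_free objects from the earlier sketch:

```r
library(forecast)
set.seed(42)

n_months <- 12 * 1000

sim_free     <- simulate(fit_free,     nsim = n_months)  # random-walk-like case (Fig. 5c)
sim_anchored <- simulate(fit_anchored, nsim = n_months)  # mean-reverting case (Fig. 5b)

# The freely differenced model can drift far from zero; the d = 0 model cannot
plot(sim_free, ylab = "Temperature anomaly (deg C)")
lines(sim_anchored, col = "red")
```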

SARIMAPID: why look at temperature feedback?

I know what you’re saying: but Steve, why look at temperature feedback? Surely all the important feedbacks operate on the myriad control knobs controlling the climate and not directly on temperature. And you’d be right, except for one very important control knob, CO2. In the case of carbon dioxide, the direct climate sensitivity to a doubling of its concentration in the atmosphere is somewhere in the vicinity of 1.5°C. However, the truly scary consequences of driving your SUV only come about when you add in the assumed positive feedback from increased water vapor in the atmosphere, and that positive feedback operates via the mechanism of temperature itself. Increase temperature and you get more water vapor, leading to higher temperature. Cool down and you get less water vapor, which makes things cooler still. Since the feedback mechanism is temperature itself, any perturbation of temperature, whether it’s from bovine flatulence or butterfly wings, should exhibit this feedback.

To examine the effect of feedback on a simulated temperature record, I tacked a simulated Proportional-Integral-Derivative (PID) controller onto the end of the SARIMA simulation. I’ve worked with PID controllers for many decades while trying to hold laboratory sample temperatures at a particular SP, for temperatures ranging from 10 K to 1700 K. Although these thermal regimes often exhibit non-linear behaviors, and one might think that an inherently linear control system would not work, in practice one chops the temperature range into smaller, nearly linear regions, where the controller works quite well. Here, I’m assuming that temperature offsets within a few degrees of a global mean temperature of roughly 288 K are “linear enough” for a PID controller.

The “P” value is a negative feedback term that linearly scales the response based on the current offset from the desired SP. The “I” term is used to wipe out small persistent errors by integrating the difference between the actual temperature and the SP over time. The “D” parameter is used to damp out large overshoots by looking at the derivative of the approach to the SP. Since temperature derivatives are often noisy, the D parameter is frequently not needed for well-behaved systems. Positive values for P and I indicate negative feedback. If any of you have a high-end wood pellet grill, then you too probably have a PID controller.
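For the curious, here is a minimal sketch of one discrete PID update in R; the gain names Kp, Ki and Kd, the time step dt, and the state bookkeeping are illustrative choices of mine, not anything taken from the article’s own code:

```r
# One discrete PID correction step: returns the feedback correction to apply
# and the updated controller state (integral of error and previous error)
pid_step <- function(temp, SP, state, Kp, Ki = 0, Kd = 0, dt = 1) {
  error      <- temp - SP
  state$int  <- state$int + error * dt          # running integral of the error
  derivative <- (error - state$prev) / dt       # rate of approach to the SP
  state$prev <- error
  correction <- -(Kp * error + Ki * state$int + Kd * derivative)  # negative feedback
  list(correction = correction, state = state)
}

# Usage: start with state <- list(int = 0, prev = 0), call pid_step() each month,
# and add the returned correction to the next simulated temperature
```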

For the purposes of this article, I implemented only P, or proportional control, and left the I and D parameters at zero. Specifically, I implemented:

            T_i = T_SARIMA,i – P × (T_(i-1) – SP)

Note that, from this point onward, each new SARIMA step is computed from previous values that already include both the SARIMA dynamics and any feedback corrections.
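Here is a minimal sketch of that loop in R. Note the simplifications: the seasonal structure is approximated additively with AR terms at lags 1, 2, 12 and 24, the AR coefficients and the 0.128 C innovation standard deviation are placeholder values of mine rather than the author’s fitted coefficients, and only the P term of the controller is used:

```r
# Simulate a feedback-corrected series: each step builds on the previous
# corrected temperatures, so the feedback feeds back into the SARIMA recursion
simulate_with_P <- function(n_months, P, SP = 0,
                            ar = c(0.45, 0.15), sar = c(0.15, 0.10),
                            innov_sd = 0.128) {
  temps <- rep(SP, n_months)
  for (i in 25:n_months) {                       # leave room for the 24-month lag
    sarima_val <- ar[1]  * temps[i - 1]  + ar[2]  * temps[i - 2] +
                  sar[1] * temps[i - 12] + sar[2] * temps[i - 24] +
                  rnorm(1, sd = innov_sd)        # white-noise innovation
    # T_i = T_SARIMA,i - P * (T_(i-1) - SP): proportional correction only
    temps[i] <- sarima_val - P * (temps[i - 1] - SP)
  }
  temps
}

# Example: a 1,000-year run with mild negative feedback
sim <- simulate_with_P(12 * 1000, P = 0.05)
max(abs(sim))   # largest excursion from the SP of zero
```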

Just for fun, I wanted to see how much negative feedback would be needed to force the random walk model of Fig. 5c to become tethered to an SP of zero. It turns out that a P value of only about 1×10⁻³ degrees per month is enough to tame the randomly wandering beast. However, at least some negative feedback is necessary to keep the deviation from an initial value of zero from increasing monotonically with time.

The model of Fig. 5b is much more closely anchored to the SP of a zero-degree temperature anomaly, and therefore we should expect it to take much more feedback to move this kind of time series away from the no-PID-control case, because there is already a lot of negative feedback built into the model. I show the results of exploring the effects of additional proportional feedback in Fig. 7, which plots the maximal deviation from zero for a range of 1,000-year simulations. The deviations are scaled in terms of standard deviations (sigma), where the zero-feedback standard deviation is about 0.1748 degrees C. When P is positive, you have negative feedback, and when it is negative, you have positive feedback. For the zero-feedback case, one can expect about a 4 sigma maximal deviation over the 12,000 months of the simulation. As negative feedback increases in magnitude, the maximal deviation decreases to about 3 sigma.
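A sketch of the kind of sweep that lies behind Fig. 7, reusing the simulate_with_P() stand-in from the previous sketch (so the numbers will only be qualitatively comparable to the author’s):

```r
set.seed(1)
n_months <- 12 * 1000

# Zero-feedback standard deviation, used to scale the deviations into sigma units
sigma0 <- sd(simulate_with_P(n_months, P = 0))

# Positive P = negative feedback, negative P = positive feedback
P_values <- seq(-0.15, 0.50, by = 0.05)

max_dev_sigma <- sapply(P_values, function(P) {
  max(abs(simulate_with_P(n_months, P = P))) / sigma0
})

plot(P_values, max_dev_sigma, type = "b",
     xlab = "Proportional feedback P",
     ylab = "Maximal deviation from SP (sigma)")
```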

However, Fig. 7 also illustrates something that your mother probably taught you: too much of anything can be bad. When you apply extreme negative feedback, you see the onset of a phenomenon often referred to as “hunting”, where the feedback starts over-correcting, which leads to larger and larger oscillations. This kind of behavior kicks in even sooner when you have positive feedback, where any perturbation of the system gets magnified. In fact, the system completely blows up when the magnitude of a negative P value exceeds 0.19.

Conclusion

What this tells me is that there cannot be very high positive temperature feedback within the climate system if the “normal” or pre-industrial temperature record is truly flat. It is possible that there is a delayed impact of a water vapor increase following a rise in temperature, but this could be accounted for using the “I” parameter of a PID controller, and that parameter can introduce instabilities just as easily as the P parameter. On top of that, if the atmosphere above the oceans warms by an average of a few tenths of a degree, why would it take more than a month for the percentage of water vapor in the atmosphere to rise? Basically, my point is that if a rise in CO2 produces a 1 K rise in temperature, and that actually causes a 2 K rise because of positive feedback, then any perturbation of temperature, for any reason, should also be magnified by that positive feedback.

Of course, some, or possibly most, of the “noise” in our existing temperature record may be due to measurement or instrumental noise. If that’s the case, then all that changes in this story is the magnitude of the white noise component. General temperature feedback still needs to be considered in any climate model if one simultaneously wants to increase the equilibrium climate sensitivity of carbon dioxide via the mechanism of positive temperature feedback.

Reference

Kaufman, D., McKay, N., Routson, C., Erb, M., Davis, B., Heiri, O., Jaccard, S., Tierney, J., Dätwyler, C., Axford, Y. and Brussel, T., 2020. A global database of Holocene paleotemperature records. Scientific data, 7(1), p.115.
