CO2 and Temperature

By Andy May

I had a very interesting online discussion about CO2 and temperature with Tinus Pulles, a retired Dutch environmental scientist. To read the whole discussion, go to the comments at the end of this post. He presented me with a graphic from Dr. Robert Rohde, posted on Twitter, that you can find here. It is also plotted below, as Figure 1.

Figure 1. Robert Rohde’s plot of CO2 versus global temperature and a logarithmic fit.

Rohde doesn’t tell us which temperature record he is using, nor does he specify the base of his logarithm. Figure 2 is a plot of the HadCRUT5 temperature anomaly versus the logarithm, base 2, of the CO2 concentration. It is well known that temperature increases as the CO2 concentration doubles, so the logarithm to the base 2 is appropriate. When the log, base 2, goes up by one, the CO2 concentration has doubled.
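The base-2 arithmetic is easy to check with a few lines of Python. This is just an illustration of the logarithm, not Rohde's data; the 280 ppm pre-industrial figure is a commonly cited round number used here only for the example.

```python
import math

# Each doubling of the CO2 concentration adds exactly 1 to log2(CO2).
for ppm in (280, 420, 560, 1120):
    print(f"{ppm:>5} ppm -> log2 = {math.log2(ppm):.3f}")

# 560 ppm is one doubling of 280 ppm, so the log2 values differ by 1:
doublings = math.log2(560) - math.log2(280)
print(f"280 -> 560 ppm is {doublings:.1f} doubling")
```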

Figure 2. The orange line is Log2CO2 (right-hand scale). The multicolored line is the HadCRUT5 land plus ocean global surface temperature record (left-hand scale). The different colors identify the periods shown in the legend.

In Figure 2 we can see that from 1980 to 2000 the relationship between CO2 and temperature is close to what we expect; from 2000 to today, warming is a bit faster than the change in CO2 would predict. From 1850 to 1910 and from 1944 to 1976, temperatures fall while CO2 increases. From 1910 to 1944, temperatures rise much faster than changes in the CO2 concentration can explain. These anomalies suggest other forces are at work that are as strong as CO2-based warming.

Figure 3 is just like Figure 2, but it uses the older, non-infilled HadCRUT4 land plus ocean temperature record.

Figure 3. HadCRUT4 and NASA CO2. Unlike Figure 2, this record shows the pause in warming from 2000 to 2014.

The HadCRUT4 record is not infilled; it contains only actual data in sufficiently populated grid cells, and it shows the well-known pause in warming from 2000 to 2014, shown in green. Compare the green region in Figure 3 to the same region in Figure 2. They are quite different, even though they use essentially the same data.

So, with that background, let’s look at a plot like Robert Rohde’s. Our version is shown in Figure 4. The various periods being discussed are coded in the same colors as in Figures 2 and 3.

Figure 4. Our version of Robert Rohde’s plot. We use the HadCRUT5 temperatures and NASA CO2. Note NASA’s CO2 record reverses from 1941-1950. This makes the plot look funny.

The R2 (the coefficient of determination) between Log2CO2 and temperature is 0.87, which is respectable, although formal significance tests are unreliable for autocorrelated time series like these. Here we need to be careful, because correlation does not imply causation, as the old saying goes. Further, if CO2 is the “control knob” for global warming (Lacis, Schmidt, Rind, & Ruedy, 2010), then how do we explain the periods when the Earth cooled? The IPCC AR6 report also claims that CO2 is the control knob of global warming on page 1-41, where they write:

“As a result, non-condensing greenhouse gases with much longer residence times serve as ‘control knobs’, regulating planetary temperature, with water vapour concentrations as a feedback effect (Lacis et al., 2010, 2013). The most important of these non-condensing gases is carbon dioxide (a positive driver)”

AR6, p. 1-41

Jamal Munshi compares the correlation between temperature and CO2 to the correlation between CO2 and homicides in England and shows the homicides correlate better (Munshi, 2018). Spurious correlations occur all the time and we need to be wary of them. They are particularly common in time series data, such as climate records. Munshi concludes that there is “insufficient statistical rigor in [climate] research.”
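Munshi's point is easy to reproduce: any two series that share nothing but a trend will correlate strongly. A toy sketch with purely synthetic data (no real climate or homicide figures involved):

```python
import math
import random

# Two series that share nothing but an upward trend still correlate
# strongly -- the spurious-correlation trap in trending time series.
random.seed(1)
n = 100
a = [0.10 * t + random.gauss(0, 1) for t in range(n)]  # trend + noise
b = [0.08 * t + random.gauss(0, 1) for t in range(n)]  # unrelated trend + noise

ma, mb = sum(a) / n, sum(b) / n
cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
sa = math.sqrt(sum((x - ma) ** 2 for x in a) / n)
sb = math.sqrt(sum((y - mb) ** 2 for y in b) / n)
r = cov / (sa * sb)
print(f"correlation of two unrelated trending series: r = {r:.2f}")
```

Detrending or differencing both series before correlating them is the standard guard against this trap.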

Figure 5 shows the same plot, but using the older HadCRUT4 record, which uses almost the same data as HadCRUT5, but empty cells in the grid are not infilled.

Figure 5. The same plot of Log2CO2 versus temperature but using the HadCRUT4 record.

In Figure 5 the coefficient of determination is lower, about 0.84. This record has the same problem of temperature trends reversing while CO2 increases. HadCRUT4 shows the pause better than HadCRUT5, but, oddly, its trend is a better match to the CO2 concentration.
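For reference, R2 values like those quoted for Figures 4 and 5 come from an ordinary least-squares fit of the temperature anomaly against Log2CO2. A minimal sketch of that computation, with made-up numbers standing in for the HadCRUT and NASA series:

```python
import math

# Hypothetical CO2 (ppm) and temperature anomaly (deg C) pairs --
# illustrative values only, NOT the HadCRUT5/NASA data.
co2 = [290, 300, 310, 320, 340, 360, 380, 400, 415]
temp = [-0.3, -0.25, -0.1, -0.05, 0.0, 0.2, 0.4, 0.6, 0.8]

x = [math.log2(c) for c in co2]
n = len(x)
mx, my = sum(x) / n, sum(temp) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, temp))
b = sxy / sxx            # slope: degrees C per doubling of CO2
a = my - b * mx          # intercept
pred = [a + b * xi for xi in x]
ss_res = sum((yi - pi) ** 2 for yi, pi in zip(temp, pred))
ss_tot = sum((yi - my) ** 2 for yi in temp)
r2 = 1 - ss_res / ss_tot  # coefficient of determination
print(f"slope = {b:.2f} C/doubling, R^2 = {r2:.2f}")
```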

Conclusion


I’m not impressed with Rohde’s display. The coefficient of determination is decent, but it does not show that warming is controlled by changes in CO2, because the temperature reversals are left unexplained. The reversals strongly suggest that natural forces are playing a significant role in the warming and can reverse the influence of CO2. The plots show that, at most, CO2 explains about 50% of the warming; something else, like solar changes, must be causing the reversals. If natural forces can reverse and overwhelm CO2-based warming, they are at least as strong as CO2.

Works Cited

Lacis, A., Schmidt, G. A., Rind, D., & Ruedy, R. (2010, October 15). Atmospheric CO2: Principal Control Knob Governing Earth’s Temperature. Science, 330(6002), 356-359. Retrieved from https://science.sciencemag.org/content/330/6002/356.abstract

Munshi, J. (2018, May). The Charney Sensitivity of Homicides to Atmospheric CO2: A Parody. SSRN.



253 Comments
Rudi
November 9, 2021 2:24 am

There is something strange about the temperature data showing relatively cold years in the 1920s. That was a time when glaciers were melting and temperatures were relatively warm, according to the news archives.

Tom Abbott
Reply to  Rudi
November 9, 2021 8:56 am

The temperatures warmed from 1910 to 1940 at the same rate of warming that took place from 1980 to the present, so it should be expected that glaciers would melt a little bit during that time period.

David H
November 9, 2021 3:43 am

The time period is too short. For climate it should be longer.

Clyde Spencer
Reply to  David H
November 9, 2021 10:05 am

Thank you for your unsupported opinion.

November 9, 2021 3:56 am

FWIW, if you assumed the long term trend is driven by CO2 & shorter term variations are driven by “other forcings”(not saying this is necessarily the case but a thought experiment … please read on for the implications), the slope in the equations is some combination of TCR & ECS, but given the length of the correlation, probably more dominantly ECS. The 2 correlations would imply a CO2 sensitivity in the ~ 2.0 to 2.3°C/doubling range.

Note that this is very much within the “business as usual” range – not the climate catastrophe range. Whether you accept this assumption or not, it can certainly be used to argue that we don’t need radical changes in our energy systems, social systems etc. This is an opportunity to point out that “inconvenient” observation to those wanting radical and wasteful transformations and in general more control over your daily life choices. Never miss an opportunity to do that.
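The slope-to-sensitivity arithmetic in the comment above can be sketched directly: if temperature is modeled as T = a + b·log2(CO2), the fitted slope b is itself the implied warming per doubling. The 2.0 °C/doubling used below is just the low end of the quoted range, not a fitted result:

```python
import math

def warming(b, c0, c1):
    """Warming implied by slope b (deg C per doubling) as CO2 goes from c0 to c1 ppm."""
    return b * (math.log2(c1) - math.log2(c0))

# One full doubling (280 -> 560 ppm) yields exactly b degrees:
full = warming(2.0, 280, 560)
# 280 -> 420 ppm is log2(420/280) ~ 0.585 of a doubling:
partial = warming(2.0, 280, 420)
print(f"full doubling: {full:.2f} C, 280 -> 420 ppm: {partial:.2f} C")
```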

josh scandlen
November 9, 2021 5:06 am

“The reversals strongly suggest that natural forces are playing a significant role in the warming and can reverse the influence of CO2.”

100%. Once you see the tipping point for the fallacy it is, you can’t unsee it. And then it’s off to the races. In some ways, I wish I was blissfully ignorant like these other sheep and just went along for the ride.

Oddgeir
November 9, 2021 5:14 am

“It is well known that temperature increases as the CO2 concentration doubles,”

Rewrites to

It is well known that (atmospheric) CO2 increases as the temperature increases.

Ref Henry, Dalton.

Oddgeir

Captain climate
November 9, 2021 5:21 am

I did a similar scatterplot and added in the Paris agreement assumptions, the CMIP6 ECS stats, and the implied trend in temperature at the year 2100 based off the exponential increases in CO2 and the observed relationship between log CO2 and global average temperature. You can see that the models are insane in one picture OR that any extreme warming will happen well after all of us are dead and gone. And who knows what will happen with orbital forcings etc.


Bac Si
November 9, 2021 5:22 am

450 million years ago, during the Late Ordovician period, the CO2 level on this planet was 5,000 parts per million. A Global Warming fanatic would naturally assume that with that level of CO2 in the air, the planet must have been so hot that lead would remain melted on the surface. In fact, the planet was so cold that a 30-million-year-long Ice Age started. It’s called the Andean-Saharan Ice Age. And now the Global Warming fanatics are worried that life will end in 11 years because the CO2 level is a tiny 412 ppm. Doesn’t anyone bother to look this stuff up? Does everyone remember their grade school science class? That’s where we all learned that while the Pleistocene Ice Age was ending and all that ice was melting, man walked from Asia to the North American Continent. That means that the earth started warming up 30,000 years ago, not 100 years ago. There were no factories, SUVs, or people burning millions of gallons of oil for electricity. There isn’t a whit of difference between the politicians of today screaming that we must stop Global Warming and the politicians of hundreds of years ago screaming that we had to appease the volcano god. The money that the world wastes on that nonsense, instead of solving real problems, makes me ill.

Tad
November 9, 2021 5:25 am

What’s the time delay between adding CO2 to the environment and average temp changes? Surely there is a significant lag on a planetary scale if there is causation? Even my frying pan doesn’t heat up instantly. From an energy budget perspective even if CO2 does increase energy retention rate, what timescale is needed before it perceptibly alters surface temps?

November 9, 2021 5:27 am

Andy,

The first thing I would like to see is a time series analysis on both of these. An expanding variance can make an increasing trend and should be eliminated.

Secondly, this analysis uses a median temperature composed of an average of Tmax and Tmin. I look askance at trying to make a definite conclusion based on this. I think a better comparison is CO2 vs Tmax and CO2 vs Tmin. If CO2 is actually a control knob, then both temps should rise and fall with CO2 concentration. I suspect, the only correlation is with Tmin.

Reply to  Jim Gorman
November 9, 2021 9:55 am

Here’s the correlation with TMax using BEST.

r^2 = 0.82, p-value is minimum, i.e. less than 2e-16.

20211109wuwt2.png
Reply to  Bellman
November 9, 2021 9:56 am

And here’s the same for TMin

r^2 = 0.88, p-value is minimum.

20211109wuwt3.png
Reply to  Bellman
November 13, 2021 4:07 am

Where are your uncertainty bars? The values from 1950 on range from about 0 to +1.5. Even assuming a +/- 0.6C uncertainty (which is WAY underestimating the uncertainty of the baseline) the uncertainty bars would black out most of the graph from 1950 on. Meaning the trend line has no basis in fact.

If you *really* want to see what is happening to Tmin then look at the number of days to first frost. Not much uncertainty in those values.

Reply to  Tim Gorman
November 14, 2021 11:20 am

Here you go, using the BEST 95% estimate of uncertainty.

20211114wuwt1.png
Reply to  Clyde Spencer
November 9, 2021 1:19 pm

Nice article! I like the look at temps in the past.

Reply to  Jim Gorman
November 10, 2021 12:13 am

Secondly, this analysis uses a median temperature composed of an average of Tmax and Tmin

This is a persistent and profound misunderstanding on your side. We are using the mean not the median, but anyway your method does not describe even how the median is calculated. Neither the mean.

If CO2 is actually a control knob, then both temps should rise and fall with CO2 concentration

This is not true either. Extremes may show strange behaviour. You can have a persistent warming even with long periods of simultaneously decreasing max and min temperatures. The mean is a much better indicator. FYI the extremes also show increase, see Bellman’s graphs.

Reply to  nyolci
November 10, 2021 9:47 am

“We are using the mean not the median, but anyway your method does not describe even how the median is calculated. Neither the mean.”

Define “mean” and Median” when using two numbers.

Additionally define the “mean” and “median” of two points in a sinusoidal like waveform like a daily temperature especially when one value is a “max +” and the other is a “min -“. The last I knew that mean would be zero on a perfect sine! Since the temps during day and night are not perfect sines, there will be a median value for both halves of the wave resulting in a “median” that is different than the “mean”.

Reply to  nyolci
November 10, 2021 9:56 am

“You can have a persistent warming even with long periods of simultaneously decreasing max and min temperatures. ”

You might want to think about this a little more. Is this something like a pause?

Reply to  Jim Gorman
November 10, 2021 3:02 pm

Define “mean” and Median” when using two numbers.

We are not using two numbers.

one value is a “max +” and the other is a “min -“

You’re somehow obsessed with these two points. The mean is calculated by averaging all the measurements in the period/location chosen. The min and max has no special role here. The median is calculated by ordering all the measurements and picking the “middle one” (a value that’s lower than the higher half and higher than the lower half). The min and max has no special role here either. They neatly fall into the “low” and the “high” set, respectively. But the median is not really used, it has undesirable properties.
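The distinction being argued here is easy to see with Python's statistics module and a handful of made-up readings: the median is the middle of the sorted values, the mean is the arithmetic average, and the (Tmax+Tmin)/2 mid-range is a third, different quantity.

```python
import statistics

# Hypothetical sub-daily temperature readings (illustrative numbers only).
readings = [12.1, 11.8, 13.4, 15.9, 18.2, 19.5, 17.0, 14.2]

# Mean: arithmetic average of all readings; min and max have no special role.
print("mean     :", round(statistics.mean(readings), 2))
# Median: middle value after sorting (average of the two middle values here).
print("median   :", round(statistics.median(readings), 2))
# Mid-range: what (Tmax + Tmin) / 2 actually computes.
print("mid-range:", round((max(readings) + min(readings)) / 2, 2))
```

The three numbers differ for the same data, which is the crux of the disagreement in this thread.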

Is this something like a pause?

No. That’s something different, furthermore, I was talking about a mathematical possibility (with an increasing mean not a pausing one). But the fact is that extremes may show counterintuitive behaviour (especially if the location/time period is limited).

Reply to  nyolci
November 10, 2021 5:30 pm

“We are not using two numbers.”

Just what in blazes do you think a daily average is? What are monthly averages built from? IT ALL BEGINS WITH TWO NUMBERS EACH AND EVERY DAY.

“The min and max has no special role here either. They neatly fall into the “low” and the “high” set, respectively.”

I don’t even know what you are trying to say here. I don’t think you do either.

The min and max do NOT fall neatly into low and high set. A daily average doesn’t even provide a range. You call it a mean, what is the standard deviation of that mean of a day?

Can you combine that mean directly with another mean has a different standard deviation? Do the variances add when you do that?

Quoting a mean with no other statistical parameters is scientifically
inappropriate behavior.

The fact that you don’t recognize any of this is simply flabbergasting.

Reply to  Jim Gorman
November 11, 2021 12:32 am

IT ALL BEGINS WITH TWO NUMBERS EACH AND EVERY DAY.

No. You have x daily measurements at a station and you use that.

I don’t even know what you are trying to say here.

Please go and read how the median is calculated. You may be able to understand it then.

The min and max do NOT fall neatly into low and high set. A daily average doesn’t even provide a range

Huh, you seem to have messed this up… At that point I was talking about how the median (not the mean) is calculated from the daily data points.

Can you combine that mean directly with another mean has a different standard deviation? Do the variances add when you do that?

Actually, you can. You can have a composite mean. You have to apply proper weighting, not straightforward but you can. The variances behave as with the mean, they don’t add up. (I write this down with a little bit of fear knowing your tendency to completely misunderstand even the simplest thing.)

Quoting a mean with no other statistical parameters is scientifically inappropriate behavior.

These are always published in scientific publications.

Reply to  nyolci
November 11, 2021 6:22 am

“No. You have x daily measurements at a station and you use that.”

Exactly what do you think Tmax and Tmin are, multiple measurements in a day? They are one temperature each, the warmest and the coolest. I believe that is two measurements per day and an “average” is calculated from those two measurements. In your vernacular, “x = 2”! Granted the latest measuring systems use information from short time intervals (seconds) to obtain these figures but they are still just two figures.

“Please go and read how the median is calculated. You may be able to understand it then.”

I assure you I know how medians are calculated. Please read the following and pay attention to the section on temperatures. Ask yourself why we are doing grade school averaging on temperatures when better mathematical treatments are available. And remember these are monthly temps. They are made up of periodic wave functions throughout each day of the month! Have you ever heard of Fourier or wavelet analysis?

https://courses.lumenlearning.com/precalctwo/chapter/modeling-with-trigonometric-equations/

“Actually, you can. You can have a composite mean. You have to apply proper weighting, not straightforward but you can. The variances behave as with the mean, they don’t add up. (I write this down with a little bit of fear knowing your tendency to completely misunderstand even the simplest thing.)”

Yes you can find a composite mean. Using monthly figures it is pretty simple. However variances do add when combining populations. Read the following:

https://www.khanacademy.org/math/ap-statistics/random-variables-ap/combining-random-variables/a/combining-random-variables-article

https://apcentral.collegeboard.org/courses/ap-statistics/classroom-resources/why-variances-add-and-why-it-matters

Ask yourself what is the variance of a monthly temperature when combining 30 or 31 individual random variables of daily average temperatures. Do you need to know the daily variances? What is the variance in an annual temperature? How about when thousands of stations are combined to calculate a GAT?

“These are always published in scientific publications.”

Really! Show us a publication that shows what the variance of each monthly GAT is. Not the fake uncertainty number, but the actual variance in the data.

Reply to  Jim Gorman
November 11, 2021 5:01 pm

I believe that is two measurements per day and an “average” is calculated from those two measurements

You are wrong.

I assure you I know how medians are calculated.

You are mixing up median and mean.

Have you ever heard of Fourier or wavelet analysis?

Short answer is “Yes”. The long answer is “MSc of EE”.

However variances do add when combining populations.

This is your obsession and a complete misunderstanding of the subject.

Reply to  nyolci
November 12, 2021 7:24 am

I believe that is two measurements per day and an “average” is calculated from those two measurements

“You are wrong.”

As he says with no backup at all. I’ll ask what you call [(Tmax +Tmin) / 2 ]?

I assure you I know how medians are calculated.

“You are mixing up median and mean.”

Again, as he says with no back up.

However variances do add when combining populations.

“This is your obsession and a complete misunderstanding of the subject.”

Yes, it is my obsession! Quoting arithmetic means of any kind without also quoting a variance or standard deviation is scientific obfuscation.

Covering up statistical details borders on fraud. As an engineer do you ever quote measurements without also quoting the uncertainty and/or accuracy of the measurement? Do you quote measurements to more precision than was actually measured? I certainly hope not. As a fellow engineer I would have thought you would be obsessive with correct and ethical practices also.

As to my misunderstanding, again, statements with no explanation are worth nothing. I have shown links to places on the web that explain variance and my understanding. You have posted nothing, nor have you written your explanation of why adding variances is not a prerequisite for an accurate description.

Reply to  Jim Gorman
November 12, 2021 12:34 pm

Quoting arithmetic means of any kind without also quoting a variance or standard deviation is scientific obfuscation.

??? This is hilarious. We, the non-deniers are always pointing out that (contrary to your assertion) the arithmetic mean’s variance is much less than the variance of the individual measurements. This is what we are insisting on.

As an engineer do you ever quote measurements

??? I’m not quoting any measurements here. I’m referring to you to the scientific literature. Please try to read at least a single article at last, okay, instead of bsing. FYI the others (Bellman, bdgwx, Banton) do quote measurements and they give errors most of the time, or when they don’t, they provide it if asked.

I have shown links to places on the web that explain variance and my understanding.

These directly contradict your assertions (or, rarely, irrelevant).

Reply to  nyolci
November 12, 2021 3:26 pm

“the arithmetic mean’s variance is much less than the variance of the individual measurements. This is what we are insisting on.”

It is the variance of the individual measurements that determine the uncertainty of the mean. The variance is a measure of the interval in which the next measurement may lie. The variance of the mean does *NOT* tell you anything about where the next measurement may lie.

There are THREE factors that describe measurements used in a data set.

  1. The measurement process must be independent. The next measurement process cannot be influenced by previous measurement processes.
  2. The measurement values must represent a random value.
  3. The measurement values can be dependent or independent.

a. Multiple measurements of the same thing are dependent; they depend on the single measurand. Each measurement has an EXPECTED value, known as the “true value” of the measurand. The measurements create a random distribution around the true value that can be analyzed using statistical tools.
b. Multiple measurements of different things do *NOT* result in a random distribution around a true value. None of the measurements have an EXPECTED value. Each measurement represents a totally independent value (i.e. a random variable with a population size of 1) with an uncertainty interval. That uncertainty interval is analogous to the variance of a population with more than one member.

It is a statistical truism that when combining independent, random variables that the variance of the combination is the sum
of the variances of each independent, random variable.

V_total = V_1 + V_2 + …. + V_n

This is why the uncertainty of the multiple measurements of different things have their uncertainties add. That uncertainty carries over to the mean that you calculate. The standard error of the mean is *NOT* the same thing as the variance of the population.
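The V_total rule quoted above is straightforward to check numerically. A Monte Carlo sketch with synthetic Gaussian draws (nothing to do with actual temperature data), showing that for independent random variables the variance of a sum is the sum of the variances:

```python
import random

# Monte Carlo check: for independent X and Y, Var(X + Y) = Var(X) + Var(Y).
random.seed(42)
n = 100_000
x = [random.gauss(0, 2) for _ in range(n)]   # Var(X) = 4
y = [random.gauss(0, 3) for _ in range(n)]   # Var(Y) = 9

def var(v):
    """Population variance of a list of samples."""
    m = sum(v) / len(v)
    return sum((u - m) ** 2 for u in v) / len(v)

s = [a + b for a, b in zip(x, y)]
print(round(var(x), 2), round(var(y), 2), round(var(s), 2))
# var(s) should come out close to var(x) + var(y) ~ 13
```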

Reply to  nyolci
November 12, 2021 5:22 pm

“This is your obsession and a complete misunderstanding of the subject.”

When you combine independent, random variables their variances add. You claim to be an MSc of EE and you don’t know this? Where did you take your masters degree?

Reply to  nyolci
November 12, 2021 5:20 pm

“Huh, you seem to have messed this up… At that point I was talking about how the median (not the mean) is calculated from the daily data points.”

If the temperature profile is a sine wave during the day and an exponential at night then the *average* during the day is 0.63 * Tmax. At night it approximates 0.63 * Tmin.

(The average of sin(t) over a half-cycle is 2/π ≈ 0.637, hence the 0.63 factor.)

All you need to calculate the average value is Tmax and Tmin. This also minimizes the uncertainty associated with the average.
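The 0.63 factor claimed above can be verified numerically: the mean of sin(t) over the half-period [0, π] is 2/π ≈ 0.637. A quick midpoint-rule check:

```python
import math

# Average of sin(t) over [0, pi] by the midpoint rule; the exact
# value is (1/pi) * integral of sin(t) dt over [0, pi] = 2/pi.
n = 100_000
avg = sum(math.sin(math.pi * (i + 0.5) / n) for i in range(n)) / n
print(f"numerical average = {avg:.4f}, 2/pi = {2 / math.pi:.4f}")
```

So a half-sine daytime profile peaking at Tmax averages about 0.637·Tmax above its baseline; whether a real diurnal profile is actually that close to a sine is, of course, part of what is being debated here.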

If you numerically integrate the profile in order to determine the average then the uncertainty for each point used ADDS in order to find the total uncertainty.

GHCN for 19th and 20th century are mostly Tmax and Tmin data. Values calculated from these *are* mid-range values. They are neither average temp or mean temp.

Reply to  nyolci
November 12, 2021 5:41 pm

“No. You have x daily measurements at a station and you use that.”

————————————-
STATION NAME DATE TMAX TMIN

USC00146252 PAULINE, KS US 1894-01-01 59 32

USC00146252 PAULINE, KS US 1894-01-02 58 33
—————————————-

GHCN data from 1894-01-01 and 1894-01-02

You get exactly two data points per day. Tmax and Tmin.

What you get from these is a mid-range value, not an average/mean.

Like it or lump it, it is still what it is. You are just throwing icky stuff against the wall to see if it sticks. Stop it.

Reply to  Tim Gorman
November 13, 2021 8:48 am

What a hilarious display of convoluted and confused thinking, and half axxed knowledge… Okay, to your credit you’ve started to look into probability theory at last. You’ve realized, probably, that you’re doomed in any debate otherwise.

The variance is a measure of the interval in which the next measurement may lie

Okay, variance is a mathematical concept, and it’s related to these things, but it’s not what you say. Variance = E((x-E(x))^2), ie. the expected value of the square of the difference of a random variable from its expected value.

Each measurement has an EXPECTED value, known as the “true value” of the measurand

No, expected value is not known as the “true value”. Actually, the “true value” is not a term of probability theory. FYI the expected value of a dice roll is 3.5.

The measurements create a random distribution around the true value

This is not the definition of random distribution. Again, you don’t have a “true value” here. A distribution is just a function that has to conform to certain rules. “Distribution” is not defined in terms of a “true value”. It has an expected value though, but the expected value is just a mathematical function of the distribution (sum(a*p(a)), where “a” is a possible outcome, and p(a) is its probability).

Multiple measurements of different things do *NOT* result in a random distribution around a true value

This sentence makes no sense in the context of probability theory. You can combine random variables in numberless ways, they have a combined distribution. You yourself know they can be added together, for example.

None of the measurements have an EXPECTED value

Yes, they have. See above, dice roll may well be regarded as a (rather useless) measurement. Multiple, summed dice rolls have an E() value. If you add two rolls of a dice (or one roll with two ones), you have a “triangular” distribution with expected (and, incidentally, most probable) value of 7. Quantum mechanical phenomena (to our best knowledge) are governed by probabilities, and observation/measurement “forces” a particular output that follows the distribution of whatever quantum variable. And it has an expected value.

V_total = V_1 + V_2 + …. + V_n

I can’t understand your complete inability to understand that we are always talking about averages. In that case V_avg = sum(V_i)/n^2.

If the temperature profile is a sine wave during the day and an exponential at night

??? The temperature profile is something that is very unlikely to follow your “assumptions”. I don’t understand your goal here, you make a completely unfounded assumption to prove your tmin-tmax idiocy?

GHCN for 19th and 20th century are mostly Tmax and Tmin data

Perhaps, and this was likely an artifact of the measurement method and the general technical conditions of the era. Now we have much more advanced technology where we sample temperature and whatever other variables frequently and regularly. I’m pretty sure they have a huge set of guidelines how to do this sampling. FYI These things regarding sampling (and its irregularities) in old data are some of the reasons why they have to apply adjustments, another thing you are unable to comprehend.

Reply to  nyolci
November 13, 2021 2:55 pm

“What a hilarious display of convoluted and confused thinking, and half axxed knowledge… Okay, to your credit you’ve started to look into probability theory at last. You’ve realized, probably, that you’re doomed in any debate otherwise.”

In other words you can’t refute anything I’ve asserted. Why am I not surprised?

Variance = E((x-E(x))^2), ie. the expected value of the square of the difference of a random variable from its expected value.”

What’s the EXPECTED value for a dataset consisting of the length of every single board, broken or whole, in your nearest Home Depot store?

“No, expected value is not known as the “true value”. Actually, the “true value” is not a term of probability theory. FYI the expected value of a dice roll is 3.5.”

Of course the true value is the expected value. That’s why multiple measurements of the same thing are considered to be true value plus/minus a random error. Each measurement leads you closer and closer to the true value.

Like most examples favored by people with no understanding of physical science, rolls of a die form a discrete probability distribution. Temperature is *NOT* the same thing; it is a continuous function. And, btw, if the temperature profile is truly close to a sine wave then guess what? Sine waves HAVE NO PROBABILITY DISTRIBUTION.

Stop trying to teach an old dog how to suck eggs. You truly suck at it!

(more tomorrow)

Reply to  Tim Gorman
November 13, 2021 3:59 pm

In other words you can’t refute anything I’ve asserted. Why am I not surprised?

??? My whole post is a refutation of your (substitute whatever).

What’s the EXPECTED value for a dataset consisting of the length of every single board, broken or whole, in your nearest Home Depot store?

Stupid question. You have an expected value of board length. The nearest Home Depot’s stock is just a set of actual outcomes of this random variable.

Of course the true value is the expected value. That’s why multiple measurements

and

rolls of a dice is a discrete probability distribution

Good God, the dice was just an easy example. You can have a continuous distribution with such an expected value that is not a legal element of the domain. Like impact positions for gunshots. These have a 2 dim. Gaussian normal (or similar) distribution. But if you put an obstacle in the path you get two distinct sets. The positions are not discrete but they fall into either set and the expected impact position is actually in the “forbidden territory”.

Sine waves HAVE NO PROBABILITY DISTRIBUTION.

What a confusion and mess in your head… Temperature MEASUREMENTS have.

Stop trying to teach an old dog

I apparently can’t teach this ignorant dog, he is too old for that.

November 9, 2021 5:31 am

“…something else, like solar changes, must be causing the reversals.”

Have you tried comparing solar activity against temperature to see how good a correlation you get?

Richard M
Reply to  Bellman
November 9, 2021 12:01 pm

That won’t work since the energy received by the planet is also dependent on how much is reflected. We now know that all the warming in the 21st century was due to a reduction in clouds.

“… the root cause for the positive TOA net flux and, hence, for a further accumulation of energy during the last two decades was a declining outgoing shortwave flux and not a retained LW flux. ” – Hans-Rolf Dübal and Fritz Vahrenholt, October 2021

Reacher51
November 9, 2021 6:21 am

Surely, a chart that overlays CO2 concentration on a sinusoidal temperature reconstruction encompassing the Roman Warm Period through the modern warm period would show the utter ridiculousness of the CO2-as-climate-control-knob argument even more clearly.

It may be true that CO2 does not well explain late 19th century and early 20th century multidecadal warming. However, it is much more obviously true that unchanged CO2 concentration cannot in any way account for multicentury warming and cooling periods, and nor can it account for the global warming that took place from ~1700 to the beginning of the Industrial Revolution. The inability of CO2 even to account for modern multidecadal warming, cooling, and hiatus periods shows how little effect it actually has, even at the margins of the long term trend that CO2 cannot have been responsible for starting.

Ian
November 9, 2021 7:00 am

“it does not show that warming is controlled by changes in CO2”
 
Other way around. Salby and Harde have now shown conclusively that changes of CO2 follow from changes of temperature, not vice versa.
 
https://scc.klimarealistene.com/2021/10/new-papers-on-control-of-atmospheric-co2/
 
Figures 5 and 9 of Part II are especially persuasive. They confirm what was found earlier by Humlum (2013): that increased CO2 originates in the tropics, not in the industrialized temperate latitudes.

Rick W Kargaard
November 9, 2021 7:20 am

There are casual correlations between temperatures and CO2. But is CO2 driving temperature, or is temperature driving CO2 levels?
We all seem to agree that temperature drives moisture levels and that moisture levels drive temperature. If both are true, why not with CO2?
Both situations should lead to runaway warming. What limits it and what reverses it, as obviously happens? I am getting tired of trying to puzzle this out.
Toss all the graphics and other explanations at me that you wish. I can look out my window and see a world that has changed in myriad significant ways, but climate is not one of those significant changes. Anything I have seen suggests a moderating and beneficial trend, meaning more of the good and less of the bad.
I have seen almost all of the unusual weather events before in some form or place, and I am only 79 years old. All of this suggests that we actually have a very stable climate and there is little reason for worry unless you insist on living in dangerous places.

griff
Reply to  Rick W Kargaard
November 9, 2021 8:29 am

Well, looking around the UK, I clearly see a climate which has changed: winters are markedly milder (at least until very late winter, when a sudden severe cold blast seems to hit)… there are still green leaves on trees this week and the odd flower, and only one mild frost – it is unusual now to have one before the new year…

And especially the UK is much, much wetter – though not everywhere, the excess falls as slow moving heavy rain systems or sudden intense downpours – we have yearly floods and flash floods.

the UK Met Office figures bear out my observations…

Reacher51
Reply to  griff
November 9, 2021 10:25 am

Is the UK much wetter now than it was during the Great Famine of 1314-1316, when rain fell nearly continuously for two and a half years, and crops were ruined nationwide? It certainly doesn’t seem to be.

In fairness, of course, the UK does appear to be much wetter than it was in 1921, when England went more than 100 days without rainfall, crops failed nationwide, and the formerly wet country had to ration water.

Looking at the UK, it seems as if its current climate and weather are, in fact, perfectly fine and certainly well within the normal range of historic variability. A thousand years or so of recorded history bears out my observations…

Rick W Kargaard
Reply to  griff
November 9, 2021 6:20 pm

Perhaps you are living in one of those dangerous places. Try Canada if it is too hot for you there.

Alan the Brit
Reply to  griff
November 11, 2021 4:22 am

Strange, the last time I looked at the Wet Office’s rainfall data for the UK over the last 150 years, it was a flat line. Oh yes, some years were wetter than others, some years drier than others, but the average was & is a flat line!!!

Reply to  Alan the Brit
November 11, 2021 5:23 am

Maybe you should look at the Met Office’s data instead. Definitely not a flat line.

[Attached image: Screenshot 2021-11-11 132232.png]
Reply to  Bellman
November 12, 2021 11:24 am

Alan & Bellman, England’s rainfall has not changed

[Attached image: England annual rainfall.png]
Reply to  Chas
November 12, 2021 11:25 am

But Scotland’s has

[Attached image: Scotland annual rainfall.png]
November 9, 2021 7:24 am

How would this correlation look if GMST temperature were properly expressed in Kelvin?

Kelvin has thermodynamic significance.
Celsius does not.

288 K – 255 K = 33 C NOT 33 K.
This 33 C is a difference of 33 Celsius units.
33 K is 33 Celsius units above absolute 0.
See? Not the same.

For instance: a 3 degree C rise from 15 C to 18 C is a 20% increase. Oooh, scary!!!
But 288 K to 291 K is only about 1%. Ho-hum.

C is referenced to the freezing point of water.
K references absolute 0.
They are not casually interchanged.

It’s similar to psia, psig, psid and vacuum. If you don’t understand the differences you are going to screw things up.
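The percentage arithmetic above is easy to check. A minimal Python sketch (the 15 C starting point is just the round number used above, and 273.15 is the standard Celsius-to-Kelvin offset):

```python
def pct_change(start, end):
    """Relative change between two values, in percent, on the same scale."""
    return 100.0 * (end - start) / start

# The same 3-degree rise, expressed on each scale.
t_c_start, t_c_end = 15.0, 18.0                            # degrees Celsius
t_k_start, t_k_end = t_c_start + 273.15, t_c_end + 273.15  # kelvin

print(f"Celsius scale: {pct_change(t_c_start, t_c_end):.1f}%")  # 20.0%
print(f"Kelvin scale:  {pct_change(t_k_start, t_k_end):.2f}%")  # 1.04%
```

The rise is physically identical in both cases; only the zero point of the scale changes the percentage, which is the commenter’s point.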

Tom Abbott
November 9, 2021 8:15 am

I fail to see how comparing the CO2 record to a bogus temperature record can give us anything useful.

Peter
November 9, 2021 8:57 am

One of the first things I learned as a researcher, back in the 80’s, was that if your data was scattered and a relationship between experimental factors and observed results wasn’t evident, then use a log scale. If still not “obvious”, use a log-log plot! EVERYTHING is linear in a log-log plot! The other “trick” (TM, MM) was to keep lowering the confidence limits and statistical significance criteria until something popped up…

Jeff Alberts
November 9, 2021 10:51 am

“Rohde doesn’t tell us what temperature record he is using”

Doesn’t matter. When you present a single line as a “global temperature”, you’re presenting a fantasy, something that doesn’t exist.

November 9, 2021 10:57 am

I am starting to think the surface temperature observations prior to about 1900 are of little value. They directly conflict with sea level and glacial retreat which show putative warming commencing much earlier (pre-1850) and not from circa 1910 as the IPCC story has it.

Tom Abbott
Reply to  ThinkingScientist
November 10, 2021 4:04 am

“I am starting to think the surface temperature observations prior to about 1900 are of little value.”

I would move that up to about 1979. The entire Twentieth Century temperature record has been bastardized to cool the time periods when temperatures were just as warm as today, such as the 1880’s and the 1930’s.

Human-caused Climate Change promoters want everyone to believe we are currently living in the hottest time in human history, and in order to sell this lie, they have to distort the historical temperature record to erase periods that were just as warm as today.

Otherwise, they couldn’t claim we are experiencing unprecedented heat today, and they couldn’t claim CO2 is causing the heat. Without a bastardized temperature record, the alarmists don’t have a case to make. There goes all their grants and prestige and virtue signalling. They don’t want that, so they lie.

November 9, 2021 12:21 pm

HADCRUT5.0 Analysis-gl, 1850–2021/09, 12-month mean.
Fit function T(t) = c0 + c1·ln(cCO2(t)/cCO2(t0)), R² = 0.87
Fit function T(t) = c0 + c1·{[cCO2(t) − cCO2(t0)]/cCO2(t0)}^1.2, R² = 0.89
Conclusion: The data don’t support the saturation effect. There must be several reasons for the warming.
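For readers who want to reproduce this kind of comparison, here is a sketch of the two fits on synthetic data (the real exercise would use the HadCRUT5 anomalies and annual CO2 values; the series below are made up for illustration):

```python
import math
import random

random.seed(0)

# Synthetic stand-ins for 172 years of annual CO2 (ppm) and temperature anomaly.
n = 172
co2 = [285.0 + (415.0 - 285.0) * i / (n - 1) for i in range(n)]
temp = [-0.1 + 1.2 * math.log2(c / co2[0]) + random.gauss(0.0, 0.05) for c in co2]

def linfit(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    m = len(x)
    mx, my = sum(x) / m, sum(y) / m
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def r_squared(y, yhat):
    my = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Fit 1: T(t) = c0 + c1 * ln(CO2/CO2_0) -- linear in the log term.
x1 = [math.log(c / co2[0]) for c in co2]
a1, b1 = linfit(x1, temp)
r2_log = r_squared(temp, [a1 + b1 * xi for xi in x1])

# Fit 2: T(t) = c0 + c1 * ((CO2 - CO2_0)/CO2_0)^1.2 -- the power-law form.
x2 = [((c - co2[0]) / co2[0]) ** 1.2 for c in co2]
a2, b2 = linfit(x2, temp)
r2_pow = r_squared(temp, [a2 + b2 * xi for xi in x2])

print(f"log fit   R^2 = {r2_log:.3f}")
print(f"power fit R^2 = {r2_pow:.3f}")
```

On synthetic data both forms fit well, which illustrates the commenter’s point: over the observed CO2 range the two functional forms are nearly indistinguishable, so a small R² difference alone cannot settle whether the response is logarithmic.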

bdgwx
November 9, 2021 1:05 pm

Here is a plot of OHC vs LogE(CO2) from 1959/01 to 2021/03. The R^2 is 0.92 on this relationship. CO2 is only one of the agents modulating OHC. Aerosols are the 2nd biggest modulator so I’d like to get a hold of aerosol loading and see what the R^2 is on a composite of LogE(CO2) and AOD. The ebb and flow of aerosol loading can explain some of the deviations from the trendline of OHC.


Reply to  bdgwx
November 9, 2021 1:24 pm

Once again, bdgwx brings up the entirely fictitious Ocean Heat Content metric.

JCM
November 9, 2021 6:47 pm

There is no question CO2 and various temperature metrics are highly correlated.

Tropical temps vs CO2 derivative: temperature correlates with every bump and wiggle of the CO2 changes (derivative).

https://woodfortrees.org/plot/hadcrut4tr/from:1990/plot/esrl-co2/from:1990/mean:12/derivative/normalise

No lag, nothing like that. Essentially in lock step.

The CO2 derivative correlates in lock step with temperature.

Nothing about absolute CO2 concentration. Regardless of the concentration, the CO2 rate of change correlates with the absolute temperature change. It’s a perfect proxy in the observational data.

The pauses are there and everything. CO2 does whatever temperature does…

Of the infinite number of factors affecting temperature, CO2 is matching it in the data.
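The relationship described here (CO2’s growth rate tracking temperature) can be illustrated with a toy calculation. The series below are synthetic, built so that the monthly CO2 increment follows temperature; they are not the ESRL or HadCRUT data, just a sketch of the mean:12/derivative operations behind the woodfortrees plot:

```python
import math
import random

random.seed(1)

# Synthetic monthly series: a wiggling temperature anomaly, and a CO2
# record whose month-to-month growth is driven by that temperature.
months = 360
temp = [0.2 * math.sin(2 * math.pi * m / 40) + random.gauss(0.0, 0.03)
        for m in range(months)]
co2 = [350.0]
for m in range(1, months):
    co2.append(co2[-1] + 0.15 + 0.3 * temp[m] + random.gauss(0.0, 0.05))

def derivative(series):
    """First difference, one value per month."""
    return [b - a for a, b in zip(series, series[1:])]

def rolling_mean(series, w):
    return [sum(series[i:i + w]) / w for i in range(len(series) - w + 1)]

def pearson(x, y):
    m = len(x)
    mx, my = sum(x) / m, sum(y) / m
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# 12-month smoothing of both the CO2 derivative and the aligned temperature.
dco2 = rolling_mean(derivative(co2), 12)
t_aligned = rolling_mean(temp[1:], 12)
corr = pearson(t_aligned, dco2)
print(f"corr(temperature, dCO2/dt) = {corr:.2f}")
```

By construction the correlation is strong even though absolute CO2 only ever rises, which is the distinction the comment is drawing: the derivative, not the level, tracks temperature.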

Reply to  JCM
November 10, 2021 12:03 pm

Except for the interval of 1945 to 1975 . . . yeah, except for that “inconvenient” truth.

JCM
Reply to  Gordon A. Dressler
November 10, 2021 12:58 pm

Sorry, I don’t follow. Can you point me towards CO2 derivative and temperature data for that period? I am not aware of any reliable data for that period. I don’t see why the pattern wouldn’t persist. Perhaps we are not understanding each other.

Reply to  JCM
November 11, 2021 8:11 am

So simple:

Please look at Figure 1 in the above article . . . it shows the interval of 1945-1975 indicated by the yellow-orange to first few red-orange data points, where a LS curve fit would definitely show declining global mean temperature versus increasing atmospheric CO2 concentration.

Next, please look at Figure 2 in the above article . . . the yellow portion of the graphed curve, clearly indicated in the graph’s legend as “1944-1976”, which necessarily includes the interval of 1945-1975, clearly (by eyeball or by LS curve fit) shows decreasing temperature anomaly while the separately plotted line for log-base2-CO2 is clearly increasing.

Moreover, the text immediately under Figure 2 has this direct statement: “From 1850 to 1910 and 1944 to 1976 temperatures fall, but CO2 increases.”

Strange that you apparently missed noticing these intervals of anti-correlations.

Lastly, for the sources of CO2 and temperature data used in Figure 1, you can contact Dr. Robert Rohde. For Figure 2, the top of the graph clearly indicates HadCRUT5 for temperature data and NASA for CO2 data. If you think this data is not reliable, I suggest you take the matter up with both of these organizations.

JCM
Reply to  Gordon A. Dressler
November 11, 2021 9:20 am

Strange that you apparently missed noticing these intervals of anti-correlations

The correlations are quite apparent. To my original point, and the data I presented, it is clear to see that the rate of CO2 change diminishes in the periods you mention. Sure, it’s not perfect, but it’s clear to see the CO2 line levels off in relatively cooler periods. The core of my argument is that it is the rate of change that is correlated. See my original plot – do you dispute this? There is no reason to suspect a change of sign in the absolute CO2 trend.

This is a common misconception I see all over this site – many expect to see linear relations between variables. There appears to be an avoidance of calculus and derivatives when considering anything from solar effects to CO2-temperature relationships. Many do not appear to understand the integral relationships. A decade or two of a decline in temperature from a ‘high’ to a ‘less high’ does not suggest the CO2 change should thus go negative. But it is reasonable that the rate of CO2 change would be less steep. This is observed – the CO2 derivative and temperature are certainly not anti-correlated.

Reply to  JCM
November 11, 2021 9:43 am

JCM, you state:
“The core of my argument is that it is the rate of change that is correlated.”

Previously you posted:
“There is no question CO2 and various temperature metrics are highly correlated . . . No lag, nothing like that. Essentially in lock step . . . Co2 derivative co-relates in lock step with Temperature . . . Regardless of the concentration, Co2 rate of change correlates with absolute Temperature change. It’s a perfect proxy in the observational data.”

So, in reply I have to ask, when the derivative of temperature change goes from positive to negative (as it did leading into the period of 1945-1975), why didn’t the derivative of atmospheric CO2 concentration also go from positive to negative???

After all, you assert the derivatives of each parameter are in “lock step” with each other; even furthermore, you assert something about a “perfect proxy in the observational data”. Perfect???

JCM
Reply to  Gordon A. Dressler
November 11, 2021 9:47 am

See my note below with the glacier example. There is more to data and correlations than simple linear relationships.

JCM
Reply to  Gordon A. Dressler
November 11, 2021 10:29 am

So, in reply I have to ask, when the derivative of temperature change goes from positive to negative (as it did leading into the period of 1945-1975), why didn’t the derivative of atmospheric CO2 concentration also go from positive to negative???

The CO2 derivative declines – but there is no reason to expect a change in sign of the absolute CO2 change. While the glacier example is a useful analogy, critical glacial thresholds are probably stationary. In the CO2 case there is no reason to expect a critical threshold for atmospheric CO2 mass balance to be stationary, so it’s an interesting question. CO2 mass balance seems to depend in large part on its own concentration.

Assuming it is stationary, for the sake of argument: if a change in temperature from 5 to 7 degrees results in a change from static CO2 concentration to a CO2 increase, would you expect CO2 to decrease after a temperature change from 10 to 8 degrees? It seems likely it would continue to increase, albeit at a slower rate than when it was at 10. Here we could argue the critical threshold is roughly 6 degrees. It’s not a great example, but it might illustrate my point.
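The threshold argument can be made concrete in a few lines of Python. The numbers (a 6-degree threshold, a growth constant k) are arbitrary illustrations of the comment’s hypothetical, not estimates of anything physical:

```python
# Toy model: CO2 grows in proportion to how far temperature sits above
# a critical threshold, so CO2 keeps rising even while temperature falls.
threshold = 6.0   # hypothetical critical temperature (degrees)
k = 2.0           # hypothetical growth constant (ppm per year per degree)

co2 = 300.0
trajectory = []
for year in range(11):
    temp = 10.0 - 0.2 * year          # cooling from 10 toward 8 degrees
    co2 += k * (temp - threshold)     # increment shrinks but stays positive
    trajectory.append(co2)

# CO2 rose every single year despite the cooling trend.
print(all(later > earlier for earlier, later in zip(trajectory, trajectory[1:])))  # True
```

The derivative of CO2 falls (from 8 to 4 ppm/year here) while its sign never flips, which is exactly the behavior the comment describes.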

JCM
Reply to  Gordon A. Dressler
November 11, 2021 9:37 am

I had a similar discussion with Vuk the other day about glacier advances: https://wattsupwiththat.com/2021/11/06/no-mention-of-the-little-ice-age-justin/#comment-3382236

Vuk proposed that glacier advances and recessions should correlate with a change in sign of Temperature trend. Vuk suggested that because the calculated temperature trend was negative from the 1720s that therefore it is when glaciers would have started advancing. Vuk expects to see a linear relationship. Vuk has ignored a critical factor – that there is a temperature threshold at which a general glacier advance might occur. Vuk proposed that glaciers would have started advancing during a period of relative warmth. Instead, Vuk might have found better results by integrating around an empirical threshold (or physical threshold) to find the timing of a change in sign of glacial mass balance trend.

Reply to  JCM
November 11, 2021 2:49 pm

The debate over glacier movement (growth/retreat) dependence on temperature trending is fundamentally different than that of global temperature correlation with global atmospheric CO2 concentration.

Glacial growth/retreat has an included key triggering parameter: the freezing point of water, a phase change. Even under a cooling trend, a glacier cannot increase in size if there is no snow/ice accumulation (which requires sustained atmospheric temperatures over time averaging below 32 deg-F).

There is no such key triggering point (phase change) associated with any proposed phenomena that relates global temperatures to atmospheric CO2 concentration.

QED.

JCM
Reply to  Gordon A. Dressler
November 11, 2021 4:16 pm

The 1960s and 70s had a few 12-month blips below zero. A net decrease of CO2. The 1990s took a dip but did not quite get there. If you agree, what is your interpretation?

https://woodfortrees.org/plot/esrl-co2/derivative/mean:12

[Attached image: mean_12.png]
JCM
Reply to  JCM
November 11, 2021 5:04 pm

If you’re so inclined, could you also comment on the variations displayed in the plot and why you propose that they are not related to temperature?

JCM
Reply to  Gordon A. Dressler
November 10, 2021 1:07 pm

The only CO2 data aside from ice core data I am aware of is compiled by Beck.

https://climatecite.com/wp-content/uploads/E-G-Beck-CO2.pdf

It’s a little noisy but it kinda works.

[Attached image: Beck.png]
JCM
Reply to  JCM
November 10, 2021 1:33 pm

This, of course, calls into question the available temperature estimates as well. So, if you have better information I am all ears. Thanks.

spock
November 9, 2021 10:38 pm

“All four trees were grown under the same conditions except for the concentration of CO2 in their plastic enclosures. This is why the Earth is greening as we elevate carbon dioxide in the atmosphere by nearly 50 percent, from a starvation-level of 280 ppm to 415 ppm. It can be seen from this experiment that there is room for much more growth in trees, food crops, and other plants as CO2 continues to rise to more optimum levels. The level of 835 ppm of carbon dioxide allows trees to grow more than double the rate of 385 ppm. This photo was taken in 2009 when atmospheric CO2 was about 385 ppm.”

Excerpt From
Fake Invisible Catastrophes and Threats of Doom
http://libgen.rs/book/index.php?md5=62F19352A7FD8FA7830C90D187094289

Tom Abbott
Reply to  spock
November 10, 2021 4:13 am

It’s amazing how well plants do with increased CO2.

November 10, 2021 10:15 am

Missing in the above article: any attempt to reconcile the temperature increases shown on the graphs for the last 7 years with the “pause” in UAH-determined global temperature increase as presented in the WUWT article by Christopher Monckton of Brenchley (link: https://wattsupwiththat.com/2021/11/09/as-the-elite-posture-and-gibber-the-new-pause-shortens-by-a-month/ ).

Quote from that linked article:
“The New Pause has shortened by a month to 6 years 8 months on the UAH data, but has lengthened by a month to 7 years 7 months on the HadCRUT4 dataset.”

November 12, 2021 1:11 pm

If you’ve got two graphs that show decent correlation, how can one tell who’s the leader and who’s the follower?

Pick the one you want to be the driver?
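One standard answer is lagged cross-correlation: shift one series against the other and see which offset maximizes the correlation. A self-contained sketch on synthetic data (the 3-step lag is built in, so the method has a known right answer to recover):

```python
import math
import random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def lagged_corr(leader, follower, lag):
    """Correlate leader[t] with follower[t + lag]; lag > 0 means follower trails."""
    if lag > 0:
        return pearson(leader[:-lag], follower[lag:])
    return pearson(leader, follower)

# Synthetic example: y copies x with a 3-step delay plus noise.
random.seed(2)
x = [math.sin(0.1 * t) + random.gauss(0.0, 0.05) for t in range(300)]
y = [x[t - 3] + random.gauss(0.0, 0.05) if t >= 3 else 0.0 for t in range(300)]

best = max(range(10), key=lambda lag: lagged_corr(x, y, lag))
print(f"best lag = {best}")   # recovers the built-in 3-step lead
```

In practice this is harder than it looks: smooth, trending series correlate well at many lags, so a lead-lag result on climate data needs detrending and significance testing before it means much.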

November 12, 2021 1:34 pm

Temperature records, like baseball at-bats, lend themselves to endless statistical analyses. But what about narrowing the focus to look at the individual rather than the mass?

I downloaded the most recent USHCRN TMAX raw data and started looking at stations with good, complete data since 1992 – the last 30 years. Right now I’m running numbers for Jan-Feb-Mar (JFM), and it appears that every station in Alabama has cooled during that time.

Doesn’t that count for something? Once rolled up into the grid averages that cooling disappears, but it must be significant on its own.
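The per-station trend calculation described can be sketched in a few lines. Everything here is hypothetical: the station names and TMAX values are made up, and the real exercise would read the USHCN daily files:

```python
def ols_slope(years, values):
    """Least-squares trend, in degrees per year."""
    n = len(years)
    my, mv = sum(years) / n, sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return num / den

# Hypothetical JFM mean TMAX per station, 1992-2021 (degrees F).
years = list(range(1992, 2022))
stations = {
    "AL_station_A": [55.0 - 0.03 * i for i in range(30)],  # steady cooling
    "AL_station_B": [57.0 - 0.01 * i for i in range(30)],  # slight cooling
}

for name, tmax in stations.items():
    trend = ols_slope(years, tmax) * 10.0  # degrees per decade
    print(f"{name}: {trend:+.2f} F/decade")
```

A per-station view like this avoids infilling and gridding entirely, though any single-region result still needs a significance test before it can be weighed against the gridded averages.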