UAH Global Temperature Update for January, 2022: +0.03 deg. C.

From Dr. Roy Spencer’s Weather Blog

February 2nd, 2022 by Roy W. Spencer, Ph.D.

The Version 6.0 global average lower tropospheric temperature (LT) anomaly for January, 2022 was +0.03 deg. C, down from the December, 2021 value of +0.21 deg. C.

The linear warming trend since January, 1979 now stands at +0.13 C/decade (+0.12 C/decade over the global-averaged oceans, and +0.18 C/decade over global-averaged land).

Various regional LT departures from the 30-year (1991-2020) average for the last 13 months are:

YEAR MO GLOBE NHEM. SHEM. TROPIC USA48 ARCTIC AUST 
2021 01 0.12 0.34 -0.09 -0.08 0.36 0.50 -0.52
2021 02 0.20 0.32 0.08 -0.14 -0.65 0.07 -0.27
2021 03 -0.01 0.13 -0.14 -0.29 0.59 -0.78 -0.79
2021 04 -0.05 0.05 -0.15 -0.28 -0.02 0.02 0.29
2021 05 0.08 0.14 0.03 0.06 -0.41 -0.04 0.02
2021 06 -0.01 0.31 -0.32 -0.14 1.44 0.63 -0.76
2021 07 0.20 0.33 0.07 0.13 0.58 0.43 0.80
2021 08 0.17 0.27 0.08 0.07 0.33 0.83 -0.02
2021 09 0.25 0.18 0.33 0.09 0.67 0.02 0.37
2021 10 0.37 0.46 0.27 0.33 0.84 0.63 0.06
2021 11 0.08 0.11 0.06 0.14 0.50 -0.42 -0.29
2021 12 0.21 0.27 0.15 0.03 1.63 0.01 -0.06
2022 01 0.03 0.06 0.00 -0.24 -0.13 0.68 0.09

The full UAH Global Temperature Report, along with the LT global gridpoint anomaly image for January, 2022 should be available within the next several days here.

The global and regional monthly anomalies for the various atmospheric layers we monitor should be available in the next few days at the following locations:

Lower Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt
Mid-Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tmt/uahncdc_mt_6.0.txt
Tropopause: http://vortex.nsstc.uah.edu/data/msu/v6.0/ttp/uahncdc_tp_6.0.txt
Lower Stratosphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tls/uahncdc_ls_6.0.txt

Comments
Ouluman
February 4, 2022 9:34 am

Well, all I can say is where I live in Finland it was -23 C two days ago, -16 C yesterday, -7 today with 30 cm of snow, and -1 in a few days. Normal stuff. Wtf is 1/10th of 1 C in 10 years of difference going to do? We often have a 30 C difference in temps within 36 hours.

bdgwx
Reply to  Ouluman
February 4, 2022 10:57 am

The global average temperature does not change 30 C in 36 hours.

AlexBerlin
Reply to  bdgwx
February 4, 2022 11:31 am

No, but any system able to survive quick 30 C changes will brush off a slow 3 C change without even noticing. Especially as it is going in the right direction, away from freezing temperatures. Ice kills. Less ice = better life on Earth.

bdgwx
Reply to  AlexBerlin
February 4, 2022 7:05 pm

Has Earth had a period where there has been a quick 30 C change in the global average temperature?

Reply to  bdgwx
February 6, 2022 6:19 pm

Always the idiotic word games with you people, assuming we are as stupid as your people.

bdgwx
Reply to  Pat from kerbob
February 7, 2022 9:10 am

It's a fair question. The insinuation by Ouluman and AlexBerlin is that because it is normal and survivable for specific locations to have 30 C changes, then it must be normal and survivable when the global average temperature changes by 30 C as well. I want to know when the last time was that the global average temperature changed by 30 C, and to be presented with evidence that this was a survivable period. Perhaps you could answer the question?

Reply to  bdgwx
February 7, 2022 3:53 pm

What is the difference between temps above the Arctic Circle in winter and summer in Dubai?

People survive in both extremes. Some regions may prosper and some may not, but it doesn't mean everyone on the globe will die!

bdgwx
Reply to  Tim Gorman
February 7, 2022 4:18 pm

I believe the average high in Dubai in August is around 41 C and the average low at Summit Camp in January is -48 C.

Has the global average temperature ever changed by 89 C between August and January?

Do you think people would be able to survive if the global average changed by 89 C?

Reply to  bdgwx
February 8, 2022 4:57 am

You missed the point entirely. The global average doesn't determine the climate at any specific point on the globe. If the climate above the Arctic Circle changed by +89 C then people living there would still survive. They would just have to start living like those in Dubai! It's why the "global average" is so meaningless. That average tells you nothing. You have to take the local climate into consideration! The global average only tells you that there are areas that average less than the global average and there are locations that average higher than the global average. Not every place will have a climate equal to the "global average".

bdgwx
Reply to  Tim Gorman
February 8, 2022 7:28 am

I think it may be you that missed the point. Ouluman and AlexBerlin are insinuating that because the temperature changes by 30 C in their backyards without a significant impediment to survival, then it must be true that the global average temperature can change by 30 C without any impediment to survival as well. You upped the ante to 89 C. Do you think a global average temperature change of 89 C is possible? If it is possible, do you think it would happen without any significant impediments placed on survivability? What about a smaller 30 C change?

Reply to  bdgwx
February 8, 2022 11:26 am

I didn't miss the point at all. If their back yard is above the Arctic Circle then it doesn't matter if the temp change is +30 C or +90 C. People living there will still survive.

If you look at the climate models, their output has a positive trend forever. They are merely linear equations of y = mx + b. Such a projection will, sooner or later, reach +89 C. Are you implying the models are (gasp) wrong?

Again, there will be climates below the +89 C growth and above the +89 C growth. Those local climates that are below the +89 C growth will certainly contain some that will allow continued survival.

Will life change? Of course. So what? Life has been and always will be a struggle for survival. My grandparents several times removed struggled to survive the climate on the plains of Kansas in the 1800’s. Life wasn’t very pleasant. I’m sure our ancestors that crossed into North America so long ago had to struggle against the climate as well.

bdgwx
Reply to  Tim Gorman
February 8, 2022 11:51 am

You really don't think survivability will be impaired if the global average temperature increased 89 C, from 15 C to 104 C? Really?

Reply to  bdgwx
February 8, 2022 2:37 pm

The survivability of the human race won’t change. It survives pretty well in Dubai. If northern Alaska changes to a climate of Dubai I’m pretty sure those in Alaska will continue to survive.

Why do *you* think everyone on Earth would die? There are really only two choices – 1. the human race would die out, or 2. the human race would survive.

Pick one.

bdgwx
Reply to  Tim Gorman
February 8, 2022 3:19 pm

Again… that's 104 C… the global average. Are you absolutely sure the survivability of the human race won't change if the average surface temperature of Earth was 104 C? Are there any second thoughts here or are you sticking with it?

Reply to  bdgwx
February 8, 2022 12:50 pm

"Do you think a global average temperature change of 89 C is possible?"

Every day on the globe. When India is 40 C and Antarctica is -50 C, that breadth of change occurs on Earth.

Can a GAT of -89 C occur? If it does, we'll be in a full-blown icehouse Earth, and who will care?

bdgwx
Reply to  Jim Gorman
February 8, 2022 3:11 pm

bdgwx said: “Do you think a global average temperature change of 89 C is possible?”

JG said: “Every day on the globe.”

Wow.

Reply to  bdgwx
February 4, 2022 12:03 pm

You miss the point. Did large numbers of mammals die because of the major change in temperature? Why do you think large numbers of mammals will die because of a 2 C change over 100 years?

Derg
Reply to  Jim Gorman
February 5, 2022 2:32 am

Exactly. CO2 is life.

Derg
Reply to  bdgwx
February 5, 2022 2:31 am

Lol…CO2 control knob indeed.

Reply to  Ouluman
February 6, 2022 6:22 pm

Exactly
Since basically all of the observed warming has been at high latitudes (like where you and I reside), in winters and overnight, the 1-1.5-2-2.5-3 C etc. increase is entirely beneficial and leads to fewer dead humans.
Which I continue to suspect is the real problem.

February 4, 2022 2:15 pm

A better way to look at the satellite data is to select the data set that minimizes the Urban Heat Island effect, water vapor and albedo. To do that, download the South Pole data and chart each month. What you will find is that there has been no warming since the start of the data set. Now, how can CO2 increase 25 to 30% and yet have no impact on temperatures? What then is causing the warming? The only thing that warms the oceans is VISIBLE radiation, not LWIR. Do we have evidence that more warming visible radiation is reaching the oceans? You bet. Cloud cover has been decreasing over the oceans, explaining the warming of the oceans, and that has nothing to do with CO2. Nothing.

bdgwx
Reply to  CO2isLife
February 5, 2022 7:00 am

What happens when you consider the NoPol?

What happens when you consider the negative lapse rate in SoPol?

What happens when you consider the ocean portion?

What happens when you consider all of the other factors that modulate the planetary energy imbalance?

Why did clouds change?

Reply to  bdgwx
February 5, 2022 8:45 am

Why not answer the question that was asked by CO2isLife?

Deon Botha-Richards
February 5, 2022 3:27 am

What does the zero line represent? Is that the temperature from “preindustrial time”?

bdgwx
Reply to  Deon Botha-Richards
February 5, 2022 8:13 am

It’s the 1991-2020 average.

Carlo, Monte
Reply to  Deon Botha-Richards
February 5, 2022 8:15 am

No, it's an average over some arbitrary time period, the details of which are not openly provided—they are probably somewhere in other documentation, but I can't point to them. The baseline average is subtracted from the individual monthly averages to produce the "anomaly" numbers (a very deceptive term IMO that is commonly used in climate science).

Reply to  Carlo, Monte
February 5, 2022 7:26 pm

"No, it's an average over some arbitrary time period, the details of which are not openly provided"

It's explained in the head posting and mentioned in the y-axis label of the graph that the baseline is 1991-2020. This was much discussed when Dr Spencer changed the base period from 1981-2010. I'm really not sure why you think this is being hidden.

Carlo, Monte
Reply to  Bellman
February 5, 2022 8:11 pm

More whining.

February 6, 2022 11:42 am

The IPCC's stated goal (speaking for all of humanity, of course) is for mankind to limit global warming to 1.5 °C or less above "pre-industrial times". In turn, the IPCC defines "pre-industrial times" to be the period of 1850-1900. (Ref: https://www.ipcc.ch/site/assets/uploads/sites/2/2018/12/SR15_FAQ_Low_Res.pdf ).

Interestingly, the IPCC apparently goes out of its way to avoid stating what the absolute global temperature was at any time in the period of 1850-1900.

Furthermore, the IPCC’s timeframe for limiting the 1.5 °C increase is given variously, and confusingly, as “in the next several decades”, “by 2040”, “by 2050”, and “by the end of this century”. What’s one to make of this?

Anyway, the global temperature for 1850-1890 has been independently stated to be “roughly 13.6 °C” (ref: https://history.aip.org/climate/timeline.htm ) whereas the global temperature for 1880 to 1900 has been independently stated to be 13.7 °C (ref: https://www.currentresults.com/Environment-Facts/changes-in-earth-temperature.php ).

The 2020 global surface temperature (averaged across land and ocean) was 1.2 °C (2.1 °F) warmer than the pre-industrial period (1880-1900) (ref: https://www.climate.gov/news-features/understanding-climate/climate-change-global-temperature ).

Dr. Spencer’s data presented in the above article leads to a linearized global (i.e., land and sea) warming trend of +0.13 C/decade averaged over the last 43 years. Therefore, combining the foregoing data, we can reasonably predict that the point of reaching 1.5 °C of warming from “pre-industrial times” might occur in (1.5-1.2)/0.13 = 2.3 decades from 2020, or equivalently around year 2043.
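
A minimal sketch of the arithmetic in the preceding paragraph, assuming (as the comment does) that the +0.13 C/decade trend simply continues unchanged; illustrative only, not a prediction:

    # Back-of-envelope extrapolation, assuming the UAH +0.13 C/decade trend continues.
    warming_so_far = 1.2      # deg C above 1880-1900, per the climate.gov figure cited above
    target = 1.5              # deg C, the IPCC threshold
    trend_per_decade = 0.13   # deg C/decade, UAH v6.0 LT trend since 1979

    decades_remaining = (target - warming_so_far) / trend_per_decade
    year_reached = 2020 + 10 * decades_remaining
    print(f"{decades_remaining:.1f} decades -> around {year_reached:.0f}")  # 2.3 decades -> around 2043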

While there is absolutely NOTHING mankind can do to change the trend revealed by Dr. Spencer's analysis of UAH satellite-based global temperature data in, say, the next 20 years, I am confident that a current confluence of natural factors will result in global cooling actually occurring over the next century or so.

In fact, the transition from long-term global warming to long-term global cooling might be revealed in the article’s graph of UAH satellite data for the interval of 2017-2021.

Let’s stay tuned! . . . next little ice age, here we (may) come.

John Boland
Reply to  Gordon A. Dressler
February 6, 2022 7:23 pm

No clue what to think. I am a skeptic, a denier of science apparently. No doubt there appears to be an upward pressure on temperature trends right now…so it’s possible the AGW crowd is right. In any case if we are that close to the 1.5C rise I would bet we get there. So what does that mean? I have no idea. I don’t see a case for CO2 causation, I just don’t see it. I don’t see a case for cooling either. All I see is a non linear system just doing what it does.

MarkMcD
February 6, 2022 2:05 pm

You know what I’d like to see?

I’d like to see all the data that has been used for version 6 to produce the recent record to be redone using version 5.

See, in 2015, version 6 came online and the entire temp record took an upward jump. So I want to see what the record might be if we went back to when UAH temps were a closer match to the radiosondes.

Mind you, I am having issues finding the radiosonde data from the past couple of years – or rather the raw data appears to be available for download but I can’t find anywhere it has been mapped into human-readable form.

Coincidence?

Reply to  MarkMcD
February 7, 2022 6:19 am

I'm not sure if this is meant to be a joke. But for the record, temperatures did not take an "upward jump" in version 6; quite the opposite.

Graph showing annual anomalies compared with 1981-2010 base period in °C.

[Attached image: 20220207wuwt1.png]
February 7, 2022 9:57 am

Why do these data set owners keep pretending that the number out there in the hundredths place has any significance? I know from college physics that you might keep that number when doing all the calculations, but once you reach the end you have to drop the insignificant digits.

bdgwx
Reply to  James Schrumpf
February 7, 2022 10:15 am

They aren't pretending that they have significance, at least in terms of uncertainty. See Christy et al. 2003 for details. I believe there are several reasons why they include extra digits. The public can perform their own calculations on the data with less rounding error. It allows the public more visibility into how each monthly update changes past data. In some cases this may be by request. For example, I heard that BEST had originally provided 2 decimal places, but the public requested more digits. I think one solution that would satisfy all parties is to provide two data files: one using significant-figures rules and one that contains all IEEE 754 digits.

Reply to  bdgwx
February 7, 2022 1:08 pm

Haven’t we gone through this many times already? The standard deviation and uncertainty calculations only apply if one has several measurements of the same thing. Taking a month’s worth of temperatures from one station is not taking several measurements of ONE thing.

The entire point of the multiple measurements improving the accuracy of the mean of the measurements is that the length of the board has a single true value which is being estimated. Thirty separate temperatures have no such true value to estimate, so the techniques used to improve the measurement of one true value are meaningless.

bdgwx
Reply to  James Schrumpf
February 7, 2022 1:34 pm

Yes. We have gone through this quite a bit already. Not only does uncertainty propagate through a combining function (like an average) the same regardless of whether the input measurements are of the same thing or not, but it works the same even if the input measurements are of an entirely different type with entirely different units. See the Guide to the Expression of Uncertainty in Measurement. You can also prove this out with the NIST uncertainty calculator. It is an indisputable fact. The uncertainty of an average is less than the uncertainty of the individual measurements from which the average is based.

That's not all that relevant to your post though. Regardless of how uncertainty propagates through the UAH process, they aren't pretending that the thousandths or even hundredths place is significant in terms of uncertainty. And I still think this whole issue gets resolved if they would just post 2 files: one with significant-figures rules applied and one with all IEEE 754 digits.
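
A small illustration of the claim being argued here (not UAH's actual method): with independent, identically distributed measurement errors, the spread of the average of N measurements comes out close to the individual spread divided by sqrt(N). Whether that independence assumption applies to field temperature data is exactly what the rest of this thread disputes.

    # Monte Carlo sketch: independent errors of SD u on each of N readings;
    # the error of the average has SD ~ u / sqrt(N). Illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    u, N, trials = 0.5, 30, 100_000
    errors = rng.normal(0.0, u, size=(trials, N))   # per-reading errors
    avg_error = errors.mean(axis=1)                 # error of each trial's average

    print(avg_error.std())      # ~0.091
    print(u / np.sqrt(N))       # 0.0913...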

Reply to  bdgwx
February 7, 2022 3:31 pm

"Not only does uncertainty propagate through a combining function (like an average) the same regardless of whether the input measurements are of the same thing or not"

I’m sorry, this is just wrong. Measuring the same thing can build a measurement database with random errors surrounding a true value. Those random errors, if they are truly Gaussian (and this isn’t always the case), will tend to cancel with equal numbers of negative errors and positive errors.

When you measure different things, especially using different measurement devices, you do *NOT* build a measurement database with random errors surrounding a true value. In this case the errors do not cancel completely if at all. Nor is there a “true value” represented by the mean.

The *process* of propagating the uncertainty may be similar in both cases but they do *NOT* provide the same descriptive value. Consider multiple measurements of just two items using different measurement devices for each item, one item being twice the length of the other. You wind up with a bi-modal distribution with an average that will be somewhere in the gap between the two modes and which does not describe a "true value". There is no way to guarantee that the random measurement errors of mode 1 can cancel the random measurement errors of mode 2. And the average accurately describes neither of the two modes, let alone their combination.

Yet this is *exactly* what mathematicians and climate scientists do when combining temperatures in the northern hemisphere with ones from the southern hemisphere. And the excuse that they use anomalies doesn’t hold water either since the range of temps during winter (which determines the mid-range value) is different than the range of values during summer and this affects the anomalies as well as the average temps. I have yet to see anyone trying to find a “global average temp” using any weighting to account for the different temp ranges between the hemispheres.
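
A tiny sketch of the two-item scenario described above, with invented lengths, showing that the pooled average of a bi-modal set of measurements sits between the two modes and matches neither item:

    # Two different items (hypothetical lengths), each measured many times with small
    # random error: the pooled mean describes neither one. Numbers are invented.
    import numpy as np

    rng = np.random.default_rng(1)
    item_a, item_b = 3.0, 6.0                       # "true" lengths, arbitrary units
    meas_a = item_a + rng.normal(0, 0.05, 1000)     # device 1
    meas_b = item_b + rng.normal(0, 0.05, 1000)     # device 2

    pooled = np.concatenate([meas_a, meas_b])
    print(pooled.mean())                            # ~4.5, the length of neither item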

Reply to  Tim Gorman
February 8, 2022 9:10 am

"When you measure different things, especially using different measurement devices, you do *NOT* build a measurement database with random errors surrounding a true value."

If the average of each device converges on the “true value”, with repeated measurements, indeed they do. There is no rule that they must not be “different” or even have gaussian distributions (a bi-modal distribution that converges on the “true value” with multiple measurements is just fine).

AGAIN, my world, oil and gas reservoir, production, economic modeling is awash with geological and rheological measurement methods that are 1. greatly different from each other, 2. with greatly different "random errors", 3. used together in the larger evaluations.

What specific amygdala overamp triggers your flight reflex when presented with these immutable facts?

Reply to  bigoilbob
February 8, 2022 11:40 am

If the average of each device converges on the “true value”, with repeated measurements, indeed they do.

How can the average of each device converge on a “true value” when you are measuring different things? Will a 100% accurate tape measure measuring a 10′ 2″x4″ and a 6′ 2″x4″ board give you something that converges on a “true value”?

If the average from a 100% accurate measuring device doesn’t converge on a “true value” then how can a measuring device with uncertainty do so?

There is no rule that they must not be “different” or even have gaussian distributions (a bi-modal distribution that converges on the “true value” with multiple measurements is just fine).

Temperature measurements from different locations ARE different. Temperature measurements at the same location using different thermometers will give different results.

How do you get a bi-modal distribution to converge on a “true value” when the average doesn’t describe either mode? Your definition of a true value is apparently just the average, like most climate scientists and mathematicians.

"1. greatly different from each other, 2. with greatly different "random errors", 3. used together in the larger evaluations."

Then you are not using basic statistical descriptors. The average of greatly different measurements can't describe a "true value". Greatly different "random errors" with systematic errors just grow the uncertainty, which also means the risk value associated with the evaluation goes up. You'll have to explain how you use them in larger evaluations.

I assure you, when I went in to pitch a capital project and had to admit that our measurements of peak usage were greatly different and that the uncertainty of our measurements varied greatly I wouldn’t be allowed back in the conference room again.

Reply to  bigoilbob
February 8, 2022 12:00 pm

"If the average of each device converges on the "true value", with repeated measurements, indeed they do. There is no rule that they must not be "different" or even have gaussian distributions (a bi-modal distribution that converges on the "true value" with multiple measurements is just fine)."

You are so full of crap it just isn’t funny.

Where do you think random errors originate? The measurand or the measuring device?

Answer – the measuring device and the process of making a measurement with that device.

If you have two devices measuring different things, you must first prove that the “true value” of each set of independent measurements have the same value BEFORE you can assume the average of each is the true value.

If you have two devices measuring two different things and have a bimodal distribution the mean of the two IS NOT the true value of either measurand.

Bimodal distributions generally occur due to two devices measuring the same thing. It is highly unlikely that the mean of the two measurements would be the true value however. It is more likely that that one or both of the instruments is in need of calibration.

Reply to  bdgwx
February 7, 2022 4:49 pm

Uncertainty is not the issue when dealing with Significant Digits Rules (SDR). SDR were developed to ensure that extraneous information is not added to measurements by performing mathematical calculations. The information contained in a measurement is shown by the resolution to which the measurement is made. The resolution determines the number of Significant Digits that can be reported.

Any information added to a measurement by extending the measured digits is nothing more than writing fiction. It is creating unreliable numbers.

Using the IEEE 754 spec on floating point is not a reason for anything. In fact this spec has problems in rounding decimal numbers accurately and requires great care in programming to reduce errors.

This whole reference by you just indicates that you have no appreciation for what physical measurements are and how they should be treated. To recommend something that doesn’t follow the rules followed by all physical scientists, engineers, machinists and others just indicates your lack of dedication to correctly portraying measurements.

Carlo, Monte
Reply to  bdgwx
February 7, 2022 9:14 pm

Clownish nonsense.

Reply to  bdgwx
February 8, 2022 9:10 am

It is an undisputable fact. The uncertainty of an average is less than the uncertainty of the individual measurements from which the average is based.

I don’t think anyone is arguing that point per se, it’s the application of the method that I think is wrong. The way I see it, the above is only applicable if there is a “true value” you’re trying to approach. A board’s length is its “true value”. What’s the “true value” that a thousand different temperature measurements at a thousand different locations is trying to approach?

With the board, we’re not trying to approach the average (or mean) of all the measurements, we’re trying to determine its actual length. If we took a thousand different boards with a range of 100-200 mm and measured each of them once, then took all those measurements and calculated the standard deviation and the uncertainty of the mean, what do we actually now know about all those boards?

Sure, we’ve got a mean, and an SD, and the uncertainty in the mean. What do those values actually tell us about the boards, or an individual board?

At our thousand weather stations, a month's worth of daily measurements are taken to get a monthly average for a station. There's already a 30-year baseline of average temps for each month of a station's history, so the baseline value for that month at that station is subtracted from the monthly average to get the anomaly for that month and year for that station.

The 30-year average for a month is our Xavg. Each current month's average is our X. The Xavg is subtracted from the X to get the deltaX (the anomaly). All the anomalies for each month of a year are averaged to get the "annual average anomaly" for that station, and SDs and uncertainty in the mean are calculated and so forth.

But I don't think that is sum(X – Xavg) for each station-month, it's sum(X1 – Xavg1) for n=1 twelve times. Each station has its own Xavg to subtract from its monthly average X. Instead of adding the distance of each measurement from a common mean value, the distance of each station's value from its own mean is summed.

How does that help? Instead of knowing that each anomaly is some distance from a common value, we have anomalies that are each some distance from its own personal mean point.

I wish I could draw this, but it's like this: instead of the anomalies being shots clustered around a bulls-eye, with each anomaly being measured from where it hit to the bulls-eye, there are bulls-eyes all over the target paper, and each anomaly is measured from its own personal bulls-eye, without knowing how far from the real bulls-eye each of the others is.

It just doesn’t seem like it’s giving any useful information.
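
A minimal sketch of the per-station anomaly bookkeeping described above, with invented numbers, using the usual value-minus-baseline sign convention:

    # Each station's monthly anomaly = that station's monthly mean minus that
    # station's own 1991-2020 baseline for the same calendar month. Numbers invented.
    station_monthly_mean = {"A": 10.4, "B": 25.1}   # this month's averages, deg C
    station_baseline     = {"A": 10.1, "B": 25.3}   # each station's own 30-yr normal

    anomalies = {s: station_monthly_mean[s] - station_baseline[s] for s in station_monthly_mean}
    print(anomalies)                                 # A: ~+0.3, B: ~-0.2
    print(sum(anomalies.values()) / len(anomalies))  # averaged anomaly, ~+0.05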

Reply to  James Schrumpf
February 8, 2022 11:57 am

"How does that help? Instead of knowing that each anomaly is some distance from a common value, we have anomalies that are each some distance from its own personal mean point."

You nailed it!

Climate is the overall absolute temperature profile at a location, not its anomaly. Fairbanks, AK and Miami can have the same anomaly but *vastly* different climates.

Attached is a graph showing average growing season length (orange) and growing degree-days (red) for the US. Note carefully that the growing season length (the number of days between the last spring frost and the first fall frost) is going up while the growing degree-days (a measure of heat accumulation) are going down!

If max temps were going up you would expect heat accumulation (GDD) to go up as well. And that is what the climate scientists are trying to get us to believe. Tmax is growing more and more every day and soon Earth will be nothing but a cinder with nothing growing.

But the actual data from agricultural scientists, whose job depends on accurate, reproducible results, says otherwise. Growing season length is increasing because minimum temps are going up causing last spring frost to move earlier and first fall frost to move later.

Admittedly this is a national average, and different locations and regions will see different results, so you can't just project the national average to any and all places in the US. But that is the *exact* same problem you have with the "global average temperature". You can't project it to any specific location or region. Which also implies that a one-solution-fits-all approach is also bad policy. Solutions have to be tailored to fit the problem at local and regional areas.
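
For reference, one common formulation of growing degree-days, assuming a base temperature of 10 C (the base varies by crop, and some versions also cap Tmax and Tmin); a minimal sketch with invented daily values:

    # Growing degree-days: daily heat accumulation above a crop-specific base,
    # summed over the season. The 10 C base is an assumption for illustration.
    def daily_gdd(tmax, tmin, base=10.0):
        return max(0.0, (tmax + tmin) / 2.0 - base)

    season = [(22.0, 9.0), (28.0, 14.0), (31.0, 18.0)]            # (tmax, tmin) in deg C, invented
    print(sum(daily_gdd(tmax, tmin) for tmax, tmin in season))    # 5.5 + 11.0 + 14.5 = 31.0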

Reply to  Tim Gorman
February 8, 2022 11:58 am

I forgot the graph.

[Attached image: gdd_avg.png]
Reply to  James Schrumpf
February 8, 2022 1:41 pm

"What's the "true value" that a thousand different temperature measurements at a thousand different locations is trying to approach?"

Whatever the proper spatial interpolation, with normal statistical rules governing the sum of the variances from both the interpolation and from the varying error distributions of the measuring instruments and techniques, arrives at. With an appropriate error band of its own.

What gets lost – especially to the Gormans – is that we are measuring one thing – temperature. Different places, different instruments and methods, with different distributions around the "true value", at different times if trending is evaluated, but all temperature. We have known the statistical bases for doing this, and then spatially interpolating it with proper error aggregation, for decades. And now we have the computing HP to do it without empirical shortcuts.

I'm sorry the results don't agree with your prejudgments. But not sorry that you and the Gormans don't have the backup of anyone with actual statistical training, even in this fawning forum…

Reply to  bigoilbob
February 8, 2022 3:23 pm

“What’s the “true value” that a thousand different temperature measurements at a thousand different locations is trying to approach?”

Whatever the proper spatial interpolation, with normal statistical rules governing the sum of the variances from both the interpolation and from the varying error distributions of the measuring instruments and techniques, arrives at. With an appropriate error band of its own.

It appears that you just said there IS no true value but whatever the calculations arrive at.

If that’s so, then the result is meaningless. To go back to the board example, that is saying the true length of the board doesn’t exist until we measure all one thousand different boards and suitably process the results and then proclaim the result as the “true value” of the length of the board, even if no board of that length existed in the population.

Your description sounds more like the statistics applied for figuring public opinion. There are no measurements, only numbers; if you ask 1200 people who they would vote for in a Presidential election, there are no units. Out of the 1200 so many answer this way, so many another. It’s a pure tally of the vote.

That doesn't seem to be the way to handle physical measurements at a location. The first step in determining a standard deviation is to subtract the mean from each individual measurement. What do you do when you have a thousand different means and a thousand different measurements? Instead of sqrt( sum((X_i – Xmean)^2) / n ), n = 1000, you have
sqrt( ((X1 – X1mean)^2 + (X2 – X2mean)^2 + (X3 – X3mean)^2 + … ) / n ).

How can those possibly relate to give a rational, logical answer?

bdgwx
Reply to  James Schrumpf
February 8, 2022 4:52 pm

I wonder if there is a real-world application of averages that might resonate better with you. There are so many examples that can be considered. What about image analysis, especially in the context of astronomical research? I was thinking it is a decent analog because there is a grid of pixels where each pixel has a value assigned to it representing the brightness of that pixel, not unlike how we can represent the Earth as a grid of cells where each cell has a value assigned to it representing the temperature of that cell. If you want to determine the brightness of an astronomical object you can spatially average the pixels, just like you would spatially average the cells of the Earth grid to determine the average temperature. There are other interesting similarities between the two that I don't want to get into yet. I was just thinking that if you can be convinced (if you're not already) that astronomers can determine the various properties of astronomical objects through image analysis techniques, including spatial averages, then it might make the concept of a global average temperature of Earth (or any planet) more intuitive.
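
A minimal sketch of the spatial-averaging idea in the analogy, assuming a simple latitude-longitude grid with cosine-of-latitude area weighting; illustrative only, not the actual UAH gridding or weighting scheme:

    # Area-weighted mean of a gridded field: cells near the poles cover less area,
    # so weight each latitude row by cos(latitude). Grid and values are made up.
    import numpy as np

    lats = np.arange(-88.75, 90, 2.5)                             # cell-centre latitudes, 2.5 deg grid
    lons = np.arange(0, 360, 2.5)
    rng = np.random.default_rng(2)
    field = rng.normal(0.0, 1.0, size=(lats.size, lons.size))     # fake anomaly grid

    w = np.cos(np.deg2rad(lats))[:, None] * np.ones_like(field)   # per-cell area weights
    print((field * w).sum() / w.sum())                            # area-weighted global mean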

Reply to  bdgwx
February 8, 2022 5:36 pm

Not the same thing. You are describing a static image that you are measuring. It would be more appropriate if you said pixels or groups of pixels were missing and you used a clone tool to fill them with nearby pixels. And/or you took pixels from images from a large number of different telescopes, averaged them, and then said you obtained a more accurate and precise image by averaging them all together.

Go back to the start of even using "anomalies" for some purpose. Primarily it was to show that the warming of the globe coincided with the growth of CO2. That ignored seasons, hemisphere winter/summer differences, and the fact that, as you went back in time, there was less and less coverage of the globe. Now that the connection between temp and CO2 is becoming smaller and smaller, the processes to try and combine land and sea temps to show this connection are also becoming less and less important.

Reply to  James Schrumpf
February 8, 2022 5:02 pm

"It appears that you just said there IS no true value but whatever the calculations arrive at."

There is one, but we non-deities don't know it. But we have a best expected value and its associated error band. It's what you always end up with when doing technical evaluations with distributed inputs, and almost always with deterministic inputs.

Reply to  bigoilbob
February 8, 2022 6:45 pm

What’s the best expected value for the average temperature of the Earth, and why do you think so?

Reply to  James Schrumpf
February 8, 2022 7:09 pm

It is a dreaded calculated value, of course. I.e., it’s the properly spatially interpolated mean, or average. It varies with time, and the spatial interpolation technique used (there are more than one). It should come with the aggregation of the distributed uncertainties of the instrumentation and processes used for that time period, the residuals from the spatial interpolations, and so on. See BEST temp data, for example.

But what we are largely looking for are the trends over physically/statistically significant time periods, and their standard errors. These have proven to be quite disturbingly durable w.r.t. ACC. So much so that when the doubters get cornered, they invariably change the subject…

Carlo, Monte
Reply to  bigoilbob
February 8, 2022 9:20 pm

You’re a liar, blob.

bdgwx
Reply to  James Schrumpf
February 9, 2022 5:57 am

I don’t think there is a best or optimum temperature for Earth.

Reply to  bdgwx
February 9, 2022 8:20 am

Here’s how I see the problem under discussion. In a probability analysis there are no units. If there are no units there is no uncertainty except the statistical ones.

But these are measurements with units, and that is the uncertainty getting tossed aside in the probability analysis.

All those measurements are in hundredths of a degree C, which is ridiculous on its face before modern instrumentation, but that’s not the problem either, as digits can be removed. The problem is if those measurements were taken on a thermometer with one-degree increments, all those measurements have an uncertainty of +/- 0.5 degrees C. Modern thermometers are probably around +/- 0.05 degrees C.

When I look at a temperature series like 10.1, 11.1, 9.8, 7.4, 10.5, I see 10.1 +/- 0.05, 11.1 +/- 0.05, etc. When the mean is calculated, the uncertainty goes along:

(10.1 + 11.1 + 9.8 + 7.4 + 10.5) / 5 = 9.8
(0.05 + 0.05 + 0.05 + 0.05 + 0.05) / 5 = 0.05

The mean is 9.8 C +/- 0.05 C

They carry along in the standard deviation. Raising an uncertainty to a power multiplies the uncertainty by the power.

(10.1 +/- 0.05 C – 9.8 +/- 0.05 ) ^2 = 0.09 +/- 0.1 C

Completing the calculation:

(11.1 +/- 0.05 C – 9.8 +/- 0.05 ) ^2 = 1.69 +/- 0.1 C
(9.8 +/- 0.05 C – 9.8 +/- 0.05 ) ^2 = 0.0 +/- 0.1 C
(7.4 +/- 0.05 C – 9.8 +/- 0.05 ) ^2 = 5.76 +/- 0.1 C
(10.5 +/- 0.05 C – 9.8 +/- 0.05 ) ^2 = 0.49 +/- 0.1 C

Standard deviation = 1.6 +/- 0.3 C

If I did all the maths right, that’s the correct result. Question is, would a mathematician or statistician carry that +/- 0.3 C uncertainty along?

bdgwx
Reply to  James Schrumpf
February 9, 2022 9:06 am

JS said: “ If there are no units there is no uncertainty except the statistical ones.”

It turns out that uncertainty is assessed the same regardless of the units of measure or even if there are units at all. The GUM has an example of determining combined uncertainty when the combining function results in a unitless value.

JS said: “(10.1 + 11.1 + 9.8 + 7.4 + 10.5) / 5 = 9.8
(0.05 + 0.05 + 0.05 + 0.05 + 0.05) / 5 = 0.05
The mean is 9.8 C +/- 0.05 C”

That’s not how uncertainty propagates though. Using GUM [1] equation 10, Taylor [2] equations 3.16, 3.18, or 3.47, or the NIST [3] monte carlo method all say the mean is 9.78 ± 0.02 C. That’s 5 different methods all giving the exact same answer. The general formula for the propagation of uncertainty through a combining function that produces an average is u(Tavg) = u(T) / sqrt(N) when all Ti elements have the same individual uncertainty u(T) and there are N elements. It doesn’t even matter what the distribution of u(T) is. It could be gaussian, uniform, etc. It works out all the same.

The UAH uncertainty is significantly more complicated to assess. The spot measurement uncertainty is about ± 1 K. And even though there are 9504 cells in the grid mesh the uncertainty of the average of the grid mesh isn’t 1 /sqrt(9504). This is because there is a non-zero correlation between grid cells due to the way the satellites view the Earth. The degrees of freedom of the grid mesh turns out to be 26 so the uncertainty is closer to 1 / sqrt(26 – 1) = 0.2 for a monthly average. Christy et al. 2003 provides a lot details regarding the uncertainty of the UAH dataset.
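
A rough toy simulation (not UAH's error model, and the numbers are invented) of the point about correlation: an error component shared across grid cells does not average away, so the uncertainty of the grid average stays far above the naive sigma/sqrt(N):

    # Toy model: each cell's error = shared component + independent component.
    # The shared part does not shrink with N, so the grid-average uncertainty
    # ends up near the shared SD rather than near sigma_cell / sqrt(N).
    import numpy as np

    rng = np.random.default_rng(3)
    N, trials = 9504, 2000
    shared = rng.normal(0.0, 0.18, size=(trials, 1))   # error common to all cells (invented SD)
    errors = rng.normal(0.0, 1.0, size=(trials, N))    # independent per-cell error
    errors += shared

    print(errors.std())                   # ~1.0,  per-cell uncertainty
    print(errors.mean(axis=1).std())      # ~0.18, uncertainty of the grid average
    print(1.0 / np.sqrt(N))               # ~0.01, what zero correlation would give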

Reply to  bdgwx
February 9, 2022 12:49 pm

It turns out that uncertainty is assessed the same regardless of the units of measure or even if there are unit at all. The GUM has an example of determining combined uncertainty when the combining function results in a unitless value.

That Equation 10 looks pretty complicated to use on adding 5 simple measurements. You sure a simpler method won’t work?

I’m suspecting we’re talking about two different things. I’m talking about measurement error. As in, if I read a thermometer marked in degrees, any reading I take is going to have a measurement uncertainty of +/- 0.5 degrees.

I think you are talking about statistical uncertainties, the standard deviation and the uncertainty in the mean.

Could you indulge me and show how you would propagate the measurement error in my example above?

Don’t forget to show your work!

bdgwx
Reply to  James Schrumpf
February 9, 2022 2:29 pm

JS said: “That Equation 10 looks pretty complicated to use on adding 5 simple measurements. You sure a simpler method won’t work?”

It's not terrible. The most confusing part for those who aren't familiar with calculus notation is the partial derivative of the combining function wrt the inputs. For a function that computes the average, the partial derivative is 1/N, since when you change an input by 1 unit it changes the output by 1/N units. There are easier methods, but GUM 10 is generic enough that it can handle arbitrarily complex output functions.

JS said: “I’m suspecting we’re talking about two different things. I’m talking about measurement error. As in, if I read a thermometer marked in degrees, any reading I take is going to have a measurement uncertainty of +/- 0.5 degrees.”

Ah… got it. The ±0.5 figure here is the read uncertainty due to the instrument reporting in increments of 1. That changes things a bit actually. Because that is a uniform distribution with the read error between -0.5 and +0.5, the standard deviation works out to 0.289. For -0.05 and +0.05 the standard deviation works out to 0.0289. That's the figure we'll need to plug into the various combined uncertainty equations for the example below.

JS said: “Could you indulge me and show how you would propagate the measurement error in my example above?”

Absolutely. Note that given my new understanding that the ±0.05 figure was the bounds of a uniform distribution and not the standard uncertainty, the answer is going to be a bit different from what I gave you previously. The equivalent standard uncertainty for ±0.05 is ±0.0289. Here is GUM equation 10 solved both numerically and algebraically using ±0.0289.

GUM 10 – numerical method

x_1 = 10.1, u(x_1) = 0.0289
x_2 = 11.1, u(x_2) = 0.0289
x_3 = 9.8, u(x_3) = 0.0289
x_4 = 7.4, u(x_4) = 0.0289
x_5 = 10.5, u(x_5) = 0.0289

y = f = (x_1+x_2+x_3+x_4+x_5)/5
y = 9.78

∂f/∂x_1 = 0.2
∂f/∂x_2 = 0.2
∂f/∂x_3 = 0.2
∂f/∂x_4 = 0.2
∂f/∂x_5 = 0.2

u(y)^2 = Σ[(∂f/∂x_i)^2 * u(x_i)^2, 1, 5]

u(y)^2 = 0.2^2*0.0289^2 + 0.2^2*0.0289^2 + 0.2^2*0.0289^2 + 0.2^2*0.0289^2 + 0.2^2*0.0289^2

u(y)^2 = 0.0000334 * 5 = 0.000167

u(y) = sqrt(0.000167)

u(y) = 0.0129

GUM 10 – algebraic method

u^2(y) = Σ[(∂f/∂T_i)^2 * u^2(T_i), 1, N]

Let…

y = f = Σ[T_i, 1, N] / N

Therefore…

∂f/∂T_i = 1/N for all T_i

And then it follows that…

u^2(y) = Σ[(1/N)^2 * u^2(T_i), 1, N]

And when u^2(T_i) is the same for all T_i then…

u^2(y) = ((1/N)^2 * u^2(T)) * N

u^2(y) = N * 1/N^2 * u^2(T)

u^2(y) = 1/N * u^2(T)

u^2(y) = u^2(T) / N

u(y) = sqrt[u^2(T) / N]

u(y) = u(T) / sqrt(N) = 0.0289 / sqrt(5) = 0.0129

NIST monte carlo method [1]

x0 is uniform between 10.05 and +10.15
x1 is uniform between 11.05 and +11.15
x2 is uniform between 9.75 and 9.85
x3 is uniform between 7.35 and 7.45
x4 is uniform between 10.45 and 10.55

y = (x0+x1+x2+x3+x4)/5 = 9.78

u(y) = 0.0129
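
The same Monte Carlo can be reproduced in a few lines; a sketch assuming, as above, a uniform read error of plus or minus 0.05 on each of the five readings:

    # Monte Carlo check of the worked example: five readings, each with a uniform
    # +/-0.05 read error, averaged over many trials.
    import numpy as np

    rng = np.random.default_rng(4)
    readings = np.array([10.1, 11.1, 9.8, 7.4, 10.5])
    errors = rng.uniform(-0.05, 0.05, size=(200_000, readings.size))
    means = (readings + errors).mean(axis=1)

    print(means.mean())   # ~9.78
    print(means.std())    # ~0.0129  (= 0.0289 / sqrt(5))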

Reply to  bdgwx
February 9, 2022 5:56 pm

For -0.05 and +0.05 the standard deviation works out to 0.0289.

How? Please show me that calculation. My maths work out to 0.05.

bdgwx
Reply to  James Schrumpf
February 9, 2022 6:13 pm

The formula for the variance and standard deviation of a uniform distribution is as follows.

σ^2 = (b-a)^2/12

σ = sqrt[(b-a)^2/12]

So for a uniform distribution with endpoints a = -0.05 and b = 0.05 we have the following.

σ = sqrt[(0.05 + 0.05)^2/12] = 0.0289.

The NIST uncertainty calculator will calculate the SD of any distribution as well. Leave everything at the defaults except change x0 to a uniform distribution with left and right endpoints specified as -0.05 and +0.05.

Reply to  bdgwx
February 9, 2022 8:23 pm

This is not a uniform distribution; it's an uncertainty in a measurement. The LSU Physics Dept. lab web page on Uncertainties and Error Propagation says this:

Example
w = (4.52 ± 0.02) cm,
x = ( 2.0 ± 0.2) cm,
y = (3.0 ± 0.6) cm.
Find z = x + y – w and its uncertainty.

z = x + y – w = 2.0 + 3.0 – 4.5 = 0.5 cm
Δz = sqrt(0.02^2 + 0.2^2 + 0.6^2)
= sqrt(0.0004 + 0.04 + 0.36)
= sqrt(0.4004) = 0.6
z = 0.5 +/- 0.6 cm

This isn’t wrong. But it’s not what you’re doing.

bdgwx
Reply to  James Schrumpf
February 9, 2022 9:36 pm

You described it as being the result of the limitation in the markings on the instrument being only in units of 0.1. That means there is equal probability of the true value being anywhere in the range -0.05 to +0.05 of what you read from the instrument. For example, -0.01 is just as likely as + 0.01 or any other value in that range. That is a uniform distribution. If I’ve misunderstood the meaning of your 0.05 uncertainty figure then no big deal. Just tell me what it means and I’ll redo the calculation.

For the new example I get the same answer as LSU using both the GUM equation 10 and the monte carlo method. BTW… GUM equation 10 reduces to the formula LSU used for the propagation of uncertainty through an output function that only contains addition and subtraction. This is the well-known root-sum-square formula and can actually be derived from GUM equation 10. In fact I've actually derived it in a few posts here already.

Reply to  bdgwx
February 10, 2022 3:00 am

-0.01 is just as likely as + 0.01 or any other value in that range. That is a uniform distribution

Is it? Sounds to me like it’s saying “any measurement is only accurate to +/- 0.05 C”. There’s no distributed sample there, it’s just a statement of accuracy.

On the gripping hand, this is a snip of a histogram of a randomly selected GHCN Monthly site with 30 years of good monthly data, from 1992-2021. The mean of the data is 9.6 C. What kind of a distribution would that be called?

[Attached image: temp_histogram.png]
bdgwx
Reply to  James Schrumpf
February 10, 2022 6:07 am

JS said: “Is it?”

Yes. If the instrument only displays units of 0.1 then the true value could be any value within 0.05 of what is displayed. It is uniform because the true value would not exhibit any preference for a specific digit at the 2nd decimal place. In other words all digits in the 2nd decimal place are equally likely…10% for each.

You should be able to convince yourself of this easily in Excel. Enter "=RAND()" in 100 cells of column A. In column B enter "=ROUND(A1, 1)" and then repeat for each Ax value in column A. In column C enter "=B1 - A1" and then repeat for each Ax and Bx value in columns A and B. Finally, in another cell enter "=STDEV.P(C1:C100)". You'll get a value very close to 0.0289.
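
The same experiment in Python for anyone without Excel handy (a sketch, using more samples than the 100-cell recipe so the estimate is tighter):

    # Round random values to one decimal and look at the spread of the rounding error.
    import numpy as np

    rng = np.random.default_rng(5)
    x = rng.random(100_000)          # like =RAND()
    err = np.round(x, 1) - x         # like =ROUND(Ax, 1) - Ax
    print(err.std())                 # ~0.0289 = 0.1 / sqrt(12)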

JS said: “There’s no distributed sample there, it’s just a statement of accuracy.”

I'm not sure what you mean by "accuracy" here. Technically, and per ISO 5725, accuracy describes the bias of every measurement. It essentially shifts the error distribution to the left or right. The markings on the instrument or the display of the value reported by the instrument do not influence the accuracy in any way. Saying an instrument can only report or be read in units of 0.1 does not in any way describe the accuracy of the instrument. It only describes a limitation of the precision of the instrument. Again, I'm using formal ISO 5725 language here.

JS said: “What kind of a distribution would that be called?”

That is an arbitrary and asymmetric distribution. It does not fit the common types: normal, uniform, triangular, exponential, weibull, etc. But, and this is important, it is still a distribution and still has a standard deviation. I'm not sure of the relevance to the discussion here, because it is not an error distribution. That histogram of Tavg does not describe an uncertainty or the dispersion of values that could be reasonably attributed to a thing being measured.

Reply to  bdgwx
February 10, 2022 7:12 am

You seem to think that you can teach the basic ground rules to the hard kernel of the miswired who seem to have their flight reflexes triggered whenever they are exposed to them. Your many earnest, imaginative attempts are truly admirable, but do you see the pattern yet?

  1. Initial engagement.
  2. Deflection, a la, “different instruments”, “different places”, “different times”.
  3. True denial of stat 101 concepts.
  4. Wholesale subject change. https://wattsupwiththat.com/2022/02/03/uah-global-temperature-update-for-january-2022-0-03-deg-c/#comment-3450532

Mr. Schrumpf at least seems interested in learning, if still blocked. Your insights like “That is an arbitrary and asymmetric distribution. It does not fit the common types: normal, uniform, triangular, exponential, weibull, etc. But, and this is important, it is still a distribution and still has a standard deviation.” might be helpful.

I too get pulled back in from time to time, so who am I to talk. Admire your patience. I read your local posts once in a while, even though I will never invest the years that you and Nick have put in to become intimate with the fundamentals and specifics of temp reconstructions.

bdgwx
Reply to  bigoilbob
February 10, 2022 10:04 am

I am the eternal optimist for sure!

I do wish we could get past the denial that an average has lower uncertainty than the individual measurements that go into it.

If we could get past the denial of simple statistical principles and stop with the seemingly endless barrage of strawmen that accompanies these discussions, we might actually be able to discuss legitimate concerns with the accuracy, precision, and uncertainty of the UAH anomaly values. I'm thinking of things like the one-size-fits-all TLT weighting function, the possibility that the cooling stratosphere is contaminating the TLT values, possible systematic biases affecting the trend, and many other topics that are far more productive and valuable.

Carlo, Monte
Reply to  bdgwx
February 10, 2022 12:37 pm

I do wish we could get past the denial that an average has lower uncertainty than the individual measurements that go into it.

I do wish you’d take your lies back to wherever they came from.

Reply to  bdgwx
February 10, 2022 5:58 pm

I do wish we could get past the denial that an average has lower uncertainty than the individual measurements that go into it.

Speaking probability-wise, it's obvious that the average of several measurements of our beloved board is most likely more accurate than an individual measurement. That doesn't rule out the possibility of one or more of the measurements being smack dead on the true value (though we can't know that), while the average/mean is off a few hairs.

"If we could get past the denial of simple statistical principles and stop with the seemingly endless barrage of strawmen that accompanies these discussions"

I don’t see the simple principles being protested. It’s the complicated ones giving me doubts.

I’ve just been looking at a paper put out by NIST, Technical Note 1900 “Simple Guide for Evaluating and Expressing the Uncertainty of NIST Measurement Results”. Starting on page 24 are several examples of measurements and the calculation of uncertainty.

"The equation, t_i = τ + ε_i, that links the data to the measurand, together with the assumptions made about the quantities that figure in it, is the observation equation. The measurand τ is a parameter (the mean in this case) of the probability distribution being entertained for the observations.

"Adoption of this model still does not imply that τ should be estimated by the average of the observations — some additional criterion is needed. In this case, several well-known and widely used criteria do lead to the average as "optimal" choice in one sense or another: these include maximum likelihood, some forms of Bayesian estimation, and minimum mean squared error."

Example 2 is "proceeding as in the GUM (4.2.3, 4.4.3, G.3.2), the average of the m = 22 daily readings is t̄ = 25.6 °C, and the standard deviation is s = 4.1 °C. Therefore, the standard uncertainty associated with the average is u(τ) = s/√m = 0.872 °C. The coverage factor for 95% coverage probability is k = 2.08, which is the 97.5th percentile of Student's t distribution with 21 degrees of freedom. In this conformity, the shortest 95% coverage interval is t̄ ± ks/√m = (23.8 °C, 27.4 °C)."

NIST’s values for s, the average temp, and the standard uncertainty are the same as Excel calculated them.

This exercise is exactly the same thing we do with GHCN Daily temperatures every month to get the GHCN Monthly summaries. It seems as though NIST agrees with the simple approach here.
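
A short sketch reproducing the TN 1900 numbers quoted above, i.e. u = s/sqrt(m) and a 95 % interval built from Student's t with m-1 degrees of freedom (scipy is used here only for the t quantile):

    # Reproduces the quoted NIST TN 1900 example: u = s/sqrt(m), k from Student's t.
    import math
    from scipy import stats

    t_bar, s, m = 25.6, 4.1, 22
    u = s / math.sqrt(m)                           # ~0.87 C
    k = stats.t.ppf(0.975, m - 1)                  # ~2.08
    print(u, k, (t_bar - k * u, t_bar + k * u))    # interval ~ (23.8, 27.4)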

[Attached image: NIST_surface_temps.png]
Reply to  James Schrumpf
February 10, 2022 7:41 pm

A few things.

1)
Remember the GUM is based on a single measurand, not multiple ones. Even the references (Sec. 4) that discuss multiple measurements being used are based on a function that relates them to a single final value. In other words, say L x W x H = Volume. Average (mean) is not a function that builds to a final value. A mean is a statistical parameter of a distribution, not a function.

2)
“s/sqrt n” is what is known as SEM. It is the interval within which the estimated mean may lie based on the size of a sample distribution. It is based on the sample forming a normal distribution. It is actually based on sampling theory.

It is not measurement uncertainty. It is an assessment of how accurate an estimated mean obtained from a small number of sample measurements may be. It is a sample statistic used to estimate statistical parameters. This is certainly a type of uncertainty, but it doesn’t replace an assessment of measurement uncertainty.

The actual measurement uncertainty assessment uses root-sum-square calculation.

3)
Using a coverage factor gives an “expanded” uncertainty. It is certainly something that could be used and you show it will result in a much larger uncertainty value.

4)
You cannot divide by sqrt N (say 9000 stations) and claim you have increased both accuracy and precision of data prior to 1980. It just isn't possible. Integer data must be used consistently as integers. You cannot use a baseline from data newer than 1990 that has 1 or 2 decimals to convert integer data into a similar resolution. You immediately lose all the uncertainty information from having integer data. Too much of the discussion here is deflected into anomaly uncertainty and ignores the real data uncertainty.

bdgwx
Reply to  James Schrumpf
February 11, 2022 7:48 am

JS said: “This exercise is exactly the same thing we do with GNCN Daily temperatures every month to get the GHCN Monthly summaries.”

Yeah, at least for monthly station summaries. Note that NIST uses a type A evaluation of uncertainty in the example. They could have done a type B evaluation as well by taking the assessed uncertainty of each Tmax measurement and combining them via an output function that computes an average. Both type A and type B are acceptable per the available literature. They often provide different results, so it is often preferred to use both.

Since NIST uses the type A method let’s do the same thing on the UAH monthly grid. You can download the gridded data here. There are 9504 values. The standard deviation for January 2022 is 12.45 K which means the uncertainty on the average is 12.45 / sqrt(9504) = 0.13 K using the same type A method that NIST used in their example. Interestingly this is not significantly different from the type B method with full propagation through the gridding and spatial averaging steps used by Christy et al. 2003. Their result is 0.10 K for a monthly average.

Reply to  bdgwx
February 11, 2022 11:11 am

The NIST method gives a mean of 25.6 C, a standard deviation of 4.1 C, and an SEM of 0.88 C for the month of May 2012. The year’s measurements are complete, and now it’s time to calculate the mean for the year. We’ve got 12 means, 12 standard deviations, and 12 SEMs.

How is all this uncertainty handled?

bdgwx
Reply to  James Schrumpf
February 11, 2022 12:47 pm

You have two choices at this point. Continue the propagation using the type B method or do another type A analysis separately. I’ll do both types with the official Washington D.C. reporting station for the year 2012.

Type A Monthly (N={29-31})

Jan: 9.6 ± 1.0
Feb: 11.3 ± 0.8
Mar: 19.1 ± 1.0
Apr: 20.1 ± 0.9
May: 26.7 ± 0.5
Jun: 29.9 ± 0.9
Jul: 33.9 ± 0.7
Aug: 32.0 ± 0.5
Sep: 27.1 ± 0.6
Oct: 20.5 ± 0.9
Nov: 12.7 ± 0.7
Dec: 11.3 ± 0.8

Type A Annual from Monthly (N=12)

2012: 21.2 ± 2.4

Type A Annual from Daily (N=366)

2012: 21.2 ± 0.5

Type B Annual from Monthly Type A

2012: 21.2 ± 0.1

Type B Full Propagation assuming ± 0.3 obs

Jan: 9.6 ± 0.1
Feb: 11.3 ± 0.1
Mar: 19.1 ± 0.1
Apr: 20.1 ± 0.1
May: 26.7 ± 0.1
Jun: 29.9 ± 0.1
Jul: 33.9 ± 0.1
Aug: 32.0 ± 0.1
Sep: 27.1 ± 0.1
Oct: 20.5 ± 0.1
Nov: 12.7 ± 0.1
Dec: 11.3 ± 0.1

2012: 21.18 ± 0.01

The ± 0.3 figure is based on Hubbard & Lin 2003.

Note that this assumes zero correlation. In reality the monthly ±0.1 and annual ±0.01 will be far too low.
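For anyone wondering what the roll-ups look like mechanically, here is a minimal Python sketch using the rounded monthly values listed above. Because the inputs are rounded (the figures above came from the unrounded data), the outputs will not match exactly; it just shows one standard form of each recipe, with no days-in-month weighting and no correlation terms.

import math
import statistics

monthly_means = [9.6, 11.3, 19.1, 20.1, 26.7, 29.9, 33.9, 32.0, 27.1, 20.5, 12.7, 11.3]
monthly_u     = [1.0, 0.8, 1.0, 0.9, 0.5, 0.9, 0.7, 0.5, 0.6, 0.9, 0.7, 0.8]  # monthly type A values above

annual_mean = statistics.mean(monthly_means)

# Type A on the 12 monthly means: the scatter of the months sets the uncertainty of the annual mean
u_type_a = statistics.stdev(monthly_means) / math.sqrt(len(monthly_means))

# Type B style propagation of the monthly uncertainties through a 12-month average (uncorrelated case)
u_type_b = math.sqrt(sum(u**2 for u in monthly_u)) / len(monthly_u)

print(f"annual mean = {annual_mean:.1f} C, type A = {u_type_a:.1f} C, type B = {u_type_b:.2f} C")

A more careful version would weight each month by its number of days and include the covariance terms.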

Reply to  James Schrumpf
February 11, 2022 5:15 pm

“How is all this uncertainty handled?”

A good question, hopefully asked for elucidation. I know that it can be rigorously evaluated, and I know how I would do it. My process would honor DOM weighting and any correlation between the monthly standard deviations. But bdgwx and Nick Stokes have spent years on the step by step specifics. I hope that one of them responds…

Reply to  bdgwx
February 11, 2022 2:14 pm

Dividing 12.45 by √9504 at best only tells you the interval within which the mean may lie, IOW the standard error of the sample mean, the SEM. The SEM is not a gauge of the uncertainty of a measurement unless there is a single measurand. It is a statistic describing a sample distribution; it is not a statistical parameter of a population.

The formula for SEM is:

SEM = σ/√N where,

SEM is the Standard Error of the sample Mean
σ is the standard deviation of a population
N is the sample size

Look at what you are doing here.

First, by using sigma (SD), you are defining your data as an entire population.

Second, you then declare the data to be a group of 9504 samples so that you can divide σ by a large number, when in actuality each sample has a size of 12 (i.e., it is an average of 12 months), and that is what “N” should be.

These don’t go together. Your data are either a group of 9504 samples or the entire population, one or the other; they can’t be both. You are doing what many, many scientists do.
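The scribbr definition quoted further down can be checked numerically: draw many samples from one population, average each sample, and compare the spread of those averages with σ/√N. A minimal numpy sketch, with a made-up population:

import numpy as np

rng = np.random.default_rng(1)
population = rng.normal(15.0, 4.0, 1_000_000)  # invented "population" of temperatures, sigma = 4.0
N = 12                                         # sample size

sample_means = [rng.choice(population, N).mean() for _ in range(10_000)]
print(f"spread of the sample means: {np.std(sample_means):.2f}")  # close to sigma / sqrt(N)
print(f"sigma / sqrt(N):            {4.0 / np.sqrt(N):.2f}")      # 1.15

That spread describes how much the estimated mean wobbles from sample to sample; it says nothing about whether each individual reading was measured a few tenths of a degree too high or too low.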

Read this document that NCBI felt appropriate for their website.

Standard Error | What It Is, Why It Matters, and How to Calculate (scribbr.com)

and another:

Basics of Estimating Measurement Uncertainty (nih.gov)

and from:

Standard Error | What It Is, Why It Matters, and How to Calculate (scribbr.com)

“The standard error of the mean, or simply standard error, indicates how different the population mean is likely to be from a sample mean. It tells you how much the sample mean would vary if you were to repeat a study using new samples from within a single population.”

Read this, especially the part about temperatures.

Standard Deviation Calculator

Carlo, Monte
Reply to  bigoilbob
February 10, 2022 12:36 pm

blob the Idiot.

Reply to  bdgwx
February 10, 2022 10:24 am

“I’m not sure what you mean by ‘accuracy’ here. Technically, and per ISO 5725, accuracy describes the bias of every measurement. It essentially shifts the error distribution to the left or right. The markings on the instrument or the display of the value reported by the instrument do not influence the accuracy in any way. Saying an instrument can only report or be read in units of 0.1 does not in any way describe the accuracy of the instrument. It only describes a limitation of the precision of the instrument. Again, I’m using formal ISO 5725 language here.”

I did use the wrong term; I meant precision. Here are the specs for the temperature sensor used by NOAA in the USCRN stations, which I believe are their highest-quality stations:

Type: Platinum 1000 ohm ± 0.04% at 0°C (per IEC-751, Class A accuracy)

yadda yadda yadda

Accuracy: ±0.04% over full range

I’m presuming these stations are fully automatic and humans do not pop in to read these thermometers, so if it says the temp is 15.33 C, it’s reading those hundredth-place temperatures with no measurement uncertainty added by a human squinting at a thermometer marked in whole degrees and trying to guess whether it reads 23.6 or 23.7 C.

When I was taking physics we were taught that the relative error (expressed as a percentage) was calculated as dX/X. Zero point zero four percent would be 0.0004 as a decimal, so does that mean that if this instrument reads the daytime high as 23.6 C, the uncertainty in that measurement is 23.6 × 0.0004 = ±0.009 C? If so, what happens when it measures 0.00 C? Is the accuracy meaningless?

I performed your Excel experiment, and it was as you said, though I don’t see what it’s telling me about anything. Take a bunch of randomly generated 5-6 decimal place numbers, subtract a rounded version of each from itself, and calculate the standard deviation of the resulting column of numbers. What does that tell me?

My first thought was that perhaps the random generator algorithm included a built-in standard deviation of approximately that number. So I used 100 temperature measurements from one of the NOAA GHCN-Monthly files and performed the same exercise. The results were about 8.7% smaller than for the random numbers.

Is that significant? I don’t know. I don’t know what the point of the exercise was.

I read this paper online: J. Chem. Educ. 2020, 97, 5, 1491–1494, and while I didn’t understand all of it, this was very clear:

Monte Carlo simulations for uncertainty propagation take as inputs the uncertainty distribution for each variable and an equation for the calculation of a desired quantity. The desired quantity is then calculated by randomly drawing from the specified uncertainty distributions of the input variables. This calculation is then repeated many times (often 10^6 or greater) with new random drawings each time.

Question. How does this repeated sampling and recalculating apply to a mass of temperature measurements?
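To make the quoted Monte Carlo procedure concrete for the averaging case, here is a minimal numpy sketch. The five readings and the ±0.3 C normal uncertainty assigned to each are invented for illustration; I am not claiming this is how any published temperature product is actually processed.

import numpy as np

rng = np.random.default_rng(42)
readings = np.array([12.3, 14.1, 11.8, 13.5, 12.9])  # hypothetical daily temperatures, deg C
u = 0.3                                              # assumed standard uncertainty of each reading, deg C
trials = 100_000

# Each trial perturbs every reading by a draw from its uncertainty distribution, then averages
means = (readings + rng.normal(0.0, u, size=(trials, readings.size))).mean(axis=1)
print(f"mean = {means.mean():.2f} C, Monte Carlo uncertainty = {means.std():.3f} C")

With independent draws the spread of the simulated means comes out near 0.3/√5 ≈ 0.13 C, the usual uncorrelated propagation result; correlated errors (calibration drift, siting bias) would have to be modeled explicitly and would not shrink that way.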

bdgwx
Reply to  James Schrumpf
February 10, 2022 11:47 am

JS said: ” If so, what happens when it measures 0.00 C? Is the accuracy meaningless?”

PT1000 instruments work because temperature alters the electrical characteristics of the material such that its resistance increases with increasing temperature. The ±0.04% figure you see is for the uncertainty of the ohm measurement by the data logger. It is given as a percent of the full scale range (FSR) of ohms over a temperature range of, I believe, -25 to +50 C. This ohm range is about 290 ohms. So 0.0004 * 290 = 0.12 ohms. And since there are about 3.9 ohms per C, that would translate into a temperature uncertainty of about 0.4 C. However, I did look up the specific data logger for USCRN. Some use the CR23X, which has a resistance uncertainty of about ±0.02%. That translates into a theoretical temperature uncertainty of about ±0.2 C. However, due to other factors, Hubbard et al. 2005 concluded that it could be as high as ±0.33 C. So to answer your question, at 0 C the uncertainty is probably on the order of ±0.3 C for a typical USCRN station.

JS said: “What does that tell me?”

What that tells you is:

1) The error distribution when you truncate or round digits is uniform.

2) The error distribution when you truncate or round digits has a standard deviation given by 0.289 / D where D is the number of decimal places you keep.

For example, it is common for temperature instruments to only display 1 digit after the decimal place. That means the read uncertainty is uniform between -0.05 and +0.05 with an SD of 0.0289 C. So if the display says 15.1 C then the measured value could fall between 15.05 and 15.15 C. You just don’t know what it is because the display only gives you 1 digit after the decimal place.

It is also important to note that the read uncertainty is different from the measurement uncertainty. For example, let’s consider the CR23X data logger. It has a published measurement uncertainty of ±0.2 C. Now let’s say you log the values to Excel but with only 1 digit after the decimal place…just because. That injects a read uncertainty of ±0.03 (remember the 0.0289 SD). The combined uncertainty of your dataset per GUM equation 10 or using the simpler root sum square formula is sqrt(0.2^2 + 0.03^2) = 0.202 or 0.2. Notice how the CR23X measurement error dominates the combined uncertainty such that the read uncertainty is negligible.
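Both numbers in that last paragraph are easy to check with a short sketch (the simulated temperatures are arbitrary):

import numpy as np

rng = np.random.default_rng(7)
true_vals = rng.uniform(10.0, 30.0, 1_000_000)  # arbitrary "true" temperatures
logged = np.round(true_vals, 1)                 # values logged with 1 digit after the decimal place

read_sd = np.std(logged - true_vals)            # ~0.029 C, i.e. 0.1 / sqrt(12)
combined = np.hypot(0.2, read_sd)               # root sum square with the ±0.2 C logger uncertainty
print(f"read SD = {read_sd:.4f} C, combined = {combined:.3f} C")  # combined ~0.202 C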

Reply to  bdgwx
February 10, 2022 12:12 pm

You need to explain how temps can be quoted to the 1/1000ths place when the uncertainty means you’ll never know whether a value like 0.002 is correct or not.

Show JS how averaging different things allows one to increase the resolution of the measurements. How do you go from a resolution of 0.1 to 0.001 via arithmetic averaging?

Carlo, Monte
Reply to  Jim Gorman
February 10, 2022 12:41 pm

Via mental willpower that it be so.

bdgwx
Reply to  Jim Gorman
February 10, 2022 1:54 pm

bdgwx said: “2) The error distribution when you truncate or round digits has a standard deviation given by 0.289 / D where D is the number of decimal places you keep.”

Yikes. I just noticed that I butchered that. It should be 0.289 / 10^D.

BTW…here is the derivation, using the variance formula for a uniform distribution. Rounding to D decimal places leaves an error that is uniform over the interval from a = -0.5/10^D to b = +0.5/10^D:

σ^2 = 1/12 * (b – a)^2

σ^2 = 1/12 * (0.5/10^D + 0.5/10^D)^2

σ^2 = 1/12 * (1/10^D)^2

σ = sqrt[1/12 * (1/10^D)^2]

σ = sqrt[1/12] * 1/10^D

σ = 0.289 * 1/10^D

σ = 0.289 / 10^D
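And a quick numerical check of that formula for a few values of D (the simulated values are arbitrary):

import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 100.0, 1_000_000)  # arbitrary values to round

for D in (1, 2, 3):                     # decimal places kept
    err = np.round(x, D) - x            # rounding error, uniform over +/- 0.5 / 10^D
    print(D, f"simulated SD = {err.std():.5f}", f"formula 0.289 / 10^D = {0.289 / 10**D:.5f}")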

Carlo, Monte
Reply to  bdgwx
February 10, 2022 12:39 pm

As usual, your rants about uncertainty are nonsense.

Reply to  James Schrumpf
February 10, 2022 12:04 pm

Here is a screenshot of a section from the GUM that covers what you say.

Please note that a function relating the different measurements to the measurand is required. The fact that a statistical tool like averaging can be applied does not make it a valid method for dealing with the uncertainty as well.

Finding an SEM, Standard Error of the Sample Mean, only tells you how accurate your estimated mean calculation may be for the distribution that you have. It does not deal with the propagation of the uncertainty inherent in each and every measurement used to calculate that mean and SEM.

Reply to  James Schrumpf
February 10, 2022 8:40 am

Don’t get too caught up in the minutiae of uncertainty in the measurements. The real issue is taking old data recorded in integers and expanding it into values that claim far more information, and far more significant digits, than the measurements ever contained.

If proper scientific data analysis were being done, anomaly baselines would only have two significant digits and the corresponding anomalies would be integers as well. Error bars would be at least ±0.5 for these anomalies.

Averaging measurements, whether of the same thing or of different things, simply cannot increase the resolution of the measuring device.
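As a concrete illustration of the integer-resolution point (all numbers invented):

import math

reading = 15                 # an old record kept in whole degrees
baseline = 14.37             # a modern baseline carried to two decimals
u_read, u_base = 0.5, 0.1    # illustrative uncertainties, not from any real dataset

anomaly = reading - baseline                # 0.63 -- two decimals the reading never contained
u_anom = math.sqrt(u_read**2 + u_base**2)   # ~0.51, which swamps those extra digits
print(f"anomaly = {anomaly:.2f} +/- {u_anom:.2f}")

Quoting the 0.63 without the ±0.5 is exactly the spurious precision being objected to here.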

Reply to  James Schrumpf
February 10, 2022 1:17 pm

Part of the problem is that it is not a uniform distribution, especially for human readings. It only becomes uniform as one reaches the limit of resolution. For example, the recording of 96 1/4 easily becomes 96. A reading of 96.75 easily becomes 97. Only around 96.5 does the reading really become uniform.

The uncertainty arises when 96 is recorded. You don’t know the next digit and can never know it, i.e., it is uncertain. Does that make the uncertainty a uniform distribution? Not really.

A normal distribution, which is what is assumed when measuring the same thing multiple times with the same instrument, gives you an SD of 0.5 for an interval of ±0.5.

Reply to  James Schrumpf
February 10, 2022 12:23 pm

You are right, it isn’t wrong. Gave you a plus to cancel the negative.

I like your use of matching the uncertainty decimal place to the resolution of the measurement.

Reply to  James Schrumpf
February 9, 2022 10:51 am

They can’t afford to, as they couldn’t show changes out to the 1/1000ths place. Significant digits are forgotten.

Reply to  bdgwx
February 9, 2022 10:49 am

So you think CAGW based on CO2 warrants any of the Green New Deal costs?

Reply to  James Schrumpf
February 7, 2022 3:16 pm

The problem is that the people who are supposedly experts today really aren’t. People requesting more digits should be told politely but firmly that more digits can’t be provided. By acquiescing to such requests they are foisting a fraud upon the public. Far too many scientists today are totally untrained in uncertainty and significant figures. They don’t know enough to even understand why they can’t have more digits!

It’s a sad commentary on how badly mathematicians and climate scientists are trained today about physical science.

Carlo, Monte
Reply to  James Schrumpf
February 7, 2022 9:13 pm

He and bellcurveman have been told this many times, yet refuse to acknowledge the truth. They are serving an ulterior motive.

Carlo, Monte
Reply to  bdgwx
February 7, 2022 9:11 pm

Idiocy.