UAH Global Temperature Update for December, 2021: +0.21 deg. C.

From Dr. Roy Spencer’s Blog

January 2nd, 2022 by Roy W. Spencer, Ph. D.

The Version 6.0 global average lower tropospheric temperature (LT) anomaly for December, 2021 was +0.21 deg. C, up from the November, 2021 value of +0.08 deg. C.

The annual average anomaly for 2021 was +0.134 deg. C above the 30-year mean (1991-2020), which places it as the 8th warmest year in the 43-year satellite record, behind 2016, 2020, 1998, 2019, 2017, 2010, and 2015.

The linear warming trend since January, 1979 remains at +0.14 C/decade (+0.12 C/decade over the global-averaged oceans, and +0.18 C/decade over global-averaged land).

Various regional LT departures from the 30-year (1991-2020) average for the last 24 months are:

YEAR MO GLOBE NHEM. SHEM. TROPIC USA48 ARCTIC AUST 
2020 01 0.42 0.44 0.40 0.52 0.57 -0.22 0.41
2020 02 0.59 0.74 0.45 0.63 0.17 -0.27 0.20
2020 03 0.35 0.42 0.27 0.53 0.81 -0.95 -0.04
2020 04 0.26 0.26 0.25 0.35 -0.70 0.63 0.78
2020 05 0.42 0.43 0.41 0.53 0.07 0.84 -0.20
2020 06 0.30 0.29 0.30 0.31 0.26 0.54 0.97
2020 07 0.31 0.31 0.31 0.28 0.44 0.27 0.26
2020 08 0.30 0.34 0.26 0.45 0.35 0.30 0.24
2020 09 0.40 0.42 0.39 0.29 0.69 0.24 0.64
2020 10 0.38 0.53 0.22 0.24 0.86 0.95 -0.01
2020 11 0.40 0.52 0.27 0.17 1.45 1.09 1.28
2020 12 0.15 0.08 0.21 -0.07 0.29 0.44 0.13
2021 01 0.12 0.34 -0.09 -0.08 0.36 0.50 -0.52
2021 02 0.20 0.32 0.08 -0.14 -0.65 0.07 -0.27
2021 03 -0.01 0.13 -0.14 -0.29 0.59 -0.78 -0.79
2021 04 -0.05 0.05 -0.15 -0.28 -0.02 0.02 0.29
2021 05 0.08 0.14 0.03 0.06 -0.41 -0.04 0.02
2021 06 -0.01 0.31 -0.32 -0.14 1.44 0.63 -0.76
2021 07 0.20 0.33 0.07 0.13 0.58 0.43 0.80
2021 08 0.17 0.27 0.08 0.07 0.33 0.83 -0.02
2021 09 0.25 0.18 0.33 0.09 0.67 0.02 0.37
2021 10 0.37 0.46 0.27 0.33 0.84 0.63 0.06
2021 11 0.08 0.11 0.06 0.14 0.50 -0.42 -0.29
2021 12 0.21 0.27 0.15 0.03 1.63 0.01 -0.06
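
As an aside for readers who want to check the trend arithmetic themselves, below is a minimal sketch of fitting a least-squares trend to a monthly anomaly series, using the 24 GLOBE values tabulated above. A two-year window is dominated by ENSO swings; reproducing the +0.14 C/decade figure requires fitting the full January 1979 - December 2021 record from the files linked further down.

```python
# Minimal sketch: fit an ordinary least-squares trend (deg C per decade) to a monthly
# anomaly series, here the 24 GLOBE values from the table above. A two-year window is
# dominated by ENSO noise; the +0.14 C/decade figure comes from the full record.
import numpy as np

def trend_per_decade(decimal_years, anomalies):
    slope_per_year, _intercept = np.polyfit(decimal_years, anomalies, 1)
    return slope_per_year * 10.0

months = 2020.0 + np.arange(24) / 12.0  # Jan 2020 .. Dec 2021, one point per month
globe = [0.42, 0.59, 0.35, 0.26, 0.42, 0.30, 0.31, 0.30, 0.40, 0.38, 0.40, 0.15,
         0.12, 0.20, -0.01, -0.05, 0.08, -0.01, 0.20, 0.17, 0.25, 0.37, 0.08, 0.21]

print(f"{trend_per_decade(months, globe):+.2f} C/decade over this 24-month window")
```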

The full UAH Global Temperature Report, along with the LT global gridpoint anomaly image for December, 2021 should be available within the next several days here.

The global and regional monthly anomalies for the various atmospheric layers we monitor should be available in the next few days at the following locations:

Lower Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt
Mid-Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tmt/uahncdc_mt_6.0.txt
Tropopause: http://vortex.nsstc.uah.edu/data/msu/v6.0/ttp/uahncdc_tp_6.0.txt
Lower Stratosphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tls/uahncdc_ls_6.0.txt

Alex
January 3, 2022 10:43 pm

Wow! The warming trend becomes exponential!

Scissor
Reply to  Alex
January 4, 2022 5:21 am

Entire houses and neighborhoods are burning up in Colorado (Boulder County).

Phil R
Reply to  Scissor
January 4, 2022 6:04 am

We got snow in SE Virginia yesterday, and birds fly south for the winter. So, what’s your point?

Scissor
Reply to  Phil R
January 4, 2022 8:53 am

I guess I should have used a sarc/ tag.

I was thinking that unfortunately individual catastrophic events, that are indeed devastating to many, are politicized and often exaggerated.

Climate social scientists are coming out of the woodwork to blame AGW and their political enemies. Instead, in addition to criminal investigations into the particular causes of this event, fire engineers should be studying what steps can be taken to minimize the damage from future fires.

Phil R
Reply to  Scissor
January 4, 2022 9:41 am

Sorry, usually pretty good at spotting sarc but missed this one. My bad…

Reacher51
Reply to  Scissor
January 4, 2022 7:51 am

Scissor,

The Great Peshtigo Fire, the worst forest fire in American history, devastated a huge swath of Wisconsin on October 8, 1871.

On that very same day, the Great Chicago Fire also ignited, destroying ~3.5 square miles of Chicago and leaving 100,000 people homeless.

Therefore… what? Have we just discovered terrifying new CO2 time traveling abilities? Or do we, perhaps, need a remedial course in elementary logic?

David Brewer
Reply to  Reacher51
January 4, 2022 6:12 pm

OMG… time travelling CO2. Now that’s really scary!!

Reply to  Scissor
January 4, 2022 8:35 am

Arson suspected. No power lines were down. Video of burning shed that apparently started the fire.

Sky King
Reply to  Scissor
January 4, 2022 4:06 pm

Downslope winds resulting from a western low pressure system that diverted the jet stream and heated adiabatically fed the fires. Please postulate how CO2 caused this.

Reply to  Alex
January 4, 2022 8:20 am

“MAD BLIZZARD” BATTERS HOKKAIDO, JAPAN; RECORD-SETTING SNOWSTORMS LEAVE 1 MILLION WITHOUT POWER IN U.S.–GAS PRICES RISE AS COLD FREEZES WELLS; + UK SET FOR BLIZZARD CONDITIONS
January 4, 2022 Cap Allon
Snow even hits Florida, as the next cyclical bout of global cooling dawns…

https://wattsupwiththat.com/2022/01/03/californias-sierra-nevada-sets-all-time-december-snow-record/#comment-3424429
 
The killer global cold will come in January and especially February 2022 – locked in by the cold Nino34 SST in October 2021. Nino34 SST leads UAH LT global by ~4 months.
 
The difficult-to-predict southward descents of the Polar Vortex will decide who lives and who dies. I am particularly concerned about the UK and Germany, two old enemies joined in their struggle for winter survival, their vital energy systems sabotaged by toxic green-energy false propaganda. Lenin and Goebbels would be proud – their lessons of false propaganda have been well-learned by their modern pseudo-green acolytes.
 
I accurately predicted the current British energy crisis in 2002 and in greater detail in 2013. January and February 2022 will be worse.
AN OPEN LETTER TO BARONESS VERMA
British Undersecretary for Energy and Climate Change, 31Oct2013
By Allan MacRae, B.A.Sc.(Eng.), M.Eng.
https://wattsupwiththat.com/2013/10/31/blind-faith-in-climate-models/#comment-1130954

john harmsworth
Reply to  ALLAN MACRAE
January 4, 2022 2:05 pm

-32C here in Saskatchewan, Alan. Scanning the horizon for any chance of a warming trend, daily, weekly, annual or multi decadal. Whatever is available.

Bindidon
Reply to  john harmsworth
January 4, 2022 4:41 pm

You are simply at the wrong place…

[image]

… and I am at the right one

[image]

Reply to  john harmsworth
January 4, 2022 7:33 pm

Your summers are getting warmer. The flip side is that your winters will get colder. Eventually the snow will stay all year and your dependents, should they stay put, will be building on ice mountains.

January 3, 2022 11:44 pm

A month of interhemispheric heat piracy from the southern to northern hemisphere. NH warmer, SH cooler. All from piracy in the Caribbean 😁

https://ptolemy2.wordpress.com/2020/09/12/widespread-signals-of-southern-hemisphere-ocean-cooling-as-well-as-the-amoc/amp/

mcswell
Reply to  Phil Salmon
January 4, 2022 6:53 am

Arrr!

Art Slartibartfast
January 4, 2022 12:11 am

I do not understand why the average global temperature is even discussed. It reduces the thirty types of climate we have in the Köppen–Geiger classification to an oversimplified, meaningless number.

Depending on who you ask, the ideal global average temperature would be something like 15.4 °C. OK, so if it is this temperature everywhere on the planet we have a huge problem with melting ice caps. If this average is reached through a low temperature at the poles gradually rising to a higher temperature at the equator, it completely depends on the temperature distribution whether this is acceptable.

Can somebody explain why average temperature is worth examining?

Reply to  Art Slartibartfast
January 4, 2022 1:20 am

It distracts the climate hysterics.

Reply to  Art Slartibartfast
January 4, 2022 2:07 am

“Can somebody explain why average temperature is worth examining?”

Average global temperature is just a reference value against which change can be measured. If there is a statistically significant change to this value over time then why wouldn’t that be worth knowing?

Derg
Reply to  TheFinalNail
January 4, 2022 3:58 am

.134 is significant?

My thermometer can’t read that small.

Scissor
Reply to  Derg
January 4, 2022 5:26 am

The difference in temperature from one side of a room to another often will be greater than this by more than an order of magnitude.

Crowcatcher
Reply to  Scissor
January 4, 2022 6:11 am

Well worth having a look at the live Webb Telescope site to see what temperature differences occur in the real world (sorry, don’t know how to do the link)

Reply to  Crowcatcher
January 4, 2022 6:30 am

Don’t even need to find the link – if alarmists could calm themselves down and think about the typical temperature ranges they experience every day, let alone during the whole year, then they would see how ridiculous it is to worry about ~1.5°C over a century. In fact the UAH data shows that the US48 area is 1.6°C above the 30 year average, and it’s fine – in fact it just had the largest snow fall in Northern California for December, congratulations, no water shortages for 2022!

Reply to  Scissor
January 4, 2022 10:30 am

The difference in temperature from one side of a room to another often will be greater than this by more than an order of magnitude.

Surface readings are the daily high and low temperatures averaged (added together and divided by 2). There are also multiple sample points per region.
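
For illustration, here is a toy example (invented hourly numbers, not station data) of how the (Tmin+Tmax)/2 convention discussed in this thread can differ from a time-weighted daily mean:

```python
# Toy illustration: the (Tmin + Tmax) / 2 convention vs. a time-weighted daily mean.
# The hourly values below are invented, not real station data.
hourly = [4.0] * 6 + [10.0, 16.0, 22.0, 27.0, 29.0, 30.0] + [30.0] * 6 + \
         [20.0, 14.0, 10.0, 8.0, 6.0, 5.0]

tmin, tmax = min(hourly), max(hourly)
midrange = (tmin + tmax) / 2            # what a max/min record effectively reports
true_mean = sum(hourly) / len(hourly)   # time-weighted mean of the same day

print(f"(Tmin+Tmax)/2 = {midrange:.1f} C, 24-hour mean = {true_mean:.1f} C")
```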

MarkMcD
Reply to  TheFinalNail
January 4, 2022 2:56 pm

And if it is 30° for 12 hours and 4° for 30 minutes?

Average temps are junk data. We need better reporting. Particularly with the weather station debacle over siting and new ‘instantaneous reads’ that are biasing the data we do get, even from multi-sites across a region.

If they don’t ‘like’ a reading, the ‘experts’ simply use a nearby (sometimes 100 km away) site to adjust the data.

Even this UAH data seems suspect.

From 2015: Version 6 of the UAH MSU/AMSU global satellite temperature dataset is by far the most extensive revision of the procedures and computer code we have ever produced in over 25 years of global temperature monitoring.

Now go back to top of page and look at that graph – there’s a distinct state change in 2015!

Carlo, Monte
Reply to  MarkMcD
January 4, 2022 6:02 pm

Strictly speaking, the max-min “average” is actually the median temperature, with little or no relation to the mean temperature.

MarkMcD
Reply to  Carlo, Monte
January 7, 2022 4:05 pm

My point stands however. 😀

Also the one about the abrupt change in the data upon the implementation of version 6 in 2015.

It might be instructive to see the data fed into version 5 for the 7 years since 2015 and see what that looks like. AFAIK (and I may be wrong) there was no parallel run of the data when they changed versions, just “here’s the new graph!”

I am unable to find radiosonde data from 2015 on but I’d take bets that, without adjustments being made, it would not reflect a sudden jump in temps like we see above.

Given there was no apparent sudden change to weather as experienced by people (the headlines would have been screaming about it) I think the change made is invalid, and has led to data that is increasingly irrelevant.

Look at the graph again – it goes from hovering around the centreline to everything being well above it – do YOU recall a sudden jump in climate effects in 2015?

Carlo, Monte
Reply to  MarkMcD
January 7, 2022 6:20 pm

Indeed it does, I only wanted more people to realize what is/was actually being recorded.

And no, I cannot identify any significant events in 2015-2016.

Reply to  Derg
January 4, 2022 6:35 am

Buy 100 such thermometers 🌡 .
Then you can.

Clyde Spencer
Reply to  Phil Salmon
January 4, 2022 9:03 am

By the time you read 100 thermometers the temperature will likely have changed.

Reply to  Derg
January 4, 2022 10:20 am

That’s not a thermometer reading.

Derg
Reply to  TheFinalNail
January 4, 2022 11:53 am

Lol

Phil R
Reply to  TheFinalNail
January 4, 2022 6:13 am

The fundamental flaw is that “Average global temperature” isn’t a “measure” of anything.

Reply to  Phil R
January 4, 2022 7:10 am

Exactly! It is a metric that doesn’t even include a standard deviation of the temp data that goes into the metric.

It is not a measurement!

Clyde Spencer
Reply to  Jim Gorman
January 4, 2022 9:07 am

Reducing a data set to a mean reduces the information. It is like taking a picture composed of millions of pixels and reducing it to a single ‘representative’ pixel.

AGW is Not Science
Reply to  Phil R
January 4, 2022 11:54 am

Furthermore, when most of the “readings” are being taken in places where increased urbanization is inflating them, and given how small the changes have been, one can imagine a high likelihood that the supposed amount of “change” is an artifact of the defects in the measurements rather than any actual change to the “climate.”

Reply to  TheFinalNail
January 4, 2022 7:06 am

Yet a mean has no information concerning any specific area of the globe. It is a meaningless metric useful only for propaganda purposes. It also allows so-called scientists to refer to an average global temperature as having an effect in any and all locations.

How many studies and press releases refer to a generic GAT rather than to what is occurring locally and to exactly how those local temps directly affect what is being studied? Many never directly study what a change in temp actually does, only that an increasing GAT must be the cause!

Changes should be broken down to the various climate areas so proper mitigation decisions can be made.

Reply to  Jim Gorman
January 4, 2022 9:06 am

Jim, I like your suggestion of having a breakdown for the climate zones/subzones.

Mitigation = adaption?

guest
Reply to  TheFinalNail
January 4, 2022 7:13 am

In freshman physics lab, if you expressed a result to a precision better than the capability of the instrument used to make the measurement, you would get points taken off. Taking an average of measurements made with a meter stick will not be known to a precision of a nanometer.

Clyde Spencer
Reply to  TheFinalNail
January 4, 2022 9:01 am

If the information is equivalent to counting the number of grains of sand on a beach, then the answer is that it isn’t worth knowing if it has no practical application.

Adding the temperature of water to the temperature of air is comparing apples and oranges because they have very different specific heats and different surface areas.

If one is looking for a metric to demonstrate trends, then ocean temperatures are probably a better choice than air or combined temperatures because water temperatures change more slowly, thus filtering out high-frequency noise.

In any event, any reported temperatures that don’t include error bars are just smoke and mirrors. It is an implicit claim that any changes in temperature are known exactly, when that isn’t true. We can calculate a nominal value, called an arithmetic mean, but commonly without the associated probabilistic variance. Most likely, even a 1 sigma variance is much larger than the +/-0.005 monthly or +/-0.0005 annual precision implied by the number of significant figures in the reported means. To provide an analogy, it is like being lost in a large city. You know what city you are in, but you don’t know where you are in the city. Claiming that one is in Los Angeles doesn’t really provide much information about where one is in California. Try calling 911 and telling the operator you are injured. She will ask you where you are located. Good luck in getting assistance if you tell her that you are in Los Angeles.

john harmsworth
Reply to  TheFinalNail
January 4, 2022 2:12 pm

The IPCC actually said that warming of up to 1.8 C was generally beneficial for the planet. Why isn’t THAT worth knowing? Crickets from the Green section.

Gerard O'Dowd
Reply to  TheFinalNail
January 4, 2022 9:58 pm

Average global temperature is limited in utility because it is an abstract, retrospective data point that has little relationship to microclimates or micro habitats; its usefulness would lie in its predictive value for changes in future average temperature variations, whether the direction or the absolute change in temperatures. Have you ever seen a probability calculation performed? Average temperatures might also be useful for the creation of a GT or Regional Temperature Futures Contract on a Commodity Market Exchange or a betting line on an internet gambling site, though I’m not certain how the odds would be established.

Tom
Reply to  Art Slartibartfast
January 4, 2022 4:27 am

During the last “ice age”, thousands of years ago, much of the land in the northern hemisphere was covered in ice, a lot of ice. Do you think the global average temperature was the same then as now? No, it wasn’t. Major changes in global climate are accompanied by changes in temperature. You can argue cause and effect, but there is little doubt climate and temperature are related. Temperature is a major factor in climate. Do you think we’d notice if the average global temperature were 20 deg C warmer; how about a hundred?

Phil R
Reply to  Tom
January 4, 2022 6:38 am

erm… I think glaciers covering the northern hemisphere would be a bit of a tell. I don’t think one would need to check the “global average temperature” to confirm that it’s cold.

Reply to  Phil R
January 4, 2022 2:37 pm

But the seawater under the sea ice might be warmer than usual at depth. So adding everything together, it could still be the hottest year evah!

Reply to  Tom
January 4, 2022 7:28 am

You can answer this yourself with a couple of easy experiments. Throughout a day go outside and estimate the temp, then compare it to a thermometer. How close can you guess?

Try varying your house thermostat and have people try to guess the actual temp. What is the range of guesses?

I suspect you will find humans are not terribly accurate receptors of temperatures.

MarkW2
Reply to  Art Slartibartfast
January 4, 2022 4:29 am

The plain truth is that it’s totally and utterly meaningless but provides the statistics required to support the environmental agenda and keep the scientists’ gravy train running.

The focus should be on reducing pollution, which is the real killer, not CO2.

Tom
Reply to  MarkW2
January 4, 2022 5:31 am

You seem to be suggesting that monitoring the earth’s temperature is not an appropriate area for scientific inquiry.

MarkW2
Reply to  Tom
January 4, 2022 5:57 am

It isn’t appropriate. In fact it’s nonsensical. Temperatures and temperature trends vary so enormously that it makes absolutely no sense whatever. Totally and utterly meaningless except for the purposes of propaganda.

Tom
Reply to  MarkW2
January 4, 2022 6:01 am

I’m glad you cleared that up for me.

Richard Page
Reply to  Tom
January 4, 2022 6:26 am

Monitoring the local area temperatures and weather patterns for a local climate is perfectly fine – you can build up an accurate and precise picture of a changing climate. However, by the time you’ve averaged temperatures from all over the planet, then the result is ridiculous – the mathematical precision of the final number (often very carefully worked out to 3 decimal places) belies the fact that when you average real world readings together you multiply the error range until it’s such a ridiculously wide range that it becomes meaningless. Who in their right mind would think that an average world temperature of 7.5° +/- 30° was worth spit?

Tom
Reply to  Richard Page
January 4, 2022 6:43 am

Why don’t you ask Roy Spencer if he feels like his data is “worth spit”.

Clyde Spencer
Reply to  Tom
January 4, 2022 9:10 am

I would be a lot happier if he appended error bars to his reported average values.

MarkW2
Reply to  Tom
January 4, 2022 3:12 pm

I very much doubt Roy Spencer would be in favour of such data but for the fact it’s been adopted by climate ‘scientists’.

Reply to  Richard Page
January 4, 2022 8:09 am

Exactly. What is usually quoted as a standard deviation is really the error introduced throughout and in the calculations, NOT the standard deviation of the data from which the mean was calculated.

Rick W Kargaard
Reply to  Tom
January 4, 2022 6:15 am

There is much scientific inquiry that has little or no usefulness to humanity in general and I would include establishing a more accurate guess of global average climate as one of them. It is even difficult to determine the average temperature of a room in my house as it varies constantly from spot to spot and second to second. Establishing a true average for the planet would likely require a planet size computer with access to almost infinite data.
Furthermore, the only temperatures that matter to me are the ones that determine my comfort so that I can make appropriate decisions to ensure it.
That includes enough warmth to mature the crops I depend on or perhaps enough winter cold that I can go skating on the local lake.
In other words, mostly local and time constrained conditions.

Phil R
Reply to  Tom
January 4, 2022 7:35 am

Before you start talking about “monitoring the earth’s temperature” you need to clearly define what “the earth’s temperature” is. You can’t monitor something you can’t even define.

Reply to  Tom
January 4, 2022 11:59 am

The earth has a fever. But it was a politician that told us that when he starred in a movie.

Reply to  Doonman
January 5, 2022 3:14 pm

The only answer to a fever is MORE COWBELL!!!

So I’m told.

Reply to  Art Slartibartfast
January 4, 2022 6:34 am

It’s a number. But it would be better to show it in context, because there are so many innumerates out there who think 0.134° is a big deal, or 1.5°C is the end of the world when in their lives they experience greater changes every day.

Reply to  Art Slartibartfast
January 4, 2022 7:11 am

For the same reason that you like to know what average you spend per month using credit cards….potential correction of bad habits, find faulty entries, awareness of your situation….

Sal Minella
Reply to  DMacKenzie
January 4, 2022 8:21 am

I doubt that the global average credit usage would help me.

Bob Rogers
Reply to  DMacKenzie
January 5, 2022 7:19 am

The average I spend per month is pretty meaningless to me. The absolute numbers have meaning. Also, I can directly impact what those numbers are. If I buy less stuff then the number will be lower.

No one knows with any certainty what causes the global average temperature to change.

Bob Hunter
Reply to  Art Slartibartfast
January 4, 2022 7:41 am

Because Climate Change is politics.
Therefore everything must be summarized in a 6 word headline or 20 second sound bite

Dan M
Reply to  Art Slartibartfast
January 4, 2022 9:50 am

It’s a useful aggregate temperature measurement that tells whether the planet as a whole is warming or not. Unfortunately, we only have satellite data going back to 1979. If we had data going back to the 1930s, it would likely show an average temperature similar to today in the 1930s through the mid 1950s (based on un-manipulated ground temperature data). After that temperatures began falling (remember the “ice age is coming” predictions of the early 1970s?).

It seems that we have a natural temperature variation that is on the order of multiple decades, perhaps 7-8 decades for a full cycle. Actual warming due to CO2, if there is any, is much smaller than the CO2 sensitivity of the IPCC models, which have vastly over-predicted warming.

AGW is Not Science
Reply to  Dan M
January 4, 2022 12:11 pm

Actual warming due to CO2, if there is any

The Earth’s climate history says there isn’t any. Observation trumps theory.

Doug
Reply to  Art Slartibartfast
January 4, 2022 10:14 am

In reality no one knows the average temperature even to a degree. What is published is the average temperature of the small part of the earth that is measured. The rest is extrapolation. The fact that it is broken down to a hundredth of a degree is dishonest and exploitive of the public’s general ignorance of science.

Reply to  Doug
January 5, 2022 3:18 pm

Steve over at BEST once told me his customers wanted that precision, so that’s what BEST gave them.

Reply to  Art Slartibartfast
January 4, 2022 7:43 pm

Can somebody explain why average temperature is worth examining?

It gives rise to a single number, a 1.5C rise from the 1850 level, that will cause all life on earth to cease. It makes for simple language.

The fact that no one living human had been to the South Pole in 1850 should give rise to the question how was global temperature determined in 1850?

Then there is the situation where 10 of the climate models from 10 of the prestigious climate prognosticating groups have the current global temperature over a 2C range.

The current disagreement over the global temperature covers a 2C range yet we are told that a 1.5C rise is going to wipe out all living matter.

Global surface temperature is indeed a meaningless number. Anyone telling you otherwise is a con artist trying to get you to part with money for the snake oil they are selling.

Russ R.
Reply to  RickWill
January 7, 2022 11:27 am

It is not a coincidence that the places that are “warming the most” are the most inhospitable to life. So they have the worst long term record, which is easy to “re-imagine” when the need arises.

January 4, 2022 1:49 am

“The annual average anomaly for 2021 was +0.134 deg. C above the 30-year mean (1991-2020)”

I mean, it just screams Thermageddon…

AGW is Not Science
Reply to  Climate believer
January 4, 2022 12:12 pm

If we were getting honest reporting, the description should read “The annual average anomaly for 2021 was indistinguishable from the 30-year mean temperature.”

Uncle Mort
January 4, 2022 2:49 am

Whenever I see an external temperature which is +0.134 deg. C above the 30-year mean I immediately turn down the central heating in order to save the planet.

Tom
Reply to  Uncle Mort
January 4, 2022 5:03 am

If the temperature anomaly increases by 0.134 deg F per month for 100 months in a row, I think you may change your opinion.

Reply to  Tom
January 4, 2022 6:11 am

Better read the article again: the anomaly is 0.134°C, not F, and no one is saying that there is a steady increase of that every month. The whole year of 2021 was slightly above the average based on the past 40+ years of the satellite record. The trend in the record is 0.14°C per DECADE, and even if that was maintained for a whole century one would barely know the difference; probably the most noticeable signs would be a longer growing season, more rain from the extra evaporation from the oceans, and a net decrease in temperature-related deaths since 10x as many people die from cold as from heat. We would have a long way to go before the Earth reached the temperature levels of the Jurassic and Cretaceous periods, when the Earth was a paradise teeming with life.

Tom
Reply to  PCman999
January 4, 2022 6:23 am

F or C, if it goes up by that much every month for a long time, even if it’s on average, then you and everyone will notice.

no one is saying that there is a steady increase of that every month.

I said it, hypothetically.

Reply to  Tom
January 4, 2022 7:58 am

Guess again. 0.14°C per decade = 0.014°C per year = 0.0012°C per month = 0.00004°C per day.

Not exactly measurable on a thermometer is it?

bdgwx
Reply to  Jim Gorman
January 4, 2022 8:13 am

It’s also 0.0000000005 C/s. Or if the point is to be as absurd as possible let’s just go all the way. It’s also 9e-54 C/tp where tp is Planck time or about 5e-44 seconds. Regardless, it is measurable because UAH literally did it.

Carlo, Monte
Reply to  bdgwx
January 4, 2022 9:24 am

bozo-x, the UAH numbers have little to no relation to where humans reside.

bdgwx
Reply to  Carlo, Monte
January 4, 2022 9:57 am

So now the goal posts have shifted from “the numbers look too small in different units so it couldn’t have been measured” to “the numbers have no relationship to where humans reside so it couldn’t have been measured”?

Using this logic, because the depth of the Titanic wreck is a mere 0.00000000000012 parsecs underwater where no human resides, there is no possible way we could have measured its depth to 3840 meters. If that sounds absurd that’s because it is absurd.

Carlo, Monte
Reply to  bdgwx
January 4, 2022 6:05 pm

Comprehension problem?

whatlanguageisthis
Reply to  bdgwx
January 4, 2022 10:47 am

Regardless, it is measurable because UAH literally did it.

Actually, no. They did not measure it. They took a few measurements. Assumed some gradient for the areas they did not measure. Weighted the values they had by some secret formula. And came up with a value. To actually measure this, they would need a much finer grid of thermometers. They would need to make a lot more measurements from each thermometer per day, and then average those values.

Try this – you set the thermostat in your house for a desired temperature. You assume that is the temperature in the house, but is it? Go put a few thermometers in each room. Put some by the vents. Some by windows. Your fridge is in your house, should it get one? What about by the compressor for your fridge, that is in your house? Put one behind the TV. One in the back of closet. How good is your thermostat at telling you the temp in your house again? That is about the value of the global average temperature.

bdgwx
Reply to  whatlanguageisthis
January 4, 2022 11:27 am

whatlanguageisthis said: “Actually, no. They did not measure it.”

The GUM defines it as a “set of operations having the object of determining a value of a quantity”.

The NIST defines it as an “experimental process that produces a value that can reasonably be attributed to a quantitative property of a phenomenon, body, or substance”.

The 0.14 C/decade figure is a quantitative property of the entirety of the TLT atmospheric layers and UAH used a set of operations to determine its value. Therefore it is a measurement and UAH measured it.

What is your definition of “measure”?

whatlanguageisthis
Reply to  bdgwx
January 4, 2022 1:38 pm

The GUM defines it as a “set of operations having the object of determining a value of a quantity”.

So, as long as the objective is to determine the value, it is a measurement? This seems like a poor definition to me.

The NIST defines it as an “experimental process that produces a value that can reasonably be attributed to a quantitative property of a phenomenon, body, or substance”.

I would say that the produced value being reasonably attributed to represent a true global temperature is exactly the question. Can a single high and a single low temp, in a few locations around the planet, be reasonably expected to capture the true global average temperature of the planet? I would argue the current set of measurements are insufficient for the task.

What is your definition of “measure”?

I would say the values on the thermometers that are recorded at each weather station are measurements. I would say that the lat/long/time of the satellite temperature values are measurements. The global average temperature that is put out is a calculation.

bdgwx
Reply to  whatlanguageisthis
January 4, 2022 2:20 pm

Any temperature recorded by a weather station involves a calculation. The RTD instrumentation in common use today requires a complex model using thermodynamic, electrical, and material science knowledge to map the resistance values to meaningful temperature values. Nevermind that measuring resistance itself isn’t exactly as simple as reading a yardstick. It too involves a model and calculations. And many temperature values recorded by modern stations are themselves actually calculated averages over a 1 minute period using 6 samples. So if averaging and calculations preclude something from being a “measurement” then I doubt any temperature reading would qualify.
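
As an illustration of the resistance-to-temperature step described above, here is a sketch using the standard Callendar-Van Dusen relation for a Pt100 RTD above 0 C with IEC 60751 coefficients; this is a generic textbook model, not necessarily the calibration any particular weather network applies:

```python
# Sketch of the resistance-to-temperature step described above, using the standard
# Callendar-Van Dusen relation for a Pt100 RTD above 0 C (IEC 60751 coefficients).
# Generic textbook model; not necessarily the calibration of any particular network.
import math

R0 = 100.0     # ohms at 0 C for a Pt100
A = 3.9083e-3  # 1/C
B = -5.775e-7  # 1/C^2

def pt100_temperature(resistance_ohms):
    """Invert R(T) = R0*(1 + A*T + B*T^2) for T >= 0 C via the quadratic formula."""
    a, b, c = R0 * B, R0 * A, R0 - resistance_ohms
    return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

print(f"{pt100_temperature(107.79):.2f} C")  # roughly 20 C
```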

whatlanguageisthis
Reply to  bdgwx
January 5, 2022 6:28 am

A weather station taking 8640 measurements a day to generate 1440 values a day seems pretty good – for that location. Using that value to guess a temperature 100+ miles away isn’t valuable. Then only using 2 of those 1440 values (min and max) throws out a lot of data.

bdgwx
Reply to  whatlanguageisthis
January 5, 2022 7:57 am

There are two points I want to address here.

First, the data says that it is valuable to use observations up to 1200 km away to estimate the value of grid cells that have no observation themselves. 1200 km is the point where correlation coefficients drop to 0.50 and 0.33 for high/mid and low latitudes respectively [1]. The alternative is to leave unobserved grid cells unfilled in this step. What that means is that in a later step you must effectively assume the unfilled grid cells inherit the average of the filled grid cells to compute the global average. The average distance between a randomly selected unfilled cell and a set of randomly selected filled cells is 10,000 km. What this means is that when contrarian bloggers say “infilling is bad” they are effectively saying that local weighted interpolation is bad but that global weighted interpolation is good, which is obviously absurd. This is a complex topic so if something isn’t making sense with what I just said then please ask questions. Understand that I don’t have all of the answers and I’m certainly no expert, but I’ll do the best I can.

Second, the surface record only contains Tmin and Tmax in many cases, especially prior to 1980 when modern automated systems began taking over. That means any analysis prior to 1980’ish is limited to only Tmin and Tmax. It’s unfortunate, but there is nothing scientists can do about it now. The decision by most traditional surface datasets is to use Tmin and Tmax throughout the whole period to keep the analysis methodology as consistent as possible. That doesn’t mean there aren’t other datasets that use wildly different methodologies. For example, ERA uses 4D-VAR and processes horizontal fields at 12 minute timesteps on a ~500,000 cell grid mesh using orders of magnitude more observations including those from stations, ships, buoys, weather balloons, aircraft, surface radiometers, space radiometers, and the list goes on and on. It turns out that the warming rate from ERA is spot on with those from BEST, HadCRUT, GISTEMP, etc., suggesting that the (Tmin+Tmax)/2 method is indistinguishable from the full integration method.
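
For readers unfamiliar with what the "local weighted interpolation" described above means in practice, here is a toy inverse-distance-weighting sketch; it is illustrative only and is not the actual GISTEMP, BEST, or HadCRUT algorithm:

```python
# Toy sketch of "local weighted" infilling: estimate an unobserved grid cell from
# nearby observed cells, weighting nearer ones more heavily (inverse distance).
# Illustrative only; this is not the GISTEMP, BEST, or HadCRUT algorithm.
def infill(observed, max_km=1200.0):
    """observed: list of (distance_km, anomaly). Returns a weighted estimate, or None."""
    nearby = [(d, a) for d, a in observed if d <= max_km]
    if not nearby:
        return None  # leave the cell unfilled rather than fall back to a global mean
    weights = [1.0 / max(d, 1.0) for d, _ in nearby]
    return sum(w * a for w, (_, a) in zip(weights, nearby)) / sum(weights)

# Example: three observed anomalies 100, 400 and 900 km from the empty cell
print(infill([(100.0, 0.30), (400.0, 0.10), (900.0, -0.20)]))
```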

whatlanguageisthis
Reply to  bdgwx
January 5, 2022 10:21 am

So the people putting out the global average value say up to 1200 km is OK, but are using 10,000 km grids? Then they agree with me that the measurements available are insufficient for the task at hand.

I disagree with their assumption that 1200 km away has value. That is like saying you can use the same temperature for Cleveland and Atlanta or for Oklahoma City and Denver and it be relevant.

As for the records only having min and max, I submit this supports my position that the measurement data available is insufficient for the task of defining a global average.

bdgwx
Reply to  whatlanguageisthis
January 5, 2022 2:41 pm

whatlanguageisthis: “So the people putting out the global average value say up to 1200 km is OK, but are using 10,000 km grids?”

Nobody is “putting out the global average value up to 1200 km”.

Nobody is using a “10,000 km grid”.

1200 km is the distance at which the correlation coefficient between two points drops below 0.5 and 0.3 for high/mid and low latitudes.

10,000 km is the average distance between any two randomly selected points on Earth.

whatlanguageisthis said: “I disagree with their assumption that 1200 km away has value.”

Then you need to do two things.

1) Demonstrate with actual data that the correlation coefficient of points less than or equal to 1200 km is 0 or close to zero.

2) Demonstrate with actual data that the correlation coefficient of points 10,000km away is greater than those that are 1200 km away.

whatlanguageisthis said: “That is like saying you can use the same temperature for Cleveland and Atlanta or for Oklahoma City and Denver and it be relevant.”

That’s right. And the greater the distance is between sites the greater the difference and thus the greater the error.

For example, the temperature in Norman, OK will be closer to Oklahoma City, OK than the temperature at Cleveland. And an instrument that is within a few inches of the official instrument of Oklahoma City, OK will be closer still.

This is why infilling with a local weighted strategy is better than infilling with a global average weighted strategy. BEST, GISTEMP, and HadCRUTv5 use the former. NOAA and HadCRUTv4 use the latter. GISTEMP’s local weighted strategy is simple whereas BEST and HadCRUTv5 use a complex kriging-like method. ERA takes it an order of magnitude further and uses the 4D-VAR method.

whatlanguageisthis said: “As for the records only having min and max, I submit this supports my position that the measurement data available is insufficient for the task of defining a global average.”

The data does not support that hypothesis though. Through the overlap period of differing methods (1979-present) the warming is +0.19 C/decade either way regardless of whether the (Tmin+Tmax)/2 or the full integration method is used. The data says (Tmin+Tmax)/2 is a sufficient method.

whatlanguageisthis
Reply to  bdgwx
January 6, 2022 6:04 am

That is good that they aren’t using a 10,000 km grid. I read the average distance between filled and unfilled as 10,000 km. Maybe think about how that is worded.

I am not the one demanding everyone else change their lifestyle, increase their costs, and lower their standard of living based on a fraction of a degree C, so I don’t feel compelled to prove anything. If you want me to accept the changes, prove your data.

Since all the temperature records are built from the same set of data, I would expect excellent agreement. Even with that, they are not exactly aligned, so there is clearly some difference of opinion. I also have issue with the adjustments that have been made to the measured data to do things like ‘hide the decline’ and such. How trustworthy is the data when it has been processed to hide inconvenient variations?

All this comes together for a value that I am supposed to accept as meaningful that has significance to 0.001 from a data set that has significance in the historical record to 0.5? No.

bdgwx
Reply to  whatlanguageisthis
January 6, 2022 6:43 am

The average distance between an unfilled cell and a filled cell would be about 10,000 km. Well, technically it is higher than that because as it turns out the unfilled cells tend to cluster together.

I think the thing that may spark the epiphany here is that when bloggers say “infilling is bad” they are speaking only of the local weighted infilling techniques and by inference, misunderstanding, and ignorance of how things are done imply that the de facto global weighted infilling technique is therefore good. Remember, averaging only filled cells is not a global mean because the Earth is actually 510e12 m2; not some subset of it. So when a partial sphere dataset (like HadCRUTv4) publishes their grid mesh average we all implicitly (whether bloggers realized it or not) have to assume (infill) those unfilled cells with the average of the filled cells to upscale to the global domain. That means when you use HadCRUTv4 you implicitly infill using cells that are on average 10,000 km away. Here is what the HadCRUTv4 grid looks like. And here is what the GISTEMP grid looks like.

Not all datasets use the same set of data. They are broadly categorized as surface, radiosonde, satellite, and reanalysis. These 4 categories all use wildly different sets of data.

The adjustments are applied to correct for biases caused by station moves, instrument changes, time-of-observation changes, SST bucket cooling, SST engine heating, satellite drift, satellite orbital decay, etc. It is also interesting to note that the net effect of all adjustments on the surface datasets is to reduce the warming trend, not increase it as is often claimed by bloggers. “Hide the decline” is in reference to the tree ring divergence problem and has no relevance to adjustments.

No global mean temperature is significant to 0.001 C. UAH claims an uncertainty of 0.2 C and 0.15 C for monthly and annual means.

Carlo, Monte
Reply to  whatlanguageisthis
January 5, 2022 10:13 am

And in fact is just the median of the data, not the mean.

Reply to  Tom
January 4, 2022 8:32 am

Tom

I think you have a comprehension issue.

Reply to  PCman999
January 4, 2022 8:44 am

“. . . the anomaly is 0.134°C not F and no one is saying that there is a steady increase of that every month.”

Actually, Dr. Spencer’s statement is that the linear warming trend for global LAT since January, 1979 remains at +0.14 C/decade . . . that would be a steady increase of about +0.0012 C every month averaged over 516 consecutive months.

Yes, the article does bear re-reading.

Clyde Spencer
Reply to  Tom
January 4, 2022 9:14 am

I imagine that the probability of 0.134 deg C per month for 100 months in a row is about the same magnitude of the probability of pigs growing wings and flying.

Reply to  Clyde Spencer
January 4, 2022 11:08 am

Just so.

There is NO statement in the above article that there is a rise of “0.134 deg C per month for 100 months in a row”.

Instead, there is the statement that, based on the UAH data set for global LAT, “The average annual anomaly for 2021 was +0.134 deg. C above the 30-year mean (1991-2020) . . .” {my underlining emphasis added}

January 4, 2022 5:21 am

Here are the ten warmest years according to UAH

 1 2016 0.388 
 2 2020 0.357 
 3 1998 0.348 
 4 2019 0.303 
 5 2017 0.264 
 6 2010 0.192 
 7 2015 0.135 
 8 2021 0.134 
 9 2018 0.088
10 2002 0.080 

I’d say 2021 was statistically tied with 2015 for equal 7th warmest. It would have been equal to 3 decimal places if there hadn’t been a slight adjustment to March’s value this month.

John Tillman
Reply to  Bellman
January 4, 2022 6:03 am

Earth has been cooling since February 2016, so next month will be six years. When does the downtrend become significant?

Just a tiny fraction (0.04) of a degree separates 2016 from 1998, so there has been no statistically significant warming in this century. And what did occur was thanks to Super El Niño 2015-16.

Reply to  John Tillman
January 4, 2022 6:35 am

And it’s been warming since 2018, and since any date prior to 2015.

Any trend becomes significant when the properly calculated confidence interval is such that it doesn’t include a zero trend. Given that your cooling since February 2016 has a confidence interval of around 0.8°C / decade, I think we will be waiting a long time before we see the downtrend become significant.
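
For context, the kind of calculation being referred to looks roughly like the sketch below: an ordinary least-squares slope with an approximate 95% interval of two standard errors. Real analyses of monthly anomalies also correct for autocorrelation, which widens the interval further; the data used here are synthetic.

```python
# Rough sketch of a trend confidence interval: OLS slope +/- ~2 standard errors.
# Real analyses of monthly anomalies also inflate the uncertainty for autocorrelation,
# which widens the interval further. The data here are synthetic, for illustration.
import numpy as np

def trend_with_ci(decimal_years, anomalies):
    x, y = np.asarray(decimal_years), np.asarray(anomalies)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    se = np.sqrt(np.sum(resid**2) / (len(x) - 2) / np.sum((x - x.mean())**2))
    return slope * 10.0, 2.0 * se * 10.0  # deg C/decade, approx 95% half-width

rng = np.random.default_rng(0)
x = 2016.0 + np.arange(72) / 12.0        # 6 years of monthly points
y = 0.2 + rng.normal(0.0, 0.15, x.size)  # flat "truth" plus noise
trend, half_width = trend_with_ci(x, y)
print(f"trend = {trend:+.2f} +/- {half_width:.2f} C/decade (approx 95%)")
```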

Clyde Spencer
Reply to  Bellman
January 4, 2022 9:19 am

Any trend becomes significant when …

Pray tell, what is the confidence interval for the annual average temperatures you provided?

Reply to  Clyde Spencer
January 4, 2022 9:41 am

You’d have to ask Dr Spencer that. (Pause, yet again, to reflect on the irony of UAH constantly being declared the most reliable data set, then have everybody complain that it doesn’t include confidence intervals).

But it’s irrelevant to the question of the confidence intervals for the trend, which is what I was talking about. The question was when we were going to see a statistically significant cooling period.

bdgwx
Reply to  Bellman
January 4, 2022 11:13 am

I also find it ironic that UAH does not publish the materials needed to independently verify their results even though other players in the space like GISS/NOAA provide everything you need including the source code for the adjustments and spatial averaging to produce the dataset on your own machine in less than an hour. And yet somehow UAH is considered to be the pinnacle of reliability while GISS gets bottom feeder status.

Clyde Spencer
Reply to  Bellman
January 4, 2022 12:58 pm

No, it isn’t irrelevant. Without knowing the uncertainty of the individual data points, one cannot calculate the uncertainty envelope for the trend.

I think you are confusing that with determining the probability that a regression line has no trend.

Clyde Spencer
Reply to  Clyde Spencer
January 4, 2022 1:10 pm
Reply to  Clyde Spencer
January 4, 2022 3:41 pm

I’m sure if you make up implausibly large uncertainties, ones much bigger than the deviation of the entire data set, and assume that somehow these uncertainties don’t show up in any of the actual monthly data, then maybe you need a bigger uncertainty in the trend.

But this is such a stretch. The claim is that it’s plausible that temperatures were actually decreasing over the last 40 years, yet somehow a succession of errors produced a systematic bias in the trend showing half a degree of warming. If this is plausible then you should stop using satellite data and stop publishing all these Monckton pause articles. If you cannot know that the warming observed over 40 years is real, how can you possibly have any faith in a 7 year pause in the same data?

Carlo, Monte
Reply to  Bellman
January 4, 2022 6:08 pm

I’m sure if you make up implausibly large uncertainties, ones much bigger than the deviation of the entire data set

Once again you demonstrate your fundamental lack of understanding about the subject you now love to expound endlessly upon.

John Tillman
Reply to  Bellman
January 4, 2022 11:50 am

It has not been warming since 2018 or 2015. The lowest monthly anomalies since 2014 were this year.

Reply to  Bellman
January 4, 2022 12:04 pm

Nonsense. The earth is in a 2.5 million year old ice age.

Reply to  Bellman
January 4, 2022 6:21 am

Any statisticians and experts on error out there who can go over the details to see if 3 decimal places is really applicable? I appreciate how the satellites are measuring almost the whole planet in general, and a wide swath at any one data point, much better than a single ground based station measuring a tiny fixed point representing a huge area subject to weather and outside interference.

Reply to  PCman999
January 4, 2022 6:38 am

I would have preferred 2 decimal places, but you need to go to 3 in order to claim that 2021 was cooler than 2015. Hence my comment that 2021 and 2015 are really tied for 7th warmest.

As always nobody takes Dr Roy Spencer to task for claiming 2021 was cooler than 2015, without mentioning it was only cooler by a thousandth of a degree.

Clyde Spencer
Reply to  Bellman
January 4, 2022 9:22 am

I have more than once complained that Roy doesn’t append an uncertainty range to his reported nominal values.

bdgwx
Reply to  PCman999
January 4, 2022 6:59 am

Christy et al. 2003 say the uncertainty on annual mean temperatures is ±0.15 C. They also say that even though the grid has 9504 cells the degrees of freedom (DOF) isn’t 9504. It is a mere 26 due primarily to the sampling methodology of the satellites. In other words, their global mean temperature is effectively using only 26 spatial samples, at least in regard to the uncertainty of that value. This is primarily why their published uncertainties are higher than those published by surface based datasets like BEST and GISTEMP, which report uncertainties on the order of ±0.05 C or less.
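
A simple way to see why the effective degrees of freedom matter more than the 9504 grid cells: the standard error of a spatial mean scales with the effective sample size, not the nominal one. The spread value in this sketch is illustrative only and is not taken from Christy et al. 2003.

```python
# Why effective degrees of freedom matter: the standard error of a mean scales with
# the effective sample size, not the nominal cell count. The 0.5 C spread used here
# is illustrative only; it is not the error model of Christy et al. 2003.
import math

def standard_error_of_mean(spread_c, n_effective):
    return spread_c / math.sqrt(n_effective)

spread = 0.5  # illustrative cell-to-cell spread, deg C
for n in (9504, 26):
    print(f"N_eff = {n:5d} -> standard error ~ {standard_error_of_mean(spread, n):.3f} C")
```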

al in kansas
Reply to  bdgwx
January 4, 2022 8:05 am

Given that the surface data sets were recorded historically for weather purposes to the nearest whole °F, I would put their minimum uncertainty at ±0.3 C. Any less is false precision nonsense. Also, they assume a gaussian error distribution of the data set with no biases, and no traceable calibration records exist for most if not all of the thermometers in question. The surface data sets show nothing significant to support CAGW. This type of statistical abuse would get you flunked out of any science class I had in college or fired from any job in statistical process quality control in a heartbeat.

bdgwx
Reply to  al in kansas
January 4, 2022 11:08 am

AIK said: “Given that the surface data sets were recorded historically for weather purposes in the nearest whole °F,”

Some stations reported in F. Remember not everyone uses the Fahrenheit scale.

AIK said: “I would put their minimum uncertainty at ±0.3 C.”

I’d actually put the uncertainty of LiG observations closer to ±1.0 C.

AIK said: “Also, they assume a gaussian error distribution of the data set with no biases, and no tracible calibration records exist for most if not all of the thermometers in question.”

Gaussian…correct. Bias…incorrect. Calibration…correct. It turns out that it doesn’t matter what the distribution is. It could be uniform, triangular, gaussian, etc. and the uncertainty of the spatial and temporal average of a bunch of observations is lower than the individual constituent observations. And a lot of the systematic bias in the observations themselves will cancel out with anomaly analysis. Don’t hear what I’m not saying. I’m not saying observational networks were ideal in the past. They aren’t even ideal today. But we can’t go back and change the past. All we do is analyze the data given to us and deal with the problems just like they do in every other discipline of science.

AIK said: “The surface data sets show nothing significant to support CAGW.”

Agreed. Though I never get a straight answer on what CAGW actually is. In fact, I don’t think I’ve ever been given a straight up definition. But with some leg pulling I’ve been able to deduce that it is probably around 6C per 2xCO2 or higher. The abundance of evidence, including surface data, does not support this amount of warming.

AIK said: “This type of statistical abuse would get you flunked out of any science class I had in college or fired from any job in statistical process quality control in a heart beat.”

Which widespread statistical abuses are you referring to?

Reply to  bdgwx
January 4, 2022 3:12 pm

It could be uniform, triangular, gaussian, etc. and the uncertainty of the spatial and temporal average of a bunch of observations is lower than the individual constituent observations.”

Thank you. This is the basic concept that is denied here.

Which widespread statistical abuses are you referring to?”

You’ll get no cogent answer in these fora.
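
For anyone who wants to see the averaging concept quoted above in action, here is a quick Monte Carlo sketch with invented numbers: the random part of the error shrinks roughly as 1/sqrt(N) whether the distribution is gaussian or uniform, while a shared systematic bias does not shrink at all.

```python
# Quick Monte Carlo sketch (invented numbers): averaging N noisy readings shrinks the
# random error roughly as 1/sqrt(N) whether the noise is gaussian or uniform, but a
# shared systematic bias carries through to the mean unchanged.
import numpy as np

rng = np.random.default_rng(42)
true_value, bias, n_obs, n_trials = 15.0, 0.3, 1000, 2000

for dist in ("gaussian", "uniform"):
    if dist == "gaussian":
        noise = rng.normal(0.0, 1.0, (n_trials, n_obs))
    else:
        noise = rng.uniform(-1.7, 1.7, (n_trials, n_obs))  # roughly the same spread
    means = (true_value + bias + noise).mean(axis=1)
    print(f"{dist:8s}: spread of the mean = {means.std():.3f} C, "
          f"offset of the mean = {means.mean() - true_value:+.3f} C")
```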

Carlo, Monte
Reply to  al in kansas
January 4, 2022 12:42 pm

This type of statistical abuse would get you flunked out of any science class I had in college or fired from any job in statistical process quality control in a heart beat.

He has a special talent here…

Carlo, Monte
Reply to  bdgwx
January 4, 2022 9:25 am

These tiny numbers were and remain BS, bozo-x.

bdgwx
Reply to  Carlo, Monte
January 4, 2022 9:43 am

My sources are Christy et al. 2003, Rohde et al. 2013, and Lenssen et al. 2019. Everyone is welcome to verify the numbers I provided with these publications.

Carlo, Monte
Reply to  bdgwx
January 4, 2022 6:09 pm

Well pin a bright shiny gold star on your chest.

Reply to  PCman999
January 4, 2022 8:06 am

Only Dr. Spencer can truly answer this. We know that it does not measure temperature directly but instead uses, for lack of a better word, a “proxy” to determine an average temperature of a slice of the atmosphere.

The resolution of the proxy will determine the resolution of the metric derived from it. That resolution will determine the basic error and uncertainty values.

Richard Page
Reply to  Bellman
January 4, 2022 6:30 am

Statistics, when used this way, are completely meaningless and we both know it. What is the statistical error range of each anomaly, then what is the actual (non mathematically derived) error range for the temperature dataset?

Reply to  Richard Page
January 4, 2022 6:50 am

Never claimed it was meaningful, just a bit of fun to round off the year. If you want meaningful statistics look at the overall trend or long term averages.

Again though, it’s interesting to see how many jump on me for not showing error ranges, yet make no objection to Dr Spencer claiming it was cooler than 2015, without even saying by how much.

ResourceGuy
Reply to  Bellman
January 4, 2022 6:34 am

Try looking at turning points in curves and polynomials sometime.

ResourceGuy
Reply to  Bellman
January 4, 2022 10:37 am

I deal with reporters a lot and you seem to think a lot like them when looking at information. Are you a history or English major?

Tom
January 4, 2022 6:30 am

Met Office Hadley Centre observations datasets

This is the Central England Temperature Record, which is, I understand, the longest instrumental temperature record that we have. It is a very good thing that the temperature did not stay where it was in 1700. We would probably not like it much and the world would be a different place. I can’t say how much different, but significantly, I believe. It is important to monitor this, and we’d be better off if we had better temperature data going back in time. If we did, it might even provide counter arguments against the alarmists.

[image: CET Temperature Record]

Reply to  Tom
January 4, 2022 6:57 am

It’s interesting (or is it better to say weird) that temps jumped up a degree in Central England during the 80’s to the end of the 90’s and then plateaued during the last 2 decades. Was central England paved over and developed during that time? Certainly wasn’t CO2 to blame. Maybe reductions in sulphates and soot from coal burning phasing out? Environmentalists causing global warming?

Clyde Spencer
Reply to  PCman999
January 4, 2022 9:26 am

Environmentalists causing global warming?

If so, I see it as a win-win! Cleaner air, and warmer.

Richard M
Reply to  PCman999
January 7, 2022 7:06 am

Not weird at all. The PDO went positive in 1977 and remained positive for all but 8 years which just happens to be the same time as the little dip in the early 2000s.

bdgwx
Reply to  Richard M
January 7, 2022 7:27 am

You can download the PDO data here. From 1977 to 2021 the PDO average is -0.21.

Richard M
Reply to  bdgwx
January 8, 2022 8:59 pm

Or here.

https://www.data.jma.go.jp/gmd/kaiyou/data/db/climate/pdo/pdo.txt

I use the term PDO as a generic reference to the air circulation patterns. ENSO also interferes with the signal so it is not a precise match. The phase of the AMO matters as well.

Reply to  Tom
January 4, 2022 9:17 am

2.5C warming since 1700….how do their drought, flood, life expectancy, food production, and average family income stats look over that same period ?….you know, just in case there is a correlation….

bdgwx
January 4, 2022 6:31 am

Here is the most recent multi-dataset composite comparing UAH with their peers.

[image]

Tom
Reply to  bdgwx
January 4, 2022 6:37 am

Seems to be a trend. I wonder if you could model that:-)

bdgwx
Reply to  Tom
January 4, 2022 7:20 am

I certainly couldn’t have modeled that. But those smarter than I can.

[image]

Clyde Spencer
Reply to  bdgwx
January 4, 2022 9:32 am

I’m not sure just how smart they are considering how badly the extreme values are modeled! Except prior to 2000, which may reflect tuning to historical data. Also, the last 20 years the CMIP5 estimates are consistently above the regression line, in contrast to earlier data being mostly below the regression line.

What does the regression line through the CMIP5 data look like?

bdgwx
Reply to  Clyde Spencer
January 4, 2022 10:55 am

CS said: “What does the regression line through the CMIP5 data look like?”

It is +0.232 C/decade. I show that in the legend. This is 0.045 C/decade higher than the 8 member composite of +0.187 C/decade. What that tells us is that CMIP5-RCP45 predicted a warming rate that is almost 0.05 C/decade more than was observed by the 8 member composite over this period.

Clyde Spencer
Reply to  bdgwx
January 4, 2022 1:03 pm

It is +0.232 C/decade. I show that in the legend.

That is not shown explicitly! You may know that is what you calculated, but you were sloppy in documenting it.

bdgwx
Reply to  Clyde Spencer
January 4, 2022 1:31 pm

Do you want me to add a blue dotted line to the graph similar to the red dotted line? I’m happy to oblige.

Clyde Spencer
Reply to  bdgwx
January 5, 2022 1:17 pm

That is basically what I asked for. If the data deserve being graphed, then all the pertinent data should be handled similarly.

bdgwx
Reply to  Clyde Spencer
January 6, 2022 5:24 am

Here it is. Full disclosure…for this version I artificially adjusted the CMIP5 timeseries up by 0.1 C relative to the others to get the trendlines lined up at the start to make it easier to see how the two timeseries diverge with time. Yes, I know someone could make the argument that if I’m going to artificially adjust up then I should be equally compelled to artificially adjust down as well. I get it. I want to make sure this is completely transparent so no one accuses me of deception here.
[image]

Ted
Reply to  Tom
January 4, 2022 7:57 am

Many groups developed models with the assumption that CO2 was the main cause of warming. Based on those, the IPCC said that with the increased CO2 measured in the atmosphere, the best estimate for temperature change would show a slope of +0.4, with a likely range of +0.24 to +0.64. Every one of the data sets is below that lower boundary, making it highly likely the assumption was incorrect.

Tom
Reply to  Ted
January 4, 2022 8:33 am

It was a rhetorical question Ted.

Reply to  Tom
January 4, 2022 9:37 am

But really, an upward temperature trend in a system that radiates at T^4… a 1% temperature increase results in about 4% more heat emitted… very rapidly correcting itself. In our case “rapidly” is the amount of time it takes the Sea Surface Temp to warm by a degree on average, about 110 years, assuming we’ve reached the peak on Bob Tisdale’s graph…

BD21649D-C938-4C62-B5E4-1FC950F8D35E.png
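
A minimal sketch of the T^4 arithmetic behind this point (Python; the 288 K baseline is an assumed illustrative value, not taken from the thread):

    # Illustration of the Stefan-Boltzmann T^4 sensitivity: a 1% temperature
    # increase yields roughly 4% more emitted flux.
    SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

    def emitted_flux(t_kelvin):
        """Blackbody flux in W/m^2 for an emitter at temperature t_kelvin."""
        return SIGMA * t_kelvin ** 4

    T0 = 288.0       # assumed baseline temperature, K
    T1 = T0 * 1.01   # a 1% temperature increase

    increase = emitted_flux(T1) / emitted_flux(T0) - 1.0
    print(f"1% warmer -> {increase * 100:.2f}% more emitted flux")  # ~4.06%
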
Reply to  DMacKenzie
January 4, 2022 10:27 am

Or, per NOAA, maybe 120 years to raise SST 1.5 C; note the discontinuity at WW2, when ships finally quit sinking in the North Atlantic with their SST records on board…

1EBA0A6A-26BE-4F78-9E6A-DF7E7B9A13CC.png
ResourceGuy
Reply to  Tom
January 4, 2022 11:02 am
Reply to  bdgwx
January 4, 2022 7:00 am

Sorry B. I’m missing the meaning of the y trend and its R^2 in the plot. The values on the right side legend make sense.

bdgwx
Reply to  bigoilbob
January 4, 2022 7:07 am

It’s an Excel thing. Because the values are monthly the trendline equation insists on doing C/month. There is nothing I can do about it. You need to multiply the 0.0016 figure by 120 to get C/decade. Note that the slope is actually between 0.0015 and 0.0016 so there is some rounding error to consider.
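
A one-liner version of that conversion (the unrounded slope here is an assumed value, chosen to be consistent with the +0.187 C/decade composite trend mentioned above):

    # Convert an Excel-style monthly trendline slope to C/decade.
    slope_per_month = 0.00156          # assumed unrounded slope, C/month (Excel displays 0.0016)
    months_per_decade = 12 * 10
    print(f"{slope_per_month * months_per_decade:.3f} C/decade")   # ~0.187 C/decade
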

Reply to  bdgwx
January 4, 2022 7:15 am

Thx. All perfectly understandable. I convert to annual fractions to avoid this.

I am accustomed to using petroleum biz software, with 2 folks in different offices running economics on hundreds of projects together, independently, and expecting identical summaries. Lots of mutual hand-wringing on every setup parameter to avoid having $ values a few ppm different…

Reply to  bdgwx
January 4, 2022 7:02 am

It’s weird how the simple trend line fits the data even though the data itself shows flat step-wise behaviour. What’s acting on climate in steps, El Niño? But that is more of a symptom than a cause. What is causing the El Niños?

Ragnaar
Reply to  bdgwx
January 6, 2022 1:51 pm

Where is HadCRUT?

bdgwx
Reply to  Ragnaar
January 6, 2022 2:27 pm

It is labeled HC5.

Tom
January 4, 2022 6:55 am

Reading some of the comments here, Matthew 7:6 comes to mind.

Carlo, Monte
Reply to  Tom
January 4, 2022 9:29 am

You seem to have skipped over Matt. 7:1-5.

John Tillman
Reply to  Carlo, Monte
January 4, 2022 11:56 am

Matthew 7
King James Version
Judge not, that ye be not judged.
For with what judgment ye judge, ye shall be judged: and with what measure ye mete, it shall be measured to you again.
And why beholdest thou the mote that is in thy brother’s eye, but considerest not the beam that is in thine own eye?
Or how wilt thou say to thy brother, Let me pull out the mote out of thine eye; and, behold, a beam is in thine own eye?
Thou hypocrite, first cast out the beam out of thine own eye; and then shalt thou see clearly to cast out the mote out of thy brother’s eye.
Give not that which is holy unto the dogs, neither cast ye your pearls before swine, lest they trample them under their feet, and turn again and rend you.

January 4, 2022 7:05 am

Is the 30 year average rolling or fixed?

Ted
Reply to  Jeff in Calgary
January 4, 2022 9:57 am

Slowly rolling. They used to use a time frame of 1980 to 2010, so probably will be fixed for the next nine years then updated again.

Reply to  Ted
January 4, 2022 3:48 pm

Hmm, makes it a little difficult to compare data over time.

guest
January 4, 2022 7:08 am

Can lower tropospheric temperatures really be measured to a precision of 0.01 or even 0.001 Deg. C?

Tom
Reply to  guest
January 4, 2022 7:28 am

No, of course not, but we’re talking about an average. There is the “Law of Large Numbers”.

Phil R
Reply to  Tom
January 4, 2022 8:02 am

As a non-statistician, just curious if you could explain how the Law of Large Numbers is applicable here?

Tom
Reply to  Phil R
January 4, 2022 9:28 am

The law of large numbers, in probability and statistics, states that as a sample size grows, its mean gets closer to the average of the whole population.

Pretty simple really. While we can never know the exact average temperature of the earth’s surface, the more we sample the closer we get.
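
A small sketch of the law of large numbers as stated here, using purely synthetic “temperatures” (not actual observations):

    import random

    random.seed(42)
    # Hypothetical "population": 100,000 synthetic temperature values (deg C).
    population = [random.gauss(mu=14.0, sigma=10.0) for _ in range(100_000)]
    true_mean = sum(population) / len(population)

    # As the sample size grows, the sample mean tends toward the population mean.
    for n in (10, 100, 1_000, 10_000):
        sample = random.sample(population, n)
        sample_mean = sum(sample) / n
        print(f"n={n:>6}: sample mean = {sample_mean:6.2f} C, "
              f"error = {sample_mean - true_mean:+.3f} C")
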

Reply to  Tom
January 5, 2022 11:16 am

You need to define what you consider to be a random sample, whether it is IID, what the sample size is, and what method you used to obtain random samples from your data set. Please note that each station IS NOT an IID sample because it is quite likely not a representative sample of the entire data set. Why? Because each sample should contain temps from each hemisphere, and from ocean versus land.

Lastly, what is the standard deviation of the “average” temperature of the data set? An average is statistically meaningless without a statement of the standard deviation of that data set. 49 and 51 give a mean of 50 and have a small SD. Yet the average of 0 and 100 also gives a mean of 50, but the SD is much, much larger. The 0 and 100 are probably closer to the range of temperatures on the earth each day than 49 and 51.

Clyde Spencer
Reply to  Tom
January 4, 2022 9:42 am

The Law of Large Numbers works fine with exact numbers and things such as the probabilities of coin tosses. However, it ignores the propagation of error in real world data with finite precision.

Tom
Reply to  Clyde Spencer
January 4, 2022 10:38 am

Propagation of error in temperature measurements? How does that work?

bdgwx
Reply to  Tom
January 4, 2022 12:08 pm

All observations have uncertainty. In a nutshell the procedure for measuring the global mean temperature is as follows. The first step is to aggregate these observations to produce an anomaly baseline for each station. The individual observation uncertainties propagate into the anomaly baseline. The second step is to form the anomalies. The individual observation uncertainties combine with the anomaly baseline uncertainty. The third step is to form the station anomalies into a grid mesh. This process propagates the anomaly uncertainties and adds a sampling (both spatial and temporal) uncertainty to each grid cell value. Finally the trivial average of the grid cells is computed, with the uncertainty of each grid cell propagating into the final average. Note that I’m leaving out a lot of details and that some types of datasets (like reanalysis) work on entirely different principles. But this gives you a rough high-level overview of what is going on with most surface datasets. The best reference for the propagation of uncertainty is the Guide to the Expression of Uncertainty in Measurement (GUM) available on the NIST website here.
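
A highly simplified sketch of the anomaly-and-grid procedure described above (Python, entirely synthetic station records; a two-year window stands in for a 30-year normal, latitude bands stand in for grid cells; this is not the actual code of any dataset):

    import math
    from collections import defaultdict
    from statistics import mean

    # Hypothetical station records: (station_id, lat_band, year, month, temp_C)
    records = [
        ("stnA", 40, 1995, 1, 2.1), ("stnA", 40, 1996, 1, 1.7), ("stnA", 40, 2021, 1, 3.0),
        ("stnB", -10, 1995, 1, 26.4), ("stnB", -10, 1996, 1, 26.9), ("stnB", -10, 2021, 1, 27.5),
    ]

    # Step 1: per-station, per-calendar-month baseline (1995-1996 stands in for a 30-year normal).
    baseline = defaultdict(list)
    for stn, lat, yr, mo, t in records:
        if 1995 <= yr <= 1996:
            baseline[(stn, mo)].append(t)
    baseline = {k: mean(v) for k, v in baseline.items()}

    # Step 2: anomalies = observation minus that station's baseline.
    # Step 3: average the anomalies into latitude-band "grid cells".
    cells = defaultdict(list)
    for stn, lat, yr, mo, t in records:
        if yr == 2021:
            cells[lat].append(t - baseline[(stn, mo)])

    # Step 4: area-weighted (cos latitude) average of the cells.
    wsum = sum(math.cos(math.radians(lat)) * mean(v) for lat, v in cells.items())
    w = sum(math.cos(math.radians(lat)) for lat in cells)
    print(f"Jan 2021 'global' anomaly (toy example): {wsum / w:+.2f} C")
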

Reply to  bdgwx
January 5, 2022 11:23 am

Tell how you can combine annual temps read and recorded from LIG thermometers which have a minimum of ±0.5° resolution and end up with an uncertainty 10 to 100 times smaller.

Your words are fine but show some math how this is accomplished.

bdgwx
Reply to  Jim Gorman
January 5, 2022 12:43 pm

Lenssen et al. 2019 and Rohde et al. 2013

The fundamental concept in play, which is in compliance with statistical procedures defined in your own sources (Taylor and the GUM) and confirmed by NIST, is that when you combine measurements via an average the combined result will always have an uncertainty equal to or less than the uncertainty of the individual measurements themselves. You can “nuh-uh” this fact all you want and it will always still be correct. I’m under no illusion that I’ll ever be able to convince you of this fact. I post for the benefit of others.
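
A hedged numerical sketch of the two sides of this exchange: for independent, zero-mean errors the spread of an average does shrink roughly as 1/sqrt(N), while a shared systematic bias does not average away. All numbers are synthetic:

    import random
    from statistics import mean, stdev

    random.seed(1)
    TRUE_VALUE = 10.0
    N_READINGS = 100     # readings combined into one average
    N_TRIALS = 2000      # repeat the experiment to see the spread of the averages

    def trial(bias=0.0):
        readings = [TRUE_VALUE + bias + random.gauss(0, 0.5) for _ in range(N_READINGS)]
        return mean(readings)

    random_only = [trial() for _ in range(N_TRIALS)]
    with_bias = [trial(bias=0.3) for _ in range(N_TRIALS)]

    print("spread of a single reading        ~ 0.5")
    print(f"spread of the 100-reading mean    ~ {stdev(random_only):.3f}  (~0.5/sqrt(100) = 0.05)")
    print(f"mean error with a 0.3 shared bias: {mean(with_bias) - TRUE_VALUE:+.3f}  (bias does not average away)")
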

Carlo, Monte
Reply to  bdgwx
January 5, 2022 2:55 pm

A deft avoidance of the issue, worthy of Sir Robin.

bdgwx
Reply to  Carlo, Monte
January 5, 2022 5:20 pm

The request was for the math. I provided it. I literally could not have responded more directly to the issue than that.

Ragnaar
Reply to  bdgwx
January 6, 2022 1:57 pm

You can argue all you want. We know what the best numbers we have are. You can argue we don’t know, but we keep taking measurements. If there’s a problem, do you want to give up or use the best we have?

bdgwx
Reply to  Ragnaar
January 6, 2022 2:52 pm

That’s my position as well. We’re never going to be able to measure anything with 100% perfection so we’re going to have to use the best we have.

Carlo, Monte
Reply to  bdgwx
January 6, 2022 2:58 pm

Har, you just divide by root-N and all problems disappear in a puff of greasy green smoke.

More points!
Fewer problems!
Perfection!

bdgwx
Reply to  Carlo, Monte
January 7, 2022 8:23 am

Hardly. It is a lot more complicated than that. You’ll notice that Christy et al. 2003 did not divide by sqrt(9503). They divided by sqrt(25). This is because the degrees of freedom are not v = 9504 – 1 = 9503 but v = 26. The reason v = 26 is because of the way the satellites sample the atmosphere. The observations that are assigned to individual grid cells are highly correlated with each other.
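
A small sketch of the arithmetic behind the effective-degrees-of-freedom point (the 9504 and 26 figures come from this exchange; everything else is illustrative):

    import math

    n_raw = 9504   # raw number of gridded values in a month
    n_eff = 26     # effective degrees of freedom once spatial correlation is accounted for

    # The standard error scales as 1/sqrt(N), so the correlated case is larger by:
    inflation = math.sqrt(n_raw / n_eff)
    print(f"standard error is ~{inflation:.0f}x larger than the naive 1/sqrt(N) estimate")
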

Carlo, Monte
Reply to  bdgwx
January 7, 2022 8:45 am

I will not “notice” because the numbers are absurd.

Don’t you get it?

Reply to  Tom
January 5, 2022 11:01 am

The Law of Large Numbers (LLN) does not apply to the precision of measurement data used to find a mean. The precision of measuring devices determine the absolute precision you can obtain when using measurement data in calculations. This is why the rules of Significant Digits were developed.

Let’s discuss the LLN briefly. Now, how does the LLN apply to atmospheric temperatures? The weak LLN would be useful if we could expect measurements to follow finite predictable outcomes. For example, each day of the week having an expected temperature. In this situation, one would expect each temperature to appear 1/7th of the time. Temperatures don’t work that way. Temperatures are not unique finite numbers, each of which has a unique probability associated with it. Temperatures are an analog measurement with a constantly varying value. They are not repeatable so you can not determine probabilities for each and every value of temperature.

Why do so many people quote that the LLN can be used to know that an average of temperatures gives an accurate average? Because they have read something like this when reading about the strong LLN: “We end this section by reminding you that the most famous example of convergence in distribution is the central limit theorem (CLT). The CLT states that the normalized average of i.i.d. random variables X1, X2, X3, ⋯⋯ converges in distribution to a standard normal random variable.”

Remember, X1, X2, X3, … are really the means of each of the samples taken from the entire population. They are NOT the data points in a population. This is an extremely important concept. Many folks think the sample means and the statistical parameters derived from the distribution of sample means accurately describe the statistical parameters of the population. They do not!

As an example, plot daily Tmax and Tmin for a month. You will end up with a bimodal distribution with lots of temps at the warm end and lots of temps at the cold end. The mean will fall somewhere between the entries. These don’t follow the strong LLN and converge into a normal distribution, ever.

The same applies to any data set that includes things like day/night, summer/fall, or Northern/Southern hemispheres. Can you compute a mean value for these? Certainly you can. But remember, the CLT states that the normalized average of i.i.d. random variables X1, X2, X3, ⋯⋯ converges in distribution to a standard normal random variable. The distribution of such widely varied data WILL NOT CONVERGE TO A NORMAL DISTRIBUTION EVER.

In essence you must use sampling theory to obtain a normal distribution from disparate data such as temperature data. Even then don’t confuse the calculation of a mean as being more precise than the measurements themselves. You should apply the Significant Digits rules in order to meet scientifically rigorous depiction of the results of calculations.
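
A small synthetic illustration of the bimodal Tmax/Tmin point made above (made-up highs near 20 C and lows near 5 C; a crude text histogram shows the two clusters):

    import random

    random.seed(0)
    # Synthetic month of daily highs near 20 C and lows near 5 C (illustrative only).
    tmax = [random.gauss(20, 2) for _ in range(31)]
    tmin = [random.gauss(5, 2) for _ in range(31)]
    values = tmax + tmin

    # Crude text histogram in 2.5 C bins: two clusters appear, one per mode.
    lo = min(values)
    bins = [0] * 12
    for v in values:
        bins[min(int((v - lo) // 2.5), 11)] += 1
    for i, count in enumerate(bins):
        print(f"{lo + 2.5 * i:5.1f} C | {'#' * count}")
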

Clyde Spencer
Reply to  Jim Gorman
January 5, 2022 1:23 pm

Why do so many people quote that the LLN can be used to know that an average of temperatures gives an accurate average?

Because they read it somewhere and they liked that it appeared to support their beliefs.

bdgwx
Reply to  guest
January 4, 2022 7:29 am

No. Christy et al. 2003 report that the uncertainty on UAH TLT monthly anomalies is ±0.2 C.

Carlo, Monte
Reply to  guest
January 4, 2022 9:31 am

No. Remember that the temperature across the lower troposphere (0-10 km altitude) typically falls by about 50°C.

Rick W Kargaard
January 4, 2022 7:19 am

“The highest temperature ever recorded on Earth was 136 Fahrenheit (58 Celsius) in the Libyan desert. The coldest temperature ever measured was -126 Fahrenheit (-88 Celsius) at Vostok Station in Antarctica.”
Using those two temperatures the mean for the planet is about 258K, 5F, or -15C.
Make sense, NO. But does adding data points make any more sense?

Tom
Reply to  Rick W Kargaard
January 4, 2022 7:30 am

It would be pretty ridiculous to pick just two data points. I can’t imagine anyone would even think of doing that.

Rick W Kargaard
Reply to  Tom
January 4, 2022 8:01 am

I did. And it is no more ridiculous than using a thousand data points out of the billions that are necessary for meaningful accuracy.

Tom
Reply to  Rick W Kargaard
January 4, 2022 9:04 am

Are you familiar with the Law of Large Numbers?

Carlo, Monte
Reply to  Tom
January 4, 2022 9:32 am

BZZZZZT. Inapplicable.

Tom
Reply to  Carlo, Monte
January 4, 2022 9:37 am

Go ahead, explain yourself.

Reply to  Tom
January 5, 2022 11:28 am

No and neither are you!

Clyde Spencer
Reply to  Tom
January 4, 2022 9:45 am

It would be pretty ridiculous to pick just two data points.

That is what has been done for decades in defining the daily ‘average’ temperature, which is actually the mid-range value of the daily extremes, Tmax and Tmin.

AGW is Not Science
Reply to  Clyde Spencer
January 4, 2022 12:35 pm

With no information about how long the temperatures remained at their high and low points, thus making it pretty meaningless.

Tom
Reply to  AGW is Not Science
January 4, 2022 1:00 pm

Of course, we now have the USCRN data for the past 15+ years. I’m pretty sure they are not doing it that way. Also, a great deal is known about diurnal temperature cycles given a particular geographic location, so it is better than a WAG by quite a bit.

Reply to  Tom
January 5, 2022 11:32 am

What is a representative temp for the U.S.A. in the summer, and a representative temp for Antarctica in winter? What is the mean and standard deviation?

bdgwx
Reply to  Rick W Kargaard
January 4, 2022 7:41 am

That’s about -40 K of error. The error occurs because of the extreme spatial and temporal sampling bias. You can reduce the error by incorporating more observations both in the spatial and temporal domains into your analysis. Berkeley Earth measured the 1951-1980 temperature as being 287.257 ± 0.023 K by analyzing 458 million observations [1].

Phil R
Reply to  bdgwx
January 4, 2022 8:40 am

Fundamental mistake. BE did NOT “measure” anything.

bdgwx
Reply to  Phil R
January 4, 2022 10:48 am

There’s no mistake. They literally measured the global temperature. In fact, they did it for every month since 1850. See here. And they aren’t the only ones who did it. HadCRUT, GISTEMP, NOAA, ERA, UAH, RSS, RATPAC, and many more have done it as well.

Reply to  bdgwx
January 5, 2022 11:39 am

Where did they go to measure it?

bdgwx
Reply to  Jim Gorman
January 5, 2022 12:35 pm

The same place everyone else went to measure the mass of the Earth.

Reply to  bdgwx
January 5, 2022 4:05 pm

You do realize that the mass of the earth has never been “measured” don’t you? It has only been calculated, just like the GAT. Neither is a measurement, only a calculation.

Why are measurements such a hard concept for you to understand?

bdgwx
Reply to  Jim Gorman
January 5, 2022 5:19 pm

The GUM defines a measurand as “a particular quantity subject to measurement” and measurement as “set of operations having the object of determining a value of a quantity”. NIST defines measurement as “an experimental process that produces a value that can reasonably be attributed to a quantitative property of a phenomenon, body, or substance”. How do you define it?

Reply to  bdgwx
January 6, 2022 5:52 am

Around and around the mulberry bush. Tell us how the GAT is a measurement of a measurand.

Exactly what is the “measurand”? Is it a “particular quantity subject to measurement”?

bdgwx
Reply to  Jim Gorman
January 6, 2022 6:21 am

The measurand is the GAT. The measurement is the set of operations UAH used to quantify it.

Carlo, Monte
Reply to  Jim Gorman
January 6, 2022 6:44 am

NIST tells him what he wants to know, therefore it must be correct.

bdgwx
Reply to  Carlo, Monte
January 6, 2022 7:16 am

Those definitions came from the GUM. That was the source you wanted me to start using. I didn’t even know about the GUM until you told me about it.

Carlo, Monte
Reply to  bdgwx
January 6, 2022 11:33 am

Great. Find what corresponds to your biases for tiny numbers and run with them, no need to think about what you are actually doing. Have fun.

Jeff Alberts
Reply to  bdgwx
January 4, 2022 8:41 am

But each temp reading is an intensive property of that point. Averaging that reading with readings from other places is meaningless.

bdgwx
Reply to  Jeff Alberts
January 4, 2022 11:40 am

Pressure is an intensive property. Does that mean MSLP is meaningless?

Reply to  bdgwx
January 5, 2022 12:03 pm

Yes. MSLP is a “defined” value.

” Atmospheric pressure, also known as barometric pressure, is the pressure within the atmosphere of Earth. The standard atmosphere is a unit of pressure defined as 101,325 Pa, which is equivalent to 760 mm Hg, 29.9212 inches Hg, or 14.696 psi. Wikipedia”

Your explanation using intensive is not applicable.

Averaging pressures in two different vessels IS meaningless!

bdgwx
Reply to  Jim Gorman
January 5, 2022 12:33 pm

And where did the 101325 Pa come from?

Reply to  bdgwx
January 5, 2022 3:56 pm

The ICAO, the International Civil Aviation Organization is one.

Microsoft Word – ISA.doc (uba.ar)

See what the pressure is at “0” altitude. You do understand that pressure at sea level changes because sea level is not a constant value around the earth, right? Kinda like GAT isn’t a constant around the earth either.

bdgwx
Reply to  Jim Gorman
January 5, 2022 5:15 pm

You missed the point. Why was 1013.25 mb used for the standard atmosphere? Does every point on Earth have the standard atmosphere above it at all times? Does mean sea level even have meaning? Do all points on the surface of the ocean have the exact same sea level?

Reply to  bdgwx
January 6, 2022 5:47 am

You are running around the bush again. If you want to know why, read through the ICAO proceedings. The point is that a “standard atmosphere” does not vary around the world by measuring at “sea level”.

The point is that a standard atmosphere is a defined quantity that is used to calibrate instrumentation so that comparisons at different places will have a common reference point.

The fact that GAT varies depending on how calculations are done and when using different data sets means it is not a measurement. It is a calculated metric that ultimately means nothing at any point on earth.

bdgwx
Reply to  Jim Gorman
January 6, 2022 2:49 pm

Let me ask the question in a different way. Why wasn’t 1000 mb used for the standard atmosphere? Why wasn’t 288 K chosen for the standard atmosphere?

Carlo, Monte
Reply to  bdgwx
January 6, 2022 2:59 pm

You don’t already know the answers to these questions?

I am shocked.

bdgwx
Reply to  Carlo, Monte
January 7, 2022 8:03 am

I know the answers. I wanted to hear how JG explains it away.

Carlo, Monte
Reply to  bdgwx
January 7, 2022 8:46 am

Anything to keep the lies of climastrology alive, eh?

Clyde Spencer
Reply to  bdgwx
January 4, 2022 9:58 am

I don’t think that it is that simple. In the absence of sunlight, the Earth cools more quickly than it heats in the presence of sunlight. Basically, the temperature distribution is not symmetrical.

Incidentally, had Berkeley Earth presented their results correctly, they would have shown 287.26 ± 0.02 K.

https://wattsupwiththat.com/2017/04/12/are-claimed-global-record-temperatures-valid/

bdgwx
Reply to  Clyde Spencer
January 4, 2022 10:45 am

I’m not saying it is simple. In fact, the BEST method is actually pretty complex with the scalpeling, jackknifing, and kriging-like techniques in use. There is no assumption that the temperature is homogeneous over each of the 15984 cells in their grid mesh. It’s quite the opposite actually. Their grids definitely show both spatial and temporal variations in temperature. You’ll even notice that they measured the seasonal variation in the global mean temperature due primarily to the distribution of land over the globe. If you know of a figure significantly different from 287 for 1951-1980 that uses a more preferred methodology please post it.

Clyde Spencer
Reply to  bdgwx
January 4, 2022 1:07 pm

The point I was making, which I think you missed, was that they were improperly displaying the precision, as deduced from the significant figures.

bdgwx
Reply to  Clyde Spencer
January 4, 2022 1:23 pm

I’m not challenging your second point. I just didn’t think it was worth squabbling over 287.257 ± 0.023 K vs 287.26 ± 0.02 K, especially considering the alternative presented thus far was 258 K, where the difference is 39 vs 0.003 K. Instead I focused on your first point, which I thought was a more productive line of conversation.

Clyde Spencer
Reply to  bdgwx
January 5, 2022 1:29 pm

The thread was about precision. I was pointing out that alarmists routinely suggest a precision that is greater than justified. When more precision is implied than justified, it raises a question of “Why?” There are two plausible explanations, ignorance or malfeasance. Take your choice about what it says about those not following convention with regard to precision.

Reply to  Clyde Spencer
January 5, 2022 3:36 pm

Exactly.

Clyde Spencer
Reply to  Jim Gorman
January 5, 2022 8:16 pm

You left off the exclamation point! 🙂

bdgwx
Reply to  Clyde Spencer
January 5, 2022 5:01 pm

It’s funny because when I tell people the UAH TLT warming trend is +0.14 C/decade I’m told I’m being deceptive and alarmist because it is actually +0.135 C/decade which is lower. So I start saying it is +0.135 C/decade and then I’m told I’m being deceptive because I’m not following sf rules. And when I say fine I’ll just post the full IEEE 754 output and let you guys sort it out I’m told computer engineers were fools for creating a specification that scientists could abuse by performing calculations that use more digits than which they could possibly know. We can either squabble over the insignificant difference between 287.257 K and 287.26 K or have a more meaningful discussion about why the value is what it is and why it is not even close to 258 K.

Carlo, Monte
Reply to  bdgwx
January 5, 2022 5:45 pm

I’m told computer engineers were fools for creating a specification that scientists could abuse by performing calculations that use more digits than which they could possibly know.

Total BS, and demonstrating your abject lack of knowledge on the subject.

Clyde Spencer
Reply to  bdgwx
January 5, 2022 8:19 pm

Some disciplines, like physics, use more than 2 or 3 significant figures, and justifiably. Just not climastrologists!

Reply to  bdgwx
January 5, 2022 11:38 am

Berkeley DID NOT MEASURE the temperature to be 287.257 ±0.023 by any means. They calculated a mean using temps that did not have this resolution either. It is a bogus number to make people think they are excellent scientists able to read a calculator. Yet these numbers belie their knowledge of metrology and significant Digits rules.

bdgwx
Reply to  Jim Gorman
January 5, 2022 4:50 pm

What is your definition of “measure”?

Clyde Spencer
Reply to  bdgwx
January 5, 2022 8:25 pm

Using instrumentation to compare some physical process or property against a defined standard, or using instrumentation that has been calibrated against such a standard.

That contrasts with manipulating measurements with calculations to summarize the characteristics of a sampled population.

bdgwx
Reply to  Clyde Spencer
January 6, 2022 5:31 am

Yeah, that would definitely disqualify global mean temperatures as being measurements. It would also disqualify the entire UAH dataset. Incidentally it would disqualify a large portion of surface temperature observations as well since many of them especially in the automated era are actually averages over a period of time themselves (usually 1 or 5 minutes). FWIW…I adopt the NIST and GUM definitions which are much broader and allow temperatures and even averages of temperatures to qualify as measurements.

Reply to  bdgwx
January 6, 2022 5:38 am

A measurement is detecting a physical property of some phenomena by using a device that is designed and calibrated to detect the property of choice. This results in a measured data point.

Measurements can range from determining temperature, length, or even particles in a cloud chamber.

A calculation is using mathematics to derive some metric from measured and recorded data points. Those calculations should never exceed the precision with which the measurements were made. To do so portrays the results of calculation as being more precise than the original measurement and compromises the integrity of what the measurement actually represents. It is unethical to do so because it misleads people into thinking that you measured something with more precision than was actually done.

Carlo, Monte
Reply to  Jim Gorman
January 6, 2022 6:43 am

And then this disingenuous guy puts a negative vote on a very easy-to-understand explanation of the difference.

Says it all, he has ulterior motives.

bdgwx
Reply to  Carlo, Monte
January 6, 2022 7:10 am

For the record…I have never downvoted any post on WUWT; not even once. I do frequently upvote though. I even upvote your and the Gormans’ posts when I think a reasonable point has been made.

bdgwx
Reply to  Jim Gorman
January 7, 2022 7:22 am

I dug a little deeper regarding the definition of “measurement”. There is a companion document to the GUM called VIM or the International Vocabulary of Metrology developed by the same group (JCGM) that developed the GUM. Here is what they say.

measurement – process of experimentally obtaining one or more quantity values that can reasonably be attributed to a quantity

“experimentally obtaining” in the definition is to be interpreted in a broad sense of including not only direct physical comparison, but also of using models and calculations that are based on theoretical considerations.

I think UAH’s methodology here qualifies as a “measurement” according to the International Vocabulary of Metrology. What do you think?

Carlo, Monte
Reply to  bdgwx
January 7, 2022 9:04 am

I think you’ve got a big hat size and that Pat Frank was correct when he saw through your disingenuous shilling for climastrology after only about 4 posts.

Reply to  bdgwx
January 7, 2022 12:56 pm

JCGM 100-2008 page 33 – 36

B.2.5 – measurement

set of operations having the object of determining a value of a quantity

B.2.15 – repeatability

closeness of the agreement between the results of successive measurements of the same measurand carried out under the same conditions of measurement

NOTE 2 – Repeatability conditions include

  • the same measurement procedure
  • the same observer
  • the same measuring instrument, used under the same conditions
  • the same location
  • repetition over a short period of time

B.2.17 – experimental standard deviation

for a series of measurements of the same measurand, the quantity s(q(k)) characterizing the dispersion of the results and given by the formula:

s(q(k)) = sqrt( Σ (q(j) − q(bar))² / (n − 1) )

NOTE 1

  • Considering the series of “n” values as a sample of a distribution, q(bar) is an unbiased estimate of the mean u(q), and s^2(q(k)) is an unbiased estimate of the variance of that distribution.

B.2.18 – uncertainty (of measurement)

parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand

How many more references do you need of the applicability of the GUM to a SINGULAR MEASURAND?

The GUM simply does not cover the procedures for joining multiple measurement of different measurands over time. To do this, you need to involve yourself with sampling theory at least. You also need to investigate the use of time series analysis if you intend to continue using linear regression to justify predictions.

bdgwx
Reply to  Jim Gorman
January 7, 2022 2:06 pm

JG said: “How many more references do you need of the applicability of the GUM to a SINGULAR MEASURAND?”

I’m not challenging the fact that the GUM defines procedures for quantifying the uncertainty of one measurand at a time. Don’t hear what I didn’t say. I did not say that the GUM is devoid of procedures for using measurements of other measurands to quantify another measurand and its uncertainty. In fact, there is a whole section dedicated to exactly that, in addition to the example given for the type A procedure.

What I’m challenging is your definition of “measurement”. If you think 287.257 ± 0.023 is not a measurement then your definition is inconsistent with the GUM.

Carlo, Monte
Reply to  bdgwx
January 7, 2022 2:30 pm

Hey! You could divide your numbers by mc^2 and get really small uncertainties! Go for it!

bdgwx
Reply to  Carlo, Monte
January 7, 2022 5:30 pm

No that wouldn’t work. The magnitude of the uncertainty is usually expressed as the fractional uncertainty δx / |x|. When you divide by an exact number such as the speed of light squared c^2, the fractional uncertainty does not change (Taylor section 3.4). But dividing by the mass m will increase the fractional uncertainty (Taylor equations 3.18 and 2.21). In other words, dividing a number (whatever it happens to be) by mc^2 can only ever make the fractional uncertainty bigger; not smaller. BTW…you can also use GUM equation 10 to solve this problem as well. It should go without saying that I would never divide a temperature by mc^2 in this context. I’m not even sure what K.s2/kg.m2 or K/J would even mean.

Carlo, Monte
Reply to  bdgwx
January 7, 2022 6:23 pm

No that wouldn’t work.

Yes indeed, you are parody-challenged.

Reply to  bdgwx
January 8, 2022 4:17 pm

It is not a measurement.

B.2.5
measurement
set of operations having the object of determining a value of a quantity

B.2.9
measurand
particular quantity subject to measurement

What you are trying to rationalize is using measurements to determine a unique value from a specific function, such as PV = nRT where “measurements” can be used to determine the value of another quantity. GAT calculations are a different animal. GAT attempts to use a mean of a distribution as a value. There is no defined “function” that relates various other quantities to a specific quantity.

GAT is not the result of a defined function, it is a statistical mish mash that does not properly deal with the resolution of measurement devices. You are trying to use equations from the GUM that do not apply to what you are doing.

Notice how all of these definition use the term “same measurand”.

B.2.14 accuracy of measurement
closeness of the agreement between the result of a measurement and a true value of the measurand

B.2.15 repeatability (of results of measurements)
closeness of the agreement between the results of successive measurements of the same measurand carried out under the same conditions of measurement

B.2.17 experimental standard deviation
for a series of n measurements of the same measurand, the quantity s(qk) characterizing the dispersion of the results and given by the formula:

The GUM does not address statistical analysis of multiple measurements of different things like it or not. That is not its purpose. The 9504 different temps you reference elsewhere are not parts of an accepted function that can be used to determine a unique value.

Stop trying to use the GUM for defining statistical parameters.

Phil R
Reply to  Rick W Kargaard
January 4, 2022 8:11 am

Rick,

Your comment got me thinking about something that I’m not sure how to articulate or flesh out. It’s the whole idea that a global average temperature, even if it can be calculated, means anything.

I live in SE Virginia. It seems to me that the average temperature in SEVA over a period of time might be meaningful, especially in comparison with, say, the average temp over time in Boston. But the long-term average of the temps in SEVA and Boston is meaningless.

Clyde Spencer
Reply to  Rick W Kargaard
January 4, 2022 9:47 am

Part of the problem is that the global annual temperature distribution is not Gaussian. It is skewed.

https://wattsupwiththat.com/2017/04/23/the-meaning-and-utility-of-averages-as-it-applies-to-climate/

Reply to  Clyde Spencer
January 5, 2022 12:09 pm

It is not a properly sampled population. The requirement of IID is not met for samples.

Reply to  Clyde Spencer
January 7, 2022 1:04 pm

BTW, I have your post stored for quick access in my climate folder. It is a good one.

Carlo, Monte
Reply to  Jim Gorman
January 7, 2022 6:24 pm

It is excellent; bellman might learn something from it if he would read with an open mind.

Reply to  Carlo, Monte
January 8, 2022 6:57 am

Thanks for the pointer. I did learn some things from it, mainly how to write an impressively long-looking article without actually saying much. It jumps from one point to another with little consideration of what any of them mean, and leads up to a summary that appears to have nothing to do with the rest of the article. To be fair, I think Spencer isn’t anything like as bad here as you or the Gormans. He at least doesn’t claim that uncertainty increases with sample size – but I’m at the end still not sure what point he is trying to make, or how he justifies anything.

To take one example that caught my attention, he says

Climatologists have attempted to circumscribe the above confounding factors by rationalizing that accuracy, and therefore precision, can be improved by averaging. Basically, they take 30-year averages of annual averages of monthly averages, thus smoothing the data and losing information! Indeed, the Law of Large Numbers predicts that the accuracy of sampled measurements can be improved (If systematic biases are not present!) particularly for probabilistic events such as the outcomes of coin tosses. However, if the annual averages are derived from the monthly averages, instead of the daily averages, then the months should be weighted according to the number of days in the month. It isn’t clear that this is being done. However, even daily averages will suppress (smooth) extreme high and low temperatures and reduce the apparent standard deviation.

The highlighted passage is raised as an objection to the idea that you can reduce uncertainty by averaging. The issue being about not weighting for the number of days in a month. He then admits he doesn’t know if they do this or not – he hasn’t bothered to check, nor has he bothered to look at the actual monthly figures to see what difference it would make, or even thought about what sort of difference it would make. Yet that doesn’t stop him tossing the idea in as a way of making it look like 30-year averages might be suspect.

So I thought I’d check it myself and compare UAH annual values averaged using unweighted monthly data to weighted data. I attach the graph. Most years differ by less than 0.001°C, and no year differs by more than 0.002°C. The difference between the 30-year averages for 1991–2020 is 0.00004°C.

20220108wuwt.png
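
A minimal sketch of the weighted-versus-unweighted check described here; the `monthly` values are placeholders, not the actual UAH anomalies, and leap years are ignored:

    # Compare an unweighted annual mean of monthly anomalies with one weighted
    # by the number of days in each month.
    DAYS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

    # Placeholder monthly anomalies (C); substitute a real year of values to reproduce the check.
    monthly = [0.10, 0.20, 0.00, -0.05, 0.05, 0.00, 0.20, 0.15, 0.25, 0.35, 0.10, 0.20]

    unweighted = sum(monthly) / 12
    weighted = sum(d * m for d, m in zip(DAYS, monthly)) / sum(DAYS)
    print(f"unweighted: {unweighted:.4f} C, weighted: {weighted:.4f} C, "
          f"difference: {weighted - unweighted:+.5f} C")
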
Reply to  Rick W Kargaard
January 5, 2022 11:25 am

No, it doesn’t make sense! Where is the standard deviation ever quoted?

January 4, 2022 7:35 am

If global average temperature changes by about 1.4 degrees C in 100 years, that should be seen as stasis, not change. History tells us this is insignificant compared to the more dramatic ups and downs, even if we limit our view to the Holocene.

bdgwx
Reply to  Andy Pattullo
January 4, 2022 7:59 am

Can you post the global temperature reconstruction of the Holocene that you are referring to here? The keyword here is global. That is important because UAH is publishing a global temperature so we want to compare like-to-like.

Carlo, Monte
Reply to  bdgwx
January 4, 2022 9:33 am

Who are “we”?

bdgwx
Reply to  Carlo, Monte
January 4, 2022 10:00 am

Anybody who wants to be skeptical of the claim that a 1.4 C temperature change is insignificant over the Holocene period.

Carlo, Monte
Reply to  bdgwx
January 4, 2022 6:13 pm

This number, even if real, tells zippo about climate.

Reply to  bdgwx
January 5, 2022 12:11 pm

Can you walk outside and guess the temperature within 1.4 degrees reliably? That is why it is insignificant!

bdgwx
Reply to  Jim Gorman
January 5, 2022 12:25 pm

We are talking about the global mean temperature here; not the local temperature in someone’s backyard. And someone’s ability or inability to guess the temperature in their backyard is irrelevant to the quantification of significance of 1.4 C change in the global mean temperature.

Carlo, Monte
Reply to  bdgwx
January 5, 2022 2:58 pm

This artificial number has nothing to do with actual climate.

Reply to  bdgwx
January 5, 2022 3:26 pm

I don’t care what temperature you are talking about. If you can’t even accurately guess a temperature within that range, then a 1.4 rise in temperature IS INSIGNIFICANT.

You can run around the bush all you want, but you need to address real physical conditions at some point in time. The lives the warmists want to affect are important. It is then important for you to define why 1.4 degrees is a big deal.

Reply to  Jim Gorman
January 5, 2022 4:22 pm

If a 1.4 rise is insignificant, does that also mean a drop of 1.4 would be insignificant?

Reply to  Bellman
January 6, 2022 5:19 am

If you humans can not accurately detect temperature within 1.4 degrees, then of course either above or below any given temperature would be insignificant.

As a warmist, you should be able to describe the effects of temperature on humans and the environment. For example, will a 1.5 deg rise be noticeable to humans, or will agriculture be able to provide seed that can withstand that range? If someone wants to claim an existential danger, then there must be data to support that.

Reply to  Jim Gorman
January 6, 2022 5:55 am

So whenever anyone warns of the danger of returning to little ice age conditions, or tells me how much the world has improved now we are a degree warmer than the 19th century, I can just tell them they are wrong as they couldn’t feel a difference of 1°C?

bdgwx
Reply to  Jim Gorman
January 5, 2022 4:49 pm

So that’s the definition of significance now? If I can guess it (whatever it may be) it is significant. If I can’t then it is insignificant?

Reply to  bdgwx
January 7, 2022 1:10 pm

You have heard what I said. If you can’t decide within 1.4 deg what the temperature is, then what difference does it make to you?

If I walk outside and believe the temp is between 60 and 70, I am happy. If I see my breath, I know it is below freezing and I am not happy. That really is about as insignificant as it gets.

Perhaps you are more sensitive to temperatures and can readily ascertain an accurate temp through your skin. You must lead a difficult life if you must worry about a 2 degree change.

bdgwx
Reply to  Jim Gorman
January 7, 2022 1:51 pm

1.4 C can mean the difference between snow and rain or whether a pond will be iced over or not. Those differences seem significant to me yet I doubt I could guess at the temperature to with even 5 C nevermind 1.4 C.

Reply to  Carlo, Monte
January 4, 2022 12:46 pm

“We” are obviously not a collection of normally distributed individuals, but a group of like minded persons who know better than “Them”.

bdgwx
Reply to  Doonman
January 4, 2022 1:46 pm

If that is your definition of “we” then I’m definitely not included. I have no shame in admitting that those who actually do the research and publish their works are far smarter than I will ever be.

Clyde Spencer
Reply to  bdgwx
January 4, 2022 9:59 am

Roy also lists regional changes.

bdgwx
Reply to  Clyde Spencer
January 4, 2022 10:34 am

It’s better than that. He provides a 1.25 degree grid with 10368 unique regions, of which 9504 have values. Unfortunately it only goes back to about 1979, so we can’t use it to make statements about any of those 9504 regions, or combinations of those regions including the entire globe, 100 years ago or throughout the Holocene.

Sandwood
January 4, 2022 7:35 am

I think it is important to note that Dr. Roy’s graph shows a linear temperature change in the data that he has…..it does not indicate that the planet has been warming at this rate.
If the satellites had been launched a year earlier or a year later, the linear temperature change that his data would show would be different.

Carlo, Monte
January 4, 2022 7:40 am

Here is the same data replotted with generous uncertainty limits; the regression line can fall anywhere within the confidence interval (green). Also shown is the histogram of the regression residuals; the roughly Gaussian shape is an indication that much of the variation is random.

UAH LT globe 2021-12.jpg
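
A hedged sketch (Python with NumPy, synthetic data only) of the kind of analysis described: an ordinary least-squares fit to a monthly anomaly series and a quick look at whether the residuals are roughly bell-shaped. The 0.14 C/decade slope and 0.15 C noise level are assumed for illustration, not derived from the UAH file:

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic monthly anomaly series: a +0.14 C/decade trend plus noise (illustrative only).
    months = np.arange(504)                       # 42 years of months
    anoms = 0.14 / 120 * months + rng.normal(0, 0.15, months.size)

    # Ordinary least-squares fit and its residuals.
    slope, intercept = np.polyfit(months, anoms, 1)
    residuals = anoms - (slope * months + intercept)

    print(f"fitted trend: {slope * 120:.3f} C/decade")
    print(f"residual standard deviation: {residuals.std(ddof=2):.3f} C")

    # Crude text histogram of residuals; roughly bell-shaped if the scatter is mostly random.
    counts, edges = np.histogram(residuals, bins=9)
    for c, lo_edge in zip(counts, edges):
        print(f"{lo_edge:+.2f} | {'#' * c}")
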
Carlo, Monte
Reply to  Carlo, Monte
January 4, 2022 8:48 am

There is quite a bit of information that is not given with the UAH regression plots, especially:

• How many points are averaged in each monthly point
• The variances (standard deviations) of the points
• How many times is each spherical grid point sampled by the satellite in a month (the satellite orbital period is ~2 hrs while the Earth rotates underneath)
• The variances of the satellite temperatures for each month (before the baseline is subtracted)

Some additional information/clues are available from the UAH pages:

https://www.nsstc.uah.edu/data/msu/v6.0/tlt/tltmonacg_6.0

This file is the baseline temperature data in K for the lower troposphere (packed without decimal points and delimiters), with ~10,000 grid points for each month of the year. The -9999 values are placeholders for the polar regions from which no data is recorded. Calculating some basic statistics and forming histograms of these data reveals some unusual details.

Month   Mean (°C)   Median (°C)   σ (°C)   Peak (cts)   Peak (°C)
Jan     -14.60      -13.68        12.94    557           0.25
Feb     -14.67      -13.40        13.29    702           0.25
Mar     -14.69      -13.29        13.54    746           0.75
Apr     -14.21      -13.03        13.35    892           0.75
May     -13.31      -11.93        13.08    843           0.75
Jun     -12.29      -10.33        13.04    822           0.32
Jul     -11.82       -8.71        13.40    701          -0.19
Aug     -12.18       -9.21        13.56    799          -0.14
Sep     -13.08      -10.91        13.51    778           0.02
Oct     -13.99      -12.50        13.30    688           0.25
Nov     -14.49      -13.53        12.85    580           0.25
Dec     -14.61      -13.77        12.77    513           0.25

Each month has a large peak always very close to 0°C, with broad shoulders between -40°C/-30°C and -5°C. Here are the histograms for the two months with the highest peak (April) and lowest peak (December). Some of the shoulders have smaller peaks. The median values are shifted higher than the means, corresponding to the influence of the large 0°C peaks; all the standard deviations are about 13°C.

UAH LT Apr-Dec Baseline.jpg
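
A minimal sketch of the per-month statistics being reported (Python; the parsing of the packed fixed-width file is deliberately left out, and `monthly_stats` is a hypothetical helper that expects already-parsed grid values in °C with the -9999 placeholders removed):

    from statistics import mean, median, pstdev
    from collections import Counter

    def monthly_stats(values_c, bin_width=0.5):
        """Summary statistics for one month of gridded baseline temperatures (deg C)."""
        # Histogram peak: the most populated bin of width `bin_width`.
        hist = Counter(round(v / bin_width) * bin_width for v in values_c)
        peak_temp, peak_count = max(hist.items(), key=lambda kv: kv[1])
        return {
            "mean": mean(values_c),
            "median": median(values_c),
            "sigma": pstdev(values_c),
            "peak_C": peak_temp,
            "peak_count": peak_count,
        }

    # Toy usage with made-up values; real use would pass the ~9504 valid grid points for a month.
    print(monthly_stats([-30.0, -14.5, -13.7, -0.2, 0.1, 0.3, 0.2, -8.0]))
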
Reply to  Carlo, Monte
January 5, 2022 3:28 pm

Nice to see some REAL statistical analysis!

Carlo, Monte
Reply to  Jim Gorman
January 6, 2022 6:57 am

With a bit of irony, it was bdgwx who pointed me to the UAH baseline numbers. After unpacking them, I was more than a bit surprised to see that they never get above freezing. Then I plotted a histogram of one month and saw the giant spike at 0°C and had a “whoa!” moment. There is something odd about the convolution of the MSU response versus O2 level and the 0-10 km LT lapse rate that I won’t pretend to understand. Yet somehow they are able to resolve 0.01 K changes in averages of quantities with 13 K of variance?

bdgwx
Reply to  Carlo, Monte
January 6, 2022 8:02 am

UAH is not able to resolve 0.01 K changes. Christy et al. 2003 report an uncertainty on the monthly anomalies of ±0.2 C (2σ).

I think you meant to say the standard deviation is 13 K. The variance is 169 K. That’s interesting because when you plug that into the standard deviation of the mean formula GUM equation (5) it comes out to 13 / sqrt(9504) = 0.13 C which is consistent with the value reported by Christy et al. 2003 using a completely different methodology.

Carlo, Monte
Reply to  bdgwx
January 6, 2022 11:40 am

Bullshit. You can’t just brush the variance under the rug, the range of these numbers is huge!

If there happens to be 100,000 data points instead of 10,000, it gets even smaller! You win!

But go ahead and believe what you will, you obviously don’t want to consider anything that might interfere with your warmunist biases. Toss reason in the rubbish bin.

bdgwx
Reply to  Carlo, Monte
January 6, 2022 12:24 pm

I didn’t brush the variance under the rug. I used it in GUM equation (5).

Carlo, Monte
Reply to  bdgwx
January 6, 2022 12:31 pm

Yah, you are da king of UA!

All Hail!

Reply to  bdgwx
January 7, 2022 11:16 am

Equation (5) doesn’t apply to an average of disparate measurements from varied stations.

Reply to  bdgwx
January 6, 2022 4:02 pm

Missouri School of Mines pickiness.

169 K s/b 169 K^2

bdgwx
Reply to  bigoilbob
January 6, 2022 4:47 pm

Yikes. You are absolutely correct.

Reply to  bdgwx
January 7, 2022 11:14 am

Eq. 5 is based upon a quantity “q” that varies randomly and for which “n” independent observations have been obtained.

You do realize that is dealing with ONE MEASURAND, right? The random variables are the observed quantities from measuring a single measurand multiple times with the same device.

You really need to monitor or take some metrology courses or even chemistry/physics lab courses. The GUM is basically a manual focused on making measurements of a single thing (measurand ).

That single measurand may have a functional relationship of several different things and the GUM deals with computing a combined uncertainty. An example is the Ideal Gas Law where PV = nRT. It is not unusual to try and find the number of moles of a substance which is hard to measure with any degree of accuracy. It is easier to control and/or measure the other variables and calculate the number of molecules. The uncertainty is then calculated using the methods in the GUM.

You are trying to fit a square peg into a round hole. If you want to combine stations into a whole, you really need to do your research on sampling techniques and how statistical parameters are related. This won’t help you with measurement uncertainty but you will have a better idea of statistical analysis.

I leave this with a question.

Why do you believe an annualized temperature is the most correct phenomenon? Each winter is spread across two different years rather than being treated as a unique occurrence.

Why are not equinoxes and solstices used as dividing points instead of months and years?

bdgwx
Reply to  Jim Gorman
January 7, 2022 1:21 pm

Equation 5 is used to perform a type A quantification of the uncertainty of a measurand. The model Carlo Monte used for the type A method was y = f(x_1) = x_1 where y is the planetary temperature and x_1 is a single gridded value used as an input to estimate it. Using the method described in 4.1.4 we take the average of the model y_avg = (1/n)Σ[f(x_1_k), 1, k] and use that for our estimate of y. Doing it this way forces us to use equation 5 in 4.2.1. We have 9504 values for q_k. Carlo Monte concluded that s^2(q_k) is 169 K^2 which means s(q_avg) = sqrt(169 K^2 / 9504) = 0.13 K. Is this model a good approach? Nope. But that doesn’t mean the GUM precludes that choice of model from being used in equation 5. Remember, I didn’t pick that model or advocate for its use. Carlo Monte did. He just did the arithmetic wrong and got 3.6 K instead of the correct result 0.13 K. Remember, the type A method requires you divide the variance by the sample size prior to taking the square root. And you most definitely do not take the 4th root of the variance and call it good.

Carlo, Monte
Reply to  bdgwx
January 7, 2022 2:34 pm

More bollocks.

The point you studiously try to ignore is that with more points your “calculation” gets even smaller (and more absurd).

bdgwx
Reply to  Carlo, Monte
January 7, 2022 4:54 pm

I’m definitely not ignoring that point. In fact, I’m doing the opposite. I’m trying to tell you that this is exactly what the GUM says. That is that type A evaluations are more reliable when there are more observations and that this may be a valid reason for choosing type B evaluations over type A in some scenarios. See 4.3.2 and E.4.3.

Carlo, Monte
Reply to  bdgwx
January 7, 2022 6:25 pm

N up, U down == absurdity

Reply to  bdgwx
January 8, 2022 4:51 pm

You are ignoring the point. How does doubling the number of points increase the precision of a mean even further? You need to explain in words what is occurring. You are cherry-picking equations without a proper understanding of the necessary assumptions.

You reference equation 5. Look what the lead in says.

“4.2 Type A evaluation of standard uncertainty

4.2.1 In most cases, the best available estimate of the expectation or expected value µq of a quantity q that varies randomly [a random variable (C.2.2)], and for which n independent observations qk have been obtained under the same conditions of measurement …”

See where it says “a quantity “q” that varies randomly”? That is a single measurand with multiple measurements that have a Gaussian distribution.

You have no background in physical measurements do you? Yet you try to teach those who do! Amazing!

bdgwx
Reply to  Carlo, Monte
January 6, 2022 7:33 am

That is a really cool analysis. I wonder if the spike at 0 C is related to the enthalpy of fusion?

Reply to  bdgwx
January 6, 2022 4:06 pm

Uncommon sense. What else could it be? Not rhetorical. Anyone…?

Clyde Spencer
Reply to  Carlo, Monte
January 4, 2022 10:02 am

I presume that the confidence interval is one sigma because the 1998 El Nino breaches the upper green line.

Carlo, Monte
Reply to  Clyde Spencer
January 4, 2022 12:49 pm

I didn’t multiply it by 2 or anything, so the answer has to be yes. The prediction intervals calculation might be interesting.

Carlo, Monte
Reply to  Clyde Spencer
January 6, 2022 7:00 am

One of the Holy Trenders gave you a downvote for asking a pertinent technical question, Clyde. Most amusing.

January 4, 2022 8:27 am

Houston, we have (an apparent) problem.

The above article by Dr. Roy Spencer appears to show a significant upward trend in global LAT based on UAH satellite-derived temperatures over the last 7 years or so. The graphed rise in the temperature anomaly since end-2014 is about 0.2 C based on a linear curve-fit over that interval.

However a WUWT article posted by Christopher Monckton of Brenchley just four hours earlier (see https://wattsupwiththat.com/2022/01/03/no-statistically-significant-global-warming-for-9-years-3-months/ ) states that there is no statistically significant upward trend in global LAT “for fully seven years”, based on the very same UAH data set. Monckton defines his range of data uncertainty as 0.3 C based on his analysis of the HadCRUT4 dataset.

Dr. Spencer states a long-term trend (since Jan 1979) in global LAT warming of +0.14 C/decade based on the UAH data set. Lord Monckton states a “long-term” trend (since January 2015) of <-0.01 C/decade based on the same UAH data set.

Obviously, one conclusion from this comparison is that the stated “trends” in global LAT are significantly dependent on the interval over which one computes such.

Another conclusion is that one should not really talk about temperature trending without a clear indication of the statistically-based uncertainty of measurements in the data set being discussed.

Toward this latter point, I wish that Dr. Spencer would always include his estimate of the uncertainty range associated with his monthly updates of UAH global LAT anomalies, especially since he regularly reports the updates to two decimal places.

Keep in mind that I am NOT critical of the generally-excellent work of both of these gentlemen, who contribute so much to understanding the science underlying climate change. Both gentlemen may actually be in agreement . . . it’s just hard to reconcile the different reports based on the different methods of presenting the data.
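
A small synthetic illustration of the point about interval dependence: the same noisy series yields very different least-squares trends depending on the start date, especially for short windows. All numbers here are made up for the sketch, not taken from UAH or HadCRUT:

    import numpy as np

    rng = np.random.default_rng(1)
    # Synthetic 43-year monthly anomaly series with a modest long-term trend plus noise.
    months = np.arange(516)
    series = 0.14 / 120 * months + rng.normal(0, 0.2, months.size)

    # The fitted trend depends strongly on the start of the window.
    for window_years in (43, 20, 9, 7):
        n = window_years * 12
        x, y = months[-n:], series[-n:]
        slope = np.polyfit(x, y, 1)[0] * 120
        print(f"last {window_years:>2} years: {slope:+.3f} C/decade")
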

Carlo, Monte
Reply to  Gordon A. Dressler
January 4, 2022 9:36 am

A serious uncertainty analysis of the UAH calculation would be a massive project, but it would be well worth the time and effort. This has not been done to date, regardless of the noise put out by those of the Holy Trends cloth.

Tom
Reply to  Carlo, Monte
January 4, 2022 12:54 pm

Just do a simple r squared calculation between the UAH dataset and all the other datasets. Let us know what you find.

Carlo, Monte
Reply to  Tom
January 4, 2022 6:14 pm

WHOOOOSH — This would indicate exactly nothing.

And who is “us”?

Reply to  Carlo, Monte
January 5, 2022 3:31 pm

It’s an “ensemble” of mostly wrong stuff, kinda like models dontcha know!

Carlo, Monte
Reply to  Jim Gorman
January 6, 2022 6:48 am

He shows up in WUWT, oblivious to the fact he can’t buffalo experienced technical people with his nonsense.

Another example of Unskilled and Unaware.

bdgwx
Reply to  Tom
January 6, 2022 7:28 am

This sounded like an interesting idea so I did it. Above you’ll see that I have a graph of 8 different datasets including representation from surface, satellite, radiosonde, and reanalysis categories. I form an equally weighted composite of them all. Below is the R^2 of each member relative to the composite.

UAH = 0.829
RATPAC = 0.873
RSS = 0.943
BEST = 0.952
GISTEMP = 0.954
NOAA = 0.955
HadCRUT = 0.965
ERA = 0.967
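
A hedged sketch of the R^2-versus-composite calculation described above, with synthetic stand-ins for the datasets (the names A/B/C are placeholders, not the real datasets):

    import numpy as np

    rng = np.random.default_rng(2)
    months = np.arange(516)
    base = 0.14 / 120 * months + rng.normal(0, 0.1, months.size)

    # Stand-ins for several datasets: a shared signal plus dataset-specific noise.
    datasets = {name: base + rng.normal(0, s, months.size)
                for name, s in [("A", 0.05), ("B", 0.10), ("C", 0.15)]}

    # Equally weighted composite, then R^2 of each member against it.
    composite = np.mean(list(datasets.values()), axis=0)
    for name, series in datasets.items():
        r = np.corrcoef(series, composite)[0, 1]
        print(f"{name}: R^2 = {r**2:.3f}")
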

MarkMcD
Reply to  Gordon A. Dressler
January 4, 2022 4:23 pm

The above article by Dr. Roy Spencer appears to show a significant upward trend in global LAT based on UAH satellite-derived temperatures over the last 7 years or so. The graphed rise in the temperature anomaly since end-2014 is about 0.2 C based on a linear curve-fit over that interval.

From my Reply to a post above…

Even this UAH data seems suspect.

From 2015: Version 6 of the UAH MSU/AMSU global satellite temperature dataset is by far the most extensive revision of the procedures and computer code we have ever produced in over 25 years of global temperature monitoring.

Now go back to the top of the page and look at that graph – there’s a distinct state change in 2015!

Jeff Alberts
January 4, 2022 8:31 am

So, a made-up number is slightly higher than it was 42 years ago. Why should I care?

AGW is Not Science
Reply to  Jeff Alberts
January 4, 2022 12:39 pm

Particularly when an “increase” is good news anyway.

Reply to  Jeff Alberts
January 4, 2022 1:59 pm

Jeff, you should care because Deep Thought said that the number “42” was very important to the future of the human race.

January 4, 2022 8:45 am

And Arctic sea ice is at its highest point for this date in the last 18 years! 2012 being the next closest, and also the year we hit our lowest record in the satellite era.
Which, of course, is totally meaningless, because Arctic sea ice lowest and highest points in the satellite era are all within the norms of the last 2000 years, as are all recorded temperature records. The planet is not warming, people; the long-term trend from the start of the Holocene is still cooling, and no amount of BS changes that.

Reply to  bob boder
January 4, 2022 8:50 am

The area (and temperature trends) within the Arctic circle is far from representative of Earth’s average surface.

Reply to  Gordon A. Dressler
January 4, 2022 9:19 am

Global temperatures

Reply to  bob boder
January 4, 2022 11:18 am

Global temperatures = Arctic Circle temperatures + Antarctic Circle temperatures + temperatures everywhere else on Earth, averaged using a witch’s brew of mathematics.

There: fixed it for you.

I simply terms, what happens in the Arctic (in terms of rising or falling average temperatures) has little bearing on the temperatures for Earth as a whole.

Reply to  Gordon A. Dressler
January 4, 2022 2:02 pm

Sorry, darn typos, last sentence should read:
“In simple terms, what happens in the Arctic (in terms of rising or falling average temperatures) has little bearing on the temperatures for Earth as a whole.”

Clyde Spencer
Reply to  Gordon A. Dressler
January 5, 2022 1:34 pm

What happens in the Arctic stays in the Arctic!

Reply to  Gordon A. Dressler
January 5, 2022 10:02 am

Gordon

that was pretty much my point: all of these highs and lows and whatever that everyone is bringing up are well within the range of the norm over the last 2,000 years, and all on the cold side for the last 10,000 years. i.e., nothing to see here.

Doug
January 4, 2022 10:09 am

So wonderful that it’s getting warmer and more plant food is available. We definitely are blessed to live in an interglacial period.

John Tillman
Reply to  Doug
January 4, 2022 12:03 pm

There would not be 8 billion of us if we weren’t in an interglacial. Probably not even 8 million.

AGW is Not Science
January 4, 2022 11:46 am

+0.21 deg. C

Now give me the margin of error in the “measurements.” And recall that most of the “instrument temperature record” was recorded by liquid-in-glass thermometers in increments 100 times the size of this hundredths-of-a-degree figure, which may as well be something that falls out of your rear end when using the porcelain convenience.

What a f$&*%ing joke.

bdgwx
Reply to  AGW is Not Science
January 7, 2022 7:55 am

Christy et al. 2003 say it is ±0.2 C. Liquid-in-glass thermometers (LiGs) have nothing to do with the UAH dataset.

Carlo, Monte
Reply to  bdgwx
January 7, 2022 9:07 am

A number which is pure fantasy; that you hang your hat on it is quite telling.

bdgwx
Reply to  Carlo, Monte
January 7, 2022 9:42 am

I assure you, it is not fantasy. From Christy et al. 2003 section 5 pg. 627

Low–middle tropospheric comparisons suggest that globally the satellite system provides monthly and annual anomalies within a 95% CI range of error of approximately ±0.20 and ±0.15 C, respectively (Table 10).

And I don’t hang my hat on that number. I have my own concerns with the UAH methodology, both in terms of a potential time-dependent bias in the time series itself and in how they performed their uncertainty analysis.

Carlo, Monte
Reply to  bdgwx
January 7, 2022 9:55 am

Another disingenuous claim; you quote it every chance possible.

That paper is not a UA (uncertainty analysis). If this Christy person willy-nilly divides everything by root-N, then they are as deluded-clueless as you.

bdgwx
Reply to  Carlo, Monte
January 7, 2022 10:33 am

That’s John Christy, Roy Spencer, William Norris, William Braswell, and David Parker. “this Christy person” is the creator of the UAH dataset. And if you would read the paper you’d see that they did not divide everything by root-N “willy-nilly” or otherwise.

Carlo, Monte
Reply to  bdgwx
January 7, 2022 12:27 pm

Parody-challenged much?

You can’t buffalo me by spreading big names around; 0.2 C for a temperature measurement is still, and will remain, an absurdity.

Robber
January 4, 2022 12:31 pm

Is there a difference between warming trends over the oceans versus land?

January 4, 2022 3:24 pm

Impatiently waiting for the December data to appear…

Jeff in Calgary
January 4, 2022 3:48 pm

Don’t know about anyone else, but we are currently experiencing the longest cold snap in memory.

JMurphy
Reply to  Jeff in Calgary
January 5, 2022 6:48 am

Now that would be news! Where is that?

Clyde Spencer
Reply to  JMurphy
January 5, 2022 8:30 pm

Jeff in Calgary

JMurphy
Reply to  Clyde Spencer
January 6, 2022 10:31 am

Well, I’ve seen some figures saying, for example, ‘coldest day since 1966’ in Calgary. But I assume that there are people there who can remember the weather from earlier than that, so I am wondering where “the longest cold snap in memory” might be.

https://mobile.twitter.com/YYC_Weather?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1475548698939727877%7Ctwgr%5E%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fd-18680785431223705959.ampproject.net%2F2111242025001%2Fframe.html

Bob Rogers
Reply to  Jeff in Calgary
January 5, 2022 8:09 am

We’ve just had a freakishly warm December here in South Carolina. Many days were 20F higher than normal.

Ken Gregory
January 4, 2022 10:35 pm

Dr. Spencer wrote “The linear warming trend since January, 1979 remains at +0.14 C/decade.”

I disagree. The trend is expressed to the nearest hundredth of a degree. While there are several ways to calculate the linear trend, accounting for months having different numbers of days and/or for leap years, simply assuming that all months have the same length gives a trend from January 1979 to December 2021 of +0.13485844 C/decade, which rounds to +0.13 C/decade, not +0.14 C/decade.

The graph starts January 1979 but the full dataset starts December 1978. The trend from December 1978 to December 2021, using a count of months for the trend calculation, is +0.13517220, which rounds to +0.14 C/decade. Spencer’s trend calculation starts in December 1978 but his statement incorrectly says “since January, 1979.”

The trend is only slightly different when using a count of days. The trend from January 1979 to December 2021, calculated in various ways, is:
Count of months: +0.1348584 C/decade [assumes all months are equal and 120 months per decade]

The trend per day is +3.692168E-5 C/day. Over a 4-year period there are 365.25 days/year on average, due to a leap year every 4 years (the year 2000 is a leap year).
Count of days: +0.1348564 C/decade [assumes 365.25 days/year]

But our time period is not an even number of years, nor an even number of 4-year periods. There are 15,675 days. The number of years from Jan 1979 to Dec 2021 is 42 + 334/365 = 42.91507 years (note there are 11 months from mid-Jan to mid-Dec). The average number of days per year over our period is 15,675 days divided by 42.91507 years = 365.2563202. The most accurate trend is:
Count of days: +0.1348588 C/decade.

The three methods all round to +0.13486 C/decade, which further rounds to +0.13 C/decade.
Obviously, assuming all months are equal is close enough!
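A minimal sketch of the month-count versus day-count versions of this calculation, assuming the monthly global LT anomalies for January 1979 through December 2021 have already been read into an array (loading uahncdc_lt_6.0.txt is omitted); this illustrates the arithmetic described above, not the exact spreadsheet used:

import numpy as np

def trend_per_decade_month_count(anoms):
    """OLS slope treating every month as equal length (120 months per decade)."""
    t = np.arange(len(anoms)) / 120.0            # time in decades
    slope, _intercept = np.polyfit(t, anoms, 1)
    return slope

def trend_per_decade_day_count(anoms, first_month="1979-01"):
    """OLS slope with each month placed at its 15th, spaced by actual calendar days."""
    months = np.arange(len(anoms)).astype("timedelta64[M]")
    mid_months = ((np.datetime64(first_month) + months).astype("datetime64[D]")
                  + np.timedelta64(14, "D"))
    days = (mid_months - mid_months[0]).astype(float)   # days since first mid-month
    t = days / 3652.5                                    # decades at 365.25 days/year
    slope, _intercept = np.polyfit(t, anoms, 1)
    return slope

# With the 516 monthly values from January 1979 to December 2021, the two
# slopes should be nearly identical, consistent with the point above that the
# equal-month approximation is close enough.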

bdgwx
Reply to  Ken Gregory
January 7, 2022 7:54 am

That’s a pretty cool post. In my spreadsheet I only keep track from 1979/01 and later because the ERA dataset begins in 1979/01.