# 2021 Tied for 6th Warmest Year in Continued Trend, NASA Analysis Shows (Claims)

From NASA

Read this press release in Spanish here.

Earth’s global average surface temperature in 2021 tied with 2018 as the sixth warmest on record, according to independent analyses done by NASA and the National Oceanic and Atmospheric Administration (NOAA).

Continuing the planet’s long-term warming trend, global temperatures in 2021 were 1.5 degrees Fahrenheit (0.85 degrees Celsius) above the average for NASA’s baseline period, according to scientists at NASA’s Goddard Institute for Space Studies (GISS) in New York. NASA uses the period from 1951-1980 as a baseline to see how global temperature changes over time.

Collectively, the past eight years are the warmest years since modern recordkeeping began in 1880. This annual temperature data makes up the global temperature record – which tells scientists the planet is warming.

According to NASA’s temperature record, Earth in 2021 was about 1.9 degrees Fahrenheit (or about 1.1 degrees Celsius) warmer than the late 19th century average, the start of the industrial revolution.

“Science leaves no room for doubt: Climate change is the existential threat of our time,” said NASA Administrator Bill Nelson. “Eight of the top 10 warmest years on our planet occurred in the last decade, an indisputable fact that underscores the need for bold action to safeguard the future of our country – and all of humanity. NASA’s scientific research about how Earth is changing and getting warmer will guide communities throughout the world, helping humanity confront climate change and mitigate its devastating effects.”

This warming trend around the globe is due to human activities that have increased emissions of carbon dioxide and other greenhouse gases into the atmosphere. The planet is already seeing the effects of global warming: Arctic sea ice is declining, sea levels are rising, wildfires are becoming more severe and animal migration patterns are shifting. Understanding how the planet is changing – and how rapidly that change occurs – is crucial for humanity to prepare for and adapt to a warmer world.

Weather stations, ships, and ocean buoys around the globe record the temperature at Earth’s surface throughout the year. These ground-based measurements of surface temperature are validated with satellite data from the Atmospheric Infrared Sounder (AIRS) on NASA’s Aqua satellite. Scientists analyze these measurements using computer algorithms to deal with uncertainties in the data and quality control to calculate the global average surface temperature difference for every year. NASA compares that global mean temperature to its baseline period of 1951-1980. That baseline includes climate patterns and unusually hot or cold years due to other factors, ensuring that it encompasses natural variations in Earth’s temperature.
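The baseline-anomaly arithmetic described above can be sketched in a few lines. This is a toy illustration with made-up numbers, not NASA’s actual gridded GISTEMP method:

```python
# Toy sketch of computing anomalies against a 1951-1980 baseline
# (illustrative numbers only; real analyses work on gridded station data).

def annual_anomalies(temps_by_year, base_start=1951, base_end=1980):
    """Return each year's deviation from the mean of the baseline period."""
    base = [t for y, t in temps_by_year.items() if base_start <= y <= base_end]
    baseline = sum(base) / len(base)
    return {y: round(t - baseline, 2) for y, t in temps_by_year.items()}

# Hypothetical record: flat 14.0 C across the baseline years, then warmer years.
record = {y: 14.0 for y in range(1951, 1981)}
record.update({2018: 14.85, 2021: 14.85})
anoms = annual_anomalies(record)
print(anoms[2021])  # 0.85, matching the 2021 anomaly quoted above
```

The key point is that the anomaly is relative: it only says how far a year sits from the 1951-1980 mean, not what the absolute temperature was.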

Many factors affect the average temperature in any given year, such as La Niña and El Niño climate patterns in the tropical Pacific. For example, 2021 was a La Niña year, and NASA scientists estimate that it may have cooled global temperatures by about 0.06 degrees Fahrenheit (0.03 degrees Celsius) from what the average would have been.

A separate, independent analysis by NOAA also concluded that the global surface temperature for 2021 was the sixth highest since record keeping began in 1880. NOAA scientists use much of the same raw temperature data in their analysis and have a different baseline period (1901-2000) and methodology.

“The complexity of the various analyses doesn’t matter because the signals are so strong,” said Gavin Schmidt, director of GISS, NASA’s leading center for climate modeling and climate change research. “The trends are all the same because the trends are so large.”

NASA’s full dataset of global surface temperatures for 2021, as well as details of how NASA scientists conducted the analysis, are publicly available from GISS.

GISS is a NASA laboratory managed by the Earth Sciences Division of the agency’s Goddard Space Flight Center in Greenbelt, Maryland. The laboratory is affiliated with Columbia University’s Earth Institute and School of Engineering and Applied Science in New York.

https://www.nasa.gov/earth

-end-

Tom Halla
January 14, 2022 6:07 am

Considering how much GISS cooks historic records, any statements from them are useless.

Reply to  Tom Halla
January 14, 2022 6:29 am

But Bill Nelson said it is an indisputable fact that 8 of the 10 hottest years have occurred in the last decade and that bold action is required… can’t we boldly fire Bill Nelson? Science leaves no room for doubt, according to Bill.

oeman 50
January 14, 2022 8:28 am

It seems obvious he has to say these things to keep his job in this administration.

Michael S. Kelly
Reply to  oeman 50
January 14, 2022 7:11 pm

Bill Nelson was first elected to the US House of Representatives from Florida in 1978 (at age 36). In 1986, he became the first sitting Member of the House of Representatives to fly in space, aboard STS-61-C (Columbia), the last successful Shuttle flight before the Challenger disaster. He was flown as a “payload specialist.” Many, if not most, astronauts have a “handle.” A former colleague of mine, George Zamka, for example, had the astronaut handle of “Zambo.” The handle the astronaut corps bestowed on Nelson was “Ballast.”

While in the Senate, Nelson championed all of the NASA human exploration initiatives, culminating in the development of the Space Launch System (SLS), otherwise known as the Senate Launch System. It is a fully expendable launch vehicle combining all of the worst features of the Space Shuttle and none of the advantages. In the unlikely event that it is ever launched, SLS will cost more than 10 times that of a Falcon Heavy. If Elon succeeds in getting Starship operational (and I think he will), it will outperform SLS at one-hundredth the cost. But Elon didn’t contribute as much to Nelson’s campaign as Boeing, Lockheed-Martin, and Northrop-Grumman. So SLS will go on ad infinitum without ever flying while Elon goes on to the Moon and Mars.

Having said all of that, I can’t really criticize Nelson. From the start of his Congressional career, he represented his constituents. That was his job, and he did it very well. His constituents were conditioned to accept the NASA model of “space exploration” by decades of abuse by NASA and the federal government. As a result, they were interested in a certain path forward, and Nelson dutifully pursued it, with great success.

He’s a good soldier. He will do what his commanders command. I can’t criticize him for that. The only thing I could criticize him for is pretending to believe the CAGW nonsense in order to please his commanders, if in fact, he was only pretending. I don’t know if he is. If he has any doubts, however, then I would be very critical.

lee
January 14, 2022 6:33 pm

8 out of 10 years? That is only weather. 😉

In The Real World
Reply to  Tom Halla
January 14, 2022 10:32 am

They have to keep making up their lies to keep their jobs, so there will always be fresh propaganda to keep the global warming scam going.

http://temperature.global/
This link updates from thousands of worldwide weather stations and shows that overall temperatures have been below average for the last 7 years. But it only covers the last 30 years.

So, if you carefully select the figures you want and adjust them to suit your agenda, then it is possible to make up lies like “Hottest Years On Record”, which the warmists are doing.

Mark D
Reply to  Tom Halla
January 14, 2022 10:33 am

After the past several years, any respect I had for government-produced data is long gone.

Beyond that so what? It’s been warmer. It’s been colder. Another trip ’round the sun.

John VC(@jvcstone)
Reply to  Mark D
January 14, 2022 12:42 pm

Once one understands that the “official narrative” is always a lie, things begin to make sense.

Tom Abbott
Reply to  John VC
January 14, 2022 2:45 pm

Yes, and the official temperature narrative is a Big Lie.

This Distorted Temperature Record is the ONLY thing Alarmists have to show as “evidence” that CO2 is a danger.

These “Hottest Year Evah!” claims are refuted by the written temperature record which shows it was just as warm in the Early Twentieth Century as it is today, which puts the lie to “Hottest Year Evah!”.

NASA Climate and NOAA are a bunch of Liars who are screwing this old world up with their Climate Change lies.

The Bastardized Temperature Record is not fit for purpose. It’s a trick used by Alarmists to scare people into obeying.

Hughes Fullydeeoth
Reply to  Mark D
January 14, 2022 2:02 pm

What baffles me is how anybody can say (without bursting out laughing) that a government agency has carried out an “independent” activity.
I blame idiocy, malice, or a mixture of the two.

Mark D
Reply to  Hughes Fullydeeoth
January 14, 2022 7:11 pm

Whenever a paper is stuck in my face my first question is who funded it.

Graham Lyons
Reply to  Hughes Fullydeeoth
January 15, 2022 6:00 am

HF: I upvoted your comment but it was registered as a downvote. Add 2 to the uppers.

MarkW
Reply to  Graham Lyons
January 15, 2022 3:25 pm

You cancel an upvote or a downvote by pressing the opposite key. Then you can record either an upvote or a downvote.

Simon
Reply to  Tom Halla
January 15, 2022 12:07 pm

If you think they cook the books, then explain where. That’s right, your team can’t, though many have tried. Which is why your statement is useless.

Carlo, Monte
January 15, 2022 1:59 pm

Figured out who Peter Daszak is yet?

Simon
Reply to  Carlo, Monte
January 15, 2022 3:50 pm

Yawn.

MarkW
January 15, 2022 3:27 pm

How the books have been cooked has been explained many times.
Not surprised that you have managed to forget that.
Regardless, if it weren’t for useless statements, there would be no Simon.

PS: I see that Simon still believes that crying “you’re wrong” is an irrefutable refutation.

Simon
January 15, 2022 3:51 pm

“How the books have been cooked has been explained many times.” No, it has been attempted, that’s all. As I recall, the Global Warming Policy Foundation attempted to collect evidence, then gave up and published nothing.

Pat from kerbob
January 15, 2022 8:15 pm

It’s been shown several times that the adjustments track CO2 increases >98%.
That’s pretty clear.

I was challenged to go on the GISS site and graph the same values Hansen did in 1998 and you get a different graph than is shown in his paper, cooler in the past, warmer in 1998 now.

Your statements are no different than claims climategate was nothing except no unbiased sentient individual with more than 2 brain cells can read those emails and insist there was nothing to see.

As always, if you had a solid story you wouldn’t have to lie.
You wouldn’t feel the need to produce hockey sticks based on rickety proxy data and claim that over rules endless physical evidence that it was much warmer through most of human history.

You just wouldn’t have to do it.
So keep on walking with your crap, no one here is buying

Simon
Reply to  Pat from kerbob
January 15, 2022 10:51 pm

“It’s been shown several times that the adjustments track CO2 increases >98%.
That’s pretty clear.”
I expect if it’s that clear you will be able to direct me to a site that demonstrates that clearly?

“As always, if you had a solid story you wouldn’t have to lie.” Wow that horse. Where did I lie?

I will ask it again. Where is your evidence the data is faulty? It’s like the Trump votes thing. Despite multiple enquiries… no evidence.

Simon
Reply to  Pat from kerbob
January 16, 2022 10:07 am

You seem to have gone very quiet Pat from kerbob.

David Siegel
January 14, 2022 6:08 am

Correlation is not causation. Why do government employees believe this nonsense? Where are the whistleblowers?

Joseph Zorzin
Reply to  David Siegel
January 14, 2022 7:26 am

Few people will turn away a government sinecure, so they sing the party line.

mark from the midwest
Reply to  David Siegel
January 14, 2022 8:17 am

Because there is no money to be made from “doing nothing”

DMacKenzie
Reply to  mark from the midwest
January 14, 2022 10:54 am

Plus if you do nothing for long enough, somebody notices and fires you….

Mark D
January 14, 2022 7:34 pm

In fed/gov? Surely you jest.

Reply to  David Siegel
January 15, 2022 6:09 am

Yes.
Where were the Lysenkoism whistleblowers in 1930s USSR?
Not much different in the ‘free’ world’s science, is there?

2hotel9
January 14, 2022 6:09 am

Anything can be whatever you claim when you change data to fit your political agenda.

Steve Case
January 14, 2022 6:09 am

Yes, and every month NASA makes several hundred changes to their Land Ocean Temperature Index. Well, the year just completed, and yesterday they updated LOTI. Compared to ten years ago, here’s a graphic of what all the changes for the past ten years look like:

Bob Tisdale(@bobtisdale)
Editor
Reply to  Steve Case
January 14, 2022 6:26 am

Thanks for the graph, Steve. I hadn’t looked at the changes to GISS LOTI in almost a decade. (Then again, I haven’t plotted GISS LOTI for a blog post in almost 8 years.)

Some of those look like they might be tweaks to earlier tweaks.

Regards,
Bob

Steve Case
Reply to  Bob Tisdale
January 14, 2022 8:13 am

It’s a comparison of the December 2011 LOTI file saved from the Internet Archive’s Wayback Machine with the LOTI file that came out yesterday. To be more specific, it’s the AnnualMean J-D values for December 2021 minus December 2011, plotted out.

If that includes earlier tweaks, then that’s what it is.

For 2021 GISTEMP averaged 329 “tweaks” per month all the way back to data from the 19th century.

When asked why old data all the way back to January 1880 is regularly adjusted, GISTEMP says:

“[A]ssume that a station moves or gets a new instrument that is placed in a different location than the old one, so that the measured temperatures are now e.g. about half a degree higher than before. To make the temperature series for that station consistent, you will either have to lower all new readings by that amount or to increase the old readings once and for all by half a degree. The second option is preferred, because you can use future readings as they are, rather than having to remember to change them. However, it has the consequence that such a change impacts all the old data back to the beginning of the station record.”
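The adjustment described in that quote can be sketched as follows. This is a hypothetical illustration of the “second option” (shifting the old readings once), not the actual GISTEMP code:

```python
# Sketch of the station-move adjustment quoted above: when a discontinuity of
# +0.5 C is found at some date, the preferred fix is to raise all *earlier*
# readings once, so future readings can be used as-is. Hypothetical data.

def adjust_for_move(series, move_index, offset):
    """Shift every reading before move_index by offset; leave the rest alone."""
    return [t + offset if i < move_index else t
            for i, t in enumerate(series)]

raw = [10.0, 10.1, 9.9, 10.6, 10.5]   # step change of ~+0.5 C at index 3
adjusted = adjust_for_move(raw, move_index=3, offset=0.5)
print([round(t, 1) for t in adjusted])  # [10.5, 10.6, 10.4, 10.6, 10.5]
```

This is why one discontinuity found today touches every value back to the start of the station record: the whole pre-move segment is shifted by the same constant.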

Retired_Engineer_Jim
Reply to  Steve Case
January 14, 2022 8:39 am

So they are still finding that stations have been moved, or new instruments installed all the way back to 1880?

DMacKenzie
January 14, 2022 9:13 am

So what they said makes some sense, except I’m sure they don’t move 329 stations per month on average, although they might recalibrate that many. But on recalibration of an instrument, you generally let the old readings stand because you don’t know WHEN or at what rate it drifted out of calibration except in obvious cases.

Rick C
January 14, 2022 10:11 am

Most all recalibrations of meteorological glass thermometers simply verify that they are in calibration. It is very rare to find a glass thermometer has drifted – I never saw one in 40 years of laboratory work except for those used at high temperatures > 200C. They break, but they don’t drift significantly. I would doubt any explanation of temperature adjustments made due to calibration issues for stations using liquid in glass thermometers.

DMacKenzie
Reply to  Rick C
January 14, 2022 11:02 am

Did you know? For about the first 100 years of glass thermometers there were drift problems; some drifted 10 degrees F in 20 years, until types of glass that weren’t much affected by mercury were developed.

Pat Frank
January 15, 2022 8:43 am

Historical LiG thermometers suffered from Joule creep. This is the effect resulting from a slow contraction of the glass bulb, as the silica relaxes to a more stable configuration.

The drift never stops but attenuates to become very slow after about 50 years. Total drift can be up to 1 C.

Meteorological LiG thermometers also had a ±0.25 C limit of resolution, at best. Somehow, people compiling the GMST record never learned about instrumental resolution. There’s not a mention of it in their papers.

One suspects the people compiling the global record have never worked with an actual LiG thermometer.

tygrus
Reply to  Rick C
January 14, 2022 1:35 pm

Thermometer accuracy over time is fine, but there are many other factors that change recorded values for different sites, and sites change over time:
1) time of day obs were taken & if it was missing the correct min/max.
2) site location eg. verandah, open field vs tree shade, near irrigation crops, near water body, A/C heat pump exhaust air, near carpark / concrete, plane exhaust at airports now with larger planes…
3) enclosure / mounting eg. on wall under eaves but open, pre-Stevenson screen, thermometer put off-centre instead of central in the box, large vs small box, height from ground…
4) scale divisions & F vs C. Did they round towards 0 or round to nearest? Did they measure 0.5 or smaller?

But poor records of these details over time & lack of testing means the QA process makes many assumptions vulnerable to biases. Some adjustments are correct, some may not be.

There’s a temperature difference between a city site and the airport; then they stop the city record, and this change exaggerates the bias. Look at Penrith vs Richmond vs Orchard Hill (NSW, Australia): different records at different locations, probably affected by urbanisation. We previously used hoses/sprinklers outside to cool the children, concrete & house during heatwaves. For the last 25 years, watering (mains water) has been stopped during 10am to 4pm during hot summers & droughts.

Pat Frank
January 15, 2022 8:54 am

No one takes into account the systematic measurement error from solar irradiance and wind-speed effects. Even highly accurate unaspirated LiG thermometers produce field measurement uncertainties of about ±0.5 C.

The entire historical record suffers from this problem. The rate and magnitude of the change in GMST since 1850 is completely unknowable.

If the people compiling the global air temperature record worked to the scientific standard of experimental rigor, they’d have nothing to say about global warming.

Hence, perhaps, their neglectful incompetence.

Steve Case
January 14, 2022 9:33 am

It looks like they find an error in a station today, and assume that error extends all the way back to the 19th century. One station wouldn’t change the entire time series, so it further looks like they find a multitude of errors that do affect today’s anomaly for any particular month and extend the correction back in time. But it’s very curious as to why a pattern forms where all the anomalies since 1970 increase as a result of those corrections. Besides that one would think that records of those corrections would be made public and available for audit.

Jim Gorman
Reply to  Steve Case
January 14, 2022 12:05 pm

I think they propagate changes via homogenization. They “find” an inhomogeneity and make a change to a station. The next run they include the changed temp and, lo and behold, a nearby station looks incorrect and gets homogenized. And on and on. That’s why all the changes are downward rather than a 50/50 mix of up and down adjustments. What you end up with is an algorithm controlling the decision on what to change rather than investigating the actual evidence.

And let’s not forget that they are changing averages that should have nothing but integer precision by 1/100th of a degree. I hope someday, that some POTUS asks them to justify each and every change with documented evidence rather than an algorithm that follows the bias of a programmer.

Mike Jonas(@egrey1)
Editor
Reply to  Steve Case
January 14, 2022 12:00 pm

For a station move, the adjustment would be by the same amount every year – possibly different in different seasons, but the same every year. Also, station moves can be in any direction, so the sum total of adjustments could reasonably be expected to be about zero. Yet the chart of adjustments shows steadily increasing negative adjustments as you go back over time. And there’s no way that NASA/GISS can keep finding lots of old station moves every single year. Something else is going on, and it smells.

Tim Gorman
Reply to  Steve Case
January 14, 2022 5:07 pm

Hubbard and Lin did a study almost twenty years ago and determined that adjustments *have* to be done on a station-by-station basis. Broad adjustments are too subjective to be accurate. Apparently GISTEMP doesn’t even recognize that temperature readings are impacted by little things like the ground over which the station sits. If it is grass and is green in summer and brown in the winter *that* alone will affect the calibration and reading of the temperature station. So what temps did they pick to formulate their “adjustment”? Summer or winter?

Robertvd
Reply to  Steve Case
January 14, 2022 7:10 am

You just wonder how all that ice melted in the ’20s and how life could have existed before the Ice Age started 3 million years ago.

TheFinalNail
Reply to  Steve Case
January 14, 2022 7:30 am

Those changes look pretty minor. Do they have any influence on the trend? If not, what would be the benefit of making them up?

DHR
January 14, 2022 8:24 am

Collectively, the changes make a substantial upward change to global temperature databases. Go to climate4you.com to see the sum of the changes over the years. The GISS global temperature chart is shifted up by 0.67C and the NCDC (NOAA) is shifted up by 0.49C. Curiously, the NOAA Climate Reference Network, a group of about 112 climate stations which include triply redundant thermometers and other devices and are located uniformly over the Lower 48, shows no change in Lower 48 temperature since the system was set up in January of 2005, 17 years ago. I have never seen a single announcement by NOAA or NASA concerning this information.

TheFinalNail
January 15, 2022 2:21 am

Collectively, the changes make a substantial upward change to global temperature databases.

I looked up the Climate4you chart you refer to and have to say I found it very misleading. You say “The GISS global temperature chart is shifted up by 0.67C…”; well, no.

Firstly, that’s not a trend alteration, that’s a difference between two different monthly values, January 1910 and January 2020. More importantly, it shows that as of May 2008 there was already a difference of 0.45C between the Jan 1910 and Jan 2020 starting figures. Since then, fractional changes have increased this difference to 0.67C; that’s a change of 0.22C from the 2008 values for these 2 months, not 0.67C.

Remember, this example refers to two single months, 110 years apart. What about all the other months? It looks to me as though Climate4you has scanned through the entire GISS data set and zeroed in on the biggest divergence it could find between 2 separate months, then misrepresented it (to the casual reader) as a much bigger change than it actually is.

Steve Case
January 14, 2022 8:34 am

Yes, they are pretty minor changes, but over time they add up. Here’s the effect of those changes over the period 1997 to 2019:

You have to understand that GISTEMP makes hundreds of changes every month; it’s a steady drone. And as you can see from the other graph, all the changes affecting data since 1970 result in an increase in temperature.

Well OK, that’s an influence of only 0.25 degree per century, but you have to understand that when GISS crows about being the sixth warmest they’re dealing with hundredths of a degree differences from year to year. It looks like they think it’s a benefit. My opinion? It makes them look petty.

cerescokid
Reply to  Steve Case
January 14, 2022 11:16 am

In isolation they might be minor, but then how much is the UHI effect, land-use changes, uncertainties, and for the last 40 years the AMO, etc.? Individually insignificant, but given, as you say, we are dealing with tenths, the cumulative effect all adds up.

TheFinalNail
Reply to  Steve Case
January 15, 2022 2:31 am

Do you have access to the 1997 edition of GISS, Steve? Can you post a link if so, thanks.

Steve Case
January 15, 2022 5:38 am
Steve Case
January 15, 2022 6:42 am

The link in the post below is to 2000, not 1997, because the 1997 link in my files no longer works. But the 2000 version is close; already in those three years the 1950-1997 time series increased from 0.75 to 0.81.

Oh, on edit, I see that link shows that it is an “update”.

Jim Gorman
January 14, 2022 12:10 pm

Explain how you get 1/100ths of precision from thermometers prior to 1980 when the recorded precision was in units digits. It is all fake mathematics that has no relation to scientific measurements and the requirements for maintaining measurement precision.

MarkW
Reply to  Jim Gorman
January 14, 2022 2:24 pm

According to the alarmists, if you have 100 stations scattered across the country, that makes each of the stations more accurate.

DMacKenzie
Reply to  Jim Gorman
January 14, 2022 6:53 pm

If you use a tape measure to measure the distance from your back door to your back fence, and the tape measure is calibrated in 10ths of an inch, and you make entirely random errors in the readings…..then statistically after 10,000 readings, your answer should be accurate to say your 1/10 of an inch divided by the square root of your number of readings….so 1/1000 of an inch…(I overly simplify)

But what if your reading errors aren’t actually random ? Or maybe your tape changes length with temperature, or there is a cross wind, etc. at certain times of the day and most of your readings are taken during those times ?

What if you are interested in, and only write down readings to the nearest foot? A million readings and your average could still be out half a foot, but someone else does the calcs and says your accuracy is a thousandth of a foot…..hmmmm…

What if you use a different tape measure for every different reading ? What if your neighbor starts writing his measurements to his fence in your logbook (homogenizing them) ?

Stats have a very human basis. Somebody just decided that a standard deviation should be the square root of the absolute y-axis errors squared, and it happens to mesh with other formulas derived from games of chance…so some useful things can be extrapolated. However, non-random errors from 10,000 different measuring tapes of the distance to your fence from one year to the next, isn’t going to cut your error to 1/100 th of a calibration distance….

I used to have a stats professor who would spend 15 minutes describing the lecture assignment and the pertinent equations, then 45 minutes explaining the ways these equations did not apply to the real world. His recommended second textbook was a notepad sized “How to Lie With Statistics”, still a classic I understand. Maybe he tainted my brain.
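The distinction the comment above draws can be checked numerically. This is a quick simulation of the hypothetical tape-measure example: averaging many readings shrinks random error roughly as 1/sqrt(N), but a systematic bias (a tape that reads 0.3 units long, say) survives averaging completely untouched:

```python
# Illustration of the point above: averaging beats random noise, not bias.
import random

random.seed(42)
true_length = 100.0
bias = 0.3                       # systematic error: identical in every reading
n = 10_000

# Each reading = truth + the same bias + independent random noise (sd 0.5).
readings = [true_length + bias + random.gauss(0, 0.5) for _ in range(n)]
mean = sum(readings) / n

random_part = abs(mean - (true_length + bias))   # shrinks as n grows
systematic_part = abs(mean - true_length)        # stays near 0.3 regardless of n

print(round(random_part, 3), round(systematic_part, 3))
```

With 10,000 readings the random part collapses to a few thousandths, while the systematic part remains essentially the full 0.3: no amount of averaging removes an error shared by every reading.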

Pat Frank
January 15, 2022 9:01 am

then statistically after 10,000 readings, your answer should be accurate to say your 1/10 of an inch divided by the square root of your number of readings….so 1/1000 of an inch…(I overly simplify)

If your tape measure is calibrated in 1/10ths of an inch, then the average of repeated measurements will approach that 1/10th inch accuracy limit.

Doing better than the lower limit of instrumental resolution is physically impossible.

Tim Gorman
January 15, 2022 11:01 am

How do you make 10000 measurements of the SAME temperature? I guess I missed the development of the time machine? Did MIT do it?

TheFinalNail
Reply to  Jim Gorman
January 15, 2022 2:29 am

Explain how you get 1/100ths of precision from thermometers prior to 1980 when the recorded precision was in units digits.

No one is suggesting that the precision described comes directly from thermometers. No more than anyone is suggesting that the average family size is actually 2.5 children.

Jim Gorman
January 15, 2022 6:09 am

The precision ONLY COMES from the thermometers. There is simply no calculation that increases the resolution/precision of measurements. You would do better to show references that support your assertion. I’m not sure where you will find one.

MarkW
January 15, 2022 3:39 pm

If it’s not coming from the instruments themselves, then it is imaginary.

Reply to  Steve Case
January 14, 2022 10:11 am

If possible to calculate, I think it would be great to be able to say something along the lines of XX% of global warming is because it is colder in the past than it used to be (according to NASA).

Jim Gorman
January 14, 2022 12:13 pm

Exactly! What caused the warming at the end of the Little Ice Age? When did that natural occurrence dissipate, and when was it replaced by CO2? Was water vapor that small back then? Basically: don’t worry about the past, we know what we are doing in making future predictions.

To bed B
Reply to  Steve Case
January 14, 2022 11:20 am

They’re minor tweaks, but to something already dodgy. They do show that it’s massaging of data, especially that ’40s peak and the late-19th-century adjustments. We couldn’t have had a warming trend 100 years ago as big as the one now, and we can also claim the changes also warmed the past, as I have come across many times.

To bed B
Reply to  Steve Case
January 14, 2022 12:36 pm

Compare GISS LOTI with UAH 6 (offset 0.53) and you can see that they are very similar up until 1997. They differ a lot after that.

https://woodfortrees.org/graph/gistemp/from:1979/plot/uah6/from:1979/offset:0.53

Here is a plot of linear fits to GISS Loti and UAH 6 from 1979 to 1997 and 1999 till the present.

https://woodfortrees.org/graph/gistemp/from:1979/to:1997/trend/plot/uah6/from:1979/offset:0.53/to:1997/trend/plot/gistemp/from:1999/trend/plot/uah6/from:1999/offset:0.53/trend

Looking at the comparison of the plots, the large difference in trends is due to the difference in the months cooler than the trend line, mostly after 2006. The months warmer than the trend line tend to be very similar. This is not the case for 1998. The peak of the El Niño is half a degree cooler in GISS.

This is not just because of a difference in methodology (or because we live on the surface and, not in the lower troposphere). One of the methods must be very dodgy.

Tom Abbott
Reply to  To bed B
January 14, 2022 2:58 pm

“This is not the case for 1998. The peak of the El Nino is half a degree cooler in GISS.”

NASA had to modify GISS to show 1998 cooler; otherwise they couldn’t claim that the years between 1998 and 2016 were the “hottest year evah!”

Here’s the UAH satellite chart. See how many years can be declared the “hottest year evah!” between 1998 and 2016. The answer is NO years between 1998 and 2016 could be called the “hottest year evah!” if you go by the UAH chart, so NASA makes up a bogus chart in order to do so.

Bill Everett
Reply to  Tom Abbott
January 14, 2022 4:32 pm

I believe that the temperature reading for 2004 that marked the end of the warming period that started about 1975 exceeds most of the annual temperature readings after 2004. If the El Niño periods after 2004 are ignored, then this is even more evident. It almost looks like the beginning years of a thirty-year pause in warming.

To bed B
Reply to  Tom Abbott
January 14, 2022 8:01 pm

This is what made me look closer. The other El Niños seem similar in both.

Needless to say which one looks like it’s the dodgy one.

ResourceGuy
January 14, 2022 6:13 am

This Climate Church Lady Administration is part of the problem in banning all knowledge or spoken truth on the term cycles. Ye shall be excommunicated by the tech platform enforcers and all other official comrades. So let it be written in the Congressional Record, so let it be done by regulation and decree (and all allied talking heads).

Rah
January 14, 2022 6:13 am

They really are pushing this crap to the limit. All part of the great reset.

January 14, 2022 6:24 am

Even the Weather Channel pushes climate alarmism … https://www.youtube.com/watch?v=HQKbm4qU_lQ

Trying to Play Nice
Reply to  John Shewchuk
January 14, 2022 8:28 am

What do you mean “even the Weather Channel”? They’ve been screeching climate alarmism for quite a while.

Pflashgordon
Reply to  John Shewchuk
January 14, 2022 1:25 pm

When the so-called “Weather Channel” stopped blandly reporting weather forecasts and went with videos, serial programming, and live humans in the studio and in the field, they almost immediately ceased to be an objective, reliable source of weather information. They have long been a weather/climate pornography channel. I NEVER look at them. Unfortunately, they have bought up formerly reliable weather apps (e.g., Weather Underground) and ruined them with their weather propaganda. They are the weather equivalent of the Lame Stream Media.

MarkW
January 14, 2022 7:32 am

The scam is starting to fall apart; they have to get as much as they can before that happens.

Steve Case
January 14, 2022 9:39 am

The scam is starting to fall apart, …
__________________________

If only that were true. If you include Acid Rain, The Ozone Hole and Global Cooling, the scam has been going on for over 50 years and doesn’t really show any signs of rolling over and playing dead.

Thomas
Reply to  Steve Case
January 14, 2022 1:40 pm

In fact it has gained much strength since the acid rain scare.

Last edited 2 days ago by Thomas
Gregory Woods
January 14, 2022 6:13 am

Gee, sounds pretty bad…

Chip
January 14, 2022 6:14 am

They claim to know annual global temperatures since 1880, and then provide reports measured in tenths and hundredths of a degree. This is a religion, not science.

Laertes
January 14, 2022 6:19 am

Except when it invalidates their theory. I’ve seen articles that say “former temperature records from the 30s are suspect but we can be SURE of the recent ones.” Rubbish.

Jim Gorman
January 14, 2022 12:17 pm

In essence they are saying the weathermen at that time were chumps that didn’t have a clue as to what they were doing. “We” can look back 100 years to the information they put on paper and decipher where errors were made and where lackadaisical attitudes caused problems.

Joseph Zorzin
January 14, 2022 7:29 am

they use extremely accurate tree rings /s

Latitude
January 14, 2022 6:15 am

…a tie for the 6th warmest year……is not a warming trend

TheFinalNail
January 14, 2022 7:35 am

…a tie for the 6th warmest year……is not a warming trend

Nor is any individual year’s average temperature. The question is, what impact does this year’s anomaly have on the long term trend? In the 30-year period to 2020, the trend in GISS was +0.232C per decade. Adding 2021 data, even though it was a cooler year, actually increases that trend fractionally to +0.233C per decade. That’s not a statistically significant increase, obviously, but there’s certainly no sign of a slowdown in warming either.

Last edited 3 days ago by TheFinalNail
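
For readers wanting to check this kind of claim, the decadal trend being discussed is just an ordinary least-squares slope over a trailing 30-year window. Here is a minimal sketch using invented anomaly values (the real GISS annual series would be substituted); whether adding a cooler year nudges the trend up or down depends on both the new year and the year that drops out of the window.

```python
import numpy as np

def decadal_trend(years, anomalies):
    """Ordinary least-squares slope of annual anomaly vs. year, in C per decade."""
    slope = np.polyfit(years, anomalies, 1)[0]
    return slope * 10.0

# Invented anomaly series: a +0.2 C/decade straight line, with the final
# year (2021) made 0.1 C cooler than the line.
years = np.arange(1991, 2022)                 # 1991..2021
anoms = 0.02 * (years - years[0])
anoms[-1] -= 0.1

trend_to_2020 = decadal_trend(years[:-1], anoms[:-1])  # 30-year window to 2020
trend_to_2021 = decadal_trend(years[1:], anoms[1:])    # 30-year window to 2021
# In this made-up series the cooler 2021 pulls the newer window's trend down;
# with real data the dropped first year matters just as much.
print(round(trend_to_2020, 3), round(trend_to_2021, 3))
```

The numbers are illustrative only; the point is that the “trend” everyone is arguing about is a one-line regression anyone can recompute.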
DHR
January 14, 2022 8:27 am

See climate4you.com for actual data.

TheFinalNail
January 15, 2022 2:33 am

See above, in at least one instance this has been badly misrepresented.

Alan the Brit
January 14, 2022 8:35 am

As a retired engineer, I find it extremely difficult to believe that scientists are able to measure to an accuracy of 1/1000th of a degree, from 0.232C to 0.233C, with no tolerance reference of measurement!!!

rbmorse
Reply to  Alan the Brit
January 14, 2022 9:06 am

Especially with a data range that extends more than 120 years into the past.

Jim Gorman
January 14, 2022 12:21 pm

And values that were recorded to integer precision for at least half of that time.

MarkW
Reply to  Jim Gorman
January 14, 2022 2:27 pm

In addition to being recorded only to integer precision, they only took the daily high and low for each day.
Anyone who thinks that they can get an average for a day to within a few tenths of a degree, from just the daily high and low, has never been outside.
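
The point about daily highs and lows can be illustrated numerically. A minimal sketch with a made-up, asymmetric diurnal cycle (all values hypothetical): the (Tmax + Tmin)/2 midrange used in historical records is not, in general, the time-integrated daily mean.

```python
import numpy as np

# Hypothetical asymmetric diurnal cycle: a Gaussian warm bump peaking near
# 3 pm on top of a 10 C base, sampled every minute over one day.
hours = np.linspace(0.0, 24.0, 24 * 60, endpoint=False)
temp = 10.0 + 8.0 * np.exp(-((hours - 15.0) / 5.0) ** 2)

midrange = (temp.max() + temp.min()) / 2.0   # what (Tmax + Tmin)/2 reports
true_mean = temp.mean()                      # time-integrated daily mean
print(round(midrange, 2), round(true_mean, 2))
```

For this shape the midrange overstates the daily mean by about a degree; a differently skewed day would err the other way.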

Mark D
Reply to  Alan the Brit
January 14, 2022 10:44 am

As a retired hard hat I moved heat for a living and I learned just how difficult it is to get repeatable numbers measuring temperatures. One project for NASA had me change platinum RTDs several times until the numbers were what they wanted to see.

The fever thermometers I use at home are all glass. They might not be accurate but they are consistent.

Jim Gorman
Reply to  Mark D
January 14, 2022 12:43 pm

I copied a page of routine uncertainties for RTDs. As you can see, even a class A at 0C is +/- 0.15C. This is the precision of measurement. How do these folks get values of precision out to the 1/1000th of a degree?

This just isn’t done in science. Otherwise we would know the distance to planets and stars down to the centimeter or less. All we would have to do is add up the readings over the last century, divide by the number of data points, and Voila!
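
The class tolerances being cited here are, as I understand it, the IEC 60751 interchangeability limits, which grow with the magnitude of the measured temperature. A short sketch of the published formulas:

```python
def pt100_tolerance_c(t_celsius, cls="A"):
    """IEC 60751 tolerance (degrees C) for a platinum RTD at temperature t.

    Class A: +/-(0.15 + 0.002*|t|); Class B: +/-(0.30 + 0.005*|t|).
    """
    coeff = {"A": (0.15, 0.002), "B": (0.30, 0.005)}
    a, b = coeff[cls]
    return a + b * abs(t_celsius)

print(pt100_tolerance_c(0.0))      # 0.15 at the ice point, as quoted above
print(pt100_tolerance_c(100.0))    # tolerance widens away from 0 C
```

Note the tolerance is smallest at 0 C and widens in both directions, so a field reading at, say, 40 C carries more than the headline +/-0.15 C.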

RetiredEE
Reply to  Jim Gorman
January 14, 2022 1:54 pm

At least for the temperature ranges we are discussing the ITS90 uses a platinum RTD as the interpolation standard between standard points. When calibrating a given RTD for high precision it must be referenced to the temperature standards (i.e. H2O triple point) then a polynomial calibration is produced for that specific RTD. This can be used with accuracies/uncertainty below 0.001C however the devices in this class are lab standards requiring careful handling and would not be used for field work or instrumentation. They are also generally wire wound and very sensitive to shock and vibration.

The general purpose RTDs are trimmed to the performance required by the specific class required as noted in the referenced table. They still need to be calibrated in the instrumentation circuits. Oh yes, the circuitry used to measure the resistance must ensure that the sense current does not cause excessive heating of the RTD.

All that said, the general purpose and even the meteorological instruments do not have accuracy or resolution to those being stated by the adjustments. For example the ASOS system temperature measurement from -58 to +122F has an RMS error of 0.9F with a max error of 1.8F and a resolution of 0.1F.

It is becoming ever more difficult to trust anything from the government.
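
The calibration polynomial RetiredEE describes is, for general-purpose RTDs above 0 C, the Callendar-Van Dusen relation; the coefficients below are the standard IEC 60751 values (a lab-grade sensor would get its own fitted coefficients against a triple-point reference). A sketch:

```python
# Callendar-Van Dusen relation for platinum RTDs above 0 C:
#   R(t) = R0 * (1 + A*t + B*t^2)
# with the standard IEC 60751 coefficients.
R0 = 100.0        # Pt100 nominal resistance at 0 C, ohms
A = 3.9083e-3     # 1/C
B = -5.775e-7     # 1/C^2

def resistance(t_celsius):
    """Resistance of an ideal standard Pt100 at t degrees C (t >= 0)."""
    return R0 * (1.0 + A * t_celsius + B * t_celsius * t_celsius)

print(round(resistance(0.0), 3))     # 100.0 ohms at the ice point
print(round(resistance(100.0), 3))   # ~138.5 ohms, the familiar Pt100 value
```

The sense-current caveat above is real: the measurement circuit must keep excitation low enough that the RTD's own dissipation does not shift the reading.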

Tim Gorman
January 14, 2022 5:20 pm

You are only describing the uncertainty in the sensor itself. In the field that sensor uncertainty increases because of uncertainties in the instrument housing itself. Did a leaf block the air intake for a period of time? Did ice cover the instrument case for a period of time in the winter? Did insect detritus build up around the sensor over time? Did the grass under the instrument change from green to brown over time (e.g. seasonal change)?

Although it has since been deleted from the internet, the field uncertainty of even the ARGO floats was once estimated to be +/- 0.5C.

MarkW
Reply to  Jim Gorman
January 14, 2022 2:28 pm

“How do these folks get values of precision out to the 1/1000th of a degree?”

By abusing statistics to the point that criminal charges would be warranted.

Joao Martins
Reply to  Alan the Brit
January 14, 2022 1:02 pm

I find it extremely difficult to believe that scientists are able to measure to an accuracy of 1/1000th

… except if it was not actually measured!… (“measured” as in using a ruler or a thermometer)

TheFinalNail
Reply to  Alan the Brit
January 15, 2022 2:34 am

They weren’t able to measure to that degree of accuracy and have never claimed to have been able to do so. As an engineer you will grasp the concept of averaging and how this tends to increase the precision of the collective individual values.

Last edited 2 days ago by TheFinalNail
Carlo, Monte
January 15, 2022 5:08 am

how this tends to increase the precision of the collective individual values

A fundamental principle of climastrology that exists nowhere else in science and engineering.

Last edited 2 days ago by Carlo, Monte
Jim Gorman
January 15, 2022 5:23 am

As an engineer, here is what I learned and it certainly does not agree with increasing precision by averaging.

Washington University at St. Louis’s chemistry department has a concise definition about precision. http://www.chemistry.wustl.edu/~coursedev/Online%20tutorials/SigFigs.htm

Defining the Terms Used to Discuss Significant Figures

Significant Figures: The number of digits used to express a measured or calculated quantity.

By using significant figures, we can show how precise a number is. If we express a number beyond the place to which we have actually measured (and are therefore certain of), we compromise the integrity of what this number is representing. It is important after learning and understanding significant figures to use them properly throughout your scientific career. (bold and underlined by me)

As you can see, calculations can not add precision beyond what was actually measured. The word INTEGRITY should have special meaning to any scientist/engineer.

Bellman
Reply to  Jim Gorman
January 15, 2022 12:34 pm

The problem with your significant figure rules of thumb is that following the rules exactly allows you to express an average to more decimal places than the individual measurements.

Suppose I take 1000 temperatures each written to the nearest degree C, i.e. 0 decimal places. Add them up and I follow rule “For addition and subtraction, the answer should have the same number of decimal places as the term with the fewest decimal places.”

So I get a sum to 0 decimal places. Say 12345°C.

Now I divide that by 1000 to get the average. This follows the rule “For multiplication and division, the answer should have the same number of significant figures as the term with the fewest number of significant figures.”

12345 has 5 significant figures. 1000 is an exact number, so follows the rule “Exact numbers, such as integers, are treated as if they have an infinite number of significant figures.”

5 is fewer than infinity, so the answer should be written to 5 significant figures, 12.345°C.

Now whether it makes sense to write it to 3 decimal places is another matter, which is why I’m not keen on these simplistic rules. As I’ve said before, I think the rule presented in the GUM and other works you insist I read are better – work out the uncertainty to 1 or 2 significant figures and write the answer to the same degree.
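
For what it is worth, the statistical claim behind that last paragraph can be stated explicitly. Under textbook assumptions (independent, zero-mean reading errors of standard uncertainty u each) the standard uncertainty of the mean of n readings is u divided by sqrt(n); whether those assumptions hold for real temperature stations is exactly what this thread disputes. A sketch using the conventional rounding model, in which an error uniform on +/-0.5 has standard deviation 0.5/sqrt(3):

```python
import math

# Standard uncertainty of one reading rounded to the nearest whole degree,
# treating the rounding error as uniform on +/-0.5 (a textbook assumption):
u_single = 0.5 / math.sqrt(3)

# If the n reading errors are further assumed independent and zero-mean,
# the standard uncertainty of their mean shrinks as 1/sqrt(n):
n = 1000
u_mean = u_single / math.sqrt(n)

print(round(u_single, 3), round(u_mean, 4))   # 0.289 0.0091
```

The 1/sqrt(n) shrinkage applies only to the random part of the error; any shared systematic bias in the readings passes through to the mean untouched, which is the objection raised repeatedly below.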

Carlo, Monte
January 15, 2022 2:01 pm

How exactly do you propose to measure 1000 temperatures simultaneously?

Measured temperatures are real numbers, not integers, so this fantasy world example is stooopid.

Last edited 1 day ago by Carlo, Monte
Bellman
Reply to  Carlo, Monte
January 15, 2022 3:47 pm

How exactly do you propose to measure 1000 temperatures simultaneously?

When did I propose that? You seem to be obsessed with the idea that you can only take an average if you measure everything at exactly the same time, which I think says something about your understanding of statistics.

Measured temperatures are real numbers, not integers, so this fantasy world example is stooopid.

I wasn’t claiming the temperatures were integers, just that they were only measured to the nearest whole number. It would work just as well if you quoted the temperatures in 0.1s of a degree.

Tim Gorman
January 16, 2022 9:22 am

An average of independent, random measurements of different things is useless when applied to the individual elements. The average creates no expectation of what the next measurement will be – meaning it is useless in the real world.

If you want to describe something using statistics then you must be measuring the same thing with your measurements which can be averaged to create an expectation value for the next measurement.

“I wasn’t claiming the temperatures were integers, just that they were only measured to the nearest whole number. It would work just as well if you quoted the temperatures in 0.1s of a degree.”

And yet you do your calculations as if those measurements have no uncertainty, assuming they are 100% accurate. The words “nearest whole number” *should* be a clue that uncertainty applies and must be propagated into your calculations. And it is that uncertainty that determines where your first significant digit is.

Bellman
Reply to  Tim Gorman
January 16, 2022 12:04 pm

An average of independent, random measurements of different things is useless when applied to the individual elements.

And you still don’t get that I’m not applying the average to the individual elements. The goal is to use the individual elements to determine the average. The average is the thing I’m interested in. I’m not using it to predict what the next measurement will be. This does not make it useless in the real world. Believe it or not, statistics are used to understand the real world. There’s more to the real world than are dreamt of in your workshop.

“And yet you do your calculations as if those measurements have no uncertainty, assuming they are 100% accurate.”

No. The point of these significance rules is to give an implied uncertainty. The assumption is that if you are stating measurements to the nearest whole number, then there is an implied uncertainty of ±0.5, and that you can ignore all uncertainty calculations and just use the “rules” of significant figures to stand in for the actual uncertainty propagation.

Tim Gorman
January 17, 2022 12:29 pm

“And you still don’t get that I’m not applying the average to the individual elements.”

Then of what use is the average? Statistics are used to describe the population – i.e. the elements of the data set.

“I’m not using it to predict what the next measurement will be.”

If the average is not a predictor of the next measurement, then of what use is the average?

“Believe it or not, statistics are used to understand the real world.”

My point exactly. If your statistic, i.e. the average, doesn’t tell you something about the real world then of what use is it? If your statistic doesn’t allow you to predict what is happening in the real world then of what use is it?

That’s the biggest problem with the Global Average Temperature. What actual use in the real world is it? It doesn’t allow predicting the temperature profile anywhere in the physical world. Based on past predictions, it apparently doesn’t allow you to predict the actual climate anywhere on the earth. From extinction of the polar bears to NYC being flooded by now to food shortages to the Arctic ice disappearing the GAT has failed utterly in telling us anything about the real world.

“No. The point of these significance rules is to give an implied uncertainty. The assumption is that if you are stating measurements to the nearest whole number, then there is an implied uncertainty of ±0.5, and that you can ignore all uncertainty calculations and just use the ‘rules’ of significant figures to stand in for the actual uncertainty propagation.”

Word salad. Did you actually mean to make a real assertion here?

There is no “implied” uncertainty. The rules give an indication of how accurate a measurement is. Overstating the accuracy is a fraud perpetrated on following users of the measurement.

You can ignore all uncertainty calculations? Exactly what uncertainty calculations are you speaking of? An average? If you calculate an average out to more digits than the measurement uncertainty allows then you are claiming an accuracy that you can’t possibly justify!

The significant digits rules are part and parcel of measurements. They directly give you indication of the accuracy of the measurement. That applies to propagation of uncertainty from multiple measurements. The rules apply to any statistics calculated from the measurements. An average doesn’t have an uncertainty all of its own totally separate from the uncertainty propagated into the average from the individual elements.

That’s why the standard deviation of the sample means only indicates how precisely you have calculated the mean, it doesn’t tell you how accurate that calculated mean is.

Again, if you have three sample measurements, 29 +/- 1, 30 +/- 1, and 31 +/- 1, you can’t just calculate the mean as 30 and use that figure to calculate the population mean. You can’t just drop the +/- 1 uncertainty from calculations and pretend that 29, 30, and 31 are 100% accurate. Yet that is what they do in calculating the GAT. At a minimum that sample mean should be stated as 30 +/- 1.7.

Call those three values sample means. The standard deviation of the stated values of the sample means is sqrt[ (1^2 + 0^2 + 1^2) / 3 ] = sqrt[ 2/3 ] = 0.8. You and the climate scientists would state that the uncertainty of the mean is 0.8. But it isn’t. It’s at least twice that value, 1.7 (see the preceding paragraph).
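
For readers following along, the two numbers in this comment can be reproduced directly. Which propagation rule is the right one is the point under dispute here; the sketch below only reproduces the arithmetic as stated.

```python
import math

samples = [29.0, 30.0, 31.0]   # three measurements, each stated as +/- 1
mean = sum(samples) / len(samples)

# Standard deviation of the stated values (population form, as in the comment):
sd = math.sqrt(sum((x - mean) ** 2 for x in samples) / len(samples))

# Root-sum-square of the three +/-1 uncertainties (the comment's "at least 1.7"):
rss = math.sqrt(sum(1.0 ** 2 for _ in samples))

print(mean, round(sd, 1), round(rss, 1))   # 30.0 0.8 1.7
```

Note that standard GUM-style propagation would divide that root-sum-square by the number of samples when attaching it to the mean; the disagreement over whether that division is legitimate is precisely what the rest of this exchange is about.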

MarkW
January 15, 2022 3:49 pm

Looks like Bellman has never done either engineering or science.
When doing calculations, your final answer can never have more digits of accuracy than the original number did.

Bellman
January 15, 2022 3:58 pm

Yes to your first point, no to your second.

Carlo, Monte
January 15, 2022 6:27 pm

The hinge on which all of climate scientology rotates.

Tim Gorman
January 16, 2022 9:12 am

You missed the significant digit rule that no calculated result should be stated past the last digit in doubt in the elements of the calculation.

12345 -> 12.345: the last digit in doubt would be the units digit. So your result should be quoted as 12.

As usual you are confusing the use of significant digits by mathematicians instead of physical scientists and engineers.

You cannot increase resolution by calculating an average. It is *truly* that simple.

“Suppose I take 1000 temperatures each written to the nearest degree C, i.e. 0 decimal places.”

In other words your uncertainty is in the units digit. That uncertainty propagates through to the summation of the temperature measurements. And that uncertainty determines where your last significant digit should appear.

As usual you just ignore uncertainty and assume everything is 100% accurate – the hallmark of a mathematician as opposed to a physical scientist or engineer.

Bellman
Reply to  Tim Gorman
January 16, 2022 11:52 am

You missed the significant digit rule that no calculated result should be stated past the last digit in doubt in the elements of the calculation.

I was using this set of rules, as recommended by Jim. I see nothing about the rule you speak of. In any event, if there’s no doubt about the integer digit in any of the readings, there would be no doubt about the third decimal place when I divide them by 1000.

As usual you are confusing the use of significant digits by mathematicians instead of physical scientists and engineers.

Has it occurred to you that taking an average or any statistic is a mathematical rather than an engineering operation?

You cannot increase resolution by calculating an average. It is *truly* that simple.

It truly isn’t. However you are defining resolution.

In other words your uncertainty is in the units digit. That uncertainty propagates through to the summation of the temperature measurements. And that uncertainty determines where your last significant digit should appear.

That was the point I was making at the end. I think it’s better to base your figures on the propagated uncertainty rather than using these simplistic rules for significant figures.

As usual you just ignore uncertainty and assume everything is 100% accurate – the hallmark of a mathematician as opposed to a physical scientist or engineer.

I said nothing about the uncertainty of the readings, I was just illustrating what using the “rules” would mean.

Tim Gorman
January 17, 2022 9:26 am

“I was using this set of rules, as recommended by Jim. I see nothing about the rule you speak of. In any event, if there’s no doubt about the integer digit in any of the readings, there would be no doubt about the third decimal place when I divide them by 1000.”

In other words you *STILL* have never bothered to get a copy of Dr. Taylor’s tome on uncertainty! The rules you are looking at are but an *example* given to students at the start of a lab class. This is usually extended throughout the lab to include actual usage in the real world.

I know this subject has been taught to you multiple times but you just refuse to give up your delusions about uncertainty.

Taylor:

Rule 2.5: Experimental uncertainties should almost always be rounded to one significant figure.

Rule 2.9: The last significant figure in any stated answer should usually be of the same magnitude (in the same decimal point) as the uncertainty.

Taylor states there is one significant exception to this. If the leading digit in the uncertainty is a 1, then keeping two significant figures in ẟx may be better. For instance, if ẟx = 0.14 then rounding this to 0.1 is a substantial proportionate reduction. In this case it would be better to just use the 0.14. As the leading digit goes up (i.e. 2-9) there is less reason to add an additional significant figure.
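
Taylor’s two rules, including the leading-digit-1 exception, are mechanical enough to write down as code. A sketch (the function name and the small float-safety fudge are mine, not Taylor’s):

```python
import math

def round_result(value, uncertainty):
    """Round per the quoted rules: uncertainty to one significant figure
    (two if its leading digit is 1), value to the same decimal place."""
    exp = math.floor(math.log10(abs(uncertainty)))
    leading = int(abs(uncertainty) / 10 ** exp + 1e-9)  # first digit, float-safe
    sig = 2 if leading == 1 else 1
    digits = sig - 1 - exp          # decimal places to keep (may be negative)
    return round(value, digits), round(uncertainty, digits)

print(round_result(12.347, 0.14))   # leading digit 1: keep two figures in the uncertainty
print(round_result(12.347, 0.4))    # otherwise one figure, value rounded to match
```

So 12.347 ± 0.14 stays at two decimal places, while 12.347 ± 0.4 collapses to one, exactly as Rule 2.9 prescribes.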

Has it occurred to you that taking an average or any statistic is a mathematical rather than an engineering operation.

Uncertainty in a measurement appears to be significant only to physical scientists and/or engineers. This is *especially* true of the examples of mathematicians on this blog!

“It truly isn’t. However you are defining resolution.”

A statement from a mathematician, not a physical scientist or engineer who has to work in the real world. Resolution is defined by the measurement device. You can’t get better than that. Refer back to your statement that the measurements are rounded to the units digit. That means your measurement has a resolution in the units digit, anything past that has to be estimated and estimated values in a measurement introduce uncertainty. You can’t fix that by calculation. Your uncertainty will have AT LEAST a value of +/- 0.5. That value is a MINIMUM value. Other factors will only add additional uncertainty.

“That was the point I was making at the end. I think it’s better to base your figures on the propagated uncertainty rather than using these simplistic rules for significant figures.”

You can’t get away from uncertainty in physical measurements. And that uncertainty *has* to follow the rules for significant figures. Otherwise someone using your measurements will have no idea of what the measurement really means. Propagated uncertainties are no different. If you imply a smaller propagated uncertainty than what the measurement resolutions allow then you are committing a fraud upon those who might have to use your measurement.

“I said nothing about the uncertainty of the readings, I was just illustrating what using the “rules” would mean.”

Of course you did. You stated the measurements were rounded to the nearest units digit. That implies an uncertainty associated with your measurements of +/- 0.5.

Bellman
Reply to  Tim Gorman
January 17, 2022 1:34 pm

In other words you *STILL* have never bothered to get a copy of Dr. Taylor’s tome on uncertainty.

If you mean J.R. Taylor’s An Introduction to Error Analysis I’ve quoted it to you on numerous occasions and you keep rejecting what it says. But I’ve also been accused of using it when it’s out of date, and should not be talking about uncertainty in terms of error.

The rules you are looking at are but an *example* given to students at the start of a lab class.

You need to take this up with Jim. He’s the one saying they showed that an average couldn’t increase precision.

Rule 2.5: Experimental uncertainties should almost always be rounded to one significant figure.

Yes, that’s what he says. Others, including the GUM, say one or two significant figures. Some even recommend 2 over 1. This is why it’s best not to treat any authority as absolute, especially when talking about uncertainty.

“A statement from a mathematician, not a physical scientist or engineer who has to work in the real world.”

You’re too kind. I may have studied some maths and take an interest in it, but I wouldn’t call myself a mathematician. But I disagree that statisticians don’t work in the real world.

“Resolution is defined by the measurement device.”

I was thinking that the VIM defined resolution in a couple of ways, but the online version seems to have been removed, so I can’t check. Instrument indication is one type of resolution, but the other is along the lines of the smallest change it’s possible to discern.

“You can’t get better than that.”

A statement that shows a lack of ambition. Have you forgotten Taylor’s example of measuring a stack of paper? The resolution of the measurement of the stack may only be 0.1″, but the thickness of a single sheet of paper can be calculated to 4 decimal places.

Tim Gorman
January 15, 2022 11:14 am

The average can only have the same number of significant digits as the elements used to calculate the average. I.e. no increase in precision. Using your logic a repeating decimal average value would be infinitely precise. That’s only true for a mathematician or a climate scientist.

Carlo, Monte
Reply to  Tim Gorman
January 15, 2022 2:06 pm

He has been told this on multiple occasions yet refuses to acknowledge reality.

Derg
January 14, 2022 10:54 am

Lol slowing down 😉

And yet the sea is rising at a slow rate. In 500 years Obama’s house will be underwater.

Jim Gorman
January 14, 2022 12:20 pm

CO2 is now impotent, right? Or will we see an immense erection of temperature values when natural variation goes away in the next couple of years?

TheFinalNail
Reply to  Jim Gorman
January 15, 2022 2:41 am

My guess is we will see the current long term rate of rise continue (about 0.2C per decade over a running 30-year period). There will of course be spikes up and down due to natural variability and volcanic activity, etc.

Last edited 2 days ago by TheFinalNail
Jim Gorman
January 15, 2022 5:11 am

So you are now willing to act like a dictator and force everyone to finance the spending of trillions of dollars we don’t have based on a guess?

Whatever happened to KNOWING for sure what will occur? Science is not based on guesses, it is only based on provable facts.

Thanks for nothing!

MarkW
January 14, 2022 2:26 pm

Why stop at a 30 year trend? Why not a 100 or 1000 year trend?

TheFinalNail
January 15, 2022 2:43 am

30 years is regarded as a period of ‘climatology’ by the WMO. It is often used as the base period for anomalies (GISS, UAH, HadCRUT), though not necessarily the same 30-year period.

Jim Gorman
January 15, 2022 5:03 am

And the WMO is the be all and end all in deciding this? Tell us a designated climate area on the earth that has changed in a 30 year period.

“Climate is the average weather conditions in a place over a long period of time—30 years or more.” From What Are the Different Climate Types? | NOAA SciJinks – All About Weather

I think you’ll find that 30 years is the minimum time to detect a climate change. At present, no one has ever reclassified any areas to a new climate type.

MarkW
Reply to  Jim Gorman
January 15, 2022 3:53 pm

Post modern science.
Decide what the answer should be, then invent a method that gets you there.

MarkW
January 15, 2022 3:52 pm

Since when do we do science based on the most convenient data set?

January 14, 2022 6:23 am

NOAA and NASA continue to play god with temperature data … https://www.youtube.com/watch?v=hs-K_tadveI

TheFinalNail
Reply to  John Shewchuk
January 14, 2022 7:39 am

Berkeley are also calling it the 6th warmest year:

As are JMA.

MarkW
January 14, 2022 2:30 pm

Using the same data.

Bellman
January 14, 2022 5:08 pm

It’s equal 7th warmest in UAH and equal 6th in RSS, not using the same data.

TheFinalNail
January 15, 2022 2:44 am

Using the same data.

Not quite. Berkeley in particular uses a lot more stations than the others.

LdB
January 15, 2022 4:51 pm

Stations and a method that is rejected by even the CAGW crowd … you probably need to specify a point 🙂

Last edited 1 day ago by LdB
LdB
January 14, 2022 5:01 pm

Warmest year eva, according to the bloke down the pub; it is a very subjective thing.

Joseph Zorzin
Reply to  John Shewchuk
January 14, 2022 7:40 am

fantastic! science fiction!

Meab
Reply to  John Shewchuk
January 14, 2022 9:55 am

Good video, well worth watching. It shows several things using the raw and “corrected” data from the USHCN network, the world’s most reliable network of temperature measurement stations for analysis of long-term temperature changes. First, it shows that the raw data actually shows a slight decline in temperature over the US in the last century, but the NOAA-adjusted data shows a temperature increase. Since the temperature increase is entirely owing to adjustments, in order for NOAA’s adjusted trends to be correct NOAA must have complete confidence that their adjustments are unbiased. Second, it shows that NOAA has recently taken almost 1/3 of the stations off-line, replacing their actual measurements with infilled (interpolated and adjusted) data. Therefore, NOAA’s recent data are the most subject to infilling errors. Third, the most recent temperature adjustments are still positive. This is a most curious thing given how modern stations are computerized and automated. If anything, most of the recent adjustments should be in the downward direction due to increasing UHI, but they aren’t.

These three things cast serious doubt on the accuracy of NOAA’s claims regarding which years were the hottest, especially as the differences between years are, at most, a few hundredths of a degree.

January 14, 2022 10:09 am

Exactly right. Thanks for the confirmation. Tony Heller has been exposing this fraud for several years, and I thought more exposure was needed. I now even discuss this at my public speaking events and it’s amazing how many are completely stunned — and upset. And the dang Weather Channel keeps showing this old video, even though NOAA changed their data … https://www.youtube.com/watch?v=HQKbm4qU_lQ

Carlo, Monte
Reply to  John Shewchuk
January 14, 2022 10:37 am

Great video; incredibly, month-after-month the usual suspects show up in WUWT trying to defend this professional misconduct.

Reply to  Carlo, Monte
January 14, 2022 10:53 am

Yes indeed — and thanks for the comments. It’s common sense — the fraud is so obvious and blatant. Just like Hitler indicated … the bigger the lie and the greater its delivery — the more people will believe. It’s no different than when people killed over 50,000 witches because they were indoctrinated to believe the witches caused the Little Ice Age and other related disasters of that time period. It’s ignorance fostered by propaganda. And so, hey … why fight it, and just do as our “climate leaders” do. If they insist on fighting climate change from the seashore — then we should too … https://www.youtube.com/watch?v=dZvYHt_3nt0

bdgwx
Reply to  John Shewchuk
January 14, 2022 1:14 pm

There is no fraud John. If you think there is and it is so obvious and blatant then it should be easy for you to show us which line or section in the code here and here is the fraudulent piece. That is your challenge.

Last edited 3 days ago by bdgwx
January 14, 2022 2:04 pm

Don’t worry about a thing, the new climate leadership at the United Nations will fix things … https://www.youtube.com/watch?v=p8hKJ_MMza8

bdgwx
Reply to  John Shewchuk
January 14, 2022 5:43 pm

How is that related to you indicting scientists of fraud?

Last edited 2 days ago by bdgwx
January 14, 2022 5:47 pm

That’s easy — just watch what our “climate leaders” DO versus SAY … https://www.youtube.com/watch?v=dZvYHt_3nt0

bdgwx
Reply to  John Shewchuk
January 14, 2022 9:00 pm

I watched the video. There is no evidence presented of fraud. It doesn’t even discuss fraud. No wait…it doesn’t even discuss science or evidence of any kind at all. In fact, there is no discussion in the video…like at all. Is it meant to be a joke?

January 14, 2022 9:03 pm

Glad you liked it. Speaking of jokes … https://www.youtube.com/watch?v=cE6rAWcjTyw

bdgwx
Reply to  John Shewchuk
January 15, 2022 5:59 am

So do you believe scientists actions are criminal or not?

January 15, 2022 6:04 am
LdB
January 15, 2022 4:57 pm

Stupid question. They aren’t criminal, because scientists can propose any theory they like, even stupid things like pink unicorns having created Earth. They may even engage in scientific fraud, of which there have been huge numbers lately in many fields, and that is still not criminal. It only becomes criminal if they do something against some law.

Tim Gorman
January 14, 2022 5:33 pm

If those infilled station records were from stations that were more than 50 miles distant, in either longitude or latitude, then the made-up data is inaccurate. The correlation of temperature between two points on the globe that are more than 50 miles apart is less than 0.8, and most physical scientists will consider that insufficient correlation to make the data useful.

Nothing in the programming needs to be shown to be fraudulent. The assumptions about infilling are just plain wrong.

Reply to  Tim Gorman
January 15, 2022 6:06 am

Ditto. And the “wrong” supports the big “wrong” … https://www.youtube.com/watch?v=GYhfrgRAbH4

bdgwx
January 14, 2022 10:42 am

1) USHCN is not “the world’s most reliable network of temperature measurement”. It’s not even worldwide. It only covers 2% of the Earth’s surface.

2) USHCN is a subset of GHCN. It is produced by the same processing system. All USHCN observations are included in GHCN. As a result the adjustments are applied equally to both GHCN and USHCN.

3) The adjustments applied to GHCN include those for station moves, instrument changes, time-of-observation changes, etc. Those for ERSST include those for bucket measurements, ship intake measurements, etc. They are necessary to remove the biases they cause. The net effect of all adjustments actually reduces the overall warming trend.

4) The UHI effect and the UHI bias are not the same thing. The UHI effect is the increase in temperature in urban areas as a result of anthropogenic land use changes. The UHI bias is the error induced in a spatial average as a result of inadequate sampling of urban regions. It is possible for the UHI effect and the UHI bias to simultaneously be positive and negative respectively. The UHI bias is positive when urban observations are used as proxies for predominately rural grid cells or when the ratio of urban-to-rural stations increases. The UHI bias is negative when rural observations are used as proxies for predominately urban grid cells or when the ratio of urban-to-rural stations decreases.

5) USCRN is a network of stations that has been specifically designed to mitigate non-climatic effect biases like station moves, instrument changes, time-of-observation changes, etc. The USCRN-raw data corroborates the USHCN-adj data and confirms that USHCN-raw is contaminated with biases. See Hausfather et al. 2016 for details of this comparison.
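The sign convention in point 4 can be made concrete with a toy numeric sketch. All values here are invented purely for illustration; they are not from any dataset:

```python
# Hypothetical illustration (numbers invented): the bias introduced into a
# spatial average when one kind of station stands in for the other kind of cell.
true_rural = 10.0   # assumed true mean temperature of a rural grid cell, in C
true_urban = 12.0   # assumed true mean of an urban cell, warmed by the UHI effect

# Case 1: an urban observation is used as proxy for a predominately rural cell.
bias_urban_proxy = true_urban - true_rural   # positive UHI bias (+2.0 C here)

# Case 2: a rural observation is used as proxy for a predominately urban cell.
bias_rural_proxy = true_rural - true_urban   # negative UHI bias (-2.0 C here)

print(bias_urban_proxy, bias_rural_proxy)
```

So the UHI effect (urban warming) can be real and positive while the bias in the average takes either sign, depending on which direction the proxying runs.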

MarkW
January 14, 2022 10:51 am

1) USHCN is not “the world’s most reliable network of temperature measurement”. It’s not even worldwide. It only covers 2% of the Earth’s surface.

Is English not your first language?
The quote does not claim that it covers the entire Earth.

bdgwx
January 14, 2022 11:38 am

You missed some of the conversation in another thread where I pointed out that the net effect of all adjustments actually reduces the warming trend relative to the unadjusted data for the global mean surface temperature. meab disagreed and used the USHCN dataset as evidence, as if it represented the entire Earth. There was no concession at the time that USHCN cannot possibly be used to make statements about the effect of adjustments on the global mean surface temperature, or any statements regarding the entire Earth for that matter, so I have no choice but to continue to believe that he still thinks it is a global or worldwide dataset or can be used to make statements regarding the entire Earth.

Meab
January 14, 2022 12:39 pm

Now you’re just lying, badwaxjob. The types of adjustments applied to both data sets are substantially the same, and you know it.

January 14, 2022 2:09 pm

Climate religion has its crusaders — just like in the “crusades” 1,000 years ago, where data was not a requirement — just a belief. It is sad to see — even today. Fortunately, new leadership at the UN can see the fraud … https://www.youtube.com/watch?v=p8hKJ_MMza8

bdgwx
January 14, 2022 8:56 pm

Of course I know it. That’s what I’m trying to tell you. Again…USHCN is produced by the same processing system as GHCN. The adjustments that are applied to GHCN are the same as those applied to USHCN. The only difference is that USHCN only represents about 4% of the stations in GHCN. I don’t know how trying to explain something that you don’t seem to be challenging (at least with this post) makes me a liar.

MarkW
January 14, 2022 2:31 pm

And once again, the alarmist tries to completely change the subject.

I’m getting the idea that you don’t have the ability to stick with one subject when you find yourself falling behind.

Carlo, Monte
January 14, 2022 10:53 am

Nothing to see here, just more keeping the jive alive, move along now folks…

Jim Gorman
January 14, 2022 1:04 pm

1) USHCN is not “the world’s most reliable network of temperature measurement”.

You didn’t even address the assertion in your answer. Bob and weave, right?

2) USHCN is a subset of GHCN. It is produced by the same processing system.

So what is the point?

3) The adjustments applied to GHCN include those for station moves, instrument changes, time-of-observation changes, etc.

So if we give you some stations that have been adjusted, especially from the late 1800’s or early 1900’s, you can provide physical evidence supporting the timing of moves, instrument changes, etc, right?

None of these address the artificial increase in precision of past data that was recorded only in integers. Show some scientific or academic references that support showing temperatures up until 1980 with precision to 1/100ths or even 1/1000ths of a degree.

fretslider
January 14, 2022 6:23 am

“Earth’s global average surface temperature in 2021 tied with 2018 as the sixth warmest on record”

So, tied and not even fifth. Oh well.

Aerosols to the rescue?

The World Was Cooler in 2021 Than 2020. That’s Not Good News

As the world locked down in 2020, fewer emissions went into the sky, including aerosols that typically reflect some of the sun’s energy back into space. “If you take them away, you make the air cleaner, then that’s a slight warming impact on the climate,” said Gavin Schmidt, director of NASA’s Goddard Institute for Space Studies, during a Thursday press conference announcing the findings. But as economic activity ramped back up in 2021, so did aerosol pollution, contributing again to that cooling effect.

https://www.wired.com/story/the-world-was-cooler-in-2021-than-2020-thats-not-good-news/

They didn’t think that headline through.

RoodLichtVoorGroen
January 14, 2022 7:35 am

“The World Was Cooler in 2021 Than 2020. That’s Not Good News”

Indeed it isn’t, but not for the reasons stated.

“If you take them away, you make the air cleaner, then that’s a slight warming impact on the climate,”

So close yet so far…

TheFinalNail
January 14, 2022 7:41 am

The result of cooling La Niña conditions in the Pacific, when the ocean absorbs more heat. There is no evidence of a slowdown in the long-term rate of surface warming.

Dave Yaussy
January 14, 2022 8:19 am

Thank goodness. A slow, beneficial increase in temperature and CO2 fertilization for a grateful world.

MarkW
January 14, 2022 8:44 am

Apples and oranges.
You are looking at a 50-year alleged trend, and then saying that a very weak La Niña that is only a few months old hasn’t completely cancelled this trend.

Derg
January 14, 2022 10:56 am

And no mention of the warm 30’s 🤔

Richard M
January 14, 2022 12:27 pm

There also is no evidence of a CO2 driven surface warming. The CERES data of the past two decades points to a different cause. A cloud thinning has allowed more solar energy to reach the surface.

Please tell us how you are going to increase the clouds.

Jim Gorman
January 14, 2022 1:08 pm

Historical documents are going to be the downfall of some of the CAGW myth. As this site has shown recently, there are various newspapers and journals that tend to show that the temp record is not what it should be. Too much fiddling will be caught out.

Mike Smyth
January 14, 2022 6:29 am

“Science leaves no room for doubt: Climate change is the existential threat of our time,” said NASA Administrator Bill Nelson.

That’s his THEORY and he’s sticking to it. Too bad the climate models are crap. Too bad the Arctic sea ice is INCREASING, not decreasing. Too bad there’s no correlation between CO2 and temperature.

Climate hysterics are neobarbarians. They want everyone to freeze to death because we can’t afford the energy to heat our homes.

Trying to Play Nice
Reply to  Mike Smyth
January 14, 2022 8:30 am

He mentions science but doesn’t have any available to show.

RetiredEE
Reply to  Trying to Play Nice
January 14, 2022 2:00 pm

Well, political science is NOT science. I think there is some confusion by the powers that be on this point.

ResourceGuy
January 14, 2022 6:35 am

When does the dam break on advocacy science and the related political hard press? Real science wants to know.

Patrick B
January 14, 2022 6:49 am

NASA doesn’t believe in margins of error. Are there any real scientists left at NASA?

bdgwx
Reply to  Patrick B
January 14, 2022 7:12 am
Carlo, Monte
January 14, 2022 8:07 am

Not even close to a real UA, but it does have the standard milli-Kelvin “confidential informants”.

MarkW
Reply to  Carlo, Monte
January 14, 2022 8:46 am

Do they continue to misuse the “law of large numbers”?

Carlo, Monte
January 14, 2022 9:40 am

Any port in a storm for a climastrologer. They are completely oblivious about how ridiculous these claims are.

bdgwx
Reply to  Carlo, Monte
January 14, 2022 9:07 am

That’s not what the data says. The data says the 95% CI is about 0.05 K for the contemporary period and 0.10 K or higher for the pre-WWII period. That is 50-100x higher than your claimed 0.001 K value. Don’t take my word for it. Download the data and see for yourself.

Carlo, Monte
January 14, 2022 9:30 am

Silly person, 50-100mK are still milli-Kelvins, and are still way smaller than what is attainable with actual temperature measurements. You might know this if you had any real metrology experience.

Nowhere did I state anything about a “0.001 K value”.

bdgwx
Reply to  Carlo, Monte
January 14, 2022 10:06 am

milli-Kelvin is 0.001 K. 50-100 mK is 0.050-0.100 K. The latter two are 50x and 100x higher respectively than the former. And they aren’t even remotely similar in magnitude. The latter is literally 2 orders of magnitude higher. You can call me silly or any of the other names you like. It is not going to change the fact that NASA is not claiming milli-Kelvin levels of uncertainty.

Carlo, Monte
January 14, 2022 10:16 am

50-100mK is 0.050-0.100 K. The

Now you are just lying; repeat, nowhere did I state anything about a “0.001 K value”.

With the best available instruments it is possible to get down to 0.5-0.6°C. Anything smaller is ludicrous.

But keep trying, you have a vested interest in keeping the jive alive with these tiny linear “trends” that have no significance from a metrology point-of-view.

MarkW
Reply to  Carlo, Monte
January 14, 2022 10:54 am

Like most alarmists, bdgwx is very skilled in changing the subject.

bdgwx
January 14, 2022 2:17 pm

I’m not the one claiming that NASA does not believe in margins of error or that NASA is claiming milli-Kelvin uncertainty. But I am responding to those claims, and specifically to those claims in the thread in which they were created. I’m neither changing the topic being discussed nor deflecting or diverting away from it. I’m responding to them directly. And to summarize this thread: not only did NASA not claim milli-Kelvin uncertainty, neither did they ignore error margins.

Last edited 2 days ago by bdgwx
Tim Gorman
January 14, 2022 5:59 pm

From the uncertainty link:

“In Lenssen et al (2019), we have updated previous uncertainty estimates using currently available spatial distributions of source data and state-of-the-art reanalyses, and incorporate independently derived estimates for ocean data processing, station homogenization and other structural biases.”

Estimates on top of estimates, homogenization on top of homogenization, and biases on top of biases.

As Hubbard and Lin stated in 2002, twenty years ago, adjustments have to be made on a station-by-station basis, taking into account the micro-climate and environment at each individual station.

Apparently NASA hasn’t learned that lesson yet, not even after twenty years. Homogenization and reanalysis using stations more than 50 miles apart just introduces greater uncertainty, not less.

Carlo, Monte
January 14, 2022 6:45 pm

You excel in pedantry.

MarkW
January 15, 2022 3:57 pm

And there you go again.

bdgwx
Reply to  Carlo, Monte
January 14, 2022 11:03 am

That’s not a lie. It is a fact that 50 mK = 0.050 K and 100 mK = 0.100 K.

And above you said “Not even close to a real UA, but it does have the standard milli-Kelvin ‘confidential informants’.” You’ve also made the claim here, here, here, and here. In fact, this blog post here has numerous comments of you and Pat Frank defending your claims that scientists are reporting milli-Kelvin levels of uncertainty for the global mean surface temperature.

It is an undeniable fact. Scientists are not claiming a milli-Kelvin level uncertainty on global mean surface temperatures. Period.

Last edited 3 days ago by bdgwx
Carlo, Monte
January 14, 2022 4:11 pm

Just keep banging your head on the bricks, the pain will eventually ease off.

And with no concepts of what real-world metrology entails, you try to cover your ignorance with lots and lots of words and sophistry.

bdgwx
Reply to  Carlo, Monte
January 14, 2022 8:52 pm

The fact that 50 mK = 0.050 K or that 0.050 K is 50x greater than 0.001 K is not sophistry. Middle school students understand it.

Pat Frank
January 15, 2022 11:56 am

“The fact that 50 mK = 0.050 K or that 0.050 K is 50x greater than 0.001 K is not sophistry. Middle school students understand it.”

When they get to studying that, middle school students are taught that writing out a value to three digits past the decimal means that they claim to know the value to three significant figures past the decimal.

To write 0.050 K is a claim that you know the value to ±0.001 K. Period.

Pat Frank
January 15, 2022 11:51 am

“0.050-0.100 K. The latter two are 50x and 100x higher”

bdgwx, standard interpretation is that the last significant figure in a number represents the limit of uncertainty.

So, if one writes, 0.050 K last zero is taken to mean that the value is known to ±0.001 K. Thus, 0.050 K = 0.050±0.001 K. Standard practice.

To write 50 mK when the value is known to ±0.01 K, one writes it 0.05 K, or as the exponential 5×10⁻² K, indicating ±0.01 K. Or one can write (5±1)×10⁻² K.

But writing the magnitude out to the third decimal place is a statement that the value is known to the third decimal place — three significant figures past the decimal.
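The convention Pat Frank is describing, rounding the uncertainty to its leading significant figure and then rounding the value to the same decimal place, can be sketched in code. This is a hypothetical helper (the function name and the use of Python’s `decimal` module are illustrative, not anything from the thread), applied to the 0.901±0.027 example discussed below in the same exchange:

```python
from decimal import Decimal, ROUND_HALF_UP

def round_to_uncertainty(value, uncertainty, sig_figs=1):
    """Round `uncertainty` to `sig_figs` significant figures, then round
    `value` to the same decimal place (a sketch of the stated convention)."""
    # Exponent of the uncertainty's leading digit, e.g. -2 for 0.027.
    exp = Decimal(str(uncertainty)).adjusted()
    # Decimal place to round both numbers to, e.g. 0.01 for one sig fig.
    quantum = Decimal(1).scaleb(exp - sig_figs + 1)
    u = Decimal(str(uncertainty)).quantize(quantum, rounding=ROUND_HALF_UP)
    v = Decimal(str(value)).quantize(quantum, rounding=ROUND_HALF_UP)
    return v, u

v, u = round_to_uncertainty(0.901, 0.027)
print(f"{v} ± {u}")  # prints "0.90 ± 0.03"
```

Under this convention the Berkeley Earth figure debated later in the thread would be reported as 0.90±0.03, which is exactly the form Pat Frank argues for.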

Jim Gorman
Reply to  Pat Frank
January 15, 2022 6:05 pm

Just one more indicator of how these folks have never dealt with measurements either in real life or in a certified laboratory.

I don’t know of one lab teacher in college or high school who doesn’t teach that when you end a measurement with a “0” in the digits after the decimal point, the assumption is that you have measured that value to that place.

0.050 does mean you have measured to the 1/1000ths place and have obtained a reading of “0” in the 1/1000ths digit. You have not ascertained a 0.051 or 0.049, but a 0.050 exactly. That makes the 1/10000ths place where resolution ends. It also means that you have 3 significant digits after the decimal point.

bdgwx
Reply to  Jim Gorman
January 15, 2022 8:36 pm

In this context the 0.050 K value isn’t the measurement though. It is the uncertainty. And note that I didn’t make up the 50 mK or 0.050 K value. Carlo Monte did. What I said was that the uncertainty is actually closer to 0.05 K. He’s the one that took the liberty to call it 50 mK. I think you and Pat need to explain these significant figure rules and the difference between a measurement and its uncertainty. Hopefully you guys can speak his language because it is abundantly clear that I’m not getting through.

Last edited 1 day ago by bdgwx
bdgwx
Reply to  Pat Frank
January 15, 2022 7:05 pm

I’m not challenging anything you just said. But remember that the 0.050 K figure is itself an uncertainty. It’s not an actual temperature. So in the 0.050±0.001 K value you posted, the figure ±0.001 might be described as the uncertainty of the uncertainty, whatever that happens to mean. I don’t know. Either way, it definitely does NOT describe the uncertainty of the actual temperature anomaly.

For example, in this dataset the 2021 annual anomaly is 0.901±0.027. Notice that Berkeley Earth is NOT saying that the uncertainty of 0.901 is ±0.001. They are saying it is ±0.027. The fact that they include the guard digit in the uncertainty is in no way a statement that the uncertainty of the anomaly is ±0.001, which would be absurd since they said it was actually ±0.027.

Carlo Monte is claiming that because they published the anomaly as 0.901, that necessarily means they are claiming an uncertainty of ±0.001, even though they literally say it is actually ±0.027. It would be the equivalent of taking your assessed uncertainty of ±0.46 and claiming that you are saying the uncertainty is ±0.01. If that sounds absurd, it is because it is absurd. You aren’t saying that any more than Berkeley Earth, GISTEMP, or anyone else is. That’s what I’m trying to explain to him. If you think you can get this point across to him then be my guest.

Pat Frank
January 15, 2022 10:12 pm

Carlo Monte is claiming that because they published the anomaly as 0.901, that necessarily means they are claiming an uncertainty of ±0.001, even though they literally say it is actually ±0.027.

Carlo is correct. And the ±0.027 should be written as ±0.03.

For example, in this dataset the 2021 annual anomaly is 0.901±0.027. Notice that Berkeley Earth is NOT saying that the uncertainty of 0.901 is ±0.001. They are saying it is ±0.027.

That statement just shows you don’t know what you’re talking about, and neither does Berkeley Earth.

“0.901±0.027” is an incoherent presentation.

The 0.901 represents a claim that the magnitude is known to the third place past the decimal. The ±0.027 is a claim that the uncertainty is also known to the third past the decimal.

But the measurement resolution is in fact at the second place past the decimal. Because the uncertainty at place two has physical magnitude. The third place cannot be resolved.

The uncertainty should be written as ±0.03 because the second decimal place clearly represents the limit of resolution. That’s where the rounding should be to produce a conservative estimate of accuracy. The third decimal place is physically meaningless.

And as the third place is meaningless the 0.901 itself should be written as 0.90 because the last digit is beyond the limit of resolution. It is meaningless. So the measurement should be written 0.90±0.03.

Of course, claiming to know a global temperature anomaly to ±0.01 C is equally nonsensical.

And this …

“Estimated Jan 1951-Dec 1980 global mean temperature (C)
Using air temperature above sea ice: 14.105 +/- 0.021
Using water temperature below sea ice: 14.700 +/- 0.021”

… is in-f-ing-credible. What a monument to incompetence.

The last paragraph in my 2010 paper indicates an appropriate round up of uncertainty to ±0.5 C.

bdgwx
Reply to  Pat Frank
January 16, 2022 4:36 am

PF said: “Carlo is correct. And the ±0.027 should be written as ±0.03.”

So ±0.027 is the same as ±0.001, but somehow ±0.03 is ±0.03?

PF said: “The last paragraph in my 2010 paper indicates an appropriate round up of uncertainty to ±0.5 C.”

You put ±0.46 in the abstract…twice. If ±0.027 is the same as ±0.001 then ±0.46 is the same as ±0.01. No?

Captain climate
January 14, 2022 10:27 am

There’s no way you get to that precision without all errors canceling, and you have no evidence they do.

MarkW
Reply to  Captain climate
January 14, 2022 10:56 am

The claim is that if you measure 100 different points at 100 different times, with 100 different instruments, your accuracy for all the measurements goes up.

Total nonsense, but it keeps the masses in line.

bdgwx
January 14, 2022 1:03 pm

MarkW: “The claim is that if you measure 100 different points at 100 different times, with 100 different instruments, your accuracy for all the measurements goes up.”

Strawman. Nobody is saying that.

MarkW
January 14, 2022 2:34 pm

That is how the law of large numbers works, when done properly.
It’s what you and your fellow alarmists use to get these impossibly high accuracy claims.

bdgwx
January 14, 2022 8:50 pm

No it’s not. The law of large numbers does not say that the uncertainty of individual observations decrease as you increase the number of observations. The uncertainty of the next observation will be the same as the previous observations regardless of how many you acquire. And don’t hear what I didn’t say. I didn’t say the uncertainty of the mean does not decrease as the number of observations increase. There is a big difference between an individual observation and the mean of several observations. Do not conflate the two.
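The distinction drawn here, that the spread of an individual observation stays fixed while the spread of the mean shrinks, can be sketched with a quick Monte Carlo. The values are illustrative only, and the result holds only under the assumption of independent, random errors, which is precisely the premise contested elsewhere in this thread:

```python
# Monte Carlo sketch: the standard deviation of the *mean* of N independent
# observations falls roughly as sigma / sqrt(N), while each individual
# observation keeps its full spread regardless of how many are taken.
import random

random.seed(42)
sigma = 0.5        # assumed 1-sigma error of a single observation
n_obs = 100        # observations averaged per trial
n_trials = 20000   # Monte Carlo repetitions

means = []
for _ in range(n_trials):
    errors = [random.gauss(0.0, sigma) for _ in range(n_obs)]
    means.append(sum(errors) / n_obs)

# Empirical standard deviation of the trial means.
mu = sum(means) / n_trials
sd_mean = (sum((m - mu) ** 2 for m in means) / n_trials) ** 0.5

print(sd_mean)  # close to sigma / sqrt(n_obs) = 0.05
```

If the errors were correlated or systematic rather than independent, the simulated spread of the mean would not shrink this way; the sketch demonstrates the statistical claim, not its applicability to any particular temperature network.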

Pat Frank
January 15, 2022 12:14 pm

“The uncertainty of the next observation will be the same as the previous observations regardless of how many you acquire.”

No it won’t.

Last edited 2 days ago by Pat Frank
bdgwx
Reply to  Pat Frank
January 15, 2022 6:49 pm

That paper is paywalled. Regardless, I don’t see anything in the abstract that challenges my statement. I’m wondering whether, because you aren’t up to speed on the conversation, you may be misunderstanding it, so let me clarify now.

If you have an instrument with assessed uncertainty of ±X then any measurement you take with it will have an uncertainty of ±X. Taking the next measurement will not decrease the uncertainty of any measurement preceding it nor will it decrease the uncertainty of the most recent measurement. In fact, if anything the uncertainty may grow with each successive measurement due to drift.

MarkW
January 15, 2022 4:01 pm

So you admit that claiming to know the temperature of the Earth 200 years ago to 0.01C is ridiculous, or that knowing the temperature of the oceans to 0.001C is utterly impossible.

bdgwx
January 15, 2022 6:42 pm

I’m the one trying to tell people that we don’t know the global mean surface temperature to within 0.001 C or even 0.01 C. The best we can do for monthly anomaly is about 0.05 C.

Pat Frank
January 15, 2022 12:12 pm

Nobody is saying that.

They’re all saying that.

Definitively: Brohan, P., Kennedy, J. J., Harris, I., Tett, S. F. B., & Jones, P. D. (2006). Uncertainty estimates in regional and global observed temperature changes: A new data set from 1850. J. Geophys. Res., 111, D12106 12101-12121; doi:12110.11029/12005JD006548

2.3.1.1. Measurement Error (ε_ob)
The random error in a single thermometer reading is about 0.2 C (1 sigma) [Folland et al., 2001]; the monthly average will be based on at least two readings a day throughout the month, giving 60 or more values contributing to the mean. So the error in the monthly average will be at most 0.2/(sqrt60) = 0.03 C and this will be uncorrelated with the value for any other station or the value for any other month.

Folland’s “random error” is actually the read error from eye-balling the meniscus.

That paragraph is the whole ball of wax for treatment of temperature measurement error among the professionals of global air temperature measurement. Pathetic isn’t the word for it. Incompetent is.
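For what it’s worth, the arithmetic in the quoted Brohan et al. passage is a straight standard-error-of-the-mean computation; this sketch only verifies the quoted numbers, not whether the independence assumption behind them is justified, which is the point in dispute:

```python
# Checking the arithmetic quoted from Brohan et al. (2006): a 0.2 C (1-sigma)
# reading error averaged over 60 independent readings per month.
import math

sigma_single = 0.2   # 1-sigma error of one thermometer reading, in C
n_readings = 60      # "at least two readings a day throughout the month"

sigma_monthly = sigma_single / math.sqrt(n_readings)
print(round(sigma_monthly, 3))  # ~0.026 C, quoted as "at most 0.03 C"
```

The division by sqrt(60) is valid only if the 60 reading errors are uncorrelated; a shared read bias (e.g. a consistent meniscus eye-balling error) would not average away.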

bdgwx
Reply to  Pat Frank
January 15, 2022 6:38 pm

PF said: “They’re all saying that.”

I think you’re confused because you don’t understand what “that” is. MarkW defines “that” as “The claim is that if you measure 100 different points at 100 different times, with 100 different instruments, your accuracy for all the measurements goes up.”

Nobody is saying “that”. In other words, nobody is saying that accuracy goes up for individual measurements as you increase the number of measurements. What they are saying is that the uncertainty of the average of the 100 observations is less than the uncertainty of the individual observations. I’m bolding and underlining average intentionally to drive home the point. Note that the average is not the same thing as an individual value. Do not conflate the two.

PF said: “That paragraph is the whole ball of wax for treatment of temperature measurement error among the professionals of global air temperature measurement.”

And notice what they said. They said, and I quote, “So the error in the monthly average will be at most 0.2/(sqrt60) = 0.03 C”. I took the liberty of bolding and underlining average to make it undeniably obvious that they didn’t say what MarkW is claiming.

PF said: “Pathetic isn’t the word for it. Incompetent is.”

What is incompetent is believing that the uncertainty of the average of a set of values is not equal to or less than the uncertainty of the individual values themselves. Fortunately, based on what I’ve seen in your works, you accept this.

MarkW
January 15, 2022 4:00 pm

That is precisely what the various alarmists have been claiming. That’s how they get results of 0.001C out of records that record measurements to the nearest degree.

bdgwx
January 15, 2022 6:39 pm

I’ve not seen any dataset that publishes an uncertainty as low as 0.001 C for a global mean surface temperature.

Jim Gorman
January 14, 2022 1:15 pm

Not only accuracy but the precision is also increased. In other words, with enough measurements with a yardstick, you can get 1/1000ths precision. Instead of ending up with 1 yard, you get 1.0001 yards. Do you know any machinists that would believe that?

bdgwx
Reply to  Captain climate
January 14, 2022 1:09 pm

I have the GUM, Taylor, NIST, and statistics experts and texts that all say the uncertainty of the mean is less than the uncertainty of the individual measurements that went into that mean. In other words, I have a lot of evidence that backs that claim up. Would you like to discuss that evidence now?

MarkW
January 14, 2022 2:35 pm

That only works when you are using one instrument to measure the same thing repeatedly.
It does not apply when you use multiple instruments to measure different things.

bdgwx
January 14, 2022 8:46 pm

Not only can the methods be used to combine uncertainties from different measurands measured with different instruments or methodologies, they can also combine uncertainties from measurands with completely different units. Don’t take my word for it. Look at GUM equation 10 and pay particular attention to the example used. Even Tim Gorman accepts this, because when he tried to use Taylor equation 3.18 he was combining the uncertainty of not only different temperatures in units of K but also of the number of temperatures, which is unitless. He just did the math wrong. Had he followed 3.18 and not made an arithmetic mistake he would have concluded that the uncertainty of the mean is given by u(T_avg) = u(T)/sqrt(N). Don’t take my word for it though. Try it for yourself. Compare your results with Taylor 3.16, Taylor 3.18, GUM 10, GUM 15 and the NIST uncertainty calculator. That’s 6 different methods that all give the same answer.

Last edited 2 days ago by bdgwx
Tim Gorman
January 15, 2022 1:08 pm

Nope. I sent you a message on how to properly use eqn 3.18.

Try again.

here it is:

———————————————–
Equation 10 from the gum and Taylor’s Eqn 3.18
Tavg = (ΣTn for 1 to n) /n
δTavg/Tavg = δTavg/(ΣTn/n) = sqrt[ (ΣδTn/ΣTn)^2 + (δw/w)^2 ]
δw = 0
δTavg/(ΣTn/n) = sqrt[ (ΣδTn/ΣTn)^2 ] = (ΣδTn/ΣTn)
δTavg = (ΣTn/n) * (ΣδTn/ΣTn) = (ΣδTn)/n
δTavg is the average uncertainty of the Tn dataset.
If δT1 = .1, δT2 = .2, δT3 = .3, δT4 = .4, δT5 = .5 then ΣδTn = 1.5
1.5/5 = .3, the average value of the data set.
If all δTn are equal then you get δTavg = (n δT)/n = δT
———————————————–

Taylor’s 3.18:

If q = (x * y * z) / (u * v * w) and δy = δz = δu = δv = 0

Then δq/q = sqrt[ (δx/x)^2 + (δw/w)^2 ]

if w is a constant then this reduces to

δq/q = δx/x

If q = Tavg = (ΣTn for 1 to n) /n

then the rest follows.

δTavg/ (ΣTn for 1 to n) /n = (ΣδTn)/n

Again, this basically means that the uncertainty of T is equal to the average uncertainty of the elements of T.

The uncertainty in the mean doesn’t decrease to a lower value than the elements used.

You keep confusing accuracy and precision. You can calculate the standard deviation of the sample means to any precision you want, it doesn’t make the mean you calculate any more accurate.

bdgwx
Reply to  Tim Gorman
January 15, 2022 6:16 pm

You made the same arithmetic mistake here as you did down below. I showed you what the mistake is and how to fix it here. There is no need for me to rehash that in this subthread.

Tim Gorman
January 16, 2022 9:34 am

Nope. You didn’t show me anything. You just stated an assertion with no proof.

I answered you on it. If you are going to sum temperature measurements to do an average then you need to sum the uncertainties as well. You are trying to say ΣδTn is not right while saying ΣTn is ok.

You can’t have it both ways.

MarkW
January 15, 2022 4:03 pm

They can’t, but that won’t stop the climate alarmists from violating all the rules of statistics, science and engineering.

bdgwx
January 15, 2022 6:19 pm

I’m not the one violating the rules of statistics. I get the same answer whether I apply Taylor 3.9/3.16, Taylor 3.16/3.18, Taylor 3.47, GUM 10, GUM 15, or the NIST Monte Carlo methods. The reason why Tim gets a different answer with Taylor 3.18 is because of an arithmetic mistake. If he were to do the arithmetic correctly he would get the same answer as everyone else and would conclude that the uncertainty of the mean is less than the uncertainty of the individual elements from which the mean was computed. Don’t take my word for it though. I encourage you to apply each of these 6 methods and verify the result for yourself.

Last edited 1 day ago by bdgwx
Tim Gorman
January 16, 2022 9:37 am

I showed you the calculations WITH NO MISTAKES. You can point out any mistakes. All you can say is that it is wrong.

“δTavg = (ΣTn/n) * (ΣδTn/ΣTn) = (ΣδTn)/n
δTavg is the average uncertainty of the Tn dataset.”

You can’t show where this is wrong.

bdgwx
Reply to  Tim Gorman
January 16, 2022 11:29 am

You wrote this:

δTavg/Tavg = sqrt[ (ΣδTn/ΣTn)^2 + (δw/w)^2 ]

That is wrong. The reason it is wrong is because δ(ΣTn) does not equal Σ(δTn). Refer to Taylor 3.16 regarding the uncertainty of sums.

Fix the mistake and resubmit. I want you to see for yourself what happens when you do the arithmetic correctly.

Carlo, Monte
January 16, 2022 11:35 am

You are in no position to make demands, foolish one.

Bellman
Reply to  Carlo, Monte
January 16, 2022 12:06 pm

Do you remember how you were insulting people yesterday for always having to have the last word?

(This is a fun game of chicken. Will he respond?)

Carlo, Monte
January 16, 2022 9:05 pm

More whining?

bdgwx
Reply to  Carlo, Monte
January 17, 2022 7:52 am

I don’t think it is unreasonable to ask that arithmetic mistakes be fixed, especially when the disagreement disappears when the math is done correctly. I understand that GUM 10 requires calculus, so I’ll give people grace on that one. But Taylor equations 3.16 and 3.18 only require Algebra I level math, which I’m pretty sure is a requirement to graduate high school, at least in the United States, so this shouldn’t be that difficult.

Tim Gorman
January 16, 2022 3:02 pm

What do you think each Tn has for uncertainty?

In order to find the uncertainty for ΣTn you have to know each individual δTn, so you have δT1, δT2, δT3, …, δTn.

Once again you can’t face the fact that the uncertainty of each individual element gets propagated into the whole.

So you wind up with δT1/T1, δT2/T2, δT3/T3, …., δTn/Tn for the relative uncertainties of all the elements.

If Ttotal = T1 + T2 + T3 + …. + Tn = ΣTn from 1 to n in order to calculate an average then why doesn’t the total uncertainty = ΣδTn from 1 to n?

If you want to argue that the relative uncertainty for each individual element needs to be added as fractions I will agree with you. But then you lose your “n” factor from the equation!

So which way do you want to take your medicine?


bdgwx
Reply to  Tim Gorman
January 16, 2022 7:10 pm

Taylor 3.16 says δ(ΣTn) = sqrt[Σ(δTn)^2] and when δTn is the same for all values then it reduces to δT * sqrt(n). You get the same result via GUM 10 and the NIST uncertainty calculator as well. This is the familiar root sum square (RSS) or summation in quadrature rule.
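As an aside, the root-sum-square rule cited here can be sketched in a few lines (my illustration; the values of n and dT are assumed):

```python
import math

# Taylor 3.16 (root sum square / summation in quadrature): the uncertainty
# of a sum of independent quantities is the square root of the sum of the
# squared uncertainties, which reduces to dT * sqrt(n) when all are equal.
def rss_sum_uncertainty(uncertainties):
    return math.sqrt(sum(u ** 2 for u in uncertainties))

n, dT = 25, 0.2
delta_sum = rss_sum_uncertainty([dT] * n)
assert math.isclose(delta_sum, dT * math.sqrt(n))  # 0.2 * 5 = 1.0
```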

Tim Gorman
January 17, 2022 8:38 am

“it reduces to δT * sqrt(n).”

ROFL! So you admit that it doesn’t reduce to δT / sqrt(n)?

That means that uncertainty GROWS by n, it doesn’t reduce by n.

Which means, as everyone has been trying to tell you, that the uncertainties quoted for GAT and the climate studies ARE NOT CORRECT!

bdgwx
Reply to  Tim Gorman
January 17, 2022 1:22 pm

TG said: “ROFL! So you admit that it doesn’t reduce to δT / sqrt(n)?”

I’ve never claimed that δ(ΣTn) = δT / sqrt(n). I’ve always said δ(ΣTn) = δT * sqrt(n).

TG said: “That means that uncertainty GROWS by n, it doesn’t reduce by n.”

Yeah…duh. That’s for when δq = δ(ΣTn). Everybody knows this. It is the familiar root sum square (RSS) or summation in quadrature rule.

TG said: “Which means, as everyone has been trying to tell you, that the uncertainties quoted for GAT and the climate studies ARE NOT CORRECT!”

What? Not even remotely close. I think you’ve tied yourself in a knot so tight you don’t even realize that Tsum = ΣTn is not the same thing as Tavg = ΣTn / n. I highly suspect you are conflating Tavg with Tsum. Let me clarify it now.

When q = Tsum then δq = δTsum = δT * sqrt(n)

When q = Tavg then δq = δTavg = δT / sqrt(n)

If you would just do the arithmetic correctly when applying Taylor 3.18 you’d see that δTavg = δT / sqrt(n) is the correct solution. This is literally high school level math.

Do you want to walk through Taylor 3.18 step by step?

Jim Gorman
January 15, 2022 4:17 pm

I don’t think eq. 10 says what you think it says. First here is a section about adding variances from: https://intellipaat.com/blog/tutorial/statistics-and-probability-tutorial/sampling-and-combination-of-variables/

Linear Combination of Independent Variables

For a linear combination W = aX + bY, the mean is exactly what we would expect: mean(W) = a mean(X) + b mean(Y).

If we multiply a variable by a constant, the variance increases by a factor of the constant squared: variance(aX) = a^2 variance(X). This is consistent with the fact that variance has units of the square of the variable. Variances must increase when two variables are combined: there can be no cancellation because variabilities accumulate.

Variance is always a positive quantity, so variance multiplied by the square of a constant would be positive. Thus, the following relation for the combination of two independent variables is reasonable:

variance(aX + bY) = a^2 variance(X) + b^2 variance(Y)

More than two independent variables can be combined in the same way.

If the independent variables X and Y are simply added together, the constants a and b are both equal to one, so the individual variances are added: variance(X + Y) = variance(X) + variance(Y).

Now let’s look at what sections 5.1.2 and 5.1.3 say about eq. 10. I’ll include a screenshot.

“The combined standard uncertainty uc(y) is the positive square root of the combined variance uc^2 ( y), …, where f is the function given in Equation (1).”

Why don’t you define the function for how temps are combined so you can also properly define the partial derivatives of “f”?

It goes on to say:

“The combined standard uncertainty uc(y) is an estimated standard deviation and characterizes the dispersion of the values that could reasonably be attributed to the measurand Y (see 2.2.3). … Equation (10) and its counterpart for correlated input quantities, Equation (13), both of which are based on a first-order Taylor series approximation of Y = f (X1, X2, …, XN), express what is termed in this Guide the law of propagation of uncertainty (see E.3.1 and E.3.2).”

Read 5.1.3 very carefully also. You may be able to understand why variances are very important and why they are asked for. When you quote a GAT without the variance, i.e. the dispersion of temperatures surrounding the GAT, then you are leaving out a very important piece of information.

It is why the SEM is a meaningless statistic of the mean of the sample means when it comes to defining how well the GAT represents the data used to calculate it.

Section E.3.2 also has a very good discussion of how the uncertainties of the input quantities wi, taken equal to the standard deviations of their probability distributions, combine to give the uncertainty of the output quantity z.

“In fact, it is appropriate to call Equation (E.3) the law of propagation of uncertainty as is done in this Guide because it shows how the uncertainties of the input quantities wi, taken equal to the standard deviations of the probability distributions of the wi, combine to give the uncertainty of the output quantity z if that uncertainty is taken equal to the standard deviation of the probability distribution of z.”

Do you want to explain where the standard deviations of the probability distributions of the input quantities used to calculate a GAT are calculated and what their values are and how the variances are combined?

Jim Gorman
Reply to  Jim Gorman
January 15, 2022 4:19 pm

From the GUM.

Bellman
Reply to  Jim Gorman
January 15, 2022 4:48 pm

Not sure why I’m jumping down this rabbit hole again, but in the link you have the formula

σ_w^2 = a^2 σ_x^2 + b^2 σ_y^2

So if you are taking the average, and a = b = 1/2, what do you think σ_w is?
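As an editorial aside, that question has a one-line numerical answer under the quoted relation (this sketch and its values are mine, not from the linked tutorial):

```python
import math

# var(aX + bY) = a^2 var(X) + b^2 var(Y) for independent X, Y.
# For the two-value average, a = b = 1/2.
def combined_sd(a, b, sd_x, sd_y):
    return math.sqrt(a**2 * sd_x**2 + b**2 * sd_y**2)

sd = 1.0                              # assume X and Y share this sd
sd_w = combined_sd(0.5, 0.5, sd, sd)  # sd of the average (X + Y) / 2
# sd_w = sd / sqrt(2): averaging shrinks the spread rather than adding to it.
```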

Bellman
January 15, 2022 4:51 pm

Actually you don’t have to work it out, because they tell you in the next section, Variance of Sample Means.

bdgwx
Reply to  Jim Gorman
January 15, 2022 8:26 pm

For the application of GUM 10 the function f is defined as f(X_1, X_2, …, X_n) = Σ[X_i, 1, N] / N and so ∂f/∂X_i = 1/N for all X_i.
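As an aside, with that f and equal input uncertainties, GUM equation (10) reduces to u/sqrt(N). A short sketch (my illustration; the values of N and u are assumed):

```python
import math

# GUM eq. (10): u_c(y)^2 = sum of (df/dX_i)^2 * u(X_i)^2.
# For f = (sum of X_i) / N, each partial derivative is 1/N.
def gum_combined_uncertainty(partials, u_inputs):
    return math.sqrt(sum((c * u) ** 2 for c, u in zip(partials, u_inputs)))

N, u = 10, 0.3
u_c = gum_combined_uncertainty([1.0 / N] * N, [u] * N)
# u_c = sqrt(N * (u/N)^2) = u / sqrt(N)
assert math.isclose(u_c, u / math.sqrt(N))
```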

Pat Frank
January 15, 2022 12:17 pm

It also doesn’t apply when measurement error varies systematically due to uncontrolled environmental variables.

bdgwx has been exposed to that qualification about a gazillion times by now. It slides off his back like water from a greased cutting board.

MarkW
Reply to  Pat Frank
January 15, 2022 4:04 pm

The alarmists have been conditioned to believe what ever nonsense their bishops tell them to believe.

Carlo, Monte
January 14, 2022 4:13 pm

Did you remember to polish your gold star this morning?

Oh, and don’t forget that you thrive on abusing canned equations while pooh-poohing experienced professionals.

Jim Gorman
January 14, 2022 4:27 pm

None of those say that. The uncertainty of the mean to which you refer is the Standard Error of the Sample Mean, i.e., the SEM. That is the width of the interval within which the population mean may lie when estimating it by sampling and calculating a mean from a distribution of sample means.

You have no idea what the basis of these documents is built upon. Cherry-picking equations without making sure the underlying assumptions are met is what an academician would do, not a person who has physically worked with measurements. You can’t even deal with precision and the accompanying Significant Digit rules needed to ensure appropriate precision is displayed.

bdgwx
Reply to  Jim Gorman
January 14, 2022 8:35 pm

Yes they do. Your own preferred method, Taylor 3.18, says so.

Tim Gorman
January 14, 2022 6:44 pm

I’m sorry I got busy and didn’t get back to you.

Here is the result of Eqn 10 from the GUM and Eqn 3.18 from Taylor.


Tavg = (ΣTn for 1 to n) /n

δTavg/Tavg = δTavg/(ΣTn/n) = sqrt[ (ΣδTn/ΣTn)^2 + (δw/w)^2 ]

δw = 0

δTavg/(ΣTn/n) = sqrt[ (ΣδTn/ΣTn)^2 ] = (ΣδTn/ΣTn)

δTavg = (ΣTn/n) * (ΣδTn/ΣTn) = (ΣδTn)/n

δTavg is the average uncertainty of the Tn dataset.

If δT1 = .1, δT2 = .2, δT3 = .3, δT4 = .4, δT5 = .5, then ΣδTn = 1.5

1.5/5 = .3, the average uncertainty of the data set.

If all δTn are equal then you get δTavg = (n δT)/n = δT

“uncertainty of the mean is less than the uncertainty of the individual measurements that went into that mean.”

Nope. There is no “uncertainty of the mean” calculated from the sample means. There is the standard deviation of the sample means which is the PRECISION with which you have calculated the mean. It is *NOT* the uncertainty of the mean.

The uncertainty of the mean is the propagated uncertainty from the data set values. As shown above it is at least the average value of the various uncertainties associated with the data in the data set.

bdgwx
Reply to  Tim Gorman
January 14, 2022 8:22 pm

Nope. sqrt[ (ΣδTn/ΣTn)^2 + (δw/w)^2 ] does not follow from Taylor 3.18.

Tim Gorman
January 15, 2022 12:58 pm

Of course it follows. Relative uncertainty is δT/T.

Since Tavg uses ΣTn in its calculation then δTavg must also be based on ΣδTn/ΣTn.

If you don’t want to use Tavg and δTavg then figure out a better way to formulate the problem.


bdgwx
Reply to  Tim Gorman
January 15, 2022 3:53 pm

The Tavg method is fine. But there is a very subtle mistake in your first step.

You have this which is wrong.

δTavg/Tavg = sqrt[ (ΣδTn/ΣTn)^2 + (δw/w)^2 ]

The correct step is this.

δTavg/Tavg = sqrt[ (δΣTn/ΣTn)^2 + (δw/w)^2 ]

Pay really close attention here. In plain language you don’t take the sum of the uncertainties of Tn but instead you take the uncertainty of the sum of Tn. Or in mathematical notation if x = ΣTn then δx = δΣTn. It’s not δx = ΣδTn. If the first step is wrong then all steps afterward are wrong.

It’s easier to spot if you break it down into simpler steps.

x = Tsum = ΣTn

w = n

q = x / w = Tsum / n = Tavg

δq/q = sqrt[ (δx/x)^2 + (δw/w)^2 ]

δTavg/Tavg = sqrt[ (δTsum/Tsum)^2 + (δn/n)^2 ]

δTavg/Tavg = sqrt[ (δTsum/Tsum)^2 + 0 ]

δTavg/Tavg = sqrt[ (δTsum/Tsum)^2 ]

δTavg/Tavg = δTsum / Tsum

δTavg = δTsum / Tsum * Tavg

δTavg = δTsum / Tsum * Tsum / n

δTavg = δTsum / n

At this point we have to pause and use Taylor 3.16 to solve for δTsum. Once we do that we can plug that back in and resume Taylor 3.18. When you do that you’ll get δTavg = δT / sqrt(n).
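As an aside, the chain above can be checked numerically (a sketch with assumed values; variable names follow the comment’s symbols, with dT the common per-element uncertainty):

```python
import math

# Assumed values for illustration.
n, dT = 100, 0.5

dTsum = dT * math.sqrt(n)  # Taylor 3.16: uncertainty of the sum
dTavg = dTsum / n          # Taylor 3.18 with the exact divisor n (delta n = 0)

# Combining the two steps gives dT / sqrt(n): 0.5 / 10 = 0.05.
assert math.isclose(dTavg, dT / math.sqrt(n))
```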

MarkW
January 15, 2022 4:02 pm

They can say something that is not true as many times as they want. It still won’t make it true.

bdgwx
January 15, 2022 6:09 pm

Who’s “they”? What are “they” saying?

MarkW
January 14, 2022 10:53 am

0.05 is about half the error for the instruments involved and it completely ignores the error caused by woefully inadequate sampling.

Only those who are desperate to keep the scam alive believe that the 95% confidence interval is a mere 0.05K.

bdgwx
January 14, 2022 1:06 pm

Read Lenssen et al. 2019 for details regarding GISTEMP uncertainty. In a nutshell, though, the core concept is that the uncertainty of the mean is less than the uncertainty of the individual observations that went into that mean. Don’t hear what I didn’t say: I didn’t say the uncertainty of the individual observations is less when you have more of them. Nobody is saying that.

MarkW
January 14, 2022 2:35 pm

More excuses for ignoring the basic rules of statistics.

Carlo, Monte
January 14, 2022 4:14 pm

He’s really adept at using the appeal to authority fallacy.

bdgwx
Reply to  Carlo, Monte
January 14, 2022 5:37 pm

What reasonable and ethical alternative is there to citing sources? How do you do it?

MarkW
January 15, 2022 4:07 pm

When your so called authority has been shown to be incorrect, just repeating that you have an “authority” is neither reasonable nor ethical.

bdgwx
January 15, 2022 6:08 pm

I don’t base my position on authorities. I base my position on evidence. I’ve not seen a challenge to the evidence presented in Lenssen 2019 that 1) identifies an egregious mistake, 2) is accompanied by a result with the mistake fixed so that the magnitude of the mistake can be assessed, and 3) doesn’t itself contain an egregious mistake.

BTW…I’d still like to see those basic rules of statistics that you trust and use yourself.

MarkW
Reply to  Carlo, Monte
January 15, 2022 4:05 pm

As a good alarmist, he’s been trained to believe whatever he’s told to believe.

bdgwx
January 15, 2022 6:02 pm

I’m trained to believe what the abundance of evidence tells me to believe. I’m not sure if I’m an “alarmist” though. Different people have different definitions of “alarmist”. If you provide a definition against which my position can be objectively evaluated, I’ll be happy to give you my honest assessment of whether I’m an “alarmist” or not. Just know that assessment would only apply to your definition.

bdgwx
January 14, 2022 5:29 pm

Can you post a link to the basic rules describing how you quantify the uncertainty of the mean?

Jim Gorman
January 15, 2022 5:32 pm

Yes. But first you must declare some baseline assumptions.

1) Is the data set you are using considered a population or does it consist of a number of samples?

2) If the data set is a number of samples, what is the sample size?

3) If the data set is a number of samples, are the individual samples IID? That is, do they share the same mean and standard deviation?

You might want to review this link.

Independent and Identically Distributed Data (IID) – Statistics By Jim

Carlo, Monte
January 14, 2022 4:15 pm

Bollocks, this Lenssen authority that you appeal to must be another climastrologer who is either ignorant of uncertainty or is heavily into professional misconduct.

bdgwx
Reply to  Carlo, Monte
January 14, 2022 5:35 pm

I’m not appealing to Lenssen. I’m appealing to the evidence provided. I always cite my references by name and year so that 1) you can find and double-check it yourself, 2) that is the standard, and 3) I’m not accused of plagiarism. Don’t confuse appealing to evidence with appealing to authority.

Tim Gorman
January 14, 2022 7:15 pm

Uncertainty is accuracy not precision. If you don’t propagate the uncertainty of the sample data into the sample mean and just assume the sample mean is 100% accurate then all you are doing is calculating the precision with which you have calculated the standard deviation of the sample means.

If sample1 mean is 30 +/- 1, sample2 mean is 29 +/- 1.1, sample3 mean is 31 +/- .9 then the mean you calculate from those stated values *will* have an uncertainty interval associated with it.

You simply cannot assume that 30, 29, and 31 are 100% accurate and their standard deviation is the uncertainty of the population mean. As shown in the prior message the uncertainty associated with the mean will be the average uncertainty of the individual components or +/- 1.

The standard deviation of the sample means will be

sqrt[ |29-30|^2 + |30-30|^2 + |31-30|^2 ] / 3 =

sqrt[ 1^2 + 0^2 + 1^2 ] / 3 = sqrt[2] / 3 ≈ 1.4/3 ≈ .5

The standard deviation of the sample means is half the actual uncertainty propagated from the uncertainties of the individual elements.

All the .5 tells you is how precisely you have calculated the mean of the sample means assuming the stated values are 100% accurate. The higher the number of sample means you have the smaller the standard deviation of the stated values of the sample means should get. But that *still* tells you nothing about the accuracy of the mean you have calculated.

I’m not surprised climate scientists try to make their uncertainty look smaller than it truly is.

But it’s a fraud on those not familiar with physical science.

Pat Frank
January 15, 2022 12:24 pm

Lenssen, et al., describe systematic error as “due to nonclimatic sources. Thermometer exposure change bias … Urban biases … due to the local warming effect [and] incomplete spatial and temporal coverage.”

Not word one about systematic measurement error due to solar irradiance and wind-speed effects. These have by far the largest impact on station calibration uncertainty.

Lenssen et al., also pass over Folland’s ±0.2 C read error in silence. That ±0.2 C is assumed to be constant and random; both assumptions are tendentious and self-serving, and neither has ever been tested.

bdgwx
Reply to  Pat Frank
January 15, 2022 2:37 pm

Why would the solar irradiance and wind speed effects be a problem at one period of time but not another? In other words, why would that systematic bias survive the anomaly conversion?

It sounds like you are working with more information about that Folland ±0.2 C figure than was published. What does it mean? Why should it be factored into the uncertainty analysis on top of what is already factored in?