A number of mainstream media outlets have uncritically echoed the proclamation that 2024 was the “hottest year on record,” such as CNN, with a story titled “2024 Confirmed as World’s Hottest Year on Record,” and the BBC, with a headline declaring “2024 Confirmed as Hottest Year Ever Recorded.” When these media reports are examined in the long-term historical context of the available global temperature data, it becomes clear that the claims lack the certainty their headlines proclaim and are likely false or exaggerated.
A close examination reveals that such declarations are predominantly based on data from the Copernicus Climate Change Service (CCCS), a European organization and only one of several global temperature monitoring systems. Its press release, “2024 – a second record-breaking year, following the exceptional 2023,” was immediately picked up and re-used by media outlets around the world.
The CCCS says:
- 2024 was the warmest year in a multi-dataset record of global temperature going back to 1850.
- 2024 had a global average temperature of 15.10°C; 0.12°C higher than the previous highest annual value in 2023.
- 2024 was 0.72°C warmer than the 1991–2020 average, and 1.60°C warmer than the pre-industrial level, making it the first calendar year to exceed 1.5°C above that level.
But, those numbers differ from those of other sources, such as a data set from the U.S. National Oceanic and Atmospheric Administration (NOAA). NOAA reports:
The year 2024 was the warmest year since global records began in 1850 at 1.29°C (2.32°F) above the 20th century average of 13.9°C (57.0°F). This value is 0.10°C (0.18°F) more than the previous record set last year. The ten warmest years in the 175-year record have all occurred during the last decade (2015–2024).
While NOAA repeats the “hottest year on record” claim, its numbers differ from those of Copernicus, undermining any confidence one might have in the precision of the global average temperature measurements for 2024, and in any record-breaking claims flowing from these disparate measurements.
Further, these claims completely ignore evidence-based research, such as the surface stations project conducted by The Heartland Institute, showing that the Urban Heat Island effect and the poor placement of the stations from which long-term temperature data is gathered may account for as much as 50 percent of recent warming, with the remainder likely being partly or wholly natural, for example driven by El Niño events.
It’s also worth noting that the phrase “hottest year on record” typically refers to records spanning about 150 years—a mere blink in geological time. Paleoclimatological evidence shows that Earth has experienced periods with significantly hotter temperatures long before the Industrial Revolution. For instance, during the Eemian interglacial period around 120,000 years ago, global temperatures were comparable to or even exceeded current levels.
See the graph below from a scientific study titled “A 485-million-year history of Earth’s surface temperature.”
How quickly they forget: Media Confirms the Earth Is Not Abnormally Warm, Rather It Is in Its Coldest Period in 485 Million Years. Proxy data also suggests temperatures in more recent periods, like the Roman Warm Period and the Medieval Climate Optimum, were likely comparable to or even higher than they are today, despite carbon dioxide levels being significantly lower.
In the rush to blame climate change for 2024’s temperatures, the media also underappreciates the impact of natural climate phenomena such as El Niño on temperatures in 2023 and 2024, creating an oversimplified narrative that ignores the complexities of climate systems. The El Niño event of 2023–2024 was a major contributor to the recent global temperature anomalies, with many reports stating that it significantly boosted global temperatures compared to a neutral ocean state, making 2024 one of the hottest years on record; the warming sea surface temperatures associated with El Niño add to the overall heat in the climate system.
According to NOAA’s Climate Prediction Center, following the conclusion of the 2023–2024 El Niño event, there has been a notable decline in ocean temperatures. This cooling trend is particularly evident in the eastern and central tropical Pacific regions. NOAA reported that by December 2024, La Niña conditions had emerged, characterized by below-average sea surface temperatures across these areas.
Oceans are considered the biggest influence on atmospheric temperature, as they absorb the majority of the sun’s radiation, acting as a massive heat reservoir that regulates global climate by storing and distributing heat around the planet through ocean currents; this means that changes in ocean temperature significantly affect the overall atmospheric temperature. Therefore, with cooling oceans, it stands to reason that cooler global atmospheric temperatures are likely ahead for 2025.
In summary, while global temperatures have gradually risen in recent decades, it is unclear whether portraying 2024 as unequivocally the “hottest year on record” is justified, or rather whether it is a temporary anomaly reflecting, in part, a combination of natural conditions and human measurement error.
What is clear is that regardless of the global average temperature, human welfare has never been better. According to Our World in Data, average life expectancy has more than doubled during the recent warming. Further, deaths from extreme weather are markedly down, and deaths tied to temperatures have declined, because cold kills more people than warmth.
CNN and the BBC do a great disservice to their audiences when asserting that temperatures are the highest on record: they fail to place the claims in the broader historical context of long-term temperature data, and they downplay or ignore entirely natural weather phenomena and the problematic measurement conditions that affect the readings. An approach that considers the full range of scientific data, historical context, and natural variability would provide a more accurate and less alarming understanding of our planet’s climate dynamics.
Unfortunately, as Climate Realism has demonstrated repeatedly, the media seems more interested in pushing a climate doom narrative than factually reporting the complex truth about climate.

Anthony Watts is a senior fellow for environment and climate at The Heartland Institute. Watts has been in the weather business both in front of and behind the camera as an on-air television meteorologist since 1978, and currently does daily radio forecasts. He has created weather graphics presentation systems for television and specialized weather instrumentation, as well as co-authoring peer-reviewed papers on climate issues. He operates the most viewed climate website in the world, the award-winning wattsupwiththat.com.
Originally posted at ClimateREALISM


The point is, that the planet is, in reality, only a degree or so above the COLDEST period in the last 10,000 years.
The “recorded” temperature data, basically starts at that COLDEST period or soon after…
… so of course it has warmed… thank goodness !
Plenty of proxy data from all over the planet clearly shows that most of the last 10,000 years backward from the MWP were at least a degree or so warmer than now.
Visualized for context:
(from Kaufman, et al., 2020)
Trying to minimize the model warming is not a scientifically tenable position.
This is like saying you always find things in the last place you look.
For those of us who didn’t read Kaufman, et al., 2020, can you explain how they measured the temperature everywhere around the Earth for the last 12,000 years? I get the feeling that the values that are graphed are not real measured temperatures.
The authors use a large database of proxy temperature records to compile five different types of paleoclimate reconstruction, and use these to characterize patterns of global surface temperature change and associated uncertainty.
Why does your graph conflict with the historical and recent proxy records (MWP, RWP, Holocene)?
The graph is based on historical and recent proxy records, it cannot possibly conflict with them. If you have some specific data you’d like to share, please do.
It shows the Holocene as cooler than today.
It shows that today is within the bounds of uncertainty for the Holocene climate optimum, but not by much, and the warming continues.
IRONY ALERT!
Truth: the ruler monkeys only care about measurement uncertainty when they need to prop up their hockey stick frauds.
Proxies can’t give you a temperature record. They can give you trends and an estimated temperature with pretty high uncertainty. Also, proxies won’t give you daily values so the precision is very poor. They are good for trends over long periods of time, but not for short periods at 2 decimal points of a degree. Anybody who thinks they can is just an idiot.
Good points. Instead of accepting the concept that warming is bad, it should be viewed as benefit for humanity and the biosphere in general.
From that point, we are now poised for some significant global cooling. In fact, UAH shows the peak warmth was last May and we have already cooled by 0.3 C. As La Nina continues to influence these temperatures it is likely we could see this doubled by this coming May.
This would remove 0.6 C from the claimed 1.5 C of warming. That means we will have lost 40% of that beneficial warming.
We should celebrate whatever beneficial warming we get.
It was 9 F this morning. A little warming would be appreciated.
The warmest year schist comes at a time to cause cognitive dissonance.
For many in the Nation the next week will be dangerously frigid. Minus digits are on the way. They are there already in North Dakota.
The 1004-foot-long freighter – James R. Barker – just got stuck in ice at the Port of Cleveland. And for the umpteenth time the presidential inauguration is beset with nasty cold weather.
Stay warm and safe all you eastern folks.
The Rocky Mountain oysters are going to be frozen in Boulder with temps not far from cold records of a hundred years ago.
No they didn’t. The appropriate wording would be that 2024 was less bitterly cold than previous years.
If it was the “hottest” year on record then there would be numerous new daily high temperature records reported across the globe. Please provide your extensive list of new daily temperature records for review and validation.
According to UAH, August was the “hottest” month in 2024 in Australia. August is late winter so being “hotter” means less bitterly cold.
Many places set new snow records in 2024. This is what will eventually be recognised as the important change in climate as the northern oceans warm up. There are already new local snowfall records for 2025.
https://mainichi.jp/english/articles/20250117/p2a/00m/0na/018000c
https://www.newsinenglish.no/2025/01/07/snowfall-breaks-records-as-ice-fears-rise/
https://www.msn.com/en-us/weather/topstories/txdot-crews-help-clear-i-40-following-record-snowfall/ar-BB1rfMtQ
John,
The world really is a tad bigger than the US.
From NOAA: (National Oceanic and Atmospheric Administration)
“The minimum temperature trend outcomes after 1985 climb significantly faster than do the maximum measured temperature trend outcomes. Since the average temperature is not a measured value but instead the calculated mathematical average of the minimum and maximum measured temperatures {(TMax + TMin)/2} the average temperature calculated trend outcome is controlled and dominated by the much larger increase occurring in the minimum measured temperature trend versus the maximum measured temperature trend.”
Perhaps someone can do a chart of max temperatures instead of avg(max,min).
They probably have , but it destroys their claims of CAGW .
How about a chart with sampling every second. What makes min/max for a day valid numbers? The length of time at min and max could be changing which would give a much different trend.
Maybe NOAA is not totally asleep. Look at these graphs. It is apparent that the winter months have been rising far faster than the temperature average of the summer months.
When are climate scientists going to learn that averages hide much information? That is why it is vitally important to include the standard deviation of any mean. That allows folks to have a view of how spread out temperatures truly are.
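For readers who want to see the arithmetic behind this point, here is a minimal sketch in Python, using made-up station numbers rather than real data, of how a (Tmin + Tmax)/2 average trend gets pulled toward a faster-rising minimum, and how quoting the standard deviation alongside the mean shows the spread of the values:

```python
# Illustrative sketch only (synthetic numbers, not real station data):
# if Tmin rises faster than Tmax, the (Tmin + Tmax)/2 "average" trend
# sits between the two and is pulled toward the larger Tmin trend.
import numpy as np

years = np.arange(1985, 2025)
rng = np.random.default_rng(0)

# Assumed trends for illustration: Tmin +0.03 C/yr, Tmax +0.01 C/yr, plus noise
tmin = 5.0 + 0.03 * (years - years[0]) + rng.normal(0, 0.3, years.size)
tmax = 20.0 + 0.01 * (years - years[0]) + rng.normal(0, 0.3, years.size)
tavg = (tmin + tmax) / 2.0

def trend_per_decade(series):
    slope = np.polyfit(years, series, 1)[0]   # least-squares slope in C/yr
    return 10.0 * slope                        # express as C/decade

for name, series in [("Tmin", tmin), ("Tmax", tmax), ("Tavg", tavg)]:
    print(f"{name}: trend {trend_per_decade(series):+.2f} C/dec, "
          f"mean {series.mean():.1f} C, SD {series.std(ddof=1):.1f} C")
```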
Yes, and it looks like Montana and Kansas were just as warm in the Early Twentieth Century as they are today. The year 2024 is not the “hottest year evah!” in Montana and Kansas, and a lot of other places.
and Kansas were just as warm in the Early Twentieth Century as they are today…..
no, it was a lot hotter then .
Look at the temp. records. And . no AC then ….
geezer from Ks ..
😉
I agree it was hotter in the 1930’s in the U.S. than it is today according to the data, but I just say today is no hotter because invariably someone will show a chart that is a couple of tenths of a degree cooler than today, so I generalize.
I was a kid in the 1950’s, and it was very hot in the summers back then, but it’s hard to judge from a kid’s point of view, and we didn’t have air conditioning either, so it felt a lot hotter back then. The temperature data shows the 1950’s to be just about as warm as the 1930’s.
We did have a watercooler, which cools by blowing air over cooler water, but above about 100F it didn’t do any good, it just blew hot air.
Do you see these charts, Anthony Banton?
Do you deny that it was just as warm in Montana and Kansas as it is today?
Do you deny that regional charts from all over the world show the very same thing?
Who is the denier here?
That is correct.
Minima are rising faster than maxima.
The physical reason:
Minima take place under a ground-based inversion and the cooling involved comes from a small vertical portion of the air. (UAH does not and cannot see this.)
Maxima are limited by convection through a significant vertical portion of the atmosphere and are self-limiting unless the airmass is inherently warmer, eg: advection from the Sahara, as happened over the UK on 19th July 2022, when many places reached ~ 40C (1.6C above the previous highest recorded, at Cambridge on 25th July 2019).
Ooh, that looks terribly like evidence for contamination by the urban Heat Island effect!
Climate Reanalyzer! LMAO
Climate Reanalyzer uses the ECMWF Reanalysis version 5 (ERA5) which combines model data (i.e. made up) with observations.
The dataset covers the period from 1940 to the present, so can’t possibly confirm today’s temperatures are hotter or colder since the LIA.
GIGO
Good points.
Dealt with above.
So you claim.
I expected that.
Nevertheless, that is the current state of the global deltaT.
Model data are excellent at 24 hrs range (must be as it is a current render).
Else your 24 hr forecast from the Met would be crap also.
Hint: It’s not.
There are no temperature observations during the LIA that are global enough, if any at all, to specify it. And I gather from some at least that they regard proxies as being unable to accurately attribute the LIA also.
(You can’t claim that proxies can identify both a MWP and a LIA and at the same time say proxies are not fit for purpose when they do not fit your imaginings… maybe not you, but some do.)
Upshot is you ask/expect the impossible.
But what’s new here.
Proxies are valid for showing differences over time but you can’t get a temperature reading from them, unless you’re a climate scientist.
There is no issue with the weather forecast, but as we’re talking climate adding in made-up data is invalid no matter how you look at it.
No global observations/proxies show that Mann’s one tree ring to rule them all is valid.
I didn’t mention proxies, I said:
“Climate Reanalyzer uses the ECMWF Reanalysis version 5 (ERA5) which combines model data (i.e. made up) with observations.
The dataset covers the period from 1940 to the present, so can’t possibly confirm today’s temperatures are hotter or colder since the LIA.
GIGO”.
All map projections other than on a globe are wrong. One should choose a projection that minimizes the distortion that is of greatest importance for a particular application.
Do you even know what map projection that the Reanalyzer is using?
Oh FFS …
What bollocks Clyde.
Try another switcheroo !
This then is the ECMWF 850mb temp anomaly analysis for 12Z today, 18th Jan…
What is the point that you are trying to make with this map?
I was originally replying to John Hulquist,
“ The warmest year schist comes at a time to cause cognitive dissonance.”
That to me implied that because there is forecast to be a frigid Arctic outbreak for the Inauguration, somehow the record warmth of 2024 was not a thing. (He may well not have meant that, on re-reading it.)
The 850mb anomaly shows that there is a greater area of warmer than baseline average air than colder currently in the NH.
Speaking of a “switcheroo,” I asked you a direct question and you didn’t answer it. Is that because you don’t know? Or do you not know what a map projection is?
It must have been much hotter in some other part of the world because it was relatively mild where I live. And that other part of the world is probably one with no real thermometers.
The linked CNN report is over a year old.
The BBC correctly reported that 2024 was the hottest year on record. It was, based on Met Office data, which showed it was 0.10C hotter than the previous record, in 2023. CCCS said it was 0.12C hotter. That is pretty good agreement.
Here is a plot of the progress of the record, as measured by NOAA. It is just a bar chart of years, but colored as the record changes, and with a horizontal line showing the record value while it lasts:
Here is the UAH plot, with no issues of UHI etc:
A pretty convincing record break.
So tell us the accuracy of UKMet for 1850 again? 7 decimal places. 😉
SST’s not fit for purpose. Very few stations in the SH early on.
It would need an error of over 1.5C for 1850 to be a record.
So very likely, knowing the state of the UK surface stations.
2000C in parts of Moss Landing yesterday and today.
That fire still going?
It went out and then came back.
How do you know the true value?
Still don’t know the difference between error and uncertainty.
It has an error of over 1.5C.
“SST’s not fit for purpose.”
Yes, that’s for sure.
“Very few stations in the SH early on.”
True, but the ones we had/have all show it was just as warm in the Early Twentieth Century as it is today. Not a Hockey Stick temperature profile among them.
“According to UAH, August was the “hottest” month in 2024 in Australia. August is late winter so being “hotter” means less bitterly cold.”
Well, it certainly was bitterly cold where UAH measures, but rarely experienced unless you are riding on the wing of a plane.
Did you experience bitter cold in August?
Someone seems to be neglecting that 2024 was covered every month by an El Nino effect.
No sign of any human causation except really corrupted surface data and homogenisation to heavily tainted urban stations.
If you think there is any human causation, you have to explain the following graph.
and why the period from 2017 to the start of the recent El Nino has a distinct negative trend.
Let’s have the full record …
By eye I make that a warming rate of ~ 0.18 C/dec.
The global UAH warming rate is ~ 0.15 C/dec.
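For context, the “warming rate” figures being traded here are ordinary least-squares slopes through an anomaly series, expressed per decade. A minimal sketch, with placeholder anomaly values rather than the actual UAH data:

```python
# Minimal sketch (Python) of a warming-rate calculation: an ordinary
# least-squares line through an anomaly series, slope quoted in C/decade.
# The values below are hypothetical placeholders, not real UAH data.
import numpy as np

t = np.array([1979.0, 1989.0, 1999.0, 2009.0, 2019.0, 2024.0])   # fractional years
anom = np.array([-0.30, -0.20, 0.05, 0.10, 0.30, 0.45])          # anomalies, C

slope, intercept = np.polyfit(t, anom, 1)
print(f"trend: {10 * slope:+.2f} C/decade")   # ~+0.16 C/dec for these numbers
```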
The satellite era started just after the GAT began to rise from a low point. The GAT in 1958 was the same as it was in 2008.
As you can see from the radiosonde data below, [which agrees with the satellite data – imagine that! 2 different methods of measurement agreeing! What does that tell you??], 1958 was WARMER than previous years, meaning that the temperatures then (1935/40) were probably not that far different from around 2015.
There has been very little global warming for anyone born 60 years ago or more. So since the war, we ”may” be around 0.2C degrees warmer today. 0.2C is close enough to insignificant. (the current anomaly should not be included in ”today’s temperatures”)
No amount of whining on your part can alter that fact.
Excellent! Where is this from?
https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2004JD005753
“As you can see from the radiosonde data below, [ which agrees with the satellite data – imagine that! 2 different methods of measurement agreeing! What does that tell you??],”
That tells me that the UAH Satellite data is confirmed, and makes me wonder how well NASA and NOAA’s surface temperature charts correlate with the Weather Balloon data, seeing as how they have deviated from the UAH satellite record, the deviations being so much that as the UAH satellite data showed cooling after 1998, the NASA and NOAA data showed warming after 1998, to the point that NASA and NOAA were declaring each successive year after 1998 as being “the hottest year evah!”, when the UAH satellite chart clearly shows cooling.
It’s important to keep in mind that the UAH data and surface data are measuring different things. The surface indexes are recording temperature 3m above the surface, while the UAH satellite data are observing swathes of the atmosphere 10km+ thick:
This expectation that they need to match perfectly is ill-conceived. The trend across the troposphere might be different than the trend at just the surface.
Yep.
It’s important to remember that UAH claims they measure temperatures from the surface to the upper atmosphere.
Weather balloons measure the temperatures from the surface to the upper atmosphere.
The UAH data and the Weather Balloon data correlate with each other at about 97 percent.
“There has been very little global warming for anyone born 60 years ago or more. So since the war, we ”may” be around 0.2C degrees warmer today. 0.2C is close enough to insignificant. (the current anomaly should not be included in ”today’s temperatures”
Not true:
And no amount of whining on your part can alter that fact.
(unless your *knowledge* stumps the world’s experts)
BTW: I expect that you think it does.
No, they are not incompetent.
No they are not Fraudsters.
They just know more than you.
60 years ago the average annual global temperature was circa 1.2C cooler.
Further – your graph:
“Global mean HadAT2 time series for 100 hPa and 500 hPa on a seasonal basis”
You do realise that 100mb is in the lower Stratosphere which has been cooling?? ……
The Lower Strat has cooled by ~ 1C since 1980.
Are you going to argue that that does not severely compromise any suggestion that Trop temps from 500mb to under the Tropopause have not warmed?
What the hell are you dribbling about?
Obviously something you have no understanding of.
It’s called science, in this case meteorology.
I told you that graph is not representative of what is really going on.
1998 and 2016 were basically the same (see UAH) not 0.4 degrees different. Please go away and lecture some school kids.
And I showed you that you should not expect them to be “basically the same”.
As in that time the lower Stratosphere has cooled.
Because of the increasing presence of GHGs.
From 1998 to 2016 the Lower Strat (on that graph) cooled by ~ 0.5C.
Now go away and learn some stuff about what you post.
You would be lost without that bogus Hockey Stick chart, wouldn’t you Anthony.
***should say COOLER***
I take it from your deflecting question rather than an answer that you could not find a single new daily temperature record for 2024.
No, I stayed inside with the wood burner doing its thing. But UAH had the highest evah anomaly for Australia of 1.8C above the average.
In fact the BBC do not realise they are making the leap from anomalies to temperature. Something you may not realise either given your charts.
Exactly on point!
Anomalies are ΔT’s and trying to compare values from different locations is useless without knowing the base temperatures at each location.
Spot on. Alarmists always use anomalies rather than measured temperatures to push their lies.
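A small sketch of the anomaly point, with invented numbers: an anomaly is simply a temperature minus that station’s own baseline mean, so two sites with wildly different absolute temperatures can report the identical anomaly:

```python
# Illustrative only: hypothetical baselines and readings, not real stations.
baseline = {"tropical_site": 27.0, "arctic_site": -15.0}   # assumed 1991-2020 means, C
reading  = {"tropical_site": 27.8, "arctic_site": -14.2}   # assumed monthly means, C

for site, t in reading.items():
    anomaly = t - baseline[site]                           # anomaly = T minus own baseline
    print(f"{site}: T = {t:.1f} C, baseline = {baseline[site]:.1f} C, "
          f"anomaly = {anomaly:+.1f} C")
# Both sites print an anomaly of +0.8 C even though their absolute
# temperatures differ by more than 40 C.
```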
August in the Hunter Valley was absolutely gorgeous…. 🙂
So Nick, if “rarely experienced” low temperatures in winter can be dismissed as any kind of significant consideration in the whole climate change thing, why do, say, >38 C temps in summer in temperate zones get headlines about life-threatening conditions?
(always attributable to “climate change” of course)
Hunga Tonga
Hunga Tonga alone does not account for the ocean heat content increase (and subsequent decrease). CO2 does NOT account for any of it. There are numerous indications of an unexplained increase in geothermal activity beyond Hunga Tonga affecting ocean temperatures. However, nothing we understand explains the overall increase in OHC. It is curious, but 2024 kind of destroys CO2 as a cause, just as the MWP does as well.
No, it isn’t. It shows that the 2nd digit to the right of the decimal point is uncertain; thus, it is not a “significant” figure and should not be displayed as though it were. It is evidence supporting my assertion that two digits is a digit too far.
I’ll guarantee the folks who do this, climate scientist or not, have never taken any upper level lab courses in physical science. They would never have received a degree in physical science if they handled measurements this nonchalantly.
They need to take some time and educate themselves on information theory. The addition of resolution digits that weren’t measured is akin to taking a quote from a famous person and adding words to it, then claiming it is only making the meaning more clear.
It is another form of climatology Fake Data.
I really felt how hot the last year has been. The 0.10C change in temperature made things unbearable.
And if these numbers were properly rounded to 0.5°C (which is being generous) they would show a change of zero.
PANIC!
Indeed. Life as we really feel it is more like this.
Key phrase “on record.”
The written, historic temperature record puts the lie to all these “hottest year ever” claims.
It was just as warm in the Early Twentieth Century as it is today. No need to go back to the Roman Warm Period for examples of warm periods in history.
The current global temperatures are cooler than the high points of 1998, 2016, 2023, and 2024.
Tom:
Have you not noticed that all those dates you give as being warmer than now, are at or towards the top of large spikes caused by an EN ?
You could run a least-squares linear fit through all the high points and all the low points and still get a warming rate of ~ 0.15C/dec.
So your point is spurious and a denial of the fact that UAH is warming and that of course there is natural variation within.
Comparing a single month to any other single month and then saying “current global temperatures are cooler than the high points of …” takes some serious motivated reasoning.
As for your “It was just as warm in the Early Twentieth Century as it is today.”
As put up here many times by me and others.
It was not.
It just wasn’t.
So 2016 was half a degree warmer than 1998 was it?
Please!! Go and have a look at the radiosonde graph again and tell me that 1958 was <> 0.3 cooler than 1997.
Your Hadcrut, NOAA, Berkeley, JRA, GIS, ERA lock-step, perfectly identical (Lol) graph is loaded with bullshit! Get rid of it. It’s a made-to-order graph bearing no resemblance to reality and anyone believing it is a climate zombie.
The only graphs to reference are the radiosonde (before adjustment) and the satellite which agree with each other.
I have replied to that.
“Your Hadcrut, NOAA, Berkeley, JRA, GIS, ERA lock-step, perfectly identical (Lol) graph is loaded with bullshit! Get rid of it. “
And your bollocks right there tells me that it is pointless for me to converse with your conspiracy ideation.
Goodbye
It’s quite amazing how naive you really are.
And he has been a meteorologist for 32 years.
People believe what they want to believe, and can fool themselves into believing things that are not true.
I think that is the case here.
No, you have not.
You’re not going to convince anyone here with that abortion of a graph.
blanton is always quick on the hockey stick.
Yes,
Because it is what the science (so far) tells us.
And not how your imaginings want things to be.
The science DOES NOT tell us that 1998 was 0.4 degrees cooler than 2016. Liar!
Explain the discrepancy between this and UAH
That chart just demonstrates how NASA Climate and NOAA bastardized the temperature record after 1998, so they could declare “Hottest Year Evah!” year after year after year after 1998, making it appear that temperatures were getting “hotter and hotter and hotter”, as a means of scaring the Public into doing the Climate Alarmists’ bidding.
But the UAH satellite chart doesn’t show all this “hotter and hotter and hotter” climate change propaganda, UAH shows major cooling after 1998.
The temperature data mannipulation after 1998, ought to be enough to bring those who did this data mannipulation up on charges of defrauding the American public. They have perpetrated a crime against the American people and the people of the world, and their lies and distortions about the climate are causing the waste of literally TRILLIONS of dollars.
Somebody needs to pay for all this lying.
The bogus Hockey Stick chart is the only thing he can show.
I’ve replied to your conspiracy bollocks above
Yes, by saying ”no you are wrong” Lol.
No, I explained, with a graph even..
It is you that can only say “no you are wrong”..
I’ll try again then …
Your UAH/radiosonde graph is an average of the 500mb temp and the 100mb temp.
100mb lies in the lower stratosphere which has not been warming.
It has been cooling
As my posted graph shows.
That is why there is a discrepancy.
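A toy illustration of that averaging point, using round hypothetical numbers (only the ~0.5C lower-stratosphere cooling figure comes from the comment above; the 500mb warming value is assumed purely for illustration):

```python
# Toy example: averaging a warming mid-troposphere level with a cooling
# lower-stratosphere level can show little or no net change.
delta_500mb = +0.4   # hypothetical 1998-2016 change at 500 mb, C (assumed)
delta_100mb = -0.5   # hypothetical change at 100 mb (lower stratosphere), C

two_level_average = (delta_500mb + delta_100mb) / 2
print(f"500 mb change: {delta_500mb:+.1f} C")
print(f"100 mb change: {delta_100mb:+.1f} C")
print(f"average of the two levels: {two_level_average:+.2f} C")   # -0.05 C
```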
The charts he uses over and over are not a true record of data points; that is why they are in lockstep, which should be IMPOSSIBLE.
Anthony, You do know that you are posting adjusted data. Right?
Alterations To Climate Data | Real Climate Science
This comment sums this site up.
You hate raw data because it is contaminated by things like urban heat island effect (or orbital drift, when it comes to satellites); but on no account must it be adjusted to compensate for such things!
Finally: you, as I do, and a few notable others, know that posting the science on here is ultimately futile in changing the opinion of the entrenched “contrarians” (and I do not try or expect to). I do so just to counter the mostly kindly misconceptions and the outright bollocks, such that any neutral reader can see the real science.
The *contrarians* will always have the *win* (in their minds) in the end, with ultimate resort to fraud or fake data. Conspiracy theories require no evidence, all you have to do is accuse.
And therefore there can be no response that will satisfy them.
The lower strat graph is contaminated ?
Taken from UAH and radiosonde data?
Sorry dude, it was in Kansas, Alabama, and other places. Tell us why we should be worried about your hockey stick!
Remember, these come from one of your vaunted and impeccable sources so they must be irrefutable. You just need to explain how there is no warming over and above early 1900’s.
Anthony has no answer for this. He rejects regional temperature data that shows we are NOT experiencing unprecedented warming today, because it doesn’t fit in with his Climate Crisis meme.
Some people pick and choose what they want to believe, and ignore facts that don’t fit their worldview.
Jim is engaging in flagrant cherry-picking. Parts of the contiguous US did experience comparable temperatures to today in the mid-1930s:
But clearly that was not reflective of global trends. Nor is this fact being hidden in any major temperature dataset.
ALL RED, PANIC!
No cherry picking here. Those are state averages straight from NOAA. You are dead set on averaging being a good way to analyze temperatures. Well, here are two, and there are others.
Your job is to justify these pretty large areas, separated by hundreds of miles, having no global warming for 120+ years. Is there a permanent inversion over the U.S.?
You are cherry picking by picking two locations in the contiguous US and claiming that they represent a global pattern. The map I’ve shown above prevents any semblance of cherry picking and pretty clearly shows that the warmth the US experienced in the mid-1930s was limited to just the contiguous US.
That isn’t how trends work. There is an obvious warming trend in the graphs you present.
“Comparing a single month to any other single month and then saying “current global temperatures are cooler than the high points of …” takes some serious motivated reasoning.”
I threw that in there because we have all these Climate Alarmists claiming we are now at an unprecedentedly warm 1.5C level, but that’s not true now is it, the actual temperatures are now cooler than 1998, 2016, 2023 and 2024. That’s just the facts. What the temperatures do next is anyone’s guess, including yours.
“As for your “It was just as warm in the Early Twentieth Century as it is today.”
As put up here many times by me and others.
It was not.
It just wasn’t.”
You are just wrong and won’t admit it. I understand. Admitting that it was just as warm in the recent past destroys your “CO2 heats the world” worldview, and we can’t have that, can we. Keep denying. Keep ignoring the written, historic temperature records. Stay safe in the False Reality you live in.
Here you are a meteorologist who has seen regional temperature charts, yet you reject what they tell you in favor of a climate crisis fairy tale created in a computer.
You have no curiosity as to why the written temperature record looks so different from the Climate Alarmist temperature record (the bogus Hockey Stick chart)? How do you get a “hotter and hotter and hotter” Hockey Stick temperature profile out of written, historic temperature data that does not have a “hotter and hotter and hotter” temperature profile? No curiosity about that at all, huh? Those written, regional, historic temperature records were the only data the Hockey Stick creators had to work with, and none of them show a “hotter and hotter and hotter” temperature profile.
Logic is not on your side, Anthony.
The Earth is still in a 2+ million-year ice age named the Quaternary Glaciation, in a cold interglacial period that alternates with very cold glacial periods.
https://en.wikipedia.org/wiki/Quaternary_glaciation
From 2000 to 2019 about 4.6 million people died from cold-related causes and about 490,000 people died from heat-related causes.
‘Global, regional, and national burden of mortality associated with non-optimal ambient temperatures from 2000 to 2019: a three-stage modelling study’
https://www.thelancet.com/action/showPdf?pii=S2542-5196%2821%2900081-4
Cold causes the blood vessels to constrict to conserve heat and this raises blood pressure causing more strokes and heart attacks in the colder months.
‘QuickStats: Average Number of Stroke* Deaths per Day, by Month and Sex — National Vital Statistics System, United States, 2021’
“In 2021, the average number of stroke deaths per day was highest in January (275 for females and 212 for males) and then declined to a monthly low in June (235 for females and 180 for males)”
https://www.cdc.gov/mmwr/volumes/72/wr/mm7249a7.htm
‘When Throughout the Year Is Coronary Death Most Likely to Occur?’
“Conclusions—Even in the mild climate of Los Angeles County, there are seasonal variations in the development of coronary artery death, with ˜33% more deaths occurring in December and January than in June through September.”
https://www.ahajournals.org/doi/10.1161/01.cir.100.15.1630
The Hunga Tonga volcano eruption sent megatons of water into the stratosphere, warming the Earth.
https://eos.org/articles/tonga-eruption-may-temporarily-push-earth-closer-to-1-5c-of-warming.
Correct. Tonga caused the 2023/24 heat spike.
The peak temperature, likely due to HT, occurred in 2023. If the “wet” Askja volcano had a similar effect on weather, then we’ll see substantial cooling in 2025. Time will tell. I’ll update this plot each month as new temperature data is published.
“But, those numbers differ from those of other sources, such as a data set from the U.S. National Oceanic and Atmospheric Administration (NOAA).”
Using the 1991-2020 base period, NOAA’s 2024 anomaly was +0.66°C, for CCCS it was +0.72°C. A difference of 0.06°C.
Here are the results for 2024 of various data sets, using the 1991 – 2020 base period.
All show 2024 as being the warmest year in their records, with 2023 being the second warmest.
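For anyone wanting to check the comparison, re-baselining is simple arithmetic: shift an anomaly from one reference period to another by the offset between the two baselines. A minimal sketch using only the figures already quoted in the article and this comment (NOAA: +1.29 C vs the 20th-century mean and +0.66 C vs 1991–2020; CCCS: +0.72 C vs 1991–2020):

```python
# Re-baselining sketch using the figures quoted above.
noaa_vs_20th_century = 1.29   # NOAA 2024 anomaly vs 20th-century mean, C
noaa_vs_1991_2020    = 0.66   # NOAA 2024 anomaly vs 1991-2020 mean, C
cccs_vs_1991_2020    = 0.72   # CCCS 2024 anomaly vs 1991-2020 mean, C

# implied offset between NOAA's two baselines
baseline_offset = noaa_vs_20th_century - noaa_vs_1991_2020   # 0.63 C
print(f"1991-2020 mean sits {baseline_offset:.2f} C above the 20th-century mean")

# compared on the common 1991-2020 baseline, the two datasets differ by:
print(f"NOAA vs CCCS difference: {abs(noaa_vs_1991_2020 - cccs_vs_1991_2020):.2f} C")  # 0.06 C
```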
Gees that shows up the El Nino events well , doesn’t it.. 🙂
UAH is atmospheric data, so responds a lot more to ocean energy releases, as anyone can clearly see in 1998, 2016 and 2023/24.
Good use of El Ninos to create a steep trend , though. 😉
It will also drop faster now we are in La Nina territory.
2023 was only 7 or so months under the effect of the El Nino
2024 was basically the whole year under the effect of the El Nino.
Still zero sign of any human causation.
More abuse of significant digit rules.
To 1 decimal place, NOAA and the CCCS are identical.
One decimal place is still abuse, and you have no qualms about propagating fake extra digits.
The sacred significant figure commandments must not be violated
OK, to the nearest degree all data sets show 2024 as being 1°C warmer than the 1991-2020 average, and surface data show it being 2°C warmer than the latter half of the 19th century.
Have you explained to Dr Spencer yet how he’s abusing the holy rules?
That you are comfortable spreading corrupt data generated by corrupt government organizations speaks volumes about your character.
You complain that I make too many comments, then keep trying to bait me with idiotic conspiracy theories. I’m not playing that game today.
Replying to your Fake Data claims is baiting you?
As a practitioner of the Fake Data Arts, you have no qualms about inventing whatever resolution is needed; data integrity is no barrier.
“Numbers is Numbers!”
And here is your corrupt government organization in action:
https://realclimatescience.com/2025/01/new-data-tampering-by-noaa/#gsc.tab=0
Thnx to John W
I liked BobG’s comment over there:
“Bernie Madoff also altered data and he went to prison for life for it.”
I think jail time should be an option for people who mannipulate the temperature record and cause TRILLIONS of dollars to be wasted..
And of course, when raw data is mannipulated, it is no longer “raw” (which means it was originally a written record from the past), and that is one way NASA and NOAA and Climate Alarmists in general try to mannipulate the discussion, by calling mannipulated data “raw” data.
One lie after another from the Climate Alarmists. Of course, they have to lie or they would be out of a job.
Absolutely correct. These corrupt frauds demand that I be deprived of gasoline, methane, and diesel fuel, for no reason, so I take it personally (it is -23C here right now).
The sacred significant figure commandments must not be violated
Where are the uncertainty calculations? If, like TN 1900, the uncertainty is ±1.8°C (±3.2°F), you miss the fact that there could equally be cooling of 1.8°C.
Without the measurement model and procedure from single readings all the way to annual values, there is really nothing supporting any of the values in the data sets.
“If, like TN 1900, the uncertainty is ±1.8°C (±3.2°F)”
Why would you even think that the uncertainty of an annual global average would be the same as that of a single station for a single month with a third of its data missing?
The sum of the parts make up the whole. If you had any experience making things you would know and appreciate that. Baking, circuit design, structural design, telescopes (remember the Hubble mirror errors). Measurements are no different. If each measurement has uncertainty, the whole will also have the same uncertainty.
Every time you create a new random variable, you get an average and a new standard deviation. Read these two documents and tell us why they are incorrect about using the experimental standard deviation for the uncertainty.
You never show any references that show the standard deviation of the mean defines the dispersion of values, except for examples that always declare a Gaussian distribution and random errors that cancel. You need to show us a histogram that includes both NH and SH temperatures and turns out Gaussian.
One day you are actually going to paste the text that explains about the uncertainty of the mean. Until then I’m just going to point out that means are not mentioned once in any of your random clippings.
You missed Note 2 of B.2.17
Nothing requires the distribution to be Gaussian. It does of course assume the uncertainties are random.
The use of √n is only valid for stationary data (such as the width/length of some particular measured object) where the mean and standard deviation do not change over time, and all variables except random variation introduced by the measuring device or observer, remain constant.
It certainly doesn’t apply to measuring similar but changing parcels of air with a trend, because the trend adds to the standard deviation. It doesn’t apply to averaging different parcels of air with different humidity. It doesn’t apply to conflating sea surface temperatures with air temperatures because water and air have different specific heat capacity values. It doesn’t apply to using thousands of different thermometers with different designs and calibrations.
As I keep trying to tell Jim whenever he tries to use the TN1900 method on annual data, you can’t just divide the SD by √N without considering the nature of the data. Non-stationary data is an example of that. If there is a strong seasonal cycle then the SD does not represent a random sample.
But that doesn’t make the method invalid. Like any mathematical result it assumes exact assumptions, if those assumptions don’t hold it can still be a useful approximation.
The problem here is people keep using these assumptions as a loophole. If they do not hold, they throw everything out and substitute something that makes no sense. If dividing by √n won’t give you the exact uncertainty, they argue that means you must multiply by √n instead.
“It certainly doesn’t apply to measuring similar but changing parcels of air with a trend, because the trend adds to the standard deviation.”
And that’s a problem with the TN1900 example. It ignores the fact that May is a warming month. But in reality it makes little difference in that case as the trend is small compared with the daily variation. If the trend is significant then you need to remove it.
But the effect of a trend is to increase the SD. Removing it will decrease the uncertainty.
I’m not sure if most of your other objections make sense, but regardless, none of the actual uncertainties are based on just dividing SD by √n. The global anomaly is not simply averaging lots of random temperatures. The uncertainties are usually combinations of multiple factors.
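For reference, the TN 1900-style calculation being argued over looks roughly like the sketch below, assuming the daily values can be treated as independent draws from a stationary distribution (exactly the assumptions in dispute in this thread). The daily maxima are invented placeholders:

```python
# Rough TN 1900 Example-2-style sketch: monthly mean of daily maxima, with
# u = s/sqrt(n) as the standard uncertainty of the mean and a Student-t
# coverage factor for 95 % coverage. Daily values are hypothetical.
import numpy as np
from scipy import stats

daily_tmax = np.array([22.1, 23.4, 21.8, 24.0, 22.7, 23.1, 21.5,
                       22.9, 23.8, 22.3, 21.9, 23.6, 22.5, 23.0])   # C, invented

n = daily_tmax.size
mean = daily_tmax.mean()
s = daily_tmax.std(ddof=1)            # experimental standard deviation
u = s / np.sqrt(n)                    # standard uncertainty of the mean
k = stats.t.ppf(0.975, df=n - 1)      # 95 % coverage factor, n-1 degrees of freedom

print(f"monthly mean = {mean:.2f} C, u = {u:.2f} C, "
      f"95 % expanded uncertainty = +/- {k * u:.2f} C")
```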
Nice hand-waved word salad.
¡Ole!
TN 1900 does not involve trending. The measurement is defined as monthly average. The GUM defines a measurement as:
You fail to read the entire document and find all the assumptions and measurement model used to determine the measurement.
Assumptions are a necessary part of physical science. Solving partial differential equations like Navier-Stokes or Maxwell’s EM equations requires substantial assumptions used to simplify them. It is one reason I had to take vector calculus to get my EE degree. Get over it and learn why they are made.
Their handling of uncertainty is a joke. Your insistence on dividing by √n simply ignores the internationally agreed upon portrayal of measurements.
“TN 1900 does not involving trending.”
Yes, that’s the point I was making. It assumes a stationary time series.
“The measurement is defined as monthly average.”
Yes. That’s what we’re talking about. The uncertainty of an average when the data is not stationary.
“The GUM defines a measurement as:”
There then follows a lengthy extract that does not define a measurement.
Here’s the actual definition in the GUM B.2.5
Besides, TN1900 says they are taking a broader definition of measurement. Here’s TN1900’s definition (Section 2, page 12).
“Get over it and learn why they are made.”
Assumptions are made because it would be impossible to model anything without them. All models are wrong, but they can be useful. The general law of propagation of uncertainties makes a number of assumptions, not least that the function is linear. But this does not mean it cannot be applied to non-linear functions, you just have to assume that if the uncertainties are small the error in the linear approximation will be small.
“Their handling of uncertainty is a joke.”
Feel free to explain why you think they are a joke. I’m sure no evaluation is perfect, but if your “joke” is your belief that uncertainty grows the more measurements you take – don’t expect me to take you seriously.
“Your insistance on dividing by the √n is simply ignoring internationally agreed upon portrayal of measurements.”
So many things wrong with that sentence, I’m not sure it’s worth responding to. You need to find a reference that documents this “international agreement” that outlaws dividing by √n, and then explain why the rest of the world has to go along with this international diktat.
More mumbo jumbo showing you understand nothing about measurements, nor have you learned anything from what you’ve been shown.
Uncertainties are not “random”. Uncertainty is based upon the probability distribution of measurements taken for a measurand. Uncertainties describe the variance of the distribution; they are not random variables. THEY ARE A STATISTICAL PARAMETER of a distribution.
If you can’t get by the need for a statistical description before even starting a measurement, you will never understand what uncertainty is all about. The process is something like:
Here is a sample uncertainty budget. Why don’t you tell us what TN 1900 used for each of these items and why.
This entire rant, quibbling on my use of the word random to describe an uncertainty, is in response to me correcting Jim’s claim:
I was agreeing that the errors have to be random, maybe I should have said random errors rather than random uncertainties, but I’m pretty sure Jim knew what I meant. What he doesn’t deny is that nothing requires them to be Gaussian.
As usual, nothing of import.
In an ideal world.
Depends on what rules you are following. The ones that make sense to me are those laid down in the GUM and other sources on metrology. I paraphrase them as: choose a reasonable number of digits for the uncertainty figure, 2 is usually enough, and then quote the result to the same number of decimal places.
If the uncertainty is 0.05 it’s entirely appropriate to quote the result to 2 decimal places.
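A minimal sketch of that rounding convention (one common convention, not the only one); the example value merely echoes the 15.10°C figure quoted earlier and the 0.05 uncertainty mentioned above:

```python
# Round the uncertainty to a chosen number of significant figures, then quote
# the result to the same decimal place. Values are illustrative only.
import math

def round_result(value, uncertainty, sig_figs=2):
    # decimal place of the last kept significant figure of the uncertainty
    exponent = math.floor(math.log10(abs(uncertainty)))
    decimals = max(0, sig_figs - 1 - exponent)
    return round(value, decimals), round(uncertainty, decimals), decimals

val, unc, d = round_result(15.1034, 0.05, sig_figs=1)
print(f"{val:.{d}f} +/- {unc:.{d}f}")   # prints: 15.10 +/- 0.05
```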
Yeah, you follow the rules of trendology ruler monkeying, in which what you feel is ok.
Where are the formal uncertainties for the 5 different estimates? It looks as though it is going to be at least +/-0.05 for the mid-range temperature estimates, with no information about the weighting from the shape of the diurnal temperature curve.
Here is GISS plotted with the 95% uncertainty range:
“95% uncertainty range” — LiarJ is just another Fake Data clown.
See Lenssen, et al., 2024:
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2023JD040179
The result above is based on peer reviewed research.
So what?
I’m supposed to be impressed by corrupt peer reviewers who have no understanding of basic metrology?
I have no expectations of you, Karlomonte, I’m simply providing the source of the uncertainty ranges, since you seemed to be insinuating that the ranges were “fake.”
I’m insinuating nothing, if you knew anything about metrology you’d be able to understand how absurd those tiny gray areas are.
Unfortunately for you, that isn’t how science works. If you wish to contend that the ranges are “absurd,” you need to demonstrate that by pointing out specific errors or flaws in the methodology of the cited study. An argument from incredulity is simply a fallacy.
I’m not kowtowing to a gaslighting crank like yourself, LiarJ.
If you want to remain an ignorant boob, that is your choice.
Then by all means share your wisdom and insight by contesting the matter in the scientific peer-reviewed literature, rather than in a ludicrous blog.
Share your valuable insights where it actually matters.
You won’t.
Ask me if I care what you drool on and on about.
See https://www.bipm.org/documents/20126/2071204/JCGM_100_2008_E.pdf/cb0ef43f-baa5-11cf-3f85-4dcd86f77bd6
Section H.6, equation H.38
This document does not address the GISTEMP uncertainty estimates. Is there a point you were attempting to make?
Sorry, I forgot to post the quote I was responding to.
“Absurd” is rather too strong a term, but they are certainly underestimated.
Lenssen et al 2024 is a spatial coverage work based on Lenssen et al 2019, which builds on an earlier Lenssen et al uncertainty paper.
The uncertainty derivation is in the 2019 paper.
The document you linked does not address the 2019 paper, either. Is there a specific argument you’re hoping to make?
The highlighted section addresses the propagation of resolution uncertainty.
Compare that to Lenssen et al 2019, Section 4. 4.2.1 is probably the closest match.
Lenssen, 2019 are employing probabilistic methods for uncertainty analysis, not simple error propagation. The referenced section does state, “The major source of station uncertainty is due to systematic, artificial changes in the mean of station time series due to changes in observational methodologies. These station records need to be homogenized or corrected to better reflect the evolution of temperature. The homogenization process is a difficult, but necessary statistical problem that corrects for important issues albeit with significant uncertainty for both global and local temperature estimates.” The random component is negligible given the number of observations involved.
Exactly. Resolution uncertainty is not addressed, but is quite large with respect to temperature anomalies.
The paper addresses the impact of random uncertainty:
“The random uncertainties can be significant for a single station but comprise a very small amount of the global LSAT uncertainty to the extent that they are independent and randomly distributed. Their impact is reduced when looking at the average of thousands of stations.”
Yes. That is the point. Lenssen et al 2019 is a refinement of the uncertainty model of Hansen et al 2010. Both are focussed on spatial and temporal resolution – essentially the sampling protocols.
Measurement resolution is conspicuous by its absence.
The papers are comprehensively addressing every aspect of uncertainty in the Gistemp analysis. The random resolution uncertainty of individual point measurements is negligible given the large sample size.
Again, you’re failing to substantively address anything in the paper.
You are misunderstanding the effect of resolution. It is a common factor, not random.
See H.38 for its propagation.
Limits of instrument resolution have a large effect on a single measurement at a single instrument, but are negligible at the global level when many thousands of stations are being considered. The major sources of instrumental uncertainty, as noted in the paper, are systematic.
So much bullshit! What do you think “combined uncertainty” means? Here is what the GUM says:
Sum of terms. Each measurement has an uncertainty, and those uncertainty terms are summed to calculate the combined uncertainty. The final assembly of temperatures is contained in a random variable. That random variable has both a value and a variance. You can’t just wave the uncertainty (standard deviation) away by saying they all cancel. Look at GUM Eq. 10 and H.38. Those ARE SUMS OF INDIVIDUAL MEASUREMENT UNCERTAINTIES.
That isn’t relevant to the matter at hand. You need to read the paper I cited above, because any comments need to directly and substantively address the work therein.
Please re-read Section H.6.
The first 2 terms of the combined uncertainty equation H.38 are:
u_c^2(h) = s^2(d_k)/5 + delta^2/12 (+ terms which should be dominated by the first 2)
so we wind up with approximately u_c(h) = sqrt(s^2(d_k)/5 + delta^2/12) which will typically be of the same order of magnitude for small samples.
In this case there were 5 depth measurements taken, so the first term is already averaged. The resolution term is not averaged, so it doesn’t divide by the number of measurements.
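A minimal sketch of the two-term H.38-style combination described above, with five invented readings and an assumed resolution step; note the Type A term is divided by n while the rectangular resolution term delta^2/12 is not:

```python
# Sketch of the first two terms of JCGM 100:2008 eq. H.38 as quoted above:
# Type A variance of the mean plus the rectangular resolution term, combined
# in quadrature. Readings and resolution are hypothetical.
import math

readings = [10.05, 10.10, 10.00, 10.15, 10.05]   # five hypothetical repeat readings
delta = 0.05                                     # assumed instrument resolution step

n = len(readings)
mean = sum(readings) / n
s2 = sum((x - mean) ** 2 for x in readings) / (n - 1)   # experimental variance s^2(d_k)

u_c = math.sqrt(s2 / n + delta ** 2 / 12)   # resolution term is NOT divided by n
print(f"mean = {mean:.3f}, combined standard uncertainty u_c = {u_c:.3f}")
```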
AKA Fake Data.
Dr. Roy Spencer disagrees. His research shows that homogenization will contribute to smearing UHI across the record.
https://www.drroyspencer.com/2021/02/urban-heat-island-effects-on-u-s-temperature-trends-1973-2020-ushcn-vs-hourly-weather-stations/
Sorry dude, it is not up to critics to prove something is wrong. It is up to you, the person who posted it, to have the information that supports your assertions.
Here are some things that aren’t shown in the reference summary or by you..
Here is a graph from a local station. The RSS values are the standard deviation from adding the variances from the monthly average and the baseline average. That is the uncertainty. Uncertainty always adds, so this is a base of what it may be.
Read this from NIST and tell us this was used to find the uncertainty shown in the study you referenced.
The onus is on the claimant. If karlomonte wants to convince anyone that the uncertainty band is too narrow, the only way to do that is by addressing the peer-reviewed literature. The estimates for Gistemp are produced using probabilistic techniques, and any critique of the methodology has to address explicitly what is being done.
I provided the reference.
Your reference does not address Lanssen, et al., 2019, and in fact never references it.
The causality is reversed here. Lenssen et al 2019 should have used JCGM 100:2008 “Evaluation of measurement data – Guide to the expression of uncertainty in measurement” or the equivalent ISO reference when evaluating uncertainty.
Such reference works are so foundational that they may well be assumed knowledge and not included in the bibliography of papers.
The authors followed applicable standards in their work. Your charge is to demonstrate exactly where their work is deficient, if indeed you believe it is, and to demonstrate how it should have been done instead. Perhaps you might be able to publish a paper with your analysis.
I had rather hoped that it was obvious from JCGM 100:2008 equation H.38, but apparently not.
Certainly. The measurement resolution uncertainty term should have been included.
For a measurement resolution of 0.5 degrees C, that is 0.5/sqrt(12), or 0.14 degrees C.
Adding that to the quoted 0.05 degrees C sampling uncertainty, we have sqrt(0.14^2 + 0.05^2) = 0.15 degrees C.
Included where? Specifically in which part of the research did you feel the term was missing? You need to offer specifics.
Alan, Alan, Alan.
Measurement resolution uncertainty isn’t explicitly mentioned in Lenssen et al 2019. It should have been included in Sections 4.2 and 4.3.
As per JCGM 100:2008, equation H.38, it should have been included in the total global uncertainty calculation in Section 5.3 and subsequent.

All measurements at all stations on the globe are Gaussian with random errors that entirely cancel, thereby making all values 100% accurate. Measurement resolution is increased from integer to 3 decimal digits by averaging. The uncertainty of the mean value guarantees it! /sarc
I think Lenssen and his co-authors were so focussed on refining the sampling protocols and statistical analysis that they missed this source of uncertainty. In most circumstances, it is small enough to be ignored or incorporated in other sources of uncertainty.
Yes, it is small enough to be ignored here as well.
It will be quite interesting to see under which conditions 0.14 is smaller than 0.05.
Again, you completely fail to understand the methodology used. The aim is not that all errors are random and cancel, the claim is that the component of the uncertainty arising from random errors is reduced on taking multiple measurements. The rest of the paper deals with the nonrandom element, as this is the only part that contributes significantly to the uncertainty.
Completely and totally irrelevant to time-series air temperature measurements — the number of extra repetitions is always zero.
No amounts of word salad can change this inconvenient little fact.
All you are asserting is that the mean can be accurately calculated. That is not the measurement uncertainty.
Random errors cancel only if you know a true value when measuring the same thing. Please don’t insult everyone by claiming that all the temperature data comes from the same thing.
Measurements from multiple measurands are not of the same thing, nor do you know the true value of any measurement so you don’t know what the errors are. Using that as an excuse is far from being scientific. Why do you think the WORLD accepted a total rejection of using “true value±error” as an adequate description of measurements?
One of the requirements for a measurement description is giving the Degrees of Freedom and the “k” factor used to obtain an interval containing 95% of the dispersal of measurement observations used to calculate the uncertainty. You should list them in your assertion. That also lets people calculate the standard deviation that describes the dispersal of anomalies used to calculate the final uncertainty.
No, I’m asserting that the mean can be precisely calculated. The accuracy relies on addressing systematic error. But even when you address systematic error to the best of your ability, there is still uncertainty related to it, which is the subject of Lenssen et al., 2024. The uncertainty in the mean arising from random, independent errors is orders of magnitude smaller than the uncertainty arising from systematic biases.
Quit displaying your ignorance. You simply can not add resolution (or precision, as you incorrectly use the term) to an average beyond the resolution of the data used to calculate the average. Your statement is what mathematicians and statisticians say, but not what physical scientists say. Show us one metrology reference that allows one to exceed the resolution of the measurements used to calculate the mean.
Read what NIST has to say about measurement uncertainty and probability distributions. NIST is discussing the Stefan-Boltzmann constant whose value is represented by the symbol σ.
https://www.nist.gov/itl/sed/topic-areas/measurement-uncertainty
There are many other references that explain this. Please show one that doesn’t rely on measuring the same thing that results in a Gaussian distribution with proven random measurement samples.
Don’t just refer to something, show it here. You are the one making the argument; it is your responsibility to show you have the ability to understand and explain your references.
I’m deeply weary of this discussion with you, Jim, it never goes anywhere. The estimate of the mean grows more precise with a larger sample size. The precision of the estimate of the mean can exceed the resolution of the individual measurements used to compute it. That’s the point of estimating the mean instead of using a single individual measurement. You agree with this – you just talk around it so you can pretend like we are arguing.
Lenssen, et al. quite clearly states that the individual station measurement errors are independent and random.
You are not addressing anything in the referenced paper, you’re just making unrelated assertions and misstating my position. You should read the paper and construct an argument based on the actual work done.
You are so full of stuff that your eyeballs should be brown.
There is no such thing as the estimate of the mean having increased resolution. What you are referencing is a decreasing uncertainty of the (sample) mean. That is, the standard deviation of the sample means distribution is reduced. That is an interval that has a 68% chance of the estimated mean being within it. It is why NIST requires an expansion of the interval to achieve an interval with a 95% chance of containing the actual mean. IT HAS NOTHING TO DO WITH INCREASING THE RESOLUTION OF THE ESTIMATED MEAN.
You, and climate scientists, need to decide if you are using multiple samples or one large sample. The √n used to determine the SDOM (Standard Deviation Of the Mean) is the size of each of the multiple samples, not the number of multiple samples. If you are dividing by the number of stations, then that is the size of one sample and there are no multiple samples. Without multiple samples, the CLT is moot, i.e., there is no SDOM!
Read this:
https://statisticsbyjim.com/hypothesis-testing/standard-error-mean/
That has nothing to do with the propagation of uncertainty.
What that is referencing is the CLT requirement of independent and random SAMPLES. This is the key place where climate science makes a mistake. They claim multiple samples, and then turn around and say the size of the samples is the number of stations. THE NUMBER OF STATIONS CAN’T BE BOTH MULTIPLE SAMPLES AND THE SIZE OF EACH SAMPLE!
It can be shown that the SDOM can be reduced by dividing the standard deviation of a population by the square root of the size of each sample. Guess what the size of each sample is with your description? ONE (1). That means you divide the standard deviation by √1!
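For reference, the textbook calculation being argued over looks like this; a minimal sketch, where n is the number of independent observations within the single sample (the values are invented):

```python
import numpy as np

# One sample of n independent observations (the values are invented).
sample = np.array([14.8, 15.1, 15.3, 14.9, 15.0, 15.2, 14.7, 15.4])
n = sample.size               # n is the number of observations in THIS one sample,
                              # not the number of samples drawn
s = sample.std(ddof=1)        # sample standard deviation
sem = s / np.sqrt(n)          # standard error of the mean

print(f"mean = {sample.mean():.2f}, s = {s:.2f}, SEM = {sem:.2f}")
```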
Thanks for linking that website, it confirms my position exactly. Have you read it? If you’ve read it, and you agree with it, then you and I have no argument on this point, and we can move onto the more relevant topic of addressing the substance of Gistemp’s uncertainty estimate.
You may have read it but obviously you don’t understand it. Read this again. Are you trying to equate resolution of the estimated mean with the variability of the sample means distribution? If you are, then you have no idea of what statistics are.
The site also says this.
Again, smaller standard errors signify more precise estimates of a population parameter.
Yet you have no idea what precise means in this statement. It means the standard deviation of the sample means is smaller and more precisely defines an interval surrounding the estimated mean, not that the mean is more precise.
I don’t think you have any education in statistics. Otherwise you would know what these terms mean.
The website aligns perfectly with my understanding and the statement made in Lenssen, et al.
It isn’t clear to me what you’re still arguing about, since you posted the article.
Yes, I know that.
How does resolution uncertainty become random when every measurement has the same resolution?
D’oh! I just realised this was a reply to the wrong reply. Please pretend it didn’t happen.
Just found NASA HDBK-8739.19.3 (2010) “Measurement Uncertainty Analysis Principles and Methods”.
Section 5, particularly 5.5 treats resolution uncertainty in the same manner as JCGM 100:2008.
It has a very nice coverage of the importance of the uncertainties in various scenarios.
AlanJ thinks peer review is equivalent to being “Handed down by God!”
As you say, corrupt peer review, is no review at all.
He repeats two irrelevancies over and over:
1 “it’s not global”
2 “it’s not peer reviewed”
The exhibited uncertainty is about +/-0.05 degrees C, for the average mid-range values. Thank you for the confirmation of my estimate above.
However, why is this graph not in agreement with your claims above at January 17, 2025 7:14 pm? This graph shows an anomaly of about 1.3 degrees while your tabulated values are in the range of 0.63 to 0.77.
I’m not the person who made the original graph you reference, but it’s because the records in that graph have been placed onto a common baseline for comparison, while my graph is just using GISTEMP’s 1951-1980 baseline.
More Fake Data.
GISS is a load of massively tainted urban data, concocted from totally unfit-for-purpose surface sites, then mal-adjusted to conform better with the AGW agenda.
Great graph of urban warming from totally unfit-for-purpose surface stations.
The narrow range of uncertainty indicates that they are quite suitable for their intended purpose. Did you have a concrete objection to share?
About ±0.05 is around the stated value for most of the surface data sets. UAH doesn’t do an uncertainty analysis due to lack of funding, and in any case they are measuring a different thing.
Still is bullshit.
So UAH is bollocks as well ?
/plonk/
UAH measures something different than 2m surface temperature. Do you have a study showing that they should be the same? If so, show us. Better yet, show us what the satellites measured in the 1920s and ’30s.
No, other than the Bom one below …
And no again, UAH is missing the increasing warmth of global land minima.
The graph shows UAH having a lower warming trend ….
The idea of a Global Average Temperature 🤣
The idea of a Global Average Temperature accurate to 0.01K 🤣🤣🤣
Once again it should be pointed out that the idea of a global average temperature was never called into question here during those brief periods when a global warming ‘hiatus’, or apparent temporary reduction in the warming rate occurred. Indeed, it was enthusiastically supported here.
Christopher Monckton ran two (so far) long-running “No global warming since….” series’ here based on global average temperature records, to great applause, as I recall. (You guys won’t fall for a third outing of this nonsense once temperatures briefly fall back, will you….? No, you will, won’t you.)
In fact, the existence of a global surface temperature is still strongly supported by whoever now runs this site; because the UAH global temperature update is published here every month and is featured prominently in the side-bar of this site.
So if you reject the notion of a global average surface temperature, then I’m afraid you are on the wrong website.
You are so full of crap. CMoB’s purpose was to show the “feedback” theory in the GHE was wrong. It was not to show that the temperatures were wrong.
Your assertion is WRONG!
His Lordship’s theory was most certainly wrong.
As his disappearance from here is testament to.
If you truly believe that, you are definitely on the wrong website. But we knew that.
I am also unsurprised that you don’t understand the concept of disproving your own temperature fetishism by using your own data*. It’s something that is probably above your ability to understand. It’s certainly an issue that is beyond my ability to help you understand.
(*I use the term ‘data’ advisedly)
I was using UAH data, among others.
Can you bring yourself to admit that 2024 was the warmest year on record for UAH, or do we file that under ‘alternative facts’?
So? UAH has only been in operation since 1979. So 2024 is the warmest year since 1979. The year 1934 was just as warm as 2024, but UAH wasn’t around to record it.
In a cyclical climate where the temperatures warm for a few decades and then cool for a few decades, and this pattern repeats, it is not surprising that we are currently in the warming part of the cycle, since 1979 was one of the coolest periods in recorded history. It was around that time that climate scientists were fretting that the Earth might be entering another Ice Age.
Ok, so we are at the warmest point in this particular warming cycle, which btw, is no warmer than previous warming cycles, and if you believe in a cyclical climate, then you are expecting that the cooling part of the cycle should be kicking in in the future.
Climate Alarmist expect the temperatures to continue to get hotter because CO2 is continuing to increase.
Time will tell who is right and who is wrong. Those who believe in a cyclical climate have history on their side. Those who believe in “hotter and hotter” have nothing but a bogus, bastardized Hockey Stick chart on their side.
Which is mixed with propaganda about the Evil Magic Molecule.
The Washington Monument is 555 feet. Being at the top and descending to ground level one would, on average, experience an ambient temperature increase of 3 F degrees. Color me not terrified. However, I think an in-place panic is involuntary just looking at the brightly colored charts Nick has provided.
Whatever your very local perception of temperature change with height, the Earth experiences the same over its entire volume with any driven forcing.
The atmosphere sustains the biosphere and (even though you don’t believe it) is incredibly sensitive to seeming (to humans) small temperature deltas.
So a ridiculous conflation.
What a completely ridiculous pile of garbage. Even the IPCC said…”We cannot detect the expected (human) signal”, so to say the Earth is incredibly sensitive is breathtakingly delusional as are all you posts.
“World’s ~Hottest~ Mildest Year on Record” “2024 Confirmed as ~Hottest~ Mildest Year Ever Recorded” It’s all hype of course. NOAA calls 2024 the world’s warmest year on record.
The global average temperature in 2024 was 15.10C (59F) a whacking 1.5C 🫣 above the pre-industrial level (aka Little Ice Age).
Where I live 15.10C is not hot, in fact it is not even warm.
You’re right. Wearing light clothing, at an air temp of 15C, hypothermia is likely to occur in a few hours. In water at 15C, it would take 30-60 minutes to occur.
15°C is perfectly comfortable in light clothing. And it is not a cause for hypothermia in water in half an hour. Takes a bit longer than that.
Let’s get facts straight.
But the point is that ‘global’ temperatures are weird anyway. Comparing tropical rain forest (+35°C) with arctic tundra (-20°C) shows a regional variation of over 55°C. Any ±1.5°C is lost in the noise.
Yeah, nah.
It depends what you’re acclimated to. Under 20 is flanno shirt time. 15 definitely needs a jumper.
Add 5 degrees to those temperatures for somebody from north Queensland.
In the UK, it’s currently in single figures (C) and there are people walking around in shorts and flip flops
But they think 25 C is a heat wave instead of a nice winter’s day.
I was there in summer a few years ago, wearing 2 layers of clothes and a rain jacket.
We did have a wonderful time, but it’s best in small doses.
It is why you NEVER see a variance of the temperatures shown with the anomaly. Anomalies should inherit the variance of the temperatures used to calculate them.
Anomalies are calculated from the mean values of random variables. Those random variables also have a variance.
Everyone knows when you add or subtract means of two random variables, you must add the variances so that the sum/difference inherits the variance of the parent random variables.
That would give an anomaly of say, 0.15 ±2°C for each anomaly. Lost in the noise is a good description.
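A minimal sketch of that propagation for the difference of two means, assuming (as the rule requires) that the two are independent; all numbers are placeholders:

```python
import numpy as np

# Placeholder numbers: a monthly mean and a 30-year baseline mean, each with its own variance.
# Independence of the two is assumed, which is what lets the variances simply add.
monthly_mean, var_monthly = 15.30, 2.0     # degrees C, (degrees C)^2
baseline_mean, var_baseline = 15.15, 2.0

anomaly = monthly_mean - baseline_mean
var_anomaly = var_monthly + var_baseline   # variance of a difference of independent variables
sd_anomaly = np.sqrt(var_anomaly)

print(f"anomaly = {anomaly:.2f} +/- {sd_anomaly:.1f} C (1 sigma)")
```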
I was going by US Weather Service figures. I would concur that 15C water is “cold.”
No, it’s not lost in the noise, as each monthly update by UAH, RSS and the surface indices shows.
They are very close to each other and showing a very similar rate of warming.
If your assertion were correct they would be all over the place.
Are you saying the surface temps shows..
… basically no warming from 1980-1997
… basically no warming from 2001-2015…
and cooling from 2017-2023.4 ??
NV as is your magical EN’s
One day the penny might drop that there are cooling events in natural variation (well used to be under a LN – but no longer).
We just get Monckton Hiatuses.
NOAA doesn’t show warming in the U.S. rural states in surface temps. If the entire globe is warming as you propose, explain how NONE of it has leaked into those locations scattered across the U.S..
“NOAA doesn’t show warming in the U.S. rural states in surface temps”
You really need to justify that claim. Looking through all regions on the NOAA Climate at a Glance site, and they all seem to show warming since 1895.
Dude, do you see that watermark that has NOAA in it? I didn’t use Photoshop to play with it.
Here are the inputs to use.
Parameter: Average Temperature
Time Scale: 1-Month
Month: All Months
Start Year: 1900
End Year: 2024
State: Kansas
I checked right now and the graph looks the same now as what I show except for a few more months. If you don’t like what it shows I suggest you start being a sceptic like the rest of us.
Sorry to disappoint you! NOT! 😂
So, just one state. Let’s see what it looks like if you look at annual averages and turn on the trend line option.
Warming rate is 0.14°F / decade, or 0.08°C / decade in real temperatures, since 1895.
You haven’t refuted the fact that rural states show little to no warming. I use 1 month averages for the trend. Are you implying that these trends are incorrect? If so you need to have a damned good statistical theory why!
You have just illustrated why averaging to achieve longer time periods and larger areas can generate spurious trends.
By examining finer periods and smaller areas you can detect anomalous trends like yours.
There are a number of states that don’t have much if any warming in the summer months. You need to tell us why. Better yet, show us the states that are experiencing hockey stick warming.
Here are some other states. Maybe NOAA has screwed up their monthly averages?
You are the expert trendologist, tell us what is going on. Why is your trend so different? The answer is staring you in the face. Here is a hint, it isn’t the hot months!
“Are you implying that these trends are incorrect?”
What trends? You aren’t showing any information about trends in any of your graphs. The website won’t let you draw a trend line for all-months data, presumably because it’s biased if you use data with a seasonal cycle.
In case you are getting confused by this, the line you are showing is the average value. It will always be a flat line.
Here’s the South Carolina NOAA monthly data with a trend line. I’ve converted the data to proper temperatures, and added a ten year rolling average, which makes it easier to see what’s happening. Temperatures took a big drop in the 60s, but have been warming faster since then. Using the 10 year average as a guide, South Carolina is currently at its warmest since records began in 1895.
The trend is clearer if you use anomalies to remove the seasonality in the data.
The linear trend is not too meaningful given the non-linearity of the data over the last 130 years, but is a guide as to whether temperatures have been rising or not.
Using monthly anomalies it is
+0.05 ± 0.02°C / decade
For comparison, NOAA’s global trend over the same period is
+0.092 ± 0.003°C / decade
(Both uncertainties are 2σ, but neither are corrected for auto correlation)
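Roughly how such an anomaly trend and its 2σ range can be computed; a sketch only, with a hypothetical file and column names, and no autocorrelation correction applied:

```python
import pandas as pd
from scipy import stats

# Hypothetical input: a CSV with monthly "date" and "temp" (degrees C) columns for one state.
df = pd.read_csv("state_monthly.csv", parse_dates=["date"])

# Monthly anomalies: subtract each calendar month's long-term mean to remove the seasonal cycle.
clim = df.groupby(df["date"].dt.month)["temp"].transform("mean")
df["anom"] = df["temp"] - clim

# Ordinary least squares trend with a 2-sigma slope uncertainty (no autocorrelation correction).
years = df["date"].dt.year + (df["date"].dt.month - 0.5) / 12
fit = stats.linregress(years, df["anom"])
print(f"trend = {fit.slope * 10:.3f} +/- {2 * fit.stderr * 10:.3f} C/decade")
```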
You can do all the data dredging you want to create fake trends, but you can’t deny that monthly average temperatures in warm months are little different than in the past 120 years. THERE IS NO HOCKEY STICK. Here is a graph with a line at about 80 degrees and there is no hockey stick.
That is 0.5 degrees over a century. And you really think that is representative of the global warming trend? The question still remains. Why is so much of the U.S. not showing the warming found in the globe over the last 120 years?
And, by the way, it is not only the U.S. that has regions like this.
“You can do all the data dredging you want to create fake trends”
Astonishing comment, when I produced those trends in response to Jim saying:
“That is 0.5 degrees over a century.”
Yes, but as I point out the trend is not linear.
I told you what the global warming trend is over the same period. South Carolina has been warming at about 60% of the global rate. Why on earth you think it should be surprising that some small parts of the globe are warming at slightly different rates is a mystery. Especially when we are looking at a period that covers the substantial hot period over parts of the US during the 1930s.
“Why is so much of the U.S. not showing the warming found in the globe over the last 120 years?”
Why are other parts showing faster rates of warming? Natural variability combined with American exceptionalism, perhaps.
60% of the global rate is a “slightly different” rate?
You just refuse to postulate why all 8 of the locations that I showed have little to no warming. Worse, the data covers 120 years of little to no warming.
The other part you won’t address, is that it isn’t just a small part of the globe.
Here is a graph for Belgium.

Here is a graph for a station in Peru that I made. Funny how Tmax isn’t driving the increase but rather Tmin.

The whole point of the exercise is that temperatures are not following an exponential increase due to CO2 as in the hockey stick predictions many have proposed.
You are simply cherry picking, Jim, exactly as you were doing before. Here is global TMIN vs global TMAX from Berkeley Earth:
I am cherry picking nothing. That is a red herring. You can check all these locations if you want.
The issue is that more and more people are doing their own analysis and finding stations from all continents showing little to no growth.
If you want to refute this, show the piece part local stations that support the Berkeley graph.
You will find, the same as Bellman, that homogenization, weighting, filtering, smoothing, averaging, etc. introduces spurious trends.
I’m not accusing you of falsifying records, I’m accusing you of simply ignoring any records that do not fit your preconceived conclusion. It isn’t averaging or smoothing that creates the warming trend, because it exists on gridded datasets:
It is not homogenization or filtering that produces the warming trend, because it exists in the raw data from all GHCN stations (black line)
https://imgur.com/TbtHeLB
Your analysis can’t be to simply comb through the station records until you find some that support your argument, you have to comprehensively analyze the entire global dataset together, then present your results.
Look at what bellman did with some of the available options at NOAA. Then tell me that you arrive at a similar value as just looking at temperature.
I’ve been showing monthly averages from whole states in the U.S., using data from NOAA. Are you implying these are incorrect? Are they illustrating something that is incorrect?
Try explaining what is occurring over an entire century that is different from the global temperature.
You can’t eyeball a trend in data with a strong seasonal cycle, that is basic time series analysis. You can either do a simple annual average to get rid of some of the influence of seasonality or you can perform seasonal decomposition of time series to extract the trend, seasonality, and residuals.
And as I’ve already pointed out in an earlier comment, there was anomalous warmth in the mid 1930s in the some parts of the contiguous US that reflect an acutely regional pattern, and this will influence the trend for this region, you cannot simply assume that the trend represents global change.
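A minimal sketch of the kind of seasonal decomposition meant here, assuming a monthly series in a hypothetical CSV file:

```python
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Hypothetical input: a monthly temperature series indexed by date.
ts = pd.read_csv("monthly_temps.csv", parse_dates=["date"], index_col="date")["temp"]

# Additive decomposition into trend + seasonal + residual, with a 12-month seasonal period.
result = seasonal_decompose(ts, model="additive", period=12)
print(result.trend.dropna().tail())     # slowly varying trend component
print(result.seasonal.head(12))         # the repeating annual cycle
print(result.resid.dropna().std())      # what's left over
```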
You just look more and more ignorant. Maybe you can’t discern the lack of CAGW warming in the higher months in these graphs but most people can. Maybe you can’t discern a change in low temperature months but most people can.
No one denies the usefulness of some of the time series analysis you mention, but you must also be able to recognize when those things are useful and when they are not.
Simple annual averages hide what is occurring throughout the year making them useless to know what is occurring at shorter time periods such as monthly. Smearing things together with averages, just doesn’t give the detail needed. Why else use monthly anomalies.
How do I know? I spent 30 years doing this in the telephone company. We had to forecast things like call volumes, lengths of calls, monthly changes, likelihood of maintenance and storm damage. These were used for equipment additions, labor hiring, training time, on and on. The underlying driver was money. Capital, expenses, profit and loss.
Don’t presume that you know more than me about analyzing time series data. I spent time working at Bell Laboratories with truly knowledgeable people about statistics attempting to develop studies to measure things to have more knowledge about probability distributions that were likely to appear. No one was happy using traditional Poisson and Erlang tables with simple high day/month averages. The real world just didn’t match mathematical theory. With computers we were able to begin using what occurred an hour ago, yesterday, a week ago, a month ago, a year ago.
What I learned was be wary of averages, smoothing, trending, etc. What you and other mathematicians categorize as noise IS NOT noise. It is signal variance. The closer you can see that variance and anticipate its value, the better prepared you will be.
You want to display your knowledge of GAT? Show us GATₜₘₐₓ and GATₜₘᵢₙ. Then break those down into land regions and ocean regions. Do that for the last 120 years. You’ll be able to then tell us what is truly occurring with the GAT.
Berkeley Earth provides gridded datasets for monthly TMAX and TMIN for land (SSTs are not provided as TMAX and TMIN because the diurnal range of SSTs is small). I hacked together some simple visuals for TMAX and TMIN trends from 1900-present (anyone feel free to check my work here):
It seems to pretty much confirm that both TMAX and TMIN are warming globally. This seems to be completely consistent with the picture of the average temperature trends from NASA:
You have never addressed why a substantial portion of the U.S. has had no warming of max temps, which the CO2 theory predicts. Why is that?
You keep depicting processed data that show unrealistic trends. Here is what I showed bellman for Kansas. I suspect the other states will be the same.
These are the monthly average high temps for each year from 1970 to 2024. Generally July or August. It is obvious that CO2 has not caused any warming.
Tell us why this is occurring!
I think a correction is in order for your graph. If I understand what you said to Bellman below, those are actually the average temp of the month with the highest temperature for each year? So the average TMAX of the warmest summer month?
If we restate your question to be accurate, then, you are asking: “Why doesn’t the average maximum temperature in the month of August for Kansas show an increasing trend for the period 1970-2024?”
The answer to that question is that it was abnormally warm in August of 1982 and 1983 in Kansas and much of the central US:
And this significantly affected the linear trend.
Here I’ve done your cherry picking for you: Pick the august temperature series for any of the blue dots on the map and you will have found a point where daily max temperatures do not show much of a trend through 1970.
But this is nothing but cherry picking – you keep drilling down and sifting through the data until you find some angle on it that seems to show what you want it to show and then you hyper focus on that single angle. In reality, the world in August of 1982 and 1983 was more than a degree cooler than it is today, but even more importantly, nothing about the theory of CO2 driven global warming says that the summer of 1982 or 1983 cannot have been abnormally warm in parts of the world (and abnormally cool in other parts at the same time). This is weather. The theory says that over time there is a gradual increase in the global mean temperature, which is exactly what we see.
Maybe I was unclear. I was not asking why the data created a negative trend.
Also, these are not averages of just Tmax temps. They are the highest monthly Tavg in each year as NOAA shows them.
I was asking why is the CO2 growth not causing a correlated increase in the max summer months?
And, keep in mind the other states I have shown will end up showing about the same result.
Why is there no hockey stick rise in summer temps in most rural states of the U.S. over the last 120 years?
Let’s be honest. As I have done research I am starting to see that the way “anomalies” are averaged, calculated, and trended is causing spurious trends that don’t tell the story. This should be a warning that current climate science data treatment may have errors generated by not using proper time series analysis.
CO2 is causing a long term increase in the mean planetary temperature. Around the planet there will be local variability – some places will warm faster, others more slowly. There is at least one location – the North Atlantic – that has actually shown a cooling trend. You are picking through records for individual US states until you find trends that suit your preconceived beliefs and then stating that these cherry picked locations represent the whole.
You claim that the anomalies are calculated in error, but I’ve shown you over and over that the anomalies show exactly the same patterns you’re cherry picking out of individual station records. It is the cherry picking that is the problem, not the calculation of anomalies.
Inspired by Jim Gorman, I thought I’d look at the trend for each state.
This chart shows the trend in °C / decade for each state, sorted from slowest to fastest warming since 1895. The black line shows the global rate of warming from NOAA for the same period.
Surprisingly there isn’t a single state that has a cooling trend, though the first 3, Alabama, Mississippi and Arkansas, are not significant at the 5% level.
A more meaningful question is how have the states been warming in recent years, during the period of global warming. Here are the trends since 1979, that is the same period as UAH. Again the black line shows NOAA’s global warming trend since 1979.
The trends in North and South Dakota, and Montana are not significant.
Why do you need a computer to draw a trend line for you? Use your eyeballs. Do you see summertime temperatures trending up? Not hardly. How about wintertime temperatures? I do see growth there after about 1980. Isn’t that interesting? Do you see an increase in winter temperatures as a problem?
“Why do you need a computer to draw a trend line for you?”
Because it’s quicker and less error prone than doing it by hand.
“Use your eyeballs.”
You’ve really got the hang of the appeal to the stone. If you can’t see the trend, then the trend cannot exist.
“Do you see summertime temperatures trending up?”
And now the goal posts shift. Only look at the summer temperatures.
Out of interest here’s the warming rate of summer months (Jun – Aug) for all states. All still positive.
And here’s the summer warming rate since 1979
Now show the same info for the winter months. What will those cause your annual average to do?
“Out of interest here’s the warming rate of summer months (Jun – Aug)”
Huge mea culpa. I got the tables wrong, and the above graphs are just the annual data. I wish it was possible to edit the comment.
So here, hopefully, are the summer trends, and the good news is there are some states that have seen a cooling trend for summer months.
and for the trend since 1979
By request the same with Winter trends.
and since 1979
Look at that. 0.5 to 0.6 per decade! That’s 5 to 6 degrees in a century!
And you wonder what is driving the hockey stick?
Now, tell us how awful it is that winters are warmer! Use terms like First Frost and Last Frost. Discuss growing season length. Mention crop effects. Maybe less energy for heating.
“Look at that. 0.5 to 0.6 per decade! That’s 5 to 6 degrees in a century!”
Now you’ll have someone calling you a trendologist.
First you were saying some states weren’t warming as fast as the rest of the world, now you are saying some states are warming faster. This is expected.
But when you are looking at short periods in small parts of a small country the uncertainties are large. Here’s the post 1979 warming with 2σ uncertainties.
Looking at NOAA’s Northern Hemisphere, there isn’t too much difference between trends for summer and winter since 1979:
Summer: 0.27 ± 0.03°C / decade
Winter: 0.28 ± 0.05°C / decade
Here’s a side by side comparison of Vermont and the Northern Hemisphere for Summer and Winter. (Anomalies are based on the 20th century average).
It gives some indication of how much more variability there is in a single state compared with the hemisphere, especially in winter.
Still playing with the NOAA data. Looking at all seasons, and using the data for NWS regions. Same set up. Red line is the regional data, blue is the NOAA Northern Hemisphere.
I’ve also done the same, but replacing the annual lines with a lowess smooth.
In case the text isn’t clear, the columns are the seasons Spring, Summer, Autumn and Winter. And the rows are the 4 NWS regions: Eastern, Central, Southern and Western.
I don’t have time to make a neat chart. Here is one showing the average temperature of the max month in each year from 1970 to 2024.

By golly, the max temps are falling! How can that possibly occur if CO2 is making temps rise as in your hockey stick charts?
Bet you the min monthly average doesn’t show the same thing.
You are data dredging trying to find something that confirms your preconceived outcome.
Try to find a theory that might explain max months falling and min months to rise while CO2 is constantly rising.
It makes little difference to the trend, but I think your graph is back to front.
“By golly, the max temps are falling!”
In fact they are rising, but insignificantly. +0.03 ± 0.21°C / decade.
“How can that possibly occur if CO2 is making temps rise as in your hockey stick charts?”
Natural variability.
“Bet you the min monthly average doesn’t show the same thing.”
Quite different in fact.
Warming rate is +0.60 ± 0.36°C / decade.
I’ve still no idea what you think you are arguing here. You think that Kansas warming slowly during summer is evidence that CO2 has no effect on global temperatures, but also seem to be claiming that rapid warming in the same state during winter is not evidence of global warming.
“You are data dredging trying to find something that confirms your preconceived outcome.”
Data dredging? I’m not the one limiting a trend to a single month of the year, and cherry picking individual states. I’m trying to get a fuller picture by looking at multiple regions and multiple seasons. And I’m not sure what “preconceived outcome” you are claiming. I think it’s quite interesting that different parts of a country like the USA behave a bit differently.
“Try to find a theory that might explain max months falling and min months to rise while CO2 is constantly rising.”
The theory is that not all parts of the earth have the same weather and weather patterns. This can happen due to the peculiarities of the individual geographic locations, or by chance, or might even be a response to global warming.
You are correct. That will teach me about copying and pasting.
Here is a revised graph.
There are four points to be made.
One, CAGW alarmists propagandize the GAT to say the earth is burning up. The only time there is burning going on is during winter.
Two, how many graphs on here show that as CO2 increases, the average temperature increases everywhere? If CO2 radiation and the GHE are the control knob, they should increase ALL temperatures equally: Tmax, Tmin, and Tavg. They do not, and a new theory is needed.
Three, if the GAT is useful, it should lead people to the correct conclusion. It does not do that, therefore, it is meaningless.
Four, all information I am using is data from long stations. Over 120+ years, the “small area” becomes moot. If CO2 heats globally, then over a long time period, warming should show up everywhere. Claiming that some areas are not destined to warm, ever, means the GAT is meaningless.
Sour grapes. BTW, summer is when the warmest temps occur.
These graphs show that the occurrences of higher temps in warmer months have not changed appreciably.
Why have you failed to look at winter months? Are you afraid that will upset your annual average increase cause?
Good question.
Anthony doesn’t have an answer.
“They” are mid-range temperatures, not the arithmetic mean.
Crikey. At 15C most Australians would say it is cool… Maybe a windchill factor, or that forecasters always quote the daily max temp, so 15C doesn’t actually last long.
In line with metrics used for housing and income, perhaps Median temp is more meaningful.
Lost in the noise and even diurnal variation.
NOAA is at it again
https://realclimatescience.com/2025/01/new-data-tampering-by-noaa/#gsc.tab=0
I question whether we can determine the ‘temperature’ of Earth with a precision of two significant figures to the right of the decimal point, or +/- 0.005 degrees C. It is probably better to round to 0.1 degrees C than to claim that we know the ‘temperature’ to such a high precision. As has been discussed many times, all systematic errors must be removed from a time-series, leaving only random variations, before one is justified in using the Standard Error of the Mean as a measure of the precision. That is, one must be using stationary data.
Ignoring the question of the sampling protocol for now, and moving on to the issue of accuracy of Earth’s temperature, we don’t actually have a good measure of the ‘average’ temperature of Earth, as average is usually understood. What we start with is the mid-range ({Tmax+Tmin}/2) of daily temperatures, which tells us nothing about the variance of the actual temperatures. All we really have is two data points for a given day and station, which only gives us the range and mid-range. (While modern weather stations can provide us with virtually instantaneous temperatures, the legacy historical data sets are limited by the ubiquitous diurnal high and low temperatures, which are used to calculate mid-range so-called ‘anomalies.’) Using the mid-range values alone, information, such as the fact that daily lows (night and Winter) generally appear to be increasing faster than the daily highs, is lost. However, that relationship can reverse:
https://wattsupwiththat.com/2015/08/11/an-analysis-of-best-data-for-the-question-is-earth-warming-or-cooling/
What we end up with is a long time-series of the arithmetic mean of the monthly or annual mid-range aggregated temperatures, or the difference between those and the ‘baseline’ mid-range temperature for some 30-year period in the past, commonly called an anomaly, which is non-stationary data. That is, an index for the change in the mid-range temperatures. The point being, when looking at a table or graph of the data, what is presented is not arithmetic means of measured temperatures. It is actually the mid-range value (perhaps averaged) calculated from two samples. This is where we start delving into non-parametric statistics. That is, the fundamental thing we want to know about, the atmospheric temperature at any time, as provided currently, is lacking statistical descriptors such as mean, mode, median, variance, and kurtosis of the actual temperatures. Non-parametric statistics, lacking such descriptors, are a poor tool for prediction, which is of critical interest.
Something that I have never seen acknowledged is that the sampling time is not evenly spaced, other than by coincidence, and the time of the diurnal max’ and min’ usually varies every day. Roy Spencer takes great pains to ensure that changes in the time of the microwave temperature calculation are minimized. Furthermore, it is not unheard of that the usual time for measurement of the Tmin (about 1/2-hour before sunrise) may be warmer than temperatures in the middle of the day if a cold front moves across the weather station. Similarly, other meteorological events can make afternoon temperatures untypical for the season. This degrades the accuracy of the target metric, ‘average’ daily temperature.
Definitions are important! Sometimes what isn’t said is more important than what is said.
The climate practitioners simply don’t care about these inconvenient details, it is much easier to go directly to the boiling oceans panic hype than to employ sound data handling.
Excellent and on point. Making predictions based on non-stationary time series is a joke. Trying to discern causal connections with different time series is a waste of time.
I have folks tell me that there is nothing wrong with adding decimal digits beyond what was measured. Invariably they have had a statistical class or two and were allowed to carry averages out to enough decimal places to observe a difference between two different averages. Heck, I was told, why can’t you do this with measurements too? That is the only rule they were ever given.
I might add this certainly looks like what climate science does. Carry integer temps from 1920 out to at least two decimal places so you can compare them with current temps that have a resolution of 0.1. They don’t realize that this is creating and adding information. It is similar to adding words to a quote in order to enhance the meaning. You are creating and adding information that wasn’t present originally.
Considering all the manipulation that goes on between thermometer and final graph, the tiny difference in the 2nd decimal place is totally meaningless, quite aside from the fact that a global average is meaningless in itself for any place or time.
Its cold outside tonite and there are no published temperature anomalies yet this year to let me know if 2025 is on track to be the hottest year on record.
I could use a little more heat, so I turned up the wall thermostat one degree. For some reason it won’t let me adjust the temperature anomaly by tenths or hundredths of a degree.
Inuit peoples of the 19th and early 20th Centuries presumably had a story they would tell of a time when the ice would all melt and polar bears would roam the land, killing and feasting on the unwary.
Somehow belief in that Eskimaux Climate Armageddon got injected into the mush filled minds of Western children, although the polar bears miraculously evolved from apex predators into Coke-swilling playpals and protectors!
Without intelligent intervention, life on Earth will die from CO2 STARVATION within the next two or three million years; yet climate alarmists believe humanity should spend it’s greatest minds, time, and treasure finding ways to make it happen sooner!!? Hunh!!? You spend all your time worrying about rising seas and temperatures, yet ALL the evidence points to higher temps and CO2 levels in the recent geological past! That is indicative of a mindset so far removed from intelligence or wisdom as to be actively discouraged from reproducing! That may be why libtards worship abortion as one of their highest religious rituals; they feel deep guilt over their own ignorance and depravity!
How should CNN and the BBC have described 2024’s global temperatures then?
Every single record-based global temperature data producer, whether surface or satellite (lower troposphere), including UAH which is prominently featured on this very site, clearly show that 2024 was the warmest year on their respective records and by a clear margin.
What tortured phraseology should news corporations employ when describing this relatively simple fact other than state it in plain language?
We agree, news organizations push propaganda. Heck 60 Minutes actually edited what Kamala said in an interview.
So how should the BBC and CNN, etc have reported the fact that 2024 was the warmest year recorded by all global temperature data producers who record or use recorded temperatures, including UAH?
Tell people about the scammers in the climate cult like Al Gore and Obama.
Sorry, I asked how should the fact that every single global temperature producer there is has declared 2024 as the warmest year on their record be reported?
Do you just not want it to be reported? Shall we just hide from it?
Should we refer instead back to pre-historic times, before civilisation, or even human beings, existed?
Deny it?
“Sorry, I asked how should the fact that every single global temperature producer there is has declared 2024 as the warmest year on their record be reported?”
If I were reporting it, I would say that 2024 is the warmest year in the satellite era (1979 to present); that global temperature reconstructions of periods before 1979 are suspect, so it can’t be stated truthfully that 2024 is the warmest year in human history; and that the written, historic, regional temperature records from around the world show temperatures were just as warm in the past as they are today.
How should CNN and the BBC have described 2024’s global temperatures then?
They should have a measurement model and measurement method and follow the GUM recommendation for giving a measurement.
This would allow people to see exactly what range of measurements was used to calculate the GAT. As it is, there is no standard way of calculating a GAT, or else all the different organizations giving an estimate would agree. In other words, physical science would rule rather than statistical gurus.
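For illustration, a GUM-style statement of a result could be assembled along these lines; the numbers are placeholders, not anyone’s actual uncertainty budget:

```python
import math

# Placeholder numbers only: one measured value with two uncertainty components.
value = 15.10                   # degrees C
u_components = [0.05, 0.14]     # e.g. a sampling term and a resolution term, in degrees C

u_c = math.sqrt(sum(u**2 for u in u_components))  # combined standard uncertainty
k = 2                                              # coverage factor for roughly 95 % coverage
U = k * u_c                                        # expanded uncertainty

print(f"T = ({value:.2f} +/- {U:.2f}) C  (k = {k}, u_c = {u_c:.2f} C)")
```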
The CNN and BBC do not employ experts in metrology/statistics in their news departments.
Of that I am sure.
So they trust the info being reported by the organisations responsible for that knowledge.
Perhaps they should call you, and you can befuddle them (as you do me) with your endless audit of “uncertainties”.
Infiltrated by charlatans like Mann
Whose “hockey-stick” paper has been replicated many times since.
The hockey stick is a programming artifact. Grow up, Banton.
Please spend a few minutes and learn something.
From 12:35…
It’s funny how people are always pinning this just on the BBC.
Here’s the Daily Mail on the same subject
https://www.dailymail.co.uk/sciencetech/article-14266537/2024-officially-hottest-year-RECORD.html
Here’s a brief extract from the long article
And here’s the Daily Telegraph’s headline. (Can’t see the rest of the article as it’s behind a paywall.)
https://www.telegraph.co.uk/news/2025/01/10/year-2024-hottest-record-break-paris-agreement-climate/
And the Daily Express
Very good point.
The BBC probably has the largest total audience.
They all get their “news” from the same source, which pointedly hides information about how uncertain these “measurements” truly are!
That is probably expecting a lot from the average reader. However, the media could provide citations and links to the GUM and, most importantly, provide a disclaimer that the mid-range values characterized as an “average” carry a lot less information than what is commonly implied by an “average.”
Eh? It falls to news broadcasters to build models to test the statements and data reported by scientific organisations? Does this also apply to economics, politics, medicine, etc.?
If that were the case, there would be no such thing as news.
Have you been smoking the evil weed?
It is the so-called climate scientists that never quote nor properly define the uncertainty involved in the GAT that the media use for news.
They could start by acknowledging that what is called the “average” global temperature isn’t actually an arithmetic mean. It doesn’t provide us with daily or monthly values that provide information on what the temperatures are doing at even an hourly resolution, or at the same time every day. It is a mid-range value that doesn’t even have the redeeming value of providing the range, let alone the standard deviation so that one can assess the probabilities of occurrences, identify outlier reading errors, or understand heat fluxes. It is little better than an index as it doesn’t have descriptor statistics. That is, they should acknowledge the limitations of mid-range values and not pretend that we have quality measurements that might justify extreme changes to society.
Maybe they acknowledge that the various homogenisation and area-weighting techniques, etc used to produce the global average, as set out clearly in the various peer-reviewed papers (which you ‘skeptics’ are free to challenge, but never do) are sufficient?
As to UAH, this site’s beloved data set of choice; even it accepts that 2024 was the warmest year on its record, beating its previous record-warm year, 2023.
Where are UAH’s error margins, by the way? Have you ever asked?
It is a cute answer, but doesn’t really address my stated concern about describing the taste of apples when it is actually oranges that are being sampled.
Funny that you should make the unsupported snide remark, “(which you ‘skeptics’ are free to challenge, but never do),” when no one commenting here wants to deal with my complaint of laymen and expert alike using the term “average” when it is actually the mid-range that is being described, and don’t admit to the deficiencies of the mid-range computation. I suppose you all subscribe to the opinion that ‘crickets’ are the best response to “inconvenient truths.”
But they are not sufficient. The more I look at local and regional temperatures, the less significant growth I find, and certainly no hockey stick. This leads to one conclusion: homogenization, area-weighting, and anomalies from averages are generating spurious time series trends that result in a faulty overall trend.
Look at this trend from Peru. You’ll have a hard time refuting that Tmin is causing a rising trend. Make us an argument that increasing Tmin is a danger to life on earth.
Then look at this graph and explain how a large U.S. state can be immune from warming by CO2 for 120+ years.
Excellent!
https://www.bbc.co.uk/news/articles/c30dn5dn53jo with the headline “Planet-warming gas levels rose more than ever in 2024”.
It then goes through the reasons
“Last year, CO2 emissions from fossil fuels reached new highs, according to preliminary data from the Global Carbon Project team.
There were also the effects of the natural El Niño phenomenon – where surface waters in the eastern tropical Pacific Ocean become unusually warm, affecting weather patterns.
The natural world has absorbed roughly half of humanity’s CO2 emissions, for example through extra plant growth and more of the gas being dissolved in the ocean.
But that extra blast of heat from El Niño against the background of climate change meant natural carbon sinks on land did not take up as much CO2 as usual last year.
Rampant wildfires, including in regions not usually affected by El Niño, also released extra CO2.
“Even without the boost from El Niño last year, the CO2 rise driven by fossil fuel burning and deforestation would now be outpacing the [UN climate body] IPCC’s 1.5C scenarios,” says Prof Betts.
These factors meant that between 2023 and 2024 CO2 levels increased by nearly 3.6 parts per million (ppm) molecules of air to a new high of more than 424ppm.”
They state that land sinks are absorbing less carbon dioxide, that it’s burning fossil fuels etc.
They say another sink is the oceans dissolving carbon dioxide.
They mention El Niño as increasing the temperature, but fail to acknowledge that warmer ocean surface temperatures mean that the oceans do not absorb carbon dioxide; instead they outgas carbon dioxide.
““Last year, CO2 emissions from fossil fuels reached new highs, according to preliminary data from the Global Carbon Project team.”
You mean Germany and UK’s Net Zero suicide has resulted in increased CO2?! Clueless politicians just spinning their wheels and bankrupting their nations as a result.
Yes, it looks like an Idiocracy from here.
Sad.
Perhaps I’m thick but in a warming globe every year should be a record hot? If it isn’t then there’s something wrong with the theory?
It is why time series analysis should be done and not simple linear regression forecasts.
That’s a bit like saying that every day of winter should be colder until mid-winter then warmer after that.
Of course there is natural variability. Each year is not expected to be warmer than the next; but the long-term trend should show sustained warming.
“Perhaps I’m thick but in a warming globe every year should be a record hot? If it isn’t then there’s something wrong with the theory?”
I think that was NASA and NOAA’s thinking, and that’s why they started bastardizing the temperature record after 1998, where they managed to declare “hottest year evah!” about 10 or more times between 1998 and 2015. They are so good at data manipulation that they managed to make each successive year a hundredth of a degree hotter than the previous year, to make it appear that temperatures were getting hotter and hotter and hotter, caused, of course, by increasing levels of CO2.
Meanwhile, the UAH satellite chart shows cooling after 1998. No year after 1998 could be declared “the hottest year evah!” going by the UAH chart, because no year after 1998 was hotter than 1998 until the year 2016 is reached.
So if NASA and NOAA used the UAH chart they couldn’t declare “hottest year evah!” but one time, when 2016 was reached. Nothing scary in that, so they chose to declare “hottest year evah!” about a dozen times after 1998.
NASA and NOAA are doing psychological climate change warfare. Playing with the numbers and trying to scare people into submission.
At this point, “hottest year on record” and “last year” are practically synonyms.
“Where did you go on holiday in the hottest year on record?”
“Oh, we went to Greece, lovely little hotel. We’re thinking of going again this year.”
Yeah, it wasn’t the hottest year on record around here.
The global average isn’t applicable to regional temperatures, especially bastardized global averages.
Hottest year ever!? Don’t make me laugh!
The BBC has nothing but contempt for the majority of its audience and those who pay to fund them. Sadly, as the number still willing to pay for their woke biased output falls, the inevitable ‘funding from taxation’ has been raised meaning that unless you can avoid paying ANY tax, you are forced to contribute to this vile organisation who has been found to promote a drill rapper whose ‘song’ is about a child he murdered for which he got a ‘life sentence’ of just 14 years and is now out of prison.
great write up Anthony.
two key observations…
“While NOAA repeats the “hottest year on record” claim, their numbers differ from Copernicus, undermining any confidence one might have in the precision of the global average temperature measurements for 2024, and any record breaking claims flowing from their disparate data measurements.”
there is no precision in global temperature measurements for 2024 or any year prior.
and…
“It’s also worth noting that the phrase “hottest year on record” typically refers to records spanning about 150 years—a mere blink in geological time. Paleoclimatological evidence shows that Earth has experienced periods with significantly hotter temperatures long before industrial revolution. For instance, during the Eemian interglacial period around 120,000 years ago, global temperatures were comparable to or even exceeded current levels.”
whats the term they use? oh ya, catastrophic climate change.
anyone that believes a 1.5C temp change, measured or fabricated, is cause to make policy from is on a fool’s errand, especially against the backdrop of millions of years of temperature cycling the planet has gone through.
when wheat no longer grows in Canada, or oranges do…that is climate change.
There are some misconceptions. The average temperature over a 24 hour period is not the arithmetic average of the minimum and maximum temperatures. It can be very far from that figure. One needs to integrate the temperature over the time period in question.
The next problem is that one cannot take the average of temperatures in a meaningful way. This is clear from thermodynamics, but can also be seen from this experiment. Take two one-liter containers of air with the same pressure; one has a temperature of 30 degrees Celsius, while the other is 40 degrees. Now mix the air in the two containers. What is the resulting temperature? According to the IPCC that would be 35 degrees. In reality it could be 31 degrees or maybe 39 degrees. It depends on the humidity of the air in the two containers. It is no more meaningful to calculate the average temperature of the Earth than to calculate the average telephone number in a phone book.
This whole problem is so complex that it is easy to temporarily forget issues like how humidity buffers temperature changes and changes the lapse rate. The problem is something like wanting to know what the density of a cake is and measuring the weight and volume before it is baked.
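A small sketch of the first point, comparing the max/min mid-range with a time-integrated daily mean for an invented diurnal curve:

```python
import numpy as np

# An invented diurnal curve sampled every 15 minutes: a short warm peak around midday,
# with most of the day spent near the overnight minimum.
t = np.linspace(0, 24, 97)                  # hours
temps = 10 + 8 * np.sin(np.pi * t / 24)**4  # degrees C

midrange = (temps.max() + temps.min()) / 2  # what a max/min thermometer record gives you
true_mean = np.trapz(temps, t) / 24         # time-integrated daily average

print(f"mid-range = {midrange:.2f} C, integrated mean = {true_mean:.2f} C")
```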
It’s the UKMO rather than the BBC. The BBC just jumped on the passing bandwagon.
Very nice Anthony.
Hottest global temperature on record is meaningless. Recording global temperatures started in the 1880s. That is less than 150 years. The instruments used to measure global temperatures then aren’t the same as now. The accuracy of the early instruments wasn’t the same as now. Do we know the placement of the instruments then and the time the measurements were taken and who took the measurements?
We do know the accuracy of today’s instruments, we know when the measurements are taken, we know who takes them and we know what time they are taken. Knowing all that we also know many instruments are in unacceptable places and these readings shouldn’t be used. We know that not all the temperatures being used were actual instrument readings. We know that there are human actions destroying the usefulness of the readings we are using. The instruments are acceptable but they are located in parking lots, or near runways or near air conditioning units or near buildings and on and on. Not to mention homogenizing readings, do we even know what’s happening there?
With all of that these dishonest goons expect us to believe that they can accurately measure global temperature to within a hundredth of a degree. That is BS of the highest order. The records in the 1800s are undoubtedly more honest than those today even if their instruments weren’t nearly as accurate.
The WUWT trendologists go on the warpath whenever I point this inconvenient little fact out.