Guest essay by Larry Hamlin
In a clear discrediting of NOAA’s and the media’s recent overhyped and flawed claim that “July 2021 was the hottest month ever recorded” (hype promoted by NOAA’s climate alarmist “scientists”), updated data from all major global temperature anomaly measurement systems (including NOAA itself, as discussed below) show that NOAA’s claim was exaggerated, deceptive and distorted.
The 4 other major global temperature measurement systems, the satellite systems UAH and RSS and the surface measurement systems GISS and HadCRUT, reveal that NOAA was an isolated outlier in making its exaggerated claim, a claim ridiculously overhyped by the climate alarmist media, as clearly demonstrated by the headline and picture shown in the above article by the AP’s decades-long biased climate alarmist activist Seth Borenstein.
The combined land and sea global surface temperature monthly anomaly data are available for each of the 5 major global temperature measurement systems at HadCRUT5, UAH6.LT, GISSlo, RSS TLT V4 and NOAAlo as discussed (with links) in the information provided below.
The UAH, RSS, GISS and HadCRUT global temperature monthly anomaly measurement systems showed that the highest July occurred in years 1998, 2020, 2019 and 2019 respectively and not year 2021 as claimed by NOAA.
Furthermore, NOAA’s “July hottest month ever” claim was both exaggerated and deceptive because it was based on a trivial and minuscule 0.01 degrees C above the prior NOAA July peak monthly anomalies, which occurred in years 2020, 2019 and 2016.
The 95% confidence interval (accuracy range) on NOAA’s July 2021 global monthly temperature anomaly is +/- 0.19 C, nearly 20 times larger than the minuscule 0.01 degrees C difference between July 2021 and the Julys of 2020, 2019 and 2016, meaning that the difference between these July temperature anomaly measurements is scientifically insignificant and unnoteworthy.
Further adding to NOAA’s and the media’s “hottest month ever” deception: this week (9/14/21), as part of its August 2021 global temperature anomaly system update, NOAA reduced its July 2021 anomaly value by 0.01 degrees C. That means July 2021 is not the “hottest month ever” but is tied with July 2019, with the July anomalies of 2020 and 2016 just 0.01 degrees C lower.
Where are the climate alarmist media headlines announcing NOAA’s embarrassing reduction in its prior reported July 2021 temperature anomaly “hottest month ever” hype and acknowledging this change in the public press? Don’t hold your breath waiting for the NOAA and media alarmist correction announcement.
The highest peak global monthly temperature anomaly for all 5 temperature measurement systems including the UAH, RSS, GISS, HadCRUT and NOAA measurement systems occurred over 5 years ago in year 2016 during the months of February and March.
More significantly, the media’s ignorant and misguided “July hottest month” exaggeration grossly distorted the global monthly temperature record by concealing the fact that global monthly temperature anomalies have been declining since the 2016 peak, as clearly reflected in all 5 global temperature anomaly measurement systems’ data records, shown below for UAH, RSS, HadCRUT, GISS and NOAA respectively.
The graph below shows the HadCRUT4 data. HadCRUT5 values run about 14% higher: the February 2016 peak anomaly is 1.22 C for HadCRUT5 versus about 1.1 C for HadCRUT4.
Of course, there will be no blaring news headlines or climate-science-ignorant (yet incredibly arrogant) TV broadcasters in the biased climate alarmist media acknowledging the flawed hype of the “July 2021 hottest month ever” scam, which was nothing but politically motivated climate “science” alarmist propaganda, consistent with the usual alarmism and media shenanigans built on exaggeration, deception and distortion.
The declining global monthly temperature anomaly data trends for all 5 major temperature measurement systems over the last 5+ years as shown above clearly establish that there is no climate emergency.
Additionally, the U.S. and EU who have been driving the UN IPCC climate alarmism political campaign for over 30 years have now completely lost the ability to control global energy and emissions outcomes through the IPCC’s flawed climate model contrived schemes.
In 1990, the year of the first UN IPCC climate report, the world’s developed nations, led by the U.S. and EU, were accountable for nearly 58% of all global energy use and 55% of all global emissions. But that dominance in global energy use and emissions by the developed nations changed dramatically and completely disappeared over the next 15-year period.
The world’s developing nations led by China and India took command of total global energy use in 2007 (controlling more than 50% of all global energy use) after dominating total global emissions in 2003 (controlling more than 50% of global emissions).
In year 2020 the developing nations controlled 61% of all global energy use and 2/3rds of all global emissions, and these nations are clearly on a path to further increase these commanding percentages in the future. The developing nations have no interest in crippling their economies by kowtowing to the western nations’ flawed, model-driven climate alarmism political propaganda campaign; they have announced to the world that they are fully committed to increased use of coal and other fossil fuels.
In year 2020 the developing nations consumed 82% of all global coal use with China alone consuming 54% of the world’s coal. China was the only nation in the world that increased both energy use and emissions in pandemic year 2020.
The U.S. and EU have not contributed to the increasing level of global emissions over the last 15 years. In fact, these nations reduced emissions during this time period by many billions of metric tons. Yet global emissions have continued to dramatically climb ever higher by many more billions of tons driven exclusively by the increased use and unstoppable growth of fossil fuel energy by the world’s developing nations.
Assertions by U.S. and EU politicians that massively costly, horrendously onerous and bureaucratically driven reductions of emissions will “fight climate change” along with bizarre claims of supporting a “net zero” future are ludicrous, disingenuous and represent nothing less than completely fraudulent proposed schemes.
It’s time for the developed nations to stop their scientifically incompetent, globally irrelevant, real world inept and purely politically driven flawed climate model alarmist propaganda campaign.
NOAA reported that July 2021 was the hottest month ever because July is the hottest month of the year and it was the hottest July on record, not because the anomaly for July was the highest anomaly ever recorded.
WHAT? Are you whitewashing?
You can’t read, why ever.
The confusion is strong in you.
Just looking up the UAH record, we have a couple of Julys warmer than this year’s.
I think it is obvious that the NOAA is referring to their own temperature index, here, not to UAH.
NOAA’s index might be worth looking at if NOAA had no preconceived idea of what it should be.
Then not only does its own temperature reading not confirm what others are saying, it’s allowing the media to hype up a non-event. Your efforts would be better spent talking to NOAA and the media; let me know how you get on.
UAH6 is the only temperature data set that matches SST data closely.
I’m not sure what you’re trying to show – you need to place everything on the same baseline to compare, like this. I would hope that HadCRUT is consistent with HadSST, since HadSST is the SST component of HadCRUT.
You made an offset adjustment for two data sets to match up, which changed their true starting points downward.
You manipulated the numbers, shame on you.
HadCrut starts at about -.175C in 1979
UAH6 starts at about -.45C in 1979.
My friend, the original link that you provided had the HadSST series offset to place it on the same baseline as the UAH series. HadSST and HadCRUT are on the same baseline, so you should have shifted both, or neither. If you want to claim that changing the baseline is “manipulation,” then this is what you get, and by your logic UAH doesn’t match the SSTs at all.
And, again, I will point out that HadSST is the SST component of the HadCRUT dataset, so there is no question whatsoever that both series are consistent.
Ahh I missed that, you are correct.
But even then the SST and HadCRUT don’t match up either; it requires an offset to line them up together.
That is because the oceans are warming more slowly than the land (in no small part because water has a higher heat capacity than land surfaces do). So it isn’t an offset but a difference in trend.
How does your comment reconcile with this post?
The bottom line is, July was NOT any hotter than any other in recent years! Anyone who was born before 1990 already knows this. If the only number we could go by were from the past 20-30 years, then, MAYBE, it might have seemed hotter. I remember all the way back to the late 1940’s and believe me, it was MIGHTY hot then, too! I hate it when these Johnny-come-lately kids try to blow smoke up everyone’s az!
At least in North America, Hansen said the year 1934 was 0.5C warmer than 1998. I assume there was a pretty hot July in 1934, although I don’t have the exact numbers, and may never have the exact numbers considering how the temperature record was bastardized.
But Hansen had a colleague who wrote to Hansen and said his estimate of the difference between 1934 and 1998, of 0.5C was consistent with his figures. This communication is enshrined in the Climategate emails.
Oh, wow! 1998 was hotter than 1934 by 2 one hundredths of a degree. Maw, fetch me my micrometer.
That was in the US Only.
What a hideous travesty! A hideous, expensive travesty. Thanks for the graphic example of the Big Lie, told by promoters of Human-caused Climate Change.
The Hockey Stick charts are computer-generated science fiction.
Real temperature charts from around the world tell a completely different story. They say we have nothing to fear from CO2, and they say today is not the hottest time in human history.
That’s the lie told by the Hockey Stick charts and their creators.
Wasn’t it Gavin Schmidt that essentially said climate is only important where you live? And, given all the temperature recordings’ problems and adjustments, the world has only warmed an estimated approximately 1.0 C in 220 years. Maw, go fetch me my speedo.
That’s after adjustments. After the temperatures stopped climbing in 1998, the Data Manipulators started cooling the recent past to make the present look warmer.
About 2007, Hansen was still saying 1934 was warmer than 1998, but they whittled the difference down in their computers in coming years to the point where they were showing 1934 as cooler than 1998. It’s all a scam.
Below is an analysis I did comparing unadjusted data (black line) to the major temperature indices from NASA and others. The raw data show that 1998 was much warmer than 1934 globally.
I personally don’t have confidence that the “unadjusted data” isn’t adjusted. As an example, the weather for Cape G. Missouri was originally missing for 3/2/2021 but was later “fixed.” The issue is that the high temperature now recorded for that day is about 20 degrees warmer than what the weather reports were for that date. There are also issues with weather stations over that period of time, such as ones located near airport runways. Could anything have changed at airports since WWII that would cause a local increase in temps unrelated to greenhouse gasses?
You provide further proof that the governments cool the past and warm the present and near-present. Without the recent Super El Nino there would be no significant warming in the late 20th and early 21st Centuries. UAH6 shows what is actually happening in the atmosphere and ARGO does likewise for the oceans. They both show minimal 21st Century warming, with UAH6 showing an almost 19-year halt in warming before the recent Super El Nino. The governments’ data, however, show what the politicians want to show us.
The radiosonde, satellite and ARGO observations during the decade of the 2020s and early 2030s will tell the tale, one way or another. With China, India & etc. pumping out CO2, anything other than a steep increase in global warming will put the final nail in the UN IPCC CliSciFi models’ coffins.
Back before everyone had AC, hot was just another summer day; now people hibernate inside in climate control, and when they go out it feels horrible to them. This, I think, is the biggest reason people actually believe it is hotter today than it was 50 years ago. Even if you believe the BS numbers that show it is a little hotter today than back then, the data show the difference is very small and it’s all at night; but ask any true believer and they will tell you they can feel how much hotter it is now than back in the day. This is evidence of the strong brainwashing that is going on today and, worse, how ridiculously susceptible to it people have become.
You manipulated the numbers, shame on you.
HadCrut starts at about -.175C in 1979
UAH6 starts at about -.45C in 1979.
Of course they have different values, they have different baselines! HadCRUT is the temperature compared to the 30-year average Jan 1961 – Dec 1990; UAH6’s baseline is Jan 1981 – Dec 2010. As the HadCRUT baseline is earlier and temperatures have risen, of course its anomalies are ‘warmer’. If you want to compare anomalies with different baseline periods you have to apply an offset so all are being compared to a common average. This is all spelt out in the woodfortrees documentation, where they give the required offsets to align with UAH6.
So the correct 1979 Hadcrut4 value to compare with UAH6 is (-.175 – 0.29 =) -0.46.
Comparing raw anomalies with different baselines and no offset is like comparing the heights of two people while one is standing on a box.
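A minimal sketch of the re-baselining point above, with invented anomaly values: two series describing the same underlying temperatures will disagree simply because each is expressed against a different baseline period, and agree once both are shifted onto a common reference period.

```python
# Toy illustration of re-baselining (all anomaly values invented):
# two series describing the same underlying temperatures disagree only
# because each is expressed relative to a different baseline average.

def rebaseline(series, ref_years):
    """Shift a {year: anomaly} series so its mean over ref_years is zero."""
    offset = sum(series[y] for y in ref_years) / len(ref_years)
    return {y: round(v - offset, 3) for y, v in series.items()}

hadcrut_like = {1979: -0.175, 1980: -0.100, 1981: -0.050}  # vs 1961-1990
uah_like     = {1979: -0.465, 1980: -0.390, 1981: -0.340}  # vs 1981-2010

common = [1979, 1980, 1981]
print(rebaseline(hadcrut_like, common))
print(rebaseline(uah_like, common))
# Once both are referenced to the same period, the two series agree.
```

The constant gap between the raw series is exactly the difference between the two baseline averages, which is why a single fixed offset lines them up.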
UAH says land is warming 50% faster than the ocean so that fact that UAH’s global trend matches the SST trend is pretty odd.
Land temperatures are whatever the governments say they are. Actual long term atmospheric temperature trends should mimic long term SST trends. IIRC, studies show the ocean temperatures drive atmospheric temperatures. The UAH6 trend just confirms that.
Are you saying the government told UAH what to report?
NOAA engages in data tampering of the US data.
In what way?
By systematically altering historical and new data to change temperature trends … https://www.youtube.com/watch?v=Pvuhxv1Ywd4
That video seems to be Heller making his same old errors. He doesn’t grid the data, which creates huge biases in his averages, and he doesn’t apply any adjustment for TOBs (time of observation bias), which creates moderate bias in his temperature graphs and a huge bias in his graphs of “% of days above x temperature.” He also seems continually confused (or at least wants his viewers to be confused) about why NOAA performs infilling. Do you have a better source?
The “source” is the difference between the Raw and the Altered data. The “altered” data is in essence manufactured misinformation, not data. Like they say: if the data doesn’t fit the theory, change the data.
The data aren’t being altered – they’re being used in an analytical product, which must account for inhomogeneities in the station records. For NOAA, whose approach requires continuous records, this means doing infilling using nearby stations. Otherwise, you’re inserting the global average in for the missing value, which you certainly don’t want to be doing when you have better information. Of course, there’s no need to infill if you use gridded anomalies, which Heller could do, but then he’d get the same answer as NOAA.
Explain why USHCN (US Historical Climatology Network) station observations prior to 2008 received cooling temperature adjustments and those after 2008 received warming adjustments … https://www.youtube.com/watch?v=vnmzOeG_N64
This video commits all of Heller’s usual fallacies.
You seem to be fascinated with the word Heller. I don’t care what he did or does; I only care why our tax monies are being used to alter good data. So, again, explain why USHCN (US Historical Climatology Network) station observations prior to 2008 received cooling temperature adjustments and those after 2008 received warming adjustments.
You linked to another of Heller’s video as a source for your argument. I’m pointing out that there is no evidence of nefarious data tampering. The big differences Heller finds in the raw vs. adjusted data are coming from his own analysis errors. There is nothing in the video suggesting that temperatures were cooled prior to 2008 and warmed after, and you’ve provided no other evidence of this, so there’s nothing for me to address there.
With over 37 million views, isn’t it interesting that no one has yet found Tony’s USHCN analyses in error? Have you ever looked at the data?
Are you kidding? Heller has been debunked time and time again.
Here’s one – it makes much the same points as weekly_rise.
That link does not answer the core question … why (on average) were the USHCN station “raw” data cooled before 2008 and warmed after 2008?
You do understand that “no evidence” is not a proof that nothing occurred, don’t you? You can’t say there is no evidence of rain inside my house, therefore it is not raining.
The fact that I don’t see fairies dancing in my living room also isn’t proof that fairies don’t exist, but absent any evidence of their presence I have no reason to believe they exist.
The problem you overlook is that you also have no reason to believe they don’t exist. I’ll say it again the lack of evidence proves nothing. What you believe in that case is based purely on faith. Since faith is the basis, you can not denigrate someone who chooses the opposite. An argument asserting wrongness is facetious from the get go.
If you approach me and say, “explain why there are fairies in my back garden,” I would reply, “you haven’t shown any evidence that there are fairies in your back garden, so there is nothing for me to explain.” That’s not me asserting that you are wrong; that’s me asserting that the burden of proof has not yet left your shoulders. John demands that I explain why adjustments before 2008 are cooling the trend and adjustments after 2008 are warming it, but he’s provided no evidence that this is actually occurring, so there’s nothing to explain.
A more important point might be that the raw data is not saved by NOAA, but only the adjusted and rewritten data.
The “Raw” and the “Edited” data are publicly available for download. NOAA does not make it easy to find — but it’s there — and it better be because our taxes paid for it.
Infilling is not creating measured data. It is creating an artificial metric that may or may not be useful.
Take your grids and find the trends. You’ll never find enough grids with hot enough values to offset the areas with little or no warming and areas with cooling.
Unless you have a thermometer covering every single point on Earth’s surface, you are performing infilling whether you want to or not – the values you do have are providing estimates of all the values you don’t have. If you choose not to use nearby stations to estimate the missing values, then you’re using every point on the planet to estimate the missing values. I’m not sure about you, but I don’t think the best estimate of temperatures in Death Valley are the temperatures in Antarctica or Tibet.
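To make the “nearby stations” point concrete, here is a minimal sketch of one simple infilling approach, inverse-distance weighting. The station coordinates and anomaly values are invented for illustration, and real products such as NOAA’s use more sophisticated pairwise homogenization, so treat this only as a sketch of the idea.

```python
import math

def idw_infill(target, stations, power=2):
    """Estimate the anomaly at `target` (lat, lon) from nearby stations.

    stations: list of ((lat, lon), anomaly) pairs with known values.
    """
    num, den = 0.0, 0.0
    for (lat, lon), anom in stations:
        # Flat-earth distance approximation, fine for a small region.
        d = math.hypot(lat - target[0], lon - target[1])
        if d == 0:
            return anom  # coincident station: use its value directly
        w = 1.0 / d ** power  # nearer stations get more weight
        num += w * anom
        den += w
    return num / den

# Invented anomalies (deg C) at three hypothetical Kansas-area stations.
nearby = [((39.0, -95.7), 1.2), ((38.8, -97.6), 1.5), ((39.8, -95.5), 1.1)]
est = idw_infill((39.2, -96.6), nearby)
print(round(est, 2))  # a value bounded by the neighboring anomalies
```

Note that the estimate always falls inside the range of the neighboring values, which is exactly why infilling from nearby stations beats implicitly substituting the global average.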
You do the *exact* same thing when you use anomalies. When you combine the anomaly from Denver with the anomaly from New Orleans exactly what are you doing? You don’t know what caused the anomaly in either location so how can the best estimate of temperature be based on anomalies?
If you can not have temps from every point on earth, why do you worry about trying to invent a fake metric that has no meaning anyway? You do understand that GAT describes no place on earth accurately, right?
If GAT accurately represented every point on earth, you really would only need one thermometer to determine what is happening. The fact that you need to create pseudo-temperatures through infilling amply demonstrates that GAT is a made-up metric with no meaning. It is like finding the average height of a herd of half Clydesdale and half Shetland horses. Exactly what good is that metric? It describes nothing accurately.
Knowing the planetary mean surface temperature is quite handy if you want to go and visit that planet and are wondering what clothes to pack, wouldn’t you say? Mars is a much colder planet than earth, for instance. But how can I make such a claim?
Mean surface temperature is clearly a useful metric to calculate and to track – changes in the mean surface temperature are directly correlated with changes in climatological variables like ice volume and sea level.
The mean surface temperature of Mercury would be totally useless.
If we observed a long term change in the mean surface temperature of Mercury that would indeed be valuable information, agree or disagree?
“Valuable” in what context?
If you were visiting Earth from Mars would the GAT tell you what clothes to pack? What if you landed Miami vs San Diego? Or Port Barrow vs Mexico City?
The GAT doesn’t tell you the variance of the temperature profile at any specific location. And it is that variance that is of the most importance in deciding what clothes to wear. GAT is an almost useless metric .
Changes in mean surface temperature will *NOT* tell you anything about what is happening with climate. Again, climate is the entire temperature profile at a location and the GAT loses all that data in its calculation.
Think of it this way. If it is MINIMUM temps going up that raise the GAT, then that minimum temp has to rise far enough to change the volume of the oceans and to change the melting rate. If the sea level is determined by max temp, then min temps going up won’t change sea level much, if at all. If the temp on a glacier changes from -10C to -9C, how much extra melting will occur, since both temps are far below the melting point of 0C?
There *are* other factors that are the real drivers of all of this. But the climate models ignore them all. As Freeman Dyson pointed out years ago, climate models are *not* holistic models of the environment and only a holistic model can truly tell you what is going on with the environment. The current climate models were useless in predicting the greening of the earth that has happened since 1980. Why is that? Ans – because the GAT doesn’t even begin to address that piece of the holistic model. Instead we are told that the GAT going up means the Earth is turning into a cinder and crops are going to fail and deserts will overtake the planet. Each claim totally unsupported by the GAT because the GAT can’t tell you what is happening on the Earth.
My answer was tongue in cheek – but you realize that there is a difference in the mean surface temperature of the planets, and you realize that the mean surface temperature of the earth indicates something about the state of the climate system (if the mean temperature is going up ice sheets will be retreating, going down they’ll be advancing, etc.). If the mean annual temperature is rising then we don’t much care if it is max or min daily temperature – the mean of both must be rising.
We don’t actually think there is a family in the US consisting of 3.15 persons, but the average family size in the US is unquestionably a useful metric to track.
Absolutely. The average family size is a useful metric to track. So is the average height of people. BTW, the Gormans think that given a ±1 cm uncertainty on height measurements, the uncertainty of the mean of 1,000,000 people is given by root sum square, or sqrt(1^2 * 1000000) = ±1000 cm. So if the average height is 168 cm, they think the 95% CI on that is -1832 to 2168 cm. And if you measure a million more people, the uncertainty of the mean increases even more, they say. Even as absurd as that is, they refuse to budge on their position regarding the uncertainty of the mean and continue to claim that it is actually statistics texts, expert statisticians, scientists, and everyone else that are all wrong.
I confess that I am utterly bewildered by Tim/Jim’s position on this. I’ve read as much of the debate between them and Bellman as I care to and their position is nonsensical. They literally seem to be saying that the larger your sample size the less certain you can be about the population mean.
That is exactly what their argument is. I’ve gone around with them several times as well myself. Bellman has the patience of Job.
I’ve generally found that I agree with Gorman’s opinion. However, I can’t speak for him. On the other hand, I believe you are misrepresenting what he has said. How about actually providing a quote instead of your interpretation of what you think he said?
The Standard Error of the Mean can be improved generally by taking multiple measurements of a stationary or fixed parameter. On the other hand, with non-stationary data, that is a time-series with a trend, both the mean and the standard deviation will change over time. If it is a positive trend, both the mean and standard deviation will increase over time. It makes no sense to claim that taking more measurements will improve the accuracy or precision when both are a moving target.
“But the uncertainty of the mean is the RSS of the individual uncertainties. You do not divide by N like you do when calculating the average. It’s just RSS. And the uncertainty of the mean is the RSS.” from here. That is not the only post like that though.
“On the other hand, with non-stationary data, that is a time-series with a trend, both the mean and the standard deviation will change over time.”
The inconvenient truth they all try to sweep under the carpet.
We aren’t discussing a timeseries of monthly anomalies or the uncertainty of a trend thereof. We are discussing the uncertainty on a single monthly global mean temperature anomaly only. The mean does not change over time. In other words, the mean for August isn’t any different because it is now September. Likewise the mean for 2020 isn’t any different because it is now 2021.
The Gormans say the uncertainty of the average of all grid cells in the global mesh is Utotal = sqrt(Ui^2 * N), where Ui is the uncertainty of each individual grid cell. Statistics texts, expert statisticians, and everyone else say it is Utotal = Ui/sqrt(N). This can also be demonstrated via a Monte Carlo simulation.
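The Monte Carlo demonstration mentioned above can be sketched in a few lines. Assuming each grid cell carries an independent random error of standard deviation u (an idealization; real grid-cell errors are partly correlated), the spread of the mean’s error tracks u/sqrt(N), not u*sqrt(N) (the latter is the spread of the SUM):

```python
import random

random.seed(42)

def mean_error_spread(n_cells, u, trials=20000):
    """Std. dev. of the MEAN's error across many simulated 'months'.

    Each cell's true value is 0; each measurement adds Gaussian noise
    with standard deviation u.
    """
    errs = []
    for _ in range(trials):
        mean_err = sum(random.gauss(0, u) for _ in range(n_cells)) / n_cells
        errs.append(mean_err)
    m = sum(errs) / len(errs)
    return (sum((e - m) ** 2 for e in errs) / len(errs)) ** 0.5

u = 0.5
for n in (4, 16, 64):
    # empirical spread of the mean vs. the u/sqrt(N) prediction
    print(n, round(mean_error_spread(n, u), 3), round(u / n ** 0.5, 3))
```

Quadrupling the number of cells halves the empirical spread of the mean each time, matching u/sqrt(N); under the RSS claim it would instead double.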
You have just demonstrated that you don’t understand the problem. The current July is being compared with previous Julys. That is equivalent to sub-sampling to once a year, with July being the month for comparison. Thus, it becomes a time-series of Julys where the monthly average and the trend are generally increasing.
Comparing two or more monthly values is a completely different topic, one which would undoubtedly come with its own challenges by contrarians. But before you can even compare monthly values you must first produce the monthly values and provide an uncertainty for each one. The challenge being made on a single monthly value basis is that the uncertainty is not Utotal = Ui/sqrt(N), but Utotal = sqrt(Ui^2 * N), where Ui is the uncertainty on individual grid cells and N is the number of grid cells. And it’s actually even more fundamental than that. Some contrarians on here think the uncertainty of the mean of any set of values is described by the RSS as opposed to the SEM. That is the core issue being debated, and at its very core it actually has nothing to do with comparing two or more means, or even with temperatures at all.
We can’t tell if the July average is increasing, because the mean keeps increasing. Is that what you are saying?
I’m basically saying that because the mean is changing during any month, and over a period of years, the standard error of the mean cannot be used to justify a higher precision than any individual reading. Therefore, with low precision, one cannot distinguish a 0.01 deg C difference.
Increases in precision can only be justified with stationary data, when the variations from reading to reading are random and normally distributed.
Of course, he then goes on to explain at great length about the uncertainty of the sum, without ever mentioning the average.
So, if an extraterrestrial alien decides to visit Earth, and lands in Antarctica in the Winter, or the Sahara in Summer, he/she/it will have adequate clothing for the environment based on the average temperature? No way! What is important is the range of seasonal temperature extremes, or the likely local temperature for the locality for landing. However, it is typically extreme temperatures that kill any organism, so it would still be more useful to know the range than to know the mean.
“they’re being used in an analytical product, which must account for inhomogeneities in the station records”.
Lmao, how many times have you bought the Brooklyn Bridge? People are so ridiculously easy to fool these days.
Quiz: I want to take the average height of two people, so measure each of their heights from the floor to the tops of their heads. I see that one person is barefoot, the other is wearing 6″ platform heels.
a. take the average exactly as-is, because data tampering is a sin and the word “inhomogenous” sounds like something Satan would say?
b. subtract the height of the platform heels before taking the average?
In this case, wouldn’t “account for inhomogeneities” be a subjective judgement of the person doing the “accounting”?
In other words these computer temperature adjustments are based on personal opinions of the adjuster. A biased person could easily insert his bias into the calculation.
I don’t buy the adjustments, especially those before the satellite era. Instead, we should stick with the unmodified, regional temperature charts to guide our path, and they tell us we have nothing to fear from CO2 because it’s not any warmer today than it was in decades past, so CO2 has not added any additional warmth to the picture, at least not enough to measure, or see.
The unmodified, regional surface temperature charts were made by people who had no climate change bias or agenda. They just recorded the temperatures as they saw them. So to eliminate the bias in the temperature record, let’s go with an unbiased source, the regional surface temperature charts.
That’s what I’m doing.
You don’t really need to do any adjustments at all, below is what the raw data (black line) look like compared to the adjusted datasets for the global land surface – you can see that the effect of adjustments is quite small:
As long as you’re accounting for station distribution (e.g. by gridding) and using the anomalies there isn’t much to worry about.
Nearby stations, nice! What is the NOAA infilled value for the two endpoints of the highlighted line between Hiawatha and Salina? Do you think NOAA would end up with a lower temp than the two endpoints?
Also, look at the variance (range) in temps over a small area. Do you think a range of 7 – 10 degrees in a small area can be adequately handled by an algorithm? With what uncertainty? I’ll bet it is a lot more than the 0.001 degree claimed precision. You will end up with something like 0.001 ± 1 degree.
“Also, look at the variance (range) in temps over a small area. Do you think a range of 7 – 10 degrees in a small area can be adequately handled by an algorithm? ”
You have just highlighted another excellent reason to use anomalies rather than absolute temperatures, spatial correlation is a lot better.
“An anomaly is the change in temperature relative to a baseline, which is usually the pre-industrial period, or a more recent climatology (1951-1980, or 1980-1999 etc.). With very few exceptions the changes are almost never shown in terms of absolute temperatures. So why is that?
There are two main reasons. First of all, the observed changes in global mean temperatures are more easily calculated in terms of anomalies (since anomalies have much greater spatial correlation than absolute temperatures). The details are described in the previous link, but the basic issue is that temperature anomalies have a much greater correlation scale (100’s of miles) than absolute temperatures – i.e. if the monthly anomaly in upstate New York is 2ºC, that is a good estimate for the anomaly from Ohio to Maine, and from Quebec to Maryland, while the absolute temperature would vary far more. That means you need fewer data points to make a good estimate of the global value. The uncertainty in the global mean anomaly on a yearly basis (with the current network of stations) is around 0.1ºC, in contrast to the estimated uncertainty in the absolute temperature of about 0.5ºC (Jones et al, 1999).
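The spatial-correlation point in the quoted passage can be illustrated with a toy example. The two station series below are invented: their absolute July temperatures differ by about 6 C, yet their anomalies against a common baseline period nearly coincide:

```python
# Invented July mean temperatures (C) at two hypothetical nearby stations,
# one in a valley and one on a ridge.
valley = {1951: 24.0, 1952: 24.3, 1953: 23.8, 2021: 25.1}
ridge  = {1951: 18.1, 1952: 18.4, 1953: 17.9, 2021: 19.2}

def anomaly(series, year, baseline_years):
    """Departure of one year from the station's own baseline climatology."""
    base = sum(series[y] for y in baseline_years) / len(baseline_years)
    return series[year] - base

base_years = [1951, 1952, 1953]  # stand-in for a 1951-1980 style climatology
for name, s in (("valley", valley), ("ridge", ridge)):
    print(name, round(anomaly(s, 2021, base_years), 2))  # both print 1.07
```

The absolute readings would need much denser sampling to average meaningfully; the anomalies agree across both sites, which is the argument for why fewer stations suffice.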
The problem with anomalies is no different than the problems with actual temps. The locations with higher values of anomalies will bias the total and increase the variance.
If you doubt this, when was the last time you saw a variance quoted with any anomaly average?
“… the observed changes in global mean temperatures are more easily calculated in terms of anomalies …”
Are you kidding? Do you really think that a computer has a big problem calculating numbers like 30 degrees vs anomalies like 2 degrees?
You have yet to show how any average of Tmax and Tmin over any time period shows what is actually changing. IOW, is Tmax increasing/falling, is Tmin increasing/falling, or some combination of both?
Somehow a metric derived from averages of averages of averages of averages has been assumed to be the correct depiction of what is happening everywhere globally. Averaging summer/winter, land/SST, coastal/inland, etc. simply can not tell you what is happening and where.
As evidence, I’ll give you a GAT anomaly of 1.5 degrees C. You tell me where and how the temperature at any given location has changed. If you can’t, the metric has little or no value to anyone other than allowing propaganda to be propagated.
Anomalies let one show a graph depicting a 5% increase in “temperature” rather than the 0.3% change in actual temperature. How scary!
For sure, this is why anomalies are much better to use. But surely you recognize that infilling a missing value from nearby stations is better than infilling the value with the mean of all the stations in the entire region for which you’re creating an average. Because that is what you’re doing if you leave the values as NULL.
NOAA is still altering data
In 2010 the anomaly for year 2010 was 0.62 C, but in 2020 the anomaly for year 2010 was 0.72 C (falsely adjusted)
Whoops! You didn’t get an answer, did you? Maybe he overlooked your question.
NOAA has to adjust the temperatures upwards so they can keep claiming we are experiencing the “hottest year evah!”
They have been doing it for years and getting away with it.
Why would you adjust a 10 year old year upwards in order to claim the current year was the hottest ever? And if their intention is to show every year is the hottest ever, why are so many years not the hottest ever?
Well, quite a few years are shown incorrectly as the “hottest year evah!” by NOAA.
“The world’s seven-warmest years have all occurred since 2014, with 10 of the warmest years occurring since 2005.”
Now reconcile the NOAA statement above with the UAH satellite chart (below). If you look at the UAH satellite chart, you could not say that any year between 1998 and 2016/2020, was “the hottest year evah!”, yet NOAA claims 10 of those years were the “hottest year evah!”
NOAA is putting out climate change propaganda. They are trying to scare people with their manipulated charts.
Look for yourself:
“The world’s seven-warmest years have all occurred since 2014, …” As you can see from the above, it is caused by a Super El Nino occurring at the end of their series beginning at 2014, something they don’t tell the plebs.
It is adjusted upward just enough to show the rate of warming that the UN IPCC CliSciFi models need (still not enough) but not hot enough to ruin the latest hottest year evah meme. It also might ruin the meme if somebody explained the difference between the energy required to raise the atmospheric temperature 1 C at the Antarctic vs the tropics. I’ve never done the calculations.
[And could someone explain to me what the difference to the globe a change of 0.01 (or even 0.001) C might be?]
Yes, warming the past. Must be an error!
Heller is always open to debate..try him….
He is happy to debate people (and I have engaged him in the past), but continues repeating the same incorrect things regardless. Smarter people than me have directly and clearly pointed out his errors to him many times over the years and nothing has changed, not one iota. Wasn’t he barred from contributing to this very blog because Anthony got fed up with this behavior?
(Tony Heller can post here as he has on occasion) SUNMOD
You’re wrong. Look at the analyses of Time of Obs bias – there should be as many biases from morning as from afternoon data. The stations without TOB show different statistics from those that have it. Besides, those corrections should be very minor.
NOAA screws with the data. See Dr. Humlum’s analyses as well.
ToBs adjustments make up about half of the magnitude of adjustments to the USHCN data. And they make a very substantial difference when your metric is “number of days with x temperature.” You’re literally talking about counts of days at a given temperature, which highly depends on the time of day that measurements are being taken (a hot day will be counted twice with an afternoon reading, a cold day counted twice with a morning reading). His approach makes it completely impossible to parse climate trends out of trends resulting from changes in the station network composition or observing practices. And he knows this, it’s been painstakingly pointed out to him innumerable times.
Most temperature monitors had (have) min/max recorded temperature for the 24h. I think the argument was originally about the time of reset. This is all a red herring and Heller envy. He’s got you on the run.
You are correct – if I reset the thermometer at the hottest part of the day, and the next day is colder than the day the thermometer was reset, then the max temperature recorded for the “next day” will actually be the max of the previous day. You’re double counting warm days. Vice versa for cool days.
If there is a gradual shift over many decades from volunteers taking readings in the afternoon to taking readings in the morning, which there was, it will impart a spurious cooling trend into the network that is not a climatic effect.
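The double-counting mechanism being argued about here is easy to simulate. The sketch below is a toy model, not any agency’s adjustment code: synthetic hourly temperatures with a mid-afternoon peak, read off a max thermometer that holds its value until reset.

```python
import math
import random

def hourly_temps(days, seed=42):
    """Synthetic hourly temps: a sinusoid peaking at 15:00 around a random daily level."""
    rng = random.Random(seed)
    temps = []
    for _ in range(days):
        level = 20 + rng.gauss(0, 4)  # warm and cool days
        for h in range(24):
            temps.append(level + 6 * math.cos(2 * math.pi * (h - 15) / 24))
    return temps

def recorded_maxes(temps, reset_hour):
    """A max thermometer holds the highest temp seen since the last reset."""
    maxes, i = [], reset_hour
    while i + 24 <= len(temps):
        maxes.append(max(temps[i:i + 24]))
        i += 24
    return maxes

temps = hourly_temps(1000)
pm = recorded_maxes(temps, 17)  # 5 pm reset: a hot evening can top the next cool day
am = recorded_maxes(temps, 9)   # 9 am reset: the afternoon peak falls mid-window
print(round(sum(pm) / len(pm) - sum(am) / len(am), 2))  # positive: warm bias from pm resets
```

With an afternoon reset, whenever a hot day precedes a cooler one the held evening reading exceeds the next day’s true max, so warm days are counted twice; a network drifting from afternoon to morning resets would therefore show a spurious cooling that is not climatic, which is the effect the ToBs adjustment targets.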
None are so blind as those who will not see! The readings, min/max, are recorded for the day and reset for the following day. The readings, min/max, are each for the date of interest.
Not always. That is what Weekly_rise is trying to explain. Obviously this is not an issue with MMTS instruments, but its definitely an issue with LiG instruments.
TOBS is only necessary if you are trying to create a population of similar things. It is one reason that trends from local and regional locations do not add up to the GAT. Which is more important? The local and regional temps!
“Wasn’t he barred from contributing to this very blog because Anthony got fed up with this behavior?”
I see you are desperate. Anthony didn’t get fed up with anything that had to do with the temperature record. The controversy was about whether CO2 could freeze out solid in Antarctica or not. Get your facts straight.
What other facts don’t you have straight?
Very good point Tom — thanks for that note.
That certainly seemed to be a part of the reason Anthony gave him the boot. But, as Anthony said in a comment on another blog some years ago, it was Heller’s overall pattern of dishonest behavior, including refusal to admit to his many errors on USHCN, that drove the banning:
I don’t see anything there where Anthony says Heller is dishonest. One can be stubborn, argumentative, and even wrong, while at the same time not being dishonest. If you believe what you say, you are not being dishonest, even if you are wrong.
I guess that “dishonest” characterization is yours, not Anthony’s.
From Judith Curry’s Blog CE:
Nick Stokes | June 29, 2014 at 8:52 pm |
omanuel | June 29, 2014 at 8:25 pm |
“Steven Goddard aka Tony Heller has issued an open invitation to debate those who disagree with him.”
I remember an earlier invitation
“Again, I am happy to debate and humiliate anyone who disagrees with this basic mathematics.”
I’ve tried. His motto seems to be “never explain, never apologize”. And his basic mathematics is hopeless.
As hopeless as your grammar?
That is Nick Stokes’s “grammar”
Unless you mean “From Judith Curry’s Blog CE” ?
So I will just assume a sad denizen desperate to get in a disparaging dig at someone that they cannot counter with anything else.
I’m guessing Carlo, Monte mistakenly thinks the word mathematics is plural.
Heller, unkindly, also publishes the official charts showing how warm it was in the 1930s.
And backs those charts with headlines from the period.
Government is “cooking” the books to promote fear.
On a subject that only government can fix.
First, his “official charts” are his own – compiled using raw station data without taking into account the uneven spatial distribution of surface stations (i.e. gridding or any other weighting scheme). His results arise because of his incorrect methodology, not because anyone is nefariously fiddling the data. The black line in the graph linked below is the raw GHCN station data, with no adjustments applied whatsoever, but gridded to avoid oversampling the areas with the highest station density:
The trend is almost indistinguishable from the major indices from NASA, Berkeley Earth, and the CRU.
Second, the 1930s were quite warm in the contiguous US, so it is hardly surprising that Heller can find lots of newspaper clippings from the time saying so. This does not suggest that the US, or the globe, was warmer in the 1930s than it is today.
“and he doesn’t apply any adjustment for ToBs,”
Heller says ToBs are unnecessary and then goes on to demonstrate it.
ToBs are personal opinions, when it comes to the historical temperature record. Personal opinions and interpretations, and psychoanalysis of those who recorded the historic temperature readings so many decades ago, are not data.
He goes on to “demonstrate” that ToBs adjustments are unnecessary by using the same data containing all the other inhomogeneities he ignores. ToB is not a personal opinion – it is well documented that volunteers shifted from an afternoon read time to a morning read time over the course of the 20th century in response to a directive to improve precipitation measurements. The bias that this imparts into the historical trends is also not an opinion. This bias is especially prevalent in the analyses Heller presents showing “percentage of days above 90 degrees.” If you double count warm days in the past and gradually shift to double counting cool days instead, you’re going to produce a downward trend in the number of days above 90 degrees that has nothing to do with the climate. This will be a much bigger bias than even the bias in the daily mean temperature trend.
“it is well documented”
So you claim.
“Heller says TOBs are unnecessary and then goes on to demonstrate it.”
And Heller is plainly wrong.
It’s intuitively obvious that if you record the daily max on the evening of a hot day (and very importantly RESET the thermometer at the same time), then if a change to a cooler airmass occurs overnight it will result in the previous day’s high temperature still being on the max thermo at the next reading – so the previous hot day is recorded twice when the second day may be several degrees cooler.
Recording the max at 9am (and resetting) stops that.
This is Nick Stokes’ analysis ….
“Because evening TOB has a warm bias, through double counting very warm afternoons, the change to 9am has a cooling effect. Here is a histogram of the putative effects for the 190 stations of changing from 5pm observing time to 9am. Positive effect indicates (in °C) the bias that has to be subtracted from temperatures before the change.”
Since these all use opposite hemisphere data, winter vs summer, one needs to know the other statistical parameters associated with the mean. Tell us what the variance, skewness, and kurtosis parameters are. If these aren’t available, your GAT is meaningless.
The only thing they care about is division by the square root of N.
Look at the difference between 1998 and 2016 on the UAH satellite chart (below), and you will see that 2016 shows to be 0.1C warmer than 1998. The 0.1C figure is within the margin of error for the measuring instrument so 1998, and 2016, are basically tied for the warmest period in the last few decades.
Then, look at the NOAA chart, at the difference between 1998 and 2016. The NOAA chart shows that 2016 is about 0.4C warmer than 1998, so the NOAA chart is showing a big difference between 1998 and 2016, and it certainly doesn’t show they are tied for the warmest temperatures in the recent past.
How do you explain this discrepancy?
To me, this discrepancy is a transparent effort to manipulate the temperature record so NOAA can proclaim year after year as the “Hottest year evah!” and thus scare people into believing in the Human-caused climate change scam.
NOAA couldn’t be proclaiming any “hottest year evah!” using the UAH satellite chart because it shows there are no hotter years than 1998 until we reach the year 2016.
NOAA’s data manipulation of the temperature record is pure climate change propaganda.
The “discrepancy” is explained by the fact that these are different temperature estimates made using very different instrumentation, at different grains, and compiled using different methodologies. The NOAA is using surface (0-2 meter) temperature measurements taken by thermometer after steps to homogenize the station network, while UAH is using lower troposphere (0-10km) measurements made using MSU instruments on satellites, converted to temperature estimates after applying adjustments for drift, orbital decay, sensor calibration, etc.
The discrepancy is a red flag for the significant difference between 2 adjacent measurements. In addition to data altering, are you aware of the station location management activities of NOAA?
It is not a red flag at all to people who recognize a difference between satellites and weather stations. It is also worth noting that UAH does not even agree on the difference between 1998 and 2016 with other satellite-derived temperature estimates. It is the dataset of choice amongst those doubtful of global warming, but it is the singular outlier amongst its peers. Perhaps the discrepancy you note is actually suggestive of a flaw in the UAH methodology.
I guess we’ll all have to suffer under our red-flag climate … https://www.youtube.com/watch?v=p7WsUECyDkc
I posted a rebuttal to your UAH being the outlier somewhere. Maybe farther down in this thread. I’ll look for it.
Roy Spencer says the other databases are the real outliers, because they include data from a particular satellite that Roy says is trending too warm. Roy dropped it from his data, but NOAA and the rest are still using this “hotter” data, which causes them to register higher temperatures than the UAH satellite.
I just found the rebuttal a few comments below this one.
Mears adjusted RSS in response to the pressure he was getting from CliSciFi practitioners that his results were being used to discredit their memes by the ‘denier’ crowd
IIRC, NOAA’s satellite data and the reanalysis datasets are both closer to UAH6.
Two different years almost two decades apart have average global temperatures recorded using two different methods. The difference in temperature between the two years using method A (0.1 C uncertainty) is 0.1 C, meaning there is no statistical difference in the temperatures over the period of almost two decades. The difference in temperature between the two years using method B (unknown uncertainty due to wildly varying measuring conditions) is 0.4 C, meaning the measurements are useless because we can’t estimate their accuracy. Let’s go with method A since method B is worthless.
The above implies we should use pre-radiosonde, pre-satellite and pre-ARGO temperature measurements only to reveal general temperature movements. For scientific work, only radiosonde, satellite and ARGO data should be utilized.
Somebody prove my analysis is incorrect.
Additionally, anybody screaming “hottest year evah” is an ideologue, not a scientist. Any organization that publishes reports to the public about “hottest year evah” without clear explanation of uncertainties is politically corrupt.
“I think it is obvious that the NOAA is referring to their own temperature index”
NOAA’s “temperature index” is not in Kelvin or Celsius degrees???
Temperature “index”??? Is the SI temperature scale not good enough?
Is NOAA doing science or is it doing hocus-pocus???
I saw the Steve Miller Band play at the Punchbowl in Hawaii. They flew the band in by helicopter. My buddy met the love of his life that day at the concert.
Yep. Larry hasn’t grasped even the basics of the topic he is writing about.
The irony is strong in this one.
Larry is also reporting HADCRUT for July 2021 before it has been released.
“New versions were published in December 2020: HadCRUT5, CRUTEM5 and HadSST4 (see papers). These are the recommended versions because of the improvements in data and processing methods over the previous versions.
They are updated every couple of months and currently have data up to June 2021”
People less kind than myself could be forgiven for concluding Larry is a bit clueless.
That is a very slow process they run on; all other sets have already published. Why so slow, John?
Why did Larry say they had reported, when they haven’t?
Seems a more relevant question, in the context. 😉
Probably a mistake in his assuming that a large data set would have been updated weeks ago?
You still haven’t explained why they are over 1 1/2 months late in updating just a single month of data, very easy to do on the computers they have.
Why the long delay John?
This question in this context is completely irrelevant. Furthermore, John has no obligation to answer this question at all. He’s just pointed out that the July 2021 result is not published yet.
Translation: I don’t have an answer to offer on it either.
No. The correct translation: you just tried to change topic (with insisting on getting answer to an irrelevant question) when it became obvious you were wrong.
Probably a mistake
A mistake which combined with his feeble grasp on anomalies invalidates the entire piece.
I personally think all traditional datasets should delay reporting for 3 months. It takes about that long for the majority of already digitized records to make it into the observational repositories. By updating so quickly we’re really only getting preliminary results that then change, often significantly, within the next few months. There’s still the issue of slow upload streams that are delayed longer than a few months, and the even more onerous problem of handwritten records that often take years to get digitized and uploaded, but usually within 3 months the vast majority of the observations are available, so I think it is a reasonable compromise. Just my $0.02…
It takes that long to decide what the readings should be, yes
Assuming that was a serious post, I’m not sure what you mean. The issue is with the timing of the upload, not what the readings actually are. Other datasets like reanalysis have rigid assimilation windows – as little as 1-2 hours in some cases – that are often hard cutoffs, such that if the data isn’t delivered in the specified window then it is not incorporated, with some exceptions. As a result these datasets are generally available within a few days of the end of the month and are mostly locked in at that point. For example, NCAR is lagged by about 1 day. Copernicus reporting is lagged about 1 week. Of course, reanalysis assimilates more observations in a single month than these traditional datasets have over their entire periods, so it’s not really comparing apples-to-apples.
And suitable news window.
I tend to agree, or at least they should put big health warnings that any reported data is likely to change slightly over the next few weeks and months.
The problem I think is that people have become a bit too obsessed over every monthly release (myself included), rather than looking at the big picture.
Yeah. I’m always eager for the monthly updates too. Maybe just better communication to the general public that observations continue to roll in and that the published figures will change as the later updates incorporate them. Of course, the argument is that if you’re seriously analyzing these datasets you should already be aware of the timing of uploads. I guess what I’m saying is that for those truly familiar with these datasets its rather obvious already.
The first thing the marxists ALWAYS do is manipulate numbers, from temp data to COVID deaths to election results.
It really is amazing how alarmists actually believe they understand the science.
As the article pointed out, only NOAA came to the conclusion that July was the hottest, the other 4 didn’t. Are you in the habit of only choosing the data set that shows what you want to see?
Beyond that, the claimed “record” was so far below the confidence level as to be completely meaningless. Only clueless trolls would try to tout a 0.01C increase as meaningful.
And, it was only a temporary “conclusion” that they then corrected downward.
But it got the headline.
The correction didn’t.
Leave them with the desired impression.
That’s all that matters.
“Only clueless trolls would try to tout a 0.01C increase as meaningful.”
In the NASA data July 2021 is only 0.02C cooler than July 2019. Must both be the warmest month on record, by that logic. 😉
You don’t understand anything about confidence intervals, do you.
How about you? 🙂 As far as I know you don’t have any tertiary education, and I seriously doubt you’ve learnt anything about statistics in high school. Regarding John, he seems to know what he’s talking about.
Hmm, so you’re saying anyone who didn’t go to university is thick. Well you certainly disprove the notion that university education is a sign of intelligence
Well, anyone who didn’t go to uni has problems with “confidence intervals”. For that matter, most of those who did go have this problem too 🙂 Anyway, MarkW was bullshitting about confidence intervals as if he knew what he was talking about when it was obvious he had no idea.
Exactly, like Rory or some other guys here. But the thing is there’s a strong correlation. Rory is an outlier.
Anyone who thinks you can take 21 students in an electrical engineering lab, have them build 21 circuits, individually measure the amplifiers, and combine the results to get a true value simply hasn’t learned about uncertainty (i.e. confidence intervals). It doesn’t matter how precisely you calculate the average of the 21 results; the uncertainty in the average will remain the uncertainty associated with the elements used to build the amplifiers and the uncertainties associated with the measuring devices.
You don’t seem to understand this basic fact of physical science. And yet you are willing to state that those of us that *do* understand physical science actually don’t. Heal thyself, physician.
🙂 Yeah, for sure. How do you think they measured gravitational waves with displacements like the diameter of a proton? Did they have a single instrument with uncertainty less than that? A very good ruler? And a good eyeball? Why does it take weeks to get the result? (Hint: postprocessing,”precisely calculating”).
“How do you think they measured gravitational waves with displacements like the diameter of a proton? Did they have a single instrument with uncertainty less than that? A very good ruler? And a good eyeball? Why does it take weeks to get the result? (Hint: postprocessing,”precisely calculating”).”
Because they were taking multiple measurements of the same measurand using the same instrument. Thus forming a probability distribution around the true value. By analyzing that probability distribution they could close in on the true value.
When you are measuring DIFFERENT MEASURANDS there is *NO* probability distribution formed.
It would be like averaging the gravity of Mercury, Venus, Earth, Mars, and Jupiter. Calculate the average to the millionth decimal place. YOU STILL WON’T HAVE THE TRUE VALUE FOR GRAVITY. And you will not have reduced the uncertainty associated with that mean by one iota. That’s because those independent, random measurements of different things don’t form a probability distribution.
It’s no different than measuring the temp at noon and at midnight and expecting the average of the two to have a lower uncertainty than each individual temperature. That average is not a true value of *anything* and the two measurements do not form a probability distribution that can be analyzed statistically.
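The textbook result both sides keep invoking – that averaging repeated readings of one fixed measurand narrows the estimate roughly as 1/sqrt(N) – can be checked numerically. This sketch assumes idealized, independent, zero-mean instrument noise; whether those assumptions carry over to combining readings of different measurands is exactly the point in dispute above:

```python
import random

rng = random.Random(1)
TRUE_VALUE = 10.0   # the single fixed measurand
SIGMA = 0.5         # instrument noise, assumed independent and zero-mean

def mean_of_n(n):
    """Average n noisy readings of the same quantity."""
    return sum(TRUE_VALUE + rng.gauss(0, SIGMA) for _ in range(n)) / n

# Typical error of a single reading vs. the mean of 100 readings.
err_1 = sum(abs(mean_of_n(1) - TRUE_VALUE) for _ in range(2000)) / 2000
err_100 = sum(abs(mean_of_n(100) - TRUE_VALUE) for _ in range(2000)) / 2000
print(round(err_1, 3), round(err_100, 3))  # second figure roughly 10x smaller
```

The shrinkage only falls out because every reading targets the same true value and the noise is independent; the simulation says nothing about averages over different measurands, which is the contested case.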
🙂 Somehow you think here it is possible.
Change subjects much, do you?
You are treating temperature results as pure numbers whose precision is determined solely by the size of a floating variable in a computer. The same with confidence limits. You don’t even know the difference between numbers and MEASUREMENTS!
Measurements have their own confidence levels for each and every measurement. These confidence levels are described by the uncertainty in measuring a value. The uncertainties carry through to the final calculations. Most of the temps in the 1st half or more in the 20th century were measured in integers. That controls the ultimate precision of calculations on temperature measurements.
The fact that you and others continue to quote unwarranted confidence levels for temperature averages simply points out that you are ignorant of physical science and the necessity of proper treatment of measurements.
A classic example I’ve seen in an introductory statistics text for regressing two variables is the yield of corn per acre of farmland. What isn’t discussed much at all are the assumptions made for the analysis. First, the bushels of corn from a field is basically an integer, and the field size is a known number that doesn’t change (probably a land survey measurement). In other words, both the X and Y data are treated as being without error.
The analysis is done for a single growing season, so there is no time dependence. The sample population is all the fields in a single county, which means there are minimal spatial variations.
As a result, it is justified to say that the standard deviation (or variance) of the regression is the uncertainty of the bushels per acre result. If multiple regressions are done over multiple seasons and/or counties, the uncertainty must increase. All of the individual variances must be accounted for.
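The corn-yield regression described above can be reproduced in a few lines. The (acres, bushels) numbers below are invented, and, following the textbook setup the comment describes, field size is treated as error-free:

```python
# Invented (acres, bushels) pairs for fields in one county, one season.
fields = [(40, 6100), (55, 8300), (62, 9200), (75, 11400), (90, 13600)]

n = len(fields)
sx = sum(x for x, _ in fields)
sy = sum(y for _, y in fields)
sxx = sum(x * x for x, _ in fields)
sxy = sum(x * y for x, y in fields)

# Ordinary least squares; the slope is the bushels-per-acre estimate.
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
residuals = [y - (slope * x + intercept) for x, y in fields]
print(round(slope, 1))  # bushels per acre for this invented county
```

The comment’s caveat is that the residual spread only stands in for the uncertainty because one season and one county hold time and space fixed; pooling seasons or counties adds their variances on top.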
I think you are underestimating MarkW.
The confidence intervals, if they were honest, would be at least + or – 5 C.
Actually, Phishlips, accounting for statistical uncertainty, 4 years in the last 6 could be in the running for the warmest July on record; 2016 and the last 3 are all statistically tied.
That’s what makes NOAA so dishonest – in their announcement they don’t admit that they really don’t know the global average temperature accurately enough to make such a claim based on such TINY differences between years.
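The significance argument in the two comments above reduces to a one-line check: the article’s +/- 0.19 C interval dwarfs the 0.01 C margin. The anomaly values below are illustrative stand-ins, not NOAA’s published figures:

```python
def distinguishable(a, b, half_width):
    """Conservative test: do two 95% intervals of the same half-width fail to overlap?"""
    return abs(a - b) > 2 * half_width

HALF_WIDTH = 0.19  # NOAA's stated 95% interval from the article, in C
july_2021, july_2016 = 0.93, 0.92  # illustrative anomalies 0.01 C apart

print(distinguishable(july_2021, july_2016, HALF_WIDTH))  # False: statistically tied
```

On this check, a 0.01 C margin would need the half-width to shrink below 0.005 C before a “record” claim could be statistically distinguished.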
But the warmest July was 1934. Not? That is surely the nub of all the argumentation here. Heller’s point, I think, is that individual, reliable records prove that it was.
It has generally been warming for the past approximately 300 years. Man started to get interested about 150 years ago. In the intervening time it has been warming and cooling in multi-decadal-long cycles, with a generally warming trend. Accordingly, the most recent decadal-long temperature recordings will be greater than those of previous decadal-long periods.
IIRC, the temperature differences between each decadal period have been reducing, indicating that increasing temperature trends have been plateauing. 21st Century data from all sources confirm that as a fact. CO2 does not significantly drive overall global temperatures, although it theoretically has some minor impact. Use long-term data to prove me wrong.
That’s a second-to-second temperature variable, not a global average, quite apart from being impossible to physically measure.
I’m just pointing out an error in the article. Do you agree or disagree with the comment above? You’re taking an adversarial tone, but I don’t think I’ve said anything controversial.
Of course you have. When the uncertainty interval is wider than the increment you have calculated you truly have no idea what actually happened. July 2021 could have been 0.1C LOWER and still been within the uncertainty interval!
Why do NOAA and the media never include the uncertainty interval? They include it for political polling. Isn’t temperature measurement as important?
It’s not, that’s the reason. You’re inability to understand this is really hilarious.
“your” instead of “you’re”
Hilarious you haven’t figured out there’s an ‘edit’ function on this blog.
… with a time limit.
Yeah, I know, and it’s only usable for a few minutes after posting. Apparently you haven’t figured this out.
In other words you can’t refute anything I asserted. All you have is the argumentative fallacy of Argument by Dismissal. Why am I not surprised that all you have is an argumentative fallacy?
No need for that, this is science, please read and understand at last the textbooks. Furthermore, the other guys here have refuted your assertions 100 times already.
You are *STILL* using an argumentative fallacy. This one is named Appeal to Authority. No actual refutation provided, just an appeal to authority.
If you can actually refute what I asserted then do so. If you can’t then stop using argumentative fallacies in a vain attempt to make yourself look smarter than you actually are.
You would lose a middle school debate using these tactics.
yep, science is THE authority.
So says political science (except when it comes to biology).
Yet somehow, scientists always claim “its worse than we thought”
Are you saying that the uncertainty interval is less than +/- 0.01 degrees?
Only fools think they can produce a world temperature number that would have any basis in reality. Satellite measurement is the only thing that might tell you something, but the satellites have their own set of problems. The truth of the matter is we may not have the technology to do what we are trying to do today, and more than that, it is likely we never will. The variables will eat you up, and you cannot correct for them if you do not understand what they are. We are where we were when our elites were discussing how many angels can fit on the head of a pin; the argument is the same, and there is no answer. It is too bad the politicians are using this BS to pick my pocket.
Do you have a reading comprehension problem, or is your short-term memory failing?
Whether or not there were later revisions that changed the anomaly for July 2021, my point about why NOAA reported it as the warmest month on record stands. Yourself and others in the comments are trying to argue with me over things I have not said.
You are defending the initial claim after NOAA revised the anomaly downward. Larry makes the point that there have been no media retractions.
Furthermore, when the uncertainty is +/- 0.19, a claim of +0.01 is meaningless.
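A back-of-the-envelope check of that point, assuming (as is standard for independent uncertainties) that the uncertainty of a difference combines by root-sum-square; the ±0.19 C figure is NOAA’s stated 95% interval from the article:

```python
import math

u = 0.19  # NOAA's stated 95% uncertainty for a monthly global anomaly (deg C)

# The uncertainty of the DIFFERENCE between two independent anomalies
# combines in quadrature (root-sum-square).
u_diff = math.sqrt(u**2 + u**2)

claimed_margin = 0.01  # July 2021 vs. July 2020/2019/2016 (deg C)

print(round(u_diff, 2))          # 0.27 -- uncertainty of the comparison
print(claimed_margin < u_diff)   # True -- the "record" margin is swamped
```

Even taking NOAA’s own ±0.19 C at face value, a comparison between two months carries roughly ±0.27 C of uncertainty, 27 times the claimed 0.01 C margin.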
I’m not defending any claim, I’m pointing out an error in the article. Despite all the downvotes, no one in this thread has actually tried to dispute my point. I am also fairly certain that the NOAA does not run all the world’s media, so they are hardly to blame for any retractions or lack thereof.
Yes they have, but you just carry on, making ever more stupid comments.
NOAA did, in fact, “run to all the world’s media” when it made its announcement in alarming terms. When NOAA revised the number, it did not, in fact, make the same sort of widespread announcement about the downward revision. It is a propaganda machine, not a neutral arbiter of science news.
The NOAA did not “make the announcement in alarming terms.” This is hyperbole. They made the announcement in their July 2021 Global Climate Report, using exactly the same language they use every other month (literally – the reports use the same template each month and the NOAA just updates the numeric values). As far as I’m aware, those reports are “snapshots” that are not revised or updated.
Why does NOAA never quote its uncertainty level when describing these values? Do you not understand that a difference of 0.01 has no scientific significance? Rather than promoting a “hotter than ever” record, they should be saying this value has no scientific significance.
“… using exactly the same language they use every other month (literally – the reports use the same template each month and the NOAA just updates the numeric values).” So every month they report that the current month is the hottest, coldest, whatever since …? You know damned well that NOAA hypes the “good” (bad) news to us plebs.
That was a simple statement of fact which garnered 11 minuses (so far); thanks clapping monkeys.
The downvotes were for the assumption that if NOAA says it’s the warmest, then it is definitely the warmest. Who cares what the other data sets show.
An “anomaly” is your tattered attempt to articulate an explanation…
What you are stating here is a trivial and non-substantial fact. It is trivial in the sense that it does not add any understanding of the matter discussed in the article you are commenting on, and non-substantial in that it is widely known that the “global mean temperature rise” is in fact primarily a rise in minimum temperatures.
The point made in the article you are commenting on is that the difference between this year’s anomaly and any previously recorded anomaly against the same baseline is less than the uncertainty range for the aggregate anomaly in question. The argument is correct: you cannot in any sensible way distinguish between July 2021 and several previous July records.
NOAA is still claiming there is a 0.01 °C difference as if it has any meaning. Given the nature of these measurements and the actual data, we cannot tell whether July 2021 was hotter, colder or the same as previous Julys. Within the margin of uncertainty it could be any of those, and it is very misleading to say it is THE hottest. The only thing one can say is that it might be among the hottest.
Does anyone know how the confidence level is calculated? What sources of error are left out?
0.19 for NOAA’s measurements seems very small to me for a world wide measurement.
It is way too small, and ignores any and all instrumentation uncertainties.
Very likely it is just the standard deviation of the average of the average for July, multiplied by 2.
Exactly. It’s just the standard deviation of the “sample average.” They completely ignore uncertainty. These are PhDs mind you, making a mistake that would get you flunked in High School Chemistry.
That’s what I suspected. But, as you indicate, the error seems so obvious that I wondered what I was missing.
Climate “scientists” believe that by subtracting their holy anomalies, error is canceled. Very rarely will any of them admit this, but it is the elephant inside the tent.
You have nailed it. It is the standard deviation of the sample means average. Too many “scientists” and folks here treat it as how accurate or precise the mean is. They need to go back to school and learn statistics.
SIGMApopulation = SIGMAsample • √N
SIGMApopulation is what they should be using for a minimum uncertainty.
Do you mean Σ or σ? Either way I can’t figure out how you get that equation.
Confidence level is usually something like the RMS deviation from the final average of all the individual measurements that went into the final average.
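A small simulation of the mechanism this sub-thread is criticizing: the standard error of the mean shrinks as 1/√N regardless of how uncertain each individual reading is, so quoting it as the uncertainty of the average quietly drops the per-reading uncertainty. The ±0.6 C per-reading figure below is illustrative, not an official value:

```python
import math
import random

random.seed(42)

instrument_u = 0.6   # illustrative per-reading uncertainty (deg C)
true_temp = 15.0
n = 10_000

# Simulate n independent readings scattered around the true value.
readings = [random.gauss(true_temp, instrument_u) for _ in range(n)]

mean = sum(readings) / n
sample_sd = math.sqrt(sum((r - mean) ** 2 for r in readings) / (n - 1))
sem = sample_sd / math.sqrt(n)  # "standard error of the mean"

# The SEM comes out near 0.006 C here, even though no individual
# reading was ever better than +/- 0.6 C.
print(round(sample_sd, 2), round(sem, 3))
```

Whether that tiny SEM is a legitimate statement of the average’s accuracy, or merely of its precision, is exactly the dispute in this thread.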
Only in Climate “Science” are meteorological temperatures reported to 2 decimal places.
In any case the very concept of a global temperature is absurd, as is that of a global climate.
NOAA assumes, without evidence, that all errors are random and cancel (like all the other climate frauds), which is the only way to get down to ±0.19C. It’s utterly absurd to take a temperature record from the 1800s, made with glass thermometers, torture that data, and pretend you end up with averages that let you distinguish the 5th hottest year from the 6th hottest when they are statistically indistinguishable. It’s all fraud.
Uncertainty for individual, independent, random measurements grows as you add more and more data points. It adds by root-sum-square. It is *not* the precision with which you calculate the average value of those independent, individual, random measurements — which actually tells you nothing about physical reality.
Daily maximum and minimum temp values are combined to produce a daily MID-RANGE value (which is *not* the average temperature value). These daily mid-range values are then averaged to produce a monthly average. Those monthly averages are then averaged once again to get an annual average. With each average you lose data which is needed to tell you what is actually happening. Then when you use thousands of annual averages to create the Global Average Temperature you totally lose what is happening physically in the thermodynamic system we call Earth.
If the uncertainties were calculated at each stage of this process you would wind up with an uncertainty for the GAT that would actually make it unusable. And, again, the precision with which the averages are calculated has nothing to do with the total uncertainty of the average being calculated.
Just think about the uncertainty of the mid-range value. If each measurement has a +/- 0.6C uncertainty then their “average” would have an uncertainty of sqrt(0.6^2 + 0.6^2) ≈ 0.85. You already have an uncertainty far wider than the 0.01C anomaly difference that NOAA is stating. Nothing you can do by combining individual, independent, random measurements will ever reduce that uncertainty to a level that justifies stating a 0.01C anomaly difference.
I like how the average human has 1 testicle 🤔
..and one ovary and half a uterus.
Not sure about the half a uterus, but one Mann comes to mind that probably has half a …… never mind.
(But he does keep using the courts to scr*w people that disagree with him and never quite succeeds.)
Actually, the average human has no testicles. I believe there are very slightly more women than men, and some men have only one, and some none at all.
The average human certainly has less than two arms and two legs, and almost certainly one head.
After reading Derg’s post, I knew somebody would push it too far! 😉
Actually no. Less than 1. There are more women (slightly), and not every man has two testicles. Like you.
Do you treat people you encounter in real life like this?
No. Maybe this was unjustified. “To err is human, but it feels divine”.
Short version of your (excellent) post: averaging temperatures is as meaningful as averaging telephone numbers.
As far as mid-range values are concerned there are only two values being used. Therefore you get *no* cancellation of plus and minus values that you might get in a large number of data points (which is why you use root-sum-square). For just two values the uncertainties should probably be added directly, e.g. 0.6 + 0.6, giving an uncertainty for the daily mid-range value of +/- 1.2C. Averaging multiple mid-range values will *never* decrease the uncertainty below that +/- 1.2C interval. I don’t care how precisely you calculate the average of those multiple mid-range values, that average will always have a minimum uncertainty of +/- 1.2C.
If you can’t identify a difference greater than the uncertainty interval then you don’t really know if you have an actual difference or not!
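The two combination rules invoked in this sub-thread can be put side by side; the ±0.6 C per-reading figure is illustrative. Direct addition gives the worst-case bound, while root-sum-square assumes the two errors are independent:

```python
import math

u1 = u2 = 0.6  # illustrative per-reading uncertainties (deg C)

worst_case = u1 + u2                      # direct addition: 1.2 C
independent = math.sqrt(u1**2 + u2**2)    # quadrature: ~0.85 C

print(round(worst_case, 2), round(independent, 2))  # 1.2 0.85
```

Under either rule the resulting interval dwarfs a 0.01 C anomaly difference.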
As the General said when informed his weather forecasts were wildly inaccurate, he responded with “I know that, but I need them for planning purposes.” When planning for multi-trillion dollar spending, however, one needs more accuracy for climate predictions. And UN IPCC CliSciFi global climate models have proven to be massive failures.
NOAA is trustworthy—in the sense one can be assured they are playing politics with their reporting.
They have one clear objective = promote the AGW by any means,as the Allah of globalcommunism can never have enough believers.
I predicted a bunch of new, never-before-seen all-time records back at the beginning of July, though I believe in the ice age scare.
The secret of my prophecy?
The summer was supercold in many parts of the world, including massive snowfalls in superhot regions.
To counterbalance reality they were forced to come up with some really scary nonsense, following the 2nd commandment of communism: “The bigger the gap between reality and official reality, the bigger the lie must become.”
As communism is nothing without fear, reality needs some serious adjustments from time to time to maintain the illusion of the perfect utopia (a reality check for reality itself).
Whether these adjustments are called the Holodomor, the Nazino affair (what lovely names they give their own atrocities), Lysenkoism (special scare and subjugation tactics against experts) or the destruction of the Four Olds in China is irrelevant. Relevant is the impact, be it mental or physical.
And from now on it’s only getting worse and worse the closer we get to (agenda), as some radical changes will come and people won’t accept them without the necessary AGW, Covid and terrorism fear.
I think you’re seeing the writing on the wall, so to speak, but I suggest it’s not really enthusiasm for “communism” that is behind it. It’s enthusiasm for controlled society, and the underlying “ideology” is essentially just good old elitism.
The “rule by consent of the governed” movement that sprang up a few centuries ago presented some serious problems for the “Globalists” (elitists), and the vilification of that movement (and the societies/peoples which flourished since it took root among them) became a high priority recently.
Ideologies that have proven effective in the past for vilifying and undermining established societal orders, were “customized” and “deployed/funded” to aid in the “taking down” of what is often referred to as “the West”. (or Whiteness if you prefer ; )
The NOAA data shows that in America it wasn’t even close, with 26 Julys hotter than the July of this year.
You’ve plotted the maximum temperature, to be comparable you should plot the mean.
That would just pollute the “hottest evah” meme.
Haven’t you been reading the thread? There is no such thing as the “mean”. At best it should be called a midrange temperature. A mean temperature would require taking a large number of periodic measurements throughout the day and then finding the mean.
As to the earth getting hotter, what do you think would best show us burning up, high temps during the day or low temps at night? Why would increasing nighttime temps and unchanging daytime temps cause a big stir among people?
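The mid-range vs. true-mean distinction can be sketched with a toy diurnal profile; the flat night and sinusoidal daytime bump below are an illustrative assumption, not real station data:

```python
import math

# 24 hourly readings for a toy day: flat 10 C night, sinusoidal
# daytime warming bump peaking at 22 C around noon.
temps = [10 + 12 * max(0.0, math.sin(math.pi * (h - 6) / 12)) for h in range(24)]

mid_range = (max(temps) + min(temps)) / 2   # what Tmax/Tmin records yield
true_mean = sum(temps) / len(temps)         # what hourly sampling yields

print(round(mid_range, 1))   # 16.0
print(round(true_mean, 1))   # 13.8 -- more than 2 C lower
```

Because a real day is not symmetric about its midpoint, the (Tmax + Tmin)/2 value and the true average generally differ; only dense sampling through the day recovers the actual mean.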
The hottest July since 1896, i.e., July 1936, was much hotter than July 2021, by 3.18°F.
No. The maximum temperature in July 1936 was 3.18F warmer than the maximum temperature in July 2021. That’s what I was getting at. Plotting the right variable makes a difference.
The average temperature – which corresponds to the mean quoted by NOAA – was warmer, but by a more modest 1.4F.
And why wouldn’t it be, in an overall slightly warming world? It is not, however, proof that maximum temperatures are increasing to record-breaking levels, as the CliSciFi alarmists insist in their propaganda.
June on the other hand …
Hottest evah 🤓
Boooooosted by Russsssiaaa no doubt.
Also showing minor warming occurring on a cyclic basis with statistically expected excursions.
The graph shows an undeniable cyclic trend.
claim was exaggerated, deceptive and distorted
Just what the narrative driven media demand. Unbridled alarm.
“July was the world’s hottest month ever recorded, a US federal scientific and regulatory agency has reported.
The data shows that the combined land and ocean-surface temperature was 0.93C (1.68F) above the 20th Century average of 15.8C (60.4F).
It is the highest temperature since record-keeping began 142 years ago. The previous record, set in July 2016, was equalled in 2019 and 2020.
Experts believe this is due to the long-term impact of climate change.”
“July was world’s hottest month ever recorded, US scientists confirm
Confirmation of the record July heat follows the release of a landmark Intergovernmental Panel on Climate Change (IPCC) report on Monday “
So, who are these US scientists? In the end it doesn’t matter, what matters is that this fake fact followed the release of the, er, landmark Intergovernmental Panel on Climate Change (IPCC) report. Timing is everything – ask a comedian or a musician.
No media outlet will publish a correction to this, so July was still the hottest month evah.
“No media outlet will publish a correction to this, so July was still the hottest month evah.”
What would a correction say? July 2021 is an insignificant 0.02C below July 2019 in the NASA data, HADCRUT – contrary to Larry’s ramblings – is not out yet, and the satellites measure a different quantity.
So what needs correcting?
What would a correction say?
How about the truth?
All 5 Global Temperature Measurement Systems Reject NOAA’s July 2021 “hottest month ever” Claims
NOAA could publicly admit it…
But they won’t, because it’s about the narrative, not science, not fact.
Definitely over the target.
And now it’s back to 0
That’s what I call confidence.
The truth? You still haven’t stated what needs correcting, just copied Larry’s mistake.
How can 5 datasets reject anything when only 4 have reported?
And in the surface dataset that has reported 2019 and 2021 are joint warmest with an insignificant 0.02C difference. Not enough to ‘reject’ anything.
“You still haven’t stated what needs correcting”
What needs correcting is they should be using the UAH satellite charts as the official temperature record.
If they used the UAH satellite record, they would show July 1998 as being warmer than any subsequent July. See the list of hottest Julys listed above for the UAH satellite chart.
Using the other charts, which have been manipulated for political purposes, is just simply pushing climate change propaganda. It’s scaremongering, plain and simple.
Correction would also say that July 2021 was not the hottest July on record. Doubt we’ll see anyone in the MSM publishing that.
Correction would also say that July 2021 was not the hottest July on record.
It is the hottest in the NOAA data.
In the NASA data, 2021 and 2019 are joint hottest.
HADCRUT is not out yet.
UAH and RSS don’t measure the surface.
Yes – alert the media!
“Despite the most obvious explanation that the NOAA-14 MSU was no longer usable, RSS, NOAA, and UW continue to use all of the NOAA-14 data through its entire lifetime and treat it as just as accurate as NOAA-15 AMSU data. Since NOAA-14 was warming significantly relative to NOAA-15, this puts a stronger warming trend into their satellite datasets, raising the temperature of all subsequent satellites’ measurements after about 2000. . .”
“Clearly, the RSS, NOAA, and UW satellite datasets are the outliers when it comes to comparisons to radiosondes and reanalyses, having too much warming compared to independent data.
But you might ask, why do those 3 satellite datasets agree so well with each other? Mainly because UW and NOAA have largely followed the RSS lead… using NOAA-14 data even when its calibration was drifting, and using similar strategies for diurnal drift adjustments. Thus, NOAA and UW are, to a first approximation, slightly altered versions of the RSS dataset.”
Here’s one reason the UAH satellite chart should be the official temperature record.