
The graph they built on a lie. The iconic ‘climate’ graph that’s undermining industrial capitalism and taking our freedom…and it’s 100 percent garbage. Watch this film, and learn the shocking truth with @tomnelson2080
[editor’s note: Anthony Watts’s surface station work is featured, starting at 3:50]
Autogenerated and autoformatted transcript.
Look up climate change on the internet
Or check out any mainstream media story about climate change, and you’ll see this graph. This one’s from NOAA, and there’s an almost identical one from NASA. This is the graph for climate alarmists.
We know the Earth is warming. We know the Arctic is melting. Unless we make major changes to stop global warming, the consequences could be irreversible.
This is the graph on which all of those claims about record temperatures are based.
We’re starting to see our temperature on the increase—that’s well stated, well advertised. We see an acceleration of warming over the past 50 years. From 1801 till now, the numbers all say the same thing: the world is getting warmer, faster.
This is the graph that is said by the science TV presenter Brian Cox to prove that global warming is true—human action is leading to an increase in average temperatures.
You may try to argue with that, but you can’t.
“No, I brought the graph!”
But, as we will see in this film, this famous graph is a travesty—a shameful lie masquerading as science.
My name is Tom Nelson, and this is Guerilla Science.
Let’s take a close look at this graph. The agencies that produce these graphs, like NASA, NOAA, and the UK Met Office, all rely on the same data from the US and Global Historical Climatology Networks, which gather temperature recordings from meteorological agencies across the world.
The graph starts in 1850, and we are told that it’s an accurate instrumental temperature record of global temperature change. By instruments, they mean thermometers, as opposed to reconstructing past temperatures from indirect clues like tree rings.
But here we come to a big problem: where a thermometer is located can have a huge effect on its temperature readings.
In the early 20th century, many thermometers were erected just outside towns—easy enough to check every day, but away from the artificial heat of urban life. But as population has risen, those towns have grown.
Over the course of the 19th and 20th centuries, the population of the US—and also globally—has expanded enormously. NOAA’s graph starts in 1850, when America’s population was about 20 million. Today it’s 330 million—16 times as big.
This has led to a huge expansion of towns and cities. In 1900, the population of Phoenix was five and a half thousand. Now it’s about one and a half million. Thermometers that were once in open fields have become engulfed by shopping malls, warehouses, and suburban housing.
This matters because urban areas are much warmer than rural areas. Here, for example, is a satellite heat map of Paris, which can be as much as 6°C warmer than the surrounding countryside. Suburban and semi-rural areas too are significantly warmer than fully rural areas.
Many thermometers in the first half of the 20th century were also located at airports to give pilots information for flight safety. For example, more than half of the temperature stations in the UK are located at airports. But airports, like towns, have grown and changed enormously.
What used to be open airfields with a few propeller planes have turned into vast seas of heat-absorbing concrete, sheltered and surrounded by large terminals, hangars, and car parks, with dozens upon dozens of large jet airliners pumping out hot air.
So how much of so-called global warming is just, in fact, heat generated from population growth and urban expansion? In other words, corrupted temperature data?
The National Weather Service stipulates that temperature reading stations should be 100 feet or more away from anything that artificially reflects or radiates heat—or might otherwise artificially raise temperatures—like cars, buildings, air conditioning units, tarmac, and concrete. In other words: human civilization.
So how are America’s climate tracking thermometers doing?
Good evening, everyone, and thank you for joining us here on Action News Now.
In 2009, meteorologist and TV weather forecaster Anthony Watts decided to see for himself. He and a team of volunteers inspected and photographed a random selection of 850 temperature reading stations. Where did they find them?
They found them here, and here, and here… They were, for the most part, in the immediate vicinity of machines, buildings, vehicles, and urban infrastructure of one sort or another: by buildings and air conditioning units, by car parks, on top of concrete, beside concrete and asphalt, sheltered by buildings, next to cars, beside tarmac and airplanes, in airports, beside electrical equipment. Lots are in car parks.
You get the picture.
According to the 2009 survey, almost 90% of the 850 stations inspected failed to meet the official NWS requirements of being set apart from artificial heat sources.
That creeping urbanization has corrupted global temperature data is freely admitted by many climate scientists. A study carried out by NOAA scientists has concluded:
“These results suggest that small-scale urban encroachment within 50 m of a station can have important impacts on daily temperature extremes—maximum and minimum.”
Another study found that the difference in temperature between urban and rural stations exhibited a progressive, statistically significant increase over the studied period.
Multiple studies now suggest that most land thermometers have been corrupted to varying degrees by creeping urbanization.
And it’s not just in the US. A study published by the Royal Meteorological Society found that urbanization has significantly increased the daily minimum temperature in the UK by as much as 1.7°C.
In other words: the whole of the supposed man-made global warming.
One study published in the American Meteorological Journal describes in China the problem of rapid local urbanization around most meteorological stations. The study found that since 1985, the percentage of stations with a significant urban heat bias increased from 22% to 68%.
Another study in China found that urbanization-induced warming is significant, accounting for up to 80% of the overall warming between 1961 and 2000.
This is a well-documented worldwide phenomenon.
Here, for example, is the thermometer temperature record for urban Tokyo since 1907—it’s rising gently. But here is the temperature record for the same period from the nearby rural island of Hachijojima—there is barely any change.
But it gets worse. Across vast swathes of the Earth, there are no temperature reading stations at all. So what do climate agencies do? Suppose there are no stations in a largely empty African savannah—scientists will take the data from the nearest station in a town or airport many miles away, and they’ll apply that to the entire region.
This treats whole regions effectively as superheated cities.
This is even true of the US. One study of mountain regions in America found that extreme warming observed at higher elevations is the result of systematic artifacts and not climatic conditions. These erroneous adjustments were amplifying warming by as much as 560%.
Now, you might have thought that all of this would be taken into account when calculating temperature change. But no.
The UN’s IPCC has decided that the effect of urban development on the temperature record is negligible. NASA says the impact of these urban heat islands has a minuscule effect on global temperature. Berkeley Earth says the urban heat island effect is real but the effect on our global estimate of land temperatures is indistinguishable from zero.
So in effect, they ignore urbanization and population growth.
All that artificial heating goes into this graph.
But it gets worse. Much worse.
NASA tells us that their scientists make adjustments to account for station temperature data that are significantly higher or lower than that of nearby stations. By comparing data with surrounding stations, they identify abnormal station measurements, which are then assumed to be wrong.
The trouble is, many of the abnormal station measurements which are eliminated are likely to be rural stations which show little or no warming.
According to one analysis of official temperature adjustments, the good quality stations are likely to be considered as statistical outliers. Instead of correcting the poorly sited station records to match the well-sited stations, the adjustment process appears to have blended the temperature records of all stations to match the poorly sited stations.
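[editor’s note: to make the neighbour-comparison idea above concrete, here is a toy numerical sketch. It is not NASA’s, NOAA’s, or any agency’s actual homogenization algorithm; the station values, the step change, and the 3-sigma threshold are all invented for illustration.]

```python
import numpy as np

# Toy illustration only: flag months where a target station drifts away from
# the median of a few invented neighbouring stations. Real homogenization
# methods are far more involved than this.
rng = np.random.default_rng(0)

months = 120
neighbors = rng.normal(0.0, 0.3, size=(5, months))   # five nearby stations
target = rng.normal(0.0, 0.3, size=months)
target[60:] += 1.5                                    # artificial step change

diff = target - np.median(neighbors, axis=0)

# Use the first five years to set a 3-sigma threshold, then flag departures
baseline_mean = diff[:60].mean()
threshold = 3 * diff[:60].std(ddof=1)
flagged = np.where(np.abs(diff - baseline_mean) > threshold)[0]
print(f"{len(flagged)} of {months} months flagged as abnormal")
```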
The US temperature record is by far the most reliable, longest-running temperature record in the world. The actual temperature readings from all the official USHCN thermometers across the country suggest that, despite the increase in urbanization, average temperatures today are lower than they were in the 1930s.
Let’s look at how this data has been adjusted by official agencies—supposedly to correct for artificial bias. Instead of reducing the amount of warming, incredibly, they’ve done the exact opposite.
The adjustments are even more extreme when we look at maximum temperatures.
Here’s the actual data from US thermometers showing maximum temperatures since 1895—there is no signal of any global warming. But now look at the same data once it’s been adjusted by climate agencies—suddenly the temperature looks like it’s rising.
And this is happening all over the world.
Here is the raw average temperature record in Australia taken from thermometers—showing no warming since the 1980s. And here is the adjusted record—showing a significant increase.
Here’s raw temperature data from weather stations in Greece—no warming. After it’s adjusted—lots of warming.
Raw data from Ireland—a slight cooling. Once it’s adjusted—suddenly there’s warming.
Here is the raw temperature data from Reykjavik since 1900—current temperatures are similar to those of the 1940s. And here’s the adjusted data, which shows massive warming.
You get the idea.
Is there any other way of testing if this famous graph is accurate or the result of corrupt data, exaggerated still further by poor data handling?
Yes, there are several:
- Just examine the temperature readings from rural stations only. Forget the urban thermometers. This has now been done. Here is the temperature record of the US since 1880 taken from rural thermometers. Temperatures rose significantly to the 1930s and ’40s, then fell dramatically to the late 1970s. They have risen since then, but today they are barely higher than they were in the 1940s.
Here is the temperature record from China using only rural stations. Once again, the 1940s look as hot or hotter than today.
It’s the same pattern again and again—with little or no net warming since the 1940s.
- If the heating was mainly urban, you’d expect to see less temperature change evident in tree rings—since the trees measured tend to be in rural areas. And again, this is exactly what we find.
Here’s a record of temperature change from tree rings going back 200 years. Not surprisingly, it closely resembles the rural temperature record—a rise to the 1940s, a sharp drop to the 1970s, and then a recovery with recent temperatures on par with those of the 1940s.
Here is another tree ring study published in Climate of the Past, showing no net warming between 1940 and 2000.
Here’s another, published in Geophysical Research Letters, covering the past 1,500 years—it shows variation, but no overall trend. If we zoom into the last 200 years, it shows the same pattern with temperatures at the end of the 20th century cooler than the 1930s.
Every one of these studies directly contradicts the graph being pushed by official climate agencies.
- In urban and suburban areas, concrete, tarmac, and brick soak up heat in the day and then release it during the cold night. Minimum temperatures usually happen at night. As a result, the most obvious signal of urban heat bias is a rise in minimum temperatures as recorded at night.
Remember that Royal Meteorological Society paper:
“Urbanization has significantly increased the daily minimum temperature in the UK by as much as 1.7°C.”
What do we find in the US?
Here from NOAA is a record of maximum summer temperatures in the US since 1895—there’s not much change. And as we see, according to NOAA’s own data, summer temperatures in the US are still not as high as they were in the 1930s.
But how about minimum temperatures over the same period? They’ve been rising. That is a clear signal that it’s elevated temperatures at night—caused by heat retention in urban environments—that is causing the shift in average recorded temperature data.
And this has been a feature of temperature data across the globe in recent decades.
- If the recorded warming is just the result of urbanization, we would expect far more warming on land than we do at sea, where there are no towns or cities. And that is exactly what we do find.
Here from Berkeley Earth is the official measure of land temperatures compared to ocean. This ocean data has itself been adjusted to show steady warming (which we’ll look at elsewhere), but even after these adjustments, it still shows that since the 1940s, the rise in recorded land temperature is three times as high as the temperature change in the ocean.
- If urbanization were to blame, in countries with very little urbanization—like Greenland—you would expect to see far less warming. And that again is what we do find.
Here’s a graph showing temperature records of Greenland since the mid-19th century. Once again, a rise to 1940 followed by decades of falling temperatures, and then a recovery with temperatures today similar to those of the 1940s.
- If recent warming was due to urbanization and population growth, you would expect to see a much bigger warming signal in the Northern Hemisphere, which is where 90% of the human population lives. And that’s exactly what you do find.
Here’s the official UK Met Office temperature data for the Northern Hemisphere since the 1970s.
And here’s the Southern Hemisphere, which shows far less warming.
There is a huge amount of scientific evidence on this, pointing in one direction.
This graph—this famous graph—the graph on which the whole climate alarm hangs—is not just wrong, but spectacularly wrong.
Not only is it thoroughly corrupted by a false warming signal from population growth, but that warming has itself been magnified and exaggerated by the adjustments made by climate agencies.
There is, to repeat, a mountain of scientific evidence that shows this graph to be wrong.
The graph that we see again and again in the media, in schools, and in presentations to politicians.
The graph that is used to make all these claims about record temperatures.
The graph that is being used to force through the most dramatic and damaging public policies.
The whole of Western industrial society is being turned upside down because of this graph.
Governments in many countries are taking a wrecking ball through their energy and transport systems.
Whole industries are shutting down. Whole populations find themselves bullied out of owning and driving cars, forced to buy certain appliances, find themselves hit at every turn by punitive green taxes and regulations.
Scientists make mistakes all the time—but this is different. Since so much hangs on this graph, we should surely, as a matter of democratic right, be made aware of any evidence at all that might suggest that it’s wrong.
But no. Our publicly funded science establishment and the mainstream media have gone to great lengths to silence any doubts or criticism.
But now we come to the question: why?
Why are so many organizations and so many people so feverishly attached to the idea of man-made global warming?
Let’s look at a publicly funded organization like NASA, which has taken a leading role in promoting the climate scare.
NASA receives $20 billion a year of taxpayer money. Why? They already beat the Soviets to the Moon. There’s no point going again and again. Elon Musk is better at launching rockets.
There are only so many pictures of the Milky Way that you can look at.
To justify its existence, NASA has decided that its new vital mission is to help us combat climate change. The trouble is, that means that NASA’s continued funding depends on the climate alarm.
And since it has skin in the game, it’s hardly surprising that NASA is only too keen to emphasize the potential horrors of climate chaos.
And it’s not just NASA. There are huge UN agencies, and dozens of university departments, and legions of academics—not to mention all the sustainability officers, climate advisers, and renewables companies—that rely for their funding on the climate alarm.
Hundreds of thousands of careers have been built on this. Countless scientists have staked their reputations on it. Their lives have been defined by it. Their livelihoods depend on it.
But even more than this, the climate alarm has become an article of faith for those on the left. And the left-wing bias within universities, the public sector, mainstream media, and the university-educated intelligentsia more broadly is well-documented and familiar to all.
And that is why it is unacceptable—in universities, the mainstream media, and the publicly funded science establishment—to entertain any doubt.
Contrary evidence must be hushed up. Ignored. Dismissed.
The fiction must be maintained—that this appalling bit of nonsense is a true account of what’s happening in the world.
At Guerilla Science, our aim is to submit the climate alarm—and our tax-munching establishment—to proper scientific scrutiny.
But to carry on, we need your help.
Please subscribe and please, please—if you can—donate.
I’m Tom Nelson, and this is Guerilla Science.
They’re still presenting averages (or averages of anomalies, just as irrelevant). Fake news.
Seriously! You can’t average intensive values, but the alarmists don’t know that. And anomalies are also fake–as you said.
Or maybe they do know it, but that knowledge doesn’t suit their agenda
I feel more and more sympathy for Tony Heller’s basic tone.
Tony Heller’s work on the NOAA USHCN data, both the daily and the monthly station records, motivated me a few years ago to independently analyze those records by writing and running R scripts. He’s not wrong. He also has an impressive ability to find and post historical accounts of flooding, storms, fires, etc. for context when the news starts to bleat about a current weather event.
Heller has also been the victim of a vicious campaign of slander and character assassination by the Climate Lobby.
Climate Syndicate might be a better expression.
Climate Mafia even better
Syndicate = Mafia
He was banned from contributing to this website by Anthony Watts for his blatant and unapologetic dishonesty. He is dismissed by all serious people.
Tony Heller is actually mostly correct and is far more honest than most climate alarmists.
That one instance was where he stuck to his opinion, even though it was obviously not correct.
How would you have a clue what serious people dismiss or not?
AnalJ is unable to rebut a single thing Heller says, so resorts to ad hominem slurs. Standard operating procedure for climate alarmists.
So he was permanently banned for repeatedly lying? Understood.
Wrong… as usual.
The argument was about atmospheric CO2 freezing out of the Antarctic atmosphere.
Tony didn’t understand the partial pressure chemistry but stuck to his guns; the argument got heated, and Tony got banned.
It was not dishonesty, which is a deep-seated alarmist trait, but a lack of scientific understanding on that particular piece of science.
Basically everything else Tony Heller has put forward is verifiable by data and historical accounts…
… unlike anything you have ever put forward.
That was an embarrassing thread for Heller, but it was not the final straw.
Yeah. Anthony provides some insights into why he banned him in this blog post.
https://rankexploits.com/musings/2014/how-not-to-calculate-temperature/
You can see his comment at this link.
https://rankexploits.com/musings/2014/how-not-to-calculate-temperature/#comment-130003
It’s been many years already so I’m not sure how much longer these links will live. For those interested in archiving what went down now would be a good time to save this webpage.
You think someone is interested in archiving this conversation? For what purpose?
bdgwx, the rankexploits article is archived here:
https://archive.is/3HNm2
No, he wasn’t banned for lying; it was something else, which you never saw, as this was years ago. He has been able to post since then but chose to stick with his own blog.
Get lost, troll.
A disagreement about whether CO2 can freeze out of the atmosphere in Antarctica can hardly be called dishonest.
Anthony and Tony have since kissed and made up.
Satellite since 1979?
Not of much use re. temperatures prior to that time, to which most of the data tampering, aka ‘adjustments’, has been applied.
There was quite a bit of data tampering after 1998, too (which is why I stick with using Hansen 1999). That’s how NASA and NOAA managed to mannipulate the data so they could claim that after 1998, they had year after year which was the “hottest year evah!” and each successive year was even hotter!
Of course, if you look at the UAH satellite chart, you will see that NO years between 1998 and 2016, were warmer than 1998.
Lying about the temperature data and scaring people. That’s the NASA and NOAA way.
See if you can see any years after 1998, that were warmer than 1998. NASA and NOAA found 10 years between 1998 and 2016, which they claimed were the “hottest year evah!”
Here’s the UAH chart. Those “hottest year evah!” claims couldn’t be made using this chart:
NASA and NOAA had to do a LOT of mannipulating to make the years after 1998, look hotter than 1998.
This was around the time that Hansen started trying to say that 1934 was not warmer than 1998. Before, he said 1934 was 0.5C warmer than 1998. But he changed his tune when he realized the temperatures were going to cool after 1998, instead of increasing, as he expected them to do. So he had to scramble to keep the human-caused climate change narrative going by lying about the temperatures.
The satellite records show a decadal increase in atmospheric temperatures of about 0.15 C; a nothingburger.
And it’s no warmer than in the past, and in the past, when it got this warm, a cooling spell came along.
If we could, we would make the climate warmer.
Agreed.
To make the world more amenable to humans, I would:
1.) Warm the winter temperatures more than the summer temperatures.
2.) Warm the daily lows more than the daily highs.
3.) Warm the arctic more than the temperate zones and the tropics.
Oh wait, I don’t even need my magic wand, that is exactly how the slight warming is manifesting!
In the American northeast, the growing season is 3 months if you’re lucky!
Should, not would.
Just shared this link to 9 people. Good work!
Yes, Tom does good work.
I’d be interested to know if any of those 9 will change their minds after viewing. I find people remain attached to their belief, even when the falsehoods are exposed and undeniable.
“It’s Easier to Fool People Than It Is to Convince Them That They Have Been Fooled.” – Mark Twain
Step 1) Label Tom Nelson a “Climate Denier”.
Boom, done.
Tom Nelson’s Rumble channel is one to keep an eye on for new material, as well as Gorilla Science:
https://www.youtube.com/@tomnelson2080
https://rumble.com/user/GorillaScience
Thanks for this, Karl.
You are most welcome.
I live near Portland Oregon, but on the west side, between the Pacific and the city. Most of the time, our weather comes from the Pacific. Almost always, Portland nights are about 10 degrees F higher than what I experience. And the local weather stations duly report that difference. But, when the wind shifts around to the east, it is warmer here than in Portland. Why? Portland’s heat load is passed off to my area.
Portland’s official weather station was in Portland’s downtown from 1859 to 1941, when it was moved east to the Portland International Airport, which was then out in the boonies. Comparing those two weather graphs shows a clear rise of temperature over the years. They almost match, if you don’t look at the actual numbers. Both start low, then rise over time. But now, the Portland International Airport is nearly a city unto itself. My point is that these long periods of weather measurement (roughly 80 years each) show exactly how the Portland area has grown. We have two long graphs that are nearly identical, running from the end of the little ice age right up to today. You could not tease ‘climate change’ out of that if you tried. I have tried to find both of those graphs again, but can no longer locate them. Lots of numbers, no graphs.
Here at WUWT we would be remiss not to make reference to Pat Frank’s work published in 2023.
https://www.mdpi.com/1424-8220/23/13/5976
From the abstract (LiG means Liquid in Glass; GSATA=global surface air-temperature anomaly):
“LiG resolution limits, non-linearity, and sensor field calibrations yield GSATA mean ±2σ RMS uncertainties of, 1900–1945, ±1.7 °C; 1946–1980, ±2.1 °C; 1981–2004, ±2.0 °C; and 2005–2010, ±1.6 °C. Finally, the 20th century (1900–1999) GSATA, 0.74 ± 1.94 °C, does not convey any information about rate or magnitude of temperature change.”
We have already hashed this out before. The gist of the mistake Frank commits in this publication is assuming that the error correlation r(x,y) = 1 for every single thermometer in existence and at every point in time. In plainer language, he is arguing that the error E is the same for all measurements. This assumption is problematic in two ways: 1) it is patently and absurdly false, and 2) even if it were true, the error E would entirely cancel during the anomalization process, leaving no uncertainty in the final anomalies at all, which is also obviously wrong. Notice that equations 2-7 are a special case of the law of propagation of uncertainty where r(x,y) = 1 for all combinations of x and y. But then, when he jumps to equation 8, which propagates the uncertainty of anomalization, he changes it to r(x,y) = 0. The deception is subtle indeed. If you didn’t know about the law of propagation of uncertainty you almost certainly would not have caught this subterfuge. There are other egregious mistakes in the publication, but I personally feel they are secondary to what I mentioned above.
Equations 2–7 in the paper combine independent sources of uncertainty (instrument resolution, nonlinearity, and so on), none of which are treated as correlated. Not seeing the sleight of hand here.
Likewise, Equation 8 combines the final propagated uncertainties of land and ocean datasets, again assuming no correlation between them, and for the same reason: they arise from fundamentally different measurement systems and are treated as independent. The logic is consistent throughout.
Please elaborate on your sleight of hand claim.
Second, in what way does anomalization eliminate systematic error? You’re suggesting that averaging 30 or 31 systematically biased daily readings into a monthly mean, and then subtracting a long term monthly average (also assumed to be error free), somehow removes the original bias from those 30/31 measurements?
Yes they are. Refer to the law of propagation of uncertainty when r(x,y) = 1 for all combinations of x and y. When the measurement model is in the form y = Σ[Xi], then u(y) = sqrt[Σu(Xi)^2] only when r(x,y) = 1, which is what he did in equations 4, 5, and 6. [JCGM 100:2008 equation 16]
Correct on the “assuming no correlation” part. That’s my point. He sets (implicitly and perhaps unbeknownst to him) the correlation r(x, y) = 0 when propagating the anomalization.
It is exactly as I said. He treats the measurements as having errors with perfect positive correlation such that r(x,y) = 1 in part of the multistep procedure and r(x,y) = 0 in another part.
It’s the way the law of propagation of uncertainty plays out when the measurement model is y = a – b. When r(a, b) = 1 then u(y) = 0 regardless of what u(a) = u(b) actually is. You can use the NIST uncertainty machine to prove this out for yourself.
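[editor’s note: the y = a − b point can be checked numerically. The sketch below is not from the thread; the values are invented, and it simply simulates perfectly correlated versus independent errors of equal size.]

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
u = 0.5                      # standard uncertainty of both a and b

# r = 1: a and b share the identical error realization
shared_err = rng.normal(0, u, n)
y_corr = (10.0 + shared_err) - (8.0 + shared_err)

# r = 0: a and b have independent errors
y_indep = (10.0 + rng.normal(0, u, n)) - (8.0 + rng.normal(0, u, n))

print(y_corr.std(ddof=1))    # ~0.0   -> u(y) = 0 when r = 1 and u(a) = u(b)
print(y_indep.std(ddof=1))   # ~0.707 -> u(y) = sqrt(2) * u when r = 0
```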
It removes SOME of the systematic bias. If r(x, y) = 1 for all x and y combinations, then and only then does it remove ALL of the bias. r(x, y) is never equal to 1 in reality, though.
Did you not notice that the very document you cited explicitly places equation 16 under Section 5.2, titled “Correlated input quantities”?
Which distribution am I supposed to select? Day to day field temperature measurements don’t share a common distribution. Their error characteristics vary with conditions, instrumentation, and time. Assuming they do is just that: an assumption. And it’s almost certainly inaccurate, especially if you’re treating large averages, monthly or multi-decadal, as inputs in your uncertainty model.
From Frank’s paper:
From Google AI:
(Bold mine)
That’s the problem. RSS is only appropriate for measurement models in the form y = Σu(Xi)/n and r(Xi, Xj) = 1.
What was the prompt you gave Gemini?
Here is the prompt I gave Gemini 2.5 Flash…
For a measurement model in the form y = Σ[Xi, 1, n] / n and given that u(x) = u(Xi) for all Xi what is the formula for u(y) when r(Xi, Xj) = 0?
Gemini’s conclusion was u(y) = u(x) / sqrt(n).
I then asked…
What happens when r(Xi, Xj) = 1?
Gemini’s conclusion was u(y) = sqrt[u(x)^2] = u(x).
It is interesting to note that Gemini derives these formulas from the law of propagation of uncertainty.
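[editor’s note: the two formulas quoted from Gemini can also be verified by simulation. A minimal sketch, assuming equal standard uncertainty u for every Xi and normally distributed errors:]

```python
import numpy as np

rng = np.random.default_rng(2)
trials, n, u = 200_000, 30, 0.5

# r(Xi, Xj) = 0: each measurement gets its own independent error
y_indep = rng.normal(0, u, size=(trials, n)).mean(axis=1)

# r(Xi, Xj) = 1: every measurement in a trial shares one common error
y_corr = rng.normal(0, u, size=(trials, 1)).repeat(n, axis=1).mean(axis=1)

print(y_indep.std(ddof=1), u / np.sqrt(n))   # both ~0.091 -> u(x)/sqrt(n)
print(y_corr.std(ddof=1), u)                 # both ~0.5   -> u(x)
```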
No, it is not appropriate for r(Xi,Xj)=1.
Yes, but Dr. Frank explicitly makes clear that uncertainty only reduces with increased sample size under specific conditions:
Just to clarify: when the above quote refers to the assumption of uncorrelated errors, it’s talking about systematic errors shared within, say, the same spatial domain. Not the individual uncertainty components (like resolution or nonlinearity) used in the error propagation equations.
Yes it is. If you want we can walk through the derivation together.
That condition for r(x, y) = 0 is when the partial derivative ∂y/∂x < 1/sqrt(N). I discuss this condition in my post here. If you want, we can walk through the derivation together. When r(x, y) > 0, the denominator of this condition increases from sqrt(N) to N as r(x, y) approaches 1.
Anyway, the partial derivative ∂y/∂Xi when y = Σ[Xi]/n is 1/n, which is less than 1/sqrt(n).
I agree with Dr. Frank on this point. What I disagree with him on is that the measurement error is exclusively systematic and correlated such that r(x, y) = 1 which is what he is assuming in the first part of his propagation of uncertainty calculation.
And just so we’re clear here, assuming r(x, y) = 1 is a statement that all measurement error for all instruments over all periods of time is exactly the same every single time a measurement is taken. It is such an absurd argument that it almost defies credulity that a university-educated graduate is defending it.
This statement is true. Correlation refutes a necessary assumption of the CLT.
Non-normal systematic errors mean the individual random variable distribution will also be non-normal.
Measurement uncertainty is not about sampling. It is about developing a good probability distribution so intervals around the stated value can be determined. Too many statisticians can only see the world through sampling distributions.
The GUM says this about random variables.
I’ve tried many times to explain to these folks that this is one reason the GUM says that the standard deviation of the mean is not a standard error of a sampling distribution.
“Correlation refutes a necessary assumption of the CLT.”
Correct, that of independence.
“Non-normal systematic errors…”
What is a non-normal systematic error? By definition systematic errors are not random, so do not have a probability distribution.
“…mean the individual random variable distribution will also be non-normal.”
And how many times do I have to explain that non-normal distributions are not a problem for the CLT? That’s sort of the whole point of the CLT.
“I’ve tried many times to explain to these folks…”
And us folks keep pointing out that you are describing how to calculate the uncertainty of a single measurement, and that the uncertainty of the mean of multiple measurements of the same thing is defined by what the GUM calls the “experimental standard deviation of the mean” as described in 4.2.3, the example in 4.4.3, Note 2 of B.2.17, and TN1900 example 2.
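[editor’s note: a quick simulation of the point that a non-normal parent distribution is exactly what the CLT handles; the exponential parent and the sample size of 50 are arbitrary choices.]

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials = 50, 100_000

# Parent distribution: exponential (heavily skewed, clearly non-normal),
# with mean 1 and standard deviation 1.
samples = rng.exponential(scale=1.0, size=(trials, n))
means = samples.mean(axis=1)

print(means.std(ddof=1), 1 / np.sqrt(n))          # both ~0.141: sd/sqrt(n)
inside = np.abs(means - 1.0) < 1.96 / np.sqrt(n)  # roughly 95% within 1.96 SE
print(inside.mean())
```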
You just answered your own question. Without a probability distribution there is no way to use statistical methods to identify or evaluate the error other than calibration and obtaining a correction chart.
I thought you were an uncertainty expert. You should know these things.
“You just answered your own question.”
No. My question was, what do you mean by a non-normal systematic error? I would add, why do you think it would make a random distribution non-normal, and why do you think that invalidates the CLT?
Of course you cannot use these statistical techniques to eliminate systematic errors or biases. But that’s not a problem with the techniques – it’s a problem with how you are conducting the experiment (or analysis or measurement or whatever).
“I thought you were an uncertainty expert.”
Why would you think that? I’ve repeatedly told you I’m no such thing. All I’ve said is I can understand the maths and try to correct your misunderstandings.
Read this definition from the GUM.
C.2.16 population
the totality of items under consideration
NOTE In the case of a random variable, the probability distribution [ISO 3534-1:1993, definition 1.3 (C.2.3)] is considered to define the population of that variable.
The probability distribution is considered the population of the random variable.
That probability distribution can be anything, including something that doesn’t have a standard description. That is non-normal.
My last response is: why do you think Dr. Frank is wrong simply because you don’t understand what he has said?
It’s alright, I wasn’t expecting an answer.
Nothing you said has any relation to my question, which was what you think a non-normal systematic error means. A systematic error is not a random variable; it is just a constant. Its probability distribution is just 1 at that value.
The effect of a systematic error is just to add a constant to the random variable. If the variable has a normal distribution, it will still have a normal distribution when you add a constant. If it wasn’t normal, it will still have the same-shaped distribution, just shifted.
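[editor’s note: a tiny simulation of the point just made: adding a constant offset shifts a distribution without changing its spread or shape. The gamma parent and the 0.7 offset are arbitrary.]

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.gamma(shape=2.0, scale=1.0, size=100_000)   # non-normal parent
bias = 0.7                                          # constant systematic offset

def skew(d):
    # simple moment-based skewness estimate
    return float(((d - d.mean()) ** 3).mean() / d.std() ** 3)

for data in (x, x + bias):
    print(round(data.mean(), 3), round(data.std(ddof=1), 3), round(skew(data), 3))
# the mean shifts by 0.7; the spread and shape (skewness) do not change
```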
“My last response is why do you think Dr. Frank is wrong…”
You need to narrow down which particular “wrong” you are talking about.
As we discussed a few years ago there are some basic mathematical concepts where he is just the opposite of correct. E.g. standard deviations being negative. Then there are numerous typos in his equations.
As far as the claims about uncertainty of the global average, it’s possible to bend over backwards and say using his definition of uncertainty he could claim to be correct. It’s just not a definition that I would find useful, and requires you to abandon everything described in the GUM, and all the other books on measurement uncertainty.
Show your math that refutes his. I tire of you presenting yourself as a worldwide expert whose assertions and opinions should be taken as gospel without any references or showing any math whatsoever. I want to see both references and math.
We’ve been through this ad nauseam. You always just try to weasel out of the conclusions. Quote some passage about repeatability conditions or some such, twist the definition of uncertainty or jump to some unrelated subject.
It really just comes down to Pat claiming with no evidence that the correct way to determine the measurement uncertainty of a mean is using RMS, by which he means taking the standard deviation of the errors. In his case he already knows what this is because he’s stated the standard uncertainty for a single daily measurement. But he goes through the whole pantomime of squaring it, multiplying it by the number of days in a month, then dividing by the number of days, then taking the square root. All to get back to the number you first thought of. A lot of effort to determine that the average uncertainty is the average uncertainty. The result of this is it makes no difference if you are talking about a single day, the average of 30.147 days, or the average of 30 years. You always have the same uncertainty.
And all of this would be fine, if there was any justification given for why you think RMS is the correct way to determine the measurement uncertainty of the average of multiple independent measurements. But no justification is given. (As I said, I think he tries to use an obscure definition of uncertainty, but he never explicitly explains this in the document as far as I can recall.)
How I would do it, if you can assume these measurement uncertainties are random and independent, is to use the tried and tested methods outlined in the various books and documents I keep being pointed to, such as Taylor or the GUM. We can then treat the averaging of multiple daily readings using the rules for propagating uncertainties. As has been explained many times, this leads to a general equation for the uncertainty of the average: the individual daily uncertainty divided by the square root of the number of observations.
But this is only telling you the uncertainty caused by measurement uncertainty. This will be small compared with the uncertainty you would get from treating the observations as a random sample. That would be the standard deviation divided by root N. And that in turn is not the actual uncertainty caused by estimating the global average from sets of imperfect data, that is not randomly distributed, and which has to be adjusted to reduce the imperfections in the data. How you do that is not something I would like to try, given, as I keep pointing out, I am not an expert.
If you really want me to spell out the maths of the measurement uncertainty, I will, for the 1027th time, but I doubt you’ll accept it any more than you did the first time.
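[editor’s note: a rough sketch of the two different “uncertainties of the average” described above. The daily values and the per-reading uncertainty u_daily are invented, not real station data.]

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented daily Tmax values for one month (deg C) plus an assumed per-reading
# standard measurement uncertainty of 0.5 C.
daily = 20 + 5 * np.sin(np.linspace(0, np.pi, 30)) + rng.normal(0, 2.0, 30)
u_daily = 0.5
n = daily.size

# Propagated measurement uncertainty of the monthly mean (independent errors)
u_meas = u_daily / np.sqrt(n)

# "Sampling" uncertainty: day-to-day spread treated as random scatter
u_sampling = daily.std(ddof=1) / np.sqrt(n)

print(daily.mean(), u_meas, u_sampling)  # u_sampling dominates u_meas here
```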
Aye, there’s the rub.
Once all the other sources of uncertainty are minimised, measurement resolution dominates, and is still Resolution / sqrt(12)
This is a corollary of the Simson et al paper you referenced ages ago. It’s somewhat akin to Heisenberg uncertainty.
Sorry, Phillips, not Simson.
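[editor’s note: the Resolution / sqrt(12) figure is the standard deviation of a uniform rounding error; a quick numerical check, with a 0.5 C graduation chosen purely as an example.]

```python
import numpy as np

rng = np.random.default_rng(6)
resolution = 0.5                                   # e.g. a 0.5 C graduation

true_vals = rng.uniform(0, 30, 1_000_000)
read_vals = np.round(true_vals / resolution) * resolution
rounding_error = read_vals - true_vals

print(rounding_error.std(ddof=1))                  # ~0.1443
print(resolution / np.sqrt(12))                    # 0.1443...
```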
Random and independent are not sufficient for determining measurement uncertainty. The same measurand must be used.
A monthly average is a single measurand with a single input quantity. The average of the observations does not require propagation of uncertainty. Propagation of uncertainty is only needed for >1 input quantity. Section 4.2 of the GUM describes how to determine the Type A measurement statistics of the random variable containing the observations.
Why do you never do any research of your own? You just keep acting like you are THE EXPERT that knows more than everyone else. Here is a NIST document on LIG thermometers.
https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication819.pdf
You don’t have to read far to see a non-linear correction chart.
The value of correction changes based on temperature being measured. THAT IS NOT A CONSTANT VALUE.
Yes I was oversimplifying. A systematic error can vary with the quantity. The fact still remains, it is not random, it does not have a probability distribution.
See GUM 3.2.4.
Yes, you correct for known systematic errors, and those corrections have uncertainties. I’m not sure if you think that answers my question about how you can have a probability distribution for a systematic error. The problem is the uncertainty is not the systematic error (or effect as the GUM calls it.) See the note to 3.2.3
You need to back this up with some references that specifically state this.
When reviewing NIST TN 1900 Example 2, they do not make this an issue.
Part of your problem is you have no definition of what the measurand is when describing the math. If the measurand is the monthly average, then the associated random variable has a given distribution which defines the uncertainty. NIST is very specific in their description of observation equations and how they are combined into an uncertainty.
If you wish to deal with correlations, then you need to also understand what a correlation actually means. Note below in 5.2.1, the physical quantities are assumed to be invariants. The random variables are the subject, not the physical quantities. That is one reason the NIST in TN 1900 Example 2 assumed most values of uncertainty were negligible and the uncertainty was determined from the probability distribution of the monthly temperatures.
You just brush off “crocodile” with some math equations and assumptions. Nowhere do you justify the physical reasons for saying Dr. Frank’s math is wrong. Dr. Frank is an analytic chemist. These folks deal with measurement uncertainty in everything they do. Every titration, solution, and mass measurement is subject to some kind of uncertainty that combines to make the stated value have an uncertainty interval. Simple griping about whether he used the correct evaluation means nothing without being able to give the physical reasons from the data used.
From the GUM
I would definitely read F.1.2.1, F.1.2.2, F.1.2.3, and F.1.2.4.
And F.1.1.3 makes a good point.
Have you EVER taken into account the fact that influence quantities are not constant in a monthly average? Has anyone seen a paper where the means of the first half of a month and the last half of a month have been compared to see if there is an effect varying with time? Believe me, in most months there is. Is that ever applied to the calculated uncertainty?
In the end, uncertainty is all about how sure one is of a measurand’s value. The dispersion of measurements around the stated value is the internationally accepted method of publishing the value of the measurand being evaluated. That is normally done using the standard deviation of the measurements taken. There is only one instance where the standard deviation of the mean is significant, and that is when there is a single sample, measured multiple times under repeatable conditions. That is something that atmospheric temperatures will never be.
How many times has this been pointed out to him? And that the average formula is not a “measurement model”? Innumerable.
The B&B Gang generated a lot of noise here but failed to invalidate anything in the Gorilla Science video. Just noise and deflection.
Amen! +1000
I should probably clarify something here as well. First, that should read as y = Σ[Xi]/n where y is none other than an average. Anyway, the point I want to clarify is that at no time is u(y) ever actually RSS even when r(Xi, Xj) = 1. It’s just that it is closer to RSS when r(Xi, Xj) = 1. So in that regard Dr. Frank is doubly wrong.
Yep. I sure did!
Whichever one you want. I would try them all if I were you; that way you get a feel for how it changes the result. Hint…in most cases for a sufficient number of measurements it doesn’t! That’s part of the elegance of the law of propagation of uncertainty. For a more complete breakdown of why this is true I encourage you to read sections E, F, and G of JCGM 100:2008. In a nutshell, the law of propagation of uncertainty is compatible with any distribution. This is why it works for triangular, rectangular, etc. in the same way it works for a normal distribution.
BTW…the law of propagation isn’t the only way to skin this cat. In fact, it may not work very well for some non-linear measurement models. A more broadly applicable method is the Monte Carlo method. Fortunately the NIST uncertainty machine provides the results of both methods. For more details in this regard, refer to [JCGM 101:2008].
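[editor’s note: a compact illustration of the two routes mentioned, the first-order law of propagation versus a Monte Carlo run in the spirit of JCGM 101, applied to a simple non-linear model y = a·b with invented numbers. This is not the NIST uncertainty machine itself, just the same idea in a few lines.]

```python
import numpy as np

rng = np.random.default_rng(7)

# Model: y = a * b, with standard uncertainties u_a, u_b, uncorrelated inputs
a, u_a = 10.0, 0.2
b, u_b = 3.0, 0.1

# Law of propagation of uncertainty (first order, r = 0):
# u(y)^2 = (dy/da * u_a)^2 + (dy/db * u_b)^2 = (b*u_a)^2 + (a*u_b)^2
u_lpu = np.sqrt((b * u_a) ** 2 + (a * u_b) ** 2)

# Monte Carlo propagation (JCGM 101 style), assuming normal inputs
samples = rng.normal(a, u_a, 1_000_000) * rng.normal(b, u_b, 1_000_000)
u_mc = samples.std(ddof=1)

print(u_lpu, u_mc)   # both ~1.166
```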
I think you misunderstood my point. By assigning fixed distributions to monthly and multi-decadal means (the inputs), you’re implicitly assuming the individual temperature measurements they’re calculated from share a common distribution (i.i.d.). That assumption doesn’t hold in the real world, making the model inapplicable to this context.
And I think you missed my point that it doesn’t matter. The law of propagation of uncertainty usually works regardless of the distributions of the inputs. In the more complicated cases where it does not yield adequate results you can use the Monte Carlo method instead, which always works.
BTW…just because I fervently defend the mathematics does not mean that I think global average temperature or trend measurements have zero uncertainty or are the be-all-end-all for drawing conclusions with unbridled statistical significance. Far from it. The problem is that many people here get so pissed off at the math that shows indisputably and unequivocally that the uncertainty of the average scales as 1/sqrt(N) when r = 0 that Bellman and I can never have a genuine discussion about what the true uncertainty even is. And some familiar commenters here are so triggered by the math that they make the most absurd math mistakes imaginable to disprove the consequences of the law of propagation of uncertainty. I kid you not. Some of these math mistakes are so trivial that middle schoolers could identify them.
Oh…and speaking of trivial mistakes…Dr. Frank also thinks 2 = 1.96, as is the implication in all of the equations 2-8. Like how does that even make it past peer review? In Dr. Frank’s defense I think he did eventually concede that this was a mistake in one of the many rounds of comments between him, Bellman, and me. The mistake also isn’t material to the conclusion so I don’t really care. But it does highlight the sloppiness in both his submission and the journal’s peer review process. BTW…I did email MDPI regarding many of the issues in the publication. Their response was best categorized as apathy and indifference.
You reveal your ignorance of making measurements and how to treat them. You really should study more and cherry pick less.
From NIST. Expanded uncertainty and coverage factors
A k = 2 coverage factor is commonly used in most labs and is recommended by NIST and other bodies.
It’s a minor point, but the humour is in the way Frank keeps using 1.96 as the factor to get a 95% confidence interval, whilst calling it 2σ.
The other nonsense was, iirc, the way he multiplies every standard uncertainty by 1.96, only to then have to divide it again by 1.96 to put the standard uncertainty into the next equation. But the whole section is just an exercise in performing a series of operations, then inverting them in order to get back to the number you first thought of.
The real issue I had with this is the way he talks about 95% confidence intervals, whilst using a model of uncertainty that only allows absolute intervals. I asked him several times what he meant by 95% whilst claiming the interval had no probability distribution and just represented a zone of ignorance, or whatever he called it. He never explained this paradox.
From NIST. https://physics.nist.gov/cuu/Uncertainty/coverage.html
A person familiar with measurement uncertainty would know this from studying NIST documents. ISO also recommends using a k = 2 factor. You obviously have not spent much time learning about metrology.
Just stop cut and pasting the same irrelevant pieces of text. I know what a coverage factor is and I know what a confidence interval is. None of that is relevant to what Dr Frank claims he is doing.
You still miss the stupidity of him saying he is using a 2 sigma interval, and then multiplying the standard uncertainty by 1.96. And you still don’t get why his definition of uncertainty, which is completely different from the GUM’s, does not allow a 95% confidence interval.
Here’s the problem.
Does the 2 sigma ±1.94 mean “an interval having a level of confidence of approximately 95%”, or is it a range of ignorance wherein the true mean can be anywhere? Is there a 5% chance that it could also be outside the interval?
By the way, looking at that quote did you ever call him a fraud for using too many significant figures in his uncertainty intervals, or using inconsistent decimal places with some of the others?
The question you pose illustrates your uncertainty of the definition of an expanded uncertainty. It tells me you don’t know, that you are uncertain.
That is also the definition of measurement uncertainty. YOU DON’T KNOW THE TRUE VALUE OF A MEASUREMENT. How do you describe the range of what you don’t know. The GUM was developed in order to have a standard method of describing the level of uncertainty. The expected interval of uncertainty is derived by creating a probability distribution that is based on the dispersion of multiple measurements of a measurand. To understand the effects that influence quantities have on the range of dispersion, one must analyze all of the individual influence quantities affecting each measurement and how they propagate to determine the combined uncertainty. That is what an uncertainty budget is designed to accomplish. It describes the individual categories and how they affect the overall dispersion of measurements.
To answer your questions more directly.
The term “true mean” has no meaning. The arithmetic mean of a probability distribution is the maximum frequency of a normal distribution. Is it the real value? Only by luck. Can it be different? Sure, and it probably (sic) is.
I’ll quote from Experimentation and Uncertainty Analysis for Engineers by Coleman and Steele.
Think about how one determines the uncertainty in a sample-to-sample measurement of temperature in a monthly average.
From the GUM
“That is also the definition of measurement uncertainty. YOU DON’T KNOW THE TRUE VALUE OF A MEASUREMENT.“
No. The definition as given by the GUM is a value that characterizes the dispersion of the values that could reasonably be attributed to the measurand. This could be given as a standard deviation or an expanded uncertainty, or half of a specified confidence interval. Just saying “we don’t know” is not really characterizing uncertainty.
“The expected interval of uncertainty is derived by creating a probability distribution that is based on the dispersion of multiple measurements of a measurand.”
Or it can be based on other methods, i.e. a Type B uncertainty. But regardless the point is that there is an assumed probability distribution.
“That is what an uncertainty budget is designed to accomplish.”
How the uncertainty is derived isn’t the point we were arguing. It’s the fact that Pat Frank’s definition of uncertainty is based on ignoring the probability distribution, treating the interval as a range of ignorance. Yet this ignores the fact that if there is no probability distribution, saying the range represents a 95% confidence interval is meaningless.
“The term “true mean” has no meaning.”
Call it what you want. I was just simplifying Frank describing it as the “physically correct mean anomaly”.
“Is it the real value?”
It’s the value we are interested in. If you want to argue what’s really real, you’ll need to talk to a philosopher.
“I’ll quote from Experimentation and Uncertainty Analysis for Engineers by Coleman and Steele.”
And I’ll ignore it. As I keep saying if you think what I said was wrong, explain why and provide a quote if you think it supports your argument. Mindlessly quoting whole passages with no indication that you actually understand them is not a good substitute for providing an argument. Next you’ll be asking an AI to do your thinking.
OK, I lied. I did read it, which I suspect is more than you did. How does any of that answer my question?
“Think about how one determines the uncertainty in a sample-to-sample measurement of temperature in a monthly average.”
What do you mean by “sample-to-sample”? And what does any of this have to do with my question about whether you or Pat Frank thinks that his 95% uncertainty interval actually means there’s a 5% chance that the “physically correct mean” can be outside the interval.
Quoting the GUM is irrelevant when he insists that the GUM is not describing the correct type of uncertainty.
Tell you what, since you are the expert in measurement uncertainty, show us how Dr. Frank should have done the calculations.
I want to see your math and your assumptions that show WHY his results are in error.
So far you have only asserted unsubstantiated mistakes and opinions. Show your work that refutes what he has done.
I don’t think he actually believes that. The discrepancy you pointed out relates to the tails of a normal distribution. Two standard deviations cover slightly more than 95% of the curve, but since Dr. Frank is aiming for exactly 95% confidence, he uses a multiplier of 1.96 instead. It corresponds exactly to 95% coverage.
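[editor’s note: for reference, the numbers behind the 1.96-versus-2 quibble, checked with scipy; illustrative only.]

```python
from scipy.stats import norm

print(norm.ppf(0.975))            # 1.95996... : exact two-sided 95% factor
print(2 * norm.cdf(2.0) - 1)      # 0.9545     : coverage of a k = 2 interval
print(2 * norm.cdf(1.96) - 1)     # 0.9500     : coverage of a k = 1.96 interval
```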
Right. So the left hand side of equations 2-8 should be 1.96σ, not 2σ. It’s an egregious mistake. Just because the consequence is minor doesn’t mean the mistake isn’t major. In this case it is major because it is so obviously wrong and highlights the sloppiness of Dr. Frank’s submission and MDPI’s peer review process.
And BTW…notice that Gorman is defending this obvious mistake as somehow being correct. That is typical behavior, so you need to be careful when siding with them; otherwise you’re going to be implicated in making absurd and trivial math mistakes as well.
So you believe there is a significant difference in an expanded uncertainty result if a coverage factor of 2 is used instead of 1.96. This is your “egregious” mistake?
Another “egregious” indication you have zero experience in real-world metrology.
See GUM G.6.6
Making engineering assumptions is well beyond the realm of these math jockeys. The same is true of Type B evaluations: the numeric values do not vary appreciably between triangular and rectangular, so it is more prudent to take the worst case and assume a rectangular interval.
They also whine about Pat’s use of RMS, but this is on the level of Nick Stokes nitpicking. The truth is they understand neither error nor bias.
You entirely miss the point of uncertainty. The goal is to evaluate the uncertainty of each input quantity in a functional relationship, then propagate each uncertainty based upon the functional relationship. The key is that you need to have analyzed each input quantity with its own probability distribution. Only at that point can you “propagate” the individual uncertainties of each input quantity based upon the functional relationship that defines how they are combined into a measurand’s value.
Section 4 in the GUM defines how each input quantity should be analyzed using a random variable to hold the observations of a single input quantity.
A single monthly average has only one input quantity, the monthly average as determined by ~30 observations. The distribution of that random variable defines the statistical parameters used to determine the measurand’s value.
I would point you to the GUM
Do a search on “dispersion” in the GUM. You will find that it refers to the variance of the random variable quantities, which is what NIST TN 1900 does, and not the variance of the calculation of the mean.
bdgwx, Jim Gorman posted an excellent reply. Please, I encourage you to read it. It encapsulates my point well.
His post has almost nothing to do with anything either you or I were discussing.
The only exception is his reference to NIST TN 1900 E2 which I highly recommend you read.
Notice that NIST assesses the uncertainty of a monthly average temperature at their HQ using the 1/sqrt(N) scaling rule.
That is completely contrary to what Dr. Frank did.
And it is not what Jim Gorman says NIST did. Jim is either lying about what NIST did in that example to try and fool you (and everyone else) or he is grossly incompetent in his reading of that example. After years of interactions with him I have no idea which one it is.
Don’t take my word for it. Read it. Pay particular attention to the final and salient point on pg. 31, where they say, and I quote, “Therefore, the standard uncertainty associated with the average is u(r) = s/sqrt(m) = 0.872 °C.” It couldn’t be more clear. NIST divides the standard deviation of the observations by sqrt(m), where m is the number of observations that go into the average.
The fact that the Gormans either can’t see this or are straight up lying about it almost defies credulity. Bellman’s response below regarding this topic does a pretty good job of reflecting on the Gormans’ absurd position.
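[editor’s note: the s/sqrt(m) calculation quoted from TN 1900 Example 2 is easy to reproduce in form. The daily values below are invented stand-ins, not NIST’s data, so the 0.872 °C figure will not be reproduced exactly.]

```python
import numpy as np

# Invented stand-in for a month of daily maximum temperatures (deg C);
# NIST TN 1900 Example 2 uses its own 22 observed values, not these.
t = np.array([24.1, 25.3, 23.8, 26.0, 27.2, 24.9, 25.5, 26.8,
              23.4, 25.0, 26.3, 24.6, 25.8, 27.0, 24.2, 25.1,
              26.5, 23.9, 25.7, 26.1, 24.8, 25.4])

m = t.size                         # number of observations (22 here)
s = t.std(ddof=1)                  # standard deviation of the daily values
u_avg = s / np.sqrt(m)             # standard uncertainty of the monthly average

print(t.mean(), s, u_avg)
```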
“His post has almost nothing to do with anything either you or I were discussing.”
Here’s what Jim Gorman said:
This just reinforces my point that your measurement model isn’t applicable when using monthly inputs as averages.
Under certain assumptions.
I saw what he said. That’s how I know it is not related to the correlation of errors of measurements and whether Dr. Frank’s math is correct or not.
His post is meant to deflect and divert away from what we were discussing.
It also happens to be wrong.
First…that wasn’t your point. Your point was that Dr. Frank’s math was correct.
Second…if that is your point now then understand that it too is wrong. If you want to switch the discussion over to this new talking point then fine. We can do that.
Like I said…when the measurements are not correlated.
But in the context of what the Gormans are saying that is moot, since they don’t accept the NIST solution of scaling the propagated uncertainty by 1/sqrt(N) regardless.
In fact, they think the uncertainty of the average is computed via RSS and that NIST should have reported it as u(t) = sqrt[ 4.1^2 * 22 ] = 19.2 °C.
Here are some other things the Gormans say that I want you to consider.
1 – thinks sums and averages are the same thing
2 – thinks division (/) is the same as addition (+)
3 – thinks PEMDAS rules are optional
4 – thinks the surface area of a hemisphere is 4πr^2/2 + πr^2
5 – thinks Σa^2 = (Σa)^2
6 – thinks sqrt[xy^2] = xy
7 – thinks ∂q/∂(x/w) = ∂q/∂x when q = x/w
8 – thinks Σ[u(x_i), 1, N] / N = u(Σ[x_i, 1, N] / N)
9 – thinks sqrt[Σ[x_i^2, 1, N]] / N is an average
10 – thinks [u^2(x) + u^2(y)] / n is an average
11 – thinks u(q/n)^2 is an average
12 – thinks ∂(q/n)/∂(x1/n) = 1/n when q/n = Σ[x_i, 1, n] / n
13 – thinks a/b = x is solved for a
14 – thinks a/b = a is a valid identity
15 – thinks y = (a – b) / 2 is an average
16 – thinks ∂(Σ[x_i, 1, n]/n)/∂x_i = 1
17 – thinks ∂(πR^2H)/∂R = 0
And this is just a subset of stuff they have told me. So you need to be very careful when agreeing with them. As you can see, some of the stuff in the list is so absurd that even elementary-age students would recognize it as wrong.
Oh please, not the math “errors” database again.
I’m not going to address all your lies. This one will be the only one necessary to show what you are.
Neither Tim nor I have said a sum and an average are the same thing. What we have said is the uncertainty is the same. The uncertainty of a sum is calculated using the propagation of uncertainty.
You are the one that wants to divide both the sum by a value (number of items) to obtain an average and then divide the uncertainty of that sum by the same number. That calculation provides an average uncertainty, not a representative statistical parameter of the dispersion of measurements in the values making up the average. You would like to sell me a product whose actual dispersion of measurements was not known. In other words 10,000 2×4’s with an average length of 8 feet ±0.1 inches when in reality, the product is 8 feet ±4 inches.
We have both told you that the best example is what would happen if you tried to sell a product that way.
Making up stuff is a sure sign that you have no legitimate response. ROTFLMAO.
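Setting the rhetoric aside, the two quantities being argued about in the 2×4 example are both well defined and easy to compute side by side. Here is a minimal sketch with invented lumber lengths (normally distributed around 96 inches with a 4-inch spread, purely for illustration); it prints both the standard deviation of the individual boards and the standard error of the batch mean, which answer different questions.

import math
import random
import statistics

random.seed(1)

# Hypothetical batch of 10,000 2x4's: nominal 96 inches with a 4-inch spread.
# The numbers are invented for illustration only.
lengths = [random.gauss(96.0, 4.0) for _ in range(10_000)]

n = len(lengths)
mean = statistics.mean(lengths)
sd = statistics.stdev(lengths)       # dispersion of individual board lengths
sem = sd / math.sqrt(n)              # how tightly the batch average is pinned down

print(f"mean length   = {mean:.2f} in")
print(f"SD of boards  = {sd:.2f} in   (spread a builder cutting boards cares about)")
print(f"SEM of batch  = {sem:.3f} in  (uncertainty of the average itself)")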
For the lurkers…you can see the conflation of a sum and average as mistake #14 in this comment. In the equation q/n = x1/n + x2/n + … + xn/n the variable q is a sum, not an average. So he tried to calculate the uncertainty of the sum, not the average. And you can see later on that Tim defends his original assertion that it is a sum.
And it’s not just mistake #14. You can see that this isn’t an isolated mistake either. There are other instances where they have a problem identifying what an average actually is.
So if the Gormans truly do know what an average is then they are doing everything they possibly can to convince everyone otherwise.
For the Gormans…if you’re ready to start addressing some of these math mistakes I’m willing to engage you. But you’re going to have to actually fix the mistakes without making other mistakes. Are you guys able to do this?
“What we have said is the uncertainty is the same.”
You’re still claiming that? So why do you keep saying the standard deviation is the true uncertainty? Do you still not see the implication of claiming the uncertainty of an average is the same value as the uncertainty of the sum?
“You are the one that wants to divide both the sum by a value (number of items) to obtain an average and then divide the uncertainty of that sum by the same number.”
Yes, that’s what I’ve been telling you since 2021.
“That calculation provides an average uncertainty,”
No it doesn’t. Adding up all the uncertainties and dividing by N would give you the average uncertainty. And in case you haven’t noticed, that’s what Pat Frank does.
“not a representative statistical parameter of the dispersion of measurements in the values making up the average.”
You’re describing a standard deviation, not the uncertainty of the mean.
“In other words 10,000 2×4’s with an average length of 8 feet ±0.1 inches when in reality, the product is 8 feet ±4 inches.”
And again you are comparing the uncertainty of the mean with the standard deviation. And neither is what you claim you want, which is to treat the uncertainty of the sum as the uncertainty of the mean. What’s the uncertainty of the sum of your 10000 planks of wood?
Because I am purchasing dimensional lumber of a certain size for use in building. I could care less what the interval is that describes where the mean of 10,000 items might lie.
I want to know how much time and waste will be generated cutting up to 4″ off half of the 2×4’s. I want to know if half the lumber will be unusable because it is too short.
We aren’t dealing with how accurate the mean of some poll might be. Or, if a sample of some population will be appropriate for an advertiser.
I want to know if every O-ring I install will hold under a certain pressure. I want to know if each pushrod I put in an engine will work or if I’ll have to tear down the motor a second time. I want to know if every manufactured I-beam I purchase will support a given weight.
That is why the uncertainty of the mean and the average uncertainty only apply to a single measurand when measurement observations have been taken under repeatability conditions.
Do a search on the word dispersion in the GUM. Show how many of the uses of that word mention the phrase “standard deviation of the mean”.
“I could care less what the interval is that describes where the mean of 10,000 items might lie.”
Then it wasn’t much of an analogy. You do care about the interval that describes where the global mean may lie.
You fail to realize what the import of that is. NIST also says that other uncertainty is negligible. That may be the case at their station. I doubt it is at others.
Most importantly, when one has sufficient readings of the measurand, and expects no other uncertainty to be important, then one can calculate the standard deviation of the mean FOR THAT SINGLE MEASURAND.
Therefore the mean for that single measurand has that uncertainty. It does not apply to any other stations or even for other months at the NIST station.
My problem with NIST’s example is that these are single measurements rather than measurements of the same thing. They are done under reproducibility conditions and not repeatability conditions. The standard deviation describes the dispersion of the measurements, and that is the value that should be used when calculating an anomaly (whose variability is another issue).
In the end, the expanded measurement uncertainty is 1.8°C, which entirely subsumes anything like a one-thousandths-of-a-degree calculation. It means that under significant digit rules, only a one-tenth value can be used.
No lying here. Why did you not discuss the assumptions made in the example? Why didn’t you quote the expanded uncertainty of 1.8 °C? That is the interval that NIST and ISO require to be quoted when values are published.
Why didn’t you mention that another procedure arrived at a closer but wider interval?
See the last paragraph below recognizing a different procedure (Wilcoxon, 1945; Hollander and Wolfe, 1999). This procedure doesn’t rely on specific assumptions about the probability distribution.
Did you try this procedure? I did. Did you ignore it because it gave a much bigger uncertainty than the standard deviation of the mean?
Why didn’t you discuss how this large uncertainty affects the determination of temperatures to the one-thousandths digit? That’s like saying 0.001 ± 1.8 °C.
Don’t accuse me of lying when you are an expert of lying by omission.
You are close to part of the issue. Too many here think statistical sampling controls all this. It does not. First, for a series of daily temperatures, one must evaluate whether you have a population or samples. If a population, then sampling has no meaning. If samples, do you have one sample with 30 data points, or do you have 30 samples each with 1 data point? One sample, regardless of size, cannot form a sample means distribution. In fact, if that one sample is done correctly, its standard deviation should match the population’s standard deviation. If you have 30 samples of size 1, then the standard deviation of the mean is calculated by dividing the population standard deviation by √1.
In both cases, the standard deviation is the uncertainty. Exactly what NIST does in TN 1900.
Yes! Something else B&B refuse to acknowledge.
“In both cases, the standard deviation is the uncertainty. Exactly what NIST does in TN 1900.”
How many more times are you going to repeat this nonsense before you actually read the TN 1900 example?
They do not, absolutely not, in any shape or form, use the standard deviation as the uncertainty of the mean. They use the standard deviation of the daily values divided by the square root of the number of observations as the standard uncertainty of the mean.
The fact that you cannot even see that they do that really hints at how strong your cognitive dissonance is in this matter. It’s just not something you want to believe, and so it just vanished from your mind.
Jim, I take your point that for a sample to accurately represent the population distribution, the measurements themselves must be accurate. One important factor that comes to mind for ensuring accuracy is the use of a properly calibrated instrument.
That said, I’m not quite sure what you mean by ’30 samples with 1 data point.’
Also, I was under the impression that Bellman’s statement about the denominator in the equation referring to the square root of the number of samples, and not the standard deviation, was correct. Was there a miswording?
BTW, I’m a layman and my knowledge of metrology is pretty basic.
Under sampling theory, the Central Limit Theory predicts that with a large number of samples, and with each sample having a size >30 one can accurately predict the population mean value by creating a new distribution by using the mean of each sample. The more samples you have, the more detailed the “sample means distribution” will be. The standard deviation of the “sample means distribution” is the term “standard error”.
This is all done because one can not know all the values of a population so samples are done.
The CLT uses this formula for Standard Error.
SEM = SD/√n
Where n equals the SIZE of the samples, not the number of samples.
The GUM defines the members of a random variable holding the observations of a measurand as a population. If you know the entire population, there is no need for sampling.
It boils down to the fact that measurement uncertainty is determined by the observations of a measurand creating a probability distribution used to define the dispersion of the observations. One hopes it is normal, but it can be one of many.
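A quick numerical check of the SEM formula quoted above is easy to run. The sketch below uses an arbitrary non-normal parent distribution (an exponential, invented purely for the check, which has a known standard deviation of 1): it draws many samples of size n, takes the mean of each, and compares the spread of those means to SD/√n, where n is the size of each sample.

import math
import random
import statistics

random.seed(42)

n_size = 30        # SIZE of each sample
n_draws = 5000     # number of samples drawn, only to make the spread visible

# Parent distribution: exponential with rate 1 (standard deviation = 1).
population_sd = 1.0

sample_means = [statistics.mean(random.expovariate(1.0) for _ in range(n_size))
                for _ in range(n_draws)]

observed = statistics.stdev(sample_means)       # spread of the sample means
predicted = population_sd / math.sqrt(n_size)   # SEM = SD / sqrt(n), n = sample SIZE

print(f"spread of sample means = {observed:.3f}")
print(f"SD/sqrt(n) prediction  = {predicted:.3f}")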
I understand you better now. I apologize for grossly mischaracterizing your point when I said:
Hey, no problem. I enjoy the chance to help someone understand. I tutor high school kids in math and science and when the light bulb gets brighter it is a joy.
Too many statisticians here get lost in measurement uncertainty and want to treat measurements as if they were samples of a population. That is not the purpose. The probability distribution is solely used to have a common basis for publishing measurements. It is one everyone can understand and replicate.
Here is an example of an uncertainty budget. Every temperature device should have one of these if scientific use is to be expected. None do, yet climate science tries to convince everyone that they know temperature values to the one-thousandths of a degree. LOL
“None do, yet climate science tries to convince everyone that they know temperature values to the one-thousandths of a degree.”
Missed this bit of nonsense. No one claims we know the global temperature to 0.001°C. HadCRUT gives the uncertainty for recent annual anomalies as between 0.03 and 0.04°C. That’s the standard uncertainty, so a 2σ uncertainty of ±0.06°C at best.
Ten one-thousands is just as absurd — Fake Data, created from nothing.
The uncertainty you are quoting of say 0.03 is a standard deviation of a probability distribution.
What you are claiming is that 68% of the thousands of data points lies within ±0.03 of the mean without having any values beyond two decimal points.
Even a digital thermometer with a two digit display has uncertainty in the third decimal points even if you can’t see it.
Perhaps you would like to explain how to get 0.03 uncertainty from CRN stations whose resolution is 0.1°C.
“What you are claiming is that 68% of the thousands of data points lies within ±0.03 of the mean…”
No. No. No. It means they think there’s a 68% chance that the thing being measured, the “physically correct mean anomaly”, is within ±0.03 of the stated value (depending on exactly how you are defining probability). Or, more generally, that it is reasonable to attribute to the correct mean a value that is within, say, ±0.06 or ±0.09 of the stated value.
“Perhaps you would like to explain how to get 0.03 uncertainty from CRN stations whose resolution is 0.1°C.”
I’ve done similar things before, you just pretend to not understand the argument. I’ll see if I can do it again when I have time, but regardless, this is not the point of the uncertainty estimate. It is not mostly about the resolution of the individual measurements, it’s about all the steps that go into estimating the mean.
The quote you gave did not state any statistical parameters used for uncertainty.
Read the GUM 7.2.4.
This is how NIST quoted TN 1900. Too bad climate science has never heard of NIST.
I was just quoting their final figure. If you want details of how it was calculated I’m sure it’s available.
Do you really want me to go through the rigmarole of GUM 7.2.4 every time I quote an uncertainty? E.g.
By the way – looking these details up I realize I must have read the wrong uncertainty file, and the uncertainty for HadCRUT5 is somewhat better than the values I previously quoted, still much worse than 0.001°C though.
Yes. You should inform EVERY reader what it means.
“Perhaps you would like to explain how to get 0.03 uncertainty from CRN stations whose resolution is 0.1°C.”
OK, so I downloaded the latest daily CRN data, and for this experiment I’ll look at the average of daily values for 2024. I am not claiming this is an accurate temperature average for the US, just that it’s a baseline to compare different resolutions.
After eliminating days with missing data, I have 55894 daily values, and using the stated TAvg values, given to just 1 decimal place I get an average of
11.7926°C.
I’m quoting far too many decimal places, just to annoy you, I mean just so we can compare.
Now I take the daily TAvg values and round them all to the nearest degree. This gives TAvg values that have an uncertainty of at least ±0.5°C.
Using the standard propagation of measurement uncertainty I would expect an uncertainty in the average of all these values of 0.5 / √55894 = ±0.002°C, which I find difficult to believe, but I have faith in statistics.
Pat Frank would say the uncertainty is √(0.5^2 * 55894 / 55894) = ±0.5°C.
You would argue the uncertainty should be the same as the uncertainty of the sum, 0.5 × √55894 = ±118°C, which I would say is physically and mathematically impossible.
So let’s put it to the test. Averaging all the rounded daily values I get
11.7918°C.
Identical to the thousandth of a degree. Difference is 0.0008°C. Given the uncertainty of ±0.002, this is quite a lucky result, but it would have to be a whole lot luckier if the actual uncertainty had been ±0.5.
Pushing things too far to be sensible, I can round all the values to the nearest 10 degrees. That’s ±5°C uncertainty at least for each daily value, and using the propagation rule, an uncertainty of ±0.02°C. Is that remotely possible? The main problem with such a large rounding is that there may be a systematic bias introduced. But let’s see.
Annual average for 2024 based on daily readings to the nearest 10°C
11.7879°C
No, I’m not sure I believe it either. A difference of just 0.0047°C.
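For anyone who wants to reproduce the flavour of this without downloading the CRN files, here is a sketch with synthetic daily values: a seasonal cycle plus noise, invented only so the spread is broadly realistic, not real station data. It shows the same behaviour described above: rounding every daily value to 1 °C, or even 10 °C, barely moves the average of ~56,000 values.

import math
import random
import statistics

random.seed(0)

# Synthetic stand-in for ~56,000 station-days of TAvg (deg C): a seasonal
# cycle plus weather/station noise. Invented values, not real CRN data.
def synth_day(i):
    seasonal = 12.0 + 12.0 * math.sin(2 * math.pi * i / 365.25)
    return seasonal + random.gauss(0.0, 6.0)

tavg = [synth_day(i % 366) for i in range(55_894)]

exact = statistics.mean(tavg)
to_1c = statistics.mean(round(t) for t in tavg)            # degrade resolution to 1 C
to_10c = statistics.mean(10 * round(t / 10) for t in tavg) # degrade resolution to 10 C

print(f"full resolution : {exact:.4f} C")
print(f"rounded to 1 C  : {to_1c:.4f} C  (diff {to_1c - exact:+.4f})")
print(f"rounded to 10 C : {to_10c:.4f} C  (diff {to_10c - exact:+.4f})")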
Do you want to try that again, or reword it? 11.792 != 11.793
That ties in nicely with the Phillips et al paper you pointed out in an earlier thread, where the SEM becomes a good estimator of the uncertainty of the mean where the s.d. exceeds 0.6 * the resolution.
That doesn’t seem correct. Resolution uncertainty is Type B (resolution / sqrt(12)), so 1/sqrt(12) or 0.289 degrees C.
Your SEM should be s.d. / sqrt (n), which is almost certainly larger than half-width / sqrt(n).
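For reference, the Type B resolution figure being discussed comes straight from assuming a rectangular distribution over one resolution step, which a couple of lines of code make concrete (the 0.1 °C and 1 °C values are just the two resolutions mentioned in this thread):

import math

def resolution_uncertainty(resolution):
    # Standard uncertainty of a reading quantised to `resolution`, assuming a
    # rectangular distribution over one resolution step: resolution / sqrt(12).
    return resolution / math.sqrt(12)

print(f"{resolution_uncertainty(0.1):.3f} C for 0.1 C resolution")  # about 0.029
print(f"{resolution_uncertainty(1.0):.3f} C for 1.0 C resolution")  # about 0.289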
“Do you want to try that again, or reword it? 11.792 != 11.793”
Arg, you’re right. The problem of arbitrary rounding. If both figures had been 0.0002 cooler then they would have been identical to 3 decimal places.
“That doesn’t seem correct.”
It was a simplification to get a ballpark figure. Yes, strictly this should have been done with standard uncertainties.
With the aid of a digital computer and the IEEE floating-point number standard, you have calculated the average of a set of numbers out to 16 decimal digits.
So what?
Exactly what does this number tell you about the nature of physical reality?
“So what?”
It was a demonstration asked for by Jim, “explain how to get 0.03 uncertainty from CRN stations whose resolution is 0.1°C.”
It shows that, within reason, the resolution of individual readings is not a bottleneck for the uncertainty of the average. It doesn’t say anything about the accuracy of that mean. If all the readings are 2°C too cold the average will be 2°C too cold. And any number of issues regarding how the stations are distributed and how the average is calculated can be problems. But that’s true whether the resolution of the stations is 0.01, 0.1 or 1°C.
Didn’t think you could give a cogent answer, this word salad confirms it.
You are manufacturing information from nothing, the essence of Fake Data.
I didn’t think you’d admit to understanding it. If there were any words that were too difficult for you, you only have to ask.
But if you are going to accuse me of “manufacturing” information you need to justify that accusation. The CRN data is freely available, you can check it for yourself.
That you don’t understand how you are making Fake Data is another indication of your built-in circular biases toward hockey sticks — you NEED them.
Typical bellman intellectual superiority rant ignored…
So you are not going to provide evidence that I faked the data? Thought not.
You still have no understanding of uncertainty—the interval is part of the data, whether you like it or not. By alleging an impossibly small interval you are faking data, regardless of where it came from.
This illustrates part of the problem you have with measurements. The accuracy of the mean is not what measurement uncertainty tries to define. The dispersion of observations is the issue, i.e., the standard deviation of the probability distribution of the observations.
The accuracy of the mean only tells you that the sample means distribution is very peaked and that it is good estimate of the population mean.
You show that the standard deviation of the 55894 data points is
±0.002°C × √55894 = 0.5°C.
That should be the Standard Deviation of the entire population of temperatures.
Do you really expect anyone to believe that the standard deviation of 55894 stations from pole to pole with the tropics in-between to be ±0.5°C?
You obviously used a computer. What does it tell you about the variance of the 55894 data points?
Do you recognize the problem? The 0.5°C is already a Type B uncertainty that can be ADDED to other uncertainty components.
You are basically trying to estimate the uncertainty of the uncertainty in a circular fashion.
As OC has already mentioned, the resolution uncertainty for a CRN thermometer is 0.1/√12 = 0.03. Uncertainties add, so this is added to the uncertainty budget. You have completed a budget, haven’t you?
You asked
“Perhaps you would like to explain how to get 0.03 uncertainty from CRN stations whose resolution is 0.1°C.”
But of course you predictably ignore the demonstration, and start throwing in all your usual distractions.
“The accuracy of the mean is not what measurement uncertainty tries to define.”
It is if the mean is measurand you are trying to measure.
“The dispersion of observations is the issue, i.e., the standard deviation of the probability distribution of the observations.”
That’s just your delusion. Measurement uncertainty of a mean is not the “dispersion of observations”. You just want to claim it is in order to justify using the standard deviation as the uncertainty. That’s when you are not claiming the uncertainty of the sum is the uncertainty of the mean.
And try to learn what these terms mean. You keep talking about the probability distribution of observations. That’s not what a probability distribution is. The observations can be a way of estimating the probability distribution, but the distribution is not defined by the observations.
“The accuracy of the mean only tells you that the sample means distribution is very peaked and that it is good estimate of the population mean.”
Hand waving talk of “very peaked” distributions aside, yes, that’s exactly what the uncertainty of the mean is. It’s telling you the interval it’s reasonable to attribute to the physically correct mean.
“You show that the standard deviation of the 55894 data points is
±0.002°C × √55894 = 0.5°C.”
No I don’t. I didn’t even mention the standard deviation of the data points. Nor did I attempt to estimate the uncertainty of the mean. All I showed was that it’s possible to get very close agreement between means calculated from data with 0.1 and 1°C resolution. The implication is that when looking at the uncertainty of the mean, having low resolution is not generally an issue.
“Do you really expect anyone to believe that the standard deviation of 55894 stations from pole to pole with the tropics in-between to be ±0.5°C?“
No – I made absolutely no such claim. The 0.5 is the uncertainty caused by rounding the temperature to the nearest integer. A rounded value of 10°C could result from anything between 9.5 and 10.5°C, hence an uncertainty interval of ±0.5°C.
“You obviously used a computer.”
So? Does your engineering education ban the use of computers?
“What does it tell you about the variance of the 55894 data points?”
Who cares about variance. It tells you nothing useful. If you mean standard deviation, that depends on exactly what values you are interested in. If you just want the deviation of all the daily averages, then it’s 11.5°C, and a variance of 132.4 square degrees.
But as we are talking about the annual average a more useful figure would be the deviation of all the annual averages by station. For 2024 this is 7.15°C. (This is just a quick look at the raw data – I’m not allowing for missing data in a year.)
“The 0.5°C is already a Type B uncertainty that can be ADDED to other uncertainty components.”
You are still not trying to understand the point of the exercise. I was not, as I said in the comment, trying to calculate the actual annual temperature for the US, let alone estimating the uncertainty. But the result of the experiment is to show that “ADDING” the resolution uncertainty interval would be pointless, because it makes almost no difference to the mean.
“Uncertainties add”
Stop trying to solve these problems with mantras, and look at the equations for propagating uncertainties. When you add values, the uncertainties add in quadrature. When you scale a value, the uncertainty scales. When you multiply or divide values, the relative uncertainties add in quadrature. When you apply any function to a measurement, the uncertainty changes with the derivative of that function.
I find it astonishing that self-proclaimed experts here completely fail to understand how the equations work.
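For anyone trying to keep the rules straight, here is a compact sketch of those first-order propagation formulas for uncorrelated inputs. Nothing here is novel; it is just the standard GUM-style law of propagation specialised to the usual cases, with a made-up example at the end (30 equal uncertainties of 0.5 averaged together).

import math

def u_sum(us):
    # y = x1 + x2 + ... : uncertainties add in quadrature.
    return math.sqrt(sum(u * u for u in us))

def u_scaled(c, u):
    # y = c * x : the uncertainty scales by |c| (averaging is the case c = 1/n).
    return abs(c) * u

def u_product(y, xs, us):
    # y = x1 * x2 * ... (or quotients): relative uncertainties add in quadrature.
    return abs(y) * math.sqrt(sum((u / x) ** 2 for x, u in zip(xs, us)))

def u_function(dfdx, u):
    # y = f(x): the uncertainty propagates through the derivative of f.
    return abs(dfdx) * u

# Example: average of n equal, uncorrelated uncertainties u0 gives u0/sqrt(n).
n, u0 = 30, 0.5
print(u_scaled(1 / n, u_sum([u0] * n)))   # 0.5/sqrt(30), roughly 0.091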
You didn’t answer the question. How do you get a precision of measurement of 0.03°C when the physical resolution is 0.1°C? You have always failed to show any reference that discusses how the physical resolution is increased by some definite mathematical equation. Even Copilot agreed with me. It is not proper science to increase the apparent resolution beyond what significant digits would allow.
The standard deviation of the mean is a measure of accuracy of the estimated mean, that is, the stated value. It is not the dispersion of observations in a probability distribution. The GUM says:
Look dude, you claim it is an uncertainty. Uncertainties are considered either the standard deviation or the standard deviation of the mean. You pick which one it is. Either way, your statement is far off from reality.
If 0.03°C is the standard deviation of the mean, then the calculation I showed is correct. The standard deviation of the population is 0.5°C.
If you want to claim it as the standard deviation, that is even more outlandish. It would mean that there is practically no difference in temperature from pole to pole.
The variance is a calculated value from the data. It is a necessary step to calculate the standard deviation. It is a common statistical parameter. If you don’t care about the variance, you have no business telling folks how measurement uncertainty works because you don’t care about probability distributions.
You just keep falling over yourself. The resolution uncertainty has nothing to do with the value of the mean. It is part of the combined uncertainty that describes the dispersion of observations surrounding the stated value (the mean).
Here is an example of an uncertainty budget which each station should have to meet ISO requirements for accurate data acquisition.
See that category of “resolution”? It is just one of several uncertainties that should be analyzed and respected.
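Since images do not always come through here, a bare-bones version of such a budget can also be written out in a few lines of code. The component names below are typical categories and the values are invented placeholders, not the figures from any particular station or vendor; the point is only that the components are combined in quadrature and then expanded with a coverage factor.

import math

# Hypothetical Type B uncertainty budget for a temperature channel (deg C).
# Component values are invented placeholders.
budget = {
    "sensor calibration": 0.10,
    "electronics drift": 0.05,
    "resolution (0.1/sqrt(12))": 0.029,
    "siting / self-heating": 0.15,
    "data logger A/D": 0.02,
}

combined = math.sqrt(sum(u * u for u in budget.values()))
expanded = 2 * combined   # coverage factor k = 2, roughly 95 % coverage

for name, u in budget.items():
    print(f"{name:<28s} {u:.3f}")
print(f"{'combined standard u':<28s} {combined:.3f}")
print(f"{'expanded (k = 2)':<28s} {expanded:.3f}")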
“You have always failed to show any reference that discusses how the physical resolution is increased by some definite mathematical equation.”
https://nvlpubs.nist.gov/nistpubs/jres/113/3/V113.N03.A02.pdf
You should remember this, as you were the one who introduced it to me. Special Test Scenario, Rule 3. When the standard deviation is greater than 0.6 times the resolution, then the best estimate of the uncertainty of the mean is s / √N.
“The standard deviation of the mean is a measure of accuracy of the estimated mean, that is, the stated value.”
Yes. Please keep repeating that, maybe you will someday understand it.
“Look dude, you claim it is an uncertainty.”
No I did not. I’m not sure what you are even referring to. You claimed that I showed that the standard deviation of the data points would be 0.5°C. I did no such thing. What I said was the expected measurement uncertainty of the large number of daily measurements would be 0.5 / √55894 = 0.002. For some reason you reversed the equation and said that meant I was claiming the standard deviation of all data points would be 0.5. You are confusing the 0.5 measurement uncertainty, with the standard deviation of all points.
“If 0.03°C is the standard deviation of the mean”
Where do you get that from? I purposely didn’t calculate a standard error of the mean as I doubt it would be a good estimate of the uncertainty of the CRN data.
“If you want to claim it as the standard deviation”
I don’t. You keep reading things into what I said that are just not there.
“If you don’t care about the variance”
Good grief – my point was that variance, as a value is not very meaningful. That’s why you always want to take the square root to get the standard deviation. That’s the value you want if you want to know what the dispersion of the distribution is.
“the Central Limit Theory predicts that with a large number of samples, and with each sample having a size >30 one can accurately predict the population mean value by creating a new distribution by using the mean of each sample.”
That is not what the CLT says. You still seem to think that sampling requires you to take multiple samples. The point of probability theory, including the CLT, is that you can calculate what the sampling distribution will be without having to estimate it by taking multiple samples.
And there is no magic size 30. The theory says that the larger the sample size, the closer the sampling distribution will be to normal. 30 is sometimes used as a rough rule of thumb for a good sample size, but it really depends on the shape of the parent distribution.
“It boils down to the fact that measurement uncertainty is determined by the observations of a measurand creating a probability distribution used to define the dispersion of the observations. One hopes it is normal, but it can be one of many.”
Again, it doesn’t matter if the distribution is normal for the CLT to work, but it helps if they are close to normal if you are only measuring something a few times. This is where Monte Carlo estimates can be more useful.
If it is then it is a new development because ever since I showed the Gormans NIST TN 1900 E2 they’ve been insisting the divide by sqrt(m) isn’t there.
Typo…that should be y = Σu(Xi)/n which is the formula for an average.
Most of Nelson’s criticism is focused on urban heat islands (UHI). It is important for people to understand the concepts involved.
UHI Effect – This is the real phenomenon in which land use changes result in higher temperatures in urban areas.
UHI Bias – This is an artificial phenomenon in which global or regional average temperatures are too high/low due to methodological choices in grid meshing, spatial averaging, etc.
The folly I see most often from contrarians like Nelson is that they do not understand the fundamental concepts in play here. They conflate the effect with the bias. Those are different concepts. They also erroneously assume that the bias can only ever be positive.
How can the UHI bias be negative? Consider a grid cell that starts with a land use ratio of 25/75 urban/rural and a station mix of 75/25 urban/rural, then over time changes to a land use ratio of 75/25 with a station mix of 50/50. That will create a negative UHI bias, as the sketch below illustrates. There are other ways a negative bias can manifest. Berkeley Earth concluded that the UHI bias was statistically equivalent to zero, but if anything it is more likely to be negative than positive after 1950. [Wickham et al. 2013]
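Here is a minimal sketch of that grid-cell scenario in code. The temperatures are invented placeholders (a 0.5 °C rural background warming and a constant 2 °C urban excess) chosen only to show the mechanism, not to estimate anything: because urban land grows faster than the urban share of stations, the station-weighted trend ends up cooler than the true area-weighted trend, i.e. a negative bias.

# Hypothetical grid cell; all numbers are invented to illustrate the sign only.
def cell_avg(urban_frac, t_urban, t_rural):
    # Area-weighted (or station-weighted) average of the cell.
    return urban_frac * t_urban + (1 - urban_frac) * t_rural

t_rural_1, t_rural_2 = 10.0, 10.5            # assumed rural background, era 1 -> era 2
uhi = 2.0                                    # assumed urban excess over rural
t_urban_1, t_urban_2 = t_rural_1 + uhi, t_rural_2 + uhi

# Era 1: 25 % urban land, but 75 % of stations urban.
# Era 2: 75 % urban land, but only 50 % of stations urban.
true_change    = cell_avg(0.75, t_urban_2, t_rural_2) - cell_avg(0.25, t_urban_1, t_rural_1)
station_change = cell_avg(0.50, t_urban_2, t_rural_2) - cell_avg(0.75, t_urban_1, t_rural_1)

print(f"true (area-weighted) change   : {true_change:+.2f} C")
print(f"station-weighted change       : {station_change:+.2f} C")
print(f"UHI bias (station minus true) : {station_change - true_change:+.2f} C")  # negative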
So what about the effect? Because urban areas cover a relatively small percentage of Earth’s surface, even assuming a generously large UHI effect in those areas, it gets mostly washed out when aggregated with the much larger non-urban land areas. Using Dr. Spencer’s new urban heat island dataset we can estimate that the global effect is on the order of a few hundredths of a degree C.
Speaking of Dr. Spencer, hot off the press is [Spencer et al. 2025]. They conclude that the effect (not the bias) has waned significantly here in the United States in the latter part of the instrumental record. They also report that their research is similar to that of [Hausfather et al. 2013], who compared the rural USCRN dataset to the USHCN-adjusted dataset. Hausfather et al. concluded that USHCN-adjusted may still be underestimating the actual warming rate in the US. Note that over their overlap period USCRN shows a warming rate of 0.78 °F/decade vs nClimDiv (formerly USHCN) at 0.65 °F/decade.
And, of course, we show the oceanic heat content (OHC) uptake which has no contribution from urban land use changes because…ya know…urban areas aren’t over the ocean. OHC has risen significantly so Nelson’s challenge that the Earth cannot be warming is not consistent with the consilience of evidence. [Cheng et al. 2025]
Kindly stop posting your laughable ocean heat content graph.
Oh, and take your equally ludicrous “global average temperatures” elsewhere too.
Like many of my posts here on WUWT this one was put in the “awaiting moderation” queue. As you can see a moderator approved my post with the OHC dataset included. So if OHC dataset are banned from WUWT then neither I nor whichever moderator approved it is aware of that ban. In the unlikely event that it is actually banned then I will obviously comply with the ban and request my post be deleted and avoid including OHC datasets in future posts.
BTW…speaking of moderators…I’m one of the posters that gets frequently moderated. I want to extend my utmost appreciation to you guys (or girls) for reading, vetting, or whatever it is you are doing to my posts prior to them going public. I know that must be annoying and/or inconvenient even if only minor. I also want to extend my gratitude for always (as best I can tell anyway) approving my posts. Thank you!
Jeebus, you have a highly overinflated opinion of yourself.
I’m genuinely sorry you feel that way. The next time you are in St. Louis let me know. Let’s meet up and chat. I think you’d find that I’m an okay guy and not the monster some make me out to be.
You have never explained how an ocean temperature increase of 0.16C from 1960 to date is meaningful, let alone measurable.
Refer to [Cheng et al. 2017].
Not measurable, stop pretending.
Before 2005, there was limited measurement coverage of the whole of the ocean… any pretence that you can MAKE UP data to cover the whole ocean is a farce.
Mr. x: Who said “banned”? It was a request, kindly stated, out of obvious concern that you lose credibility posting nonsense. I have seen that your credibility is lost and matters not one whit to you, but Mr. cat may still have hope for you.
The thing is, who cares? Because whether this is right or wrong, whether there is a warming crisis, nothing much going on, or something in between, why does it matter?
Whatever is the case, the energy policies that the activists in the English speaking countries and maybe Germany want to force on their countries will make no difference and will inflict enormous damage. The damage will be due to the fact that they are impossible to implement. The attempt will be hugely expensive, futile and even were it to work, totally ineffective.
For 40 or 50 years now the activists in the US, UK, Canada, Australia, NZ and Germany have been trying to persuade the rest of the world both that there is a climate crisis and that getting to Net Zero, starting with moving electricity generation to wind and solar, is the solution.
They have failed dismally on both counts. China, India etc have been sabotaging COPs and growing coal use as fast as unrestrained economic growth requires. No-one else believes it, no-one else is making any efforts at emission reduction.
It’s been an episode of complete hysteria. It’s like someone telling you your winter cold is really pneumonia or Covid, and then telling you that the remedy is for you to stand on your head for five minutes every hour. You don’t have pneumonia, and you cannot stand on your head anyway, and even if you did and could, it wouldn’t help.
At last one of these countries, the UK, seems to have found some politicians who are ready to name this nonsense what it is and some mainstream media ready to publish what they say.
Read this, if you can bypass the paywall:
https://www.telegraph.co.uk/news/2025/05/09/net-zero-must-end-before-britain-de-industrialised-reform/
In another piece on May 3, after the Reform wins in the local elections, Tice was reported saying this. The piece was called “Reform’s councils begin war on net zero projects in countryside”.
“We will attack, we will hinder, we will delay, we will obstruct, we will put every hurdle in your way. It’s going to cost you a fortune, and you’re not going to win. So give up and go away.”
This was in reference to blocking renewable projects like solar farms, pylons, and battery storage systems in areas controlled by Reform UK councils, such as Lincolnshire.
Additionally, the article notes Tice’s broader strategy:
Reform UK will use its new control of ten councils to use “every lever” available to block renewable projects. He indicated that Reform-controlled councils and mayors, such as in Lincolnshire, would leverage their authority to delay or block projects, potentially through mechanisms like the judicial review process.
Reform is at 30% in the latest polls. It’s dead, Jim. Not the climate hysteria yet, but Net Zero is dead.
Since I do try to make a reasonable effort to reply to commenters whose content seems genuine, I wanted to make sure to reply to yours. However, since your post seems focused more on policy and/or politics and doesn’t seem to address anything I’ve said directly, I am going to respectfully bow out of this discussion. I usually disengage from policy and/or politically oriented discussions because I don’t think I can add anything of value to them, nor is it something I’m passionate about. That’s not to say that it isn’t important. It’s just not for me. For that reason I’m out. No disrespect to you.
Yes, I understand this, and my reply was indeed only indirectly on topic with what it replied to.
But the thing that is going to affect all our lives (it already has) is not the long distance prospect of warming. It’s the measures politicians and activists are taking now to supposedly prevent it. It seems to be generally assumed among activists that if you have made the case for dangerous levels of warming a generation from now, you have thereby made the case for whatever measures on energy they advocate in the near term.
I think it is therefore very important to keep making the point that, when proposing these measures either directly with this justification or by implication, you have to show that they are doable, that they will work, and that they will be effective. And affordable with it. Not just that it’s warming, but that some action is necessary and that what is proposed is fit for purpose.
I suspect that in fact you will not be so indifferent to policy questions. In the end this is almost never an academic question about meteorology or climate trends. The entire program is to make a call for action, with claims about the climate justifying it. It’s not just discussing the chemical composition of some planet orbiting a star light years away.
You aren’t commenting on WUWT solely to get an academic hypothesis discussed more rigorously. You’re doing so because you think it has policy implications. Or is this wrong?
This is why I keep saying, never mind if there is a climate crisis. Whether there is or not, what you all want to do is not remotely sensible. Don’t dispute whether there is warming or how much. Dispute what is doing the real damage right now: the mad policies people have invented to deal with it.
Right to the core of the real problem, Michel. What changes are needed to policy?
Is it warming – yes/no.
If yes: Is the warming a problem – yes/no.
If yes: Can we change it – yes/no.
If yes: Can we afford to implement those changes or is it cheaper to simply adapt?
Seems to me it started warming this time in around 1700, and it was a good thing it did. The Little Ice Age was not a good time (though it did end up producing some very close-grained Swiss Pine that made violins sound really good by the time they cut those trees).
Given that we don’t know why it was warm in the Ionian warm period, the Roman warm period, and the Mediaeval Warm Period (about a thousand years between each, with the last one 1000 years ago), or why it got colder between them, and given that according to the ice cores they can’t have been caused by CO2 levels (and this warm period can’t have been caused by CO2 levels either, since it started around 1700), the warming/cooling almost certainly wasn’t human-caused, and it follows that we also can’t affect it by much either.
I’d point to the fact that we don’t get the sorts of white Christmases that Charles Dickens describes as showing that the warming since 1700 is in fact real and not a measurement error. Similarly the evidence that the Vikings were growing barley and making beer with it in Greenland 1000 years or so ago tells us it really was warmer then than now.
History shows us that the warm periods are not a problem, so there’s no reason to try to reverse that even if it was humanly possible. Getting cooler would on the other hand be a real problem, and we’d need to adapt because we can’t in fact change that.
Brilliant, Simon!
Yes
No
No
Adapt (enjoy!)
100% Michel. (There is no crisis, but if there were, it’s trivial in comparison to the economic devastation of the supposed cure).
And bdgwx, I believe you that you’re a decent guy, even tempered, patient and knowledgeable. Furthermore I have to agree with the thrust of your argument. My take would be that if UHI were the only cause of rising temperatures, the UAH satellite data would not be showing much warming. But to observe that temperatures are probably in a warming trend apart from UHI effects, as I’m happy to do, is a far leap from accepting that there is anything but benefit to mankind.
It really doesn’t impact you out in the middle of the red sea of relatively sane flyover country. Try visiting Massachusetts or Lancashire. At some point, you have to take notice that the Climate Change agenda is destroying our children’s future.
I’m totally on board with the conclusion that civilisation needs to wholly undertake “Adapt-ism” as the only sensible response to changing conditions in climates and other geophysical effects all around the world.
Let academic studies of climatic behaviors continue apace on their way, as most such topics in search of additional knowledge do (tempered by rational grant allocations of course).
But the umbilical that has been built from the start between climatic behaviours and energy systems policies needs to be cut forever.
“It’s dead, Jim”
I think so.
It looks like the public is starting to understand the predicament their politicians have put them in.
Maybe the UK won’t have to go bankrupt before the politicians change course. Let us hope.
They’ve got four years of communist government to live through, and Mad Ed Miliband is obsessed with eliminating industry in the cradle of the Industrial Revolution.
Further to Graeme’s request, maybe you can provide a cogent explanation of how ‘OHC’ is determined. Leather buckets over the side at 8 bells, etc.? Or is it just another example of modeled shtick used by alarmists to ‘plug’ to their desired estimates, e.g., EEI, which are in fact far smaller than the well-known errors inherent in direct (satellite) measurements?
For the methodology of the OHC dataset I cited refer to [Cheng et al. 2017].
From your link: An accurate assessment of OHC is a challenge, mainly because of insufficient and irregular data coverage.
The understatement of the Century.
I think everyone would agree that assessing OHC is challenging. Insufficient and irregular data coverage is an obvious example why. It also happens to be relevant to the reason why the UHI bias exists in the land record as well.
Yes, it is basically sparse and erratic JUNK data.. And anything that comes from it is a joke.
UHI on land is obvious and the effect on thermometer readings is also obvious.
Pretending it hasn’t massively increased over time is just silly.
Oh and thanks for agreeing that even land measurements are often sparse and erratic (although far, far less sparse and erratic than the ocean), and that …
….the surface temperature fabrications are totally unfit for the purpose of comparing global temperature over time.
This is exactly the opinion that Tom Nelson expresses in the video.
Still waiting for you to explain how a temperature increase smaller than measurement uncertainty over 50 years can be meaningful.
The temperature increase was not smaller than the measurement uncertainty. I provided the citation for the information you seek already. You don’t need to wait for my permission to read it.
Nowhere in your paper can I find ANY discussion of instrumental error or drift, or any of the myriad factors affecting temperature measurement at sea. The authors seem to be statisticians and computer jockeys who see nothing absurd about giving temperatures to three decimal places.
First, it’s not my paper. Second, if you didn’t find “ANY” discussion of error or uncertainty then you couldn’t have possibly read the publication. Serious question…what was your point in asking for information if you’re just going to ignore it? I’m asking because from context clues in your responses it seems like you aren’t actually interested in learning how OHC is measured. If I’m wrong then now is the time to show a genuine interest in learning how it is done. We can even learn together if you want.
Read what I wrote again, this time s l o w l y:
Nowhere in your paper can I find ANY discussion of instrumental error or drift, or any of the myriad factors affecting temperature measurement at sea.
The only error the authors discuss is sampling error, and the “paper” (if it can be dignified with that title) just seems to be a statistical rehash of sparse, crappy, noisy data.
The entire paper discusses this. Literally…the whole thing. Not even a single section is absent a discussion in some way of uncertainty and error both at the individual instrumental level and/or the sampling of those instruments. The only possible way you cannot find ANY discussion of it is if you didn’t make it past the 1st paragraph of the publication.
That is patently false. Right there in the 2nd paragraph there are 9 citations right out of the gate related to ocean observation measurements, including errors and uncertainties, especially as it relates to bathythermograph instruments. That is exactly what you wanted to know more about. Yet you feign indignation that it somehow doesn’t exist.
If you are truly interested in learning how OHC is measured then you are doing everything you possibly can to convince me otherwise. For what purpose? I have no idea.
I searched the text using Control F and “instrumental error”. Yes, they merely mentioned it but gave precisely NO discussion of how it overwhelms any signal.
Where do they give figures for the instrumental error of the thermocouples in plus-or-minus degrees Celsius? If they did, their entire paper would be revealed as absurd. Anyone who claims a temporal resolution of 0.008 degrees Celsius per decade, as they do, is clueless by definition.
It’s literally in the first citation mentioned in the text for bathythermographs, which is another 33 pages of material directly relevant to your question that you could have easily seen had you simply read the content. And BTW…that 33 pages of additional material (by some of the same authors, in fact) cites yet another 378 sources that drill down even further. Yet here you are, still feigning like none of this exists.
And there it is. It doesn’t matter what evidence exists. You’ve already rejected it because of your feelings to the extent that you feel anyone who is involved in that evidence is “clueless”.
Serious question…why bother asking how OHC is measured if you don’t actually care and by your own admission would dismiss it regardless?
“I’m asking because from context clues in your responses it seems like you aren’t actually interested in learning how OHC is measured.”
Shouldn’t your last phrase in that sentence actually read “how OHC is not really being measured”, in view of the fact that the Argo-buoys that are supposed to be doing the measuring only go down to a maximum depth of 2,000 metres while the average depth of the world’s oceans is 4,000 metres?
Not just error and drift, but changing microclimates and land use changes.
There is a reason that NOAA has the following for accuracy of single readings.
[image: NOAA accuracy specification for single readings]
If air temperatures are only given to +- 0.3 Celsius, why should ocean temperatures be any more accurate?
They are not. Everyone looks at the uncertainty of the thermocouple as stated by the manufacturer. That is usually the smallest part of uncertainty. The associated electronics have a drift, temperature sensitivity, water flow restrictions from flora and fauna in the tubes.
I’ll say again, nowhere to be found is an uncertainty budget that takes all the influence quantities and their uncertainties into account to determine a final combined uncertainty. They just make themselves look better by finding the one item with the smallest uncertainty and concentrating on it.
Thanks for the link – a sample from the third paragraph of the introduction:
‘Recently, Cheng and Zhu (40) used an ensemble optimal interpolation method (CZ16) combined with covariances from Coupled Model Intercomparison Project phase 5 (CMIP5) multimodel simulations to provide an improved prior guess (section S2).’
So, the state of the art in model-land, then, is a circular path that assumes that radiant transfer models can provide an accurate estimate of OHC variability, that in turn is assumed to provide an accurate estimate of EEI, that in turn is assumed to be caused by radiant forcing in the troposphere.
I think it’s time to face the reality that there is no evidence from the geological record that CO2 is the control knob of the Earth’s climate. Which means it’s time to re-examine the base assumption that radiant transfer models accurately explain how thermal radiation emitted from the Earth’s surface and absorbed by IR-active gases is actually conveyed to the upper troposphere where it can be radiated out to space.
It’s a good thing the authors’ OHC estimates did not rely on CMIP5 alone. They only used it as a first guess to form the 3-dimensional field. They then use variational assimilation techniques to morph that field so that it represents the actual observations. This is a standard practice when working with both scalar and vector fields in 3-dimensional domains.
The fact that they rely on CHIMP5 at all shows that they are total garbage from the start.
From the link bdgwx supplied, the answer is the latter.
If the question is…was a model used…then the answer is a definitive yes. Then again, all measurements require a model of some kind. Even something as trivial as a spot temperature measurement requires electromagnetic modeling, thermodynamic modeling, material modeling, etc. So if the idea of a model is offensive to you then you are probably going to find science in general offensive.
Utter drivel. No model is needed to measure a physical temperature.
How do you measure temperature without invoking any kind of rule, heuristic, algorithm, or equation which accepts an input and produces an output?
And in case there is any confusion about what I mean by “measurement model” I’m talking about the concept in [JCGM GUM-6:2020].
Oh, a “concept”.. you mean like CO2 warming ??
Unproven and meaningless. !
When I measure a temperature I don’t bother with your pseudointellectual BS but instead I use a thermocouple previously calibrated against an NIST or NPL standard and note the reading.
A thermocouple measures voltage. The voltage is then used as an input into a model that maps it to a meaningful temperature. That’s assuming it isn’t mapped into a 4-20 mA signal by a transmitter first. If it is then yet another model is used to map amps to a meaningful temperature. Further still some data loggers (like ASOS stations) employ yet another layer of modeling to average multiple instantaneous readings to reduce noise and abate the uncertainty of the reported reading.
All temperature measurements require some kind of measurement model, even if only a simple one. Even a LiG requires a measurement model, despite it being simply a subtraction of the coordinate points between the zero point and the liquid height to get the length of the change, which is then multiplied by a coefficient to yield a meaningful temperature. That’s not even considering the material modeling that goes into determining how the liquid’s density changes as temperature changes, which isn’t always linear, meaning that the coefficient I mentioned above isn’t necessarily constant. A toy version of both models is sketched below.
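To make the idea of a measurement model concrete, here is a toy version of both examples in code. The coefficients are placeholders, not real calibration constants for any instrument; real thermocouple tables use higher-order polynomial fits and LiG corrections are not exactly linear.

# Two toy measurement models, for illustration only. The coefficients below
# are invented placeholders, not real calibration data.

def thermocouple_temp(voltage_mv, coeffs=(0.0, 25.0, -0.5)):
    # Map a thermocouple EMF (mV) to temperature (deg C) via a calibration
    # polynomial T = c0 + c1*V + c2*V^2.
    c0, c1, c2 = coeffs
    return c0 + c1 * voltage_mv + c2 * voltage_mv ** 2

def lig_temp(column_height_mm, zero_height_mm=20.0, mm_per_degc=1.5):
    # Map a liquid-in-glass column height to temperature: length change from
    # the zero point divided by a (here constant) mm-per-degree coefficient.
    return (column_height_mm - zero_height_mm) / mm_per_degc

print(thermocouple_temp(1.0))   # 24.5 C with these placeholder coefficients
print(lig_temp(50.0))           # 30 mm above the zero point -> 20.0 C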
Science is not creating models, whatever you think. Science is using experiments to determine a physical result. Models can help one determine methods and procedures to use in experiments, but the final result must be determined by physically making measurements.
Why do you think theoretical physicists build billion dollar colliders? Or space telescopes. Or gravity wave detectors. Or measurements of entangled particles.
Model outputs are not physical evidence for a reason. They only prove that the output is what the programmer intended.
“OHC” is just a shell-game synonym (relying on a misunderstanding of what the physics term “heat” means) for “temperature”. Why not post the temperature measurements, with their error bars, instead? Is it because we would laugh at them?
Mostly, temperature measurements of the ocean are along thin bands of the main maritime transport routes. The coverage for most of the oceans before 2005 was very sparse and of highly dubious quality.
But they like to pretend they can “calculate” OHC to tiny accuracies.
It’s a total farce.. !
Why not post the temperature measurements, with their error bars, instead? Is it because we would laugh at them?
Excellent question.
Curious silence from bdgwx…
There’s a difference between measuring a single rod length 100 times and measuring 100 different rod lengths–one time each. They try to claim that statistical analysis on both cases give the same precision result. It’s complete nonsense. Even the temperature measurements at a single site is not measuring the same rod length multiple times.
I’m curious…how do you think 100 different rods measured once should be handled? For example, if the uncertainty on each measurement is ±1 mm, what is the uncertainty of the average? For completeness, consider the cases where the correlation is r = 0, r = 0.5, and r = 1.
Are you really serious? The result is not homing in on the correct result. You are violating standard statistics. Amazing!
I am completely serious. Let’s make the scenario concrete so that we can check your result with the NIST uncertainty machine. You have 4 rods, 100 ± 1 mm, 80 ± 1 mm, 120 ± 1 mm, and 90 ± 1 mm where the value after ± is the standard uncertainty. What is the standard uncertainty of the average of the 4 rods when r = 0, r = 0.5, and r = 1?
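For reference, here is how that calculation can be set up using the correlated-input form of the propagation law discussed in GUM Section 5.2 (a sketch only; the NIST Uncertainty Machine should give essentially the same numbers by Monte Carlo). The four ±1 mm standard uncertainties and the three r values are the ones in the question above.

import math
from itertools import combinations

def u_average(us, r):
    # Standard uncertainty of the average of inputs with standard uncertainties
    # `us` and a common pairwise correlation coefficient r.
    n = len(us)
    var = sum(u * u for u in us)
    var += 2 * sum(r * ui * uj for ui, uj in combinations(us, 2))
    return math.sqrt(var) / n

rod_uncertainties = [1.0, 1.0, 1.0, 1.0]   # mm, the four rods in the example
for r in (0.0, 0.5, 1.0):
    print(f"r = {r}: u(average) = {u_average(rod_uncertainties, r):.3f} mm")
# r = 0.0 -> 0.500, r = 0.5 -> 0.791, r = 1.0 -> 1.000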
If you want to play hypothetical, then do it properly.
Let’s follow NIST TN 1900 recommendations.
The first step is to define the measurand in detailed fashion.
Then we must define the proper definition of uncertainty.
Define the measurement model to be used.
Moving to the GUM, Sections C.3.5, C.3.6 and F.1.2 have definitions, and equations F.1 and F.2 are for use in finding correlation uncertainty. Section 5.2, equations 13-17, also discusses correlation and the information needed to analyze it.
At a minimum the correlation values you show are meaningless. Correlation between two quantities must have an r(X1,X2).
You have cherry picked your way through all of measurement uncertainty arguments with little understanding. Your statistical and sampling training is not of much use in assessing measurements.
You want a real evaluation, then make like someone actually doing measurements under laboratory grade requirements. That entails a lot of work to identify and qualify what goes into a proper assessment.
Define the measurand first and the functional description second.
One needs to know these to determine what the purpose is. You have already calculated the uncertainty in each bar. Why is a combined uncertainty needed? Are you perhaps going to sum them?
Measuring a single rod 100 times generates a probability distribution FOR THAT BAR.
Measuring each of 100 bars once raises the question of why one would want to find an average uncertainty.
Exactly. Measuring temperature at one site is not measuring a single value (rod). Wind, humidity, convection, and pressure changes mean one isn’t measuring the same thing. Also the atmosphere isn’t in thermodynamic equilibrium, so technically, the atmosphere doesn’t have a thermodynamic temperature. That’s why meteorologists created LTE (local thermodynamic equilibrium) to allow them to claim temperatures and pressures actually exist at various sites.
Can you do the calculation or not?
BTW…speaking of NIST TN 1900 E2 that Gorman mentioned just above in this subthread…I recommend you review it because it is an example of measuring the average temperature at one site and how the uncertainty of the average is calculated. Spoiler alert…despite Gorman’s repeated assertions otherwise they scale the uncertainty by…gasp…1/sqrt(N).
“Can you do the calculation or not?”
Maybe. The mean of your values is 97.5. If you take the errors all in one direction, then the mean could range from 96.5 to 98.5. That would indicate an error of ±1. That’s not how the error term is actually calculated, though. It’s the square root of the sum of the squares of the errors, divided by n. That gives us 0.5.
However, we need to apply engineering restrictions. You cannot obtain a precision greater than the least precise value, so we need to round the mean to the nearest whole millimeter. Because the mean falls exactly on .5, there is no single nearest whole number, so we use the second rounding rule and round to the nearest even number. So the mean is 98. And since we shifted the mean by 0.5, we must shift the error term by the same amount. The answer is 98 +0/−1 mm.
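[editor’s note: the “round half to even” (banker’s rounding) rule invoked above happens to be the same convention Python’s built-in round() uses, which makes it easy to check:]

    print(round(97.5))   # 98: a .5 value rounds to the nearest even integer
    print(round(96.5))   # 96
    print(round(98.5))   # 98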
I assume the r’s are correlation values. I’m not a statistician, so enlighten me.
The mean is a statistical descriptor of the distribution of values, so the “precision” (actually the stated value) is determined by the order of magnitude of the size of the sample as well as the readings.
A mean of 97.5 +/- 1 provides more information about the distribution than rounding it to 98 +/- 1.
A mean by itself is of limited use, so it’s usually best to have the median, mode, s.d. and range as well.
Context shouldn’t matter when calculating the descriptors, but it often does matter when utilising them.
In any case, you can’t increase the number of significant figures beyond the least number. That would be two, in this case.
Okay.
Vastly oversimplifying, if we only read to whole numbers.
If the mean is 97.5; 97 and 98 are equiprobable.
If the mean is 98; 97, 98 and 99 are equiprobable, or 1:2:1 (or 1:0:1).
What odds will you give me of not measuring 98 in each case?
98 wasn’t measured in the original case. 98 is a computed value.
Fair enough. I didn’t look hard enough 🙁
bdgwx proposed:
putting it into more useful notation, those are {0.100m, 0.080m, 0.120m, 0.090m} +/- 0.001m.
The mean is still 0.0975m (+/- 0.001m or 0.0006m, depending), but the sample is so small and s.d. so high that it does make more sense to round it.
Context does come into play. It would be a different matter with a larger sample and smaller s.d, say {0.097, 0.097, 0.097, 0.097, 0.097, 0.098, 0.098, 0.098, 0.098, 0.098} +/- 0.001. Just interested, would you still round the mean to 0.098?
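[editor’s note: a quick sketch of the numbers being argued over for this hypothetical 10-rod set, assuming r = 0 for the propagated term; the point is simply to separate the different quantities in play (the spread of the readings versus the uncertainty attached to their mean).]

    import statistics as st

    rods = [0.097] * 5 + [0.098] * 5   # the hypothetical readings above, in metres
    u_reading = 0.001                  # stated standard uncertainty per reading, m
    n = len(rods)

    mean = st.mean(rods)               # 0.0975 m
    s = st.stdev(rods)                 # sample standard deviation, ~0.00053 m
    sem = s / n**0.5                   # standard deviation of the mean, ~0.00017 m
    u_prop = u_reading / n**0.5        # quadrature propagation with r = 0, ~0.00032 m
    print(mean, s, sem, u_prop)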
OC,
The real problem is in determining uncertainty while not knowing what the measurand is. Are these separate input quantities used to determine a single measurand’s value, and if so, what is the functional description?
If these are unique measurands, each with its own unique value, then, as you say, the standard deviation is so large that one cannot average them effectively. If these are supposed to be identical, quality control really needs to be implemented.
I don’t know why it is so hard for people to understand that measurement uncertainty primarily deals with a single measurand that may have several input quantities combined via a functional relationship. The GUM does not directly address how one should determine “uncertainty” amongst different items. That is what quality control is about. For that, SDs are used to establish control limits.
The observations for a single input quantity are grouped into a single random variable and form a probability distribution. The statistical parameters for that distribution are the mean and variance. From those you can calculate the SD and SDOM. The proper choice of which to use boils down to repeatability. If you can’t repeat measurements ON the exact same thing, then an SDOM is not appropriate. NIST gets around this in TN 1900 by declaring the measurand to be the monthly average, so the observations are of the same thing. The thing to notice is that they consistently provide details of what the intervals are, degrees of freedom, and other details.
This is not an exercise in random sampling from a larger population, the set of rods, measured once, is the population.
The ±1mm uncertainty is supposed to encompass all sources of uncertainty, including temperature, wear and tear, calibration, operator bias, as well as digital resolution. It can’t just be swept under the carpet and ignored, but this is standard operating procedure for temperature-equals-climate practitioners.
And, the purpose or reason for performing these measurements is of utmost importance. The way this virtual pencil exercise is framed, I would argue the standard deviation is irrelevant, along with all the other statistical measures.
Rounding is supposed to reflect the uncertainty in the measurement, but climate practitioners just ignore significant digit rules: witness bellman in this very thread claiming milli-Kelvin “uncertainties” from 1-degree temperature data. B&B fervently believe it is possible to manufacture information from the vacuum of ignorance.
https://www.isobudgets.com/calculate-resolution-uncertainty/
Very good comprehensive information.
“witness bellman in this very thread claiming milli-Kelvin “uncertainties” from 1-degree temperature data.”
Stop lying. I said the CRN data did not reflect the uncertainty. What I demonstrated was that rounding to the nearest degree did not significantly change the result. The average is still within a few thousandths of a degree of the result using 0.1°C daily figures. If you don’t agree with that you have to demonstrate why the result was wrong, not just claim it’s impossible.
And if you are going to keep accusing me of manufacturing information you need to provide evidence or do your own research.
Where do you think this manufactured information comes from? If I add up 100 integers, does the sum have “manufactured” information? Or does the manufacturing only happen when you divide that integer sum by 100?
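[editor’s note: the narrow claim that rounding readings to the nearest degree barely moves a large average is easy to check with a toy simulation on synthetic data; whether that small shift is the right thing to call the measurement uncertainty is the separate question being disputed here.]

    import random
    random.seed(0)

    # 20 years of fake daily temperatures (synthetic values, not CRN data)
    temps = [random.uniform(-10.0, 35.0) for _ in range(365 * 20)]

    mean_fine = sum(temps) / len(temps)
    mean_rounded = sum(round(t) for t in temps) / len(temps)   # readings rounded to whole degrees
    print(mean_fine, mean_rounded, mean_rounded - mean_fine)   # shift is typically a few thousandths of a degree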
Lying? You are the one calculating absurd milli-Kelvin fake uncertainty numbers, not I. You believe there is information inside a 1-degree measurement uncertainty interval: if you didn’t believe this, you wouldn’t go around quoting these ridiculously tiny numbers.
Only hockey stickers care what these air temperature averages are.
“You believe there is information inside a 1-degree measurement uncertainty interval:”
And the lies continue.
“Only hockey stickers care what these air temperature averages are.”
You mean all those times WUWT has published articles using the CRN averages to claim there has been no warming in the US since 2005?
The NIST uncertainty machine reports y = 0.0975 with u(y) = 0.0005 for the average of the 4 rods.
For your larger set of 10 rods it reports y = 0.0975 with u(y) = 0.000316.
This is when R = 0.
JCGM 100:2008 recommends 2 significant figures and to report as 0.097500 ± 0.00063 m for 2σ for the 10 rod case.
Those uncertainties devolve to the single-reading uncertainty / sqrt(n).
R=0 seems an extremely unlikely case.
btw, the SEM for the 4 rods is 0.00740, and 0.00025 for the 10 rods.
Using the Phillips rule of thumb, the uncertainty of the 4 rod case is 0.00740 and for the 10 rod case is at least 0.00029 (from the resolution).
That doesn’t look like 2 significant figures; more like 6 for the average and 5 for the 2 sigma.
It’s 2 significant figures for the uncertainty. And by rule you report the measurement to the same magnitude as the uncertainty.
And yes for the R = 0 case it simplifies to the SEM as I keep saying.
And under no circumstance is it ever RSS like what the familiar contrarian crew want us to believe.
Oh, and speaking of significant figures, it’s not just JCGM that uses 2 significant figures for uncertainty. If you look at the NIST constants database you’ll see they too report the standard uncertainty with 2 significant figures and the value to the same magnitude as the uncertainty.
That appears to cover measured values and ratios of measured values.
There is a vanishingly small likelihood that any of those added significant figures to the value beyond their measurement resolution.
Note to self: Engage brain before hitting “Post” 🙁
Isn’t it 5 & 4? – the 2 sigma should inherit the leading figures from the mean.
For s >= 0.6 resolution.
That wasn’t what you used, though. You used the standard error added in quadrature.
I used the NIST uncertainty machine.
Yep, and that used sqrt(sum(u_i^2))/n, which equates to the single-reading uncertainty / sqrt(n) in this case.
The Phillips et al paper is comparatively recent, so is unlikely to have made it to the uncertainty machine.
Oh…I missed this. We may be talking about different things. I was just plugging the +/- 0.001 m values into the NIST uncertainty machine without consideration of components of uncertainty arising from resolution. Sorry, the grouping of conversations in WUWT makes it hard to track which conversation is which sometimes.
Yeah, that’s adding the uncertainties in quadrature, which seems to be fairly standard. That’s different to the standard error of the mean, which is estimated by the sample standard deviation / sqrt (sample size)
It’s very easy to lose track 🙁
[Edit] That’s one of the reasons I’ve taken to adding quotes. It sort of helps.
You are making sense
Where’s the fun in that? Stop Making Sense.
Come to think of it, the original specification of the 10-rod case had readings of 97. mm (0.097m) and 98. mm (0.098m), so we already know it’s at the 3rd decimal place (3 significant figures).
2 significant figures is 0.10m for the average and 0.00m for the 2 sigma.
The 10 rod case was already at 3 significant figures, so why are we recommending losing information?
Discarding the leading zeros and specifying the 2 sigma value as 6.3E-4 metres does display 2 significant digits, but using the same approach the readings were 9.7E-2 m and 9.8E-2 m, so there is an exponent mismatch between the readings and 2-sigma value.
On that basis, the mean rounds up to 9.8E-2 m, rather than the higher information 9.75E-2 m
Ah, the joys of significant figures…
D’oh! I got that wrong 🙁
Yep, discard the leading zeros for the significant figures.
That makes it 5 & 2 significant figures.
Mathematicians have a hard time with significant digits. They simply don’t (or won’t) understand the need to preserve the integrity of information obtained through measurements. They would rather tell you that every teacher and professor in physics, chemistry, and engineering is incorrect.
What I was taught was that the mean is rounded to preserve the significant digits that were measured. The uncertainty term should be a value that covers the possibilities.
When uncertainty came along, it was decided the uncertainties should add, either directly or via RSS, which accomplishes the same purpose.
Yeah, that’s why context is important.
Given the readings:
one could make a pretty good case that the readings are at the 10mm level (despite the stated +/- 1), and an average of 98mm has added an additional sig fig 🙂
Also on the context front, the small population (and high dispersion) gives an order of magnitude of 10^0, so adding a sig fig is already on shaky ground.
If the readings had been {97, 97, 98, 98}, then quoting an average of 97.5 rather than rounding up is quite reasonable.
That isn’t what I was taught. 97.5 says you measured to the tenths digit. I was trained that you round that up to 98. The error statement would create an interval including 97.5.
With the uncertainty paradigm, the expanded uncertainty provides a 95% interval that should also cover that value. NIST TN 1900 arrived at an expanded interval (1.8°C) that certainly covered any half resolution rounding.
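[editor’s note: for readers who have not opened TN 1900, the Example 2 recipe referred to throughout this sub-thread reduces to U = t · s / sqrt(m). The m, s, and t values below are placeholders chosen for illustration, not the actual E2 data; they happen to reproduce the roughly 1.8°C expanded interval mentioned above.]

    from math import sqrt

    m = 22            # assumed number of daily Tmax readings in the month
    s = 4.1           # assumed sample standard deviation of those readings, deg C
    t_95 = 2.08       # Student t factor, 95 % coverage, m - 1 = 21 degrees of freedom

    u = s / sqrt(m)   # standard uncertainty of the monthly mean, ~0.87 deg C
    U = t_95 * u      # expanded (95 %) uncertainty, ~1.8 deg C
    print(round(u, 2), round(U, 1))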
The 97.5 average says that it’s somewhere between 97 and 98.
That’s part of the reason for taking the midpoint of the median values in an even-numbered data set.
Rounding up to 98 says it’s somewhere near 98.
Having the 4 actual readings is far more useful in this case.
Summary statistics really aren’t much use for small populations.
That gets back to context and additional information. What is being measured, where were the measurements taken, etc.
If it’s measuring a shaft, the readings could well indicate ovality or taper.
What a mean of 97.5 from those measurements really tells you is that you needed a higher resolution instrument 🙂
Yep. R is Pearson correlation coefficient of the random variables. R = 0 means no correlation while 1 means perfect positive correlation. In other words, when R = 1 it means that every measurement yields the exact same error every single time. That’s obviously an unrealistic scenario since all measurements will have at least some component of error arising from a random effect resulting in R < 1.
You can use the NIST uncertainty machine to calculate the uncertainty of the average of the rods with different correlations. Just click the slider to enable correlation and then enter the correlation matrix of the inputs into your measurement model.
Yep. That’s for R = 0. For R = 0.5 it is 0.791. And for R = 1 it is 1.
Yeah, well so much for that!
Exactly. There is no difference in the type B procedure if you are measuring the same thing or different things.
And NIST TN 1900 E2 is a good example demonstrating that a type A evaluation can be used on different things as well.
Not true. The measurement model defines the measurand as the monthly average Tmax for May. Each measurement observation was of this value under repeatability conditions. See the equation t_i = τ + ε_i.
Why do you keep failing to provide the context of the measurements you are discussing?
You want to be an expert? Read carefully NIST TN 1900 and follow all the steps listed and create an uncertainty budget to validate all the Uncertainties.
“Spoiler alert…despite Gorman’s repeated assertions otherwise they scale the uncertainty by…gasp…1/sqrt(N).”
But do you know why?
Did they use multiple observations of the same measurand?
Did they claim that other uncertainties were negligible?
Did they assume a Gaussian distribution?
Does averaging stations maintain the requirement of observations of the same thing?
You are asserting that weather stations are evenly and equally distributed between urban areas and non-urban areas.
As they need to be manned, you are assuming that we had lonely humanoid robots in the 1950s or very well trained badgers. Which is a fascinating viewpoint.
However, I fear you may have made a slight mistake.
I think you have me confused with someone else. I never said anything about weather stations being evenly and/or equally distributed between urban areas and non-urban areas. In fact, I said the exact opposite in my post. That is they are not equally distributed or evenly spaced and that’s one of the factors that causes the UHI biases.
I’m sorry but this is not correct. Weather stations do not need to be manned. In fact, most of them today are automated.
I think you have me confused with someone else. I never said anything about humanoid robots in the 1950s or badgers.
That is certainly possible. I do make more than my fair share of them. I’m more than happy to address any mistakes if you can specifically mention them. Just make sure it is something I actually said and not something someone else said or thinks. And if there is ever any question about what I think then feel free to ask.
Yep, the surface station data is basically sparse, erratic, urban- and site-corrupted, and totally unfit for the purpose of measuring “climate” over time.
The surface station fabrications are basically junk non-science.
Were they really automated in the 1950s? I thought I was taking the mick.
Are the badgers real too?
More seriously, your assertion that I quoted verbatim, that:
“Because urban areas cover a relatively small percentage of Earth’s surface even assuming a generously large UHI effect in those areas it gets mostly washed out when aggregated with the much larger non-urban land areas.”
That assumes the empty areas are not infilled by extrapolation using the nearest urban areas.
Which is the mistake you have made, because it assumes the empty areas are not empty.
I don’t think any station was automated in the 1950s. Not that it matters since being automated in the 1950s is not necessary for them to be automated today.
I have no idea what you are talking about. What do badgers have to do with anything?
They aren’t.
A grid cell can be empty or filled and still be non-urban. I’m not sure what part of this concept you’re missing. If you can clarify what you mean perhaps we can delve deeper.
There is absolutely no way the ocean heat content can be known from measurements even after 2005. It is from models and proxies
And if you really want to use this tiny warming, you have to look at it in terms of other records. See that little red squiggle… that is your faked OHC data added to proxy data.
Perhaps you should read what goes on in the “REAL” world of make believe.
https://tallbloke.wordpress.com/2025/05/09/refusal-to-disclose-information-under-the-environmental-information-regulations-2004/
You can forget figures when they are all made up
I read it. I’m not sure what that has to do with anything I said. They don’t even talk about the UHI effect or bias at all as far as I could tell.
“They don’t even talk about the UHI effect or bias at all”
Well done.. you uncovered the HUGE problem !!
There is hope for you yet.
You throw around the term “bias” like you know what it means in terms of measurement uncertainty.
Do us all a favor and carefully define your use of the term “bias” both in description and mathematics.
From NIST: 2.4.5. Analysis of bias
From the GUM
Measurement uncertainty “bias” arises because one doesn’t know the value required to correct a systematic effect. Exactly how does one know from calibration what correction factor to apply to individual stations 100 years ago to correct a systematic uncertainty?
And he continues to claim that subtraction of a baseline cancels bias.
I know he is reading posts in this thread. Yet he has not answered the question of what “bias” that needs correcting actually is.
Statisticians throw around the terms biased and unbiased about statistical properties, but measurement bias that requires calibration to determine its value is something entirely different.
I don’t really expect a cogent answer because climate science has no clue about how to treat measurements.
I think I have died and gone to Heaven! Finally a focus on the BIG LIE of Climate Alarmist Propaganda, the bogus, bastardized Hockey Stick Chart.
The Hockey Stick chart is a lie. It does not represent reality. Its “hotter and hotter and hotter” temperature profile is a computer-generated lie. None of the original, written, regional temperature records have a temperature profile like the scary Hockey Stick chart. All of them have a benign temperature profile, where the temperatures warm for a few decades and then cool for a few decades, and then the process repeats.
It is no warmer today than in the recent past. The Dishonest Climate Alarmists who created the instrument-era Hockey Stick chart want everyone to think that the temperatures rose steadily as CO2 levels rose. But the truth is the temperatures warmed and cooled and warmed again and cooled again while CO2 levels increased. CO2 has had no apparent effect on temperatures because it is no warmer now, with more CO2 in the air, than it was in the recent, recorded past, when there was less CO2 in the air.
Uh.. the hockey stick chart was the result of proxy reconstructions
and more specifically about whether stripbark bristlecone pines respond to temperature alone over time
I just want to caution, not to mix different topics. This post seems to focus more on the UHI effect and while tree rings are mentioned and of course Mann and others use “real” temperature to calibrate their data (which picks the wrong proxies if the “real” data is “unreal”), the hockey stick fallacies are multiple and not discussed here.
Those bristlecones are thousands of years old. I’d think that their ability to accurately respond to temperatures is limited. Even if they do respond, they’re way up in the mountains so can hardly indicate much about global climate. I’m 75 and I don’t respond to some things as quickly as I used to either. 🙂
“This post seems to focus more on the UHI effect”
Yes, and it is discussed as being a part of the instrument-era Hockey Stick chart time period, and is much more relevant to the temperatures of the satellite era than to previous centuries.
For previous centuries, the original, written temperature records are relevant, and my point is that the written temperature records show it was just as warm in the recent past as it is today, UHI effect, or no UHI effect.
I think UHI effects, although relevant to a certain extent and time period, are a distraction from the fact that the original temperature record has been bastardized by dishonest climate scientists to make it appear that today is the hottest time in human history, when, in reality, it’s not even the hottest time since the end of the Little Ice Age.
Yes
T H E  C L I M A T E  C R I S I S  is  T H E  B I G  L I E
Every generation seems to have at least one.
According to my grandfather and uncles (back in the ’70s, all Iowan farmers), they had climate crises, too. My first was the ice age. My uncles told me theirs was a planet on fire. Grandpa told me his was an ice age. And we’re back to a planet on fire. That is definitely more than one climate crisis per generation.
Your relatives appear to go through cyclical climate changes, just like the climate does! 🙂
Since I have been keeping my own temperature record I can certainly confirm that the UHI effect has a large impact on recorded minimum temperatures during clear nights.
Last night was a typical example of its effects. Here in central Scunthorpe I recorded a minimum temp of 5.2C, yet the local rural weather stations on Hatfield Moor and Thorne Moor recorded lows of 2.4C and 1.5C.
This typically happens on clear nights; with cloudy nights there is usually far less difference.
Good observation, this should be universal knowledge. Water is the most powerful greenhouse gas, and dewpoint is a very good predictor of overnight lows.
The expression greenhouse gas is used for propaganda purposes.
The earth is not one giant greenhouse and the energy flows are starkly different than how a greenhouse works.
Water vapor is the governor of the earth energy systems. Oceans are the heat pumps (sink and source of thermal energy). The whole thing operates as a giant thermal engine moving thermal energy across the globe in many delightful and fascinating ways.
Yup. I’d also like to see the temperature profile with height in a real greenhouse on a bright sunny day.
Here is a further example of the impact that the UHI effect has on the warming of minimum temperatures, as last night was another clear night.
Here in Scunthorpe the minimum temperature got down to 7.9C, but on both Hatfield Moors and Thorne Moors the lows were down to 2.8C.
So the difference was a whole 5C, which shows that any temperature record from a location that has not remained rural over the recorded period needs to be treated with caution.
Excellent beat down on the temperature. BUT that’s not the only dishonest tale that is being told. My personal favorites are Methane and Sea Level.
Already dairy farms are feeding Bovaer to their cows to reduce methane emissions.
Sea level is being used to scare people into believing coastal cities will be flooded in just a few years.
Methane is not 83 times more powerful than CO2 at trapping heat, as claimed by the IPCC’s latest assessment report.
Tide gauges with records back to 1807 tell us that sea level is on track to rise less than a foot by 2100, not the better part of a meter as claimed by the satellites that have data only back to 1992. [See the back-of-envelope sketch after this comment.]
Besides all that, there aren’t more storms, droughts and hurricanes.
Oh yeah the polar bears are doing just fine.
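[editor’s note: a back-of-envelope check of the foot-versus-metre comparison a few lines above, taking 2025 as the start year and reading “the better part of a meter” as roughly 0.8 m; the sketch just converts each endpoint into the constant rate that would be needed to reach it by 2100.]

    years = 2100 - 2025
    for target_mm, label in [(304.8, "one foot"), (800.0, "about 0.8 m")]:
        print(label, round(target_mm / years, 1), "mm/yr needed")
    # prints about 4.1 mm/yr for a foot and about 10.7 mm/yr for ~0.8 m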
Those are all dishonest tales indeed, and don’t forget the absurd dishonesty of “radiative forcing” and “downwelling infrared atmospheric power” !
Like I’ve told you before, “radiative forcing” is the term given to the left-hand side of the 1LOT equation divided by time and area when applied to the Earth system bounded by the top of the atmosphere. And while the 1LOT does get challenged here on WUWT sometimes, I think the majority of WUWT participants do still accept it and would find your categorization that it is “absurd” and “dishonest” untenable. Never mind that the rest of the world outside of WUWT universally accepts the 1LOT as a fundamental hypothesis of physics that has never been falsified.
Man, are you an arrogant pr*ck.
I’m genuinely sorry you feel that way. If my unwavering acceptance of the 1LOT makes me an “arrogant pr*ck” then I guess I’ll have to accept that I’m an “arrogant pr*ck” by your definition because I’m definitely not going to abandon one of the most important (if not the most important) and unassailable laws of physics.
Laws of physics??
They show that incremental CO2 cannot cause any measurable warming.
Perhaps your understanding of the laws of physics is somewhat lacking !
The 1LOT and gas laws imply that at current atmospheric levels, incremental CO2 CANNOT cause any measurable warming.
Is that what you are trying to say ?
Nothing can “trap” heat.
Heat is thermal energy (aka kinetic energy) flowing across a temperature gradient (hot to cold).
If it is trapped, it is not flowing and therefore not heat.
Heat cannot be trapped.
I understand where you are coming from, but I think that is a level of pedantry that is unjustified. That’s like saying that people who flow into a room cannot be trapped in that room because once they enter they are no longer flowing into it. It’s not so different with energy. Just because the energy is no longer flowing doesn’t mean it isn’t confined, retained, or “trapped”.
Another more analytical way to look at it is starting with the 1LOT equation ΔU = Q – W. For simplicity let’s assume W = 0; that means ΔU = Q. And since we commonly refer to ΔU as the “trapped” portion, it isn’t unreasonable to directly associate that with Q, since the 1LOT literally says they are equivalent. Using the more modern nomenclature, ΔE = Ein – Eout. The right-hand side, Ein – Eout, is the net energy transferred (or heat). And you can see that it is equivalent to ΔE (or the trapped amount).
Remember, in this context “trap” is referring to the left-hand side of the 1LOT equation which is the amount of energy retained or accumulated in a system.
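[editor’s note: purely to make the bookkeeping in the comment above concrete, here is a minimal numeric sketch with assumed values. It illustrates the arithmetic of ΔU = Q − W and ΔT = ΔE/(mc) for a hypothetical kilogram of water; it makes no claim about the climate system.]

    m = 1.0       # kg of water (assumed)
    c = 4184.0    # specific heat of liquid water, J/(kg K)
    Q = 4184.0    # net energy in minus energy out over some interval, J (assumed)
    W = 0.0       # no work done (assumed)

    dU = Q - W            # first law: energy retained by the system
    dT = dU / (m * c)     # resulting temperature change
    print(dU, dT)         # 4184.0 J retained, dT = 1.0 K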
‘Another more analytical way to look at it is starting with the 1LOT equation ΔU = Q – W.’
No one has a problem with that. The problem is with the magic thinking the alarmists invoke to transform it into this form:
Q = ε σ Ts^4 + C dTs/dt
https://wattsupwiththat.com/2011/01/28/the-cold-equations/
A lot of people have a problem with it. I was called a pr*ck in this very blog post because I accept the 1LOT. And you’ve seen how triggered people get when I talk about how the door to a running kitchen oven traps heat, causing the inside to warm. You even jumped on the bandwagon yourself, challenging the experiment and claiming that it only works in a vacuum. In your defense, you did concede that the 1LOT works for kitchen ovens not in a vacuum as well, but not without insinuating that the “trap” is unique to convection. So yeah, people, including you as far as I can tell anyway, have a problem with it.
For starters, I cringe whenever I see commenters resort to inappropriate language, which in your case seems particularly unwarranted given that you’ve always maintained a civil tone in your discourse here. As for the oven door ‘incident’, I can assure you I have nothing but respect for the 1LoT, but if I had been making your point, I would have chosen an example wherein the radiative impact was isolated and of paramount importance.
I don’t care about the language. My point was that some people are so convicted in their rejection of the 1LOT that they feel they need to use acerbic language.
And regarding the oven…as I keep repeatedly pointing out, it is meant as a simple experiment that everyone can perform, one that indisputably falsifies the hypothesis that cold bodies cannot be the cause of warm bodies getting warmer and shows that the argument that the 1LOT and 2LOT forbid this is false. The mechanism of paramount importance here is that cold bodies can trap heat. If people cannot be convinced of this fact with the simple oven experiment they will never be convinced with more complex setups that actually do isolate radiation. I rarely get to defend the 1LOT/2LOT’s application to radiation as well because so many people just straight up reject them in any scenario, regardless of whether it is dominated by radiation, convection, or conduction.
The heat being discussed here is the downwelling 15 micron radiation from CO2 re-radiating what it absorbed from outgoing radiation through the atmospheric window.
A black body that radiates predominantly at 15 microns would be a brick of dry ice. That isn’t going to warm anything. Your oven example is an oven stuffed with dry ice.
It’s the sun that does the warming
Interesting. I did not have you on my bingo card of potential people who would blatantly reject the 1LOT like that.
Hmmmm, if the oceans were liquid nitrogen instead of water, then the 15 micron back radiation could do some warming.
CO2 blocks cooling in that 15 micron band in the atmospheric window, but it’s the sun that does the resulting warming.
Put a blanket on a fresh corpse* and it won’t warm up, but it will cool off a bit slower.
That same blanket keeps you warm in bed because it blocks the heat generated by your body’s metabolism from escaping into the room and it’s your metabolism just like the sun that warms you up, not the blanket.
CO2’s “greenhouse effect” is real, but the mechanism by which it works is misunderstood and leads some people into some twisted logic.
*“Fresh corpse” isn’t your everyday, garden-variety term (-:
Not true. Water absorbs 15 um radiation just as readily as liquid nitrogen. Actually water is far more greedy in its absorption of 15 um radiation than liquid nitrogen. Anyway, ceteris paribus if water absorbs more 15 um radiation and nothing else changes then it will warm. That is an indisputable fact of the 1LOT ΔE = Ein – Eout and heat capacity ΔT = ΔE / (mc).
The Sun is a participant in the 1LOT budget. But it’s not the only participant. Remember, the 1LOT says ΔE = Ein – Eout. So for a system starting in steady state (ΔT = 0), if the system then changes such that ΔEout < ΔEin, it is necessarily the case that ΔE > 0, which means ΔT > 0.
Since you reject the 1LOT I may just be pissing in the wind here, but I am hopeful that I can convince you that the 1LOT is true and that bodies can, in fact, trap energy when Ein > Eout. And remember, in this context “trap” is referring to the left-hand side of the 1LOT equation, ΔE > 0.
The sun is fundamentally the ONLY PARTICIPANT in providing heat to the system. Everything else is simply moving heat around.
You do realize that Ein to water does not raise its temperature, right? Look up the definition of latent heat to see what it means.
As stated by JM, there is no “heat” if it is trapped. Have you ever designed and done the thermodynamic equations for heat sinks? Explain how a heat sink like CO2 traps heat. Show all the gradients, both in and out.
I want you to show a sink that absorbs heat, not energy, but actual heat, yet has a zero gradient, i.e., traps heat.
A CO2 molecule would have to reach untold temperatures to trap any heat.
First…that is patently false. The Ein component of the climate system is also composed of radioactive and tidal energy inputs.
Second…you are ignoring the Eout portion, which is just as important as the Ein component of the 1LOT budget.
Yet another blatant rejection of the 1LOT. ΔU = Q – W. And given the specific heat equation ΔT = Q/(mc), then yeah, if a body traps energy (ΔU > 0) its temperature will rise such that ΔT > 0. You and Jim Masterson can reject this all you want. Nature doesn’t care about your rejection. It still happens whether you accept it or not.
And I’m going to nip your gaslighting in the bud right now. No. I did not forget about latent heat, L = Q/m. That just says there is an amount of heat that is required to cause the body to change phase. Once the phase change happens, it’s back to the ΔT > 0 regime.
Latent heat arises from the absorption of energy. In many cases this absorption results in a change in kinetic energy, that is, sensible (measurable) temperature. With water, the absorbed energy is converted to potential energy. This occurs without a temperature change, hence the term “latent” (unmeasurable) heat.
This is why climate science has missed the boat by using temperature as a proxy for heat. Enthalpy is the proper proxy because it is a measurement of total internal energy, kinetic and potential. The enthalpy equation is:
H = U + PV
where U is internal energy, both kinetic and latent. By concentrating on kinetic energy only (temperature), climate science misses a whole category of energy.
Enthalpy is why Miami, FL and Las Vegas, NV have similar temperatures but clearly different climates.
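[editor’s note: a quick illustration of the Miami/Las Vegas point, using standard psychrometric approximations (Tetens saturation vapour pressure, specific enthalpy per kg of dry air). The two conditions below are assumed, illustrative numbers, not measured data; the drier, hotter air ends up carrying roughly half the enthalpy of the cooler, humid air.]

    from math import exp

    def moist_air_enthalpy(T_c, rh, p_kpa=101.325):
        es = 0.6108 * exp(17.27 * T_c / (T_c + 237.3))   # saturation vapour pressure, kPa (Tetens)
        e = rh * es                                       # actual vapour pressure, kPa
        w = 0.622 * e / (p_kpa - e)                       # humidity ratio, kg water per kg dry air
        return 1.006 * T_c + w * (2501.0 + 1.86 * T_c)    # sensible + latent, kJ per kg dry air

    print(moist_air_enthalpy(32.0, 0.75))   # humid 32 C day: ~90 kJ/kg
    print(moist_air_enthalpy(38.0, 0.10))   # dry 38 C day:   ~49 kJ/kg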
Many of us learned about thermodynamics taking a plethora of classes used in designing steam driven turbines for electricity generation. Believe me, we didn’t rely on temperature to determine the energy that could be extracted from steam.
Ever see him include entropy in his rants about thermodynamics and “1LOT”?
This from the guy who thinks it is possible to create information from nothing by the magic of averaging, and who has zero clues about basics of thermal insulation and heat transfer.
Gaslighting indeed.
What stupid nonsense! The thermodynamic definition of heat is the transfer of energy across a system boundary due to a temperature difference. They often add the 2nd Law requirement that the flow is from a higher temperature to a lower temperature. Heat “trapping” is complete nonsense. You cannot trap heat.
Interesting. I definitely did not have you on my bingo card of potential people who would blatantly reject the 1LOT like that.
It’s clear that you don’t know the first law of Thermodynamics.
I know that the 1LOT absolutely allows ΔU > 0. And I will defend this fact fervently despite how deeply you or anyone else want to dig your heels in against it.
The first law is a conservation law. You seem to be confusing the first law with the second law. There is no requirement that the change in internal energy must be greater than zero. The second law states that the change in entropy must be greater than or equal to zero for an isolated system.
I never said the 1LOT requires ΔU > 0. I said that it allows it. And no, I’m not conflating it with the 2LOT. BTW…you actually stated the 2LOT correctly. Most WUWT commenters I engage with reject the isolation clause of the 2LOT, so job well done there. Now let’s get back to the 1LOT, which still says ΔU = Q – W. The 2LOT does not override the 1LOT, therefore a body can absolutely trap (ΔU > 0) heat (Q). In fact this is one of the salient points of the 1LOT.
To quote my thermodynamics textbook:
“Heat is defined as the form of energy that is transferred across the boundary of a system at a given temperature to another system (or the surroundings) at a lower temperature by virtue of the temperature difference between the two systems. That is, heat is transferred from the system at the higher [temperature] to the system at the lower temperature, and the heat transfer occurs solely because of the temperature difference between the two systems. Another aspect of this definition of heat is that a body never contains heat. Rather heat can be identified only as it crosses the boundary. Thus, heat is a transient phenomenon.”
You cannot trap heat. Likewise, you cannot trap work.
Jim,
I don’t think some of these folks have ever taken a physics lab where measurements were crucial to the final results.
My 200-level thermo class had a professor who helped design the Alaskan pipeline. He brooked no fudging. You couldn’t get an answer and then change assumptions to make it all add up. Tough class, but I learned a lot.
I don’t see anywhere in that quote where they say Q > 0 and ΔU > 0 isn’t possible. In fact, if it is a reputable thermodynamics textbook it will have examples where Q > 0 and ΔU > 0.
I agree with you on the trap work thing though. Trap means to hold or retain, but the implied connotation is that entry (in) is allowed while exit (out) is restricted. Work is F*d (or PΔV if that’s your thing) which has a connotation more of being an action so saying work is trapped is awkward at best. If Sparta Nova had said you cannot trap work I would have agreed.
And because you’re quoting your thermodynamics textbook my guess is that you actually do accept the 1LOT and that your grievance is primarily with the word “trap” itself. So if that’s your schtick then just say you hate the word “trap” and replace it with another reasonable word choice that means to hold or retain, with the same allowed in/entry and restricted out/exit connotation resulting in ΔU > 0. Other words I’ve recommended to people are gain, accumulate, and retain.
It’s word salad galore! Why do you think I don’t understand the four laws of thermodynamics? What is your passion with ΔU > 0? ΔU can be positive or negative. Internal energy is a state variable. Heat and work are path variables. They can also be positive or negative.
You’ll notice there are NEVER quoted reference sources to validate the assertions.
It appears obvious that he has had several calculus based thermodynamics courses so he is well educated on thermodynamics.
We are discussing the case where ΔU > 0. And I don’t know about the other laws, but in regard to the 1LOT you say that bodies trapping heat is nonsense. That’s obviously false since I’ve been saying the 1LOT allows Q > 0 and ΔU > 0. If you want clarification regarding something specific I said, I’m more than happy to provide it.
“And I don’t know about the other laws . . . .”
Let me help you. The zeroth law states that if system A is in equilibrium with system C, and if system B is in equilibrium with system C, then system A is in equilibrium with system B.
And a good friend gave me this scenario of the laws: The first law says you can’t win–you can only break even or lose. The second law says you can only break even at absolute zero. And the third law says that you can’t reach absolute zero.
I know what the laws are. I just don’t know to what extent you do or don’t think they are nonsense.
And after a moment where I actually thought you fully accepted the 1LOT and all of its consequences, and that your initial challenge was just an off-the-cuff kneejerk reaction to me, I’m now back to thinking you really don’t fully accept it, because you are deflecting and diverting and feigning like you didn’t understand what I said. So if you really do accept the 1LOT you’re doing everything you possibly can to convince me otherwise.
“U” is the total internal energy, both kinetic and potential (latent). One cannot measure the potential energy directly. That is what enthalpy is for, since it takes “PV” into account.
Temperature is not a good proxy for total energy.
We need more of this, please.
check out some more at https://www.youtube.com/@WatchGorillaScience/videos
Thank you!
We’ve been hearing this for decades. And the question I always ask is: if you think all these graphs are wrong, why not produce your own graph using your own preferred methods? There are a lot of companies, organisations, and governments who have vested interests in disproving the current evidence. They could easily afford to finance a “correct” graph. Then we could debate the merits of the different approaches.
Well, I’ll give it a go. Since the land mass is only about 30% of the total surface of the earth, meaning that the oceans are the MAJORITY, concentrating the bulk of urban measurements in the larger cities is guaranteed to slant the readings one gets. Plus, the area of these UHIs is very small, compared to the overall area of land. The vast majority of land area is NOT found in these heat islands. Trying to prove that the earth is getting hotter based on these erroneous readings is a fool’s plot. Here in the Midwest we see the same thing. In winter, Springfield, Illinois shows LOWER wintertime temps than the surrounding communities and farmlands. Who are they trying to fool? Also, the addition of ‘heat indices’ just adds more confusion in an attempt to make people believe that it is hotter/colder than it really is!
In summary, it isn’t any hotter, or colder, than it has been in recent times. People with an agenda to harvest MORE of our hard-earned incomes are interested in making us believe that we are, indeed, heading for a catastrophe, and that THEY have the solution, which just happens to involve MORE TAXES for you and me. MY solution to this is to simply IGNORE them and NOT vote for anyone who supports their bogus theories.
“Well, I’ll give it a go. Since the land mass is only about 30% of the total surface of the earth, meaning that the oceans are the MAJORITY, concentrating the bulk of urban measurements in the larger cities is guaranteed to slant the readings one gets.”
Huh? The fact that land contributes only about a third of the global anomaly makes any urban effect less relevant.
“Trying to prove that the earth is getting hotter based on these erroneous readings is a fool’s plot.”
You are just doing what I’m arguing against. Claiming the data is wrong without providing a better alternative.
“In summary, it isn’t any hotter, or colder, than it has been in recent times.”
Again, what evidence do you have for that claim? Just saying you don’t like the current evidence isn’t a substitute for providing your own.
“MY solution to this is to simply IGNORE them and NOT vote for anyone who supports their bogus theories.”
Thanks for making it clear you have a personal motive for not wanting to believe the evidence.
Where are the thermometers located? How many are floating on the Arctic Sea? The Pacific Ocean?
What is the ratio of land based measurements to ocean based measurements?
What is the ratio of NH to SH?
Best to look at one of the few surface sites that has been properly maintained since 1880 and has basically uncorrupted data.
If you look , you can clearly see that the decade 1930-1939 was warmer on average than any decade since (blue dots)
Most raw data in the NH and many other places in the SH follows that pattern as well (before adjustment)
Pretending you have enough ocean data before 2005 to create any “global” temperature before then, is just blowing smoke.
Why would you base the entire global temperature on just one station?
“If you look , you can clearly see that the decade 1930-1939 was warmer on average than any decade since (blue dots)”
I’m not sure where you got your data from. I used GHCN monthly data, and it doesn’t show 1921 as being anything like that hot. Moreover your claim is just wrong. The last three decades have all been warmer than 1930-39. Even 1940-49 was slightly warmer.
This only goes up to 2019.
Here are the annual temperatures with a 10-year rolling average.
By the way, comparing the Valentia temperatures with those for CET show that Valentia is somewhat warmer despite all the claimed urban warming in England.
This is the same graph as above, with the 10 year rolling CET average added in blue.
I’d guess that shows how much warming you get from being on the coast.
roflmao.
I said decade average, and you are using “adjusted” GHCN data, not real data.
And thanks for showing us just how much CET has been affected by urban population growth
No. I’m using the unadjusted GHCN data. But if you could provide a link to the data you are using, we could compare the two.
“No. I’m using the unadjusted GHCN data.”
No, you are using computer-generated bastardized data.
Original, written, temperature data does not have a “hotter and hotter and hotter” Hockey Stick temperature profile. It takes a computer and a dishonest computer/climate scientist to create the Hockey Stick temperature profile.
That’s what I don’t understand about you and a few other alarmists that post here who seem to have enough intelligence to understand you are dealing with bogus temperature data, when dealing with the Hockey Stick data, yet you completely ignore this and carry on as though the Hockey Stick temperature profile is legitimate.
You’ve seen the original, written, regional charts. None of them have a “hotter and hotter and hotter” temperature profile. The Hockey Stick chart creators only had this data to use when they started out, so how did they get a “hotter and hotter and hotter” temperature profile out of regional data that does not have a “hotter and hotter and hotter” temperature profile?
You don’t find this strange?
I would contend that there is no way one can legitimately get a “hotter and hotter and hotter” temperature profile out of data that does not have a “hotter and hotter and hotter” temperature profile.
Do you know of any way this can be done?
Talk about confirmation bias. In your mind there is no warming, so any station showing warming must be “bastardized”.
I’m quite prepared to believe that the raw Valentia data shows no warming, but so far I can’t find that this raw data exists, and no one will provide a link. The data I used is the GHCN monthly QCU version. That means quality controlled but not adjusted.
bnice has a fixation on Valentia. I have explained a few times to him that he needs to apply meteorology to it, and basic meteorology at that. Valentia lies on the SW coast of Ireland and has a prevailing (often strong) SW’ly wind. That means that its temp is strongly affected by SSTs of the Atlantic Ocean, including, as your graph shows, a higher ave temp than the CET series (because of higher minima). The opposite side of the coin is a suppression of max temps. The ‘bump’ in the 40’s is indicative of both the AMO and the PDO being in their +ve modes (the only time this has occurred in the modern instrumental record). The dip following was the aerosol cooling from industry after WW2 plus a switch in both oceans to their -ve mode in the 60’s.
You again show that Valentia is TOTALLY REPRESENTATIVE of the region. That means that CET is not.
Thanks 🙂
Mainly because the Met Office and its incompetent employees have allowed the surface stations to fall into an abysmal state of disrepair that is totally unfit for the purpose of climate measurements over time.
You were one of those incompetent employees, weren’t you?
“I used GHCN monthly data”
Very funny.. You mean agenda-“adjusted” FAKE data.
The data I have is direct from Valentia, without FAKE “adjustments.”
In the real data, the decadal average of 1930-1939 is obviously cooler than the decadal average of 2010-2020..
Darn…. I typed that wrongly
In the real data, the decadal average of 1930-1939 is obviously WARMER than the decadal average of 2010-2020..
The Menne FAKE homogenisation algorithm is all over the place as it tries to FAKE a warming trend at a pristine station that does not need “adjusting” (except for climate propaganda purpose)
The past changes every year…. always fabricating an upward trend.
It would be hilarious.. if it weren’t such a sad indictment of the AGW scam.. !
It’s more than sad, it’s criminal.
“If you look , you can clearly see that the decade 1930-1939 was warmer on average than any decade since (blue dots)
Most raw [original, written] data in the NH and many other places in the SH follows that pattern as well”
Yes, it does. The original, written temperature records are all we have that tell us about the past temperatures, and they show it was just as warm in the past, all over the world, as it is today.
I think what many here are proposing is that the famous graph cannot be relied upon due to poor weather station siting. The only way to produce a more accurate reading of raw data would be to have all weather stations that are contaminated by urban heat resited in rural locations. There’s little point in doing any serious reanalysis until this has been carried out. In the meantime, we do have satellite data, which of course also shows a warming trend, albeit not as extreme.
The warming trend over the oceans in the satellite data comes ONLY at El Nino events.
There is no evidence whatsoever of any CO2 caused warming in the ocean UAH data..
The warming trend over land in UAH is about 1.5 times that of the oceans.
This slight difference is almost certainly caused by human heating and land use changes.
“In summary, it isn’t any hotter, or colder, than it has been in recent times.”
“Again, what evidence do you have for that claim?”
The original, written temperature records confirm that claim. They show it was just as warm in the past as it is today.
I have charts showing just that, if you want them.
How do you know that if you think the observations that would support your claim are erroneous?
Because my tomatoes have refused to grow in March or November for over 50 years, no matter how hard I wish them to.
Tomatoes require a minimum 50F nighttime temperature to set fruit. That result is called observation. If it were actually continuously getting warmer, I would now have tomatoes setting fruit in those months, and I don’t. That’s called observation as well.
You should ignore the evidence of your lying eyes and believe the computer models instead.
Do you think the tomatoes in your backyard adequately represent the tomatoes over every square meter of the Earth’s surface?
Do you think temperature is the only thing that modulates the growth of tomatoes?
Do you think using tomatoes, trees, etc. are an adequate proxy for the temperature in general?
“Do you think using tomatoes, trees, etc. are an adequate proxy for the temperature in general?”
Actually, yes. Look up the Koppen-Geiger Climate Classification, which uses plants. Climate is much more than temperature.
The tomatoes mean vastly more than your idiotic “average” temperatures and computer models.
Can you post a link to a dataset that uses tomatoes as a proxy to show how much the globe has warmed/cooled since 1880?
I have underlined and bolded “globe” to drive home the fact that we are trying to assess the change in global temperature, not the temperature in your or doonman’s backyard.
Pathetic response.
There is no such thing as a global climate, nor a global temperature.
BINGO as your climate classification makes very clear.
If you cannot provide evidence showing that the Earth as a whole is warming/cooling using the tomatoes in your backyard then I don’t really have any other choice but to dismiss your hypothesis.
At long last, we are treated to the real game for Mr. x: “the Earth as a whole” is beyond our capacity, which is a feature-not-a-bug for these select math trolls. Gorman and crocodile took Mr. x apart, but he just keeps posting the same nonsense, that GAT has any meaning beyond grant requests. GAT is a great big fudge that is used to serve CliSci.
You are ever-so-correct. He and bellman both claim no expertise in metrology and measurement uncertainty, but then proceed to lecture experienced professional engineers as if they are experts.
And despite their high volume of posted comments, they still haven’t refuted a single point in Tom Nelson’s Gorilla Science video on the hockey stick fraud.
Isn’t it funny that warming only ever appears in averaged, adjusted “global” temperatures or urban sites, never in pristine, measured temperature series?
It is one of the myths that never dies.
“It is one of the myths that never dies.”
bdgwx then proves my point by posting an averaged, adjusted global temperature.
Well played Sir!!
The blue line is the raw data, which also shows warming. In fact, it shows more warming than the adjusted data. Is this the point you were trying to prove?
There is no such thing as a Global Temperature.
Now try posting an unadjusted, raw temperature series from a particular location devoid of UHI which shows warming.
Sure.
It would be interesting to see the data before 1960.
From the NSIDC: The 2024 melt season for the Greenland Ice Sheet ended with the second-lowest cumulative daily melt extent in this century, ranking twenty-eighth in the satellite record, which began in 1979. Summer air temperatures were generally low over the southern half of the island, with a persistent low-pressure system over Iceland driving cool northern winds across the ice sheet.
A really rural station!
Well spotted! I’m sure much or all of the warming is due to UHI plus plane exhausts.
Failed again. You really think an airport is pristine and devoid of UHI?
You really aren’t much good at this, are you?
Time series need a special analysis. Otherwise changes in standard deviations, means, seasonality, etc. can cause spurious trends and all kinds of wrong conclusions.
It is clear you have never grown tomatoes, as they are very fussy about temperature. I have to seal my tomato plants at night to give them a weak greenhouse effect and block cool night winds.
I’m growing tomatoes right now. Not that it matters since growing tomatoes isn’t necessary to understand that 1) my tomatoes aren’t going to grow at exactly the same time and rate as someone else’s tomatoes, 2) temperature isn’t the only thing modulating their growth, 3) tomatoes don’t tell you what the temperature actually is.
Hate to break it to you, but they have as many squirrels in St. Charles as in St. Louis. We went with two 4′ × 8′ × 2′-high Walmart raised beds, using the “German” method of logs on bottom, twigs, raised bed mix, selective fertilization. The lazy rabbits won’t jump, but when the tomatoes start bearing, we’ll need a Plan B…
So far the squirrels and rabbits haven’t been a problem. We’re still relatively new to growing tomatoes though. Last year wasn’t too bad. We’ll see how this year turns out. I love tomatoes.
“Do you think the tomatoes in your backyard adequately represent the tomatoes over every square meter of the Earth’s surface?”
I’d argue no, but since the narrative insists that CO2 is 1) a well-mixed gas and 2) the control knob of the Earth’s climate, there should have been an observable impact on growing season length. Apparently, none has been observed.
The great bdgwx opines:
“Do you think the tomatoes in your backyard adequately represent the tomatoes over every square meter of the Earth’s surface?”
This is the same person who thinks that occasional, sparse, poorly-calibrated temperature measurements represent every cubic meter of the Earth’s oceans.
I live in a small river (creek) valley maybe 1 mile wide and depressed maybe 60 feet. I can assure you that cold air sinks. The “official” morning temperature is always 4 – 5 degrees F warmer than my location. The difference disappears by mid morning after the sun rises.
Mr. cat: Yes, Mr. x is so committed to “temp=climate”, and “temp records can be averaged to tell us about climate”, that he cannot consider the quality of the temp records, and we should stop, too. In the end, he is not so skilled at math as he is at keeping a comment string running.
Tony Heller has done it.
Yes. And if you’ll remember, it was his gross incompetence in doing so that played a role in getting him banned from this site.
I don’t know why someone should be banned for being incompetent. I think the story is more complicated than that, though it happened before I ever came here.
You are correct. It is complicated. It wasn’t just his efforts in creating his own temperature dataset that rubbed Anthony the wrong way. Anthony stated at the time that Steve’s insistence that CO2 freezes on the surface of Antarctica, and his stubbornness about admitting mistakes, played a role as well. I should point out that “Steve Goddard” is the alias Tony Heller chose for himself. At the time of the ban Anthony said he knew “Steve Goddard” was a made-up name, but didn’t know his real identity. He only knew that his identity was going to be outed at the Heartland Institute’s ICCC9 conference in 2014 and that he didn’t want to have anything to do with it. Tony’s deceit about his identity seems to have played a role in the ban as well.
Still seems a bit over the top to ban Tony. Everyone can be stubborn at times and we all make mistakes. Seems more of a personal grudge between the two of them. Meanwhile, there are a few serious climatistas still active here, which is of course a good thing, IMHO. Good to have the challenge.
I don’t disagree. Giving people grace is a virtue. I know I’m guilty of many mistakes myself. If I got banned for every mistake I made I’d never be able to post in any forum, including but not limited to WUWT. And on top of that, I will sometimes challenge Anthony’s arguments. Despite that I’ve yet to be banned here. I’m grateful, because posting here forces me to learn more about a topic I’m passionate about.
“why not produce your own graph”
I just did that in the other thread, Bellman. Yesterday. Then you dishonestly accused me of “hiding” the “details”. You are an arrogant and mendacious propagandist, at best.
Can you post the graph and details of how you created it here so that we can all review it?
You mean here?
https://wattsupwiththat.com/2025/05/08/hottest-start-to-may/#comment-4070259
That was not what I was asking for. It was a graph of CET data, not global, not your own, in fact produced by the same organization this article is attacking, and it shows more warming than any of the graphs mentioned in this article.
What I accused you of hiding was the trend, by using the seasonal range of values.
The graph only shows warming if you tilt it anticlockwise.
Look again, at a graph designed for the purpose we use graphs for, which is to highlight things not easily seen in just raw data…
I would be surprised not to see this very mild warming trend considering the World is recovering from the LIA. The warming in the 2000’s is almost certainly UHI contamination since the temperature series from the pristine Valentia Observatory in SW Ireland shows that the 1930’s were warmer than today.
CET is based on totally unfit for purpose surface sites and as bellboy showed, they are heavily affected by UK population growth and urban expansion and densification.
They got that way because of the incompetence of past Met Office employees.
Even with the tampering and UHI contamination, CET only shows 0.03C rise per decade. Remove the UHI of the late 20th Century and early 2000’s and even this warming will likely disappear.
“things not easily seen in just raw data”
This may be the only true physics-related thing you have ever said. Because catastrophic warming is certainly not easily seen in just raw data. That is my entire point. If only we could get this on the nightly news…
Here is the Tmin chart corresponding to the above Tmax one. While I don’t see any trend in Tmax with my unaided Mark I eyeball, I do see one in the Tmin, although the present day conditions still look no different than the 1930s and 1940s. Regardless, what that means is that rather than getting “warmer” in the heat of summertime, what we are looking at is a country that has gotten one or two degrees “less cold” at night and in the winter. Do you have a problem with that? If so, why? And what do you think CO2 has to do with any of this, and why?
You have basically ZERO data for most of the ocean before 2005.
Pretending you can create a graph is just silly.
Most raw land data from the NH, and much of what there is from the SH, shows the 1930s/40s being similar to around 2000, with a dip to a cold period around 1979.
Yes, that’s the one.
“That was not what I was asking for.”
Technically, yes it was. You asked us to “produce our own graphs”, which I did.
“CET data, not global”
What precisely is “global temperature”? And what’s the problem with CET data? Do you have a long-term graph of any other rural station that shows a different pattern? Can you show me? And if there is no cause for alarm in England, which there obviously isn’t, then what need do British people have to pay carbon taxes and subsidize the installation of masses of solar panels and wind turbines, ruining their previously beautiful farmland?
“What I accused you of hiding was the trend,”
What trend? I’m not “hiding” anything. What I posted is the opposite of “hiding”. You, on the other hand, are carefully selecting some data points that support your “narrative”, and hiding everything else. Disingenuously and mendaciously.
I asked “why not produce your own graph using your own preferred methods.” I hoped from the context it would be understood I was asking for an equivalent of “The” graph described in this article, i.e. a global temperature reconstruction.
“And what’s the problem with CET data?”
CET is a fine piece of work and is useful for comparing temperature change over a long period of time, but it is not global. By definition it is an approximation of English temperature. Also, by its very nature, using data from only a few stations, it will not be as accurate as the Met Office regional data.
It’s odd how differently so-called skeptics think of CET. Half the time they imagine it’s some pristine, unadjusted holy writ that can be used to disprove anything they don’t like about global temperature data. Then, when they realise it actually shows more warming than the global average, they say it’s not fit for purpose and a complete fraud. See bnice’s comments here, for example.
“What trend?”
The trend that becomes obvious if you, for instance, look at 30-year averages.
“You, on the other hand, are carefully selecting some data points that support your “narrative”, and hiding everything else. Disingenuously and mendaciously.”
I am not selecting any data. I’m using all the data you used. I’m just showing that data in a way that lets you see the woods rather than the trees.
Looks like UHI contamination to me. Stevekj’s observation that recent warming is in Tmin and not Tmax is evidence of this.
Why is this warming not seen in the Valentia series, which really is pristine?
“Looks like UHI contamination to me.”
It’s the data that Stevekj claimed was the graph that contradicted all the other ones. As always, it only shows “UHI contamination” when you don’t like the results.
“Stevekj’s observation that recent warming is in Tmin and not Tmax is evidence of this.”
This is not true when it comes to CET.
Since 1970 minimums have been warming at 0.24 °C / decade. Maximums by 0.32°C / decade.
Some of this may be down to increased sunshine.
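[editor’s note: for anyone who wants to reproduce a per-decade trend figure like this, a minimal Python sketch is below; the yearly values it generates are placeholders, not actual CET data.]

import numpy as np

# Placeholder annual mean temperatures (°C), one value per year from 1970.
# Substitute the real CET annual means to check the quoted per-decade figures.
years = np.arange(1970, 2025)
rng = np.random.default_rng(0)
temps = 9.5 + 0.03 * (years - 1970) + rng.normal(0.0, 0.4, years.size)

# Ordinary least-squares slope in °C per year, scaled to °C per decade.
slope_per_year = np.polyfit(years, temps, 1)[0]
print(f"Trend: {slope_per_year * 10:.2f} °C per decade")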
“Why is this warming not seen in the Valentia series, which really is pristine?”
Still waiting for someone to actually supply this “pristine” data. The adjusted GHCN data shows warming. Not as much as CET, but the Valentia observatory is close to the Atlantic Ocean, which would likely moderate the amount of warming.
“The adjusted GHCN data shows warming.”
roflmao… Of course it does. That is part of the whole farce!
( take foot out of mouth next time.. 😉 )
Data source is actually the Irish Met, BEFORE the adjustments.
Thanks again for showing just how much urban warming is in the CET data, mainly because of urbanisation, homogenisation, adjustments and the parlous/pathetic state of the Met Office surface sites.
Since 1970 minimums have been warming at 0.24 °C / decade. Maximums by 0.32°C / decade.
This is most odd, since there was global cooling between 1940 and 1980 (See https://wattsupwiththat.com/2018/11/19/the-1970s-global-cooling-consensus-was-not-a-myth/)
So, was anyone alarmed about that 0.5 degrees of Tmax warming from 1900 to 1950? That’s 1 degree C/century, and no one claims that we were emitting enough CO2 to make any difference then.
Here’s another way of looking at the rolling average. I can’t get Excel to do a 9000-day (30-year) average, so the longest period I’ve got is a 255-day average, plotted at the same scale as before. (Annual might be slightly better, but longer than that and you risk introducing significant artifacts at the edges of the range)
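[editor’s note: for readers hitting the same Excel limit, a true 30-year running mean is straightforward in Python. The sketch below assumes the daily Tmax series has been exported to a CSV; the file name and column names are hypothetical.]

import pandas as pd

# Hypothetical layout: a "date" column and a daily "tmax" column in °C.
df = pd.read_csv("daily_tmax.csv", parse_dates=["date"], index_col="date")

# Centred rolling means: roughly one year (365 days) and thirty years (10957 days).
annual = df["tmax"].rolling(window=365, center=True, min_periods=300).mean()
thirty_year = df["tmax"].rolling(window=10957, center=True, min_periods=9000).mean()

# Inspect or plot; the 30-year curve is what smooths away the seasonal range.
print(annual.dropna().tail())
print(thirty_year.dropna().tail())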
So what I see looking at it this way is that there was no discernible change at all until about 1989, and then it looks like it got about 1 degree warmer all of a sudden, and stayed that way ever since. Should we be alarmed? If so, why? Does any of this look like it was caused by gradually increasing CO2 emissions since 1950? How and why?
(And did anything happen to this monitoring station in 1989? Hmm)
I will repeat my earlier question: looking at the raw Tmax (or Tmin), would you have any reason to spend any time at all trying to process and magnify the average into something scary-looking? If so, why?
“it is not global”
No it isn’t. But if there is no reason to be alarmed about English temperatures, then what other station would you like us to look closely at?
There have been several such graphs.
Could you point me to one?
Pretending you can make sensible and accurate graphs from highly tainted JUNK data.
Only an activist does that.
Okay, firstly, in the UK the Met Office LIE endlessly. Don’t believe me? Then try this one for starters:
https://tallbloke.wordpress.com/2025/05/09/refusal-to-disclose-information-under-the-environmental-information-regulations-2004/
Secondly, recreating a historic national temperature record is exactly what I am doing. Try this:
https://tallbloke.wordpress.com/2025/05/07/lerwick-wmo-03005-dcnn-0043-0044-an-introduction-to-data-analysis-of-the-surface-stations-project/
Hi Ray, I would like to thank you, and any helpers, for showing us the parlous, unfit-for-purpose junk stations that the Met currently relies on. 🙂
Excellent beat down on the Holy Writ of the Climate Cargo Cult! Once again we see the machinations of the global elites are not to elucidate, but to further confuse the feeble-minded and readily brainwashed who avidly follow their religious screeds!
The High Church of Climastrology is vying with Marxism for the title of most destructive hoax in human history. Other lesser fish, drafting in its wake, are the ruminant burp hoax, the USG Food Pyramid Hoax, the Saturated Fat/Heart Health Hoax and the much touted Russian Collusion Hoax among too many others to list! Look for the gullible to emphatically believe all, or many, of these psyops mantras, breathlessly presented by the lapdog media to preserve the status quo!
“My name is Tom Nelson, and this is Guerilla Science.”
Then why does the logo say Gorilla Science?
“Let’s take a close look at this graph. The agencies that produce these graphs, like NASA, NOAA, and the UK Met Office, all rely on the same data from the US and Global Historical Climatology Networks,”
Are you talking about one graph or multiple graphs? They do not all rely on the same data. HadCRUT uses private data; that was what all the FOI arguments were about.
“NOAA’s graph starts in 1850”
It starts in 1880. (I know I’m being picky, but these constant mistakes don’t lead to the impression of a well-researched argument.)
NOAA’s graph starts in 1880 because there’s a temperature spike in 1878 that they don’t want you to know about. If you load up HADCRUT data it’s there.
I’m on an iPad, otherwise I’d post a link.
“NOAA’s graph starts in 1880 because there’s a temperature spike in 1878 that they don’t want you to know about.”
Devious bastards, aren’t they.
Please post the chart showing the high temperatures around 1878, the period NOAA doesn’t want us to see.
You mean the one true graph this article is about. Here you go.
https://www.metoffice.gov.uk/hadobs/hadcrut5/
Again, based on measurements that just don’t exist.
Even Phil Jones admitted that ocean measurements were “mostly made up”
And where they do exist in the latter half of the picture, numbers are based on surface data that is proven to be totally unfit for the purpose of measuring temperature changes over time.
A travesty of fabrication.
Why do you suppose the rate of warming in your chart is exactly the same for the period 1920 to 1945 as it is for the period starting in the 1970s?
I mean, why do you think the planet was warming just as fast back then as now?
Please explain.
Exactly the same rate! What happened?
CO2? …Oh wait.
Slight clarification. I copied that graph from the HadCRUT5 homepage, which for some reason only goes up to 2018. Here’s the full range of annual data.
You know it is JUNK data from JUNK sites.. yet you still keep using it. !!
You are only fooling yourself.
Red thumber… show us where all the data comes from to make that graph back in 1850…
You and everyone else knows that IT DOES NOT EXIST
And the ocean data .. also DOES NOT EXIST before 2005 for much of the world’s oceans.
You KNOW it is a JUNK fabrication from JUNK or NON-EXISTENT and FAKE data.
You have yet to face that fact, so you just continue to fool yourself. !
No you are not being picky …more of a dick head
Not going to argue with your description.. 😉
Also cannot read or has very low comprehension skills.
““My name is Tom Nelson, and this is Guerilla Science.””
You obviously missed the line just before that says…
Autogenerated and autoformatted transcript.
Correct, he does say “gorilla” science. Seems more accurate in any case.
I seem to remember an article describing the loss of roughly 5,000 cold-weather stations when the USSR collapsed. It discussed how military bases were known to alter the temperature logs and exaggerate how cold it was in order to get more fuel.
As an aside, I have 2 OSAT sensors roughly 50′ apart: an analog one on the north wall of my deck (in the shade), and a Davis in the yard on my fence (west side). The Davis is 4′ off the ground, with gravel below it.
I can get as much as a 10-degree delta depending on the time of day and cloud/wind conditions.
Useful to me as to where I want to spend time outside…
I read decades ago about that, but I thought it was 1200 weather stations on the Arctic Circle.
Has Gavin been fired yet?
Maybe Trump will deport him.
We have enough liars in the UK already, thanks all the same.
I am hoping that POTUS makes it known to all scientists, data analysts, etc. working for the State on climate data, or indeed funded by the State, that anyone knowingly producing false data may well be charged with malfeasance in office, and that the full weight of the judiciary may descend upon them. If they have done nothing wrong, then no problem, but…!
Let us see, if he makes this declaration and puts DOGE and the FBI (say, looking into mass fraud) on the case, how many ‘jump ship’ and try to get a deal!
I know next to zero about “the law” but I suspect trying to prove that someone is “knowingly producing false data” will be difficult.
I think the instrument-era Hockey Stick chart is vulnerable to such scrutiny for being fraudulently created.
After all, how can they explain getting a “hotter and hotter and hotter” Hockey Stick temperature profile out of original temperature data that does not have such a profile?
I ask that question all the time, and not one climate alarmist has tried their hand at answering it, which just tells me they have no answer. They would reply if they had a good explanation.
Let’s ask the temperature data mannipulators this simple question. The only plausible answer is somebody has bastardized the data for political/personal purposes.
This is called science fraud, and it is the biggest, most expensive Science Fraud in human history.
Will any Climate Alarmist weigh in on this topic? How do you get a Hockey Stick out of data that has no Hockey Stick?
Malfeasance is probably justified, but I would be thrilled if the charge was treason. It won’t be, but one can dream…
I consider this Hockey Stick science fraud a Crime Against Humanity.
It is a perversion of the human mind. You cannot argue against belief. Argument only reinforces the belief.
The graph starts near the end of the Little Ice Age. Of course we are getting warmer!
Meanwhile, we’re still in the big, long-term ice age. We should be more fearful of the return of the continental glaciers than of a trivial warming.
Yes, we warmed up out of the Little Ice Age to a high point in the 1880’s, and it hasn’t gotten any warmer since that time. The 1880’s were just as warm as the 1930’s, and the 1930’s were just as warm as today. No net increase in warmth over this entire period. It is not getting “hotter and hotter and hotter”.
The example given — In 1900, the population of Phoenix was five and a half thousand. Now it’s about one and a half million. Thermometers that were once in open fields … — seriously understates the magnitude of the growth:
That’s a yugely fertile once-agricultural valley, converted to residential – commercial – industrial uses.
Where did all the citrus orchards & farms go? Mostly well downriver (the Gila, toward its confluence with the lower Colorado at Yuma, just above the Gulf of California / Sea of Cortez).
… following the sun, and the canals, well away from the mountain valleys and canyons preferred for human settlement (homes & gardens).
There’s nothing dodgy about global temperature constructs except their Probity, Provenance and Presentation.
Nothing to see here –
I like the image of the reporter in front of huge arson fires saying that it is a mostly peaceful protest.
Every single effort to “stop global warming” has been an abject failure for over 40 years.
The track record is 0% effective.
Keep promoting failure and cash the government checks. A wonderful plan.
Using average/mean temperatures is bogus. Temperature cycles are not sinusoidal over a 24-hour period.
The median would be a somewhat better statistical tool. Even then, the median does not help with the T^4 blackbody emission calculations.
Do the calculations for +25 °C, +15 °C and +5 °C. Average the W/m^2 values for +25 °C and +5 °C and compare to the W/m^2 value for +15 °C. They are not the same.
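[editor’s note: a quick check of that arithmetic using the Stefan-Boltzmann law, assuming a blackbody with emissivity 1:]

# Stefan-Boltzmann: flux of the average temperature vs average of the fluxes.
SIGMA = 5.670374419e-8  # W/(m^2 K^4)

def flux(t_celsius):
    """Blackbody radiant exitance in W/m^2 for a temperature given in °C."""
    t_kelvin = t_celsius + 273.15
    return SIGMA * t_kelvin ** 4

f25, f15, f5 = flux(25.0), flux(15.0), flux(5.0)
print(f"+25 °C: {f25:.1f} W/m^2, +15 °C: {f15:.1f} W/m^2, +5 °C: {f5:.1f} W/m^2")
print(f"Average of the +25 °C and +5 °C fluxes: {(f25 + f5) / 2:.1f} W/m^2")
# The averaged fluxes come out near 394 W/m^2, while +15 °C radiates about 391 W/m^2.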
UHI is more than just concrete, glass, steel, and energy used. It is also the surface areas of the buildings versus the building footprint area.
Meanwhile the lemmings head toward the cliff – a very real tipping point.
That might be true for some types of analysis, but it isn’t universally true. For example, the mean can be used to calculate the change in energy via Q = mcΔT. If you try to model Q using a median for ΔT you’ll get the wrong answer if the mean and median are different. A relevant topic for you to explore is the mean value theorem for integrals. This concept explains at a mathematical level why a mean works here. Note there is no analogous median value theorem for integrals.
The relevant topic is called the rectification effect. Scientists are well aware of the 4th-power relationship between temperature and radiant exitance; that’s what causes the rectification effect. And it is because scientists understand this concept that we can estimate it. It turns out Earth’s rectification effect is about 6 W/m^2, or roughly 1 °C. [Trenberth et al. 2009]
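[editor’s note: a small made-up illustration of the mean-versus-median point for Q = mcΔT; the parcel values below are hypothetical.]

from statistics import mean, median

# Hypothetical: six equal 1 kg parcels of water, each warmed by a different ΔT (°C).
delta_t = [0.1, 0.1, 0.2, 0.2, 0.3, 2.0]
m = 1.0       # kg per parcel (assumed)
c = 4186.0    # J/(kg K), specific heat of water

q_true = sum(m * c * dt for dt in delta_t)            # exact total heat added, in joules
q_from_mean = m * c * len(delta_t) * mean(delta_t)    # matches q_true exactly
q_from_median = m * c * len(delta_t) * median(delta_t)

print(q_true, q_from_mean, q_from_median)  # the median version understates the total here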
Yeah. We know. This was discovered in the 1800’s.
You mean “Talk about anything except the mythical Greenhouse Effect”, don’t you?
Waffling about pointless numbers is all well and good if you don’t want to admit that adding CO2 to air doesn’t make it hotter, and that the Earth has cooled in spite of having an atmosphere and four and a half billion years of continuous sunlight.
Average that!
May I just ask that everyone read my latest report, which will be followed up shortly with yet more proof of numbers simply being made up.
https://tallbloke.wordpress.com/2025/05/09/refusal-to-disclose-information-under-the-environmental-information-regulations-2004/