An August 3rd article in The Washington Post (WaPo), titled “Study suggests nearby rural land can cool cities by nearly 30 percent,” admits what others, including The Heartland Institute, have noted for years: rural areas are much cooler than cities. What is new is the suggestion that rural areas actually keep cities cooler by transporting cooler air from the rural regions into the city. They write:
Rural land surrounding urban areas could help cool cities by up to 32.9 degrees Fahrenheit, an analysis in Nature Cities suggests, hinting at a way to cool increasingly scorching urban areas.
In an attempt to understand how rural land cover affects urban heat islands — a phenomenon in which cities become significantly warmer than the areas surrounding them — researchers studied data from 30 Chinese cities between 2000 and 2020. They looked at land cover surrounding the urban areas and ranked the capacity of various urban-rural configurations to cool the cities.
Note the bolded 32.9 degrees Fahrenheit from the WaPo story. We’ve often said that journalists don’t have a clue about the most basic science, much less complex climate science. In this story WaPo reporter Erin Blakemore makes this abundantly clear.
The study press release from the University of Surrey referenced in the WaPo story says: “Rural belts around cities can reduce urban temperatures by over 0.5°C.”
Why the difference? Apparently the WaPo reporter can’t convert between Celsius and Fahrenheit correctly. In an attempt to “Americanize” the story, she wanted to display the temperature in Fahrenheit, the scale used in the United States.
If you don’t know the formula, you can use Google to do it, which is what she likely did. With Google 0.5°C converts to 32.9°F.
But that’s clearly wrong, because Blakemore is assuming that 0.5°C is an absolute temperature, rather than a difference in temperature due to the cooling effect between rural and city areas. If what she said were true, a pleasant city temperature of 65°F would be impossibly cooled down to 32.1°F (nearly freezing) by the effect of nearby rural areas. How embarrassing.
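For readers who want the arithmetic spelled out, here is a minimal sketch in Python (ours, not from the WaPo story or the study) showing why a temperature reading and a temperature difference convert differently:

```python
def c_to_f_reading(c):
    # Converting a thermometer reading: the 32-degree offset applies.
    return c * 9.0 / 5.0 + 32.0

def c_to_f_interval(dc):
    # Converting a temperature difference: only the scale factor applies.
    return dc * 9.0 / 5.0

print(c_to_f_reading(0.5))            # 32.9 -- a reading of 0.5 C
print(c_to_f_interval(0.5))           # 0.9  -- a cooling effect of 0.5 C

city_f = 65.0                         # a pleasant city afternoon
print(city_f - c_to_f_interval(0.5))  # 64.1 -- the correct "cooled" temperature
print(city_f - c_to_f_reading(0.5))   # 32.1 -- the WaPo-style error
```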
Despite this laughable failure of grade-school science, the WaPo story does point out the mechanism for how rural areas help keep cities cooler. The study, done in China, says that heat is transported from the cities to the rural areas through a meteorological mechanism. The reason is a matter of physics, they write:
Air warms in cities, leaving a low-pressure zone near the ground that then helps transport cooler air from surrounding rural areas. The rural areas then go on to absorb the heat.
The study has implications for very large cities, which often set new high-temperature records when surrounding areas do not. The Urban Heat Island (UHI) effect has been known for quite some time, as shown in the illustration below.
From Climate at a Glance: Urban Heat Islands, we know:
- Urban heat islands, which grow along with the size of cities, create artificial warming at many long-term temperature stations.
- On average, urban heat islands increase the global surface temperature trend by almost 50 percent.
- Nearly 90 percent of U.S. temperature stations have been compromised by urbanization effects.
- Almost half of the reported U.S. warming disappears when reporting only stations uncorrupted by heat islands.
One of the biggest issues with UHI is how it affects the surface temperature record for the planet.
The data in Figure 1, below, show that temperature stations not corrupted by the urban heat island effect report significantly less warming than stations that have been corrupted by urban heat island impacts. Still, despite this well-known problem, corrupted temperature stations make up a majority of the stations used to report official U.S. temperature data.

The biggest UHI issue is the fact that thermometers located in cities tend to bias the trend of “global warming” upward, because cities are over-represented in the temperature station record as a percentage of the Earth’s surface area. Rural areas, where thermometers have not been compromised by UHI, have a far lower 30-year trend in temperature.
It’s good that the UHI is finally being recognized by the WaPo, but given the scientific skill level displayed by the writer of this story, it is doubtful they will ever figure out how badly the UHI is biasing the surface temperature record they find so alarming. I sadly expect WaPo and its science-illiterate writers to continue to push the climate narrative blaming carbon dioxide produced by human fossil fuel use for causing a climate catastrophe, despite the mounting evidence that the recent temperature rise, properly measured, is unalarming.

Anthony Watts is a senior fellow for environment and climate at The Heartland Institute. Watts has been in the weather business, both in front of and behind the camera, as an on-air television meteorologist since 1978, and currently does daily radio forecasts. He has created weather graphics presentation systems for television and specialized weather instrumentation, as well as co-authored peer-reviewed papers on climate issues. He operates the most viewed website on climate in the world, the award-winning wattsupwiththat.com.
Originally posted at ClimateREALISM
Editor’s note: The feature image has the “32” circled in red as the source of WaPo’s error. Adding the 32 in the classic formula only applies to absolute temperatures, not a delta change in temperature.
Being innumerate is very helpful if one wants to be a green. Various questions inconvenient for spreading the message just do not occur.
She was actually right on top of the answer.
0.0°C = 32.0°F
0.5°C = 32.9°F
Δ0.5°C = Δ0.9°F
Just got lazy…probably couldn’t find where anyone else had done the work already so had to muddle through it herself and got lazy near the end
To a person not half asleep, the answer she came up with should have lit up a big flashing red light. How could anyone even start to believe the difference could be so large?
… as a large language model, I’m not particularly well suited to considering the reasonableness of any particular answer.
True.
I can see her taking the easy route and using a converter site.
But, considering what she was writing about, once the result came back over 32°F, her reaction should have been, “Wait a minute. That can’t be right!”
All you have to do is recognize that you are dealing with an interval and do a Google search.
“c to f temperature interval”
Voila! https://www.unitconverters.net/temperature-interval-converter.html
Good point.
Many people don’t think things through, climate catastrophists and their Marxist teachers included.
I sincerely hope you forgot the /s tag. Otherwise it’s an attempt to justify the unjustifiable.
Lazy and inept. Copy that.
She didn’t make a math error. She made a fundamental logic error. She converted 0.5 degree C as if it was a temperature rather than a change in temperature. It is true a temperature of 0.5 degree C is equivalent to 32.9 degree F but a change in temperature of 0.5 degree C is only equivalent to a change of 0.9 degree F. This is ignorance not laziness.
This is ignorance not laziness.
I disagree. It’s both. Ignorance can be cured, but not if you’re too lazy to care.
Apparently she let Google do the math for her and blindly copied the search result.
Not a good source of reality IMO. 😉
I have pointed out that one degree F is not the same as one F degree.
Example: 5 F degrees is a difference. 5 degrees F is a temperature.
Like this: 5°F versus 5 F°
I should mention I have taught 1st year college Earth Science.
Erin Blakemore, bless her little heart, would have flunked.
Don’t confuse people with accuracies.
Journalists are trained only in ideological obedience. In that curriculum, basic algebra does not even come into consideration; indeed, it is much more preferable for such students to be utterly ignorant of such matters, lest they perform a simple calculation and determine that the message they have been instructed to promote makes no sense at all. They are barely useful idiots.
They’re also apparently trained to parrot other journalists’ stories instead of doing their own work
Churnalists
It’s why a Google search brings up the same alarmist stories ahead of the more measured stories. AI is making things worse.
trained to parrot other journalists
Look at the publications (i.e. Axios) that are laying off “reporters” in favor of AI generated articles. AI is great at regurgitating what others write, even at rephrasing it. But not at original research.
They can make that replacement because most “journalists” today don’t DO any original research.
Using “percentage” for temperature change.. JUST DUMB !!
And a similar amount of clue about significant digits as the typical climate scientist/trendologist — none.
True for any temperature not relative to absolute 0.
______________________________________________________
Apparently the WaPo editor can’t do it either.
I wonder if most media actually have editors today.
Certainly not ones who think about an article then grill the writer.
This pretty much leaves me speechless… where do you go from here when both the author & evidently the editor lets this pass through?
My personal experience with journalists whenever I have personal knowledge of a situation is that journalists rarely get it right – it’s not necessarily malicious, more just cluelessness… and how many count on them for accurate information?
It’s Michael Crichton’s Gell-Mann Amnesia effect.
If they can’t convince you with their ideology, they baffle you with their ineptness?
Do not attribute to malice that which can easily be attributed to stupidity.
In the case of the Climate crisis, it’s both.
how many count on them for accurate information?
What always amazes me is how people can catch them with errors like this, yet STILL continue to accept their reporting as gospel on everything else.
Falsus in uno, falsus in omnibus
Actually, that is a logic fallacy.
It is true enough often, but still you have to assess the whole work. Usually there are many errors in the omnibus.
Literally taken, sure, but in application – if they’re so often wrong on subjects that I know about, why would I trust them implicitly on anything else? What reason is there to believe they are ONLY wrong on one subject?
G’Day Tony,
“What reason is there to believe they are ONLY wrong on one subject?”
Absolutely no reason what so ever.
Especially when it comes to politicians: “Don’t listen to them, look at their past accomplishments.”
When a newspaper routinely publishes climate alarmism and Marxists I say it is not ‘cluelessness’ but deliberate.
Such as the Times Colonist of Victoria BC which regularly publishes the ranting Marxist Trevor Hancock. Water scarcity, forced displacement among challenges we face – Victoria Times Colonist
“. . . Blakemore is assuming that 0.5°C is an absolute temperature . . . .”
I don’t think “absolute” is the correct term. Rankine and Kelvin are absolute temperature scales. Instead of a difference, it was assumed that it was a reading on a Celsius thermometer. You only need to multiply by 1.8 (9/5) to convert a Celsius difference to a Fahrenheit difference (or divide by 1.8 [multiply by 5/9] to convert back).
Two negative votes. What clueless individuals.
Everyone gets downvotes. No one is perfect.
But where is my statement wrong? The people down-voting have no clue. And I never claimed to be perfect. What stupid ——!
I didn’t say your statement was wrong, nor did I say you claimed to be perfect.
Let the quality of your comments speak for themselves is my point. Harping about downvotes comes across as defensive.
Seriously!
I negated one of them. 🙂
Thanks.
Have you noticed that individuals who know actual physics are denigrated, but individuals who don’t know physics are exonerated? It’s sad, but true!
Exonerated AND celebrated.
I saw the absolute and passed on it. You are correct. “Absolute” is incorrect.
0.5 C is a delta temperature that the WaPo writer assumed was a measured temperature.
Yes, very bad arithmetic in the WAPO article. And so far uncorrected. Grrrr!
But “it is doubtful they will ever figure out how badly the UHI is biasing the surface temperature record which they find so alarming”? So why does Fig 1 stop in 2008? I’d bet that it is using USHCN data, which ended in 2014. But the whole thing becomes nonsense when you look at USCRN, which has no UHI. That plot is shown on the front page of WUWT, but with the plot of the much larger dataset nClimDiv (which includes cities) removed. If included, the results are almost identical. However, the trend of USCRN, without UHI, to end 2023 is 0.3 C/decade; for nClimDiv (with UHI) it is less at 0.23 C/decade.
Again, you gormless twit, USCRN is a reference network used to adjust the UHI out of the more infected urban sites.
The difference started with ClimDiv a bit high, and has now levelled off with ClimDiv just a bit lower.
Only a complete mathematical ignoramus would not see what is happening as they have gradually honed their “adjustment” parameters.
2005-2015 Climdiv cooling slightly faster than USCRN.. then, as they “fix” the “adjustment parameters”..
from 2017, ClimDiv has warmed very slightly faster than USCRN
darn forgot image
False claim with zero data to support the claim. A lack of data never stops BeNasty from confidently stating his opinion as a fact.
The El Nino There Is No AGW Nutter does not need data for conclusions and ignores all data that do not support his claptrap. A climate comedian, not a climate scientist.
I just love seeing the romance between Richard Greene and bnice2000 play out on these pages. So romantic! Such a break from otherwise adult discussions. Shouldn’t one of the moderators here chaperone them?
“Only a complete mathematical ignoramus would not see what is happening “
And RG appears, proving my point.
Hilarious. !!
“False claim with zero data to support the claim”
Still waiting for you to show us the warming in the UAH data,…
…. that doesn’t come from El Ninos.
I have posted data time and time again, to back up reality.
It is there for all to see. !
Still waiting RG to show us the AGW in the UAH data.
We know the surface data is full of AUW, but that is not global.
Where is the Anthropogenic “global” warming.
Please let us know.
Do not feed the trolls. They enjoy the meal too much and constantly demand more.
If Richard Greene is the troll. You can’t reasonably expect him to starve himself.
G’day Nick, I guess the medical term for that graph is “almost flat-lining”?
A trend of 0.3C/decade is not flatlining. 3C/century.
Only a complete mathematical moron extrapolates way past the data period.
Warming in USCRN/ClimDiv comes only from the 2016 El Nino bump, and now the 2023 El Nino
Before the 2016 El Nino, they were both COOLING
Well Nick, i just wish my blood pressure graph looked that stable 🙂
Have you tried tilting the graph slightly? That way you get a warming trend.
The data is non-linear.
This is how we know you are stupid.
so in 1,000 years, it’ll go up 30C?
Spot on comment.
The Washington Post might report it as 32.5 deg F per decade ??
How scary would that be !
The IPCC members should hold a meeting outdoors in Geneva Switzerland, where their headquarters is located, in January with few clothes on and beach shoes and feel the climate firsthand. I’d bet they couldn’t last an hour.
Watts articles have been consistently accurate until this one:
“On average, urban heat islands increase the global surface temperature trend by almost 50 percent.”
Total BS
There are no data for that claim
The 71% of Earth’s surface that is ocean is not affected by UHI. So increased UHI near land weather stations would have to be HUGE.
Land weather stations moved from urban locations to suburban airports can have a one-time REDUCTION of UHI
Any land weather station can be affected by a gradual change in UHI, even rural weather stations
The NOAA USCRN network of all rural weather stations with no UHI has had a warming rate of +0.34 degrees C. per decade since 2005. Faster than all other averages that include increases of UHI
The faster warming rate of USCRN versus nClimDiv needs an explanation from NOAA, to determine why the all rural USCRN has faster warming than nClimDiv. Without an explanation from NOAA, there is just wild guessing from the peanut gallery.
This lack of accuracy from Watts continues the disturbing trend I have observed in 2024 among a growing number of conservative climate writers: Desperation after the very warm year of 2023.
For the first time in my 27.5 years of climate and energy reading, some conservatives can no longer be trusted to know the difference between real science and junk science. The same problem leftists have had for the past 50 years
Right now Cleveland is 67 F / 18 C and the airport is 73 F / 22 C. I looked a couple of days ago when it was much warmer and it was a similar difference. It’s probably all the asphalt and concrete that hold heat and get much warmer than the air.
https://www.accuweather.com/en/us/cleveland/44113/current-weather/350127
https://forecast.weather.gov/MapClick.php?lat=41.41083000000003&lon=-81.84943999999996
A month or two back it was 94 F air temperature and I measured the concrete temperature at 120 F; I guess it will eventually heat the air above it.
Scientists are more liberal than the general public, most want to discover new things and find out the truth about things.
The Republicans support the so-called “Climate Change” agenda. The House which is controlled by the Republicans passed the Inflation Reduction Act that funds US government “Climate Change” spending.
The alleged temperature in Cleveland versus the Cleveland airport is FAR from sufficient data to support the claim that 50% of surface warming is from UHI
First of all, UHI does not cause global warming
Global warming can be increased by increasing UHI and it is.
But UHI growth is almost impossible to measure. The primary problem for changes in global UHI is the lack of rural weather stations near cities that remained rural and had long-term continuous records with no gaps.
The US may have sufficient data, but the rest of the world does not. We are just 1.5% of the world
In May 2019 I studied the NASA-GISS UHI adjustment and wondered why no other organizations had a UHI adjustment.
The reason was the NASA-GISS adjustment was tiny, about 0.05 degrees C. per century
The adjustment was tiny because they assumed almost half of weather stations had reductions of UHI?
I believe they assumed stations moved from urban settings to airports had less UHI
My 2019 blog article concluded the NASA-GISS global UHI adjustments were junk science. Here is a link:
Honest global warming chart Blog: Urbanization bias adjustments are tiny, and they are science fraud, partially caused by the lack of rural weather stations with long, continuous records, located outside the US, needed to determine the correct adjustments (elonionbloggle.blogspot.com)
By the way, the population of Cleveland dropped from 920,000 in 1950 to only 362,000 in 2022. Maybe they had a decline in UHI?
As the city population declined, the Cleveland suburbs grew rapidly. The entire Cleveland metro area population doubled since the 1950s as the city population declined.
“Global warming can be increased by increasing UHI “
And that is where you are completely WRONG.
Urban sites only represent a small fraction of the planet.
Any result derived from them IS NOT GLOBAL.!
Urban biases are NOT TINY at all, anyone living near a city knows that for a fact.
NOAA just makes them out to be tiny, and some not-very-bright people fall for their scam.
Sounds like every temperature site near Cleveland has got swallowed up by URBAN development. !
You want another example? Look at the temperature coming from RDU and the USCRN site in Durham a few short miles away. As I write this RDU is reporting 77F and the USCRN site is 74F. And this is with a tropical storm whipping up the wind and raining. During the summer, the high temp is consistently a 4-6F difference.
Temperature is a multi-factor relationship. Temperature at any specific location is a function of humidity, pressure, terrain, geography, wind, clouds, etc. It is an intensive property and there is no guarantee that the temperature in one location has any relationship to the temperature at a different location. It’s a big factor in why homogenizing and infilling of data does nothing but spread measurement uncertainty around to all stations. Temperatures in San Diego can be 30F different than in Ramona, which is only 30 miles away. Same for Pikes Peak and Colorado Springs. Climate science likes to say they take care of that by using anomalies, but the problem is that temperature variance can be vastly different between cold temps and hot temps. That means that anomalies should be weighted to allow comparison, but climate science just jams anomalies from the NH with those from the SH into the same data set with no weighting allowance for different variances at all.
Oh BS. When two locations are 15 miles apart and at the same elevation and one site is located next to a runway at a major airport and the other is sited according to accepted standards with calibrated sensors, which one do you use for the historical record? You would think it’s the one sited correctly but no, the RDU airport site is used as the historical record.
Climate science uses the sites that always report the higher temperatures, and if that doesn’t work, then homogenization is used to “correct” them higher.
Click-bait alert…
I’m still wondering why the GOP passed that act. Anyone have any clue? All the $$$ going to their areas?
A couple years ago NOAA put out a corrected or adjusted USCRN data set. Why raw data needed to be corrected was never clear to me. Which USCRN data set is being quoted?
On 2013-01-07 at 1500 UTC, USCRN began reporting corrected surface temperature measurements for some stations. These changes impact previous users of the data because the corrected values differ from uncorrected values.
Yet another reason to not trust NOAA
Their global average made the 1940 to 1975 cooling go away
And having USCRN showing faster warming than nClimDiv makes no sense.
I lost faith in all surface measurements in the 1990s when the 1940 to 1975 global cooling began “disappearing”.
Then the 1936 US heat peak was reduced so 1998 could be warmer than 1936.
Surface averages always had too much infilling and too many “adjustments” made years later
The world would be better off with the average temperatures replaced with TMAX and TMIN global average temperatures … or just local measurements.
No one lives in the global average temperature. And +1 degree of warming does not mean much if the warming is primarily in the six coldest months of the year.
“And having USCRN showing faster warming than nClimDiv makes no sense.”
You still think ClimDiv is something other than an adjusted match to USCRN.
How quaint !
Since a large proportion of the surface data comes from URBAN sites..
it wouldn’t matter what calculation you used.. It still wouldn’t represent “global” anything…
(unless you had a near pristine reference system all over the world, to adjust to)
Even a pristine reference system is going to have a measurement uncertainty of somewhere between 0.3C and 2C for each instrument just because of the technology available today. This would militate against being able to identify global anomalies in the hundredths digit on a global basis. Anyone that claims to know what is happening with temperatures on a global basis is only fooling themselves – I think Feynman had something to offer on that!
This correction affects the infrared readings of the temperature of the ground below the station, not the near-surface air temperature measurements made using an array of several thermometer devices in each station that the average global or regional temperature indexes are compiled from.
The usual climate pseudoscience Fake Data fraud.
Are you specifically objecting to anything in my comment, or are you objecting to me in general?
DENIAL of the urban heating effect, YET AGAIN.
And still not comprehending that USCRN is, by its very definition, a set of REFERENCE stations.
Even RG should have enough intelligence to figure out what “reference stations ” would be used for!
But apparently not!
ClimDiv started a bit higher, and the “adjustment routines” have been gradually honed-in so that since 2017, the trend in ClimDiv essentially matches USCRN.
“some conservatives can no longer be trusted to know the difference between real science and junk science”.
I didn’t know you were pretending to be a conservative !
Everything you do seems to support the leftist AGW cultism and junk science.
““On average, urban heat islands increase the global surface temperature trend by almost 50 percent.””
Which is probably an under-estimate.
(It is not a “global” temperature, anyway. It represents mostly URBAN stations… not the globe)
But then, AW has backed up the statement with facts and measurements.
RG.. not so much.
Yet here you are…
Missing word is TREND.
Uh.. besides that very embarrassing conversion mistake, it might be worthwhile to point out that they have it backwards altogether.. it is not the rural areas which cool the cities, it is the cities which distort the global warming record!
WaPo should immediately contact IPCC that they found UHIs and desperately urge them to consider it carefully in their next report!!
Nothing wind stillers and solar absorbers cannot fix. Just get rid of those horrible trees and get serious about industrialising the landscape.
Spot on comment.
What does 30% cooling mean? If the temperature in a city is 90F, what temperature would be 30% cooler (or warmer, for that matter)? Stating temperature differences as a percentage can only be correct if temperatures are stated on an absolute scale — Kelvins or degrees Rankine. The use of “30% cooling” in the subject article is just as scientifically illiterate as the botched conversion of the C to F temperature difference.
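A minimal worked example in Python (purely illustrative, with made-up numbers) of why a percentage of temperature only means something on an absolute scale:

```python
def f_to_kelvin(f):
    # Fahrenheit reading -> kelvins (absolute scale)
    return (f - 32.0) * 5.0 / 9.0 + 273.15

def kelvin_to_f(k):
    return (k - 273.15) * 9.0 / 5.0 + 32.0

city_f = 90.0
cooler_k = f_to_kelvin(city_f) * 0.70  # "30 percent cooler", taken literally on the Kelvin scale
print(round(kelvin_to_f(cooler_k)))    # about -75 F, obviously not what the headline means
```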
Others don’t, apparently, but I appreciate the sarcasm.
Once you know water freezes at 0C or 32F and boils at 100C or 212F then the temperature conversion methods available for mental arithmetic between the two scales become rapidly obvious.
This isn’t an example of a slip. This is an example of exceedingly poor education standards, the tip of an iceberg that will cause much, much more serious damage unless it is sorted very quickly.
Attention to detail is lacking in every aspect of modern education, and the media should set an example, especially as they try to brainwash the public without checking the facts.
Climate change is a perfect example of this.
An effort is afoot to eliminate the scientific method from education, and it’s OK to get the wrong answer as long as you understand the process.
Hmmm…. How can you demonstrate you understand the process if you do not get the correct answer?
Then one wonders why it is that UAH, a lower-troposphere global temperature satellite data set heavily featured here at WUWT and obviously unaffected by UHI, has been warming at the exact same rate as GISS, a surface-based global temperature data set, over the past 20 years?
This site gets sillier with each passing post.
Because the atmosphere responds more to El Nino events, dolt. !!
And there have been two strong El Nino events over that period. (didn’t you know that ???)
UAH has COOLING between those El Nino events.
Keep using those El Nino events.. they are all you have.
Now.. where is that evidence of human causation for the only atmospheric warming.. the El Nino events.
Still waiting!
You might try to explain why even GISS shows COOLING from 2017 to the start of the 2023 El Nino.
Must be CO2 , right 😉
Lol!
‘When it’s not warming at the same rate as GISS over the past 20-years it’s cooling!’
At least you get a laugh here!
We all get to laugh at you.
Can’t you see the cooling in UAH and GISS from 2017 until the start of the recent El Nino?
Noted, that yet again, you are totally unable to produce any evidence of human causation..
Hilarious attempt to side-step using mindless blether, but all you did was fall flat on your a**e.
Hang on tight to those El Ninos… they are all you have. !
Nothing says authenticity like a seven year period with cherry-picked start and end points…
What you’ve actually demonstrated here is in fact the very thing you don’t seem to understand.
The last 2-1/2 years of your cherry-picked period contained a double-dip La Nina.
So you have inadvertently demonstrated that ENSO does indeed exert a short-term cooling influence on global temperature and not just a short-term warming one.
Over a period as long as 20 years these short-term cooling and warming influences cancel out – yet GISS and UAH both contain the same strong warming trend over that period.
You have accidentally educated yourself.
Are you sure you want to get into changes in earth’s temperature over different time spans?
Is 20 years more reflective of climate than 7 years?
Is 1880 any better? 1879 was warmer.
Why is the baseline for climate change based on a single year?
Should the baseline not have been the global mean/average temperature from 1850 to 1880?
Using a single year as the basis for comparing a running 30 year average just is not correct.
Not sure what you’re talking about. UAH and GISS both use 30-year periods.
I was talking more in general and not to the specific data set presented.
Sorry if that caused confusion.
TheFinalIdiot still thinks linearly about how the climate works. What an uneducated loser!
If you look above, your imbecilic pal, bnasty, also uses linear trends.
Yet not a word from you.
Most odd.
The difference is that you are not here in good faith.
You poor ignorant little twerp.
You are clueless about basically everything, aren’t you.
Pretending to yourself you have a tiny amount of intelligence..
You really are off in la-la-land
Now, where’s that evidence of human causation for the El Nino warming that you keep using to create FAKE AGW. !
Giving a red thumb is NOT evidence… except of your incompetence and ignorance.
Still waiting for evidence of human causation.
You are still FAILING completely.
Nothing says moron like a bumbling idiot constantly avoiding the question of human causality.
You have NEVER been educated.. you are incapable of it.
Thanks for at least ADMITTING that the El Ninos are the cause of the warming..
… which is what you have inadvertently done.
Now point out the other La Nina events that had essentially zero effect on the trend.
Unlike the major El Ninos, that stand out as spikes and a step change as the warm water and air travel around the globe.
And thanks for confirming all the El Nino and La Nina events are TOTALLY NATURAL, with no human causation.
Well done, finally waking to reality ! 🙂
Why do El Ninos keep getting warmer and warmer and warmer over time, bnice?
Whatever it is, it ain’t CO2.
“Anything but the thing I don’t want it to be” isn’t an answer that any rational person should find compelling.
“Nothing but the thing I want it to be” isn’t an answer that any rational person should find compelling either.
You’ll be good enough to point out exactly where I’ve said anything of the kind, thanks.
There is no empirical scientific evidence that atmospheric CO2 causes warming.
Not even you are stupid enough to say that the El Ninos are caused by human CO2
Or are you really that stupid. !!
AGW-collaborators, traitors to western society, will make up any fantasy to ease their little minds.
Still waiting for the empirical scientific evidence of warming by atmospheric CO2.
Giving a red thumb is NOT evidence… except of your incompetence and ignorance.
Has been explained several times before.
Learning is not in your brain’s capacity, so you just make up some BS about it being CO2.
Should be no trouble at all for you to point me to one of those many times, or just copy paste the explanation you’ve offered to other people in the past, then.
Your brain-washed miasma wouldn’t allow you to comprehend.
If you don’t understand that the upwelling warm ocean water spreads around the oceans, no-one can help you understand.
You are destined to remain perennially IGNORANT… by choice.
This explains why the surface temperature during a single El Niño event is warmer than the surface temperature during adjacent La Niña periods, but it does not come close to explaining why successive El Niño events are getting warmer and warmer over time (i.e., why does the typical “peak” of an El Niño event today seem to reach much higher than the peak of El Niño events, say, 30 or more years ago?).
You are claiming that the warming trend is the result of El Niños, I am asking you what is driving this long term warming of El Niño events.
“Air warms in cities, leaving a low-pressure zone near the ground that then helps transport cooler air from surrounding rural areas. The rural areas then go on to absorb the heat.”
Wait a minute! If the warm air rises, then how does it get to the rural areas to be absorbed, at least at the surface? There is more going on here between UHI islands and rural areas than this simple explanation.
Correct, especially since hot air rises.
“Rural land surrounding urban areas could help cool cities by up to 32.9 degrees Fahrenheit…”
duh! So, if you live in a hot city, just go to a nearby rural area and it might be 32.9 F cooler. Yuh, right.
Of course, the actual paper says nothing remotely like that.
It is just a major stuff-up by the person writing the article.
The paper is trying to figure out how much the “neighboring rural land cover” helps to mitigate the urban heat effect.
Not sure it does it very well, though.
UHI is not a hidden issue that is unrecognized by scientists or newspapers, nor is its potential impact on surface temperature trend estimates ignored. The station records have adjustments applied to them to remove non-climatic biases from the network.
I agree that the editor and reporter should have been more diligent in checking their math.
How much do you trust the adjustments made by people paid to push a single point of view?
There’s nothing to trust – everything is completely transparent and published in the peer reviewed literature. You can go replicate all of the adjustments yourself. NASA publishes their computer code for the GISTEMP analysis online, free for download:
https://data.giss.nasa.gov/gistemp/sources_v4/
I’ve also just done my own analysis of the station data and observed that my result was pretty darn similar to the result from NASA and the Hadley Centre:
(my analysis is the bold black line.)
I have learned to not trust peer reviewed literature. I have seen too much abuse. Sorry.
I have seen falsified data from even the most eminent organizations.
I have seen the adjustments readjusted.
NASA and NOAA are government agencies that have to align with Presidential policies or get funding cuts.
So again,
How much do you trust the adjustments made by people paid to push a single point of view?
Perhaps better stated, how much can you trust….
In my work, verification is everything. Those adjustments have no independent verification and validation.
Nothing can disabuse someone of paranoid conspiracy theories, so I can offer you nothing for your current predicament. Your only course is to start from the ground up and do all of the work yourself. As noted above, I’ve performed my own analysis of the unadjusted surface temperature data, so there is zero doubt whatsoever in my mind that the reported temperature history accurately reflects the observational data. All of the raw data is freely available for download, so, since you’ll certainly claim you don’t trust me either, you’ll have to go perform your own analysis, perhaps present your results here and describe your methodology.
You can, of course, in the worst scenario simply throw the adjustments out the window, the result doesn’t change very much:
And the adjustments actually go the wrong way – they reduce the warming, which flies in the face of your paranoid conspiracy (but you’ll be able to rationalize that away I am sure).
They’re published openly in the peer reviewed literature. All the methodology, all the computer code, everything needed for exact verification and validation. Where are the articles from Anthony and others showing in detail how the adjustments are invalid or applied incorrectly? Why has Anthony never produced his own independent surface temperature record, free of the abuses he alleges?
Each measurement requires its own correction, as they were ALL taken under different conditions. Read a metrology textbook!
The measurements are not in error and are not being corrected, they are being adjusted to remove the non-climatic component of the signal they contain.
It has been shown that EVERY time they do a homogenisation run,..
… even really high quality sites are subjected to continual changes that bear zero resemblance to any possible reality.
So you personally collected the measurement data, rather than just borrowing someone else’s database.
I do not operate a global network of weather monitoring stations that have been in operation for more than a century, no. I do in fact use the same data that everybody else has access to.
Changes in instrumentation are one of the things the adjustments account for, either implicitly or explicitly, depending on the nature of the change and on the approach to adjusting the data. You can review the relevant literature to understand the approaches used.
Changes in station coverage over time can be addressed in different ways – most importantly by employing some kind of spatial averaging approach and by ensuring that the methodology for compiling the index is not sensitive to changes in the network composition (i.e. by using the anomaly).
In your ideal world, what would we do differently to verify and validate the data than what is being done, specifically?
The underlying basis of the computer code is described meticulously in the literature. The code itself is just a trivial implementation of that logic. You can absolutely validate and verify that the logic underlying the code is sound, and you can further evaluate the code itself to make sure that it functions as it is supposed to.
You obviously do not know what software independent verification and validation entails.
That you pumped in the data and got the same results only demonstrates the software is repeatable. That is not validation. That is a subset of verification.
The underlying basis of the computer code is described meticulously in the literature. The code itself is just a trivial implementation of that logic. You can absolutely validate and verify that the logic underlying the code is sound, and you can further evaluate the code itself to make sure that it functions as it is supposed to.
Did you do all of that?
Even if you had, that would boost the confidence factor, but it would not be a robust IV&V.
For my analysis I developed my own independent methodology, and developed my own code to implement it; I would describe that as validation. All of the major orgs producing surface temperature indexes using station data (NASA, Berkeley Earth, the Hadley Centre) use independent methodologies and their own code bases, and all arrive at much the same result. The US Climate Reference Network (which has no adjustments applied because the network is bias-free by design, using only perfectly maintained stations in pristine sites free of urban influence) is exactly consistent with the full station network for CONUS.
Again, it isn’t clear what would actually satisfy you regarding the surface temperature indexes short of doing your own analysis.
Climatology double-speak to rationalize Fake Data.
Show us the uncertainty budget component used to “adjust” for the “non-climate component” on a station by station basis.
Would you allow a nuclear power plant to “adjust” radiation measurements downward because of newer radiation detectors, especially if it meant lawsuits would be dismissed? Would the NRC allow that to be done? Most importantly, WHY NOT?
Would the EPA allow a plant to “adjust” downward the pollution data from the past because of new measuring equipment? WHY NOT?
Have people gone back and “adjusted” data Newton collected on gravity because of new measurements? WHY NOT?
Normal science does not “adjust” past data because of new instrument data. The old data is simply declared unfit for purpose and new data is all that is used. Climate science’s NEED for long records to justify trends thereby rationalizing the adjustment of past data is simply not scientific by any stretch of the imagination.
If you were trying to analyze trends in radiation levels through time, you certainly would want to make sure the trend you were measuring wasn’t the result of the change in instrumentation.
If the plant is trying to analyze trends in pollution over time, they certainly would want to make sure that the trend they are measuring wasn’t the result of the equipment change.
“Normal science” certainly removes known systematic biases from time series data before analyzing trends in that data, if the people doing the science are any good at it.
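As a purely illustrative toy sketch in Python (not any agency's actual procedure): if a documented instrument change introduced a known constant offset partway through a record, the offset would be removed before fitting a trend, roughly like this:

```python
import numpy as np

rng = np.random.default_rng(0)
raw = rng.normal(0.0, 0.1, 100)   # a flat (trendless) measured quantity
raw[50:] += 0.5                   # hypothetical documented +0.5 step from an instrument swap

adjusted = raw.copy()
adjusted[50:] -= 0.5              # remove the known offset before trend analysis

x = np.arange(raw.size)
print(np.polyfit(x, raw, 1)[0])       # spurious positive trend caused by the step
print(np.polyfit(x, adjusted, 1)[0])  # trend near zero once the bias is removed
```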
You didn’t answer the question! Would the NRC allow you to adjust the data in past reports? It is a simple yes or no answer!
Again, you didn’t answer the question! Would the EPA allow you to submit revised “adjusted” data to replace past reports? It is a simple yes or no answer!
Your answer seems to imply that you think these agencies would allow you to submit adjusted past data. You might want to guess again.
Show us evidence of this. Not studies that modify official data for the purpose of the study, but actual businesses, agencies, and keepers of data that modify recorded official data for other scientists to use as actual recorded data.
The question is ill-posed and not relevant. Nobody is taking historical temperature data and replacing any “past reports” with adjusted versions of the data. They are taking historical temperature data and using it in analyses. All of the original data is archived and readily available.
You seem to be under the illusion that the original archived records are being altered – you are incorrect. You can obtain the unadjusted station data, exactly as it was recorded, from either the NOAA’s GHCN repository, or from the agencies who are responsible for making the records in the first place.
Really? When many of the official records are recorded to the nearest units digit in Fahrenheit, just how is precision out to the hundredths digit justified?
Are you still claiming you can measure a crankshaft journal down to the hundredths of an inch using a yardstick marked off in inches?
The question is not ill-posed. It is pertinent for any testing lab. It is pertinent to any business who has legal or regulatory requirements for reporting data. It applies to any organization issuing financial prospectuses. How many of these get to modify their past data?
It is not about “finding” the recorded data in some file in a remote directory. It is about publishing “official” data. How many people have the education, ability, or even knowledge to recognize the official data has been massaged by some illustrious bureaucrat? “Believe me, I wouldn’t lie to you”!
Nobody is modifying past data, that is the point, they are performing an analysis and presenting the results of that analysis. The historic data are preserved.
There seems to be some deep-rooted misunderstanding pervading this website about what the temperature indexes like GISTEMP or HadCRUT actually are. The “official data” is archived by the NOAA (or by the various weather agencies around the world who run their nation’s weather monitoring stations). The temperature indexes compiled from this data are analytical products; that is why there are multiple agencies producing them, using slightly different methodologies. The compilation of these analytical products does not alter or damage the official data one bit.
This fact should be blatantly obvious, but seems almost impossible to communicate to WUWT readers. The weather stations record point-level measurements, but the temperature indexes are gridded spatial products (or averages of those gridded products) – that is, the point-level measurements are analytically used to estimate temperature for all the points of the globe where the stations aren’t. And they are compiled using the station records, archived pristine and untouched, by the NOAA in the GHCN repository. What you call “meddling” is, to all the rest of the world, known as “geospatial analysis.”
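A heavily simplified sketch (purely illustrative, with made-up stations; not GISTEMP's or anyone's actual code) of what a gridded spatial product involves: station anomalies are binned into latitude-longitude cells and the cells are combined with area (cosine-of-latitude) weights:

```python
import numpy as np

# Hypothetical station anomalies: (latitude, longitude, anomaly in C)
stations = [(40.7, -74.0, 0.8), (41.4, -81.8, 0.6), (-33.9, 151.2, 0.3)]

cell_sum, cell_count = {}, {}
for lat, lon, anom in stations:
    cell = (5 * int(np.floor(lat / 5)), 5 * int(np.floor(lon / 5)))  # 5x5 degree cell
    cell_sum[cell] = cell_sum.get(cell, 0.0) + anom
    cell_count[cell] = cell_count.get(cell, 0) + 1

# Average within each cell, then weight cells by the cosine of their center latitude.
weighted_sum = weight_total = 0.0
for (lat0, lon0), total in cell_sum.items():
    cell_mean = total / cell_count[(lat0, lon0)]
    w = np.cos(np.radians(lat0 + 2.5))
    weighted_sum += w * cell_mean
    weight_total += w

print(weighted_sum / weight_total)  # area-weighted mean anomaly over the sampled cells
```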
As I said, the original data is hidden away and the “adjusted” data becomes the OFFICIAL data. The issue is not preservation; it is that what is shown as official data is portrayed as what was measured.
Here is a link for you to discuss.
https://x.com/_ClimateCraze/status/1772685723956564378?t=d_rmbTG9-2g5QRiN88l3Mw&s=19
Why is NOAA continuing to fabricate data for a non-working station?
I am no expert at this subject, but from research I don’t believe this type of analysis is normally applied to coupled, non-linear systems with continually changing forcings. It is used on things like oil and mineral fields or other GEOgraphic phenomena like varieties of landscape characteristics that are substantially static. That is, they don’t change moment to moment like temperature does under chaotic conditions.
You are trying to subtly move the goalposts. What you said was, “Would the NRC allow you to adjust the data in past reports?” And the answer is that nobody is changing data in any past reports. Now you’re claiming that what you really meant was just that they made the underlying data hard to find. And they didn’t do that either – the repository is the top on Google if you search for “raw GHCN data.”
Geospatial analysis is used on all manner of dynamic data – water and air quality, human migration, weather (reanalysis), and climate are just a few examples.
The point is that the temperature indexes are analytical products – they are compiled from individual station records, but individual station records are not somehow more “pure” versions of a temperature index. And nobody alters the station records, they preserved exactly as they were recorded.
To summarize, so we avoid getting off track:
- Original station data are never altered, and never destroyed; they are archived publicly.
- The analytical temperature indexes produced using these station data are developed in a completely transparent way, with full methodology and computer code published in the open peer reviewed literature for everyone to see.
- Removing known systematic bias prior to analyzing trends in a time series is the appropriate thing to do, and is not a controversial procedure.
Now you’re just dissembling. If I filed something in the past as the official data, would the NRC let me adjust that “past official” data in a new filing? You can bet there would need to be comprehensive information and calculations shown for EACH AND EVERY change. Making multiple changes based on “bias” elimination just won’t cut it.
Concerning “official” data. Does this graph from NOAA use raw data or “adjusted” data.
And exactly what do you end up seeing when you follow NOAA’s website choices as in the graph above.
If you took a dataset and performed an analysis of that dataset that resulted in a brand new data product, no one would bat an eye at you publishing your brand new data product. You would not need to file any paperwork for altering the source data because none of the source data would be altered. Anyone who wanted the source data could go and get it from the same place that you did.
The graph is using bias-adjusted nClimDiv data, according to the NCEI website:
https://www.ncei.noaa.gov/access/monitoring/dyk/us-climate-divisions#grdd_
Of course they wouldn’t. That is the whole point. That brand new data product should be published, maybe with peer review, with all the necessary information to allow a reader to see what changes were made at every point and why.
You’ve already admitted that the official data published by NOAA has infilled data from stations that are no longer operating in order to maintain the fiction of a long record. Dry labbing at its best.
You just proved my point. The “official” data being promoted by NOAA is using “adjusted” data with no notice when using the web page selections to access this page. Every graph should have a disclaimer on it saying that the data being displayed has been adjusted by NOAA.
It quite literally is.
It took me about four seconds to find the methodology section on the site. The notice is loud and clear to anyone with a modicum of technological literacy who can navigate a web page. You’re again trying to move the goal posts from “they alter source data” to “it’s a little hard to find the source data”.
No measurements are ever exact. They *always* have measurement uncertainty. You were in error thinking CRN stations are calibrated annually. I pointed out to you, and even gave you the link to, the CRN annual maintenance checklist, which shows that annual calibration is *NOT* done.
Did you forget so quickly?
I remember you showed an annual field-maintenance checklist, which unsurprisingly did not direct the technician to perform laboratory calibrations on-site. Not showing that a thing is done is not the same as showing that a thing is not done, a nuance you never seemed able to quite get your arms around. Abundant literature from the NOAA explicitly details the annual calibration of the instruments.
“I remember you showed an annual field-maintenance checklist, which unsurprisingly did not direct the technician to perform laboratory calibrations on-site. Not showing that a thing is done is not the same as showing that a thing is not done,”
Argumentative fallacy known as Shifting the Burden of Proof. I can’t prove a negative. If the temperature sensor calibration is not scheduled each year then it doesn’t exist and proving it doesn’t exist is impossible.
If *YOU* think someone drives a mobile calibration lab around to all the CRN stations every year to calibrate them, then there *will* be documentation of this somewhere. If you are going to claim they are calibrated every year, then it is up to you to prove that claim; it’s not up to me to prove they aren’t (a negative).
“Abundant literature from the NOAA explicitly details the annual calibration of the instruments.”
If you are going to claim it exists, then it’s up to you to prove that it is documented somewhere. Give us a link to something. I’ve looked and I can’t find one.
Put your money where your mouth is!
There’s that FAKE graph again !
Hilarious that you still bother to use it !!
1880 used the exact same measurement devices, with the same tolerances and errors as 2020?
The number of thermometers in 1880 is exactly the same number in the same locations as 2020?
I never said I did not trust you.
The computer code, yes, it can be verified and validated but the data can’t.
Nor can the underlying assumptions in the computer code.
Nothing can disabuse someone of paranoid conspiracy theories
My question had nothing to do with conspiracy theories.
You answered the question. You feel you can trust the data since you got it from a peer reviewed paper.
+100 Adjustments should be classified as “corrections” because that is what they are claimed to be. Correction values are attributable to calibration procedures on an instrument by instrument basis. Not so for climate science. That wouldn’t accomplish their objective.
Why do we never see any station adjustments for UHI or poor siting? Answer: that would not meet the objective of global warming.
Probably because you get most of your information from a handful of websites intentionally trying to deceive you into thinking there is a vast global conspiracy among climate scientists trying to defraud the public who cherry pick selections of station data to try and falsely paint the picture that adjustments only ever increase the warming trend. Here is an easy example of a bias adjustment for the Tokyo station record, accounting for the significant urbanization of the region over time:
Further, as has been pointed out innumerable times on this website (above in my earlier comment), the net effect of the adjustments is to reduce the global warming trend. This completely obliterates the wild conspiracy theory you all cling to, and none of you seem able to come to terms with the fact:
Silly hockey sticker, propaganda is all you got.
Yeah, right.
Here is a link for you to discuss.
https://x.com/_ClimateCraze/status/1772685723956564378?t=d_rmbTG9-2g5QRiN88l3Mw&s=19
Why is NOAA continuing to fabricate data for a non-working station?
There can be only one reason. NOAA wants to keep a “long record” so NOAA can pretend they have the ability to make accurate trends of GAΔT. And guess what, you just keep propagating that message by declaring that you know the data is correct.
The USHCN has been superseded by nClimDiv, so that X thread is quite out of date – the NOAA isn’t doing anything with that non-working station. The USHCN had to infill missing station data because the NOAA wanted to produce a temperature index in absolute temperature units (the very thing you all constantly demand), and doing so requires continuous records – so they had to infill missing values with estimates using nearby stations. nClimDiv doesn’t follow this rather pointless practice, and just presents the anomaly, so there are no more “ghost stations.” It isn’t a very hard concept to grasp, but seems to be yet another one of those concepts that will forever elude WUWT readers.
Did you even examine the X post? Observations ended in 2011. The post was made as of Mar 26.
Geez dude, if NOAA thinks continuous temperature data is needed, you need to say why. I’ve created many graphs where holes appear, it isn’t a big deal.
Even more curious! How are anomalies created without having absolute temperatures to use in calculating them? That sounds very much like creating data.
Yes, nClimDiv replaced USHCN as the official US station network:
https://www.ncei.noaa.gov/access/monitoring/national-temperature-index/background
I think I’ve explained this many times in the past on this website, but I’ll give it one more go. Take two series of random values (no trends), both with different means (e.g. representing two adjacent stations at different elevations) with one series being shorter than the other:
Take their average:
The average has a trend, where neither series did. You have introduced a spurious trend by averaging two series of different lengths. To avoid this error, you either need to use the anomalies (both values normalized to a common zero), or use infilling from other nearby stations to ensure that both records have the same length. Option 1 is a lot easier and more sensible, but for some reason, NOAA initially went with option 2 using USHCN, producing those “ghost” station values. Now with nClimDiv they use option 1.
You can compute the anomaly for records of varying lengths, provided they share enough overlap during the chosen baseline period.
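A minimal numerical sketch in Python (purely illustrative, with made-up station values) of the point being made: averaging two trendless records with different means and different lengths manufactures a trend, while averaging their anomalies over a common baseline does not:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1950, 2021)

valley = 15.0 + rng.normal(0, 0.3, years.size)  # warm site, full record
ridge = 5.0 + rng.normal(0, 0.3, years.size)    # cool site
ridge[:30] = np.nan                             # record only begins in 1980

raw_mean = np.nanmean(np.vstack([valley, ridge]), axis=0)
print(np.polyfit(years, raw_mean, 1)[0])        # large spurious trend when the cool site enters

# Anomalies relative to a common overlap baseline (1981-2010)
base = (years >= 1981) & (years <= 2010)
anoms = np.vstack([valley - valley[base].mean(), ridge - np.nanmean(ridge[base])])
print(np.polyfit(years, np.nanmean(anoms, axis=0), 1)[0])  # trend near zero
```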
and you kept a straight face.
You just proved my point. Climate science NEEDS long records in order to validate their simple averaging. The end justifies the means. Make up data if necessary, just so long as you can get what you expect. That’s real scientific. Do you wonder why long-term studies of drugs remove people who die before the end of the study (if they die from an unrelated cause)?
If you have a record that has ceased, throw it away. Call it unfit for purpose.
If you end up with just a few records of the length you want, tuff cookies!
You would think from your response that no one else has ever run into this problem. Here is a hint. When you cease offering a product, you don’t keep making up data to average into your revenue and expense. You throw the old data away and move on. You may very well have a step change, but you deal with it and don’t pretend it didn’t happen.
You are one of the reasons that climate science has not moved on from Tavg to using a degrees-per-day number calculated by integrating 6-second, 1-minute, or 5-minute data, whatever is available. It would just mess up your ability to compare to past temperatures. You are being left behind by the HVAC scientists. Since the 1980s, when ASOS was being introduced, we have had decent temperature streams that could be integrated. That is 44 years. When are you going to throw the old stuff away and start over?
and I love you too, Jim 🙂
I think I must have hit reply on the wrong message.
I apologize profusely for my mistake.
You have always been right on point about measurements,
The devil made me do it 🙂
The data for missing values in USHCN wasn’t being made up, it was being infilled from nearby stations. And, again, the step is completely unnecessary if you simply use the anomaly, which is what everyone, including NOAA since adopting nClimDiv, does. No infilling at all. USHCN is deprecated.
You’re trying to argue with me about whether infilling is a valid practice when nobody is doing infilling. It doesn’t matter whether you think it’s valid, that’s a moot point. Infilling isn’t being done.
Which is it, infilled or not?
If data is being added to stations that are no longer operating, infilling is going on.
Are you saying there are no ghost stations being used in nClimDiv?
Lastly, if you have no temperature data at a station to actually use in calculating an anomaly, then you are still infilling even if it is an anomaly.
Let me remind you that adding data to non-operating stations maintains the count of “n” stations used in the calculation of a standard deviation of the mean.
USHCN is deprecated, replaced by nClimDiv, which does not use infilling because it is based on the anomaly.
This kind of comment degrades the quality of discourse for everyone, even for your eager compatriots breathlessly upvoting you. If you have an objection share it.
Oh, is the Australian sense of humour that unique?
I was congratulating you on your droll wind-up, like Monckton’s “pause” articles.
On reflection, that should be “distinctive” rather than “unique”.
In other words, Fake Data. If you had ever taken a real science course, you would know this is called dry-labbing and is a fast-track to a failing grade.
Why don’t you show the trend where UHI removal has decreased the trend from, say, 1980 to present? Funny how the only change is prior to 1940.
It is as I said.
The major cause of the decreased trend is a change in the way observations of ocean surface temperature measurements were made in the mid-century, so the adjustment seems to “pivot” around that point. All the other adjustments are basically a wash. Whether you adjust the past up or the present down is an arbitrary convention – usually the present is treated as the frame of reference, so adjustments are made relative to present day. Here’s a version of the same graph with the preindustrial period as the reference:
I just showed you the adjusted Tokyo station record, where the adjustments remove urban warming from the trend, but you flagrantly ignored that, so.
Why is there an arbitrary convention? If bias is the problem, corrections or “adjustments” should not be arbitrary at all. They should only be done at stations where a bias actually exists.
I didn’t ask about Tokyo. I asked about the GAΔT using adjusted data that removes UHI. All you have to do is show raw vs adjusted for UHI.
You seem to be implying that UHI is no longer a problem because that bias is being removed. If so, showing it in a graph as you’ve done here shouldn’t be hard.
Applying an arbitrary adjustment to the past certainly won’t also remove a UHI bias in the present. Is there an adjustment being made to the present to remove UHI bias?
Let me also ask you to show error bars so we can see how uncertain the values are.
There has to be a convention, and which you choose is arbitrary, because it has exactly no impact on the result whatsoever.
The implication you are trying to make is that UHI isn’t being addressed – the Tokyo record proves unequivocally that this is incorrect, it is a very urban region for which the adjustment markedly reduces the warming trend. The adjustments for the globe do a lot more than simply address urbanization bias, so you can’t simply compare the raw vs adjusted global temperature record and assume all you are seeing is the impact of UHI adjustments.
There have been multiple peer-reviewed papers on this subject, see, e.g.:
https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2012JD018509
You can also easily compare the rural, bias free by design, US Climate Reference Network with the full, bias adjusted nClimDiv network to see that the adjustments quite successfully remove all systematic bias from the network, including UHI.
The bias exists in the trend, the adjustment is shifting the trend lower. It does not matter if you conceptualize this shift as bringing the past up or the present down, the resultant trend is identical. I see you’re struggling with this so let me know if this isn’t clear and we can talk through it more.
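A quick numeric check of the point about the reference convention, using a made-up anomaly series: subtracting one constant baseline or another shifts the curve up or down but leaves the fitted trend identical.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1880, 2021)
# A made-up anomaly series: modest warming plus noise.
series = 0.008 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

ref_present = series - series[-30:].mean()        # zeroed on the last 30 years
ref_preindustrial = series - series[:30].mean()   # zeroed on the first 30 years

slope_a = np.polyfit(years, ref_present, 1)[0]
slope_b = np.polyfit(years, ref_preindustrial, 1)[0]
print(f"trend, present-day reference:   {slope_a * 100:.4f} C per century")
print(f"trend, preindustrial reference: {slope_b * 100:.4f} C per century")
# Identical slopes: subtracting a constant cannot change a trend.
```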
And you are fooling yourself if you think you know the magnitudes!
You are in the realm of data validation.
All based on MALADJUSTED JUNK URBAN DATA!! Meaningless.
We have seen some of your attempts at maths and statistics.. they are puerile at best.
Show us some stations that have been “adjusted” for UHI and poor site decisions. I also want to see the computer code that does the adjustments for UHI and poor site choices. You say it exists, put a link or some of the code here.
I linked the computer code for NASA’s GISTEMP analysis earlier in the thread:
https://data.giss.nasa.gov/gistemp/sources_v4/
The Berkeley Earth source code is available here:
https://berkeleyearth.org/archive/analysis-code/
The code though is, again, a trivial implementation of the mathematical methods described in the literature. Looking over the source code will just let you verify whether the methods were implemented correctly or not, you need to read and understand the literature to understand whether the methods are appropriate in the first place.
Done by climate software jockeys who don’t understand real metrology, just like you don’t.
The question Jim asked and you ran away from:
What are the uncertainties of these fraudulent “adjustments”?
The point is, if the code works the way you say, why does UHI still appear in the temperature databases? Why is UHI bias not recognized and “adjusted” out of the temperature record?
Here is an X link. Look at this fellow’s analysis. Why is NOAA not requiring an analysis of growth to determine UHI bias?
https://x.com/orwell2022/status/1808063523265609979?t=kgW0nfbIA05V29PGlzMz6Q&s=19
The GHCN database, which I assume is what you mean by “temperature databases” is intended to be an archive of the station data for the GHCN network, made available for scientists and the public to use for analytical purposes. NOAA offers two versions of the dataset – the raw, unadjusted station records, exactly as they came from the station (in daily and monthly versions), and the bias adjusted version, which has adjustments applied for things like station moves and instrument changes.
The reason that the NOAA does not replace the raw version of the dataset with the adjusted version is precisely because they want to preserve an archive of the historic data.
Because the NOAA isn’t presenting an analytical product, just two versions of a dataset, one raw and unadjusted, and one with an automated algorithm applied to remove in a broad swath some of the major sources of non-climatic bias (basically, the things that 99.99% of the data users would otherwise have to do themselves). They are acting as data stewards in this capacity.
Data stewards don’t promote adjusted data. In fact, real data stewards only deal with recorded information as it was written. I don’t know what you think lab books are for, but it sounds like you never had to keep a daily lab book with notes about everything you did and the results you obtained. Your summaries had better be supported by your lab book results or you would not only fail the class but probably be dismissed.
Karlo mentioned dry-labbing, you probably don’t even know what that is. In my experience, it is making up a lab notebook with fake results that provide the answer you are looking for. It is unethical and serious consequences will occur.
From NIST:
Speaking of Error in Forensic Science | NIST
From Yale Univ.
Academic Integrity < Yale University
These are why each and every “adjustment” of data must be noted along with the reasons why. Simply adding or changing recorded results in order to obtain a required trend just isn’t accepted. One is required to note in the lab book each and every serial number of devices used in obtaining data.
Climate science has short-circuited this by claiming that unseen biases can be identified by computer and changes made to eliminate them, and that infilling can be done correctly by using measurements from other stations.
Right! /sarc
They don’t promote it, they offer it alongside the raw station record archives. Because scientists asked them to.
They haven’t merely claimed this, they’ve proven it via extensive research. See, e.g., Menne et al. 2009. If you object to the results of the published research, let’s see your counter-evidence.
If they have proven that homogenization and infilling work based on research, then the research is crap! I am about one mile from the weather station at Forbes AFB in Kansas. Their temperature readings, humidity, etc. are *never* the same as mine. That means their mid-range daily values will never be the same and averages using them will never be the same.
Infilling their missing data with mine using homogenization and infilling would be useless if highly accurate results are wanted.
That’s a difference of ONE MILE. I’ve sent you pictures of NE Kansas temps before. They vary widely. There is no way that homogenization and infilling will give accurate results. Yet that is what climate science does, and then climate science follows that up by assuming that measurement uncertainty won’t be affected and can be assumed to be zero.
A joke. A freaking joke.
The weather doesn’t need to be similar; what is similar (or should be, absent non-climatic bias) is the trend.
Homogenization and infilling are not the same thing. Homogenization is the process of adjusting the trend at a station by using breakpoints identified in the station documentation or by pairwise comparisons with the station’s neighbors.
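A toy illustration of the pairwise breakpoint idea described above (this is not NOAA’s pairwise homogenization algorithm, just a minimal sketch on synthetic data): the difference series between a target station and a neighbor cancels the shared climate signal, exposing the step, which is then removed from the earlier segment.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60  # years of record

# Shared regional climate signal plus station-specific noise (all synthetic).
climate = 0.01 * np.arange(n) + rng.normal(0.0, 0.15, n)
neighbor = climate + rng.normal(0.0, 0.10, n)
target = climate + rng.normal(0.0, 0.10, n)
target[:30] -= 0.6   # artificial inhomogeneity, e.g. a station move in year 30

# The difference series cancels the shared climate signal, leaving the step.
diff = target - neighbor

# Crude breakpoint search: the split that maximizes the jump in the mean.
candidates = list(range(5, n - 5))
jumps = [abs(diff[k:].mean() - diff[:k].mean()) for k in candidates]
bp = candidates[int(np.argmax(jumps))]
step = diff[bp:].mean() - diff[:bp].mean()

homogenized = target.copy()
homogenized[:bp] += step   # shift the earlier segment onto the later one

print(f"detected breakpoint at year {bp}, estimated step {step:+.2f} C")
print("raw trend:          %+.4f C/yr" % np.polyfit(np.arange(n), target, 1)[0])
print("homogenized trend:  %+.4f C/yr" % np.polyfit(np.arange(n), homogenized, 1)[0])
```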
Still Fake Data fraud.
The underpinning of trendology, you clowns don’t study climate.
What does your software tell you the temperature is in the pink circle?
Hope you get an answer but I’m not going to hold my breath.
Again, homogenization and infilling aren’t the same thing. Homogenization algorithms do not yield values for missing station records.
Blah, blah, blah!!!!!
Why don’t you say,
“We don’t know what temperature should be so we just make up what we think it should be.”
Because I’m not interested in affirming your misconceptions, I am interested in relaying the truth. Homogenization is a procedure to adjust the trend for an entire station record around identified breakpoints, it is not a procedure that infills missing values in the record.
Homogenization is based on the inane assumption by climate science that all measurement uncertainty cancels.
All measuring instruments suffer from calibration drift. The *exact* amount of drift per unit time can never be known. Therefore applying “corrections” to past temperatures based on drift measurements at a point in time does nothing but substitute one amount of measurement uncertainty for a different amount. It cannot get rid of measurement uncertainty. If the adjustments are based on readings from surrounding stations then all that does is spread the measurement uncertainty from those stations to the one being adjusted.
I am still waiting on you to provide me documentation showing that CRN stations have their temperature sensors calibrated every year. If you can’t find the documentation showing that the calibration is done annually, then just say so.
Climate science stands alone in physical science in assuming all measurement uncertainty is random, Gaussian, and cancels.
You have it exactly backwards, homogenization is based on the recognition that non-random, systematic bias exists in the station network, and that by identifying breakpoints in station records and adjusting them you can eliminate much of it.
Calibration drift is a minor component of the systematic error in the station network – it is as likely to be in one direction for a given station as for another. More important are inhomogeneities arising from changes to the network composition, urbanization, instrumentation changes, and changes in time of observation, and these are primarily what homogenization addresses (although the algorithm removes any breakpoint that can be positively identified in the station record).
It’s tiresome continually repeating the same things to you lot, but here we go again. Each USCRN station is equipped with 3 PRT sensors to record air temperature:
https://www.ncei.noaa.gov/access/crn/instruments.html#:~:text=Every%20USCRN%20observing%20station%20is,and%20a%20satellite%20communications%20transmitter.
This configuration provides triple redundancy. Each sensor logs readings in 5-minute intervals, and all three instruments are used to compute the station’s official temperature estimate. An indication of an issue or miscalibration in any of the sensors is logged so maintenance can be performed as needed, in addition to annual site visits. One of the PRT sensors is replaced annually with a new, lab-calibrated device so that all three sensors are calibrated and replaced every three years:
https://confluence.ecmwf.int/pages/viewpage.action?pageId=358884472
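How triple redundancy can be used is sketched below. The actual USCRN aggregation and quality-control procedure is NOAA’s and is documented by NCEI, so treat this only as an illustration of the general idea: average the sensors when they agree, fall back on the median and flag the site when they don’t.

```python
def station_temperature(prt_readings, tolerance_c=0.3):
    """Combine three PRT readings into one estimate (a sketch only, not
    NOAA's documented procedure): average the sensors when they agree
    within a tolerance, otherwise report the median and flag the site."""
    a, b, c = sorted(prt_readings)
    if (c - a) <= tolerance_c:
        return (a + b + c) / 3.0, "ok"
    # The median is robust against a single drifting or failed sensor.
    return b, "flagged: sensor disagreement, maintenance needed"

print(station_temperature([21.02, 21.05, 21.03]))   # all agree   -> average
print(station_temperature([21.02, 21.05, 23.40]))   # one outlier -> median, flagged
```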
“You have it exactly backwards, homogenization is based on the recognition that non-random, systematic bias exists in the station network, and that by identifying breakpoints in station records and adjusting them you can eliminate much of it.”
Assuming a station is calibrated when installed, it then begins a process of aging. Part of the aging process is calibration drift. Calibration drift is almost always a cumulative effect. The drift will be larger 2 years from installation than it was at the 1-year interval.
If you replace the station at year 5 and develop an adjustment factor at that point, then when you apply that adjustment factor to data from years 1, 2, 3, and 4 you will make the temperatures during those years MORE INACCURATE than they were to begin with!
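A small numeric sketch of that scenario (all numbers hypothetical): with drift accumulating linearly, a correction derived only at year 5 and applied uniformly to the whole record over-corrects the earliest years, even though it improves the years closest to the recalibration.

```python
true_temp = 20.0        # hypothetical constant true temperature, deg C
drift_per_year = 0.1    # hypothetical cumulative calibration drift

readings = {yr: true_temp + drift_per_year * yr for yr in range(1, 6)}
correction = true_temp - readings[5]   # correction derived at year 5 only (-0.5)

for yr in range(1, 5):
    raw_error = abs(readings[yr] - true_temp)
    adj_error = abs(readings[yr] + correction - true_temp)
    verdict = "worse" if adj_error > raw_error else "better or equal"
    print(f"year {yr}: raw error {raw_error:.1f} C, "
          f"adjusted error {adj_error:.1f} C -> {verdict}")
```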
“Calibration drift is a minor component of the systematic error in the station network – it is as likely to be in one direction for a given station as for another.”
We’ve been down this road before and you wouldn’t address why this is so wrong. I doubt you will do it now either. When components such as passive electronic components or even integrated circuits are heated during operation, the material expands! I know of no material that contracts when it is carrying current and being heated. Once that material is heated and expanded it will *never* return to its original condition. This is why all manufacturers of electronic components will provide thermal impacts over time for their components. I provided you a link earlier to a couple of semiconductor manufacturers to prove this.
This means that almost *all* electronic devices will drift in the SAME direction. Meaning you simply can’t assume a Gaussian thermal drift wherein the effects will cancel among different instruments.
Calibration drift is a MAJOR component of systematic error in *all* electronic instruments. It’s why instruments in critical operations *MUST* have a NIST calibration sticker on them showing the last calibration. If the components didn’t suffer calibration drift then there would never be a reason for calibration labs!
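Under the assumption stated above, that drift is same-signed across instruments, a minimal simulation shows why averaging does not make it cancel: zero-mean noise shrinks when many instruments are averaged, but a shared-sign drift survives in the network mean.

```python
import numpy as np

rng = np.random.default_rng(7)
n_instruments, n_years = 100, 10

# Assumption from the comment above: every instrument drifts warm, just by
# a different (always positive) amount per year.
drift_rates = rng.uniform(0.01, 0.05, n_instruments)        # deg C per year
drift = np.outer(drift_rates, np.arange(n_years))

# Zero-mean random noise does shrink when averaged across instruments...
noise = rng.normal(0.0, 0.2, (n_instruments, n_years))

network_mean_error = (drift + noise).mean(axis=0)
print("network-mean error by year:", np.round(network_mean_error, 3))
# ...but the shared-sign drift does not cancel: the mean error keeps growing.
```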
“More important are inhomogeneities”
You can’t “fix” inhomogeneities by screwing up past, more accurate data!
“Each USCRN station is equipped with 3 PRT sensors to record air temperature:”
Redundancy is *NOT* the issue. All three of the sensors will suffer from calibration drift. If they are from the same manufacturing run and are in close proximity to each other, then they will likely suffer from the same calibration drift over time. Redundancy only provides protection against a significant failure in one of the sensors. If two of the sensors fail, then how do you tell which of the three is correct?
The document you reference says: “During the annual maintenance of each site, one of the 3 PRTs is replaced with a new calibrated PRT, so that each sensor at a site is calibrated every 3 years.”
Yet this is *NOT* reflected in the annual maintenance checklist. It’s easy to say something in a “User Guide”. It is quite something else to have it actually performed and documented. You *still* haven’t shown any documentation that shows a recorded maintenance record of the sensors being changed!
I’m still waiting.
All you have done here is shown that you have *NO* experience in the actual physical attributes of electronic instrumentation. Apparently most in climate science have none either.
You have simply put forth the meme of “all measurement uncertainty is random, Gaussian, and cancels”. Anyone with actual field experience is laughing themselves silly right now.
It’s also more than just instrumentation. Trees and new buildings can shade a station and, from further away, change the amount of wind it receives. Shelter quality can change over time. Land use change, both close to the station and far away, can modify the temperatures it records.
Great Britain has discovered that a large percentage of its stations are WMO Class 4 and 5. Large, and larger, uncertainties. And they get milli-kelvin anomalies?
Adjustments are made for each identified breakpoint, it isn’t assumed that any station can have at most a single breakpoint.
You need to demonstrate, specifically, that this is the case for the instruments used in observing stations. You can’t merely declare it to be true.
It actually is, to be initialed by the technician on site:
But I am sure you will find a way to continue living in denial, your entire worldview depends on it. Perhaps you ought to email NCEI directly to inquire, their email is ncei.monitoring.info@noaa.gov, although I doubt even hearing directly from them would be enough to convince you.
Adjusting trends is changing data. In fact it is worse, because one must change a number of underlying data points in order to change a trend! Whether or not you go to the effort of actually calculating individual points to arrive at a new trend, you are implicitly doing so.
Your failure to show how individual data points can be “homogenized” is indicative of climate science’s failure to do science!
Yes, you are removing non-climatic bias from the series for the analysis. You are not altering the original record, which is still archived in the GHCN repository (and with the weather agency that collected the record to begin with).
Homogenization acts across the time series, it is not a process of adjusting individual data points.
You just keep digging deeper. You implicitly modify the underlying data when you modify a trend based on that data.
You are creating data out of whole cloth with no idea why other than to meet a programmer’s objective.
Yes, you remove the non-climatic part of the signal contained in the series. The original series is not altered, though, anyone who wants to see the original station record can get it from the repository.
You are not creating anything, you are isolating the climate signal in the data. The programmer’s objective is to remove the non-climatic signal. That this objective is met has been unequivocally proven in the research literature – see Menne et al. 2009.
Bull crap. Now you are into disinformation. Do you know what implicitly means? It means changing the underlying conditions.
If you change the trend you ARE implicitly using different data to make up the new trend. You can call it anything you like, but ultimately you have not used recorded data nor a documented correction based on analysis. You have made up something that you think is correct! That is dry-labbing.
You have used a well-documented algorithm that has been empirically proven to achieve exactly what it is intended to achieve, which is to remove non-climatic biases from the station network via pairwise comparison of difference series.
Again, you need to recognize that compiling a surface temperature index is an analytical process – it is not merely a data aggregation process. You are taking point-level measurements, drawn from a network whose composition is changing through time, and using them to estimate a gridded temperature anomaly field. You are not intending the individual station records to be representative of the local temperature; you are intending them to be representative of the change in climate over time for a broad region surrounding the station, and so you must remove non-climatic biases.
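A generic sketch of that last step, turning point station anomalies into an area-weighted mean (this is not any agency’s production code; the cell size, station coordinates, and anomaly values are made up): stations are binned into latitude-longitude cells, cell means are computed so that dense clusters of stations don’t dominate, and cells are weighted by the cosine of latitude.

```python
import numpy as np

def global_mean_anomaly(lats, lons, anoms, cell_deg=5.0):
    """Bin station anomalies into lat/lon cells, average within each cell,
    and weight the cell means by the cosine of latitude (a generic sketch;
    production analyses add distance weighting, empty-cell handling, and
    uncertainty estimates)."""
    lats, lons, anoms = map(np.asarray, (lats, lons, anoms))
    lat_idx = np.floor((lats + 90.0) / cell_deg).astype(int)
    lon_idx = np.floor((lons + 180.0) / cell_deg).astype(int)

    cell_sum, cell_count = {}, {}
    for i, j, a in zip(lat_idx, lon_idx, anoms):
        cell_sum[(i, j)] = cell_sum.get((i, j), 0.0) + a
        cell_count[(i, j)] = cell_count.get((i, j), 0) + 1

    total, weight = 0.0, 0.0
    for (i, j), s in cell_sum.items():
        cell_mean = s / cell_count[(i, j)]
        cell_lat = -90.0 + (i + 0.5) * cell_deg     # cell-center latitude
        w = np.cos(np.radians(cell_lat))            # area weight
        total += w * cell_mean
        weight += w
    return total / weight

# Made-up stations: two crowded into one cell, one alone in another.
print(global_mean_anomaly([51.2, 51.4, -33.9], [0.1, 0.3, 151.2], [0.8, 0.6, 0.3]))
```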
I just saw a story that studies showing Ecstasy was useful to treat PTSD were faulty. Who would have expected that?
Peer review is nothing more than pal review. If a scientist can’t validate the data and conclusions without someone else telling them about errors, they shouldn’t be publishing.
Peer review is nothing more than pal review.
With the reproducibility crisis in medical research, incidents like the Sokal affair (and others since then) and the recent Lancet fraud, and gatekeeper problems like those Lee Smolin highlights in The Trouble With Physics, peer review is losing (has lost) much of its credibility.
If they could do math, would they be journalists?
If climate scientists could do math, would they be climate scientists?
Judging by Mickie Mann’s resume at least, climate scientists are failed mathematicians and physicists,
Mann was a meteorologist. Unknown if he failed at that.
My understanding (and I could be wrong, but I’m not interested in him enough to go back and look) is that he started out in physics but switched to climate science (meteorology) because he couldn’t hack the physics. Not confirmed, and maybe I’m starting rumors, but it wouldn’t be the first time that’s happened in climate science.
Air warms in cities, leaving a low-pressure zone near the ground that then helps transport cooler air from surrounding rural areas. The rural areas then go on to absorb the heat.
Warm air rises and creates a localized low pressure – true.
Cooler, ground level air from further out flows in – true with exceptions. Freshly plowed land generally gets hot and creates a thermal updraft, much appreciated by sail plane aficionados.
The rural areas then go on to absorb the heat? – false.
The hot air keeps rising in a thermal updraft and cooler air moves in, or at least until an equilibrium is achieved.
What seems to be missing in this discussion is that surrounding rural temperature can also be affected by heat from the cities. In perfectly calm conditions cooler air would be mostly drawn from the countryside into the city to replace rising warm air but conditions are rarely calm. Existing wind and even traffic disturbances complicate factors greatly.
I really question the value of thermometer readings to record general temperatures.
You left out geography, land use (e.g., corn versus wheat), etc., but otherwise I agree.
Another story about the current “education failure” of schools.
New math.
1 + 1 = 2 …. How do you FEEL about that?
New(er) math.
1 + 1 = 3 …. How do you FEEL about that?
Humor is a difficult concept.
No offense intended. My comment was intended to be a humorous play off of your comment, with a veiled reference to 1984. I hope it was taken that way.
Absolutely taken as humor.
My comment was from Star Trek, Saavik.
My original post was brought home from middle school by my wife a number of years past. She told me it was a joke, but that it also could be true.
Now I see where it is racist to insist a student get the right answer, that they only need to understand how it works… Oh boy…..
Heh, no wonder I didn’t get it. I pretty much stopped watching Star Trek after the TV series, when Shatner was Kirk (OK, I’m getting up there).
even newer maths..
1 + 1 =……… screams of white privilege
Spot on comment.
Assuming sarcasm was intended.
Further changing rural temperatures is the lowering of the water table. I know in the upper Sacramento Valley near Slough House, the water table was around 12 feet in the 1930s, and is over 175 feet today. Creeks used to flow before mass pumping, where they only flow in the winter today. Likewise forest cover change from pines to firs is responsible for another two to three degrees, again due to evapotranspiration changes betwixt the trees.
Dr. Steel has long running temperature data from his old university field camp near Truckee, California, which shows an overall decline in temperature.
But since it is not CO2, it is not part of anthropogenic global warming. Right?
Human activities do have their effects.
Back in the 1970s, Florida was concerned about the salination of ground water. Seems pumping water for drinking, etc., drew in ocean water. I left before it was a problem and never followed up on it.
I’m ROFL.
But ‘absolute’ temperature is the wrong term; that refers to Absolute temperature – Simple English Wikipedia, the free encyclopedia.
She calculated as if 0.5°C were the ‘actual’ temperature in the commonly used scale, where the freezing point is zero degrees Centigrade/Celsius or 32 degrees Fahrenheit – i.e., a half degree Celsius above freezing. (A quick check of the arithmetic is sketched at the end of this thread.)
Failed reading comprehension.
Her Hollywood Space Cadet boss needs to shape up the paper.
(That’s Jeff Bezos, still Chair of Amazon; MacKenzie Scott received a portion of his shares in the divorce, while he kept the left-leaning WaPo and the Blue Origin space rocket business.
He hangs out in California a lot, with glamour on his arm.
I sneer at him because Amazon disappoints, but Bezos did fly on the first crewed launch.)
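The arithmetic behind the point made above, as a two-line check: a temperature difference converts with the 9/5 factor alone, while an absolute temperature reading also gets the 32-degree offset, which is how a 0.5 °C cooling effect became “32.9 °F.”

```python
def delta_c_to_f(delta_c):
    """Convert a temperature DIFFERENCE from Celsius to Fahrenheit degrees."""
    return delta_c * 9.0 / 5.0

def absolute_c_to_f(temp_c):
    """Convert an ABSOLUTE temperature reading from Celsius to Fahrenheit."""
    return temp_c * 9.0 / 5.0 + 32.0

print(delta_c_to_f(0.5))      # 0.9  -> the cooling effect expressed in Fahrenheit degrees
print(absolute_c_to_f(0.5))   # 32.9 -> the reporter's mistaken conversion
```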