From Dr. Roy Spencer’s Global Warming Blog
by Roy W. Spencer, Ph.D.
Our paper (co-authored by John Christy and Danny Braswell) on computing the urban heat island (UHI) effect as a function of population density (PD) is now in the final stages of review after a 3rd round of edits, and I’m hopeful it will be accepted for publication soon. So far, I’ve only used Tavg data (the average of daily maximum and minimum temperatures) in developing and testing the method, and the paper uses only contiguous U.S. summertime data (June, July, August), which is what I will address here.
The method allows us to compute UHI trends using global gridded PD datasets that extend back to the 1800s. These UHI trends can then be compared to GHCN station temperature trends. If I do this for all U.S. GHCN stations having at least 120 years of complete monthly (June, July, or August) data out of 129 potential years during 1895-2023, the following plot shows some interesting results. (I begin with the “raw” data so we can then examine how homogenization changes the results.)
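[For readers who want to experiment, here is a minimal Python sketch of the kind of bookkeeping involved: selecting long-record stations and regressing their raw trends against UHI trends. The file names and column names ("ghcn_jja_tavg.csv", "uhi_trends.csv", "station", "year", "tavg", "uhi_trend") are placeholders of my own, not the actual pipeline used in the paper.]
```python
# Hypothetical sketch: select long-record GHCN stations and regress their
# raw summer trends against UHI trends derived from population density.
# File layout and column names are placeholders, not the paper's format.
import numpy as np
import pandas as pd

def ols_trend(years, values):
    """Least-squares trend in deg C per decade."""
    slope = np.polyfit(years, values, 1)[0]
    return slope * 10.0

jja = pd.read_csv("ghcn_jja_tavg.csv")            # one row per station-summer
counts = jja.groupby("station")["tavg"].count()
keepers = counts[counts >= 120].index             # >=120 of 129 years, 1895-2023

trends = (jja[jja["station"].isin(keepers)]
          .groupby("station")
          .apply(lambda g: ols_trend(g["year"].to_numpy(), g["tavg"].to_numpy())))

uhi = pd.read_csv("uhi_trends.csv", index_col="station")["uhi_trend"]
both = pd.concat([trends.rename("raw_trend"), uhi], axis=1, join="inner")

# Regress raw station trend on UHI trend; the intercept estimates the average
# trend a station with zero population growth would show.
slope, intercept = np.polyfit(both["uhi_trend"], both["raw_trend"], 1)
print(f"slope = {slope:.2f}, intercept = {intercept:+.3f} C/decade")
```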
- The greater a station’s population growth, the greater the observed warming trend. This is pretty convincing evidence that the raw GHCN data has substantial UHI effects impacting the computed trends (probably no surprise here). Note the UHI temperature trend averages 66% of the raw temperature trends.
- The regression line fitted to the data has an intercept of zero, showing that those stations with no population growth have, on average, no warming trend. While this might lead some to conclude there has been no net warming in the U.S. during 1895-2023, it must be remembered these are raw data, with no adjustments for time-of-observation (TOBS) changes or instrumentation type changes which might have biased most or all of the stations toward lower temperature trends.
Since most of the rural stations (many of which have experienced little population growth) are in the western U.S., and there can be differences in actual temperature trends between the eastern and western U.S., let’s look at how things change if we examine just the eastern U.S. (Ohio to the Florida peninsula, eastward):
This shows the Eastern U.S. has features similar to the U.S. as a whole, with a regression line intercept of zero (again) indicating those stations with no population growth have (on average) no warming trend in the raw GHCN data. But now, amazingly, the average UHI trend is over 95% of the raw station trends (!) This would seemingly suggest essentially all of the reported warming during 1895-2023 over the eastern U.S. has been due to the urbanization effect… if there are no systematic biases in the raw Tavg data that would cause those trends to be biased low. Also, as will be discussed below, this is for the period 1895-2023… the results for more recent decades are somewhat different.
Homogenization of the GHCN Data Produces Some Strange Effects
Next, let’s look at how the adjusted (homogenized) GHCN temperature trends compare to the UHI warming trends. Recall that the Pairwise Homogenization Algorithm (PHA) used by NOAA to create the “adjusted” GHCN dataset (which is the basis for official temperature statistics coming from the government) identifies and adjusts for step-changes in time at individual stations by comparing their temperature time series to the time series from surrounding stations. If we plot the adjusted data trend along with the raw data trends, the following figure shows some curious changes.
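[NOAA’s actual PHA is far more elaborate (pairwise comparisons against many neighbors, significance testing, use of station metadata), but the core idea just described, finding a step change in a station’s difference series against its neighbors and removing it, can be caricatured in a few lines. This is a toy sketch of my own, not NOAA’s code.]
```python
# Toy illustration of pairwise step-change detection -- NOT NOAA's PHA,
# just its core idea: difference a station against the mean of its
# neighbors, locate the most likely mean shift in that difference series,
# and remove the step if it stands out from the noise.
import numpy as np

def detect_step(diff, min_seg=10):
    """Return (index, step, score) of the most likely mean shift in `diff`."""
    best = (None, 0.0, 0.0)
    for i in range(min_seg, len(diff) - min_seg):
        left, right = diff[:i], diff[i:]
        step = right.mean() - left.mean()
        noise = np.sqrt(left.var(ddof=1) / len(left) +
                        right.var(ddof=1) / len(right))
        score = abs(step) / noise
        if score > best[2]:
            best = (i, step, score)
    return best

def homogenize(station, neighbors, threshold=4.0):
    """Align the pre-break segment of `station` if a clear step is found.

    station:   1-D array of annual values
    neighbors: 2-D array (n_neighbors, n_years) of nearby stations
    """
    diff = station - neighbors.mean(axis=0)
    i, step, score = detect_step(diff)
    adjusted = station.copy()
    if i is not None and score > threshold:
        adjusted[:i] += step          # raise/lower the earlier segment
    return adjusted
```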
Here’s what homogenization has done to the raw temperature data:
- Stations with no population growth (that had on average no warming trend) now have a warming trend. I can’t explain this. It might be the “urban blending” artifact of the PHA algorithm discussed by Katata et al. (2023, and references therein) whereby homogenization doesn’t adjust urban stations to “look like” rural stations, but instead tends to smooth out the differences between neighboring stations, causing a “bleeding” of urban effects into the rural stations.
- Stations with large population growth have had their warming trends reduced. This is the intended effect of homogenization.
- There still exists a UHI effect in the homogenized trends, but it has been reduced by about 50% compared to the raw trends. This suggests the PHA algorithm is only partially removing spurious warming signals from increasing urbanization.
- Homogenization has caused the all-station average warming trend to nearly double (+89%), from +0.036 to +0.067 deg. C per decade. I cannot explain this. It might be due to real effects from changes in instrumentation, the time-of-observation (TOBS) adjustment, an unintended artifact of the PHA algorithm, or some combination of all three.
Does This Mean Recent Warming In The U.S. Is Negligible?
Maybe not. While it does suggest problems with warming trends since 1895, if we examine the most recent period of warming (say, since 1961…a date I chose arbitrarily), we find considerably stronger warming trends.
Note that the GHCN trends since 1961 are nearly the same from raw (+0.192 C/decade) as from homogenized (+0.193 C/decade) data. The average UHI warming trend is only about 13% of the raw GHCN trend, and 10% of the homogenized trend, indicating little of the GHCN warming trend can be attributed to increases in population density.
But there still remains an urbanization signal in both the raw and adjusted data, as indicated by the non-zero regression slopes. One possible interpretation of these results is that, if the homogenization algorithm is distorting the station trends, and if we can use the raw GHCN data as a more accurate representation of reality, then the regression intercept of +0.10 deg. C/decade becomes the best estimate of the all-station average warming trend if NONE of the stations had any growth in population. That is little more than 50% of the homogenized data warming trend of +0.193 deg. C/decade.
What Does It All Mean?
First, there is evidence supporting the “urban blending” hypothesis of Katata et al., whereby the homogenization algorithm inadvertently blends urban station characteristics into rural temperature data. This appears to increase the all-station average temperature trend.
Second, homogenization appears to only remove about 50% of the UHI signal. Even after homogenization, GHCN temperature trends tend to be higher for stations with large population growth, lower for stations with little population growth. There is some evidence that truly rural stations would have only about 50% of the warming averaged across all U.S. stations, which is consistent with Anthony Watts’ estimates based upon restricting analysis to only those best-sited stations.
These results suggest there is now additional reason to distrust the official temperature trends reported for U.S. weather stations. They are, on average, too warm. By how much? That remains to be determined. Our method provides the first way (that I know of) to independently estimate the urban warming effect over time, albeit in an average sense (that is, it is accurate for the average across many stations, but its accuracy at individual stations is unknown). As my career winds down, I hope others in the future will extend this type of analysis.
[To see what the total UHI signal is in various calendar months around the world as of 2023, here are the hi-res images: Jan, Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov, Dec. More details of our method, along with links to monthly ArcGIS-format files of global UHI grids since 1800 (Version 0) are contained in my blog post from November, 2023.]
I grew up around Washington, DC, and now live in Colorado. I’ve seen the UHI effects in both places. I asked a friend of mine once, why is it that occasionally even the Potomac River freezes, but not the Hudson? He said that it has to do with the fact that the banks of the Hudson were modified, and the current is stronger around New York.
I believe it’s something to do with the fact that the tide goes way up the river, all the way to Albany.
That’s right, New York has a big urban heat effect.
Unfortunately, all the r^2 are low. That is inherent from the shape of the plotted data. It means one cannot read too much into the calculated trends.
In my view, the surface temperature record is not fit for purpose and cannot be salvaged via homogenization. Best rely on the high quality UAH lower troposphere satellite record. It shows how ‘off’ the climate models are.
“Best rely on. . . . .“
_______________________________________________________________________________
Best rely on the apparent fact that CO2 emissions aren’t really a problem and stop spending billions of dollars trying to prove that it is.
“It shows how ‘off’ the climate models are.”
ISTVAN
The average climate model predicted +3 degrees warming per CO2 x 2 in the 1970s
The actual warming rate since 1975 was +2.4 degrees C. per CO2 x 2 based on surface averages. Less than 2.4 degrees based on UAH but in the past decade UAH has been rising as fast as surface temperatures.
RG, all CMIP6 models (except INM CM5) predict a tropical troposphere hotspot that does not exist. The average CMIP6 ECS is 3.4. The observational EBM estimates about 1.7. INM CM5 estimates 1.8.
And this post explains why the observed warming rate, which you note is significantly higher than the modeled rate, is still far too high because of UHI effects.
According to the CMIP6 IPCC model ensemble, the average prediction for global temperature increase in response to a doubling of CO2 (CO2 x 2) is around 3°C (5.4°F), with a “likely” range of 2.5°C to 4°C (4.5°F to 7.2°F); this means there is a 66% chance the true value falls within this range.
The actual surface warming since 1975 was about 2.4 degrees per CO2 x 2
The actual UAH warming since 1975 was about 1.8 degrees C. per CO2 x 2 (might round to 1.7), assuming 1975 to 1979 would have had average warming.
The actual warming, especially with surface numbers, is close enough to the predictions from the 1970s that they ARE useful for climate propaganda, and they ARE used for propaganda.
In my opinion, the models are worthless climate computer games that make wild-guess predictions about the effects of CO2. No one knows the actual long-term effect of CO2 and all related feedbacks, so guessing about them is a waste of time and money.
The actual global warming since 1985 has been mainly pleasant warmer winters. Good News. We should be happy about global warming. Not fearful.
If up to me all climate computer games would end and there would be no more global average temperature statistics. Also no more leftists.
The climate on our planet does not get much better than it is now.
“actual surface warming since 1975 was about 2.4 degrees per CO2 x 2″
There’s that childlike idiocy of implying the surface warming is caused by CO2.
Still waiting for you to show us the CO2 warming in the atmospheric UAH data.
Or are you saying that CO2 only causes warming Urban areas??
That would be pretty dumb !!
“The average climate model predicted +3 degrees warming per CO2 x 2 in the 1970s”
That was not a bad first try. Can you describe how the models got much more accurate in the next 50 years 1974-2024?
UAH increases are ONLY at El Nino events, which naturally affect the atmosphere more than the land.
As you have consistently shown, there is no evidence of CO2 or any other human warming in the UAH data.
“ was +2.4 degrees C. per CO2 x 2 based on surface averages.”
There is no way you can link surface warming to CO2.. It is complete anti-science because you cannot remove the urban signal with any accuracy.
The surface temperature record is not fit for the purpose it is being used for.
One of the primary purposes that climate science has is to create LONG records so they can claim trends are accurate and spurious trends are minimal. It is all to their advantage for propaganda rather than science the way it should be practiced.
In essence, short records are being adjusted so they can be spliced together. This made me a sceptic from the start. You don’t treat measurements with this kind of disdain. Only mathematicians will do so because they see the measurements as nothing more than numbers to be manipulated.
Measurement Science (metrology) never adjusts data records. It is called fraud. Anyone who does that in a certified laboratory is fired. If the data is suspect for any reason, it is noted as to why, reported to end users and not considered valid for use in metrology trend analysis. Further, closed loop corrective action is instituted by Metrology Quality personnel to prevent identified mistakes from being repeated.
thank goodness for my career that we add an “eo” to metrology, so metrology rules don’t count. 🙂
Yes, meteorology and metrology are different in scope. But everyone uses metrology daily even if they don’t know it. When you’re driving to work and you and the cop both notice you’re speeding, you with a speedometer and the cop with a radar gun, for instance. The cop doesn’t bother to write you up for 70.00345 MPH though, as that could be challenged in court as incorrect.
Climate science routinely ignores significant digit rules.
I tried to find the lyrics but kept getting blocked. So, from memory. (Maybe somebody can find them and continue?)
E-O!
E-E-E-O!
Got to stop this Carbon swell.
E-O!
E-E-E-O!
Got to make life a living hell …” 😎
(Great respect for you and Dr. Christy. You both deserve it. You’re both honest.)
Hi Roy,
Did you ever manage to figure out what percentage of the Earth’s surface each of the regions in UAH represents?
Several years ago in Ohio, falsifying records in water or wastewater reports to the OEPA was elevated from merely losing certification to being an actual crime.
One of the big problems is anomalies. This is nothing more than scaling actual temperatures and then recalculating a variance. For example, a monthly average is 25 ±0.3°C and 24.5±0.3°C is subtracted to obtain an anomaly of 0.5°C. The anomaly should carry an uncertainty of the √(0.3² + 0.3²) = ±0.4°C.
Now look at relative uncertainty. 0.3/25 = 1.2%. What is 1.2% of 0.5? 0.5 • 0.012 = 0.006. Does anyone really wonder why real variance of the actual temperatures is thrown away and then recalculated using numbers that have been drastically scaled downward?
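In code, the commenter’s arithmetic looks like this; the point is that absolute uncertainties combine in quadrature and do not shrink along with the anomaly:
```python
# The commenter's arithmetic: forming an anomaly by subtraction combines
# the two absolute uncertainties in quadrature -- it does not shrink them
# along with the numbers.
import math

t_month, u_month = 25.0, 0.3     # monthly average and its uncertainty (C)
t_base,  u_base  = 24.5, 0.3     # baseline and its uncertainty (C)

anomaly   = t_month - t_base                      # 0.5 C
u_anomaly = math.sqrt(u_month**2 + u_base**2)     # ~0.42 C, not 0.006 C

print(f"anomaly = {anomaly:.1f} +/- {u_anomaly:.2f} C")
# Carrying the *relative* uncertainty over instead (0.3/25 = 1.2%, and
# 1.2% of 0.5 = 0.006 C) is the sleight of hand being criticized.
```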
Since 25C is 298.15K, this is as meaningless as calculating the percentage rise in temp from 0C to 1C, or from 10C to 11C.
Also from 10C to 20C is not “doubling.” Only MSM do that.
Even if you use “anomalies” the error bars remains the same.
You misunderstand what uncertainty is. It is an interval surrounding the stated values that describes the dispersion of measurements attributed to the measurand. If you rescale the Celsius value by adding a constant, the interval in “±value” does not change. The percent value will change but not the absolute value.
“Best rely on the high quality UAH lower troposphere satellite record. It shows how ‘off’ the climate models are.”
It can’t do that, because it isn’t very different anyway. And what about the equally high quality RSS?
Here are GISS, UAH and RSS plotted with 12-month smoothing at a common anomaly base of 1981-2010. RSS has the highest trend (equally with no UHI), UAH the lowest. But there isn’t enough difference to prove or disprove anything about climate models.
RSS is too corrupted by “the agenda”
El Ninos provide most of the trend in UAH,
The atmosphere responds much more than the surface to the 3 major El Ninos.
NOAA now agrees with UAH.
https://wattsupwiththat.com/2023/04/14/ross-mckitrick-the-important-climate-study-you-wont-hear-about/
It is clear that RSS is a low quality outlier and can now be ignored.
Now, redraw that with error bars of ±0.4°C or more!
Trends are often counter-intuitive, especially if one of their end-points “just happens to be” an extreme value.
Attached is a plot of the most recent “January 1979 to X” trends for the main surface and (satellite) lower-troposphere datasets, along with the trend for CO2 measurements at Mauna Loa.
Using the “correlation is not causation, but lack of correlation is an extremely reliable indicator of lack of causation” principle we can “disprove” the notion that the recent rise in GMST and TLT anomalies was “caused by” the rise in CO2 levels (since 1979, or 1981 for the NOAA STAR TLT dataset, at least).
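The endpoint sensitivity mentioned above is easy to demonstrate with synthetic numbers (my own toy series, not any real dataset), recomputing an OLS trend as the end year advances past a spike:
```python
# Endpoint sensitivity: recompute an OLS trend as the end year advances.
# Synthetic numbers; a hot final year visibly steepens the whole trend.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1979, 2025)
temps = 0.015 * (years - 1979) + rng.normal(0.0, 0.15, len(years))
temps[-1] += 0.4                  # pretend the final year is an El Nino spike

for end in (2015, 2020, 2024):
    m = years <= end
    slope = np.polyfit(years[m], temps[m], 1)[0] * 10.0
    print(f"1979-{end}: {slope:+.3f} C/decade")
```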
“ lack of correlation is an extremely reliable indicator of lack of causation” principle “
You’ll never convince the AGW advocates of this simple but accurate principle.
I should have also provided the t-scores, since a low R2 can be very significant given enough data.
The two-sigma prediction intervals for the scatter plot fits would be illuminating.
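Both quantities are cheap to compute. A sketch with made-up data (scipy’s linregress supplies the slope’s standard error and p-value directly; the prediction interval here uses residual scatter only):
```python
# A low R^2 can coexist with a highly significant slope when n is large.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 5.0, 500)               # e.g. population-growth metric
y = 0.05 * x + rng.normal(0.0, 0.2, 500)     # weak signal, lots of scatter

res = stats.linregress(x, y)
t_score = res.slope / res.stderr
print(f"R^2 = {res.rvalue**2:.2f}, t = {t_score:.1f}, p = {res.pvalue:.1e}")

# Approximate two-sigma prediction interval for a new point at x0
# (residual scatter only, ignoring parameter uncertainty):
x0 = 2.5
resid = y - (res.intercept + res.slope * x)
s = np.std(resid, ddof=2)
pred = res.intercept + res.slope * x0
print(f"prediction at x0 = {x0}: {pred:.2f} +/- {2 * s:.2f}")
```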
Absolutely agree.
NOAA microwave data cannot resolve daily max and min temperatures, nor urban locations; the scanning is too coarse.
There “MAY” be a slight Urban signal in the UAH data.
UAH Land is about 1.5 times the linear trend of UAH Ocean
“Absolutely agree.”
Absolutely agree .. times at least 10.
Seems there is a red thumber that thinks the surface station data is fit for a purpose.
Must be a low-end climate propagandist.
More likely one of the ruler monkeys.
Spencer rarely goes wrong but his guesses about UHI are an exception
(1) 71% of Earth’s surface is oceans with no UHI
(2) The USCRN stations are rural and allegedly perfectly sited with no UHI. Yet they reflect FASTER warming than nClimDiv. That needs a data based explanation.
(3) An urban weather station that was moved to a suburban airport could have REDUCED UHI, at least for a year or two, with that move.
(4) Spencer’s own UAH reflects fast global warming of about 0.4 degrees C. in the past 10 years since the 2014 trough. Even a large adjustment for UHI would not reduce that fast warming rate to the 0.16 degrees C. per decade average of the entire UAH record since 1979
It is impossible to guess the UHI effect on the global average temperature … but it is easy to know UHI could only affect the 29% of Earth’s surface that is land. Less than 1% of Earth’s surface is urban (3% of the 29% that is land)
UHI does not appear to keep people from living in relatively hot cities. In the solder months of the year, UHI is good news.
Surely UHI is not affecting the temperature, it is affecting the recorded data.
Correct 👍
UHI is part of manmade global warming but impossible to measure with existing data. In the colder months of the year, UHI is good news, because UHI makes urban areas warmer in the winter, just like CO2 emissions do. UHI is bad news in the summer if you have no AC
Only INCREASED UHI would increase the rate of global warming.
Any land weather station could be affected by increased UHI. Even a rural station if a town grew up in the vicinity of that station over many decades.
So why include such sites whose numbers are obviously, artificially influenced by very local conditions into any “global” temperature measurements?
“ just like CO2 emissions do.”
You consistently have ZERO EVIDENCE to back up that conjecture.
“Only INCREASED UHI would increase the rate of global warming”
UHI is NOT a “global” thing.
It applies to only a small part of the actual global surface.. ie urban and rural thermometers encroached by human activity.
… problem is, that it makes up a large part of the so-called “global surface temperature” fabrications.
UHI is not just a population growth factor, it is a microclimate change factor. Rural stations can see microclimate changes totally unrelated to population growth. If the glossiness of the paint on the station enclosure causes more heat to be absorbed by the enclosure, that can cause a positive temp trend: a systematic bias that no amount of statistical analysis can remove.
Most rational people would probably agree with all 4 points. And most rational people would probably say so what? The arguments in reality are about minutia. The western world is squandering billions and wrecking their economies because why?
In any case, I’ll see your 4 points with my 6 points:
1. More rain is not a problem.
2. Warmer weather is not a problem.
3. More arable land is not a problem.
4. Longer growing seasons are not a problem.
5. CO2 greening of the earth is not a problem.
6. There isn’t any Climate Crisis.
Change “is not a problem ” to
“is good news” and i agree.
I would call Nut Zero a climate related crisis, wasting our money while the actual warming climate is very pleasant.
Thank you.
7. Rising sea levels on the scale of millimeters is not a problem. If it was we should have seen it by now. We haven’t.
“but it is easy to know UHI could only affect the 29% of Earth’s surface that is land. Less than 1% of Earth’s surface is urban (3% of the 29% that is land)”…
We’re talking about recorded temperature here, rather than actual temperature, which is impossible to measure earth and ocean wide. We can only measure temperature where there are weather stations, balloons, buoys, etc. These are disproportionately situated in urban areas, which have become increasingly urban over time and since their original siting.
Your point however about the satellite record keeping pace with the surface station record is well made.
“The USCRN stations are rural and allegedly perfectly sited with no UHI. Yet they reflect FASTER warming than nClimDiv. That needs a data based explanation”
I don’t know what “nClimDiv” is. Who alleges something is perfectly sited? Perfect?
nClimDiv uses a whole mass of urban sites.
The data is homogenised to match more “pristine” sites, Ie USCRN, on a regional basis in an attempt to remove urban heating bias.
That means nClimDiv is based not on real measurements, but on “adjusted” data.
The result is a fabricated data set, which is close to USCRN values.
The homogenisation process gave values a bit higher than USCRN in the early stages, just after 2005, and has been refined over time so they are now running basically the same trend.
Any difference in trend between the homogenised and fabricated ClimDiv, and USCRN, is a remnant of the homogenisation process.
This has been explained to RG many times.. but he chooses to remain IGNORANT to push his own anti-science garbage.
Darn, grabbed the wrong graph…
This one shows the difference between USCRN and the homogenised ClimDiv fabrication.
The only ignorant person is the one who writes your comments … or believes them
You have no idea if either USCRN or nClimDiv are accurate.
You have no idea what adjustments NOAA makes to their raw data.
Yet you jump to conclusions and call anyone who disagrees ignorant.
You have NO IDEA ….. period. !!
You still don’t realise that ClimDiv IS ADJUSTED DATA.
Homogenised to “pristine” sites
They actually state that on their web site.
Why continue to show yourself as such a moron !!
If that is the case then you need to explain why the adjusted data aren’t warming as fast as the ‘pristine’ USCRN data over their joint period of measurement (since 2005).
Or is it your contention that the adjustments are cooling the ClimDiv data?
If so, how does that fit with the ‘big conspiracy‘?
OMG another completely IGNORANT TWIT.
Does not comprehend what homogenisation to USCRN data means.
Dopey as heck and thick as a brick. !!
nClimDiv and USCRN data are both plotted on the NOAA USCRN website for the 25 years the USCRN has been functioning. According to the plots NOAA provided, both are nearly identical. I expect that the nClimDiv calculation methodology has been changed to make it identical with USCRN.
Both networks are NOAA’s for US average temperature. USCRN is rural and allegedly perfectly sited. nClimDiv is not, but is used for the global average temperature anyway.
It makes no sense that siting differences lead to MORE warming at rural stations. As a result, I don’t trust NOAA data. They and NASA also participated in the elimination of most global cooling in the 1940 to 1975 period, as originally reported in 1975 … So I already did not trust NOAA when those unexplained revisions were made in the 1990s.
This 5 minute video describes some climate data fraud:
https://youtu.be/8YOqJc1dQSw
“It makes no sense that siting differences lead to MORE warming at rural stations”
Again with the DELIBERATE IGNORANCE…
ClimDiv is an adjusted data set, homogenised to match “pristine” sites on a regional basis.
Any difference in trend is purely a remnant of the homogenisation process.
So NOAA are adjusting data to make it cooler now?
NO, simpleton #2, they are adjusting to USCRN, which is controlling all the warming by the climate scammers.
They CANNOT allow ClimDiv to diverge from USCRN, because it would make them look VERY STUPID indeed.. just like you make yourself look all the time..
The only warming now in the USCRN/ClimDiv data is the 2016 El Nino and the recent 2023 El Nino.
No warming from 2005-2015, Cooling from 2017 to 2023.4
NO EVIDENCE of any human warming once they remove the urban warming in ClimDiv.
Not all of them show that kind of warming. The Great Plains states don’t show that much warming. The populated east and west coasts obviously have higher trends. UHI maybe?
1… Oceans CANNOT be warmed by any possible human causation, CO2 or otherwise.
2… it appears that you are still SO STUPID that you don’t realise ClimDiv is NOT real data.
It is urban data with the urban effect removed by attempted homogenisation to “pristine” sites on a regional basis. Any difference between ClimDiv and USCRN is purely a remnant of the homogenisation process which they have refined over time.
3… pure supposition. Most airport sites are actually moved around inside the airport as the massive tarmac and paved areas expand and new huge terminals are built. Only a complete moron would think this doesn’t affect the measured temperature.
4… UAH has responded to the 2023 El Nino and HT warming more than the land surface has.. Only an ignorant twit would expect otherwise.
5.. We are STILL WAITING for empirical scientific evidence of any warming by atmospheric CO2
6… We are still waiting for you to show us the human caused warming in the UAH data.
Coastal electric power plants, fossil and nuclear, and manufacturing plants that use lots of thermal energy often use ocean water for cooling purposes. This could create warm plumes in the vicinity of the water outlets. Ditto for heavily populated river basins and their outlets. Are we sure that the locations that measure ocean water temperature are not affected by such plumes? Has anybody ever looked?
The amount is infinitesimal compared to the bulk of the oceans, and to energy from sub-ocean seismic and volcanoes.
RG, After posting comment, check it for typos which can be corrected. If you move the pointer to the right hand corner, a little gear wheel appears. Click on it. The edit function is activated, and it now allows for typo correction or adding additional text. You have ca. 5 minute window for making corrections.
In your comment “solder” should be “colder”. Keep this in mind: The mark of a professional is close attention to detail.
But this leads to the sun/clouds being the culprit.
Greenhouse gases warm the ocean skin layer, reducing the rate of heat loss at night. This is what is primarily driving the increase in ocean temperatures and heat content. It’s not that more energy is getting in, rather it’s that less energy is getting out, so it accumulates.
Total and complete anti-science BS… backed by ZERO empirical evidence.
Your assertion totally ignores the fact that any increase in absorption only creates more evaporation which cools the top layer. It also ignores thermodynamics where heat flows from hot to cold, not the other way around.
Reducing the rate of heat loss is not a warming. It’s still a loss which you ignore. You also ignore the rate of evaporation which must increase according to your theory of greenhouse gasses warming the ocean. Apparently when you sweat and it evaporates, you get hotter which is strange.
Excellent.
In TFN’s world the water below the skin warms the skin. That only happens if the water is warmer than the skin. Then the skin warms the atmosphere because it is warmer than the atmosphere. The atmosphere can not warm the skin because it is colder.
These guys need to read Planck’s Theory of Heat Radiation.
Bi is the subsurface, A is the skin, and the atmosphere is B.
There are REAL experiments that show that infrared causes evaporation, which actually cools the immediate sub-surface..
… been measured…
… unlike any warming by atmospheric CO2.
There is no evidence that CO2 alters the lapse rate, therefore it cannot alter the net radiative outward transfer. Or are you an anti-science denier of basic thermodynamics?
Would be interesting to see the raw data from just the rural stations.
Go to https://www.ncei.noaa.gov/access/monitoring/national-temperature-index/ This site plots all US Climate Reference Network data. All of the Network sites are rural. That’s why NOAA built it.
One take is that more attention needs to be paid to mitigating UHI, such as high albedo roofing and street paving.
On a broader note, their work strongly suggests that CO2 is far from being the only thing causing the rise in measured temperatures.
Read my comment about black carbon which I just posted.
How much of urban heating is due to black and brown particle pollution? Since ca. 1900, where have the many billions of pounds of rubber particles and dust gone? The simple answer is: anywhere and everywhere. Brown dust is produced from brake rotors and drums. A lot of brown rust particles fall off cars and trucks. Then there are all the black particles and dust from asphalt road surfaces. Rotating rubber tires are like grinders. There is black soot from older Diesel engines. Lastly, there is lots of brown soil dust blown into the cities from rural areas.
In 1900 much heating and cooking was from burning coal, fireplaces, boilers, stoves, ovens. Car exhaust today is many times cleaner than in 1970. So for places like the US, heating from these types of sources should be down. Coal soot from China (due to less efficient pollution control and still many open coal fires) does have an effect on Arctic snow and ice as the soot absorbs heat. And road dust from unpaved roads is one of the larger causes of poor air quality in China.
Rubber is non-biodegradable. Once rubber enters the environment it is there forever.
Altered data is fake data. The fact that they are altering data tells you something is wrong to begin with as most fail NOAA siting standards. Then there’s the problem that NOAA re-alters the previously altered data with more assumptions. This is not genuine science — it’s junk science.
I’d be rich if I could alter the values of the dollar amounts going into my bank accounts.
But I have a human weakness called “honesty”.
I wouldn’t do it even if I could.
The sites in those photos would not be used as climate data. Weather stations are used for other purposes!
Data is “altered” to remove what is a real UHI effect.
That the Climate div series agrees with the pristine USCRN – which has NO UHI – shows that it works.
Fraud.
ROFLMAO.
Banton finally admits that ClimDiv is adjusted data homogenised to USCRN.
Well done fool.
But you still thought there was meaning in comparing the trends.. Just DUMB. !
Yes fool, the homogenisations works when you have near “pristine” reference data to remove urban data.
But it CANNOT work when you don’t.
And yes, sites like that ARE used for climate data.
That has been shown by surveys in the US, by Anthony Watts’ surface station project, and in Australia and the UK by several other people.
No one has any idea just how bad most of the other surface sites around the globe are, but many pictures have shown they are totally horrific and totally unfit for “climate” purposes (except AGW propaganda)
There is no reason to expect anything better than what Great Britain has found in their system. It is atrocious.
Legitimizing a broken system is not science — it’s scientific fraud. If the so called “climate crisis” was real, NOAA would be rushing out, with lightning speed, with ambulance style response teams, to each improperly sited sensor — to immediately fix them. Instead, the improper siting of temperature sensors continues — even at the Airglades Airport.
“Instead, the improper siting of temperature sensors continues — even at the Airglades Airport”
Show me where that data is used for climate indices.
The FAWN program director told me all data goes into the NOAA data center — which of course are used for their magical “altering” processes — just like the data which is routinely fabricated for the Belle Glade station — which no longer exists — it’s one of over 300 ghost stations.
Without “long” records, statistical analysis has inherent faults. One is spurious results due to short records coming and going from the data.
Here is another one.
Simpsons Paradox Explained – Statistics By Jim
Again, it is you that should be providing the evidence that the station IS NOT used in order to have evidence that supports your assertion.
Hitchens’ Razor: what can be asserted without evidence can also be dismissed without evidence
Popper’s Razor: for a theory to be considered scientific, it must be falsifiable
Sagan’s Standard: extraordinary claims require extraordinary evidence
Consider your assertion dismissed.
Point is solid but the trend fit R2 stinks.
Locally,
All Met Office UK weather observations are subject to an internationally-agreed rigorous quality-control process. For new official records, we undertake further careful investigation to ensure that the measurement taken is robust, reliable, is feasible in the meteorological set-up and adheres to international World Meteorological Organisation standards.
https://www.metoffice.gov.uk/weather/learn-about/how-forecasts-are-made/observations/weather-stations
Like Heathrow airport
‘…we undertake further careful investigation to ensure that the measurement taken is robust…’
I’d be willing to bet dollars to donuts that the term ‘robust’ appears more frequently in the climate literature than in any other physical science, which is probably an indicator of how bogus most of it is.
Where would you suggest that the weather station at Heathrow be sited, given that its primary importance is the provision of that data for aviation?
The fact that it is obviously warmer as a result of its siting vs, say, a green field in the middle of nowhere, is why it’s there. As, well, aircraft land and take off from the place and it’s a tad important that they know local temps/winds.
FYI: the above fact in no way affects the GMST by even a millionth of a millionth of a degree C or F. Or any other metric that you can dream up to cast doubt that requires there to be 101% certainty
Clearly, Heathrow and all airports need this information.
All weather stations sited at airports should be removed from the temperature records used to bolster the claims of “climate crises”
You truly are displaying your absolute ignorance now.
Heathrow airport site has often been used by the Met to claim new “maximum” temperatures, as have other “airport” sites.
It has been shown that a LARGE NUMBER of sites used are very badly affected by urban expansion.
Only a totally ignorant twit tries to DENY the evidence.., A blind and dumb monkey !
GAT is a meaningless metric that you ruler monkeys live by.
Funny that all of the estimates of it both on land and via satellite all show a tight correlation then isn’t it?
Yes, even one of the heroes on *your* side.
In reality something is only meaningless here when you have to use a brain-cell to employ your cognitive dissonance.
And exactly what does a correlation PROVE? What experiments have been done to show a CAUSAL relation?
“Funny that all of the estimates of it both on land and via satellite all show a tight correlation then isn’t it?”
GISS et al are all based on the fabricated and maladjusted GHCN data, and BEST uses so much junk data they can create whatever they want.
You are the most brain-washed, non-thinking and gullible monkey around.
“Have Not Yet Been Removed from Official GHCN Warming Trends”
Nor should they be. GHCN should report the temperatures as measured, as they do. Roy seems to think they should be altered to reflect temperatures of an unpopulated land. But that isn’t what we have.
The legitimate worry about UHI is that, in forming a global average, urban areas may be over represented. The first obvious answer is that most of the earth is ocean. The second is to compare averages made with urban data diminished as far as possible. The classic case is USCRN vs Climdiv. They actually are very similar, with the non-urban Climdiv warming slightly faster.
“GHCN should report the temperatures as measured”
You have just admitted what every sane person is saying, and agreed with the whole topic…
… that a large amount of the warming in the fabricated surface data is from URBAN WARMING
Your climate scammers need to STOP PRETENDING the warming is caused by CO2 !!
“GHCN should report the temperatures as measured, as they do“
LYING AGAIN.. why do that Nick???
GHCN has massively altered past data, you know that !!
Can we have the large warm spike of the 1930s-40s back…. or would it ruin the scam?!
“The classic case is USCRN vs Climdiv. THey actually are very similar, with the non-urban Climdiv warming slightly faster.”
ROFLMAO.
Another ignorant twit that doesn’t realise that ClimDiv is ADJUSTED data, homogenised to match USCRN at a regional level. Of course they are similar.
Any difference in trend is purely because of the gradual refining of the homogenisation routine.
The ruler monkeys are bankrupt without Fake Data.
Stokes routinely ignores the fact that ocean surface air temperature measurements are 100% missing-in-action.
Averaging myriad disparate adjusted temperatures from all around the globe is arrant nonsense.
Probity, Provenance and Presentation are grossly flawed.
Get all those Stevenson screens floating in the ocean reporting surface data, compared with those in cities, and then you’d have yourself a real argument to present.
Tavg as used in this discussion is not a fully correct average daily temperature. A fully correct average would be the average of temperatures measured at a location every few seconds, or at least minutes, for a fixed 24 hour period, with an average computed from all measurements. There are time series for at least some temperature measurements according to Dr. Spencer’s text, but I do not know the frequency of these series. Has anybody ever computed a true (or more true) temperature average using a high frequency time series to compare with the min/max Tavg used most everywhere? Is there any reason to believe that Tavg and a more correct average computed from high frequency measurements would be biased up or down?
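As a synthetic illustration of the question being asked (not real station data), one can compare a time-integrated daily mean against the traditional midrange on an idealized diurnal cycle:
```python
# Synthetic comparison of a time-integrated daily mean against the
# traditional (Tmax+Tmin)/2 midrange. Idealized diurnal cycle: flat
# nights with a smooth daytime bump -- not real station data.
import numpy as np

t = np.linspace(0.0, 24.0, 24 * 60)               # one sample per minute
day = (t > 6) & (t < 20)
temp = 20.0 + 8.0 * np.sin(np.pi * (t - 6) / 14) ** 2 * day

true_mean = temp.mean()                           # integral average
midrange  = (temp.max() + temp.min()) / 2         # traditional Tavg

print(f"true mean {true_mean:.2f} C, midrange {midrange:.2f} C, "
      f"bias {midrange - true_mean:+.2f} C")      # midrange reads high here
```
Even for this smooth, noise-free cycle the midrange overshoots the true mean by well over a degree, so a bias in either direction is plausible depending on the shape of the real diurnal cycle.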
Denis,
Long ago I started down that track of using daily Tmax and Tmin versus derivations from multiple observations each day. I was using Australian data. First, there was little public “multiple” data before about 1990 when electronic thermocouple style devices became widespread. So no remedy was available for the liquid-in-glass thermometry era.
It was soon apparent that differences could be found between traditional once-a-day and multiple observations. It was rather difficult to define what caused the differences, because of noise. So much noise that before long I abandoned the exercise with a “that way lies madness.” Geoff S
“Has anybody ever computed a true (or more true) temperature average using a high frequency time series to compare with the min/max Tavg used most everywhere? “
Look:
We have what we have.
Tavg computed from (Tmax + Tmin)/2.
Back (in the case of CET) to 1659.
No data loggers then!
It can’t be done as that data does not exist in spatial extent and duration.
“We have what we have.”
Yep, basically nothing on a global basis before 1920
And the data after that is manically adjusted to create spurious warming.
If we really “have what we have”, they should NOT be adjusting that past data.
That is tantamount to FRAUD.. so it makes sense you would accept it.
“And the data after that is manically adjusted to create spurious warming.”
FFS: you are a piece of work Oxy … can’t even keep up the story you’ve made up inside your own head.
Now I know you’ve seen this many times ….
But keep it up … it’s one reason why this place is so enjoyably bonkers — watching your one-man desperate attempt to influence the world’s thinking on here with absolute bollocks.
Now where do you see any “spurious warming” there?
Aside from compensating for failings in measuring ocean water in buckets – that reduces the warming trend.
Failing that just give us some more Cap/Bold bollocks instead.
It isn’t up to anyone else to do your work for you. As the person making the scientific argument that no spurious trends are being aggregated into the one trend, it is up to you to do the appropriate tests and show that there are no spurious trends occurring.
Fundamentally, these trends do not allow one to prove a causal connection between temperature and any other independent variable. They do not show an anthropogenic source for any warming.
Poor Banton… SUCKED IN by Zeke the scammer.
The blue line is already manically adjusted and totally fabricated GHCN data.
Show us where land and ocean was measured in 1880.
This will be funny !!
It is you that is addicted to oxy !
That is why climate science needs to follow the practice HVAC folks use of integrating the now-fine distribution of daily temperatures. We have almost 40 years of this data.
Climate science wants to continue the traditional methodology for their own propaganda purposes. Think about what it would be like if physicists had said, we want to keep measuring using 1700’s technology so we can continue to compare like measurements to like measurements! Or, how about chemists using balance scales with graduated mass blocks so they could have “long” records of measurements made the same way as tradition dictates. Really stupid to not say, “we are going to move on and begin using measurement models that are no longer comparable to past records”.
Very important work, kudos.
It’s the “half are below average” fallacy. With a very large data set that has a broad spectrum of values, the average tends to put roughly half of the set above and half below.
But when there’s a small data set and/or the spectrum of values is greatly skewed, far more than half of the data set can be above or below the average.
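A two-line demonstration with a skewed distribution (exponential, so the effect is analytic: about 63% of values fall below the mean):
```python
# With a skewed distribution, far more than half the values sit below the
# mean; for an exponential, the expected fraction is 1 - 1/e ~ 63%.
import numpy as np

rng = np.random.default_rng(42)
data = rng.exponential(scale=1.0, size=100_000)   # strongly right-skewed

frac_below = (data < data.mean()).mean()
print(f"{frac_below:.1%} of values are below the mean")   # ~63.2%, not 50%
```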
Imagine a square grid with each intersection being a temperature. At each intersection place a post, with the height equating to the temperature.
Now throw a rubber sheet over the posts. Put rocks at the lowest posts, the furthest away from the highest posts. That’s the homogenization. All the posts not touching the rubber sheet get raised up to touch it. And to sort of balance it, the tallest posts get a few inches cut off, however much the algorithm determines.
What the homogenization calculation does is generate a pack of false data, and it can be adjusted to make it seem as though cities full of concrete, brick, and stone haven’t warmed as much as they really have, while rural areas that have changed little, if any, have warmed some when they either haven’t warmed at all or may in some places be cooler.
The only reason to use anything but the pure, raw, temperature data is to minimize UHI and falsely claim places that have not changed temperature have increased in temperature.
Drawing straight lines on a graph between known points is a useful tool for *guessing* what the thing being measured *might* have been doing in the time between measurements.
Doing complex math to plot curves between known points is just fancier guesswork.
In either case, the accuracy, or lack thereof, of the plot can be tested by re-running what was measured with more frequent sampling. *But that’s not possible with the past climate and weather.* We’re stuck with the frequency and precision (or lack of it) that was done.
If the fancy guesswork plotting *alters the known measurements* the only reason for it is to lie about the data.
People have gone to prison in many fields of business for manipulating data like that in order to defraud investors in their company or buyers of what they’re selling.
So why haven’t any of the AGW fraudsters suffered the consequences when it’s so easy to take the raw, unadjusted, unsmoothed, unhomogenized data and show how they’ve altered it?
Where doing all the fancy guesswork between known data points is very useful is in playing back digitized and compressed audio and video. Those files are a mass of sampled data points and the decoder’s math fills in the gaps, creating the missing information on the fly. But such processes are very easy to test because their output can instantly be compared to analog recordings or higher bitrate versions of the sound or video to see how accurate the replication is.
But as I wrote above, that is not possible to do with past climate and weather. There’s no going back with better or more instrumentation to get more frequent and more precise measurements.
Homogenizing the available data isn’t like making an MP3 from a cassette tape.
I agree with what you say. I would only note that for most audio/video digitization the signal is first run through a filter to restrict high frequency bandwidth. Thus the digitization can never be perfect. Calculating a mid-point temperature from Tmax and Tmin for use as the digitization data point is, in essence, filtering the signal to reduce its bandwidth. This method *severely* restricts the ability to properly reproduce temperature signal and thus the ability to determine what is actually happening with the signal in the real world.
Roy,
I have done studies on UHI with Australian data. Two advantages: first, there are many stations that are plausibly “pristine” re UHI, and second, we have no Time of Observation correction, unlike the US. The disadvantage is that our population data are far less comprehensive than yours.
I started with about 40 stations, the most pristine I could find, to see if they had any property to define them, like their temperature/time trend over many decades. The trends were all over the place, in defiance of my attempts to systematise. I concluded that I could not define “rural” versus “urban” as is popular. Noise defeated me.
I was left with an impression that some unknown factor was influencing the raw daily Tmin and Tmax. Years ago Anthony Watts identified the coating of the thermometer screen as a factor: paint is different to whitewash, and a dirty coat of either is different again. There seems to be one or more factors like this that might cause one station in a close cluster to wander off in a different pattern for some years. But that is guesswork on my part.
Question: in your first graph above, the fitted trend of 1.46 visually seems dominated by high values in the upper right quadrant (and none in lower right). I cannot see the dot patterns in the data dense left side. What does the trend become if you remove the dots in the upper right and re-calculate? They resemble the pattern I mentioned above as the unknown factor. I would not assume that UHI causes them, though it might.
Geoff S
Geoff, were you able to identify the type of noise you encountered?
karlomonte,
No. I have not characterised the type of noise. I posted a visual example of the time series of Tmax raw for 45 “pristine” sites later on this thread.
Advice on how to proceed is most welcomed.
sherro 01 at outlook dot com
Geoff S
You should try this: Plot Tmax and Tmin data for a site for Jun 20 and Dec 21 for an initial investigation. The UHI effect would most likely affect Tmin. Since the sample period is one day, there is less chance of weather affecting the temperature.
I’m really curious about this method. Could you do this for Alice Springs? However, Alice Springs is a small city and there might not be a UHI effect. Suppose the plots are fairly flat. Then it could be concluded that CO2 does not cause warming of air.
Harold,
The Alice Springs weather station has been at the airport, about 6 miles from the closest part of the town (and the old weather station at the Post Office), since 1941.
No point in using the site for UHI study when there is no urban.
Geoff S
“Noise defeated me.”
That “noise” would seem to be a measure of the variance of the data. The larger the variance, the less certain an “average” value becomes, since the “hump” around the average gets flatter, meaning values surrounding the average are nearly as likely as the “average” to be the true value.
“Noise” is also a very loaded term. In signal processing “noise” is typically considered to be a “signal” from an external source that adds/subtracts/interferes with the wanted signal. That could be UHI in regards to temperature variation but it’s also possible that it is just natural variation in the signal, e.g. data existing in the “tails” of the distribution values. If you can’t state the causes of the variation in the signal then it becomes pretty much impossible to separate anything into signal +/- noise. That doesn’t, however, invalidate the variance of the data as a metric for the uncertainty of the average value.
For Dr Roy Spencer,
Here is a blunt graphical summary of the temperature/time trends for 45 Australian BOM weather stations, daily Tmax, raw (not adjusted), that I selected from a lot of experience travelling Australia to be “pristine” candidates for a UHI study.

I have calculated and added trends over the whole data period for each station. These are smoothed data, that do not show the full magnitude of peaks and troughs. Different record lengths and missing data complicate analysis no end.
The data are shown here to illustrate the lack of uniformity of any property from one station to another that might allow them to be defined as truly “pristine”.
Mostly, they are no more than a lot of noise. (Possibly, the more remote a station, the worse the data quality.)
These stations could be more pristine than most US stations. One of these stations had one observer, population of 1, for some decades, nearest next human settlement tens of miles away.
I would hesitate to classify any weather station as pristine before doing statistical analysis. Not shown is how these “pristine” patterns interweave with known “urban” stations here.
I can send you raw data for this and the Tmin set, population data and likewise for 40 or so chosen “Urban” Aussie stations. This has been a big exercise that so far has not yielded any conclusions of value.
Geoff S
What about the Tmin data? It is most likely that Tmin will show the UHI effect better than Tmax.
Harold,
Tmin is available to any who ask for it.
These are pristine sites that should have NO UHI, so why should Tmin be more fruitful than Tmax?
If there are any systematic trends in raw pristine Tmin, they would only be complications to contemplate when analysing sites truly affected by UHI.
Geoff S
I’m trying to figure out how to use empirical temperature data to falsify the claim by the IPCC that CO2 causes global warming, and put an end to crazy programs related to global warming, such as the phase-out of ICE cars. However, I do not have the computer skills to do these investigations and calculations.
My favorite example is the graphic of Death Valley temperatures from John Daly’s website.
What are these pristine sites and why are so few people there?
I noticed there are some sites whose Tmax’s showed little variation in temperature, such as Maatsuyker Island and Cape Leeuwin. How many of the sites are in deserts or arid environments?
The idea is to try to use this type of temperature data to convince the politicians that CO2 does not cause global warming and climate change.
“I noticed there are some sites whose Tmax’s showed little variation in temperature such as Maatsuyker Island and Cape Leeuwin…..”
If you knew anything of meteorology then the answer would be obvious.
Both those places are peninsulas/islands.
Ever been to the sea-side?
Cooling winds.
The favourite one on here is usually Macquarie Isl.
Stuck in the Antarctic circumpolar ocean current.
V little ocean temp change therefore v little temp change over the Island.
“The idea is to try to use this type of temperature data to convince the politicians that CO2 does not cause global warming and climate change.”
It does. It just does.
That you deny it does not make it so.
There is such a thing as reasonable doubt.
The WUians have long since passed the stage where their doubt is reasonable.
“V little ocean temp change.”
Oh dearie me.
Banton has just admitted that the oceans AREN’T WARMING
Hilarious. 🙂
“It does. It just does.”
Mindless belief based on nothing but suppositories.
Indeed:
“(a) Spatial distributions of the SST warming rates extracted by EEMD. The dots indicate locations where the total SST increment during the past four decades is ≥0.5 °C. (b) Global area-weighted SST time series is shown in black and its intrinsic trend in red.”
Most of the ocean south of Oz has shown little change over the last 4 decades.
BTW, *both* things can be true at the same time, just thought I’d say, as I know you have trouble grokking the meaning of the G in GW.
That around Cape Leeuwin and south of Tasmania is just near +0.5C over the last 40 yrs.
How’s that a match for Harold’s “little variation in temperature”?
Well that was gibberish word salad.
“Most of the ocean south of Oz has shown little change over the last 4 decades”
No evidence it is from CO2 warming then.
Just make it up.
CO2 does not cause warming of air. Shown in the graphic are temperature data for Death Valley. The temperature plots are fairly flat. In 1922 the concentration of CO2 in the air was about 300 ppmv and by 2001 it had increased to about 367 ppmv, but there was no corresponding increase in temperature. There is little H2O in desert air.
Thus, on the basis of the empirical temperature data it is concluded that CO2 does not cause any measurable increase in air temperature.
The graphic is from the the late John Daly’s website:
“Still Waiting For Greenhouse” available at:
http://www.John-Daly.com.
When and where do you think the temperature “tipping point” will occur? Because if you think the CO2 global warming theory is valid, then it must.
When and where do you think that the heat that is already in the “pipeline” will manifest itself in measurement? Because if you think the CO2 global warming theory is valid, then it must.
When and where do you think that the tropical tropospheric hotspot will be measured? Because if you think the CO2 global warming theory is valid, then it must.
You must have thought about all these “fingerprint of human warming” phenomena already because you insist on telling us that it does.
So lets hear your predictions about when and where they will occur.
Banton doesn’t “think”… beyond his purview and ability..
… he regurgitates mantra.
A simple question for the Banton monkey….
Exactly how much atmospheric warming has CO2 caused in the last 45 years?
Back your answer with empirical scientific evidence
And seriously, show us where and how ocean temperatures were measured in 1980 to produce that FAKE chart.
I think, if we’re being fair and respectful with each other, then we have to look at the principles of the theory, rather than exactitude! Notwithstanding of course the 1.5C (which, it must be admitted by all sides, appears arbitrary and random…), I think even the most ardent of AGW supporters would argue, however, that the theory works in principle. That being the case, it follows that atmospheric and oceanic science has so many variables that make exactitude a little out of reach😅. However, this of course opens up another huge can of worms…..does it not?
In other words…..is the science settled or is it not?
Or is there still more that we don’t know than we know about ALL of the variables involved in climate science? One area I would like to know more about, for example, is how much more (or less) land area over recent decades is ploughed in the Autumn months over N Hemisphere regions. Is this a possible feedback loop that acts to absorb heat at a time when northern latitudes would normally be starting to reflect more heat due to advancing snow cover? Has this, among a great many other factors, been properly fed into the various algorithms that CG climate modelling churns out?
“I think even the most ardent of AGW supporters would argue however, that, the theory works in principle.”
It doesn’t work in principle. The AGW principle, i.e. the positive feedback of CO2 to temperature, doesn’t even recognize that maximum daytime temperatures are damped by the radiative heat loss from the earth based on T^4.
It’s like climate science believes that the earth only cools during the night and never during the day! It’s the same kind of assumption that climate science makes when it assumes all measurement uncertainty is random and Gaussian and therefore cancels out – a total ignoring of physical science principles.
Tmax is limited by the increase in radiative heat loss as the temperature goes up. This is too often ignored in climate science. There *is* radiative heat loss during the day and its intensity is based on T^4. That is a natural damping factor for Tmax.
I don’t know of an equivalent damping factor for Tmin so its change is not damped in a similar fashion and should provide a better metric for CO2 caused change in temperature.
The “greenhouse effect”, at least as I understand it, says that as Tdaytime goes up the positive CO2 feedback should drive Tmax ever higher with no damping factor involved. That obviously is not a properly formed theory.
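For scale, the damping effect described above follows from the Stefan–Boltzmann law (standard physics; the arithmetic below is mine, not the commenter’s):
```latex
P = \varepsilon \sigma T^{4}
\qquad\Rightarrow\qquad
\frac{dP}{dT} = 4 \varepsilon \sigma T^{3}
\qquad\Rightarrow\qquad
\frac{\Delta P}{P} \approx 4\,\frac{\Delta T}{T}
```
At T ≈ 300 K, each 1 K of daytime warming raises radiative loss by roughly 4/300 ≈ 1.3%, which is the self-limiting effect being claimed for Tmax.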
What is “settled” science anyway?
In respect of climate change, do we mean “settled, as far as science can ever be [settled], and especially so given an infinitely complex and dynamic field of study”?
“The science is settled” is nothing but a buzz-phrase, used to beat anyone who doubts the existence of the crisis over the head. It is propaganda, and really the only thing settled is the propaganda.
“The “greenhouse effect”, at least as I understand it, says that as Tdaytime goes up the positive CO2 feedback should drive Tmax ever higher with no damping factor involved. That obviously is not a properly formed theory.”
It’s not a theory at all.
Max daytime temps are limited by the environmental lapse rate. One situation where they aren’t, and where the occasional record is made, is where a heat dome settles over a sub-tropical descent zone and that surface air eventually gets advected north to more sub-tropical/temperate climes. Anticyclonic subsidence suppresses surface convection, but that cannot be stopped entirely, and a higher temp must be reached in the boundary layer.
For a convective tower that reaches 40 kft, the extra tenths of a degree that this CC increment has added to the temp of the rising parcel is duly spread through the volume that the parcel occupied between the surface and the 40 kft cloud top.
That is the damping factor: the thin spreading of, say, an elevated 0.5C over the volume of the rising cloud, so the delta disappears (though rain rates will give more precipitation).
For those who understand: the theory is in no way related to that. LWIR leaves Earth via the stratosphere, and does so because GHG molecules act to impede a more direct exit to space below the effective emission level. Since that emission level is maybe 10C colder than the layers below, it emits more weakly in comparison to the warmer emission from below, which is redirected downwards and attenuated in its escape.
I do expect push-back on this. This place is of course an echo chamber for the world’s tiny number of sky-dragon slayers and conspiracy theorists with little grasp of reality and common sense.
The lapse rate is the GRADIENT of temperature in the atmosphere. It is *not* the determinant of the temperature that drives that gradient.
I am always dismayed at the lack of real world experience of so many defending climate science as it stands today.
Lapse rate is no different than the temperature gradient along a steel rod as you heat it at one end. It is the temperature at the heated end that actually determines the value of the gradient at any point along the rod. It’s no different with the atmosphere.
The lapse rate is typically explained as L = dT/dz, i.e. the rate of change in temperature with altitude. That is the slope of the temperature gradient; it is *not* the value of the gradient at any point.
Two “push backs”!
You didn’t mention CO2 at all. Funny how convection and conduction are major factors in your response.
Temperatures are measured at approximately 2m in height. What is the drop in temperature from the surface to 2m due to the lapse rate?
If only alarmist “scientists” would be this open about their work
Kudos to Roy