Moritz Büsing
In a previous article on WUWT I described how I found and corrected an error in the way weather station data is processed in order to calculate the temperature anomalies of the past 140 years.
The error was that warming of the weather station housings by 0.1°C to 0.2°C (0.18°F to 0.36°F), caused by ageing of the paint, was compounded multiple times by the so-called homogenization algorithms used by NOAA and other organizations. This happens because the homogenization algorithm assumes a permanent change in temperature when the station housing is repainted, replaced, or even cleaned. But these changes are temporary, because the new paint starts ageing and accumulating dirt again.

In this first investigation I analyzed two sets of data provided by NOAA: the temperature data from thousands of weather stations around the world, before and after homogenization. I determined how much the weather stations warmed on average after each homogenization step, and then removed this ageing-related warming from the data.
The result was a reduction of the temperature change between the decades 1880-1890 and 2010-2020 from 1.43°C to 0.83°C (95% CI: 0.46°C to 1.19°C). I wrote a paper on this analysis, in which I describe the methods in detail:
https://osf.io/preprints/osf/huxge
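For readers who want to see the mechanics rather than the prose, here is a minimal sketch of the correction idea. It is not the actual code from the paper: the data layout, the column names, and the assumed ageing rate are placeholders for illustration only.

```python
import numpy as np

# Assumed average warming per year attributable to housing ageing after each
# homogenization step (illustrative value only; the paper derives this
# empirically from the NOAA before/after data).
AGEING_RATE_C_PER_YEAR = 0.01

def remove_ageing_warming(years, anomalies, breakpoints):
    """Subtract an ageing signal that restarts at every homogenization
    breakpoint (repaint, replacement, cleaning, station move)."""
    corrected = np.asarray(anomalies, dtype=float).copy()
    for i, year in enumerate(years):
        past = [b for b in breakpoints if b <= year]
        if past:
            elapsed = year - max(past)   # years since the last breakpoint
            corrected[i] -= AGEING_RATE_C_PER_YEAR * elapsed
    return corrected
```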
One might question whether the methods I used were the right ones, and whether I applied them correctly. Therefore, I tried a second, simpler analysis:
I compared three simple analysis results:
- The temperature anomaly obtained by simply averaging all weather station anomalies after homogenization. (Just as a reference; averaging is dubious in the best of cases, and a non-area-weighted average is even more dubious.)
- The temperature anomaly obtained by simply averaging all weather station anomalies before homogenization.
- The temperature anomaly obtained by simply averaging all weather station anomalies, but removing the data from the years in which ageing has the largest effect. Only the data from years 13 to 30 after each homogenization step remain.
By simply deleting the data that may be affected by the homogenization and that is probably most affected by the ageing of the weather stations, I avoid making any methodological or statistical assumptions that might create a bias in the analysis.
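As a rough illustration of the filtering described above, the sketch below keeps only records falling 13 to 30 years after the most recent homogenization breakpoint. It is not my actual analysis code; the column names and input layout are assumptions.

```python
import pandas as pd

def keep_years_13_to_30(df, breakpoints):
    """Keep only records 13-30 years after the latest homogenization
    breakpoint for one station; drop everything else.

    df          : DataFrame with 'year' and 'anomaly' columns (assumed layout)
    breakpoints : sorted list of years at which homogenization adjustments
                  were applied for this station
    """
    def age_since_break(year):
        past = [b for b in breakpoints if b <= year]
        return year - max(past) if past else -1   # -1 = no breakpoint yet

    age = df["year"].map(age_since_break)
    return df[(age >= 13) & (age <= 30)]
```

The third result listed below is then the simple (non-area-weighted) average of the anomalies that survive this filter.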

This simple average of the anomalies shows a larger warming trend than the area-averaged data from GISTEMP:
- Full data set homogenized: 1.94°C warming (3.49°F).
- Full data set non-homogenized: 1.67°C warming (3.01°F).
- Data from years 13-30: 1.43°C warming (2.57°F).
The anomaly from the years 13-30 after each homogenization step shows 0.51°C (0.92°F) less warming than the homogenized full data set.
However, the ageing that occurs during the interval between years 13 and 30 remains as an error. Furthermore, what I described as “self-harmonization” in my paper remains in the data set. Because of these problems with my second analysis approach, I tried a third analysis approach:
I considered that analyzing anomalies “bakes in” any trend error due to ageing or any other cause. One should rather use absolute temperatures, because the thermometers are precision instruments that are calibrated on a regular basis. However, simply averaging the absolute worldwide temperature measurements would introduce a new bias: the changes in the number and distribution of weather station locations around the world.
At first, most weather stations were located in Europe and North America, which are comparatively cool and temperate regions. Then, at the beginning of the 20th century, many more weather stations were introduced in the rest of the world, especially in warmer countries. The numbers increased in the comparatively cold Soviet Union and its allies in the middle of the 20th century. Towards the end of the 20th century the number of weather stations in North America and Western Europe increased, but the numbers in the former Soviet Union and its allies decreased drastically. All these non-climate-related trends have a large impact on the averaging of the absolute weather station data. I tried a few variations in averaging and got massively different results:

These huge variations in the temperature trends due to small changes in the way the data is averaged are quite suspicious. Therefore, I tried to eliminate the effect of different trends in weather station densities in different regions by averaging the absolute temperatures and the temperature anomalies in each region and comparing the results. Luckily, the weather station data are tagged with a letter code for the country in which each station is located.
I calculated the absolute temperatures and temperature anomalies for the following 29 countries, which were selected for having the most complete data sets for the past 140 years:
Netherlands, Portugal, South Korea, New Zealand, South Africa, Uruguay, Uzbekistan, USA, Iceland, Germany, China, Brazil, Egypt, Turkey, India, Australia, United Kingdom, France, Spain, Italy, Austria, Ireland, Hungary, Japan, Morocco, Poland, Sweden, Tunisia, Ukraine.
Then I calculated the difference between the temperature anomaly and the absolute temperature for each country. Finally, I calculated the trend of these differences over time:
| Country | Difference in trends (°C per year) |
| --- | --- |
| Netherlands | 0.0064 |
| Portugal | 0.0168 |
| South Korea | 0.0062 |
| New Zealand | -0.0064 |
| South Africa | 0.0042 |
| Uruguay | 0.0030 |
| Uzbekistan | 0.0144 |
| USA | 0.0137 |
| Iceland | 0.0242 |
| Germany | 0.0087 |
| China | 0.0477 |
| Brazil | -0.0064 |
| Egypt | -0.0023 |
| Turkey | 0.0137 |
| India | -0.0051 |
| Australia | 0.0035 |
| United Kingdom | 0.0007 |
| France | 0.0122 |
| Spain | -0.0011 |
| Italy | -0.0064 |
| Austria | -0.0047 |
| Ireland | 0.0109 |
| Hungary | 0.0015 |
| Japan | -0.0042 |
| Morocco | -0.0036 |
| Poland | -0.0107 |
| Sweden | 0.0055 |
| Tunisia | 0.0048 |
| Ukraine | 0.0174 |
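For a single country, the quantity tabulated above can be computed roughly as follows. This is a sketch, not the code actually used; the input layout is an assumption for illustration.

```python
import numpy as np

def difference_trend(years, mean_anomaly, mean_abs_temp):
    """Trend (°C per year) of the gap between a country's yearly mean anomaly
    and its yearly mean absolute temperature; a positive slope means the
    anomaly series warms faster than the absolute series."""
    years = np.asarray(years, dtype=float)
    diff = np.asarray(mean_anomaly) - np.asarray(mean_abs_temp)
    slope, _intercept = np.polyfit(years, diff, 1)
    return slope
```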

I analyzed this data statistically:
- Lower bound 95% confidence interval: 0.00137°C/a
- Mean: 0.00568°C/a
- Upper bound 95% confidence interval: 0.00999°C/a
For 140 years this leads to the following differences between the warming trends of the absolute temperatures and the temperature anomalies:
- Lower bound 95% confidence interval: 0.19°C
- Mean: 0.80°C
- Upper bound 95% confidence interval: 1.41°C
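These summary statistics can be checked with a few lines of code. The sketch below assumes a t-distribution confidence interval over the 29 country values; it approximately reproduces the figures quoted above, though the exact interval depends on the statistical assumptions used.

```python
import numpy as np
from scipy import stats

# Per-country trends from the table above (°C per year)
trends = np.array([
     0.0064,  0.0168,  0.0062, -0.0064,  0.0042,  0.0030,  0.0144,  0.0137,
     0.0242,  0.0087,  0.0477, -0.0064, -0.0023,  0.0137, -0.0051,  0.0035,
     0.0007,  0.0122, -0.0011, -0.0064, -0.0047,  0.0109,  0.0015, -0.0042,
    -0.0036, -0.0107,  0.0055,  0.0048,  0.0174,
])

mean = trends.mean()                       # ~0.00568 °C/a
sem = stats.sem(trends)                    # standard error of the mean
lo, hi = stats.t.interval(0.95, df=len(trends) - 1, loc=mean, scale=sem)

print(f"per year : [{lo:.5f}; {hi:.5f}], mean {mean:.5f} °C/a")
print(f"140 years: [{140*lo:.2f}; {140*hi:.2f}], mean {140*mean:.2f} °C")
```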
This means that analyzing the anomalies overestimates the warming by a statistically significant amount.
This analysis still includes a bias from the trends in weather station locations within each country, but there is no reason to assume that all of these 29 countries share the same bias.
In conclusion, all three analysis approaches had similar results that point towards substantially less global warming within the last 140 years than previously thought.
‘Towards the end of the 20th century the number of weather stations in North America and Western Europe increased, but the numbers in the former Soviet Union and its allies decreased drastically.’
I’ve always wondered if the rise and decline in the number of gulags had any impact on the surface record – someone had to manage the ‘zeks’, and I wouldn’t put it past anybody that was dependent on Moscow for scarce resources, particularly in winter, to tweak their temperature records so as to improve their claim on said resources.
good point!
I remember reading about exactly that. I’ll poke around a bit more, but this is a good read – https://wattsupwiththat.com/2008/11/15/giss-noaa-ghcn-and-the-odd-russian-temperature-anomaly-its-all-pipes/
See also https://wattsupwiththat.com/2008/03/06/weather-stations-disappearing-worldwide/
I’ve commented many times over many years on the jiggering of temperature data (referring often to the fact that US individual state warm temperature records are still nearly all from the late 1930s). I remember the super El Niño of 1998 did not set a new record, and it was followed immediately by almost two decades of the Dreaded Pause.
GISS’s Jim Hansen invented temperature jiggering in 2007 on the eve of his retirement, probably figgering that without jiggering, his extravagant forecasts of climate disaster (Westside Highway below sea level by 2000, then changed to by 2020) would be a terrible legacy for him. Well, the old saying ‘You might as well be hanged for a sheep as a lamb’ was how he must have thought about it. He pushed the late-1930s to early-40s 20th Century high stand down almost a degree Celsius.
This got rid of two problems. It submerged the high stand, anointing 1998 as the new high T, and shortened the 35-yr “Ice Age Cometh” scare to a little double blip. It also showed the bold way forward, and the entire global network of thermometers was shamelessly ravaged.
Now every artifice is in the jiggering tool box: station moves to airports by runways, or shutting down stations that don’t ‘cooperate’. Do you think the climate wroughters don’t know the repainting trick?
Nice analysis.
However the Catastrophic Climate Change narrative is a juggernaut that won’t end until the next ice age…or a massive failure of our energy systems, which will be equally effective at ending civilisation.
I hope you’re wrong but I recognize that AGW is a major part of the Marxist “New World Order” that has been in process for the past century and won’t go away with facts.
Which makes you wonder about the agenda of the AGW trolls that infect this blog.
Those who support the AGW scam that they MUST know is aimed at the intentional destruction of western society.
But they still twist and turn, mislead and lie.. just to carry on that deceitful support.
Maybe, or— they’re just fools.
Why can’t they be both !
After all their leaders have said in the news and elsewhere..
… they can’t pretend not to know what they are supporting.
It’s probably greed. The rich own the news media!
That at least some of them believe the ends justifies any means is a huge indication of their mindset.
According to you, a nitwit, almost 100% of scientists who lived on this planet in the past century are AGW trolls. The funny thing is you have no idea how stupid that belief is.
Almost 100% of scientists on this blog are sceptics, those who troll them are obviously not scientists.
Are Roy Spencer, John Christy, Judith Curry, Richard Lindzen and William Happer, all science Ph.D. believers in AGW, AGW trolls?
bNasty2000 thinks so, and that is why I correctly call him a nitwit. He also believes in UHI as a cause of global warming, which is AGW, while denying AGW. He contradicts himself.
Poor little-dickie-bot.
He knows AGW™ refers to warming by CO2.
He KNOWS he has no evidence of that.
So he tries to redefine what the whole AGW™ scam is about…
One of the most BLATANT and PATHETIC ploys he has ever tried.
What a total LOSER !!!
I defy him to produce one place where AGW™ has been defined as urban warming by the AGW scammers…
… that would destroy the whole totalitarian anti-CO2 Nut-Zero agenda.
… which is all about reducing CO2 emissions.
“Richard Greene” persists in provoking and inciting conflicts.
I comment on articles and give science explanations for my opinions.
Losers like bNasty reply with zero science 100% insult comments. I will not be insulted and remain silent. I pay back in kind.
You just posted a science free insult too. I suppose my science must be 100% correct if your only response is a childish insult.
Still no evidence of warming by atmospheric CO2
Just another childish rant.
What a LOSER. !
Still only got a fake consensus ???
That is truly sad !
And incredibly stupid of you to advertise that fact.
I think you’re right- at least it seems that way here in New England.
How frequently are weather stations cleaned?
Possibly not very often – see previous articles on WUWT that mention severe lack of resources for maintenance, particularly for US stations, that probably include cleaning and repainting.
But plenty to spend on brand new super-computers to run their computer games.
This problem goes back to before there were any computers.
I’m no engineer but I’d think it would be possible to design science quality thermometers that need little or no maintenance.
I think they tried that but they still need cleaning. They moved away from the old ‘Stevenson screen’ models to new designs but these are still not maintenance free.
I am an engineer and regular maintenance and verification/recalibration of any measurement instrument is critical to production of reliable data. Even liquid in glass thermometers need verification – usually at 5 year intervals. In properly controlled accredited laboratories if an instrument fails a verification check, all data obtained from that instrument since its last valid verification is considered suspect.
Reminds me of a nearby factory, “L.S. Starrett Company”, that makes high end measuring tools. I went on a tour there a few years ago. All products are hand made. They bring in nothing but raw materials: sheets of metal, etc. It’s a large facility, with countless small “shops” where each stage of the production occurs. The workers looked reasonably happy compared to assembly line work. It’s one of the few surviving factories in north central Wokeachusetts.
When I was a plant manager we used lots of Starrett micrometers. In our quality program ensuring the mikes were constantly calibrated was the hardest thing to do.
Everything needs maintenance as it ages. Even with electronic ones, the readings from the sensors can change over time. You can design circuitry to try to compensate for known ageing; however, no two components are identical, nor will they age in exactly the same fashion.
So that’s another good reason to not have much faith in temperature data.
When I see the ocean or a lake boiling, then I’ll think there might be an emergency.
Zackly what I want to know. Are there cleaning/painting records for all the stations that NOAA and GISTEMP monitor? Following the LINK provided didn’t answer that question.
Does the care and feeding instructions for weather stations stipulate cleaning and painting schedules?
https://novalynx.com/manuals/nfes-2140-part2d.pdf
Not to be dismissive of anyone’s diligent efforts to identify erroneous constructs, but I maintain that the fundamental flaw in all these “average temperatures” constructs is the very source of the input values of the constructs.
Yes, individual siting, stations and recording instruments all have their different identifiable idiosyncrasies.
Which means that using them for the purpose of pursuing the “holy grail” of determining a “global average temperature” is a fanciful pursuit.
The only useful info to come from tracking temperature readings from individual stations over time is to gain some appreciation of the periodic swings in ambient weather conditions at each particular station.
What happens at one particular station is no determinant of what happens at other stations.
So what’s the point of “averaging” these values?
These constructs have zero application to any life forms on this planet.
“So what’s the point of ‘averaging’ these values?”
It’s effectively a Comfort Blanket, a Teddy Bear or ‘Soother’
The Average gets rid of uncertainty = always a scary thing
Only in Australia – as attached – a Climate Scientist seeking comfort
One is left wondering how the kid got in there and hence, why Plod deemed it necessary to break the glass to get him out.
Does that not imply that the 3 year old is infinitely brighter than Plod?
Why didn’t the Police simply put some coins into the device and extract the infant with the claw?
Searchup “boy girl stuck in claw-machine”
There are dozens of smart little kids.
Such are symptoms of poor parenting.
The containers can be dismantled – not opened.
No, it just means that the toddler is far smaller than the average plod who cannot follow said toddler in through the small chute in the claw machine.
“These constructs have zero application to any life forms on this planet.”
On the contrary.. they are very useful to the marxists trying to bring down western society.
Can they be described as “life-forms”
The rich people own the news media that is spreading the so-called “climate crisis” story.
Proponents of averaging often claim that it correlates over large regions. However, this assertion is incorrect. I have recently analyzed the temperature readings from three different stations (1, 2, 3) and found that they exhibit very different and complex behavior, regardless of their elevation and topography. They only correlate when a pressure system enters the area, either warming or cooling the temperature. However, the conditions at those areas, and more specifically the microsite, obviously play a far more important role in determining the recorded temperatures at any given time.
Station B is a USCRN station; Station A is placed on a sidewalk, and its impact is evident due to the consistently elevated minimum temperature and reduced diurnal variability. Station C is a new station that started its record in 2000. I am only assuming that, given it’s a new station and has variability comparable to the USCRN station as opposed to Station A, it has a decent record.
walter,
I hope that you will write up your results, like in an article for WUWT.
The texture of the temperature/time plots of these 3 stations seems to be different, with bumps and dips happening at different times. But, over several years, do the trends, in units like degrees change per year, stay the same between stations, or diverge, or converge?
Separately to your work, I am trying to work out what causes the wriggles in typical graphs of daily data, smoothed.
Geoff S
Geoff,
I’ll crunch all the data, and when completed, I’ll present you with the results.
Geoff,
Here’s the data for Stations A & C going back to November 1999 since that’s when Station C’s record-keeping began. There is a cooling trend in Station C, whereas with Station A, there is a very small warming trend. I think I also see increasing divergence over time, but with the CRN data I link below, it’s hard to know whether that’s just spatial variability. Regardless, I’m pretty certain its impact is there, as I showed yesterday. I’ll keep working on this, but the more and more months I look at, like what I showed you yesterday, the more I believe this is another flaw in averaging. But again, this is preliminary.
Here’s the time series for the three stations, all superimposed upon each other. The data for the CRN station only goes back to November 2007. The CRN station has the largest trend and aligns better with Station A, although all of them have very close and, arguably, statistically indistinguishable trends.
Here are some pictures of Station A; I visited the station today. (1, 2)
So, in conclusion, it’s challenging to draw conclusions from these trends. It’s consistent with the article you wrote on determining UHI with historical records: you can’t.
Once I have more useful results, I’d love to share them with the WUWT community. The last time I tried to write an article, I was confused about where to submit it: do I send an email to someone? I tried to write it here in the ‘Submit Story’ section, and I remember being confused by the instructions.
Walter
Best way is to email to WUWT and attach your article. The email to use is in the “submit story” section, from memory.
I’ve often thought it would be more accurate to only measure trends, as opposed to trying to bridge discontinuities etc.
The question: can we assume that two nearby stations will have a similar trend even though they may have very different absolute temperatures due to local factors?
For example, Darwin maximums have two raw data sets. Darwin Post Office shows no temperature rise or fall from 1910 to 1936.
We then have a cyclone, war and station move, so don’t count the unreliable period from 1937 to 1941.
Darwin airport then runs from 1942 to 2023 and has a 0.8C temperature rise over this period.
The only actual measured rise in temperature from 1910 to 2023 is, therefore, 0.8C.
Acorn-Sat, Australia’s official temperature keeper, has this rise as 1.94C per century. They arrive at this because they use a homogenization algorithm to try and bridge the 1937/1941 gap, as well as some doubtful statistical manipulation in 1980.
I don’t think they need to do this homogenization. Just simply look at the measured rise in raw temperature. It should be consistent for both stations and is all we need, particularly as the gap period is unreliable anyway.
What do you think?
If they are trending one station then you should just ignore the gaps and trend the data you have in piece parts, just as you say. You don’t even have to try to trend using the piece parts together. Just use the data you have. With a gap of several years there is no physical guarantee as to what happened in the gap.
Homogenization over several stations is idiotic in my opinion. All it does is spread calibration errors, uncertainty, and any UHI around the area, thus making *everything* screwy. If you are averaging a number of stations then average what you KNOW; it’s a sample anyway, and adding one or two elements shouldn’t affect the average very much. What *should* also be done is to calculate the variance of the data in the analysis set you have. That is an indication of how accurate your estimate of the mean temperature is; the wider the variance, the more possible values the estimated mean could actually take on.
Climate science never does the variance piece just like they never propagate measurement uncertainty – they assume all stated values are 100% accurate with a variance of zero in the data set (an impossibility).
If they truly tracked variance it would soon become obvious that there isn’t any way to actually find anomalies in the hundredths digit.
Bob,
Thanks for the reply. That’s a significant adjustment; no wonder Acorn-Sat has been receiving backlash lately. When you inquire about whether two nearby stations have similar trends, I think that is nonsense because, as you say, they have completely different absolutes. Currently, I am analyzing anomalies from separate stations in one area to ascertain whether anomalies genuinely correlate. I think, if anything, you should treat a broken series as a separate entity. Mr. Gorman captures my thoughts well: homogenization is a dumb approach. It simply muddies the water and assumes what we don’t know. The logic behind homogenization suggests that only the anomaly time series matters, but to truly understand how the weather in area x is changing, you need to examine the actual measurements (maximums and minimums). I also believe that the Urban Heat Island (UHI) effect will likely be more pronounced in warm, sunny conditions than in rainy, cloudy conditions. Averaging days like those together for a monthly or yearly average could potentially obscure the true picture.
Averaging temperatures can reduce uncertainty under specific conditions… Assume a well-mixed container that is steady-state (no heat entering or leaving) but still variable enough to produce different readings at different points. THEN, taking multiple readings at various locations around the container and averaging the result CAN reduce the uncertainty of the value obtained. But think, now: when is our atmosphere ever steady-state? By the time you have finished recording your reading, the conditions have changed, so measuring and recording again would not decrease, but rather would increase, error estimates. Now, extend that to the entire planet: the only way averaging any readings at all would have any meaning is if ALL of the readings were recorded at the exact same time! But our methodology specifically excludes that, in taking only the maximum and minimum readings, which can occur at any time of the day, and do not coincide except by sheer chance between any two stations, even stations located within sight of each other. The whole thing is a crock.
“THEN, taking multiple readings at various locations around the container and averaging the result CAN reduce the uncertainty of the value obtained.”
It will more precisely locate the mean. But if the measurement uncertainty endemic to the measuring devices is larger than the standard error of the mean you’ll still be in the realm of the Great Unknown. There simply wouldn’t be any use in going further than the measurement uncertainty interval. The average you estimate shouldn’t have any more decimal places than the measurement uncertainty.
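A tiny numerical illustration of the point being made here; the instrument uncertainty, the "true" temperature, and the number of readings are arbitrary assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

instrument_uncertainty = 0.5                    # assumed +/-0.5 °C instrument
readings = 20.0 + rng.normal(0.0, instrument_uncertainty, 100)

mean = readings.mean()
sem = readings.std(ddof=1) / np.sqrt(readings.size)   # standard error of the mean

# The SEM (~0.05 °C) locates the mean precisely, but (per the argument above)
# the result is still only known to within the +/-0.5 °C the instrument can
# deliver, so extra decimal places add no real information.
print(f"mean {mean:.3f} °C, SEM {sem:.3f} °C, instrument +/-{instrument_uncertainty} °C")
```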
btw: It does take a special sort of mind to come up with stuff sometimes BUT:
Question: Has anyone ever put a Solar Power Meter inside a Stevenson Screen, next to the thermometer?
Let’s face it, they are hardly ‘sunlight proof’, it has got to be dazzlingly bright inside there when El Sol is beaming his magnificence on a clear (cyclonic) day
just ‘a few less clouds‘ could have an amazingly large effect
Where sunlight has the most impact on AWS, during the winter at least, is when the sun is shining but there is cloud cover above the AWS. This can shoot up the temp which an AWS records to well above what a glass thermometer is recording.
But the over-sensitivity to heat can make electronic thermometers “run warm” even under thick cloud cover compared to glass, both during the day and at night.
The stations operated by the German DWD (Deutscher Wetterdienst) measure insolation. They also have a very rigorous calibration and cleaning schedule. Furthermore, they tested the plastics used for the weather stations for ageing, and used those that don’t yellow. I don’t know how long they have been doing this.
Very nice.
Like I say, measuring the global temp down to 1/10 deg is just too hard; there is too much against it. Maybe down to 2 deg, yes, but there is too much inaccuracy for a precise measurement, for numerous reasons.
+100
J Boles,
We would like to read accounts from people trying to maintain constant temperatures under controlled conditions. I did some work years ago when I owned a lab, but electronics have improved since then.
In 2019 I asked some national standards laboratories about how well temperatures can be controlled. Summary from one lab was
“NPL has a water bath in which the temperature is controlled to ~0.001 °C, and our measurement capability for calibrations in the bath in the range up to 100 °C is 0.005 °C. However, measurement precision is significantly better than this. The limit of what is technically possible would depend on the circumstances and what exactly is wanted.”
National Physical Laboratory | Hampton Rd | Teddington | Middlesex | UK | TW11 0LW
………………….
This was for discussions of the accuracy of Argo float measurements. It would be interesting to get similar comments from authorities reporting on more than control of water bath temperatures, if readers have them.
Geoff S
That’s why I identify the 95% confidence interval.
Science without confidence intervals is not science.
Where the real issues are is with the switchover from glass to electronic thermometers.
The current study I am doing on this subject is showing that electronic thermometers are over-sensitive to picking up heat compared to glass thermometers, and it’s this that has totally messed up the data and turned the post-1980 warming trend into the fake, overcooked mess it is, as clearly shown in the graph above.
It’s this sensitivity that is making them not only “run warm” during daylight hours but also, depending on the weather, during the night as well. The only time electronic thermometers “run cool” is when the nights are clear or with very little cloud cover.
It’s an utter disgrace what’s been happening and it needs calling out.
How are they calibrated? If you put each type next to the other, you’ll get different readings? Consistently?
What I am doing is comparing my glass thermometers (I now have 2, to make sure I was getting a true reading) with 2 local AWS, one urban and one rural, while also making notes of weather conditions during the time. Under certain weather conditions the temps can stay even over a wide area, which allows me to test any difference between AWS and glass thermometers.
The 11.1.24 was one such day, as both urban and rural AWS temps peaked at 6.9C at around the same time. At times like these I would have expected my glass thermometers to record the same sort of temp at the same time. But they did not; they recorded the temp at 5.7C.
That sort of difference should not have happened under the weather conditions at the time.
Which were thick cloud cover with a Force 3 to 4 wind.
Also, during the daytime and during nights with thick cloud cover, the AWS are nearly always “running warm” compared to my glass thermometers.
No, I have been comparing my glass thermometers with 2 local AWS’s since Jan 6th.
During my study the local AWS’s have been running warm for at least 70% of the time.
“The error was that warming of the weather station housings due to ageing of the paint…”
hmmm… who’d a thought?
Easy fix – just throw some soup at it (/s)
But not hot soup…
Given the change in composition of modern paints due to previous pigments being banned or replaced by cheaper formulations I would imagine there are significant changes in the thermal properties over the years.
Yes, I addressed this in the linked paper
It’s not just the paint. Anything impacting air flow through the screen will impact the reading. And since climate science is trying to go down to the milliKelvin in its anomalies, it wouldn’t take much. A mud dauber nest in the air intake is all it would take.
The conversion from whitewash to latex paints is another source of error, especially if the records don’t show when the conversion occurred.
This makes me think that the correct methodology for calculating the temperature trend is to trend each site against itself. Average the whole year, then report only the relative change for each year. Then average only the relative changes, station by station, because adding absolute values from a warmer or cooler region mid-experiment alters the mean value artificially. But if only the relative changes were reported station by station, then we’d see the true trend.
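A minimal sketch of that procedure might look like the following; the station/year/temp column layout is an assumption for illustration.

```python
import pandas as pd

def mean_relative_change(df):
    """Average each station's whole year, take year-over-year changes per
    station, then average only those changes across stations for each year.

    df: DataFrame with 'station', 'year' and 'temp' columns (assumed layout).
    """
    yearly = (df.groupby(["station", "year"])["temp"]
                .mean()                      # whole-year average per station
                .groupby(level="station")
                .diff())                     # relative change vs. previous year
    # Average only the relative changes across stations for each year;
    # cumulatively summing the result gives an anomaly-like curve that never
    # mixes absolute levels from stations that enter or leave the network.
    return yearly.groupby(level="year").mean().dropna().cumsum()
```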
I think it would be telling to plot the yearly anomaly alongside the average absolute sine value of the latitude of all the stations.
I’ve offered this before. Assign each station a “+”, a “-“, or a “0” (zero). You don’t even need to know the actual trend, just whether it is up, down, or sideways. Then just add’em all up! Do you get more pluses, more minuses, or more stagnant?
What you are describing is almost exactly how temperature anomalies are calculated.
However, this bakes in non-climate trends as well.
Hello again, Moritz.
Your contributions are important and thoughtful and free of you being part of a group with an agenda.
The only thing worse than the time wasted writing this is the time wasted reading it.
This is a HUGE steaming pile of farm animal digestive waste products
Mathematical mass-turbation
No one who discusses temperature measurements on land and ignores the oceans (71% of Earth’s surface) can be taken seriously
No one who discusses 1800s temperatures, which are mainly infilling, can be taken seriously.
While there are potential errors from the types of paint and the cleanliness of Stevenson boxes, I see no experiment here with two such boxes perhaps 10 feet apart to document differences.
And how would one separate the changes caused by box paint from changes in box size, box location, box immediate surroundings, economic growth in the vicinity of the weather station, infilling of missing data and repeated instrument changes?
Not to mention unpublicized adjustments to allegedly “raw” data.
In the end the global average temperature is whatever we are told it is. We have no way to determine if the number is right, or even useful.
There are alternative satellite data, but UAH and RSS come up with different global average statistics using the same satellites.
Fortunately, historical temperature data are not used for wild guess predictions of CAGW.
Those predictions are not based on any data, which may explain why they have been wrong since 1979
RSS now use “climate models” to re-adjust their data.
Here is a comparison of RSSv3 vs RSSv4 last time I bothered looking at RSS.
NOAA STAR and radiosonde data match UAH, as does the only maybe-pristine network, USCRN.
RSS made a huge arbitrary warming adjustment in either 2015 or 2016 to better match surface data and that’s when I stopped caring about the RSS compilation.
See.. you can say sensible things, if you try really hard ! Green thumb for dickie !!
Don’t know why people are giving this Greenie comment the red thumb..
Apart from the usual bravado blustering, he doesn’t say anything that stupid.
“Don’t know why people are giving this Greenie comment the red thumb..”
That’s a WUWT tradition
Let’s face it..
… 99.9% of the time, you deserve every red thumb you get..
… for your moronically idiotic and childish rants.
Most of your claims are wrong.
People live on land, so I care more about land temperatures.
I did no infilling
In the paper I reference a study by an Italian team, who do exactly this experiment.
You can’t separate the causes of stepwise changes; that is exactly my point!
I publicly released a paper describing the methodology and the Python code for my first approach. If somebody is interested in the code that I used for the second and third approaches, then please message me.
GISS also offer their code for download. They are completely transparent.
RSS comes up with a higher trend in their final data than the trends of all but two satellites. I find this quite suspicious. I prefer UAH.
Only the errors that you care about matter.
The global average temperature is whatever the government authorities tell you it is.
Their predictions of global warming doom require a warming trend after 1975, so that’s what we will hear.
It could be snowing in July in NYC and they will still claim the global temperature is rising.
The 7% rise of atmospheric CO2 from 1940 to 1975 was accompanied by significant global cooling. That cooling was inconvenient for the popular CO2 is evil narrative, so it disappeared in the following decades.
In 50 to 100 years, I predict the US Dust Bowl of the 1930s will be remembered in the history books as the 1930s Snow Bowl.
The calculated global average temperature and trends are a fiction that we must stop. Rather than homogenize and combine data for all the newer stations with the older ones, examine each station by itself and calculate only the trend for that station. Temperatures do not increase or decrease equally around the world, and combining their data just generates false trends.
Publish the relative trends of all the stations around the world with links to each station’s own time series just like the Relative Sea Level Trends interactive map at https://tidesandcurrents.noaa.gov/sltrends/.
After you have a collection of individual trends for every station, figure out a way to determine a “global” average trend to get a sense of “global” warming. All the methods so far like in-filling non-existent data, combining data from stations that have long time series with those that have recent, short series, and arbitrarily removing urban heat island effects through manipulated math rather than comparison to quality data from rural stations produces trends that have little bearing on reality. A trend can only be accurate for a single time series. As soon as you add others in, you dilute and distort the trend.
In my opinion the collection of temperatures to create a global average is a complete waste of a lot of money.
If the change is so small per decade that almost no one notices, it is irrelevant.
We should just be thankful if the change is warming rather than cooling.
Another common-sense post from dickie..
WT* is happening !!!
Meds kicked in, perhaps ???
Well, give him a + vote, then!
Just classify each trend as plus, minus, or zero. Add ’em up: how many pluses, how many minuses, and how many zeros. You’ll know just as much from that as you do from the hokey GAT that climate science thinks it can calculate down to the milliKelvin.
I might try this. It will be processing intensive, because I have to find a numerical solution for fitting regression functions to thousands of data sets that span different time intervals. Let’s see what can be done here.
Should I make it a simple linear regression? Linear + exponential? Quadratic? Cubic?
What should I do with the result? What should I use as a reference?
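Since these are open questions, here is a sketch that makes the simplest choices: an ordinary linear fit per station over whatever interval that station covers, and the plus/minus/zero tally suggested above as the output, so no common reference period is needed. The input layout, the minimum record length, and the “flat” threshold are all arbitrary assumptions.

```python
import numpy as np

def tally_station_trends(stations, min_years=20, flat_threshold=0.001):
    """Fit a simple linear trend to each station's yearly means and count
    how many stations trend up, down, or stay flat.

    stations: dict mapping station id -> (years, temps), where the arrays may
              cover different time intervals (assumed input format).
    """
    counts = {"+": 0, "-": 0, "0": 0}
    for sid, (years, temps) in stations.items():
        years = np.asarray(years, dtype=float)
        temps = np.asarray(temps, dtype=float)
        if years.size < min_years:
            continue                                # record too short to classify
        slope = np.polyfit(years, temps, 1)[0]      # °C per year
        if abs(slope) < flat_threshold:
            counts["0"] += 1
        elif slope > 0:
            counts["+"] += 1
        else:
            counts["-"] += 1
    return counts
```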