Guest Opinion by Kip Hansen — 5 January 2024 — 2500 words/15 minutes
Roger Pielke Jr. recently posted a piece at The Honest Broker titled: “U.S. Climate Extremes: 2023 Year in Review – A Very Normal Year” – which was subsequently reposted at WUWT.
In that post, he uses this graphic:

(I have increased the size of the titles for clarity – kh)
It is easy to see that the trends of both the Maximum January temperatures and the Maximum July temperatures have been rising — more so for January temperatures than July — though this is somewhat obscured by the different scales of the two graphs. [Caveat: The temperature record on which this graph is based is not scientifically reliable before about 1940.] Also, one has to be careful to note what exactly they are really measuring.
This is not the usual average temperature. Not monthly average temperature.
It is Contiguous U.S. Maximum Temperature for these two months, January and July – based on the assumption that these are the coldest and hottest months. At least we can say they represent a cold month and a hot month.
So how do we calculate such a record? Let’s just assume NOAA has done what it usually does – it took some kind of an average of the maximum temperatures reported each day by its weather stations in the Contiguous United States – those temperatures usually reported as “Tmax” in the daily station records.
Let’s leave aside all my usual arguments about the inanity of such averages and just accept the idea that they are trying to represent. (None of this is Roger Pielke Jr.’s fault – he is just reporting what they say in the NOAA produced graphics.)
But rather, consider exactly what they are reporting – Maximum Daily Temperatures (averaged somehow). But how is this measured?
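Before looking at the hardware, it helps to see how simple the arithmetic at the core of such a statistic is. Here is a toy sketch – only a sketch; NOAA’s actual nClimDiv product grids, weights, and homogenizes the station data, and the stations and values below are invented – of averaging daily Tmax readings into a monthly figure:

```python
# Toy sketch of a monthly "Maximum Temperature" statistic: average the
# daily Tmax readings across all stations and all days in the month.
# (NOAA's real nClimDiv product grids, weights, and homogenizes the
# data first; the stations and values here are invented.)
daily_tmax_by_station = {                # hypothetical January Tmax, degrees F
    "Station_A": [31.2, 29.8, 33.1],     # three days shown for brevity
    "Station_B": [28.4, 27.9, 30.0],
}

all_readings = [t for temps in daily_tmax_by_station.values() for t in temps]
monthly_maximum_temperature = sum(all_readings) / len(all_readings)
print(f"January 'Maximum Temperature': {monthly_maximum_temperature:.1f} F")
```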

How was this measured in the early 20th century? They used something like this, the Six’s Min-Max Thermometer.
However, the Wiki explains:
“MMTS (meteorology)
A Maximum Minimum Temperature System or MMTS is a temperature recording system that keeps track of the maximum and minimum temperatures that have occurred over some given time period.
The earliest, and still perhaps most familiar, form is the Maximum minimum thermometer invented by James Six in 1782.
Today a typical MMTS is a thermistor. This may be read locally or can transmit its results electronically.”
Weather.gov offers this information:
[If you are familiar with weather stations you can skip this section. – kh]
Temperature Sensors – Liquid
Thermometers used in a CRS [Cotton Region Shelter] are Liquid In Glass (LIG) and are either alcohol or mercury. Alcohol thermometers are employed in the colder climates where winter temperatures drop below -40 degrees, the freezing point of mercury. Minimum temperature thermometers have a small bar embedded in the liquid that is pulled down the tube as the temperature falls. As the temperature warms again and the liquid moves back up the tube, the bar remains at the minimum temperature. This allows the observer to read the lowest temperature. Maximum thermometers have a small break near the base of the well of liquid at the bottom of the thermometer. As the temperature falls from the maximum, this break in the liquid keeps the liquid in place at its high point. The maximum and minimum thermometers are mounted on a rack. After noting the highest and lowest temperatures, the observer then tilts the rack. This resets the thermometers by rejoining the liquid in the “maximum” thermometer and sending the bar back to the top of the liquid in the “minimum” thermometer. The thermometers are now reset, allowing observation of the highest and lowest temperatures for the next day.

Temperature Sensor – Electronic
[The newer electronic MMTS can look like the one pictured here. – kh]
Another and newer type of thermometer is the Maximum Minimum Temperature System (MMTS). An MMTS is an electronic thermometer, not too different from the type one might buy at a local electronics store. The MMTS is a thermistor. This thermistor is housed in a shelter similar in appearance to a bee hive. This design is similar in functionality to the CRS. Currently, the MMTS requires a cable to connect the sensor with a display. Future plans are for wireless displays. This would eliminate many of the problems associated with cabled systems.
In the 1980s, thermistor MMTS units began to be introduced into the NOAA and NWS systems. [source]
In a larger application, such as the NY State Mesonet, a typical station looks like this:

This is a Mesonet station high in the Catskill Mountains; I took this photo a couple of weeks ago. Circled is the 6-foot temperature sensor. It is specifically an RM Young 41342 [spec sheet] in an RM Young 43502 Aspirated Radiation Shield. The standard version has an accuracy (at 23°C) of ±0.3°C (about 0.5°F), with a 10-second response time.
Why do we need to know the response time when measuring 6-foot (2 m) air temperature? Well, when you were a child (most of us, anyway) the doctor and your mother took your temperature with a “liquid-in-glass” oral thermometer (for me, mercury-in-glass, then later alcohol-in-glass) which you were required to hold “under your tongue” for how long? “For 3 minutes.” That’s how long it took for a “liquid-in-glass” [LIG] thermometer to reliably register a change in temperature. Our original Six’s Min-Max-style thermometers, used for many years and still in use in some places today, had a similar response time to changes in temperature – measured in minutes, not seconds.
This becomes important when looking at the Maximum Temperature record for any weather station. At a properly sited weather station, which would look in many ways something like the Mesonet station pictured above, there is little chance of spurious very-short-term temperature changes being recorded by an electronic MMTS. There are no parking lots, no air conditioners, no jet exhaust, no delivery trucks, no buildings reflecting heat, no odd little wind shifts blowing a stream of uncharacteristically hot air over the sensor for a minute, etc.
Many NOAA weather stations consist of an MMTS alone, on a pole. (See Anthony Watts’ Surface Station Project reports.)

[Interesting note: The NYS Mesonet station TANN, pictured here with a UFO (unidentified finger object) in the upper left, looked to me, when I visited it on a cold snowy morning, to be a very well sited weather station. But its site data page gives a siting rating for various measurements according to WMO SITING CLASSIFICATIONS FOR SURFACE OBSERVING STATIONS ON LAND [required reading for anyone concerned about station siting and the temperature/weather record], in which “WMO guidelines give different variables a classification number, with 1 being the best [on a scale of 1-5]. Higher-numbered classifications indicate that the site’s surrounding environment may cause uncertainty in the data.” Of the three categories rated for this station, it received a “4” for Temperature/Humidity and a “5” for both Surface Wind and Precipitation.]
Back to response time: Why would this make a difference?
I didn’t know, but I had suspicions… so, naturally, I put this question to Anthony Watts – probably the man most knowledgeable about how temperatures are measured inside Stevenson Screens, in the somewhat similar Cotton Region Shelters (CRS), and in modern electronic weather stations:
“Pielke Jr has published these graphs of January and July Maximum temperatures. (his substack and at WUWT.)

What are the chances that some of the rise is due to the use of electronic weather stations which report INSTANTANEOUS highs and lows?”
With Anthony’s permission, I quote his answer:
“Absolutely, I’m convinced that short, local events, such as a wind shift bringing heat from pavement can contribute to a high that is spurious. The MMTS system as well as the ASOS system logs the Tmax – it does not log the duration.
The response time of a mercury or alcohol max/min thermometer basically makes it a low pass filter, and such spurious events don’t get recorded.
The solution is to install a “mass hat” on an electronic thermometer sensor to get its response time down to that of a mercury or alcohol max/min thermometer.” — Anthony Watts (personal communication)

[“Mass hat” – this would be something like a sleeve that slips over the thermistor (the long skinny probe seen in this image), with sufficient mass that it must itself change temperature before the thermistor is affected – thus slowing the response time of the thermistor to more closely match that of liquid-in-glass max/min thermometers.]
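To see the low-pass-filter effect in numbers, here is a minimal sketch – my own toy model, not any official sensor specification – that treats each sensor as a first-order system, dT/dt = (T_air − T_sensor)/τ, and feeds both sensors the same one-minute, 5-degree hot gust:

```python
def recorded_tmax(tau_seconds, air_temp_fn, total_s=1800, dt=1.0):
    """First-order sensor model: dT/dt = (T_air - T_sensor) / tau.
    Returns the highest temperature the sensor ever reads."""
    t_sensor = air_temp_fn(0.0)
    tmax = t_sensor
    alpha = dt / tau_seconds                     # Euler integration step
    for step in range(1, int(total_s / dt)):
        t_air = air_temp_fn(step * dt)
        t_sensor += alpha * (t_air - t_sensor)   # exponential relaxation
        tmax = max(tmax, t_sensor)
    return tmax

# Hypothetical air temperature: steady 20 C, with a 60-second, 5-degree
# gust of A/C exhaust starting at t = 600 s.
def air_temp(t):
    return 25.0 if 600 <= t < 660 else 20.0

print("thermistor, tau = 10 s :", round(recorded_tmax(10.0, air_temp), 2))   # ~24.99
print("LIG-style,  tau = 180 s:", round(recorded_tmax(180.0, air_temp), 2))  # ~21.4
```

The fast thermistor logs a Tmax of nearly the full 25°C gust; the slow LIG-style sensor barely notices it – exactly the behavior Anthony describes, and exactly what a “mass hat” would restore.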

How’s that, you say? It is a result of exactly what they are measuring and recording: the Maximum temperature reading. Here are the hourly Maximums from an imaginary weather station:

We can see the usual diurnal shape, warming to midday, cooling overnight. But, hey, what’s that sticking up in the middle? At 1000 hrs? That, my friends, is a spurious instantaneous temperature reading. You see, this below is the imaginary Anywhere, U.S.A. station:

You can see the MMTS there on the left-hand side, on the lawn, and the five air conditioning units 6 to 8 feet away. Maybe, around 10 o’clock, the building’s air conditioners all started up, timer controlled, and kicked out lots of heat just as an errant puff of wind came around the corner from the building on the right, blowing all that extra heat over the MMTS for a minute. The MMTS dutifully records a new Maximum Temperature. That little spike would be reported as the Maximum for the day and averaged into the Monthly Maximum. As more and more MMTS units are added to the network, more spurious instantaneous Maximums can be recorded, slowly driving the Contiguous U.S. Maximum Temperature of January or July up a bit each year as the number of MMTS units increases the number of spurious readings.
These types of spurious Tmax readings can be caused by all sorts of things. See Anthony’s two Surface Station reports [2009 here and 2022 here]. At airports, a badly sited MMTS can be influenced by passing or turning jet airplanes on the runway, taking off or landing. For parking lot sited stations, a UPS or Amazon truck parked right next to the MMTS can reflect extra heat onto the MMTS for a minute or two. An odd little puff of wind picks up the hottest air six inches above the black asphalt and wafts it up over the MMTS. The point is that it doesn’t have to last long – 10 second response time! New Tmax!
Let me provide a real-time, real-life example from a weather station I have visited many times: C – Turkey Point Hudson River NERRS, NY (NOS 8518962). We can find the real-time standard meteorological data for the last 45 days from this page. Temperatures are recorded at six-minute intervals (which are averaged instantaneous measurements). [Note: ASOS stations, on the other hand, use five-minute intervals.] Looking closely at the data for examples, we find this 30-minute period on December 18th, 2023, from 1436 to 1500 – five six-minute records:

A temperature jump of 4.2°C (or 7.6°F) in six minutes? For over 20 minutes, the recorded temperature remains higher, and then it is back at 10-11°C. [See note just below] There are lots of instances of these types of oddities in the record of this station. In this case, we have a 10-11°C (about 50°F) day suddenly transformed into a 15°C (60°F) day — for 15 minutes. That 15°C is the Tmax for the day – almost 4°C higher than the rest of the day. The average temp (including the spurious reading) for the hour (all six-minute records) in which this oddity occurs is 12.7°C.
Note: “Once each minute the ACU [which is the central processing unit for the ASOS] calculates the 5-minute average ambient temperature and dew point temperature from the 1-minute average observations (provided at least 4 valid 1-minute averages are available). These 5-minute averages are rounded to the nearest degree Fahrenheit, converted to the nearest 0.1 degree Celsius, and reported once each minute as the 5-minute average ambient and dew point temperatures. All mid-point temperature values are rounded up (e.g., +3.5°F rounds up to +4.0°F; -3.5°F rounds up to -3.0°F; while -3.6°F rounds to -4.0°F).” [source: ASOS Users Guide, 1998] The station illustrated is a NERRS station and uses 6-minute intervals, but the algorithm is similar – kh.
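That rounding rule is easy to misread, so here is a small sketch (the function names are mine) implementing it as “round half up, toward positive infinity” – it reproduces all three of the guide’s examples:

```python
import math

def asos_round_f(temp_f):
    """Round to the nearest whole degree F, with mid-points rounded UP
    (toward +infinity), per the ASOS User's Guide examples:
    +3.5 -> +4.0, -3.5 -> -3.0, -3.6 -> -4.0."""
    return math.floor(temp_f + 0.5)

def asos_report_c(temp_f):
    """The whole-degree F value converted to the nearest 0.1 degree C."""
    return round((asos_round_f(temp_f) - 32.0) * 5.0 / 9.0, 1)

for f in (3.5, -3.5, -3.6):
    print(f"{f:+.1f} F -> {asos_round_f(f):+d} F -> {asos_report_c(f):+.1f} C")
```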
Here is how that works to bias the temperature record – both the Tmax record and the Tavg record:
(Click image to see full size in a new tab or window)
The five latest 1-minute values are averaged, giving a new 5-minute average every minute. In the NERRS network, the 1-minute values are instead averaged once every six minutes to create the recorded six-minute temperature record. [Note: different agencies use slightly differing algorithms and timings – NERRS uses six-minute averages, while ASOS uses five minutes.] A single spurious 1-minute temperature causes five spuriously high 5-minute averages in the “averaged every minute” system used in ASOS (the orange trace in the graphs above). In the NERRS network, one spurious reading creates one or more spuriously high recorded six-minute values – two in the graphs above (red trace and stars).
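To make that propagation concrete, here is a minimal sketch with toy numbers (both schemes simplified from the descriptions above):

```python
# One spurious 1-minute reading in an otherwise steady 10 C half hour.
minute_temps = [10.0] * 30
minute_temps[12] = 15.0      # the spurious minute

# ASOS-style: a 5-minute average recomputed every minute (rolling window).
rolling = [sum(minute_temps[i - 5:i]) / 5 for i in range(5, len(minute_temps) + 1)]
print("spuriously high rolling averages:",
      sum(1 for v in rolling if v > 10.0))      # -> 5

# NERRS-style: one block average recorded every six minutes.
blocks = [sum(minute_temps[i:i + 6]) / 6 for i in range(0, len(minute_temps), 6)]
print("spuriously high 6-minute records:",
      sum(1 for v in blocks if v > 10.0))       # -> 1 here; a spurious event
# lasting longer than a minute, or straddling a block boundary, contaminates
# two or more recorded values, as in the graphs above.
```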
The graph at the top of this essay – Contiguous U.S. Maximum Temperature – is created by this process: “Once each day (at 23:59 LST), the highest and lowest ambient temperatures for the current month, along with the date(s) of occurrence, are computed and stored in memory until the end of the following month. On the first day of the following month, ASOS outputs the Monthly Maximum Temperature and date(s) of occurrence, plus the Monthly Minimum Temperature and date(s) of occurrence.” It logically follows that spuriously high instantaneous readings can easily make that monthly Maximum Temperature listing and thus create the graph from NOAA highlighted by Pielke Jr.
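And the final step, per that ASOS description, is simply a running maximum – a single spurious daily Tmax is enough to set the monthly figure. A sketch with invented values:

```python
# The Monthly Maximum Temperature is just the highest daily Tmax, so a
# single spurious spike on one day sets the figure for the whole month.
daily_tmax = [31.0, 30.5, 32.1, 38.4, 31.7]   # hypothetical; day 4 had a spike
monthly_max = max(daily_tmax)
print("Monthly Maximum Temperature:", monthly_max,
      "set on day", daily_tmax.index(monthly_max) + 1)
```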
Yes, it can be confusing, but: the NERRS network does not record each 1-minute temperature, only an average every six minutes. ASOS and MMTS record a new 5-minute average every minute – which is likewise not a record of the 1-minute temperature measurements themselves.
These are examples of spurious instantaneous MMTS/ASOS temperature readings and their effects – and lead to the bottom line:
Bottom Line:
There is a reasonable hypothesis that could or should be investigated:
With the widespread introduction of MMTS and ASOS weather stations since 1980, which record instantaneous temperatures every minute with a 10-second response time, spurious instantaneous high temperatures can be recorded as Tmax, driving up both the daily temperature average (Tavg) and the daily, weekly, monthly, and annual Tmax records.
# # # # #
Author’s Comment:
A good question based on curiosity about an observation of something (anomalous or not) is the basis of all good science and research.
This topic could be important – as all the temperature records (local, US Contiguous, Regional, Global) are based on the record of Tavg – the daily “average” temperature of a weather station. That “average” is not the average of all the temperature measurements for a 24-hour period, but rather the “average” of the Tmin and Tmax of that 24-hour period. Thus, Daily/Weekly/Monthly Average Temperatures are highly influenced by Tmax. [For details, see this document from the National Centers for Environmental Information.]
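A worked example, borrowing the 4.2°C jump from the Turkey Point records above (the other values are invented), shows how a spurious Tmax passes straight through to Tavg at half strength:

```python
# Sketch: the daily "average" is (Tmin + Tmax) / 2, so a spurious
# Tmax spike raises Tavg by half the size of the spike.
tmin, true_tmax, spike = 5.0, 15.0, 4.2         # hypothetical values, degrees C

tavg_clean  = (tmin + true_tmax) / 2            # 10.0
tavg_spiked = (tmin + true_tmax + spike) / 2    # 12.1
print(f"Tavg without spike: {tavg_clean:.1f} C, with spike: {tavg_spiked:.1f} C")
```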
It is not just the Tmax record that can be nudged higher by anomalous instantaneous effects – all of the subsequent temperature metrics are affected as well.
Nearly 20 years ago, K. G. Hubbard et al. produced a paper titled “Air Temperature Comparison between the MMTS and the USCRN Temperature Systems” which found that MMTS systems biased Tmax high and Tmin low. It was based on only a single year’s worth of data, but it indicates that MMTS data was – and perhaps still is – being “corrected” on that basis.
Just to be clear: This Opinion piece represents my personal investigation and opinion on the topic – I have quoted Anthony Watts’ response to my emailed question. Everything else, every word, is my responsibility and does not necessarily represent his viewpoint.
Thanks for reading.
# # # # #
Kip,
Nice article, thank you.
Colleague Chris Gillham, at his web site waclimate.net, reports similar studies that count the population of readings above the 95th percentile; different thermometry (LIG or MMTS) yields different distributions – more heavily populated for MMTS.
Better to see his own words under “Are automatic weather stations corrupting the temperature record?” (or similar wording – from memory).
I’ve also been writing up aspects of this with raw data from near Melbourne. Geoff S
Geoff ==> Thanks, I have looked at that from an earlier link — very nicely done, that.
Excellent discussion of an important topic! However, if I may be a devil’s advocate, this thrust and that of improperly situated stations etc, is irrelevant when the people in charge of the data collection and interpretation are known to have fudged the numbers and continue to do so! I point to the following as evidence for the futility of solving siting and accuracy issues:
https://realclimatescience.com/2024/01/man-made-warming/#gsc.tab=0
The chart shows the value of the historical adjustments to the temperature record, cooling the past and warming the present. This alone is bad enough, but when you plot these “adjustments” against CO2 concentration, you get essentially a straight line as in the attached.
This cannot be a coincidence! The data is clearly being adjusted to fit the hypothesis, which is the antithesis of real “science” methodology.
So again kudos for pointing out this response time problem and for Anthony’s crusade against bad siting of stations, but these efforts are meaningless if the gatekeepers of the data are putting their watermelon (green on the outside and pinko on the inside) thumbs on the scale!
You beat me to it.
It doesn’t matter what the thermometers say, the climate manipulators know best.
Even creating data where there is none etc.
D Boss ==> If the topic of temperature and its measurement, and the analysis of these measurements, and the interpretations of these measurements were EASY, we could have quit writing about it a decade or two ago.
There are so many factors involved, it can be labeled: complex, complicated and chaotic.
” historical adjustments to the temperature record, cooling the past and warming the present.”
That would be adjustments to the historical …. 🙂
Anyway, knowing all the issues of instrumentation, time-of-day, drop-outs, urban growth, and other schist — I would not want to be the person in charge of making a proper temperature series.
I believe the original intentions of the weather folks were on the up and up. My guess as to when this went south was during Al Gore’s time as vice president. Along with Kerry, there should be a mountain with their heads — upside-down — carved into granite. Others, too. A Mountain of Shame.
Peta made a good point in looking at Wunderground weather stations that are connected to the internet. Wunderground, or someone else, ought to take that worldwide data and do some modern trend analysis to see if the trends they get, regionally and globally, match what the “official” sources get.
Yooper ==> If you are familiar with Wunderground’s system, it involves a multitude of back yard, schoolyard, front yard, streetside, rooftop, tree-mounted inexpensive instruments connected to the internet through homeowner’s WiFi.
Interesting, useful in some ways, but not scientific.
For instance, if I want to know the current temperature difference between a point 20 minutes away “up the mountain”, at my home, and at a school 20 minutes away “across the river”, I can go to the Wunderground site and look up the three amateur stations.
And it will tell me the instantaneous temperatures at each. But that’s it….
Fun, but not science.
It IS scientific, Kip, just not properly calibrated. It’s a good idea and promotes a healthy scepticism of the official narrative but, you’re right, using them as if they were an official dataset would not be appropriate. I’m tempted to see if I can find a cheap setup now!
Richard ==> There are scads of electronic weather stations, some of them quite good. Check out those recommended by Wunderground.
You are perfectly correct: “using them as if they were an official dataset would not be appropriate”
An unregulated network of differing equipment, with no siting standards and no standardization, results in just a bunch of numbers. And that ain’t science.
To be truly scientific, each measuring instrument must measure the same thing, in the same way, under the same conditions, and on and on. That’s why researchers carefully standardize their methods and follow them exactly (well, not in CliSci).
I am a HUGE fan of citizen science — and this essay is an example of developing an hypothesis from data available to citizens.
This is one of the reasons why climate science says to use anomalies, to remove calibration bias from the equation. It’s malarky for them just as it would be for trying to use the citizen network as “science”.
I suggested a long time ago that a better way would be to just calculate the trend at each station and assign either a plus or minus value to it. Then add up all the global pluses and minuses. While it wouldn’t provide a weighted value based on the slope of the trend, you would get a value that would be at least as valuable as the anomaly-based GAT today.
Shirly Nott.
old cocky ==> Gavin Schmidt explained it this way in August 2017:
“But think about what happens when we try and estimate the absolute global mean temperature for, say, 2016. The climatology for 1981-2010 is 287.4±0.5K, and the anomaly for 2016 is (from GISTEMP w.r.t. that baseline) 0.56±0.05ºC. So our estimate for the absolute value is (using the first rule shown above) is 287.96±0.502K, and then using the second, that reduces to 288.0±0.5K. The same approach for 2015 gives 287.8±0.5K, and for 2014 it is 287.7±0.5K. All of which appear to be the same within the uncertainty. Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.”
One must use the WayBack machine to see this here. RealClimate has altered the page (without notification or comment) to eliminate this very straightforward confession.
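For anyone checking Schmidt’s arithmetic: if the “first rule” he mentions is the usual combination of independent uncertainties in quadrature – my assumption; the rule itself is not quoted here – his numbers reproduce exactly:

```python
# Check of the arithmetic in the quote, assuming the "first rule" is
# adding independent uncertainties in quadrature (root-sum-of-squares).
climatology, u_clim = 287.4, 0.5     # 1981-2010 baseline, K
anomaly,     u_anom = 0.56, 0.05     # 2016 anomaly, K

absolute = climatology + anomaly                 # 287.96
u_total  = (u_clim**2 + u_anom**2) ** 0.5        # 0.5025 -> the quoted "0.502"
print(f"{absolute:.2f} +/- {u_total:.3f} K  ->  "
      f"{round(absolute, 1)} +/- {round(u_total, 1)} K")
```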
“The climatology for 1981-2010 is 287.4±0.5K,”
That’s an SEM and not a measurement uncertainty figure. So they start off wrong ==> and end up wrong.
Tim ==> The important point is that Schmidt acknowledges the uncertainty (by whatever name) of Global Surface Temperatures. And tells why they use Anomalies. It lets them hide the uncertainty.
It doesn’t let them hide it. It lets them IGNORE it! A subtle but important difference.
What he said was: “One of the most common questions that arises from analyses of the global surface temperature data sets is why they are almost always plotted as anomalies and not as absolute temperatures.
There are two very basic answers: First, looking at changes in data gets rid of biases at individual stations that don’t change in time (such as station location), and second, for surface temperatures at least, the correlation scale for anomalies is much larger (100’s km) than for absolute temperatures. The combination of these factors means it’s much easier to interpolate anomalies and estimate the global mean, than it would be if you were averaging absolute temperatures.”
“correlation scale for anomalies is much larger (100’s km) than for absolute temperatures.”
The value may be smaller but it better be the same on a relative basis or you’ve somehow changed the distribution when you shouldn’t!
Why are you interpolating anomalies?
Why is it easier to calculate the average anomaly than an average temperature? Calculations in a computer should be done using floating-point representations – it shouldn’t be any more of a problem to find the average absolute temperature than to find the average anomaly value.
How do you account for the different variances between the temps in the NH and SH when you combine them? The anomalies inherit the very same variance as that of the components used to calculate the anomaly. In fact, since variances add, the uncertainty of the anomaly should be greater than the uncertainty of either component alone. So averaging the absolute temperature should have a smaller uncertainty than that of the anomaly average!
Or are you going to also claim that all measurement uncertainty is random, Gaussian, and cancels like so many others defending the GAT?
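For what it’s worth, the variance-addition point is standard statistics: for independent quantities, Var(X − Y) = Var(X) + Var(Y), so an anomaly (a measurement minus an uncertain baseline) cannot have a smaller variance than the measurement alone. A quick check with synthetic data (all numbers invented for illustration; the stdev helper is hand-rolled to keep the sketch dependency-free):

```python
import random

random.seed(1)
n = 100_000
temps     = [random.gauss(287.4, 0.50) for _ in range(n)]   # synthetic temperatures
baselines = [random.gauss(287.4, 0.05) for _ in range(n)]   # synthetic baseline error
anomalies = [t - b for t, b in zip(temps, baselines)]

def stdev(xs):
    """Sample standard deviation."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

print("sd(temps):    ", round(stdev(temps), 3))      # ~0.500
print("sd(anomalies):", round(stdev(anomalies), 3))  # ~0.502 -- slightly LARGER
```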
Sorry, I assumed that posters here knew what quotation marks stand for! To explain, for Tim: the passage between the quotation marks – “…….” – is what Gavin posted in this case.
The mind boggles 🙁
That appears to be some strange and convoluted approach used for temperature reanalysis. He seems to have extended the “reasoning” to actual temperature measurements.
The +/- 0.5K interval for absolute temperatures and +/- 0.05K for anomalies seem to be just assumed into existence.
Oh, dear, so that is how they do it?
Boggle squared 🙁
As Gavin says in the conclusion to that post: “When communicating science, we need to hold ourselves to the same standards as when we publish technical papers. Presenting numbers that are unjustifiably precise is not good practice anywhere and over time will come back to haunt you. So, if you are ever tempted to give or ask for absolute values for global temperatures with the precision of the anomaly, just don’t do it!”
Sad but true.
NOAA has been disappointed that the average US temperature they report is not rising fast enough to scare people. They first decided to “fix” their numbers, but it was pointed out that UAH numbers were available so they should not go overboard.
NOAA decided to add personal anecdotes to their monthly average US climate announcements.
They will interview obese old geezers outdoors on hot afternoons on sunny Summer days. They will ask these sweating Big Belly Burts and Berthas if the climate is warmer now than when they were young (and slim). Of course the climate seems a lot warmer to people carrying 200 pounds of extra blubber everywhere they go outdoors. And the anecdotes NOAA uses, without photos, will all say how much warmer the US is now. This is an actual NOAA plan, not satire.
Government Climate Science 101
(1) Bad science that increases the warming rate is okay
(2) Bad science that reduces the warming rate must be adjusted
NOAA and NASA-GISS have predicted rapid dangerous global warming so are biased to make their predictions look good.
That’s why any adjustment that is likely to decrease the warming rate will be ignored, and any change of equipment or methodology that increases the warming rate will be adopted.
The only exception I know of to that Rule of Thumb is the NASA-GISS adjustment for UHI, obviously done ONLY so they could claim the UHI warming was deleted from the global average temperature. Based on NASA-GISS’s tiny adjustment, I don’t believe HadCRUT bothers with a UHI adjustment.
When I investigated the GISS methodology a few years ago, I found the total global UHI adjustment was only 0.05 degrees C, in a century.
Hard to believe.
A normal organization would use that adjustment to reduce the current average temperature. But NASA-GISS did the opposite — they warmed all the past average temperatures slightly to reduce the warming rate slightly.
When I investigated the details of the tiny adjustments, I found that NASA-GISS claimed UHI was increasing, as expected, but only at just over 50% of the land weather stations – and decreasing at almost 50% of the land weather stations. They did not explain why. But the result was a tiny net UHI warming effect over a century.
At the time it appeared that urban weather stations were assumed to have UHI warming, while urban stations that had moved to airports were assumed to have less UHI warming than before. I didn’t buy this methodology and accused NASA-GISS of science fraud in the article on my climate blog, perhaps back in 2019.
I already did not trust NASA-GISS because they had previously “disappeared” most of the global cooling from 1940 to 1974 that was originally reported in 1975, with no explanation.
Are there the same number of weather stations? The highest temperatures captured will also be driven by the number of weather stations. The highest temperature from a set of 1,000 randomly located stations will be higher than the highest from 10 randomly located stations. Likewise, I would bet my money that 800 stations will record a higher maximum than if you had only 700.
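That intuition is easy to check with a quick Monte Carlo – the N(30°C, 2°C) reading distribution below is purely hypothetical, but the effect holds quite generally: the expected hottest reading climbs steadily with network size.

```python
import random

random.seed(42)

def expected_network_max(n_stations, trials=2000):
    """Monte Carlo estimate of the mean hottest reading across a network
    of n_stations, each drawn from a hypothetical N(30 C, 2 C)."""
    total = 0.0
    for _ in range(trials):
        total += max(random.gauss(30.0, 2.0) for _ in range(n_stations))
    return total / trials

for n in (10, 100, 1000):
    print(f"{n:4d} stations -> expected hottest reading "
          f"{expected_network_max(n):.1f} C")   # rises with n: ~33.1, ~35.0, ~36.5
```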
steve(etc) ==> The number of reporting stations changes constantly. The algorithms are complex and complicated. But you’d have to really dig in at NOAA’s NCEI to find out.
PS: To get around the “available username problem” one can just use their full real name like I do.
This is a very interesting subject that has given me a lot to think about and check into.
I am from Norway – where we have seen numerous cold records broken in recent days – but I also spend quite a bit of time in Greece most years. On these cold records they emphasize that “they are mainly at newer stations.” That is not mentioned when summer TMax records are set in later years, though the same is the case. And in the coldest part of Norway we still (!) have stations reporting temperatures three (!) times a day – e.g., Coavddatmohkki (close to Karasjok) – and used in official calculations by MET Norway. In 2024…
When looking into Greek reporting (I think this goes for Italy too – and other countries?), I see that TMax is officially calculated by Greek Meteo from readings every 10 minutes at the stations that make up the national network from which monthly temperature records are made (54 stations, I think).
You highlight a few problems with this, but what strikes me most is the increase in opportunities for a higher reported TMax from this practice – and of course a higher average – as opposed to the past, when hourly readings, or even just readings every three to six hours, were the norm in many places. Instead of one possible “win” per hour, 10-minute readings give six chances per hour; and against three readings a day (one every eight hours), the chances per interval rise from one to 48(!). See the example above from Norway. That impacts TMax – no doubt. That is pure statistical math. So looking at an increase over time in TMax without context is untruthful.
When reported by non-official sources like Wunderground, it can be a 15:12:46 (for instance) reading making the news within minutes of the reading, with no or few questions asked in the news report – fueling the impression that everything is “burning,” and never telling people “this station was set up in 2011, and the reading is an instantaneous TMax reading that will never be official.”
So what of the past? Let me give an example: on the 20th of June 1970, the highest reported TMax for Norway, 35.6°C, was recorded at Nesbyen – and it still stands. A farmer couple ran the station and reported temperatures every three hours at the time. Farmers have quite a bit else to do than stand in front of thermometers. Reviewing this from 2024: what is the statistical probability that the record would have been higher given almost 30 additional readings in the three hours before and after the actual reported number? Sky high. This example can of course be translated to everywhere that daily readings were a lot fewer in the past.
When checking “my” local station in Greece, I found that for the summer of 2023 only 3-6 days a month had a TMax reported “on the hour.” Most were 10-minute readings NOT on the hour – and reported by Meteo as the day’s highest TMax reading. Do the math against measurements every third hour. Every eighth hour…
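Doing that math with a quick simulation – under an invented sinusoidal diurnal cycle (peaking at 30°C at 14:00) with a little minute-to-minute measurement noise – shows the mean recorded daily Tmax rising steadily as the reading interval shrinks:

```python
import math, random

random.seed(0)

def mean_recorded_tmax(read_minutes, trials=5000):
    """Mean recorded daily Tmax when the station is read every
    `read_minutes` minutes. Hypothetical day: a sinusoidal cycle from
    20 C (02:00) to 30 C (14:00), plus 0.5 C of per-reading noise."""
    total = 0.0
    for _ in range(trials):
        readings = []
        for m in range(0, 24 * 60, read_minutes):
            cycle = 25.0 + 5.0 * math.sin(2 * math.pi * (m - 480) / (24 * 60))
            readings.append(cycle + random.gauss(0.0, 0.5))
        total += max(readings)
    return total / trials

for interval in (8 * 60, 60, 10):   # 3/day, hourly, every 10 minutes
    print(f"read every {interval:3d} min -> "
          f"mean recorded Tmax {mean_recorded_tmax(interval):.2f} C")
```

More readings near the afternoon peak mean more chances to catch a high excursion – the “wins” effect described above.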
On top of that, stations are “aborted” or moved – and rarely to cooler locations. Normal Due Diligence would, at a minimum, require us to re-start older stations with both old-tech and new-tech measuring equipment, to see the actual difference the change in reporting makes – on TMax/min, and on the average against the newer stations. Where possible. Not modelled. At least with a global coverage fairly representing what we actually measured “back in the days.” I am fairly sure it will not happen, but a project to show today’s data as they would have been reported in the past should be feasible and of interest to honest climate investigators or scientists.
All the best from W-Norway at −10°C.
ADAV ==> Thanks for the man-in-the-snow report from Norway. CliSci isn’t really into Normal Due Diligence… well, I apologize to those serious scientists who work in the field and do so, but those we read of in the media – the media’s “go-to” experts – just push agenda, truth-be-damned.