Guest essay by Larry Hamlin
The L.A. Times ran an article addressing the 2023 Northern Hemisphere summer temperatures that falsely claimed:
“But in Britain and the United States, global records go back to the mid-1800s, and the two countries’ weather and science agencies are expected to concur that this summer has been a record breaker.”

Despite all the climate alarmists’ politically driven, ignorance-based hype about “record” summer 2023 temperatures, the reality of the 2023 summer temperatures in the U.S. and other global locations is, in fact, disputed by NOAA’s measured data.
NOAA’s 2023 U.S. temperature records covering the NOAA-defined three-month summer period (June through August) actually show that U.S. 2023 summer temperatures were far below “record” summer maximum temperature levels, regardless of whether one looks at NOAA’s national, regional, state, county or city summer temperature data.
Looking first at NOAA’s National Contiguous U.S. Maximum Temperature for year 2023 (shown below) we see a maximum temperature of 85.72 F, which ranks 109th of the 129 summer maximums on record (with rank 1 being the coolest). There are 20 years in which the Contiguous U.S. Maximum Temperature was higher than in 2023, with the highest ever being in 1936 at 87.92 F.
The year 2023 is not even close to being a “record” high summer maximum temperature for the Contiguous U.S.
Looking next at NOAA’s West Regional Time Series for year 2023 (shown below for the West Region) we see a maximum temperature of 86.6 F, which ranks 73rd of the 129 summer maximums on record. There are 56 years in which the West Regional Maximum Temperature was higher than in 2023, with the highest ever being in 2021 at 91.2 F.
There are 8 other NOAA Regional Areas, namely the Ohio Valley, Upper Midwest, Northeast, Northwest, South, Southeast, Southwest and Northern Rockies and Plains, with each of these showing that the year 2023 maximum summer temperature is not the “record” highest; the “record” high years across these regions were 1936, 1988, 1949, 2021, 2011, 2020 and 1936.
The year 2023 is not even close to a “record” high summer maximum temperature for any of NOAA’s U.S. Regions.
Looking next at NOAA’s Statewide Time Series Maximum Temperature for year 2023 (shown below for California) we see a maximum temperature of 87.9 F, which ranks 77th of the 129 summer maximums on record. There are 52 years in which California’s Statewide Maximum Temperature was higher than in year 2023, with the highest ever being in 2021 at 91.9 F.
Of the 48 states in the Contiguous U.S., 47 did not see a “record” high summer maximum temperature in 2023. Only Louisiana had a “record” high summer maximum temperature in year 2023.
California’s year 2023 summer temperature was not even close to a “record” maximum summer temperature.
47 of the 48 Contiguous U.S. States did not have a “record” maximum summer temperature.
Looking next at NOAA’s County Time Series Maximum Temperature for year 2023 (shown below for Los Angeles County) we see a maximum temperature of 85.0 F, which ranks 49th of the 129 summer maximums on record. There are 80 years in which the Los Angeles County Maximum Temperature was higher than in year 2023, with the highest ever being in 2006 at 89.7 F.
There are 58 California counties listed in NOAA’s County Time Series and none of these counties had a “record” high summer maximum temperature in 2023.
Looking next at NOAA’s City Time Series Maximum Temperature for year 2023 (shown below for Los Angeles) we see a maximum temperature of 72.8 F, which ranks 47th of the 79 summer maximums identified. There are 32 years in which the Los Angeles City Maximum Temperature was higher than in year 2023, with the highest being in 2006 at 76.8 F.
There are 9 California cities (including Death Valley) listed in NOAA’s City Time Series and none of these cities had a “record” high summer maximum temperature in 2023.
NOAA’s temperature data as identified and addressed in the above discussion clearly indicates that the U.S. did not have “record” breaking summer temperatures in year 2023 – not at the national level, not at the nation’s regional level, not at the nation’s state level, not at the level of California’s 58 counties, and not at the level of 9 major California cities.
Despite these outcomes in California and the U.S., climate-incompetent alarmist politicians and news media will continue to falsely hype “record breaking heat” as being present across the nation and its states, regions, counties, and cities, based on the grossly and completely invalid misapplication of a global-wide temperature averaging outcome that cannot define temperature outcomes at specific regional, national, state, county, or city locales.
Additionally, given the 2023 summer temperature outcomes in the U.S. addressed above, it seems unlikely that the North American global region had “record” summer temperature outcomes as hyped by the propaganda-driven alarmist media. This outcome is further addressed below.
Alaska’s highest maximum summer temperature was 65.3 F, in 2004, with the 2023 maximum summer temperature being only 60.6 F, ranking it 85th of the 99 yearly maximums on record, as shown below.
The highest reported temperature measured in Canada this year was 41.4 C in British Columbia’s South Coast on August 14, 2023.
Canada’s highest ever reported temperatures are shown below.
This data clearly supports the conclusion that Canada did not have “record” high maximum summer temperatures in year 2023, which is consistent with the same outcome for Alaska, whose territory occupies the same Northern Hemisphere latitudes as Canada.
Given that neither the U.S., Alaska, nor Canada had “record” high maximum summer temperatures in 2023, it seems extremely probable that the North American global region as a whole did not either.
But despite this reality, in the inane, ignorance-based world of climate-alarmist political propaganda, the alarmism hype will continue unabated.
On September 25, a cold front will draw precipitation to California.
https://earth.nullschool.net/?fbclid=IwAR35xltlYSkdXdFBR_bH10xA8Ol23jRR-XZKuG5dLHsolJj6Z8mIAXRdVB4#2023/09/25/2300Z/wind/isobaric/700hPa/overlay=temp/orthographic=-136.40,48.24,844
It is common for Northern California to have a storm at the end of September, followed by a month of Indian Summer, before the Winter rains, overcast, and Tule Fogs set in.
Some people claim regarding August 2023 that “In the globe as a whole, it was the hottest August in the record by a long way (about 0.24C).” That tells us nothing about maximum temperatures at specific global regions, nations, countries, cities, etc.
Based on NOAA Time Series Data, August 2023 was not the “hottest month in the record” across the Contiguous U.S., across any of NOAA’s 9 U.S. Regions, across 47 of 48 U.S. States, across any of the 58 counties in California, or across 8 of the 9 cities tracked by NOAA in California.
A “global” averaged value is nothing but hype when used as some flawed alarmist claim that is supposed to reflect climate behaviors at specific locales. The alarmist media does this all the time and is never corrected by those representing alarmist leadership.
karlomonte, the Gormans and more,
Some commentators here, like you, question the ways that uncertainties and errors are expressed.
The other team claim success by showing close agreement between GISS, BEST, Hadley long term global T anomalies.
The former group might enhance their case by showing, with examples, how such agreement is achieved and why it is wrong. Geoff S
It is wrong from the very beginning. I’ve shown that. The daily mid-range value is *NOT* an average. It’s the max temp from a sinusoid plus a min temp from an exponential decay. If you want an *average* daily temp then you should find the average daytime temp from the sinusoid and the average nighttime temp from the exponential decay and calculate the daily average from those values.
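(A sketch, with an entirely made-up daily temperature profile, of the alternative being described above: averaging a sinusoidal daytime curve and an exponential night-time decay separately, then averaging the two. It only illustrates that the mid-range (Tmax + Tmin)/2 need not equal the mean of such a profile; the shape parameters are arbitrary assumptions.)

import math

tmax, tmin, n = 80.0, 50.0, 1000

# Hypothetical daytime profile: half-sine rising from tmin up to tmax and back.
day = [tmin + (tmax - tmin) * math.sin(math.pi * i / n) for i in range(n)]
# Hypothetical night-time profile: exponential decay from tmax back down toward tmin.
night = [tmin + (tmax - tmin) * math.exp(-3.0 * i / n) for i in range(n)]

day_mean = sum(day) / n
night_mean = sum(night) / n
profile_mean = (day_mean + night_mean) / 2   # average of the daytime and night-time means
mid_range = (tmax + tmin) / 2                # the usual (Tmax + Tmin) / 2

print(round(profile_mean, 1), round(mid_range, 1))   # the two differ for this assumed profile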
That mid-range value is *NEVER* carried forward as a “stated value +/- uncertainty”. It is used as a 100% accurate temperature in monthly averages. The uncertainty of that monthly average is given, when it is given at all, as the SEM of the average as calculated from the daily mid-range values.
The GUM says the uncertainty *should* be the dispersion of values around the mean, not the dispersion of sample means around the average, i.e. the SEM. That dispersion can be calculated in a couple of different ways. *IF* it is assumed that all of the temperatures are measures of the same thing using the same device under identical conditions with no systematic bias involved then the standard deviation of the stated values can be used as the uncertainty. The standard deviation of the stated values is *NOT* the SEM. If you cannot meet these restrictions then the uncertainty should be the propagated uncertainties from each element added in quadrature.
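(A minimal sketch, with made-up readings and a made-up per-element uncertainty of 0.5, of the three quantities being argued about in this sub-thread: the SD of the stated values, the SEM, and a quadrature sum of the per-element uncertainties. It only pins down the arithmetic of each; it does not decide which is the right measure.)

import math
import statistics

readings = [20.1, 19.8, 20.4, 20.0, 19.7]   # hypothetical stated values
u_each = 0.5                                # hypothetical per-element measurement uncertainty

sd = statistics.stdev(readings)                        # dispersion of the stated values around their mean
sem = sd / math.sqrt(len(readings))                    # standard error of the mean (SD / sqrt(N))
u_quad = math.sqrt(sum(u_each**2 for _ in readings))   # per-element uncertainties added in quadrature

print(round(sd, 3), round(sem, 3), round(u_quad, 3))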
It’s truly that simple. And neither method will give you uncertainties in the hundredths or thousandths digit.
If the uncertainty interval used in climate science is in the hundredths digit then I really don’t care how they calculated it, THEY DID IT WRONG!
“The GUM says the uncertainty *should* be the dispersion of values around the mean, not the dispersion of sample means around the average,”
A quote would be helpful.
I think you keep confusing the uncertainty of an instrument, which could be determined by the dispersion of values from repeated measurements of the same thing, with the uncertainty of the mean.
GUM 4.2.3 explains this for repeated observations.
“If you cannot meet these restrictions then the uncertainty should be the propagated uncertainties from each element added in quadrature.”
and you still can’t figure out that adding in quadrature will give you the uncertainty of the sum of the elements, not of the mean.
ooooooo the great one tells it like it isn’t.
“A quote would be helpful.”
In other words you’ve never actually studied the GUM, just cherry-picked from it. Just like you do with Taylor, Bevington, etc!
————————————————–
C.3.2 Variance
The variance of the arithmetic mean or average of the observations, rather than the variance of the individual observations, is the proper measure of the uncertainty of a measurement result. The variance of a variable z should be carefully distinguished from the variance of the mean z̄.
———————————————–
Now, misread this and tell us how “the variance of the individual observations” means the variance of the data set instead of the variances of the individual elements.
“I think you keep confusing the uncertainty of an instrument, which could be determined by the dispersion of values from repeated measurements of the same thing, with the uncertainty of the mean.”
The SEM is USELESS for stating measurement uncertainty!
See the attached image. The dashed line is the measurement uncertainty, σ_x. The solid line is the SEM, σ_xbar.
THEY ARE NOT THE SAME THING. And σ_xbar tells you NOTHING about the measurement uncertainty. The SEM is merely how well you have calculated the population mean, it is *NOT* the measurement uncertainty of the mean.
If you would actually STUDY Taylor instead of just cherry-picking things you think confirm your misconceptions, you would be much further ahead when it comes to metrology!
Well done. You managed to find a quote that literally says the opposite of what you claimed.
“Now, misread this and tell us how “the variance of the individual observations” means the variance of the data set instead of the variances of the individual elements.”
It means exactly what it says. And it also says the variance of the individual observations is NOT the proper measure of the uncertainty, but that the variance of the arithmetic mean is.
They even give the equation for the variance of the arithmetic mean, which in case you still don’t get it is just the square of the standard error of the mean, or as they call it the experimental standard deviation of the mean.
“THEY ARE NOT THE SAME THING”
Indeed they are not. One is the uncertainty of the individual measurements, the other is the uncertainty of the mean.
“Well done. You managed to find a quote that literally says the opposite of what you claimed.”
You don’t know enough to evaluate the context of the quote. It says *exactly* what I claimed.
“The variance of a variable z should be carefully distinguished from the variance of the mean z.”
The variance of the variable z is the measurement uncertainty, not the variance of the mean! The variance of the mean is only applicable when you have multiple measurements of the same thing using the same instrument under the same environmental conditions!
I give you *these* quotes from the GUM:
————————————————-
B.2.18 uncertainty (of measurement)
parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand
NOTE 1 The parameter may be, for example, a standard deviation (or a given multiple of it), or the half-width of an interval having a stated level of confidence.
NOTE 2 Uncertainty of measurement comprises, in general, many components. Some of these components may be evaluated from the statistical distribution of the results of series of measurements and can be characterized by experimental standard deviations. The other components, which can also be characterized by standard deviations, are evaluated from assumed probability distributions based on experience or other information.
C.2.20 variance
a measure of dispersion, which is the sum of the squared deviations of observations from their average divided by one less than the number of observations
———————————————-
(bolding mine, tpg)
The bolded part above is basically describing Type A and Type B measurement uncertainty.
You *need* to study the documents outlining how to do measurement uncertainty and stop trying to cherry-pick from things you don’t understand!
“dispersion of the values that could reasonably be attributed to the measurand”
This is *NOT* the variance of the mean. It is the variance of the measurements! It *is* important to distinguish between the variance of the variable z and the variance of the mean of z. Just like the first quote I gave you says!
“It means exactly what it says. And it also says the variance of the individual observations is NOT the proper measure of the uncertainty, but that the variance of the arithmetic mean is.”
Again, ONLY when you have one specific condition – the condition you ALWAYS assume: “all measurement uncertainty is random, Gaussian, and cancels”.
You just refuse to learn. STUDY the texts you have been given!
“Indeed they are not. One is the uncertainty of the individual measurements, the other is the uncertainty of the mean.”
And the uncertainty of the mean is USELESS for the global temperature data since it is not Gaussian and does *not* cancel. You continue to try and distract from the issue – the global average temperature. I assume in the faint hope that people will join you in your meme that all measurement uncertainty is random, Gaussian, and cancels.
If you think about it, his line about how “sampling is the largest source of uncertainty” is just an underhanded way of saying that the SEM is the end-all-be-all, all you need to know about the subject. Another example of his dishonesty.
And he will NEVER acknowledge that the sample size of a time-series measurement is always and exactly equal to one. And, as you pointed out, with one degree of freedom, the SEM is then 0/0, undefined.
He’s read about the SEM somewhere and doesn’t understand it any better than he does measurement uncertainty.
“You don’t enough know enough to evaluate the context of the quote.”
The context is, you claimed the GUM says “the uncertainty *should* be the dispersion of values around the mean, not the dispersion of sample means around the average”. I asked for an actual quote from the GUM saying that. Then you provided a quote saying the exact opposite.
So now you are falling back on the fact that your quote was talking about measurements of a single thing, and not a sample of different things.
So, again, where in the GUM does it say that uncertainty should be the dispersion of values, rather than the dispersion of means, when you want to know the uncertainty of a mean?
“It says *exactly* what I claimed.”
As long as you ignore the bit where it said
“The variance of the variable z is the measurement uncertainty, not the variance of the mean!”
It’s the measurement uncertainty of a single measurement, not of the average of several measurements.
“The variance of the mean is only applicable when you have multiple measurements of the same thing using the same instrument under the same environmental conditions!”
And round and round the plug hole we spin.
You keep doing this. Demand that the GUM is used to describe the measurement uncertainty of a global mean, then say that the global mean is not a measurement and therefore we should ignore the GUM.
“The bolded part above is basically describing Type A and Type B measurement uncertainty.”
Correct. It would be more impressive when you get something correct, if you didn’t immediately follow it up with an insult, but well done anyway.
““dispersion of the values that could reasonably be attributed to the measurand”
This is *NOT* the variance of the mean.”
If we are talking about a global average then that is the measurand we are talking about. The uncertainty is the dispersion of values that could reasonably be attributed to the global mean.
“It is the variance of the measurements!”
You need to explain what variance of the measurements you are talking about – then explain how this could possibly make sense when we are talking about the uncertainty of the mean, and not the uncertainty of the measurements.
Rest of the mindless insults and parroting mantras about Gaussian distributions ignored.
You’re an idiot.
Trendology stops at step 2, doesn’t care about variance.
Clown.
Clown is right!
It’s like he has never read an LIG thermometer!
I know how to calculate a variance – better than you it would seem.
What I’m trying to get you to explain is what “numbers” you want to know the variance of? Which variance will help you determine the uncertainty of a global monthly average anomaly?
“3. calculate standard deviation”
“4. square standard deviation”
You do realize you had to work out the variance to get the standard deviation don’t you?
You *HAVE* to start by calculating the standard deviation of the daily mid-range value.
Let’s look at Jan 1, Jan 2, and Jan 3 of this year.
Jan 1: 60 F +/- 1 F, 39 F +/- 1 F
Jan 2: 49 F +/- 1 F, 39 F +/- 1 F
Jan 3: 47 F +/- 1 F, 30 F +/- 1 F
Jan 1: mean stated value = 49.5, stated value SD = 10.5
Jan 2: mean stated value = 44, stated value SD = 5
Jan 3: mean stated value = 38.5, stated value SD = 8.5
The stated value mean of those three days is 44. The stated value SD is 4.5
What is the measurement uncertainty of that average of 44?
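(A sketch that only reproduces the arithmetic above. Note that the population SD of the three mid-range values is 4.5 while the sample SD is 5.5, which is the source of the 4.5-versus-5.5 disagreement further down the thread.)

import statistics

days = {"Jan 1": (60, 39), "Jan 2": (49, 39), "Jan 3": (47, 30)}   # Tmax, Tmin in F

mids = []
for day, (tmax, tmin) in days.items():
    mid = (tmax + tmin) / 2          # daily mid-range value
    half_range = (tmax - tmin) / 2   # equals the population SD of the two readings
    mids.append(mid)
    print(day, mid, half_range)      # 49.5/10.5, 44.0/5.0, 38.5/8.5

print(statistics.mean(mids))               # 44.0
print(round(statistics.pstdev(mids), 1))   # 4.5 (population SD)
print(round(statistics.stdev(mids), 1))    # 5.5 (sample SD)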
You’re mixing up so many different things here it’s a bit difficult to know where to start.
As I tried to explain to at least one of you some time ago – you do not look at the range of values in a day to determine the uncertainty of the mean. You are not basing the mean on two random measurements taken at random times of the day – you are taking two fixed points of the day.
If you want to know what the uncertainty in an individual daily mean is, you need to define exactly what you are measuring. Are you using TMean as an estimate of the true mean temperature of the day, or do you just want to use it as a reasonable index of what the overall temperature was?
If you just want to know what the uncertainty in an individual TMean is, then it would just depend on the measurement uncertainties of TMax and TMin. Taking your values and assuming they are independent uncertainties then the uncertainty would be ±1 / √2 ≈ ±0.71, and the uncertainty of the average of the 3 days would be 0.71 / √3 ≈ ± 0.41.
If on the other hand you wanted to use the “NIST protocol”, you could use the SD of the means. 5.5 / √3 ≈ 3.2. This is taking the 3 values as a random sample, i.e. the sampling uncertainty. The assumption being that each daily mean is just a random value around the true mean. Note, that 3.2 is the standard uncertainty. You would have to multiply it by a coverage factor based on a Student-t distribution to get the ± expanded uncertainty.
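(A sketch of the two calculations in this comment, assuming the stated +/- 1 F per reading; the 4.30 coverage factor is the two-sided 95 % Student-t value for 2 degrees of freedom.)

import math

u_reading = 1.0                          # assumed +/- 1 F on each Tmax and Tmin
u_tmean = math.sqrt(2) * u_reading / 2   # ~0.71, quadrature uncertainty of (Tmax + Tmin) / 2
u_mean_3days = u_tmean / math.sqrt(3)    # ~0.41, measurement uncertainty propagated onto the 3-day mean

sd_of_daily_means = 5.5                  # sample SD of 49.5, 44, 38.5 (see the sketch above)
sem = sd_of_daily_means / math.sqrt(3)   # ~3.2, standard uncertainty treating the 3 days as a random sample
expanded = 4.30 * sem                    # ~13.7, expanded uncertainty with the t-based coverage factor

print(round(u_tmean, 2), round(u_mean_3days, 2), round(sem, 1), round(expanded, 1))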
If you want to know the uncertainty of the true means, based on the estimate of TMean, you would need to have some estimate of how much TMean varies from the average, and this may well include a systematic bias.
You could look at comparing CRN data, which includes daily averages based on very small time intervals.
You may remember I did look at that myself, and produced some interesting results – but you of course just blew it off.
More hand-waving, proof by assertion.
You STILL have absolutely zero clue about what uncertainty is, and you demonstrate it again and again with ridiculous claims like this.
Try again: how many separate averages go into an anomaly graph?
He is *STILL* trying to substitute the SEM for the measurement uncertainty of the mean while hoping no one notices!
Yep. And notice that he ran away fast from my challenge.
A lesson in running away from a challenge from Mr “don’t play your Jedi mind tricks on me, I’ve told you thousands of times I will not try to educate you any further, you are in position to demand I answer any of your questions.”
Which hand-waving challenge was this?
“how many separate averages go into an anomaly graph?”
I think I said before – lots. And a proper global anomaly is not based on simply averaging values.
You don’t say which particular graph you are talking about, but at a rough estimate, say if you are talking about an average 30.417-day month, and there are about 5000 stations active, and each station has a full complement of days, and is based on the average of min and max, that’s 304,170 separate measurements. In addition the anomaly is based on a 30-year base period, so that’s another 9,125,100.
If you want a year, multiply that by 12.
Why are all the standard deviations IGNORED?
This is the point you run away from.
Averaging and anomalies DO NOT reduce uncertainty.
I am not running away. I’m trying to get you to say what standard deviations you think are being ignored, and how you would use them if they weren’t ignored.
If you are talking about the standard deviation of a random sample, then that is not ignored – it’s fundamental to how the uncertainty is calculated – SD / √N remember.
Any other sensible method of estimating uncertainty will also require the SD either directly or indirectly.
The only time I see the standard deviations of the values ignored, is when uncertainty is only being described in terms of measurement uncertainties.
“Averaging and anomalies DO NOT reduce uncertainty.”
Well if you say so it must be true.
Why would I hope nobody notices? The SEM by definition is the uncertainty of the mean. How you estimate it, and how you correct for systematic errors is another question.
Yet another indication of your abject ignorance—YOU CAN’T.
More bullshit. To me you are the equivalent of the modern-day flat-earth believers—the Earth is flat and no amount of reasoning and demonstration can ever convince them they are wrong.
“You are not basing the mean on two random measurements taking at the random times of the day – you are taking two fixed points of the day.”
Each of those points still has measurement uncertainty – which you always want to ignore. They also define a distribution of size 2, meaning they have a standard deviation. You can’t call them an “average” without them also having a standard deviation!
“Are you using TMean as an estimate of the true mean temperature of the day, or do you just want to use it as a reasonable index of what the overall temperature was?”
It’s not a reasonable index since each is from a different distribution, one is sinusoidal and the other an exponential decay.
” uncertainty of the average of the 3 days would be 0.71 / √3 ≈ ± 0.41.”
Once again you are trying to substitute the SEM for the measurement uncertainty of the mean. The SEM is *NOT* the measurement uncertainty of the mean. For three days the measurement uncertainty of the mean would be (0.7) * sqrt(3) = 1.2.
Since the temps for those three days were taken at the same location by the same instrument you could do what Possolo did in TN1900 and find the variation of the values, i.e. the SD. The SD of the mid-range values is 4.5. That is LARGER than the propagated measurement uncertainty. And I didn’t even expand it for the coverage factor!
Now come back and tell us that Possolo did it wrong in TN1900!
“Each of those points still have measurement uncertainty”
You keep equivocating as to which uncertainty you are talking about. You were the one who brought up the standard deviation of the max and min values. You seem to imply that has some relevance to the uncertainty of the mean.
“which you always want to ignore.”
I ignored it by telling you what it would be.
“They also define a distribution of size 2 meaning they have a standard deviation.”
But they are not two random values. What you have is a range. It is not going to tell you what the standard deviation of the daily temperature profile is, and it does not tell you what the uncertainty of your mean is.
“It’s not a reasonable index since each is from a different distribution, one is sinusoidal and the other an exponential decay.”
Which tells you nothing about how useful it is as an index.
“Once again you are trying to substitute the SEM for the measurement uncertainty of the mean.”
I’m propagating the measurement uncertainties, if that’s what you mean. I am not treating them as a random sample, just looking at the measurement uncertainties.
“For three days the measurement uncertainty of the mean would be (0.7) * sqrt(3) = 1.2.”
Yes, we all know that’s what you believe, and have elevated that belief into a religion. The fact it disagrees with every book on metrology ever produced and leads to obviously wrong conclusions will not affect your delusion one bit.
“Since the temps for those three days were taken at the same location by the same instrument you could do what Possolo did in TN1900 and find the variation of the values, i.e. the SD.”
By TN1900 you mean treat them as a random sample and find the SEM, correct?
“The SD of the mid-range values is 4.5. That is LARGER than the propagated measurement uncertainty.”
You missed the bit where TN1900 divides by the square root of the sample size. Taking the actual SD of those three values, which is 5.5, not 4.5, the SEM is 5.5 / √3 ≈ 3.2, as I told you in the previous comment.
And yes, this is much larger than the measurement uncertainty – something I keep trying to tell you.
Idiot! Once again, how many separate averages are needed to generate any of your precious trendology anomaly graphs?
Answer this if you dare.
Lots. And a lot of calculations besides averaging.
If you have a point now might be the time to reveal it. So far all you’ve done is hand-wave about hidden standard deviations, and thrown out your usual juvenile taunts.
And what happens to all the standard deviations from these calculations?
They (and you) IGNORE THEM.
The point that flew over your pointy head is they ALL affect the final result. Averaging and anomalies DO NOT reduce air temperature uncertainty!
You can’t even admit that (Tmax+Tmin)/2 *has* a standard deviation! And you accuse someone else of not making their point?
Why would I admit that? It’s blatantly false. A mean does not have a standard deviation. A distribution will have both a standard deviation and a mean. A sampling distribution of means will have a standard deviation (also known as the SEM).
As always though you throw words like standard deviation around never saying which one you are talking about.
For example the variance of all monthly anomaly values for UAH is 0.06°C², or more usefully the standard deviation is 0.25°C.
That could be taken as an indication of the standard uncertainty in a monthly global anomaly measurement. But that ignores the fact that monthly values would be expected to vary, regardless of measurement uncertainty, both from random effects like ENSO, and due to a warming trend. It would suggest that there is an upper limit on the standard uncertainty of 0.25°C.
Somehow I suspect you are going to say that that is not the variance you are looking for, and then claim that your variance is much bigger and somehow that will make the monthly uncertainties much bigger regardless of how much they actually vary.
YMHW.
You keep forgetting to propagate the measurement uncertainty associated with the measurements of the base temperatures.
On a spring day where max temp is 70 and the min temp is 50, the mid-range temp will have a standard deviation of 10 (a variance of 100). If that is considered to be the uncertainty then it needs to be propagated into the calculation of the monthly anomaly.
If the measurement uncertainty is propagated instead of using the variation the measurement uncertainty will be at least sqrt(0.5^2 + 0.5^2) = 0.7.
If you have 30 “stated values +/- 0.7” then the total measurement uncertainty is sqrt(30) * 0.7 = 3.8.
When calculating a monthly anomaly that measurement uncertainty *must* be propagated onto the anomaly along with the measurement uncertainty of the monthly baseline.
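(The comment above multiplies the per-day uncertainty by sqrt(30); replies elsewhere in the thread divide by sqrt(30) instead. The sketch below, assuming 30 independent +/- 0.7 F daily uncertainties as stated, only lays out the arithmetic of both quantities without deciding which one applies to a monthly mean.)

import math

n, u_day = 30, 0.7
u_sum = math.sqrt(n) * u_day    # ~3.8, quadrature uncertainty of the SUM of the 30 daily values
u_mean = u_day / math.sqrt(n)   # ~0.13, the same quadrature sum divided by n

print(round(u_sum, 1), round(u_mean, 2))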
Why do you *always* assume all measurement uncertainty is random, Gaussian, and cancels?
Think about it for a minute. What if you had 30 days of the same temperatures, say 70/50. The variation would be zero but the total measurement uncertainty would *NOT* be zero. The standard deviation of the daily mid-range values would not be propagated onto the monthly average either. Yet that standard deviation of the daily mid-range value *exists*, it can’t just be ignored the way you do!
The methodology of how to calculate the measurement uncertainty has to be able to handle *all* situations. Your methodology doesn’t.
“You keep forgetting to propagate the measurement uncertainty associated with the measurements of the base temperatures.”
I’m looking at the actual values. Why do you want to add an additional hypothetical uncertainty? If you give me a tape measure and insist that its uncertainty is 10cm, but I measure the same thing a few hundred times and find the standard deviation of all the measurements is 0.1cm, what am I going to believe – the data or your theory?
You can use your claimed understanding of how to propagate uncertainties on to a monthly mean of satellite data, but if you are claiming it should have an uncertainty of multiple degrees, yet every actual monthly value is within a few tenths of a degree, maybe, just maybe, your calculations are wrong.
“Think about it for a minute. What if you had 30 days of the same temperatures, say 70/50. The variation would be zero but the total measurement uncertainty would *NOT* be zero.”
As I keep saying, nobody is calculating the uncertainty like that. And if you do get 30 identical measurements the obvious conclusion is your thermometer is broken.
The ONLY uncertainty that you consider is what you get after 500,000 averages.
Instrumental measurement uncertainty is NOT hypothetical, why can’t you understand this?
He’ll *never* get it as long as he lives with the meme that all measurement uncertainty is random, Gaussian, and cancels.
He might as well be one of the flat-earth believers.
“I’m looking at the actual values. “
Measurements are given as “stated value +/- measurement uncertainty”.
Why do you always want to throw away the measurement uncertainty?
“If you give me a tape measure and insist that it’s uncertainty is 10cm, but I measure the same thing a few hundred times and find the standard deviation of all the measurements is 0.1cm, what am I going to believe – the data or your theory?”
Measurement uncertainty is a sum of random error and systematic bias. u(total) = u(random) + u(systematic).
If you don’t know either u(random) or u(systematic) and you just assume they cancel out in the end then any analysis you make of the stated values will be wrong.
Please tell us that for the 10cm uncertainty, which part is u(random) and which part is u(systematic).
“So, again, where in the GUM does it say that uncertainty should be the dispersion of values, rather than the dispersion of means, when you want to know the uncertainty of a mean.”
Your lack of reading comprehension is showing again, I BOLDED the quote that tells you exactly what you are asking for!
“If we are talking about a global average than that is the measurand we are talking about. The uncertainty is the dispersion of values that could reasonably be attributed to the global mean.”
And exactly *what* is the dispersion of values that could be attributed to the measurand? Is it the SEM or the dispersion of the temperature values?
“You need to explain what variance of the measurements you are talking about”
OMG! The temperature data set has ONE variance. It is an indication of the spread of the values in the data set.
He is so lost the search party got lost looking for him.
“The other team claim success by showing close agreement between GISS, BEST, Hadley long term global T anomalies.”
The main argument against that, is that all those data sets use to some extent the same data, so you would expect some agreement.
A better test of uncertainty is to compare independent data sets, such as surface and satellite data. Given it’s claimed that both surface data and UAH data have monthly uncertainties of > 1 °C, it’s remarkable that they can both detect fluctuations that are only a few tenths of a degree.
For example, comparing the difference in monthly values between UAH and GISS, the standard deviation is only 0.16°C. This is despite the fact that the warming trends are different, and they are measuring different things.
Where does climate science hide the standard deviations?
Why do they hide the standard deviations of the temperature data?
Which do you want? And what relevance does that have to the point I was making?
It’s a simple question, but obviously beyond your ken.
bellman: “A better test of uncertainty is to compare independent data sets”
If you don’t know the standard deviations of the independent data sets then you can’t analyze them statistically. Comparing them is like comparing Shetland ponies to quarter horses.
You can’t seem to get anything right.
That he pretends to not see the relevance of the questions is glaring.
That you still can’t answer my question is also glaring. What standard deviations do you want?
In addition, what statistical analysis do you want me to do with them?
I have 500 or so monthly anomalies in two independent data sets. I’ve compared the standard deviation of the difference between the two. What do you want me to do with the other, as yet unidentified standard deviations?
How many separate averages are needed to calculate the UAH?
What happens to the standard deviations?
And yes, I’ll state it again:
That you pretend to not see the relevance of the questions is glaring.
What is the standard deviation of Tmax and Tmin for the past three days here in Kansas?
87,54,85,58,81,64
Never mind. I’ll give it to you. The variance is 176 and the standard deviation is 13. The mean is 71. The standard deviation compared to the average is 0.18, about 20% of the mean.
The daily averages are 0.5, -0.5,-1.5. This gives a mean of -0.5 and a standard deviation of 0.8. The standard deviation of the anomalies compared to the mean of the anomalies is 1.6 or about 160% of the mean.
The variance of the anomalies is much higher than the variance of the absolute temps.
You can’t just look at the values of the standard deviations and say the SD of the anomalies is much smaller. The *spread* of the values of the anomalies is much greater compared to the spread of the absolute values. If you normalize both distributions the anomalies will have a lot “flatter” distribution than the absolute values – meaning the uncertainty of the anomalies is larger than the uncertainty of the absolute values.
You *still* can’t relate to the real world at all!
“What is the standard deviation of Tmax and Tmin for the past three days here in Kansas?”
Why are you mixing max and min values?
“The standard deviation compared to the average is 0.18, about 20% of the mean.”
You still don’t get that you cannot take the percentage of non-absolute values. Do you really think that 20 degrees is twice 10 degrees?
“The daily averages are 0.5, -0.5,-1.5.”
Averages of what?
“This gives a mean of -0.5 and a standard deviation of 0.8.”
Again, what are you talking about at this point?
“The standard deviation of the anomalies compared to the mean of the anomalies is 1.6 or about 160% of the mean.”
Almost as if anomalies were much smaller than absolute temperatures. What would you say if the anomaly was zero?
“You can’t just look at the values of the standard deviations and say the SD of the anomalies is much smaller.”
?
“The *spread* of the values of the anomalies is much greater compared to the spread of the absolute values.”
You’ve based that on just 3 consecutive days in one location. The SD of the absolute temperatures and the anomalies should be identical (unless you are such an idiot as to look at them as percentages of the mean).
The benefit of using anomalies is when you are comparing either the SD of temperatures at a single place throughout the year, or are looking at places across the globe.
“If you normalize both distributions the anomalies will have a lot “flatter” distribution than the absolute values – meaning the uncertainty of the anomalies is larger than the uncertainty of the absolute values.”
??
“You *still* can’t relate to the real world at all!”
Do you really think anyone cares about these petty insults? Or do you just add them for your own reassurance?
I was going to keep out of this, but you blokes seem to have been arguing so much that you’ve confused each other and yourselves.
As long as the scale step size is the same (K/C, Ra/F), the base zero point makes no difference to the size of the range, hence variance and standard deviation. The mean and upper & lower bounds will be different, but it doesn’t affect the sd. The same applies for “anomalies”, which are just site-specific zeros.
A range of 0 C to 20C is 20 degrees, just like 273K to 293K (intentionally rounded to integer values). So is -10 anomaly(K) to +10 anomaly(K) with an arbitrary baseline of 10C|283K.
What the anomalies do do is to re-baseline all display values to provide some form of consistency as weather stations are moved, added or retired.
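(A sketch of the point made above, using made-up values: adding or subtracting a constant, whether switching to Kelvin or subtracting an arbitrary site baseline to form anomalies, leaves the standard deviation unchanged.)

import statistics

temps_c = [14.2, 15.1, 13.8, 16.0, 14.9]   # hypothetical absolute temperatures, deg C
temps_k = [t + 273.15 for t in temps_c]    # same readings on the Kelvin scale
anomalies = [t - 14.8 for t in temps_c]    # same readings against an arbitrary 14.8 C baseline

for label, vals in [("Celsius", temps_c), ("Kelvin", temps_k), ("anomaly", anomalies)]:
    print(label, round(statistics.stdev(vals), 3))   # identical SD in all three cases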
Which is why I said that you have to normalize everything in order to compare variances.
If you normalize everything, then all the variances will be 1.
Maybe I’m misunderstanding what you mean by “normalize”, in which case could you explain exactly what you want to do.
You don’t even know basic statistics or you would know how to normalize a data set. Excel will do it for you if you don’t know how to do it yourself.
X_normalized = (X - X_min) / (X_max - X_min)
This eliminates the variations in the scale of different data sets. I.e. a data set with small numbers can be easily compared to a data set with large numbers.
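(A sketch of the min-max scaling formula given above, applied to the three daily mid-range values and the three anomaly-like values used earlier in the thread; both happen to map to the same 0-to-1 pattern because both sets are evenly spaced.)

def min_max_normalize(xs):
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]   # undefined if all values are equal (hi == lo)

print(min_max_normalize([49.5, 44.0, 38.5]))   # [1.0, 0.5, 0.0]
print(min_max_normalize([0.5, -0.5, -1.5]))    # [1.0, 0.5, 0.0]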
I’m through trying to teach you basic math skills. You are never going to learn. You have a Statistics 101 for non-math majors textbook and you try to cram everything into the one distribution you learned from it, a random, Gaussian distribution.
You’ll *never* be able to understand metrology with your limited skill set since you are unwilling to expand your skill set. I’ll leave you to KM if he can stand it.
I’ve had more than enough flat-earth preaching for this week.
“You don’t even know basic statistics or you would know how to normalize a data set.”
I was asking you what sort of normalization you want. There are multiple meanings to “normalize a data set”.
“X_normalized = (X- X_min)/(x_max-x_min)”
Min-max feature scaling, in other words.
This changes all the values to a range of 0 – 1. Why exactly do you want to do this, given you also insist we have to look at the standard deviation?
“This eliminates the variations in the scale of different data sets.”
What different data sets?
“I’m through trying to teach you basic math skills.”
You were before you started.
“I was asking you what sort of normalization you want. There are multiple meaning to “normalize a data set”.”
Malarkey! You are now using your typical Evasion Rule No. 2, claim the question was badly formed or vague.
We were discussing comparing two distributions having different scales. The type of normalization required to do this is obvious. You have to normalize both to a standard scale. I tried to do this by giving you percentages and you said that couldn’t be right, that the smaller number had a smaller standard deviation. Using that logic all you have to do to eliminate the standard deviation is divide everything by infinity! An obvious logic fallacy.
Again, I’m done teaching you basic math skills. All you are doing now is blowing smoke trying to make it look like you know what you are talking about. It’s obvious that you don’t!
“We were discussing comparing two distributions having different scales.”
We are not. Anomalies and temperatures have exactly the same scale. Celsius. The difference is the range, and standard deviation – not the scale. There is zero reason to adjust the figures – it’s just your desperate attempt to avoid the obvious – anomalies have less deviation than absolute temperatures.
“I tried to do this by giving you percentages and you said that couldn’t be right, that the smaller number had a smaller standard deviation.”
I said it isn’t right. You should know why it isn’t right by now. I’m sure there was a time when you used to attack me, incorrectly, as someone who thought that 20°C was twice as hot as 10°C.
“Using that logic all you have to do to eliminate the standard deviation is divide everything by infinity! An obvious logic fallacy.”
You made that idiotic remark in another comment and I’ve already explained where your problem is. You still can’t accept that anomalies are not re-scaled temperatures, they are shifted temperatures. You don’t divide a temperature by anything to get an anomaly, you just subtract a base value.
“Again, I’m done teaching you basic math skills.”
I hadn’t realized you’d started.
“All you are doing now is blowing smoke trying to make it look like you know what you are talking about. It’s obvious that you don’t!”
I’ll leave it as an exercise for the reader to determine exactly what sort of irony this is.
“but you blokes seem to have been arguing so much that you’ve confused each other and yourselves.”
Yes. I think that’s often the problem. It would help if people didn’t just keep demanding answers to questions such as “what’s the variance” without ever specifying which variance they are talking about. I’m sure I’m as guilty of this as anyone – but it does get difficult to keep everything straight when every minor comment blows up into a multiple thread argument with novella long comments.
“As long as the scale step size is the same (K/C, Ra/F), the base zero point makes no difference to the size of the range, hence variance and standard deviation. The mean and upper & lower bounds will be different, but it doesn’t affect the sd. The same applies for “anomalies”, which are just site-specific zeros.”
True, but the site-specific (and also time-specific) is the point.
“What the anomalies do do is to re-baseline all display values to provide some form of consistency as weather stations are moved, added or retired.”
As well as reducing seasonal differences.
My point is that this reduces the standard deviation of a global and annual sample. Because you are reducing many of the systematic factors that cause the wide range of temperatures, and are just looking at how any one reading differs from a base period, at the same place and time.
Tim’s argument that taking anomalies increases variance keeps changing. First he’s arguing that the uncertainty of the anomaly is bigger than that for a single measurement – which is true but largely irrelevant. It’s the spread of values across the globe that reduces, regardless of the small increase in variability caused by the uncertainty of the base period.
And now we have the more desperate claim that anomalies have more variance, based on looking at the relative variance – which makes no sense given the fact that the mean anomaly will be much closer to zero, than will a temperature based on much colder zero. To really exaggerate this he reverts to Fahrenheit, making the zero even colder.
And then he avoids any advantage in using anomalies by only looking at 3 days with the same base.
Averaging and anomalies do not reduce measurement uncertainty!
This is the fundamental point that you refuse to acknowledge.
No! They have less because of all the information that has been thrown away.
The monthly range and variance are the same, just the base changes so you get different nominal maxima, minima and mean.
The simplistic deseasonalisation effect of changing the zero point every month will certainly reduce the reported annual range, but does it reduce the variance? I’d expect that combining the 12 monthly variances would be much the same as the annual variance derived from daily figures, allowing for rounding errors and differences in month lengths.
Bear in mind that the zero adjustments run in opposite directions in the northern and southern hemispheres.
aiui, the global monthly figures are an area-weighted mean of the monthly site figures, and the annual figures are derived from the monthly figures in any case.
“The monthly range and variance are the same, just the base changes so you get different nominal maxima, minima and mean.”
For a single station, yes.
“The simplistic deseasonalisation effect of changing the zero point every month will certainly reduce the reported annual range, but does it reduce the variance?”
If you are talking about the variance of temperatures across the year, I would expect so. That is the point I was addressing in Clyde Spencer’s comment, where he says “For a global temperature of over 200 deg F, the SD should be about 50 deg based on annual temperatures.” I assumed that was meant to mean taking all temperatures across the year.
As I said elsewhere, I checked this using CET daily values for 2022. Though there I based the anomalies on daily values, rather than monthly values. The SD for temperatures was about twice that for anomalies.
“I’d expect that combining the 12 monthly variances would be much the same as the annual variance derived from daily figures, allowing for rounding errors and differences in month lengths.”
If you mean averaging each monthly variance then yes, that seems right. If you are using a single base mean for each month the variance for any month will be the same for temperatures and anomalies. But I think Spencer was talking about the variance across the year, not for each month.
“aiui, the global monthly figures are an area-weighted mean of the monthly site figures, and the annual figures are derived from the monthly figures in any case.”
Yes, and that’s another reason why just looking at the SD for the globe is not a good measure of uncertainty. I probably should have mentioned it along with anomalies.
This all started from trying to describe the uncertainty of the mean in terms of a random sample, despite repeatedly pointing out that this is not how a global average is calculated. It is not really a random sample, and can’t just be treated as such.
Yeah, deseasonalising will reduce the range and hence variance of the residual. Most deseasonalisation seems to be multiplicative rather than additive, so I have some doubts about the approach used for anomalies.
Looked at over a year, the overall variance doesn’t change, but deseasonalisation is a useful method to isolate the components of that variance. The use of anomalies sort of deseasonalises, but it would be interesting to see how that compares to more conventional approaches.
I can be a bit slow sometimes, so this took a while to percolate.
Given that each site’s anomaly is derived from the monthly average during a base period, what would the variance and SEM of the base be?
Trying to track this thread and the Scafetta thread at the same time sort of brought the strands together.
ALL HAIL THE ALMIGHTY ANOMALY!!!
ALL BOW DOWN!!!
“Why are you mixing max and min values?”
Because that is what lies at the root of the GAT! Are you truly that simple minded?
“You still don’t get that you cannot take the percentage of non-absolute values. Do you rally think that 20 degrees is twice 10 degrees?”
Of course I can!
At constant pressure dH = C_p dT. If you double the temperature then you double the enthalpy.
So YES, you get twice the enthalpy from 20K as you do from 10K.
Btw, enthalpy is the total heat content of a system in case you didn’t know.
“Averages of what?”
Your lack of reading comprehension is showing again. I gave you the temperatures.
“Again, what are you talking about at this point?”
Willful ignorance?
“Almost as if anomalies were much smaller than absolute temperatures. What would you say if the anomaly was zero?”
Even if the anomaly is zero it will still have a measurement uncertainty (variance, if you will).
I already covered this in another post. Even if all temps are the same, the uncertainty of the average will still be the propagated measurement uncertainty of the individual temperature measurements.
You continue with the meme of: “all measurement uncertainty is random, Gaussian, and cancels”. You just can’t get away from it, can you?
“You’ve based that on just 3 consecutive days in one location.”
So what? Adding more days won’t change anything! The anomaly will *still* inherit the sum of the variances of the components.
“The benefit of using anomalies is when you are comparing either the SD of temperatures at a single place throughout the year, or are looking at places across the globe.”
Both you and climate science ignore the fact that winter temps have different variances than summer temps. You can’t simply ignore them! The anomalies inherit the variance of the components, meaning their uncertainties are different. Again, you can’t simply ignore them.
A 1K difference at 10K and a 1K difference at 100K are *NOT* the same as far as climate is concerned. Yet climate science (and apparently you as well) can’t seem to get that through your head!
“At constant pressure dH = C_p dT. If you double the temperature then you double the enthalpy.”
Seriously? d in that equation means change. It says nothing about the starting point or relative change. If you start at 10°C and increase it to 20°C are you doubling the enthalpy?
“So YES, you get twice the enthalpy from 20K as you do from 10K.”
Only because you’ve just switched to an absolute temperature scale. You are assuming enthalpy is zero at 0K. But that doesn’t mean it will be zero at 0°F or 0°C.
“Your lack of reading comprehension skills are showing again. I gave you the temperatures.”
And then gave me averages much lower than those temperatures. (I’m guessing you meant to say these are assumed anomaly values, but you didn’t say that.)
“Even if the anomaly is zero it will still have a measurement uncertainty (variance, if you will).”
The point I was trying to make is that you for some reason are talking about the standard deviation as a percentage of the mean. If the mean anomaly was zero, what would you claim the standard deviation was as a percentage?
The point I was hoping you might figure out is that it is meaningless to talk about percentages when your base value has an arbitrary zero position.
“So what? Adding more days won’t change anything!”
It will if some of those days are in winter and some in summer.
“The anomaly will *still* inherit the sum of the varainces of the components.”
I don’t know if you realise this or not, but you keep switching your variances. Half the time you mean the variance in recorded values, and the other half you mean the variance caused by measurement uncertainty.
“Both you and climate science ignore the fact that winter temps have different variances than summer temps.”
And there he rambles off again.
“Almost as if anomalies were much smaller than absolute temperatures. What would you say if the anomaly was zero?”
He calls any uncertainty other than the little wiggles in an anomaly graph “hypothetical”.
If he doesn’t see it, it isn’t there.
“Given it’s claimed that both surface data and UAH data have monthly uncertainties of > 1 °C, it’s remarkable that they can both detect fluctuations that are only a few tenths of a degree.”
They *CAN’T* detect fluctuations of a few tenths of a degree. You are still conflating the SEM with the dispersion of values attributable to the measurand.
With a large number of data elements they can calculate the average to within a few tenths of a degree – BUT THAT IS NOT THE UNCERTAINTY OF THE MEAN! As the GUM says, the actual uncertainty of the mean is the dispersion of the data values surrounding the mean, not the sample means surrounding the mean.
“They *CAN”T* detect fluctuations of a few tenths of a degree.”
So your models say. Yet somehow the evidence suggests they do. A beautiful theory destroyed by an ugly fact.
Liar.
There isn’t a single piece of evidence that suggest this.
The global warming prognosticators have turned out to be wrong on every single future projection they have made.
You would think that sooner or later they would wake up to that fact. But global warming is a religious cult and they have “faith” on their side.
Notice that he wrote “your models“, revealing that he rejects all of modern metrology. It is a religious cult.
I suspect “modern metrology” understands this a bit better than you or Tim.
What you suspect is quite irrelevant, given your abject self-imposed ignorance of the subject.
Clown.
No, the EVIDENCE suggests nothing. The prognostications of the global warming advocates (food shortages, ice free Arctic, flooded NYC and Miami, polar bear extinction, and on and on and on ….) based on a warming GAT have turned out to have the same accuracy as a circus fortune teller.
Instead we are seeing a greening globe, longer growing seasons, record food harvests, a still frozen Arctic, growing polar bear populations, less extreme weather, and on and on and on ……
The global warming claim is based on a methodology that ignores measurement uncertainty – just as you always do.
If you have nothing to offer on how to determine the global average temperature then why are you here trying to defend a warming GAT that is actually UNKNOWN?
He’s now boxed himself into claiming a single measurement of anything has zero uncertainty, Why? Because there is no “sampling”!
What an idiotic claim, even by your standards.
Someday, somebody here will do something astonishing and actually argue about something I’ve said, rather than just arguing with their own fantasies.
No, he pretty much has you pegged. You painted yourself into a corner when you claimed single measurements have more accuracy than multiple measurements.
“…you claimed single measurements have more accuracy than multiple measurements.”
Did I say that? Or is that one of your many fantasies about me?
If I did say it I was wrong. I’ve spent the last 2.5 years trying to explain to you that means become more accurate the more measurements you take, not fewer.
How many times does the difference between accuracy and precision have to be explained to you?
The calculation of the mean doesn’t become more ACCURATE with larger samples, its location becomes more precise and that is all you can say about it. How precisely you locate the population mean says NOTHING about the accuracy of that mean. The accuracy of that mean has to be propagated from the individual data elements onto the average.
If your data elements consist of multiple measurements of the same thing using a calibrated instrument with all measurements being taken under the same environmental conditions then you can use the variation of the data elements as a measurement uncertainty.
For anything else the measurement uncertainty of the average has to be propagated using the measurement uncertainty of the data elements, either using direct addition or by using root-sum-square.
“How many times does the difference between accuracy and precision have to be explained to you?”
It doesn’t, but that doesn’t stop you. First, could you explain what relevance it has to the comment you are replying to?
“The calculation of the mean doesn’t become more ACCURATE with larger samples, its location becomes more precise and that is all you can say about it.”
Your claim was that uncertainty increases with larger samples. Not that it stays the same.
And you are still wrong. The average of a large sample will still be more accurate than a single measurement.
“For anything else the measurement uncertainty of the average has to be propagated using the measurement uncertainty of the data elements, either using direct addition or by using root-sum-square.”
It doesn’t matter how many times you assert this it’s still wrong. You do not propagate the uncertainties in a mean by just adding them.
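To illustrate the claim above that an average of many measurements lands closer to the true value than a single measurement, here is a small Monte Carlo sketch. It assumes independent, zero-mean, Gaussian errors – the very assumption the other side of this argument disputes – so treat it as a sketch of the statistical point, not a settlement of the disagreement.

```python
import random
import statistics

random.seed(1)

TRUE_VALUE = 85.0   # hypothetical true temperature (deg F)
SIGMA = 0.5         # hypothetical std deviation of a single reading's error (deg F)
N = 100             # readings averaged per trial
TRIALS = 10_000

single_errors, mean_errors = [], []
for _ in range(TRIALS):
    readings = [random.gauss(TRUE_VALUE, SIGMA) for _ in range(N)]
    single_errors.append(abs(readings[0] - TRUE_VALUE))          # error of one reading
    mean_errors.append(abs(statistics.mean(readings) - TRUE_VALUE))  # error of the average

print(f"typical error of one reading : {statistics.mean(single_errors):.3f} F")
print(f"typical error of the average : {statistics.mean(mean_errors):.3f} F")
```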
This is bullshit, and you know it.
This is also why attempting to educate you is pointless.
Whine me an ocean, clown.
It’s because he still, after at least two years, hasn’t actually studied the subject. He just cherry-picks stuff. A true troll.
Yep. A perfect example that trendology has absolutely nothing to do with climate – it’s just arguing about these silly anomalies.
“No, the EVIDENCE suggests nothing.”
How would you know? You refuse to look at any evidence that might contradict your own beliefs.
“The prognostications of the global warming...”
At which point it’s safe to ignore the rest of the comment.
I believe it was Richard Feynman who said something like: if the theory doesn’t give answers that match observations, then the theory is wrong.
The EVIDENCE, i.e. the real world observations, proves the theory wrong. None of the prognostications suggested by the climate models have come to pass – NONE.
And it is telling that you can’t point to a single prognostication that has come true.
“I believe it was Richard Feynman that said something like”
Lots of people have said it, and it’s the point I’m trying to make to you. You claim to be able to determine, from theory, that the uncertainty in a monthly global average is very large, but the evidence suggests your theory is wrong.
It is possible to detect changes of a few tenths of a degree, the entire variation of all monthly means is less than you claim for the uncertainty, and there is good agreement in monthly values between different independent data sets.
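One way to put a number on “good agreement between independent data sets” is to look at the scatter of the month-by-month differences: if two independent series each carried standard uncertainty u, their differences should scatter with a standard deviation of roughly √2·u. The sketch below uses made-up anomaly values purely to show the arithmetic; it is not real satellite or surface data.

```python
import math
import statistics

# Hypothetical monthly anomalies (deg C) from two independent datasets -- made-up values
series_a = [0.12, 0.18, 0.25, 0.20, 0.15, 0.22, 0.30, 0.27, 0.19, 0.16, 0.21, 0.24]
series_b = [0.10, 0.20, 0.23, 0.22, 0.13, 0.25, 0.28, 0.29, 0.17, 0.18, 0.19, 0.26]

diffs = [a - b for a, b in zip(series_a, series_b)]
sd_diff = statistics.stdev(diffs)

# For independent series with equal standard uncertainty u, sd_diff is roughly sqrt(2) * u
implied_u = sd_diff / math.sqrt(2)

print(f"std dev of the differences : {sd_diff:.3f} C")
print(f"implied per-series u       : {implied_u:.3f} C")
```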
“The EVIDENCE, i.e. the real world observations, proves the theory wrong. None of the prognostications suggested by the climate models have come to pass – NONE.”
And you keep trying to change this into an argument about global warming, rather than uncertainty.
In so doing you make it clear that you have a bias which might be affecting your belief in the uncertainty of measurements. And you shoot yourself in the foot. You claim that there is so much uncertainty in the measurements that it’s impossible to know what global temperatures are doing – yet you want to claim these real world observations prove that the world is not warming.
How would you know?
Without your impossibly tiny “error bar” fantasies on these trend line graphs, you would be completely bankrupt.
“You claim to be able to determine, form theory, that the uncertainty in a monthly global average is very large, but the evidence suggests your theory is wrong.”
What you claim as “evidence” is not any such thing.
It is simply not possible to detect a difference that is within the UNKNOWN interval. You can *guess* at the difference, you can cast bones to read what the difference is, you can gaze into a cloudy crystal ball to scry out what the difference is but you can’t KNOW what the difference is.
“And you keep trying to change this into an argument about global warming, rather than uncertainty.”
The whole issue is the uncertainty of that global warming!
“In so doing you make it clear that you have a bias which might be affecting your believe in the uncertainty of measurements.”
I have no bias. I have Taylor, Bevington, Possolo, and the GUM as my evidence of how to calculate uncertainty. Something which you and much of the climate science clique wish would go away!
“You claim that there is so much uncertainty in the measurements that it’s impossible to know what global temperatures are doing – yet you want to claim these real world observations prove that the world is not warming.”
Did you actually proofread this before you posted it? The greening of the earth is happening – it is an observation of reality. Record global grain harvests *are* happening each and every year – an observation of reality. Fewer extreme weather events are happening – an observation of reality.
What I *said* was the prognostications of what global warming would do ARE WRONG. That means either global warming isn’t happening or it is BENEFICIAL and not a catastrophe – take your pick.
This guy is the Pontius Pilate of climatology, tries to pretend he’s aloof from all the green marxist politics, but if Christopher Monckton dares to point out there has been no warming for eight years using their own data, then the loud and long whining commences.
“What you claim as “evidence” is not any such thing.”
Then say what evidence you would consider acceptable.
If you claim the uncertainty in a monthly UAH anomaly is ±1.4°C or more, say how you would test this hypothesis. What evidence would convince you this is wrong? What evidence would you provide that this is correct?
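One testable consequence, sketched below with purely simulated numbers: if every monthly value really carried an independent standard uncertainty of 1.4 °C, a reported anomaly series should bounce around with a spread of roughly that size, whereas a series with uncertainty of a few hundredths of a degree would vary far less. Comparing such predictions with the scatter actually seen in published monthly series is one way either side could confront the claim with evidence. The assumption that the uncertainty behaves like independent month-to-month noise is made here for illustration only.

```python
import random
import statistics

random.seed(0)

# Hypothetical "true" monthly anomalies: a small trend plus modest natural variability (deg C)
MONTHS = 240
true_anoms = [0.002 * m + random.gauss(0.0, 0.1) for m in range(MONTHS)]

def observed(series, u):
    """Add independent zero-mean measurement noise with standard uncertainty u."""
    return [x + random.gauss(0.0, u) for x in series]

for u in (1.4, 0.05):
    obs = observed(true_anoms, u)
    print(f"claimed monthly uncertainty u = {u:>4} C -> "
          f"std dev of the reported series = {statistics.stdev(obs):.2f} C")
```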
Does anyone understand the NOAA website featured in this article? This NOAA website has selection criteria and, for this article, maximum temperatures occurring in June through August have been chosen (statewide time series). I find no information as to whether these are thermometer readings or adjusted temperatures.
“Maximum temperature” seems to me to mean the graphs and charts should display the highest temperature recorded each year for that time period in the state, county, or city selected. In fact, its plots and charts show maximum temperatures much lower than reality.
For various places I’ve lived, the maximum temperature shown when selecting those summer months is far below the temperatures actually recorded there. Just for example, in and near Sacramento, CA, going back to 1962, temperatures above 100F were common in July and August, sometimes in early September. Further north in the central valley, temperatures were often higher, 110F, perhaps more.
I am now in Nevada. The last two summers, 100+ occurred fairly often. This July, the highest I experienced was 117F during a trip to lower elevations. During last year’s summer it was 116F. These high temperatures are what weather reports and local temperature displays showed.
No doubt those highs, being in Las Vegas and Mesquite, were influenced by UHI. Where I live, at 3500 feet, with a claimed population of less than 1000, UHI is probably less, and the temperatures were about 10 degrees F lower. Yet for California, for the entire period of record, the NOAA web page shows once reaching 94F; for Nevada, once reaching 90F.
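One possible explanation, offered as a guess rather than a statement of NOAA’s methodology: if the plotted “Maximum Temperature” value is an average of daily high temperatures over the whole June–August period (and, for a statewide series, over every station in the state, cool coastal and mountain stations included), it will sit far below the single hottest reading anyone experienced. The sketch below uses made-up daily highs for one station to show how large that gap can be.

```python
import statistics

# Hypothetical daily high temperatures (deg F) for part of one summer at one station -- made-up values
daily_highs = [88, 92, 95, 101, 106, 110, 104, 97, 93, 90, 96, 102, 108, 99, 94]

average_of_daily_highs = statistics.mean(daily_highs)  # the kind of number a seasonal average of highs gives
single_hottest_reading = max(daily_highs)              # the kind of number people remember

print(f"average of the daily highs : {average_of_daily_highs:.1f} F")
print(f"single hottest reading     : {single_hottest_reading} F")
```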