The US Blows Hot And Cold

Guest Post by Willis Eschenbach

I got to thinking about the raw unadjusted temperature station data. Despite the many flaws in the individual weather stations making up the US Historical Climatology Network (USHCN), as revealed by Anthony Watts’ SurfaceStations project, the USHCN is arguably one of the best country networks. So I thought I’d take a look at what it reveals.

The data is available here, with further information about the dataset here. The page says:

UNITED STATES HISTORICAL CLIMATOLOGY NETWORK (USHCN) Daily Dataset M.J. Menne, C.N. Williams, Jr., and R.S. Vose National Climatic Data Center, National Oceanic and Atmospheric Administration

These files comprise CDIAC’s most current version of USHCN daily data.

These appear to be the raw, unhomogenized, unadjusted daily data files. Works for me. I started by looking at the lengths of the various records.

Figure 1. Lengths of the 1,218 USHCN temperature records. The picture shows a “Stevenson Screen”, the enclosure used to protect the instruments from direct sunlight so that they are measuring actual air temperature.

This is good news. 97.4% of the temperature records are longer than 30 years, and 99.7% are longer than 20 years. So I chose to use them all.
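
The post doesn’t show code, but tabulating record lengths from the daily files is straightforward. A minimal sketch in Python, assuming the daily data has been loaded into a pandas DataFrame; the file name and the “station” and “date” columns are hypothetical:

```python
import pandas as pd

# Hypothetical layout: one row per station/day.
daily = pd.read_csv("ushcn_daily.csv", parse_dates=["date"])

# Record length per station, in years, from first to last observation.
span = daily.groupby("station")["date"].agg(["min", "max"])
years = (span["max"] - span["min"]).dt.days / 365.25

print(f"{(years > 30).mean():.1%} of records exceed 30 years")
print(f"{(years > 20).mean():.1%} of records exceed 20 years")
```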

Next, I considered the trends of the minimum and maximum temperatures. I purposely did not consider the mean (average) trend, for a simple reason. We experience the daily maximum and minimum temperatures, the warmest and coldest times of the day. But nobody ever experiences an average temperature. It’s a mathematical construct. And I wanted to look at what we actually can sense and feel.

First I considered minimum temperatures. I began by looking at which stations were warming and which were cooling. Figure 2 shows that result.

Figure 2. USHCN minimum temperature trends by station. White is cooling, red is warming.

Interesting. Clearly, “global” warming isn’t. The minimum temperature at 30% of the USHCN stations is getting colder, not warmer. However, overall, the median trend is still warming. Here’s a histogram of the minimum temperature trends.

Figure 3. Histogram of 1,218 USHCN minimum temperature trends. See Menne et al. for estimates of what the various adjustments would do to this raw data.

Overall, the daily minimum temperatures have been warming. However, they’re only warming at a median rate of 1.1°C per century … hardly noticeable. And I have to say that I’m not terrified of warmer nights, particularly since most of the warmer nights are occurring in the winter. In my youth, I spent a couple of winter nights sleeping on a piece of cardboard on the street in New York, with newspapers wrapped around my legs under my pants for warmth.

I can assure you that I would have welcomed a warmer nighttime temperature …

The truth that climate alarmists don’t want you to notice is that extreme cold kills far more people than extreme warmth. A study in the British medical journal The Lancet showed that from 2000 to 2019, extreme cold killed about four and a half million people per year, while extreme warmth killed only about half a million.

Figure 4. Excess deaths from extreme heat and cold, 2000-2019

So I’m not worried about an increase in minimum temperatures—that can only reduce mortality for plants, animals, and humanoids alike.

But what about maximum temperatures? Here are the trends of the USHCN stations as in Figure 2, but for maximum temperatures.

Figure 5. USHCN maximum temperature trends by station. White is cooling, red is warming.

I see a lot more white. Recall from Figure 2 that 30% of minimum temperature stations are cooling. But with maximum temperatures, about half of them are cooling (49.2%).

And here is the histogram of maximum temperature trends. Basically, half warming, half cooling.

Figure 6. Histogram of 1,218 USHCN maximum temperature trends.

For maximum temperatures, the overall median trend is a trivial 0.07°C per century … color me unimpressed.
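
The post doesn’t specify its trend-fitting method, so the sketch below assumes a simple ordinary-least-squares fit per station, reusing the hypothetical DataFrame from the earlier sketch. It prints the kind of summary statistics quoted above (fraction of cooling stations, median trend in °C per century):

```python
import numpy as np
import pandas as pd

daily = pd.read_csv("ushcn_daily.csv", parse_dates=["date"])  # hypothetical file/columns

def trend_deg_c_per_century(g: pd.DataFrame) -> float:
    """OLS slope of daily Tmax against time, scaled to °C per century."""
    t_years = (g["date"] - g["date"].min()).dt.days / 365.25
    return np.polyfit(t_years, g["tmax"], 1)[0] * 100.0

trends = daily.dropna(subset=["tmax"]).groupby("station").apply(trend_deg_c_per_century)
print(f"cooling stations: {(trends < 0).mean():.1%}")   # the post reports ~49.2% for Tmax
print(f"median trend:     {trends.median():.2f} °C/century")
```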

Call me crazy, but I say this is not any kind of an “existential threat”, “problem of the century”, or “climate emergency” as is often claimed by climate alarmists. Instead, it is a mild warming of the nights and no warming of the days. In fact, there’s no “climate emergency” at all.

And if you are suffering from what the American Psychiatric Association describes as “the mental health consequences of events linked to a changing global climate including mild stress and distress, high-risk coping behavior such as increased alcohol use and, occasionally, mental disorders such as depression, anxiety and post-traumatic stress” … well, I’d suggest you find a new excuse for your alcoholism, anxiety, or depression. That dog won’t hunt.

My very best to everyone from a very rainy California. When we had drought over the last couple of years, people blamed evil “climate change” … and now that we’re getting lots of rain, guess what people are blaming?

Yep, you guessed it.

w.

As Always: I ask that when you comment you quote the exact words you’re discussing. This avoids endless misunderstandings.

Adjustments: The raw data I’ve used above is often subjected to several different adjustments, as discussed here. One of the largest adjustments is for the time of observation, usually referred to as TOBS. The effect of the TOBS adjustment is to increase the overall trend in maximum temperatures by about 0.15°C per century (±0.02) and in minimum temperatures by about 0.22°C per century (±0.02). So if you wish, you can add those values to the trends shown above (for example, the 0.07°C per century raw maximum trend would become roughly 0.22°C per century, and the 1.1°C per century minimum trend roughly 1.3°C per century). Me, I’m not too fussed about an adjustment of a tenth or two of a degree per century; I’m not even sure the network can measure to that level of precision. And it certainly is not perceptible to humans.

There are also adjustments for “homogeneity”, for station moves, instrument changes, and changes in conditions surrounding the instrument site.

Are these adjustments all valid? Unknown. For example, the adjustments for “homogeneity” assume that one station’s record should be similar to a nearby station’s … but a look at the maps above shows that’s not the case. I know that where I live, it very rarely freezes. But less than a quarter mile (about 0.4 km) away, on the opposite side of the hill, it freezes a half-dozen times a year or so … homogeneous? I don’t think so.

The underlying problem is that in almost all cases there is no overlap in the pre- and post-change records. This makes it very difficult to determine the effects of the changes directly, and so indirect methods have to be used. There’s a description of the method for the TOBS adjustment here.

This also makes it very hard to estimate the effect of the adjustments. For example:

To calculate the effect of the TOB adjustments on the HCN version 2 temperature trends, the monthly TOB adjusted temperatures at each HCN station were converted to an anomaly relative to the 1961–90 station mean. Anomalies were then interpolated to the nodes of a 0.25° × 0.25° latitude–longitude grid using the method described by Willmott et al. (1985). Finally, gridpoint values were area weighted into a mean anomaly for the CONUS for each month and year. The process was then repeated for the unadjusted temperature data, and a difference series was formed between the TOB adjusted and unadjusted data.
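
For readers who want the mechanics of the quoted procedure, here is a rough Python sketch of the gridding and area-weighting steps. It is a simplification: stations are binned to the nearest 0.25° cell rather than interpolated by the Willmott et al. method, and the CONUS bounds are approximate.

```python
import numpy as np

def conus_mean_anomaly(lats, lons, anoms, cell=0.25):
    """Area-weighted CONUS mean of station anomalies on a cell-degree grid.
    Station-to-cell binning stands in for proper interpolation."""
    lat_edges = np.arange(24.0, 50.0 + cell, cell)    # rough CONUS latitude bounds
    lon_edges = np.arange(-125.0, -66.0 + cell, cell)
    sums, _, _ = np.histogram2d(lats, lons, bins=[lat_edges, lon_edges], weights=anoms)
    counts, _, _ = np.histogram2d(lats, lons, bins=[lat_edges, lon_edges])
    with np.errstate(invalid="ignore"):
        grid = sums / counts                          # mean anomaly per filled cell
    # A cell's area shrinks with cos(latitude), so weight rows accordingly.
    lat_centers = (lat_edges[:-1] + lat_edges[1:]) / 2.0
    weights = np.cos(np.radians(lat_centers))[:, None] * np.ones_like(grid)
    filled = ~np.isnan(grid)
    return np.sum(grid[filled] * weights[filled]) / np.sum(weights[filled])
```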

To avoid all of that uncertainty, I’ve used the raw unadjusted data. 

Addendum Regarding The Title: There’s an Aesop’s Fable, #35:

“A Man had lost his way in a wood one bitter winter’s night. As he was roaming about, a Satyr came up to him, and finding that he had lost his way, promised to give him a lodging for the night, and guide him out of the forest in the morning. As he went along to the Satyr’s cell, the Man raised both his hands to his mouth and kept on blowing at them. ‘What do you do that for?’ said the Satyr. ‘My hands are numb with the cold,’ said the Man, ‘and my breath warms them.’ After this they arrived at the Satyr’s home, and soon the Satyr put a smoking dish of porridge before him. But when the Man raised his spoon to his mouth he began blowing upon it. ‘And what do you do that for?’ said the Satyr. ‘The porridge is too hot, and my breath will cool it.’ ‘Out you go,’ said the Satyr, ‘I will have nought to do with a man who can blow hot and cold with the same breath.’”

The actual moral of the story is not the usual one that people draw from the fable, that the Man is fickle and the Satyr can’t trust him.

The Man is not fickle. His breath is always the same temperature … but what’s changing are the temperatures of his surroundings, just as they have been changing since time immemorial.

We call it “weather”.

239 Comments
Henry Pool
March 11, 2023 10:15 am

Ja. Ja. I told you so. Maxima are dropping. Worldwide. I wonder why nobody has noticed it? I am even battling to keep the temperature of my swimming pool up….

Henry Pool
Reply to  Henry Pool
March 11, 2023 10:18 am
Henry Pool
Reply to  Henry Pool
March 11, 2023 10:35 am

Note that in Table 1 of the same report I show what is happening. It corresponds 100% with Willis’ observations. Minima are going up in the NH. But they are going down in the SH. Now look at the change in position of the magnetic north pole.

Geoff Sherrington
Reply to  Henry Pool
March 11, 2023 3:53 pm

Henry Pool,
Temperatures from UAH satellite platforms are dropping all over the world just now, but that does not allow a forecast of future trends. Any month ahead can see a reversal of the present trend. You simply must expect a future warming trend, magnitude and timing unknown, because the predictive efforts do not work. We do not understand enough about the causes of changes to temperatures like these. Geoff S

Henry Pool
Reply to  Geoff Sherrington
March 12, 2023 3:53 am

Geoff

Carefully look at Table 2 of my report quoted earlier to understand why minima in the NH are going up. Here where I live, in Pretoria, minima went down by 0.8K compared to 40 years ago. The heat in the NH is not coming from the sun. It is not coming from the CO2. It is simply coming from inside…..

wilpost
Reply to  Henry Pool
March 11, 2023 6:06 pm

I have lived in Vermont, New England, US, for 35 years.

During earlier years, we had lower temperatures in winter, such as -25F or -30F, near my house.
During later years, we had temperatures of -10F or -15F.

On average the US colder temperatures have not changed much, per the histograms, but that is not the case in New England.

It would be interesting if Willis could verify that with histograms.

wilpost
Reply to  Willis Eschenbach
March 12, 2023 6:49 am

Thank you, Willis.

I am astounded the difference is that little, about 3F over about 4 decades.

Vermont’s and New Hampshire’s climate gets colder from south to north.

I live near Woodstock, VT, near the middle.

wilpost
Reply to  wilpost
March 13, 2023 12:00 am

Hi Willis,

I have one more request.
Please add the Lowess line for the maxima.

That would give me some ammunition to try to change the minds of a few fence-sitting Legislators, who might be persuadable towards sanity

vuk
March 11, 2023 10:16 am

Date to remember:
Great Blizzard of New York City on March 11 1888
https://en.wikipedia.org/wiki/Great_Blizzard_of_1888

Walter
Reply to  vuk
March 11, 2023 10:34 am

Why is that a date to remember?

John Oliver
Reply to  Walter
March 11, 2023 11:59 am

Probably no particular relevance. But interesting- a late winter blizzard to remind people nature is in control not man.

Duker
Reply to  John Oliver
March 11, 2023 12:40 pm

More relevant:
Top 10 snowfalls; most were blizzards too

  1. 27.5” on January 22-23, 2016
  2. 26.9” on February 11-12, 2006
  3. 25.8” on December 26-27, 1947
  4. 21.0” on March 12-14th, 1888
  5. 20.9” on February 25-26, 2010
  6. 20.2” on January 7-8, 1996
  7. 20.0” on December 26-27, 2010
  8. 19.8” on February 16-17, 2003
  9. 19.0” on January 26-27, 2011
  10. 18.1” on March 7-8, 1941 & January 22-24, 1935

Six of the 10 have come since 2000.

http://www.weather2000.com/NY_Snowstorms.html

Henry Pool
Reply to  Duker
March 12, 2023 11:14 am

I think (more) snow forms when it is cooler. So the extra snow must be because of all that global warming….

rah
Reply to  Walter
March 11, 2023 1:22 pm

Joe Bastardi is saying conditions next week are nearly identical to when that blizzard occurred.

John Hultquist
Reply to  Walter
March 11, 2023 4:10 pm

Depends on who your date was.

Walter
March 11, 2023 10:33 am

Speaking of which, when is Anthony’s new temperature dataset that he announced at the conferences going to be up and running? I can’t wait to see how the media will react to that.

Also, February’s temperature for the USCRN just came in with a +1.1 anomaly, which is colder than February 2005. I tried searching on the web for any articles and pieces from the media about the warmth back then and whether it supposedly equated to catastrophic human-caused global warming. I couldn’t find anything. Yet now there are articles like this:

https://www.cnn.com/2023/02/23/us/early-spring-record-warmth-impacts-climate/index.html
https://www.washingtonpost.com/business/50f-in-february-this-is-what-climate-change-looks-like/2023/02/15/7470c3f8-ad3f-11ed-b0ba-9f4244c6e5da_story.html

It just shows how misleading and desperate they are. This is literally brainwashing.

Richard Greene
Reply to  Walter
March 11, 2023 11:32 am

You should not compare one month with another month

That is data mining

The USCRN trend is down since 2015
after being up from 2005 to 2015.

While an eight-year cooling trend from 2015 to 2023 in the US does not predict the future US or global climate, it does reveal a very important fact.

The largest 8-year increase of global CO2 emissions in history was accompanied by cooling in the US, and a very nearly flat trend globally (UAH data) for the past 101 months.

Looks like CO2 lost its imaginary job as the climate control knob !

rah
Reply to  Richard Greene
March 11, 2023 12:26 pm

The real problem with USCRN is that it did not come on line until 2005.

Walter
Reply to  Richard Greene
March 11, 2023 2:50 pm

Richard,

My main point was that despite the fact that the winter of ‘04-‘05 was warmer than the winter of ‘22-‘23, there was no such ridiculous reporting connecting it to climate change back then. It just shows that they’re trying to scare people and they are stepping up their propaganda tactics even more now.

Richard Greene
Reply to  Walter
March 11, 2023 8:13 pm

Climate change is 1% science and 99% trying to scare people to gain government power and control over the private sector.

bigoilbob
Reply to  Richard Greene
March 12, 2023 8:21 am

“You should not compare one month with another month.”

Bingo. These “hottest month” pronouncements are not only meaning-free, but provide legit – if convenient – targets for posters here. Since we have confidence limits for those GAT months, you can assess the probability of those “hottest month” pronouncements. But even then, who cares?

“Looks like CO2 lost its imaginary job as the climate control knob!”

Not necessarily. It just demonstrates the importance of periodic oceanic trends to the cyclical, overall up-trending global temperatures since the waning of the mid-century aerosol era. I realize that you deny the importance of that era climatically, but it is undeniable, above ground.

Jeff Alberts
Reply to  Walter
March 11, 2023 9:53 pm

“I can’t wait to see how the media will react to that.”

What makes you think they’ll even notice it?

Bellman
March 11, 2023 10:36 am

What is the period these trends are measured over? Is it using the same period for all stations?

Bellman
Reply to  Bellman
March 11, 2023 4:48 pm

I ask because it makes a difference. NOAA’s trends for the USA:

1895 – 2014
Min: 0.8°C / Century
Max: 0.7°C / Century

1950 – 2014
Min: 1.8°C / Century
Max: 1.3°C / Century

1979 – 2014
Min: 2.1°C / Century
Max: 2.7°C / Century

Steve Richards
Reply to  Bellman
March 11, 2023 11:45 pm

If the apparent trend varies so much due to the choice of end points, then you need to increase your error bars dramatically. Or consider whether you need to worry about such a fluffy trend at all.

Bellman
Reply to  Steve Richards
March 12, 2023 1:20 pm

Or consider that warming over the last 125 years has not been linear.

Jim Gorman
Reply to  Bellman
March 12, 2023 2:34 pm

I agree with Steve Richards. You always gripe about cherry-picking end points. The variance in the trends you show demonstrates how trending can mislead one. I am arguing with someone on Twitter who declares the temperature data to be “noise” and the linear trend to be the actual signal. That is basically what you are doing also. As Steve points out, the error bars on your trends should definitely be large. Large to the point where the uncertainty in the trend prevents one from making any conclusions.

I spent 30 years in the telephone industry forecasting call growth, usage growth, usage from here to there, capital and expense expenditures, people, maintenance, etc. I learned early on that time series trends and smoothing were your enemy. Time is not a functional variable in any of those and it is not in temperature either. The ONLY way to make any headway in determining what causes variations in anything is to know what the actual inputs are and how they interact with each other. Modelers have tried to do this and fail miserably. Why do you think a time series trend when temps go up and temps go down will tell you anything beyond having your confirmation bias confirmed?

Bellman
Reply to  Jim Gorman
March 12, 2023 4:57 pm

You realize that every one of your points could have been addressed to Willis Eschenbach. It’s his article. He’s the one drawing trend lines for every individual weather station in the US. He’s the one claiming on the basis of the median of all these stations, that the rate of warming is nothing to worry about. He’s not producing any uncertainty for these trend lines, or specifying what end points he’s using.

Jim Gorman
Reply to  Bellman
March 13, 2023 3:00 pm

Sounds like you are concerned about scientific debate.

Just remember that a time series of temperature being trended does not prove any causation. If people aren’t enamored of your “trends”, maybe you need to consider why?

Bellman
Reply to  Jim Gorman
March 13, 2023 4:15 pm

Debate? All you do is throw insults and demonstrate your inability to address the point. E.g.

Just remember that a time series of temperature being trended does not prove any causation.

I claimed absolutely nothing about causation, any more than this article does. My point was that you can’t just say “the trend” without specifying what period your trend is over. This is particularly addressed to the point in the article where it’s stated:

Overall, the daily minimum temperatures have been warming. However, they’re only warming at a median rate of 1.1°C per century … hardly noticeable.

If people aren’t enamored of your “trends”, maybe you need to consider why?

They are not my trends – I just pulled them off the NOAA website. Again, though the usual issue with your comments – why do you only object when I present a trend? Not when Monckton does it, not in this article, only when I report a trend do you suddenly go on about causation, or predicting the future or uncertainty etc.

Bellman
Reply to  Jim Gorman
March 12, 2023 7:12 pm

Here’s the graph of maximum temperatures across the series, from 1895 – 2022.

Trend is 0.80 ± 0.25 °C / Century. (2σ confidence interval)

So statistically significant. But, as I say, it doesn’t represent a linear rate of warming.

[attached image: 20230312wuwt4.png]
Bellman
Reply to  Bellman
March 12, 2023 7:25 pm

Here are the time periods I used before, with their 2σ confidence intervals

1895 – 2014
Min: 0.80 ± 0.23°C / Century
Max: 0.65 ± 0.27°C / Century

1950 – 2014
Min: 1.77 ± 0.52°C / Century
Max: 1.25 ± 0.68°C / Century

1979 – 2014
Min: 2.1 ± 1.4°C / Century
Max: 2.7 ± 1.8°C / Century
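
For anyone wanting to check figures like these, the trend and a 2σ confidence interval can be computed from an annual anomaly series in a few lines. A sketch; the anomaly series itself is not included here and must be supplied:

```python
import numpy as np
from scipy import stats

def trend_with_ci(years, anoms):
    """OLS trend in °C/century with a 2-sigma confidence interval."""
    fit = stats.linregress(years, anoms)
    return fit.slope * 100.0, 2.0 * fit.stderr * 100.0

# Example usage with a user-supplied series:
# years = np.arange(1950, 2015)
# anoms = ...  # annual mean CONUS anomalies, °C
# slope, ci = trend_with_ci(years, anoms)   # report as "slope ± ci °C / Century"
```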

Jim Gorman
Reply to  Willis Eschenbach
March 13, 2023 3:42 pm

It is as simple as your weight. A trend doesn’t necessarily predict what is going to happen. It can tell you what has occurred but since time isn’t a functional variable determining your weight, you need to know what functional variables are controlling your weight in order to have an idea how it will change and if it will follow the established trend.

When I look at UAH, I see temps increase then fall back to the baseline. I also see that they have decreased and gone back up. That simply can’t happen under constantly increasing CO2 if CO2 is the controlling variable.

My preliminary investigations point toward seasonal changes as partially causing spurious trends. Averaging NH and SH likely has an effect on spurious trends also.

I learned early in my career that simple linear regression based on time is most likely going to give incorrect answers when predicting. I am seeing too many locations that show no warming and even cooling to all of a sudden start believing in linear regression based on averages of averages of averages as a panacea for forecasting future temperatures.

Jim Gorman
Reply to  Willis Eschenbach
March 14, 2023 6:14 pm

I’ll just point out that experience has taught me that trends, especially linear trends, will bite you on the butt. They are only good to show where you have been.

I know that you know what goes into your body and can therefore have a good idea what is going to happen. My point is how many people forecasted the current pause the CMB is tracking? How many people have gone on the record about when it will end? As Nick pointed out above, a lot of people think the current linear trend will continue and maybe even get steeper until the earth literally burns up. That is what trends are good for. The SVB bank near you probably used a similar method to track their viability.

Bellman
Reply to  Jim Gorman
March 13, 2023 4:44 pm

I might as well continue the “debate” here.

You always gripe about cherry picking end points.

As should anyone. Monckton used to get very self righteous about “the end-point fallacy”, pointing out how carefully choosing your endpoints to get a result you wanted was a bogus statistical technique.

I’ve also griped about looking at a linear trend over non linear data, and making claims about the strength of that trend.

The variance in the trends you show demonstrates how trending can mislead one.

I wasn’t making any specific point about those three periods. Just trying to get an answer to the question: over what period are the trends in this article calculated? I think this is an important question because the trend is being used to argue that the rate of warming isn’t a problem.

I am arguing with someone on twitter who declares the temperature data to be “noise” and the linear trend to be the actual signal. That is basically what you are doing also.

How? I’m pointing out the trend varies over different periods. It’s not possible for both the trend from 1895 and the trend from 1979 to be “the actual signal” when they are different.

As Steve points out, the error bars on your trends should definitely be large.

I’ve calculated the confidence intervals for all the trends below. As expected they get larger over shorter periods, but none of them indicate the warming trend isn’t significant.

But, as so often, you just assume they “should definitely be large” without doing the calculations for yourself.

Large to the point where the uncertainty in the trend prevents one from making any conclusions.

No. See above.

Bellman
Reply to  Bellman
March 13, 2023 4:58 pm

Continued.

“I spent 30 years in the telephone industry forecasting call growth, usage growth, usage from here to there, capital and expense expenditures, people, maintenance, etc.”

You keep saying this, and I’m sure I’ve suggested to you that the telephone industry and global temperatures may not work in the same way.

I learned early on that time series trends and smoothing were your enemy.

Or maybe, you just weren’t very good at understanding them.

Time is not a functional variable in any of those and it is not in temperature either.

And we’ve been over this numerous times before. 1) Time is the independent variable (assuming that’s what you mean by “functional variable”) when the function you are interested in is how something changes over time. 2) Nobody is suggesting that time causes the change, just that things change over time, and that suggests something is causing the change.

“The ONLY way to make any headway in determining what causes variations in anything is to know what the actual inputs are and how they interact with each other.”

As I’ve agreed with you on many occasions. If you want to understand and predict how changes occur, then you need to understand the why. Statistics won’t prove why something is happening, but they can give indications, and they can be used to provide evidence for a particular hypothesis.

Modelers have tried to do this and fail miserably.

That’s your belief.

Why do you think a time series trend when temps go up and temps go down will tell you anything beyond having your confirmation bias confirmed?

I’m not sure what particular bias you think I’m trying to confirm here. If I wanted to talk about global warming I wouldn’t be looking at US temperatures. I’m specifically making the point that a linear trend over temperatures that are going up and down like they are in the US data is not a reliable indicator of what is happening, or what will happen.

Jim Gorman
Reply to  Bellman
March 14, 2023 6:24 pm

“You keep saying this, and I’m sure I’ve suggested to you that the telephone industry and global temperatures may not work in the same way.”

You missed the entire point. A trend based on time, when time is not a functional variable, leaves you blind as to what variables determine the trend. Did your trend forecast the pause we are currently experiencing? If not, why not? Did your trend forecast any of the excursions up or down? Why not?

All your trends can tell you is that something has caused both warming and cooling to occur. You have no data to explain what the causes were or where the future will go. Does that sound like the GCMs that turn linear and just keep growing? Does any GCM forecast when temps will level off? Does your trend, or do you believe we are destined to burn up?

Bellman
Reply to  Jim Gorman
March 14, 2023 6:38 pm

Did your trend forecast the pause we are currently experiencing?

You realise your hypothetical pause is also a trend line based on nothing but time?

As I’ve said before, the trend line up to the pause would have predicted cooler temperatures than we’ve seen in the last 8 years.

[attached image: 20230315wuwt1.png]
Bellman
Reply to  Jim Gorman
March 14, 2023 6:43 pm

Did your trend forecast any of the excursions up or down? Why not?

Of course not. Because it’s linear.

But the point of this or any other regression is not to predict every up and down. It’s to indicate what the overall change has been.

Again, here’s my linear regression using just CO2, ENSO and a volcanic index.

[attached image: 20221104pr3.png]
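
Bellman’s code isn’t shown, so the sketch below is only a generic version of such a regression; the predictor series (CO2, an ENSO index, a volcanic index) are user-supplied placeholders, not the data behind the attached plot:

```python
import numpy as np

def fit_temperature_model(co2, enso, volcanic, temps):
    """OLS fit of temperature anomalies on CO2, ENSO and a volcanic index.
    All arguments are equal-length 1-D numpy arrays (user-supplied)."""
    X = np.column_stack([np.ones_like(co2), co2, enso, volcanic])
    coeffs, *_ = np.linalg.lstsq(X, temps, rcond=None)
    return coeffs, X @ coeffs   # regression coefficients and the fitted series
```
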
Jim Gorman
Reply to  Bellman
March 14, 2023 6:31 pm

I learned early on that time series trends and smoothing were your enemy.

“Or maybe, you just weren’t very good at understanding them.”

Yeah, I am sure I just wasn’t smart enough to understand what a straight line was showing for growth. It only takes meeting with the boss one time to discuss missed forecasts to learn what those straight lines really meant. You obviously have never had one of those meetings after missing forecasts.

For your own education, there were any number of factors that needed to be considered, not unlike temperature. Government funding of all kinds, layoffs, new businesses, recessions, population changes, and on and on. If you didn’t do the research properly and just relied on what the past did, you were screwed.

Bellman
Reply to  Jim Gorman
March 14, 2023 6:53 pm

It only takes meeting with the boss one time to discuss missed forecasts to learn what those straight lines really meant.

That’s my point. If all you did was project a linear trend would continue into the future, you were not very good at forecasting. Hopefully you learnt from your mistake.

For your own education, there were any number of factors that needed to be considered, not unlike temperature.

I love how you keep trying to “educate” me on things I keep showing you. I keep showing you my simple linear models based on any number of things that might affect global temperatures. And, as I keep saying, if I was trying to predict the future I wouldn’t rely on that. I’d want an actual physical model, and even that wouldn’t be enough to predict the future. It would require knowing how much more CO2 is going to go into the atmosphere along with any other factors, which is not something you can predict.

Jim Gorman
Reply to  Bellman
March 14, 2023 7:09 pm

“And, as I keep saying, if I was trying to predict the future I wouldn’t rely on that.”

If that is truly the case, then what are you trying to accomplish when discussing your trends? If they can’t be used to forecast, just show the data and forget the trend. It is meaningless by your own admission.

Bellman
Reply to  Jim Gorman
March 14, 2023 8:01 pm

As I keep saying, I’m using them to determine the rate of warming. And in this case to illustrate that the more recent trend was faster than the trend since 1895.

I’m not saying the trend can’t be used to forecast. But it’s not going to be a very accurate forecast, especially if you project it too far into the future.

Richard M
March 11, 2023 10:36 am

Probably mostly due to UHI. That would explain the difference between the trends. UHI is more significant at night.

Richard Greene
Reply to  Richard M
March 11, 2023 11:37 am

USCRN numbers are similar to NClimDiv numbers

The rural USCRN weather stations allegedly have no UHI

So how much UHI could possibly be included in the similar NClimDiv numbers?

Richard M
Reply to  Richard Greene
March 11, 2023 2:15 pm
Richard Greene
Reply to  Richard M
March 11, 2023 8:18 pm

USCRN weather stations are claimed to be perfectly sited rural weather stations, NOT affected by UHI, not needing adjustments, and not needing infilling because temperature data reporting is automated. That sounds too good to be true, so maybe it is not true?

It is my opinion that the integrity of the PEOPLE who compile the national average temperature is just as important as the quality of the data they have. And I do not trust NOAA.

Reply to  Richard Greene
March 12, 2023 5:46 am

The operative words in your post are “claimed to be perfectly sited”.

UHI effects can be seen downwind of UHI sites for miles, at times 20 miles or more.

Do you ever wonder why the variance of the temperature data sets is never addressed? The variance of truly rural sites will be different than the variance of those with UHI impacts.

USCRN stations should be capable of providing 5-minute data or even 2-minute data. Heck, my Davis Vantage Vue weather station can do that! It should be possible to analyze the resulting profiles to see if there are any impacts from wind-borne UHI caused by variable weather.

Of course with this kind of data it would be better for climate science to join the 21st century and begin to use integral-based degree-day analysis for each station.
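
As an illustration of the integral-based degree-day idea suggested above, a sketch assuming 5-minute temperature samples in °C and an 18.3°C (65°F) base; both choices are assumptions, not anything from the comment:

```python
import numpy as np

def heating_degree_days(temps_c, base_c=18.3, samples_per_day=288):
    """Integrate (base - T) over time from sub-daily samples (288 = 5-minute data),
    rather than approximating the day with (Tmax + Tmin) / 2."""
    deficit = np.clip(base_c - np.asarray(temps_c), 0.0, None)  # only time spent below base
    return deficit.sum() / samples_per_day                      # degree-days
```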

Jim Gorman
Reply to  Richard Greene
March 12, 2023 6:02 am

See the image. If accuracy is ±0.3°C, how does anything below this ever get reported? Averaging simply does not change the precision to which measurements are actually made.

[attached image: CRN manual.jpg]
SteveZ56
Reply to  Richard M
March 13, 2023 10:36 am

This sounds about right. More energy is used heating buildings at night than during the day, and the larger temperature gradient at night would accelerate heat losses to the outside air.

The map for the trends in the maximum temperature is interesting. Most of the stations where daily maxima are decreasing are in the Southeast and Midwest, while the stations where daily maxima are increasing are in the desert southwest, the Northwest, and Northeast.

In the Southeast, where the climate is generally warm and humid anyway, days are getting cooler, while the generally cooler climates in the northern states are getting warmer, which would tend to lengthen growing seasons. If such trends continue, this would lead to smaller temperature gradients between north and south, and tend to decrease the frequency of strong storms.

As for the desert southwest, could the daytime warming be the result of increasing urbanization (for example, Phoenix or Las Vegas), where asphalt and concrete absorb sunlight better than natural soil in the desert?

Richard Greene
March 11, 2023 11:17 am

I think this analysis was far from complete, which is rare for Willie E.

First Problem
How can one be sure that raw “unadjusted” numbers are really raw and unadjusted?

One would have to trust NOAA to be confident.
I don’t trust NOAA, do you?

NOAA has two different weather station networks, NClimDiv and USCRN, with completely different siting, yet they somehow manage to show almost the same temperature trends.

Is that just a coincidence, or is the fix in?
That is very suspicious to me.

My general rule of thumb is to NOT trust government politicians and bureaucrats.

I do trust Roy Spencer and John Christy for UAH data — they are volunteers who do not have any financial incentives to show more warming than actually exists in their data.

Second Problem:
Are the known arbitrary adjustments in the numbers shown to the public really completely missing from the raw “unadjusted” data?

For example, the mid-1930s included the hottest US year for a long time. Hotter than the initial 1998 number. Then there were arbitrary adjustments and 1998 became warmer than any year in the 1930s. Is that arbitrary adjustment also included in the raw “unadjusted” data?

Third Problem:
A weather station may be listed as being in Community A. But what if it was moved from downtown Community A to the out-of-town Community A airport? That move really creates a new weather station, in a different environment, even if it is still listed as the “Community A” weather station. How about a weather station moved to one or more different locations within Community A over many decades? Maybe Community A had a weather station for 120 years, but it was moved four times in those 120 years?
Would you still call it a 120-year record weather station?

Fourth Problem:
There are many estimates included in the so-called raw numbers, called infilling. Tony Heller, at his website, claims the estimated (infilled) data are a large percentage of USHCN data. If he is correct, the alleged raw data are not raw data at all. They are a mix of raw data and wild-guessed numbers, with the guesses coming from government bureaucrats who are likely to be biased to want to see more warming.
61% Fake Data | Real Climate Science

In my opinion, there are few real data in the average US temperature statistic.
There are mainly adjusted numbers and infilled numbers.
Those are not data.
Only raw unadjusted measurements are data.

The big question is what is actually included in the statistical average US temperature that NOAA tells the general public:
The percentage of raw unadjusted numbers?
The percentage of adjusted numbers?
The percentage of infilled numbers?

Without answers to those three questions, all we know is the average US temperature is whatever NOAA tells us it is, and there is no way we can verify if their average is correct.

Last edited 20 days ago by Richard Greene
Mike Jonas (Editor)
Reply to  Richard Greene
March 11, 2023 1:05 pm

I’m not sure that those problems are significant.
#1. How can one be sure that raw “unadjusted” numbers are really raw and unadjusted?
They are the rawest we have. They don’t have to be 100% raw in order to be different to the official numbers or interesting, just rawer. If even rawer numbers become available, I’m sure that Willis will tell us about them.
#2. Are the known arbitrary adjustments in the numbers shown to the public really completely missing from the raw “unadjusted” data?
The paper describing the data does indicate that the data is of the actual station readings.
#3. While a weather station may be listed as being in community A. What if it was moved from downtown Community A to the out of town Community A airport?
The same paper also states that station moves were a factor in selecting the stations. So we can reasonably assume that if there were indeed any station moves then there were fewer station moves in the selected stations than in others.
#4. There are many estimates included in the so-called raw numbers, called infilling.
The same paper states that there are both daily and monthly numbers. Infilling for a station would be used for monthly numbers where daily data was missing. Willis uses only the daily data. Maybe Willis can confirm whether some daily values are missing, in which case there is likely to be no infilling in the daily data. In any case, if there is any same-station infilling then it is surely likely to be consistent with any trend in the station’s actual measurements. That’s very different to homogenisation, where a station’s trend can easily be influenced by different trends in other stations.

No-one is claiming that the data used by Willis is as pure as the driven stuff that children don’t know any more. The data is different to the homogenised official data, and it’s interesting. That makes it worth analysing, and I thank Willis for doing the analysis.

Richard Greene
Reply to  Mike Jonas
March 11, 2023 1:18 pm

“The same paper states that there are both daily and monthly numbers. Infilling for a station would be used for monthly numbers where daily data were missing.”

It is hard to believe that infilling would be used for monthly numbers and not for daily numbers. It would seem to me that missing daily numbers would have to be infilled to compile a monthly average temperature.

I still have the largest potential problem (4) reported by Tony Heller,
claiming a majority of USHCN numbers are estimated, which means infilled. And the estimated, infilled numbers themselves have a steep warming trend. Very suspicious.

If that is anywhere close to being a true percentage of infilling, then the data are not fit for any scientific analysis.

Nick Stokes
Reply to  Richard Greene
March 11, 2023 3:37 pm

It is hard to believe that infilling would be used for monthly numbers and not for daily numbers.”

There are two different notions of infilling here. The one referred to here is infilling in time. If days in the month are missing, some expected values derived from data for the same station are used to infill before averaging. That is much better than the common device of just leaving them out of the average. But obviously, you can’t do that for daily data.

The other notion is spatial, where nearby stations are used to provide an estimate. That is never done for USHCN raw or GHCN unadjusted.
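
A toy illustration of the difference for a single station-month; the linear-in-time interpolation used here is only a stand-in for the expected values the real procedure derives from the station’s own data:

```python
import numpy as np
import pandas as pd

days = pd.date_range("2023-01-01", "2023-01-31", freq="D")
temps = pd.Series(np.sin(np.arange(31) / 5.0) * 10.0, index=days)  # toy daily series
temps.iloc[[7, 8, 20]] = np.nan                                    # three missing days

mean_dropped = temps.mean()                       # just leave the missing days out
mean_infilled = temps.interpolate("time").mean()  # infill in time, then average
print(mean_dropped, mean_infilled)
```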

bdgwx
Reply to  Richard Greene
March 11, 2023 5:56 pm

RG said: “I still have the largest potential problem (4) reported by Tony Heller,”

Keep in mind that it was Tony Heller’s insistence that you didn’t need to grid and infill data that sparked the controversy that got him banned from WUWT. Even Anthony Watts knows that you must grid and infill data to produce a spatial average.

RG said: “claiming a majority of USHCN numbers are estimated,”

No they aren’t.

RG said: “which means infilled.”

As Nick points out, there are two notions of infilling. The one that is most relevant to the discussion is spatial infilling, or the interpolation of grid cells without an assigned observation.

RG said: “And the estimated, infilled numbers themselves have a steep warming trend.”

In the US the infilling doesn’t matter much. It is the TOB and instrument change adjustments which make the trend higher relative to the raw data. This is because both the TOB changes and instrument changes put a low bias on temperature measurements [Vose et al. 2003] [Hubbard & Lin 2006].

RG said: “If that is anywhere close to being a true percentage of infilling, then the data are not fit for any scientific analysis.”

Infilling is inevitable when doing a spatial average. You can’t avoid it. The no-effort biased strategy is to assume the unfilled cells behave like the filled cells. This is often handled implicitly, as is the case with NOAAGlobalTemp v5.0 and lower and HadCRUTv4 and lower, which have less than 100% spatial coverage but call it a “global” value anyway. Other datasets like GISTEMP, BEST, and yes even UAH do the infilling explicitly with locally weighted strategies. The idea being that a grid cell behaves more like its neighbors than like the average of the entire globe.

Richard Greene
Reply to  bdgwx
March 11, 2023 8:30 pm

That was a lot of tap dancing saying nothing of value, and sounding like you are a NOAA employee.

Tony Heller reports that a majority of USHCN numbers are estimated (infilled), and the infilled numbers show a sharp warming trend NOT seen in the measured raw numbers.
That suggests infilling has a warming bias.

Either Tony Heller is right
or Tony Heller is wrong.

You obviously have no answer.
You character attacked him but
conveniently avoided my question.

I want to know what percentage
of USHCN numbers are infilled,

The answer requires a percentage
A number
Not your “no they aren’t” claim
Not an explanation of what infilling is.

If you don’t know, just say: “I don’t know”.
And if you don’t know, your conclusion that:
“In the US the infilling doesn’t matter much”
is your speculation, NOT a fact.

Nick Stokes
Reply to  Richard Greene
March 11, 2023 11:17 pm

“I want to know what percentage of USHCN numbers are infilled”

Very imprecise. Do you mean
1) % of raw data – answer 0%
2) % of adjusted data – probably his count of 61% is right.

But the thing is that USHCN was replaced by nClimDiv nine years ago. It has not been used since then in any NOAA published work. It is true that they have kept posting data from USHCN stations in a file on the internet; they have not closed that system down. But they have not made efforts to keep those stations reporting. Why should they? That system is obsolete. They now have a network of some 10,000 stations, which is what they actually use.

Grumpy Git UK
Reply to  Nick Stokes
March 12, 2023 3:36 am

Of course the Raw Data has been adjusted.
You only need to compare the Raw data from GISS V2 to the raw data for GISS V4.
Because of course V4 has been Quality Controlled.

Richard Greene
Reply to  Grumpy Git UK
March 12, 2023 4:49 am

If there are versions,
then something has changed.

If there are no changes,
then there would be only ONE version.

Reply to  Richard Greene
March 12, 2023 6:10 am

They are trying to snow you. They are statisticians and not physical scientists. Infilling creates more data points than are actually available, thus making the distribution look more peaked around the “average”. To a statistician this means the data “looks” more accurate, i.e. has a smaller standard deviation.

All it does is spread UHI and measurement uncertainty around, making the data *more* unreliable, not more reliable.

Tom Abbott
Reply to  Richard Greene
March 12, 2023 10:56 am

Excellent comments, Richard. Excellent questions.

Reply to  bdgwx
March 12, 2023 6:06 am

“Infilling is inevitable when doing a spatial average.”

Malarky. All the infilling does is drive the temperature toward the average. It hides the true variance in the actual data by adding data points that are actually unknown. Infilling SPREADS UHI impacts and measurement uncertainty over other stations.

I’ve given you the study by Hubbard and Lin before in other threads that shows that adjustments (i.e. infilling) on a regional basis are just not acceptable. They must be done on a station-by-station basis.

The average of the data you *have* is what is scientifically acceptable. Anything else is just guesswork and not fit for purpose.

Nick Stokes
Reply to  Richard Greene
March 11, 2023 1:40 pm

“NOAA has two different weather station networks, NClimDiv and USCRN, with completely different siting, yet they somehow manage to show almost the same temperature trends.”

The conspiracy notions here are nuts. Firstly, just on common sense grounds: nClimDiv has over 10,000 stations. That is, just for a start, over 10,000 people who would know if their data was being fiddled to match. Then of course, there are FOIs, OIGs, etc. You may dislike how bureaucracies work, but they just can’t work that way.

But second, the station data are published as soon as read. Here is a NOAA site which has both USCRN and nClimDiv, updated every hour, at least. How do you imagine data fiddlers are getting to it before posting? It is clearly coming straight from AWS; there is too much data for human processing. But they wouldn’t even know what USCRN stations were showing before their own station data had appeared.

“Are the known arbitrary adjustments in the numbers shown to the public really completely missing from the raw “unadjusted” data?”

Same thing. The raw data hasn’t changed. Pre-1990 data was recorded in GHCN V1 and distributed on DVD. Data this century was posted almost as soon as read. You can track it, as I showed in a WUWT post for Australia.

“There are many estimates included in the so called raw numbers, called infilling.”
Untrue. Raw data was measured at that station only. Again, you can check the station records. NOAA even posts facsimiles of the hand-written data from olden times.

Richard Greene
Reply to  Nick Stokes
March 11, 2023 8:39 pm

You can’t compute a monthly average temperature without infilling.

Therefore, raw data are incomplete until missing daily data are infilled.

Your blind trust in NOAA, and all other government data, has been noted here before. I do not have that confidence in government data on Covid, Covid vaccines, government censorship, climate change, Nut Zero temperature averages, etc. This website is popular because of the mistrust of government data and claims. We are obviously on opposite sides of this subject.

The 1940 to 1975 global cooling, as reported in 1975, was revised away by NASA-GISS. I know NASA-GISS is not NOAA, but there is no logical reason to trust government temperature averages. There are too many temperature revisions made after the week of the initial measurements.

Nick Stokes
Reply to  Richard Greene
March 11, 2023 8:57 pm

“You can’t compute a monthly average temperature without infilling”
Well, you can if none are missing. Else the missing are estimated by interpolation of the station time series.

“there is no logical reason to trust government temperature averages”

OK, then do it yourself. Here is my Feb 2023 estimate. GISS will be out in a few days. It will be much the same, even though I use unadjusted GHCN data.

Tim Gorman
Reply to  Nick Stokes
March 12, 2023 6:15 am

“Else the missing are estimated by interpolation of the station time series.”

You are dissembling, something you are very good at. The issue isn’t infilling *station* information, it is using data from station A to estimate the temperature at station B.

Hubbard and Lin showed clear back in 2004 that that is unacceptable due to microclimate differences between stations.

Nick Stokes
Reply to  Tim Gorman
March 12, 2023 6:29 am

As I spelt out here, for raw data the only thing done is time infilling when calculating monthly averages. Spatial infilling is not done.

Tim Gorman
Reply to  Nick Stokes
March 12, 2023 4:38 pm

bdgwx: “Even Anthony Watts knows that you must grid and infill data to produce a spatial average.”

Nick, you and bdgwx need to get together and get your stories straight.

There is no reason for infilling daily data in order to calculate a *monthly* average at a specific measuring station. Daily median values only use Tmax and Tmin. If you are missing those for a day, are you going to *guess* at what they actually were? Since they are at an inflection point, you can’t use “interpolation” to find out what they were. Or are you just going to use the rest of the days in the month to get an average?

Nick Stokes
Reply to  Tim Gorman
March 12, 2023 8:22 pm

Of course you must grid and infill data to get a global average. That is an average for the globe, and so you must estimate what lies between the sample points.

But you seem unable to handle the most elementary distinctions. I said that raw data is not infilled. That is, in the supplied data set. Of course people may do so themselves later.

Reply to  Nick Stokes
March 13, 2023 11:48 am

Once again, pure malarky! Infilling data only creates more data points having the average value – thus making the distribution have a smaller standard deviation. It is statistical hocus-pocus!

The word “estimate” is synonymous with the word “guess”. Guessing temperatures is a fraud! That’s why Hubbard and Lin found that you couldn’t use regional adjustments. The micro-climates between stations can be vastly different.

You use the data you have. Period. Exclamation point!
You don’t guess the temperature on top of Pikes Peak is the same as in Colorado Springs merely because they happen to be in different grids. Your guess would be wildly wrong.

See the attached graphic. If you didn’t know the temperature in Osage City would using the temp in Topeka provide a valid “estimate”? It would be off by several degrees!

Just use the temperatures you have and be done with it.

[attached image: wibw_weather_3_13_23.jpg]
Bellman
Reply to  Willis Eschenbach
March 13, 2023 4:29 pm

Perhaps you are foolish enough to infill by adding “more data points having the average value”.
Protip—sane people do not infill in that manner.

Isn’t that essentially what happens if you don’t infill? You assume all the missing data is the same as the average.

Jim Gorman
Reply to  Richard Greene
March 12, 2023 6:26 am

You can’t compute a monthly average temperature without infilling”

Sure you can. How much effect do you think 2 or 3 days, or even more, has on a monthly average?

Temperatures are highly correlated, especially daily averages. Can you have a large change between two days? Sure. However, as long as the missing temps are random, over time there isn’t a major effect.



Reply to  Willis Eschenbach
March 12, 2023 4:43 pm

The heavy black horizontal lines are the medians of the data. As you can see, infilling both decreases the spread and increases the accuracy of the results.”

The “accuracy” is just a phantom. It’s created by inserting more data points close to the “average”. It makes the standard deviation look as if it is smaller. It makes the distribution more peaked around the average.

That’s not a real increase in the “accuracy”. It’s like walking up to an archery target and sticking a bunch of extra arrows by hand into the bullseye and saying that your accuracy has gone up as a result.

wilpost
Reply to  Willis Eschenbach
March 13, 2023 6:45 am

This is the first time I have seen the term spline infilling.
I have always used linear infilling.

Jim Gorman
Reply to  Willis Eschenbach
March 13, 2023 8:45 am

I would be interested in how the variance changed by performing the infilling. If the variance changes, then the distribution is different.

I think what Tim was pointing out is what would happen if you infilled with the “monthly average”. Using surrounding days would give a different effect.

The problem with infilling, even with an average of the surrounding days is that there is no way to know if that is an accurate value. It may be that it should be closer to the preceding temp or closer to the following temp. Who can know?

The preliminary work I am doing doesn’t show a large problem with changing variance by simply using the days that are available. Over a number of years, the variance of a given month’s distribution appears to stay within the normal range.

The image I have attached has RSS as Root Sum Square. The variances have been calculated based on the NIST TN 1900 Example 2 method of determining the expanded uncertainty. As you can see the uncertainties haven’t changed much over the years.

I will add before someone else does, that for this station, there are a number of missing years, like 1970 to 1999. However, it doesn’t appear that either anomaly values nor their uncertainty have had drastic changes over the years.

[attached image: Topeka January Tmax Tmin.jpg]
Reply to  Willis Eschenbach
March 13, 2023 12:04 pm

“Instead, as I clearly stated, I interpolated by inserting values that were the average of the previous and the following days.”

You interpolated by using an average value. You *added* data points that you didn’t actually measure. In other words you guessed.

Attached is a graph of the temperatures at my weather station for the past 30 days. Yes, on some days you could average the day before and the day after and make a good guess at the temperatures. On other days you would be wildly wrong by doing that.

If your “average” interpolation method works then just averaging the actual data you have should give almost the same answer. If you sample 20 days out of 30 and get a different average from those 20 data points than you do by “infilling” the missing 10 data points then there is something drastically wrong with your sample to begin with.

I should also point out that you are speaking of infilling at ONE specific measurement station. That is vastly different than using the measurement made at station A to infill a missing value at station B in order to just have a value to put in an empty grid.

[attached image: 30_day_temp.png]
Nick Stokes
Reply to  Willis Eschenbach
March 13, 2023 2:31 pm

I did experiments similar to what Willis did here and here. Similar result, infilling is good. I didn’t try splines, though.

Nick Stokes
Reply to  Willis Eschenbach
March 13, 2023 5:58 pm

Thanks, Willis.

“…actually IS infilling, you’re just infilling with NA”

There is a bit extra to that, which I find useful. It is arithmetically equivalent to infilling with the average of the points for which you do have data. So if it is a colder or hotter place than the average, that will correspondingly be a bad choice.

bdgwx
Reply to  Willis Eschenbach
March 13, 2023 6:42 pm

The way I usually describe it is that when you leave a cell unfilled and do the trivial average of only the filled cells, you have only produced a spatial average that is a subset of the whole. For example, if NOAAGlobalTempv5.0 has 20% of its cells unfilled then the grid average is for only 80% of Earth. That’s not strictly “global”…obviously. But if you then use that 80% figure as a proxy for the whole, you are effectively infilling the remaining 20% of cells with the average of the filled 80%. That gets you to 100% and now you can rightfully call it “global”. This is what I mean when I say infilling is inevitable. The question is…do you want to infill using a local strategy (like kriging) or a non-local strategy where you effectively assume the unfilled cells behave like the average of the filled cells?
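
A tiny numeric check of that equivalence: averaging only the filled cells gives the same answer as infilling the empty ones with that average (equal cell weights assumed for simplicity):

```python
import numpy as np

cells = np.array([1.0, 2.0, 3.0, np.nan, np.nan])    # 3 filled cells, 2 unfilled

avg_filled = np.nanmean(cells)                       # average of the filled subset
infilled = np.where(np.isnan(cells), avg_filled, cells)
assert np.isclose(infilled.mean(), avg_filled)       # identical result
```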

Jim Gorman
Reply to  Willis Eschenbach
March 14, 2023 6:04 pm

Willis, it is a little more than that. You only did part of an experiment to determine the effects of infilling. Here are some questions.

  • What occurs when you simply repeat the previous day?
  • What happens if you repeat the following day?
  • What happens if the infill is one degree warmer or cooler than either the previous day or the following day?
  • What happens to the variance of the distribution when you infill with the above values? Is it larger, smaller, or does it stay the same?
  • Is the variance similar to the same months in other years that have all the days?

Too many folks that post here about averages just blithely ignore distributions and variance. You posted frequency charts in your essay. Have you examined those for monthly values? From my investigations, few months, unless you’re near a coast, have normal distributions. Plot the mean on those charts and you can recognize the skewness and kurtosis that is there. Two more statistical parameters that are never discussed.

Too many of the folks posting here are focused entirely on averages. Let’s average this, or adjust that to get a better mean, maybe homogenize, regardless of what it does to the actual distribution and variance.

One should ask themselves why medical research papers, physics research papers, and other scientific papers delve deeply into the statistical parameters and how sampling is done. Even political pollsters have better statistics than we see in climate science.

After studying Dr. Possolo’s paper that is published by NIST, I have a better understanding and appreciation for the steps he used to calculate a mean temperature in TN 1900. I have also obtained his book and studied it. One can easily understand why he makes the assumption that systematic and random measurement errors are negligible when compared to the variance in Tmax and Tmin throughout a month. Using his method, most months have an expanded uncertainty @95% confidence of ±2–4°C. You never see a variance like this quoted here.

Jim Gorman
Reply to  Willis Eschenbach
March 15, 2023 8:55 am

A long answer but I want people to understand where I am coming from.

I understand what you are doing. I have not looked at what occurs when entire months of data are missing. All of my investigations have looked at missing days within a month.

As you can see from the image I previously posted, the expanded uncertainty for each individual month, as calculated using NIST TN1900, has little variation. You simply can not tell which months have missing days by looking at the values.

I also need to point out that the statement of uncertainty has rules for significant digits also. From this web site:

Microsoft Word – Uncertainties and Significant Figures.doc (deanza.edu)

“Rule For Stating Uncertainties – Experimental uncertainties should be stated to 1 significant figure.”

“Rule For Stating Answers – The last significant figure in any answer should be in the same place as the uncertainty.”

Since your data values are integers and have 1 significant digit, the uncertainty should be rounded to 1 significant digit at the units place.

For example:
(1, 4, 5, 4, 2) => mean = 3
stated answer => 3 ± 2

We can evaluate this using the TN 1900 method.

SD = 1.643
1.643/√5 = 0.73
k factor for (DF = 5 – 1) @ 97.5% = 2.776
0.73 * 2.776 = 2.02 => 2 @ 95% confidence interval
stated answer => 3 ± 2

We get the same expanded uncertainty. Isn’t that amazing?

Let’s do it for the 2nd set – minus one value

(1, NA, 5, 4, 2) => mean = 3
SD = 1.826
1.826/√4 = 0.91
k factor for (DF = 4 – 1) @ 97.5% = 3.182
0.91 * 3.182 = 2.90 => 3 @ 95% confidence interval
stated answer => 3 ± 3

Whoa! Why is this? The fewer values you have, the larger the uncertainty. Let’s look at a number set with 30 values, then remove four of them.

(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44)

mean => 22.5
SD = 15.4
SD/√30 = 2.81
k factor for (DF = 30 – 1) @ 97.5% = 2.05
2.81*2.05 = 5.76
Stated answer => 23 ± 6

(1,2,3,4,5,6,8,9,10,11,12,14,15,30,31,32,33,34,35,36,38,39,40,41,43,44) {missing 7, 13, 37, 42}

mean = 22.15
SD = 15.4
SD/√26 =3.02
k factor for (DF = 26 – 1) @ 97.5% = 2.06
3.02*2.06 = 6.22
Stated answer => 22 ± 6

Please note that I have carried extra digits solely for the purpose of reducing rounding errors. The final answer follows Significant Digit rules.
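
For anyone who wants to reproduce those numbers, here is a minimal sketch of the TN 1900 Example 2 style calculation (scipy’s Student-t quantile supplies the k factor):

```python
import numpy as np
from scipy import stats

def expanded_uncertainty(values, confidence=0.95):
    """TN 1900 Ex. 2 style: SD/sqrt(n), scaled by the Student-t k factor."""
    x = np.asarray(values, dtype=float)
    n = len(x)
    sem = x.std(ddof=1) / np.sqrt(n)
    k = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)
    return x.mean(), k * sem

print(expanded_uncertainty([1, 4, 5, 4, 2]))   # ~ (3.2, 2.0) -> stated answer 3 ± 2
print(expanded_uncertainty([1, 5, 4, 2]))      # ~ (3.0, 2.9) -> 3 ± 3, as above
```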

I have also included a portion of Dr. John Taylor’s book as an image. It basically says the same thing concerning the digits in an uncertainty quote. There are many sites that say the same thing about physical measurements and their uncertainty.

Since SD is an indication of uncertainty, your values would all round to a similar number. Don’t be misled by those on here that use the SEM as a measure of uncertainty determined by dividing the SD by a number in the thousands. The uncertainty must tell one how accurately an actual physical measurement has been done and not how close a sample mean is to the population mean (SEM). An uncertainty far, far below the resolution of the actual measuring device is an indicator that one does not understand metrology or how physical measurements are made.

Following the NIST TN 1900 will not lead one astray when evaluating the expanded uncertainty of a sample of measurements.

Taylor stating uncertainty.gif
Mike McMillan
Reply to  Richard Greene
March 11, 2023 1:58 pm

First Problem
How can one be sure that raw “unadjusted” numbers are really raw and unadjusted?

You can’t, because they aren’t.

In the summer of 2009 I was going to compare the raw numbers with the homogenized numbers to see how much warming Hansen et al were creating. I downloaded all the raw charts from Iowa, Illinois, and Wisconsin. Then I got distracted for a few months; when I started downloading raw charts again, I remembered I had already done that. Just to check, I compared the newly downloaded charts with the previous raw charts. They were different.

Bad luck for them. I accidentally caught them [doing something not good].

Here are my blink charts for the 3 states, all sites.

https://www.rockyhigh66.org/stuff/USHCN_revisions_wisconsin.htm

https://www.rockyhigh66.org/stuff/USHCN_revisions.htm

https://www.rockyhigh66.org/stuff/USHCN_revisions_iowa.htm

Nick Stokes
Reply to  Mike McMillan
March 11, 2023 3:30 pm

Those are not raw USHCN data. They are numbers as reprocessed by GIStemp.

Mike McMillan
Reply to  Nick Stokes
March 12, 2023 4:26 pm

Sure they’re raw data. Both charts were labelled as raw data, with nary a mention of Mr GIStemp making them even better. Not long after these went viral (136,000 international downloads before I took the counter down), they had to relabel their artwork Version 2.

https://chiefio.wordpress.com/2010/01/15/ushcn-vs-ushcn-version-2-more-induced-warmth/

comment image

Nick Stokes
Reply to  Mike McMillan
March 12, 2023 8:03 pm

The labels on those charts are obviously not from GISS, but have been added for the blink.

Walter
Reply to  Richard Greene
March 11, 2023 2:54 pm

Richard,

You can look at individual USCRN stations and compare the data with other weather stations nearby. I have for my area, and there is a huge difference between official measurements taken at the airport or at a NWS office and the USCRN station. I mean HUGE difference.

Jeff Alberts
Reply to  Walter
March 11, 2023 10:50 pm

“Nearby” is meaningless. Just a few miles can make a major difference. Averaging all this stuff into a single number is just plain nuts.

Reply to  Jeff Alberts
March 12, 2023 6:17 am

Yep!

You use the data you have or you don’t use it at all. You don’t just create data because it makes the data distribution look more accurate. That is a statistician’s trick, not a physical scientist’s trick.

bdgwx
Reply to  Richard Greene
March 11, 2023 5:36 pm

Richard Greene said: “I do trust Roy Spencer and John Christy for UAH data”

Year / Version / Effect / Description / Citation

Adjustment 1: 1992 : A : unknown effect : simple bias correction : Spencer & Christy 1992

Adjustment 2: 1994 : B : -0.03 C/decade : linear diurnal drift : Christy et al. 1995

Adjustment 3: 1997 : C : +0.03 C/decade : removal of residual annual cycle related to hot target variations : Christy et al. 1998

Adjustment 4: 1998 : D : +0.10 C/decade : orbital decay : Christy et al. 2000

Adjustment 5: 1998 : D : -0.07 C/decade : removal of dependence on time variations of hot target temperature : Christy et al. 2000

Adjustment 6: 2003 : 5.0 : +0.008 C/decade : non-linear diurnal drift : Christy et al. 2003

Adjustment 7: 2004 : 5.1 : -0.004 C/decade : data criteria acceptance : Karl et al. 2006 

Adjustment 8: 2005 : 5.2 : +0.035 C/decade : diurnal drift : Spencer et al. 2006

Adjustment 9: 2017 : 6.0 : -0.03 C/decade : new method : Spencer et al. 2017 [open]

That is 0.307 C/decade worth of adjustments with a net of +0.039 C/decade and that does not include the unknown magnitude of adjustments in the inaugural version A.

Pay particular attention to their infilling strategy.

Next, each Tbl was binned into the appropriate 2.5° grid square according to the latitudes and longitudes provided with the raw data by NESDIS. This was done on a daily basis for each satellite separately and for ascending and descending satellite passes separately. At the end of the day, a simple (one over distance) interpolation was performed to empty 2.5° grid squares for ascending and descending grid fields separately with the nearest nonzero grid data, in the east and west directions out to a maximum distance of 15 grids. These interpolated ascending and descending Tbl fields were then averaged together to provide a single daily Tbl field for each satellite. At the end of the month, a time interpolation was performed (±2 days) to any remaining empty grid squares. The daily fields were then averaged together to produce monthly grid fields.

Fifteen grid cells of 2.5° of longitude each span about 4175 km at the equator. Compare this to GISTEMP, which only interpolates to a maximum of 1200 km. GISTEMP does not perform any temporal interpolation.
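
A quick sanity check on that distance figure (using Earth’s equatorial radius):

```python
import math

# 2.5 degrees of longitude at the equator, equatorial radius ~6378 km
km_per_degree = 2 * math.pi * 6378 / 360     # ~111.3 km per degree
print(15 * 2.5 * km_per_degree)              # ~4174 km, matching the ~4175 km figure
```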

My point is this. An awful lot of people put their full faith in 2 guys who perform some of the most (if not the most) aggressive homogenization, adjustment, and infilling procedures of any global average temperature dataset yet believe that everyone else is a liar and a fraud who is participating in a grand conspiracy to fake temperature data. Strange isn’t it?

Last edited 20 days ago by bdgwx
Jeff Alberts
Reply to  bdgwx
March 11, 2023 10:51 pm

I don’t pay attention to anyone who creates a global average. It’s complete nonsense.

Henry Pool
Reply to  bdgwx
March 12, 2023 4:10 am

It is either the man, the method or the machine. In this case I am inclined to think that it might be more machine related. I don’t know how they calibrate the probes. No doubt they are being attacked by the most energetic SW.

Richard Greene
Reply to  bdgwx
March 12, 2023 4:58 am

All UAH revisions are clearly documented, so even you can tell us what was done.

No financial incentive for Christy and Spencer to exaggerate global warming; their work is done voluntarily, without payment.

Those facts are all important to me.

“An awful lot of people put their full faith in 2 guys who perform some of the most (if not the most) aggressive homogenization, adjustment, and infilling procedures of any global average temperature dataset yet believe that everyone else is a liar and a fraud who is participating in a grand conspiracy to fake temperature data. Strange isn’t it?”

I believe the satellite methodology has the POTENTIAL for a reasonably accurate global average, mainly because of much less infilling than surface averages. Also, I have more trust in two private sector scientists with NO government funding than in government bureaucrat scientists, especially after the past few years of consistent government lying and censorship, such as the lying and censorship about January 6, Covid and Covid vaccines. Anyone who still trusts those bureaucrats after that is a fool. Based on your comment, you are such a fool.

In the long run, the historical temperature trend does not matter much, as long as it is increasing, as it was from 1975 to 2015.

That is because the predictions of FUTURE global warming are barely related to any past trend of global warming, even the cherry-picked 1975 to 2015 period.

It’s not like manmade CO2 emissions began in 1975 and ended in 2015. CO2 emissions came mainly in the past 83 years, from 1940 to 2023. But the global cooling from 1940 to 1975 is usually ignored, especially after being “revised away”. And the flat temperature trend since 2015 gets no mass media attention.

Last edited 19 days ago by Richard Greene
Reply to  Richard Greene
March 12, 2023 6:22 am

UAH is *not* a global average temperature. I suspect Christy and Spencer will tell you the same thing if actually pinned down. It is a “METRIC”. It’s like measuring snow depth with an uncalibrated stick. You can tell if the snow depth is going up, down, or sideways by putting marks on the stick but you don’t really know what the actual depth values are.

bdgwx
Reply to  Willis Eschenbach
March 12, 2023 12:10 pm

WE said: “See Dr. Roy on the subject here.”

Keep in mind that the [Christy et al. 2018] methodology does two interesting things.

First…they use IGRA. Note what IGRA says about their own dataset.

IGRA can be used as input to air pollution models, for studies of the detailed vertical structure of the troposphere and lower stratosphere, for assessing the atmospheric conditions during a particular meteorological event, and for many other analyses and operational applications. NCEI scientists have applied a comprehensive set of quality control procedures to the data to remove gross errors. However, the data may still include jumps and other discontinuities caused by changes in instrumentation, observing practice, or station location. Users studying long-term trends may prefer to use the NOAA Radiosonde Atmospheric Temperature Products for Assessing Climate (RATPAC) or one of the available non-NOAA IGRA-derived, homogeneity-adjusted radiosonde datasets.

Second…they adjust IGRA to match the satellite data.

When a significant shift in the difference-time-series is detected (by the simple statistical test of the difference of two segments of 24-months in length on either side of the potential shift) we then adjust the radiosonde to match the satellite at that shift point.

WE said: “The UAH MSU is calibrated and validated against radiosonde balloon data”

It is true that UAH is calibrated and validated against radiosonde data to some extent. In fact, they use the data to bias correct satellite observations. But the context here is of spot measurements only. [Spencer et al. 1990]

Here is the comparison with 3 radiosonde datasets in terms of the trend.

comment image

Reply to  Willis Eschenbach
March 12, 2023 4:52 pm

It is *still* just a sample from a huge population and there is no guarantee that each orbit’s readings are the same distribution as the population. Different locations and different times for each measurement. It is no more capable of determining a “true value” for the globe than anything else. It is a *metric* just like using a stick to measure snow depth. That stick *will* tell you if the snow depth has gone up, down, or sideways but it simply can’t give you a “true value” for the snow depth. It really can’t even tell you *why* the snow depth changed, just that it did. Same with UAH.

Tom Abbott
Reply to  Richard Greene
March 12, 2023 11:35 am

“All UAH revisions are clearly documented, so even you can tell us what was done.”

That’s right. Nobody associated with the UAH satellite data is trying to hide anything. It’s all out in the open for everyone to see.

bdgwx
Reply to  Tom Abbott
March 12, 2023 12:43 pm

TA said: “That’s right. Nobody associated with the UAH satellite data is trying to hide anything.”

Except of course the source code and materials required to replicate their work.

bdgwx
Reply to  Richard Greene
March 12, 2023 11:51 am

RG said: “All UAH revisions are clearly documented, so even you can tell us what was done.”

They publish their methods in academic journals like all the others, but that’s it. They do not publish the source code and materials to replicate their work. Contrast this with GISS, which publishes everything you need to replicate their work in as little as 30 minutes.

Bellman
Reply to  Richard Greene
March 12, 2023 1:47 pm

No financial incentive for Christy and Spencer to exaggerate global warming; their work is done voluntarily, without payment.

I don’t think that’s true. According to their report

Neither Christy nor Spencer receives any research support or funding from oil, coal or industrial companies or organizations, or from any private or special interest groups. All of their climate research funding comes from federal and state grants or contracts.

https://www.nsstc.uah.edu/climate/2023/february/GTR_202302FEB_1.pdf

But, regardless, the logic that voluntary workers are less likely to “manipulate” data than paid workers is questionable. The usual argument is that all these producers of data sets are manipulating them in order to prove a point, not because they are being paid to do it. If money were the only motivation, it would be easy for other companies to offer more money to get the results they want.

Tom Abbott
Reply to  bdgwx
March 12, 2023 11:33 am

“My point is this. An awful lot of people put their full faith in 2 guys who perform some of the most (if not the most) aggressive homogenization, adjustment, and infilling procedures of any global average temperature dataset yet believe that everyone else is a liar and a fraud who is participating in a grand conspiracy to fake temperature data. Strange isn’t it?”

Not really. The Weather Balloon data correlates with the UAH data about 97 percent, so those two guys must be doing something right.

bdgwx
Reply to  Tom Abbott
March 12, 2023 12:42 pm

TA said: “The Weather Balloon data correlates with the UAH data about 97 percent”

Sure. It correlates pretty well when you adjust radiosonde data, which the creators warn against using for climatic research, to match the satellite data like what [Christy et al. 2018] did.

Here is a comparison with three radiosonde datasets without adjusting first. Notice that the match isn’t so great.

comment image

Nick Stokes
Reply to  Richard Greene
March 11, 2023 7:23 pm

“Fourth Problem:
There are many estimates included in the so called raw numbers, called infilling. Tony Heller, at his website, claims the estimated (infilled) data are a large percentage of USHCN data.”

Again, total confusion here. Raw data does not undergo spatial infilling. Heller is calculating the percentage of adjusted stations that have been infilled, basically because they were missing data.

But, absurdly, Heller has never acknowledged that NOAA replaced USHCN with nClimDiv in March 2014. Since then they have never posted a NOAA-calculated USHCN average for the USA. Heller’s simple solution was to calculate it himself, badly, and then castigate NOAA. So when, in the post you linked, he posts this graph (my red), well, you can see the problem:

comment image

USHCN kept a fixed set of stations since it began in 1987. They were mostly staffed by volunteers, and over time, fewer were reporting. The fixed set was a good idea while it lasted, but by 2014 the dropouts had reached a level where they needed something new. But Heller kept going.

Last edited 20 days ago by Nick Stokes
Richard Greene
Reply to  Nick Stokes
March 11, 2023 8:45 pm

The percentages were already high BEFORE nClimDiv.

Nick Stokes
Reply to  Richard Greene
March 11, 2023 8:50 pm

Yes, maybe the change was overdue. But it happened.

Tom Abbott
Reply to  Richard Greene
March 12, 2023 11:09 am

“For example, the mid-1930s included the hottest US year for a long time. Hotter than the initial 1998 number. Then there were arbitrary adjustments and 1998 became warmer than any year in the 1930s. It that arbitrary adjustment also included in raw “unadjusted” data?”

Yes, at one time, James Hansen said 1934 was 0.5C warmer than 1998, and he showed this U.S. temperature chart as verification.

Hansen 1999:

comment image

Then when the temperatures failed to continue to get warmer after 1998, while CO2 amounts in the atmosphere continued to climb, Hansen apparently felt the need to adjust the U.S. temperature profile so that 1934 was no longer warmer than 1998. If 1934 was warmer than 1998, then that would mean that the temperatures have been cooling since 1934, not climbing, and Hansen and the other climate change alarmists did not want that to be the case, so they bastardized the temperature chart to promote 1998 and demote 1934.

And about temperature databases, somewhere in the Climategate emails is a note from a colleague of James Hansen where the author (I forget his name) told Hansen that the author’s independent data verified that 1934 was 0.5C warmer than 1998.

Now that Hansen has bastardized his temperature record, and no longer shows 1934 to be warmer than 1998, I wonder what his colleague, who did show 1934 as warmer, thinks of Hansen’s changing of the temperature record?

Any temperature chart you see today that does not show the Early Twentieth Century as being just as warm as today is Science Fiction. The raw, unmodified data does not support any other temperature profile.

Tom Abbott
Reply to  Tom Abbott
March 15, 2023 2:51 am

No comments from the Peanut Gallery?

Rud Istvan
March 11, 2023 11:22 am

Fig. 6 is a BIG problem for AGW alarmists. Not only is there no observational cause for alarm, there is no observational evidence for AGW. AGW was always long on theory (models) and short on observational support. AGW ignores natural variation (assuming A) to its peril.

We know models are wrong in several basic ways. All but one in CMIP6 produce a tropical troposphere hotspot that does not observationally exist. All but two produce an ECS significantly higher than observational energy budget methods, by about a factor of 2. Sea level rise did not accelerate as modeled. Despite theoretical polar amplification, summer Arctic sea ice did not disappear as modeled.

Richard Greene
Reply to  Rud Istvan
March 11, 2023 12:07 pm

“Fig. 6 is a BIG problem for AGW alarmists. Not only is there no observational cause for alarm, there is no observational evidence for AGW”

False
The effect of greenhouse warming should mainly be seen in the TMIN numbers, rising faster than the TMAX numbers, and that is exactly what is shown on figure 3 and figure 6.

The average temperature is the net result of ALL local, regional and global causes of climate change.

The average temperature does not have to match the expected warming pattern of increased CO2. But it happens to match in the US in the two charts above.

Greenhouse warming is EXPECTED to be mainly TMIN in the six coldest months of the year.

Last edited 20 days ago by Richard Greene
aussiecol
Reply to  Richard Greene
March 11, 2023 1:30 pm

That is not how it is portrayed by the mass hysteria media though, is it? We tirelessly keep hearing about heatwaves and record daily temperatures being the hottest ever. But Willis clearly shows that’s all bullshit.

Richard Greene
Reply to  aussiecol
March 11, 2023 8:48 pm

It was so hot in Australia yesterday that a woman whose range was broken had her bald husband stand outside in the noon sun for a half hour. When his scalp warmed up enough, she fried an egg on his head. It was in all the Australian newspapers.

aussiecol
Reply to  Richard Greene
March 12, 2023 4:00 am

LOL. He obviously was not a Tasmanian, who reputedly have pointed heads.

TheFinalNail
Reply to  Willis Eschenbach
March 12, 2023 12:49 pm

W

I think it was a joke.

TFN

Reply to  Richard Greene
March 12, 2023 6:26 am

Istvan: Fig. 6 is a BIG problem for AGW alarmists. Not only is there no observational cause for alarm, there is no observational evidence for AGW

Greene: False
The effect of greenhouse warming should mainly be seen in the TMIN numbers, rising faster than the TMAX numbers, and that is exactly what is shown on figure 3 and figure 6.

At least part of what Rud said is true. There is no observational cause for alarm.

Nor is Tmin going up any kind of confirmation of AGW. Tmin could be going up for any number of reasons. AGW could be a “part” of it or none of it. The models certainly can’t distinguish.

Tom Abbott
Reply to  Tim Gorman
March 12, 2023 11:41 am

“Nor is Tmin going up any kind of confirmation of AGW. Tmin could be going up for any number of reasons. AGW could be a “part” of it or none of it. The models certainly can’t distinguish.”

Good point.

Tom Abbott
Reply to  Richard Greene
March 12, 2023 11:39 am

“Greenhouse warming is EXPECTED to be mainly TMIN in the six coldest months of the year.”

So how much does CO2 increase TMIN?

Tom Abbott
Reply to  Tom Abbott
March 15, 2023 2:52 am

Answer: Nobody knows.

Solomon Green
March 11, 2023 11:24 am

Thanks for the very interesting look at the raw data and the fascinating maps indicating the warming and cooling of the maxima and minima. What struck me at first glance was how the coastal stations appeared to show far more warming than cooling. (more red dots than white). Any guesses as to why?

It might also be interesting if Willis could produce similar histograms using only those stations for which data is available for more than 30, or even 40 years, to see if they generate similar trends.

RickWill
Reply to  Solomon Green
March 11, 2023 2:04 pm

What struck me at first glance was how the coastal stations appeared to show far more warming than cooling.

The oceans and deep lakes in the Northern Hemisphere are warming and retain their heat through winter. Winter heat advection from ocean to land is increasing. So the regions most influenced by the oceans (and large lakes) are warming the most.

A consequence of the increased winter advection is increasing snowfall. Snowfall records will be a feature of weather reporting for the next 9000 years. The warming of the NH oceans has only been occurring for about 500 years.

The single place and time that shows the most warming on the entire globe is the Greenland plateau in January. It has warmed from -30C to -20C in the past 70 years.

Stuart Nachman
Reply to  RickWill
March 11, 2023 4:53 pm

As an oceanside resident in the mid-Atlantic region, I notice that the ocean has a moderating effect on whatever temperature exists as little as a mile inland, making my residence cooler in the summer and warmer in the winter than those living more inland.

Tom Abbott
Reply to  Stuart Nachman
March 12, 2023 11:46 am

I watch the Arctic air as it circles the globe, and I have noticed that when it starts getting close to the UK, the ocean starts moderating the temperatures and the UK barely gets touched by the cold air because it has warmed by the time it gets there.

Reply to  RickWill
March 12, 2023 6:28 am

Population density is growing more in the coastal regions causing more UHI?

Clyde Spencer
Reply to  Solomon Green
March 11, 2023 3:54 pm

That struck me too. My first guess would be warming surface water.

Gary Pearse
March 11, 2023 11:32 am

The eastern half of the country seems to be cooling and the western half warming

Richard Greene
Reply to  Gary Pearse
March 11, 2023 1:19 pm

I hope the left coasters don’t move East

rah
Reply to  Richard Greene
March 11, 2023 1:24 pm

They already have been.

michael hart
March 11, 2023 11:52 am

“In my youth, I spent a couple of winter nights sleeping on a piece of cardboard on the street in New York, with newspapers wrapped around my legs under my pants for warmth.”

I can empathize.
The coldest life of my youth was spent trekking up the Baltoro glacier on a shoestring budget of food and equipment.
Trying to sleep, having placed every piece of clothing, fabric and equipment except the ice axe underneath me, the ground still sucked heat out of my body.
It was only later that I realised that pitching the tent on flat stone was much worse than pitching it on snow/ice.

Dave Fair
Reply to  michael hart
March 11, 2023 1:42 pm

The coldest I have ever felt (including after years of racing sled dogs in Alaska) was during the monsoon season laying in rice paddy mud for night ambush operations in Vietnam. Just thinking about those nights makes my bones hurt.

Tom Abbott
Reply to  Dave Fair
March 12, 2023 11:53 am

It could get chilly over there. Monsoon, rain, mud, cold.

I was there in 1969 when Vietnam got hit by a typhoon that dropped 22 inches of rain in 24 hours. I don’t remember the name of the typhoon.

I was in a bunker located about 50 feet from a 50-foot-wide-creek and within a matter of hours after it started raining, we had to move to higher ground as that creek turned into a roaring river.

Dave Fair
Reply to  Tom Abbott
March 12, 2023 1:14 pm

Where were you located, Tom?

Tom Abbott
Reply to  Dave Fair
March 15, 2023 2:55 am

I was living right next to a bridge at the east end of the An Khe pass, at the time. Four Americans and 40 South Korean infantry who were guarding the bridge.

Last edited 16 days ago by Tom Abbott
Dave Fair
Reply to  Tom Abbott
March 15, 2023 12:15 pm

My war was different: As part of the 9th Division, we were dealing with daily tides during combat patrols, helicopter assaults and ambush operations in the Mekong Delta.

I forgot to add, I was wounded on a night riverine operation in a converted LST.

Last edited 16 days ago by Dave Fair
Rud Istvan
Reply to  michael hart
March 11, 2023 5:02 pm

Just an old extreme camper. Used to snowshoe up big NE mountains in minus 20 F weather. So, to keep warm:

  1. carry a closed foam cell 3/8 inch bedmat roll, SOP. Insulates from the ground. Goes easy/small strapped on the bottom of my Kelty backpack, and weighs almost nothing.
  2. carry an impervious ‘space blanket’. Light weight, small volume, put over bedmat, shiny side up to reflect body heat up to body,
  3. use a really good goose down ‘mummy’ sleeping bag. They stuff up tight for packing because of the down, and fluff up bigly if you keep them dry.

I thrived in a simple two-pole V tarp shelter (not a tent) at 20 below, the night I had my hot coffee freeze halfway down the old army-issue aluminum canteen cup while sitting before the wood reflector fire. Now, admittedly, I used the snowshoes to dig out a spot in a snow bank below the wind, roofed with fresh-cut evergreen boughs, just in case it began snowing heavily again before setting up my four-corner tarp.

Peta of Newark
Reply to  Rud Istvan
March 11, 2023 6:46 pm

You used the space blanket upside down.

Some home/house insulation products come with ‘shiny sides’ = always a layer of Aluminium foil. Of course **everyone** thinks that the shiny stuff is ‘reflecting heat’

Wrong. The shiny/reflective bit is simply a feature of the material

If you read the instructions that come with those shiny insulation products (the ones that know their job and what they’re talking about) – they will say to install the material with the “Shiny side facing the cold”

Not because the shiny side reflects the ‘cold from getting in’
It is that Aluminium has very low Emissivity
Other shiny substances would not work.

It is why Thermos Flasks haven’t just got a vacuum inside them, but also the shiny Aluminium layer. The vacuum stops conduction and convection while the Aluminium stops radiation.
I have explained this A Trillion Times on here – yet Magical Thinking stops the message getting through every single time.

So the SpaceBlanket works by

  • Trapping warm air (stopping convection)
  • When that warm air (or the object inside the blanket) warms the Aluminium foil layer, the Aluminium does not radiate the heat away (to the extent most other materials would)
  • The foil does not reflect the heat back in – it stops it from getting out. Those are NOT the same things, like the difference between a Tax Credit and a Subsidy

That last point is vital when considering the GHGE and whether the 2nd Law is properly applied,
i.e. when energy has radiated away from any object, it instantly at that time becomes ‘spent energy’ or ‘waste heat’
iow: it has started its path down a thermal gradient

The 2nd Law, Entropy, Carnot, Stefan/Boltzmann and everyday experience tell me, you, anyone, that that energy can NOT return to the object that radiated it away

The GHGE as always explained is total junk.

The only way Earth can get warmer in the presence of a Constant Sun is if its Albedo and/or its Emissivity are reduced
It does not matter what Earth is made of – apart from the Emissivity value of whatever that substance is.
Whatever is absorbed and re-radiated inside Earth is utterly irrelevant.
Emission, absorption and re-radiation are no different than Conduction and the GHGE is not (supposedly) a conduction effect

As it happens and at the temps/pressures of Earth’s atmosphere, CO2 does actually have vanishingly low Emissivity.
But at the concentrations it is at in the atmosphere, the effect will be unmeasurable.

Last edited 20 days ago by Peta of Newark
DWM
Reply to  Peta of Newark
March 11, 2023 9:45 pm

“The 2nd Law, Entropy, Carnot. Stefan/Boltzmann and everyday experience tell me. you, anyone that that energy can NOT return to the object that radiated it away
The GHGE as always explained is total junk.”

That is not true. The 2nd law only requires that the radiated energy from the hot object to the cold object be greater than the radiated energy from the cold object to the hot object. The net flow of energy is from the hot to the cold.

The GHGE lives.

Richard Greene
Reply to  DWM
March 12, 2023 3:56 am

I took a thermodynamics class in the 1970s. It was awful but I passed. I can not understand why so many conservatives pontificate about thermodynamics, which they do not understand, and then use their misinformation to declare there is no greenhouse effect. Which leads to the claim that CO2 does absolutely nothing.

Last edited 19 days ago by Richard Greene
rah
Reply to  Peta of Newark
March 12, 2023 8:07 am

The military emergency “space blankets” issued when I was in, and which we all carried in our “bug out kits”, were shiny on one side and OD green on the other. The obvious intent being that the OD be facing out.

Dave Fair
Reply to  Peta of Newark
March 12, 2023 1:09 pm

Peta, your “The shiny/reflective bit is simply a feature of the material” is not even wrong: The actual radiative science (in practical use since the 1970s) is “The reflective agent on space blankets — usually silver or gold — reflects about 80 percent of our body heat back to us.” [from “How Stuff Works”] It was originally built to reflect sunlight from the space station to keep from overheating. It is used worldwide to reflect body heat back to the person it covers in all sorts of medical and inclement weather uses.

BTW, the shiny side of standard fiberglass batting insulation placed between the studs and rafters faces inward to act as a vapor barrier. It also holds the batting together for ease of transportation and use. When was the last time you built a house?

David Dibbell
March 11, 2023 11:58 am

Thanks, Willis!
Those links to the USHCN sources appear not to be the latest available though. The daily data appears to be through 2015.

I have been using the following links to access both the daily and monthly USHCN files which NOAA keeps up to date.

Daily – these are not adjusted. Each station file contains its entire record.
At this link: https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/

The ghcnd_hcn.tar.gz file contains all the 1,218 USHCN station files of daily data.

Or the individual files can be found here. https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/hcn/

And the monthly files are here.
https://www.ncei.noaa.gov/pub/data/ushcn/v2.5/

The monthly files by station are separate for “raw”, “tob” (time of observation adjusted) and “FLs.52j” (after pairwise homogenization) tmax, tmin, tavg. And raw and FLs.52j for precip.

Finally, about the adjustments. Your comment is, “This also makes it very hard to estimate the effect of the adjustments.” Yes, but the USHCN monthly files can be analyzed by station and in aggregate to directly see the effect over time of the tob and FLs.52j adjustments as applied to a particular month’s raw values. At this link are plots for the monthly tavg values for December, from 1895 through 2021, for the mean of all USHCN stations reporting a value.

https://www.dropbox.com/sh/23ao8mh3b4j4rpg/AAB6Gx5DEtOrHEEfKXcVaF5ba?dl=0

Again, much appreciation for this post and for all your work.

Last edited 20 days ago by David Dibbell
JCM
March 11, 2023 12:27 pm

Heat transport and storage dynamics have almost nothing to do with radiation in the turbulent boundary layer.

It’s all about tangible heat, net latent flux, and material properties (specific heat or volumetric heat capacity). These are very local factors, subject to very localized variability.

This is most evident at night using near surface thermometers.

Untitled.png
JCM
Reply to  JCM
March 11, 2023 12:40 pm

Note that net latent flux (QE here) is up and out of the turbulent boundary layer. The magnitude of the upward daytime arrow is larger than the nighttime downward arrow. The heat released in cloud condensation aloft exceeds the heat released in condensation of ground dew and frost.

QH does not have such behavior (sensible heat).

JCM
Reply to  JCM
March 11, 2023 1:14 pm

a better illustration

Untitled.png
Chris Hanley
March 11, 2023 12:40 pm

According to the US National Institute of Standards and Technology: “To measure the temperature, thermometers have historically used the fact that liquids such as mercury and alcohol expand when heated. These thermometers are reasonably accurate, to within a degree or two” (How Do You Measure Air Temperature Accurately).
+/-1C or +/-1F seems the best accuracy and precision that can be assumed from the historical record, even given the most diligent observers.
I’m not sure that, in the case of weather stations, the large number of measurements and so-called ‘adjustments’ improves the data quality enough to narrow the confidence level from +/-1 (C or F).

Last edited 20 days ago by Chris Hanley
Jim Gorman
Reply to  Chris Hanley
March 12, 2023 6:46 am

They don’t. Those are the uncertainty intervals of each individual measurement. Averaging different things, i.e., one-time readings of a passing temperature, provides no distribution that can be used to lower the uncertainty of individual readings.

In addition, the biggest part of the uncertainty interval has to do with the resolution (precision) of an LIG or MMTS station. If the temperature is recorded in integer fashion, averaging integers can not and does not increase the resolution available. Adding decimal digits by averaging is functionally creating new information out of thin air. It is why significant digit rules were created. Those rules prevent altering the resolution of the original measurements.

bdgwx
Reply to  Chris Hanley
March 12, 2023 11:45 am

±1 C is a good estimate for the uncertainty of older observations. Observations using modern instrumentation are probably closer to ±0.5 C. So for a monthly average at a station the uncertainty would be about 1/sqrt(60) = 0.13 C and 0.5/sqrt(60) = 0.06 C respectively for the uncorrelated case [JCGM 100:2008]. Adjustments only remove error arising from systematic effects. They do not do anything to improve the uncertainty arising from random effects.

Last edited 19 days ago by bdgwx
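
A quick check of that arithmetic, under the stated uncorrelated assumption (the ±1 C and ±0.5 C figures and the n = 60 readings per month are taken from the comment above):

```python
import math

def monthly_mean_uncertainty(per_reading_u, n=60):
    # Uncorrelated random errors average down as u / sqrt(n) [JCGM 100:2008]
    return per_reading_u / math.sqrt(n)

print(monthly_mean_uncertainty(1.0))  # ~0.13 C (older instruments)
print(monthly_mean_uncertainty(0.5))  # ~0.06 C (modern instruments)
```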
Reply to  bdgwx
March 13, 2023 4:04 am

You confuse variation of the stated values of the data with the measurement uncertainty of the data. x/sqrt(y) is the SEM. It’s a measure of the variation in the measurement values: it’s how closely your estimate of the average approaches the population average. The measurement uncertainty is totally different. If the population average is not accurate, the uncertainty simply can *not* be less than the uncertainty of the measurements. It usually would be the root-sum-square of the individual measurement uncertainties, and certainly no smaller than the inherent uncertainty of the individual measurements, i.e. +/- 0.5C.

+/- 1 and +/- 0.5 are *NOT* the variation in the measurement data. They are not appropriate measures to use in deciding how close you are to the population average.

If your ruler is off by an inch then it doesn’t matter how many measurements you take, the average will be off by an inch. No amount of averaging can lessen that. You can take all samples you want, it won’t help.
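
A toy simulation of that point, with invented numbers: averaging shrinks the random scatter but leaves the systematic offset untouched.

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 20.0
bias = 1.0            # the "ruler off by an inch": a systematic offset
noise_sd = 0.5        # random per-reading error

for n in (10, 1_000, 100_000):
    readings = true_value + bias + rng.normal(0, noise_sd, n)
    print(n, readings.mean() - true_value)  # converges to the bias, never to zero
```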

+/- 1.0 and +/- 0.5 are measurement uncertainty and have two components, random error and systematic bias (the ruler being off by an inch). You simply have no idea of the size of each component which means you can’t just ignore the measurement uncertainty. And, once again, the SEM is *not* measurement uncertainty.

If you would include these words in every post you make on the subject maybe it would sink in. “Variation in the stated values of the measurements is not the measurement uncertainty”.

Last edited 18 days ago by Tim Gorman
bigoilbob
Reply to  Tim Gorman
March 13, 2023 7:28 am

“If your ruler is off by an inch”

Ah, the usual conflation of random and systemic error. Yes, if you have an unknown, constant, systemic error in every one of kazillions of temp measurements over the decades, then the temp trend would be – oops – unaffected. But what you are (perhaps unintentionally, due to your Dan Kahan System 2 bias), claiming is even more unlikely. You are claiming that when these systemic errors start, they are increasing temps by large fractions of a degC, and then, as the decades go by, those errors are smoothly reduced to nada, and finally, are then smoothly replaced by errors that decrease that apparent temp, also by large fractions of a degC. All with nada for technical backup. Sorry Maxwell Smart – I find that hard to believe…

karlomonte
Reply to  bigoilbob
March 13, 2023 7:50 am

blob the clown weighs in with a hand-waved blob word salad.

Reply to  Willis Eschenbach
March 13, 2023 12:53 pm

Willis,

km is correct. bigoilbob just threw out a word salad with no actual refutation contained in it.

b-b won’t accept that a measurement of 2.5C +/- 0.5C and a second measurement of 2.6C +/- 0.5C doesn’t automatically mean a positive trend.

Like so many in climate science he always wants to ignore that +/- 0.5C uncertainty.

In TN1900, Possolo made two very restrictive assumptions in order to allow the variation of a partial set of Tmax values from the same station to define the uncertainty of the monthly average Tmax. The first assumption is that there is no systematic bias in the measurements. The second assumption is that the measurement uncertainty is random, Gaussian and therefore cancels. Therefore the measurements of the temps become the stated values only.

That is what b-b *always* uses as unstated assumptions. If you don’t use these assumptions then you can’t claim that the stated values define a specific trend because you *can’t* know that due to the measurement uncertainties.

karlomonte
Reply to  Willis Eschenbach
March 13, 2023 6:18 pm

You could also just ban me for my bad attitude toward the trendologists instead of going for the mime.

And it is quite pointless trying to argue with these persons, but go ahead, have fun.

Last edited 18 days ago by karlomonte
Reply to  bigoilbob
March 13, 2023 12:41 pm

“Yes, if you have an unknown, constant, systemic error in every one of kazillions of temp measurements over the decades, then the temp trend would be – oops – unaffected.”

Malarky!

  1. Systematic BIAS is not constant. In fact it can vary from positive to negative in the same station over time as different components dominate. And it *is* bias, not error! You only display your misunderstanding of measurement uncertainty by calling a systematic bias an “error”.
  2. Since the systematic bias can change, the trends *can* be affected.
  3. If the trend line lies within the measurement uncertainty interval, then how do you know what the trend actually is? Measurements are given as ‘stated value +/- measurement uncertainty’. As usual, you want to assume the stated value is 100% accurate and can define the trend. It’s the common tactic of a statistician – just ignore the hard part, the measurement uncertainty.

You are claiming that when these systemic errors start, they are increasing temps by large fractions of a degC”

I’m claiming no such thing! Stop putting words in my mouth. You do *NOT* understand measurement uncertainty at all! The *actual* value of the measurement done in the field is always unknown, every single time. You can’t tell if the temps are increasing or decreasing or are stagnant as long as they remain within the uncertainty interval.

How do you know that 2.5C +/- 0.5C and 2.6C +/- 0.5C define a positive trend?

The stated values certainly do if the uncertainty is ignored as you want to do.

The first temp can range from 2C to 3C. The second temp can range from 2.1C to 3.1C. 3C for the first measurement to 2.1C for the second defines a NEGATIVE trend. How do you know that isn’t the actual trend?
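
A minimal illustration of that range argument, using the two readings from the comment:

```python
# Two readings with +/-0.5 C uncertainty: the sign of the trend is undetermined.
lo1, hi1 = 2.5 - 0.5, 2.5 + 0.5
lo2, hi2 = 2.6 - 0.5, 2.6 + 0.5

max_trend = hi2 - lo1   # +1.1 C: both errors stretch the slope upward
min_trend = lo2 - hi1   # -0.9 C: both errors stretch it downward
print(min_trend, max_trend)
```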

Jim Gorman
Reply to  bdgwx
March 13, 2023 1:52 pm

The problem is that Tmax and Tmin are highly correlated. A simple arithmetic mean of the two values is not a transform that can remove that correlation. When you start with correlated data, the data IS NOT independent, and it stays that way regardless of how much you average it. Your use of 60 data points for the SEM indicates that you are using correlated data.

This is one reason that that Tmax and in should be evaluated separately.

Correlated data invalidates any statistical calculation requiring independent values, such as your reduction in “error”. Neither the Law of Large Numbers nor the Central Limit Theorem can be used in the fashion you are using them when data is correlated.

Lastly you are calculating the SEM which defines how well the sample mean represents the population mean. It is not a measurement uncertainty that is propagated. Nor is it a proper use of significant digit rules. You can not declare a calculation to have more resolution than what the original measurement contained.

Following NIST TN 1900 Example 2, one should divide the SD of the sample by the square root of the number of data points, then expand that value to a 95% confidence level. You will find that far exceeds the values you are arriving at. That is one reason that NIST declared measurement uncertainty as negligible.

bdgwx
March 11, 2023 12:58 pm

Of relevance to this article are the following publications.

Vose et al. 2003

Hubbard & Lin 2006

Menne & Williams 2009

Hausfather et al. 2016

Note that nClimDiv supersedes USHCN. USHCN is considered a legacy product, but is still updated daily and made available to the public for historical purposes. The data files can be found here. The PHA source code is available here.

bdgwx
March 11, 2023 1:35 pm

“This also makes it very hard to estimate the effect of the adjustments.”

The way to measure how much the adjustments matter is to download the FLs.52j and raw files. You’ll have to roll your own gridding, infilling, and averaging code though. You could do a simpler non-grid analysis, but that comes at the cost of overweighting urban areas.

There’s a description of the method for the TOBS adjustment here.

That is for version 2. Version 2.5 uses PHA.

“To avoid all of that uncertainty, I’ve used the raw unadjusted data.”

That’s fine, but your analysis will be contaminated with the time-of-observation change bias, station relocation bias, station commissioning/decommissioning bias, instrument/shelter change bias, etc. The biases known to have the largest impact are discussed in [Vose et al. 2003] and [Hubbard & Lin 2006]. Make sure you cross-reference the citations. For example, the TOB bias was known back in the late 1800s, with a long history of discussion and mitigation strategies appearing in the literature.

Are these adjustments all valid? Unknown.

Validating PHA can be done in a variety of ways. See [Menne & Williams 2009] [Williams et al. 2012] [Venema et al. 2012] and [Hausfather et al. 2016] for details. The Hausfather et al. 2016 approach is compelling because it uses the overlap period with the USCRN network.

Last edited 20 days ago by bdgwx
Frank from NoVA
Reply to  bdgwx
March 11, 2023 4:43 pm

‘The Hausfather et al. 2016 approach is compelling because it uses the overlap period with the USCRN network.’

I bet it is ‘compelling’, given that the goal seems to be to use homogenization to pollute data from relatively pristine rural stations with data from urban stations that has been hopelessly corrupted by UHI effects.

At the end of the day, the only way to infer any change in climate from temperature records is to look at individual t_min and t_max records from rural stations, wherein any change in location, instrument, methodology, etc., means that the old record has ended and a new record has begun. Only this allows one to say if temperature for a specific location during a specific period has gone up, down or stayed the same.

While this certainly puts a damper on calculating the Earth’s surface temperature from ‘pre-industrial’ times, the simple truth is we just don’t have the data and tampering with the sparse data we do have is simply dishonest.

bdgwx
Reply to  Frank from NoVA
March 11, 2023 5:05 pm

USCRN is not homogenized or adjusted. It does not “pollute data from relatively pristine rural stations with data from urban stations” or is “hopelessly corrupted by UHI effects.”

Frank from NoVA
Reply to  bdgwx
March 11, 2023 6:01 pm

That’s great! Hopefully in 30 years, or so, we’ll have sufficient data to infer individual temperature trends at each of these supposedly pristine locations.

Richard M
Reply to  bdgwx
March 12, 2023 6:05 am

No such thing as “relatively pristine rural stations”. As Dr Spencer has clearly shown, all areas in the US have been growing.

Reply to  Richard M
March 13, 2023 4:09 am

Wind, inversions, pressure fronts (related to wind), and land use can all spread UHI over vast areas – even to the supposedly “relative pristine rural stations”.

Jeff Alberts
Reply to  bdgwx
March 11, 2023 10:57 pm

“You’ll have to roll your own gridding, infilling, and averaging code though.”

And that’s where it all goes wrong. Averaging alone, of disparate stations, is a major no-no. And infilling is just making shit up.

Reply to  Jeff Alberts
March 13, 2023 4:12 am

Infilling and averaging only serve to falsely make the data distribution more peaked around the average than it truly is. It makes the standard deviation of the distribution artificially smaller.

If you have 1000 measurements with a standard deviation of σ1 and you add 1000 more data points equal to the average of the original 1000 measurements, what does σ2 become?
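
For what it’s worth, that question has a closed-form answer; here is a quick numerical check (assuming the added points sit exactly at the sample mean):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(10, 2, 1000)                            # 1000 real measurements, sd ~ s1
padded = np.concatenate([x, np.full(1000, x.mean())])  # plus 1000 copies of the mean

print(x.std(ddof=1), padded.std(ddof=1), x.std(ddof=1) / np.sqrt(2))
# s2 ~ s1/sqrt(2): the squared deviations are unchanged, but n doubles.
```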

Reply to  Willis Eschenbach
March 13, 2023 12:26 pm

The operative words are “Averaging alone, of disparate stations”.

If you are adding data points that you have *not* measured then you are screwing around with the distribution. If the data you have is not fit for purpose then adding “guesses” in order to create more data points won’t help. It just hides the problems the data has.

RickWill
March 11, 2023 1:36 pm

Next, I considered the trends of the minimum and maximum temperatures. I purposely did not consider the mean (average) trend, for a simple reason.

Finally realised the relevance of looking at temperature range rather than anomaly.

Anomalies remove most of the information in a data set as you have now discovered.

Reply to  RickWill
March 13, 2023 4:15 am

Averaging removes even more data since it hides the actual variance of the temperature profile. You can have two different locations with the exact same “average” but widely different temperature profiles with different variances – meaning different climates. Creating other averages with those original averages just hides even more of the variance. Every time you do an average you lose data.

dixonstalbert
March 11, 2023 1:43 pm

I seem to recall the BEST temperature dataset showed the same thing; most stations showed ‘global warming’ but 20% or so did not. “Clearly, “global” warming isn’t.”

Tom Abbott
Reply to  dixonstalbert
March 12, 2023 11:59 am

BEST is Science Fiction, too.

Any temperature chart that does not show the Early Twentieth Century as being just as warm as today is Science Fiction. BEST does not show the Early Twentieth Century as being just as warm as today, therefore . . .

Bob
March 11, 2023 1:52 pm

Excellent Willis. Raw data with explanations of which sites might be compromised due to poor siting or urban growth is far more meaningful and believable than data that has been adjusted for whatever reason. Even with the problems we know exist I don’t see anything to be concerned about. Yet more proof that we are being lied to and cheated by those we should be able to trust.

Clyde Spencer
March 11, 2023 2:45 pm

We experience the daily maximum and minimum temperatures, the warmest and coldest times of the day. But nobody ever experiences an average temperature.

I’m not a fan of average (mean) temperatures and even less so of mid-range values. However, I don’t think that your claim can be logically supported. If someone stays in the same location where they experienced the daily low, they will also experience the mid-range value as the day heats up.

Clyde Spencer
Reply to  Willis Eschenbach
March 11, 2023 4:04 pm

Even with the advantage of a thermometer in hand, one cannot know that they have ‘experienced’ the daily max or min until some time after it has passed. Without a thermometer, I doubt that most people can properly estimate the temperature to +/- 5 deg C except at the freezing point. While someone may not know when the daily mid-range temperature occurred, they will have ‘experienced’ it.

bdgwx
Reply to  Willis Eschenbach
March 11, 2023 5:15 pm

But that problem exists for Tmin and Tmax as well. Neither Tmin nor Tmax are the instantaneous low and high values for the day. They aren’t even instantaneous values. They are themselves averages albeit over a short period of time. That means people don’t experience those either.

Last edited 20 days ago by bdgwx
Frank from NoVA
Reply to  bdgwx
March 11, 2023 6:17 pm

I get it! Since thermometers don’t provide instantaneous readouts, we shouldn’t have an issue with averaging T_min and T_max, which I guess also means we shouldn’t have an issue with grafting modern temperature records on to bristlecone tree ring reconstructions if that helps the rubes come to the right conclusions.

Richard Greene
Reply to  bdgwx
March 11, 2023 8:55 pm

TMIN and TMAX would change from changes in measurement instruments. It would be possible now to have a TMIN or TMAX for a 5 second period during a day. I bet the older equipment could not do that.

bdgwx
Reply to  Richard Greene
March 12, 2023 11:40 am

It is possible for a modern instrument to have a response time of 5 seconds. They just aren’t typically deployed to ASOS stations. And you are correct, LiGs cannot respond as fast as modern instrumentation.

Richard Greene
Reply to  Willis Eschenbach
March 12, 2023 4:04 am

Is it possible for modern equipment to record the warmest and coldest ONE SECOND of a day?

Would the older thermometers have the ability to respond that fast, with the same level of precision? I doubt it.

I am thinking about a jet plane moving past a thermometer at an airport, with the heat from the jet engine exhaust hitting the thermometer for a moment.

Last edited 19 days ago by Richard Greene
Reply to  Richard Greene
March 13, 2023 4:31 am

Even the sensors used today have thermal inertia. They simply can’t respond instantaneously. Are they faster than LIG devices? Probably. How much faster actually depends on the entire measurement device, not just on the sensor itself.

The air moving through the measurement device will be conditioned in some manner by the entire device. If the material in the device’s airflow is cooler or hotter than the air, the temperature of the air will be changed in some manner. How much depends on several factors including changes in the air flow due to obstructions such as dirt and other things, e.g. snow or ice blocking the inlet.

It’s why measurement uncertainty in the readings is such an important consideration. But climate science never seems to understand that simple fact.

bdgwx
Reply to  Willis Eschenbach
March 12, 2023 11:33 am

LiGs are not instantaneous either. [Burt & Podesta 2020]

Reply to  Willis Eschenbach
March 13, 2023 4:24 am

“And yes, they do measure the ‘instantaneous low and high values for the day’.”

Respectfully, they simply can’t. The fluid in the thermometer has thermal inertia, meaning it can’t respond instantaneously. If the temperature peaks and then starts to fall before the fluid can respond, the fluid will never catch the actual instantaneous peak. It will stop responding when the falling temperature matches where the fluid temperature is at the time.

In reality it probably doesn’t matter very much. The measurement uncertainties associated with the instrument will be wider than the difference between the instantaneous peak and the actual reading of the fluid. It’s why the measurement uncertainty should be propagated throughout all of the temperature data bases and should be considered in any “average” reached using the temperatures – but for some reason in climate science measurement uncertainty is always ignored, it seems to always be assumed to be zero!

Reply to  Willis Eschenbach
March 13, 2023 12:23 pm

Willis,

It is *NOT* meaningless trivia. The thermal inertia *is* part of the measurement uncertainty. It doesn’t just apply at Tmax and Tmin but for all measured temperatures. It’s why in calibration lab processes using water baths the amount of time the temperature sensor has to remain in the bath is usually specified.

I know in climate science it is always assumed that measurement uncertainty is random, Gaussian, and cancels but that simply isn’t true in practice. It is *not* just noise. It can’t just be dismissed as meaningless.

Phil.
Reply to  Willis Eschenbach
March 15, 2023 12:54 pm

Yes, I operated my high school’s weather station for about 5 years and reset the floats every lunchtime.

Jim Gorman
Reply to  bdgwx
March 12, 2023 8:40 am

Thermometers have hysteresis, so they do not respond instantaneously. It is one of the items that goes into the NOAA/NWS specs for uncertainty. If I remember correctly, LIGs have something like a 30-second period before they can fully respond to a step change. If the temperature changes within that period, the LIG will never show the instantaneous maximum or minimum temperature.

Phil.
Reply to  Jim Gorman
March 15, 2023 12:45 pm

The response time of an LIG is about 30 sec; this is the time it takes to reach 63% of a step change. If you know the response time, it is possible to calculate what the actual temperature history was from the measured temperature trajectory.
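A minimal sketch of that inversion, assuming a pure first-order sensor, for which T_true = T_meas + tau * dT_meas/dt. The function name is mine, for illustration only:

    import numpy as np

    def reconstruct_true_temp(T_meas, dt, tau=30.0):
        # Invert a first-order lag: T_true = T_meas + tau * dT_meas/dt
        dTdt = np.gradient(np.asarray(T_meas, dtype=float), dt)
        return T_meas + tau * dTdt

On clean, noise-free data this inversion is nearly exact; on real data the derivative term amplifies noise, so the measured series would need smoothing first.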

bdgwx
Reply to  Willis Eschenbach
March 11, 2023 5:13 pm

Willis, Tmin and Tmax are themselves averages, at least for modern digital instrumentation. For example, Tmax = (T0+T10+T20+T30+T40+T50) / 6, and similarly for Tmin. The difference between Tmin/Tmax and Tavg is that the former averages 6 values whereas the latter averages 12 values from two subsets of 6 values each. See the ASOS User Guide for more details.
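In code, the scheme bdgwx describes might look like the sketch below. I am assuming six 10-second subsamples per reported one-minute value (consult the ASOS User Guide for the actual sampling intervals), and the function name is illustrative:

    import numpy as np

    def daily_extremes(samples):
        # samples: one day of 10-second readings (24 * 60 * 6 = 8640 values).
        # Each reported value is the mean of six consecutive subsamples;
        # Tmax and Tmin are the largest and smallest of those means.
        means = np.asarray(samples, dtype=float).reshape(-1, 6).mean(axis=1)
        return means.max(), means.min()

So even the daily “extremes” from a modern station are short averages, not single instantaneous readings.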

Jim Gorman
Reply to  bdgwx
March 12, 2023 8:43 am

You don’t mention the reason for this. It is so that the readings can be adequately correlated with what an LIG thermometer would show; otherwise you could never compare readings from the two. All records would have to be stopped immediately at the changeover (which they should be anyway).

JoeF
Reply to  Willis Eschenbach
March 16, 2023 3:30 am

If anyone is interested, here is a 2016 study (https://etda.libraries.psu.edu/catalog/13504jeb5249) that looked at 215 US airport stations to compare the differences between min-max averaging and hourly averaging (since the daily temperature curve is of course non-symmetrical). They found that the differences varied predictably by season, mostly due to six significant climate variables (cloud cover, precipitation, specific humidity, dew point temperature, snow cover, soil moisture). Generally, the max-min method “overestimates daily average temperature at 134 stations, underestimates it at 76 stations, and has no difference at only 5 stations.” The differences are greatest in summer (in August, the average difference for all 215 stations was +0.54 F). They also found that the shape of the diurnal temperature curve changed over time (from 1980-2010), which is interesting, but maybe expected for airport stations. Here’s the (kriged) map of differences for August (1980-2010); it looks like the min-max method overestimates average temperature by about 0.5 F for me in PHL.

[Attached image: usmapkriged.jpg, kriged map of the August (1980-2010) differences]
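As a quick illustration of the bias the study quantifies, here is a toy Python comparison on a synthetic, asymmetric diurnal curve. The curve shape and all the numbers are invented for illustration:

    import numpy as np

    hours = np.arange(24)
    # Synthetic diurnal cycle: cool most of the day, short mid-afternoon peak
    T = 15.0 + 8.0 * np.exp(-((hours - 15.0) / 3.0) ** 2)

    minmax_avg = (T.max() + T.min()) / 2.0
    hourly_avg = T.mean()
    print(f"(Tmax+Tmin)/2 = {minmax_avg:.2f}, hourly mean = {hourly_avg:.2f}, "
          f"bias = {minmax_avg - hourly_avg:+.2f}")

Because the warm part of this day is narrow, the min-max method lands well above the true (hourly) average, the same direction of bias the study found at most stations.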
Editor
March 11, 2023 2:48 pm

w. ==> If the data you processed was “the raw, unhomogenized, unadjusted daily data files” — why is it also “CDIAC’s most current version of USHCN daily data”?

If the data had been recorded and left unchanged, there would not be versions, of which this is the most current — versions mean it has not remained the same.

Anyone else notice this? Where, if anywhere, are the previous versions? Are they different? How different? Why different?

Richard Greene
Reply to  Kip Hansen
March 11, 2023 9:01 pm

Once again Hansen is Mr. Smarty Pants here.

“most current version” suggests there were many prior versions

This is the best comment here today.

Here’s my theory:
If the raw data supported the coming climate crisis narrative, they would be available to the public without adjustments, revisions, infilling and versions. If there are versions, then there have been “adjustments”.

When raw data are adjusted, they are no longer data.
They are an estimate of what the raw data would have been had they been measured correctly in the first place, or else deliberate science fraud.

David Dibbell
Reply to  Kip Hansen
March 12, 2023 10:58 am

Kip, please see the link below, under “Methods,” for NOAA’s GHCNd, of which the USHCN station daily data is a subset. The daily USHCN station data would have passed through quality checks as described there to be published, but not time-of-observation, pairwise homogenization, or other adjustments for station changes etc. (if I understand this correctly!)

https://www.ncei.noaa.gov/products/land-based-station/global-historical-climatology-network-daily

So yes, it is the case that these are “the raw, unhomogenized, unadjusted daily data files.” It is also true that certain QA checks may have flagged or removed values that fail the tests.

Please also see my comment above about links to the currently maintained USHCN data (and for daily data, also for GHCNd).

Editor
Reply to  David Dibbell
March 12, 2023 1:01 pm

David ==> “So yes, it is the case that these are ‘the raw, unhomogenized, unadjusted daily data files.’”

That is far from the case — the link clearly states the opposite. There is not even a set number of stations: the versions don’t necessarily include the same stations, and some stations are mingled with other stations.

I don’t think you have understood the magnitude of the process.

The “raw, unhomogenized, unadjusted daily data files” would be just that: a predetermined set of stations, with their normal daily reported values, untouched by automatic or human corrections or changes. There must be, of course, some error checking for missing values or “999s”, but these should be flagged.

It is possible to check the RAW data; many stations have it available day by day. Unhomogenized means specifically NOT MINGLED. And, of course, UNADJUSTED means NO adjustments, no matter how seemingly justified.

The data set used by Willis might be the best we have easily available — but it is NOT RAW, it is NOT UNHOMOGENIZED, and it is NOT UNADJUSTED.

David Dibbell
Reply to  Kip Hansen
March 12, 2023 3:31 pm

Kip, I’m not going to get into a semantic debate on this. Just explaining. I believe I understood Willis’ use of those words when he wrote, “These appear to be the raw, unhomogenized, unadjusted daily data files.” I don’t think he is wrong. This means that no time-of-observation adjustment or pairwise homogeneity adjustment has been applied to the daily values contained in the files. Quality processing and flagging have been applied, yes. But not the two key adjustment algorithms that are applied later, as monthly values are computed for the legacy USHCN list of stations (and as GHCNd data are taken into ClimDiv).

The “version” issue does not bother me much, as a managed dataset might indeed have had differences in format, file structure, processing, etc. applied.

All the best to you.

Clyde Spencer
March 11, 2023 2:58 pm

I don’t think that looking at the entire available USHCN provides the whole story. The relationship between Tmax and Tmin has not been consistent.

https://wattsupwiththat.com/2015/08/11/an-analysis-of-best-data-for-the-question-is-earth-warming-or-cooling/

You might get very different results if you just looked at 1982 through 2022.

bdgwx
Reply to  Clyde Spencer
March 11, 2023 7:50 pm

It is interesting to note that aerosols increased significantly post WWII, but then starting around 1980 aerosols began to decline. This could explain the more rapid decline in the diurnal range from 1950-1980 and the increase from 1980 to 2015. Note that an increase in aerosols suppresses Tmax more than Tmin, while a decline augments Tmax more than Tmin, since aerosols modulate the solar input.

Richard Greene
Reply to  bdgwx
March 11, 2023 9:06 pm

From 1975 to 1980, SO2 emissions were rising, but temperature was rising too, not falling as would be expected with rising SO2 emissions.

From 2015 to 2023, SO2 emissions were falling, but the temperature trend was flat, not warming as would be expected from fewer SO2 emissions.

Seems like SO2 is a minor cause of climate change.

Tom Abbott
Reply to  Richard Greene
March 12, 2023 12:05 pm

“Seems like SO2 is a minor cause of climate change.”

No correlation.

n.n
March 11, 2023 3:01 pm

Net-zero climate effect with a good chance of greening.

Wayne Raymond
March 11, 2023 3:14 pm

Willis, your map of the USA showing areas of increasing and decreasing maximum temperatures is interesting. Looking at California on the map, most of the USHCN stations seem to have increasing maximums. I have looked at California stations with long temperature records in less urbanized areas, and found that generally those stations have flat or decreasing maximums, with the exception of some in the desert southeast and the central coast. For the aggregate of 27 weather stations geographically spread over the state with the best siting and long records (90 years or more), the maximum temperatures show an upward trend of about 0.7 F per century, and the minimums an upward trend of about 2.6 F per century. I think once the urban heat island effect is controlled for, the split between rising and falling maximum temperatures in California is close to even, while minimum temperatures have definitely increased statewide.

AndyHce
Reply to  Wayne Raymond
March 11, 2023 4:21 pm

Probably the necessary data isn’t widely available, but it seems to me that a different measure of trend would be more realistic. From some long-ago reading I learned about a concept I think was called heat hours. This was mainly in relation to fruits, which I presumed included annuals such as tomatoes and peppers on the warm side, while the cold side possibly only related to perennials such as trees (many fruits), shrubs (nuts and berries), and vines (berries).

Basically, some minimum number of hours above some particular temperature is needed for proper ripening, and some minimum number of hours of cold below some particular temperature is needed during the dormant stage to assure the next year’s crop.

Some places I’ve lived frequently still have temperatures around 100°F at midnight, meaning the “hours of heat,” however they are defined, are rather numerous. However, some days with maximums at 110°F and above were down to 70°F well before midnight.

Where I am currently is not as cold as some places, but overnight lows were frequently around 20°F. During this winter, my only experience here so far, temperatures were never below freezing by early afternoon, but some days were close to freezing again by 4:30 PM.

My point is that min and max temperatures may well not reflect the “amount” of hot and cold very well, and so may not be a very accurate measure of what an area is really like, or of whether any significant change is taking place.

Tim Gorman
Reply to  AndyHce
March 13, 2023 4:43 am

The temperature profile during the day is approximately a sine wave. The sun’s path over a specific location is sinusoidal, so this makes sense. The temperature profile at night is an exponential decay, which also makes sense.

What you are describing is called degree-days. You can have growing degree-days in agriculture, cooling or heating degree-days used for sizing HVAC requirements in a building, and even soil temperature degree-days.

Degree-days are a much better representation of climate, at least in my opinion. It’s why the ag sector and building engineering sector use degree-days. Both are *very* climate dependent.

The old method of calculating degree-days used the daily midrange value, (Tmax+Tmin)/2, just as climate science still does today. But 20 years or so ago these industries moved to integrating the entire temperature profile to calculate degree-days. That provides much better estimates of the climate at a location. For some reason climate scientists have remained stuck in the 20th century. The big question is why.
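The two methods are easy to compare in a short Python sketch. The base temperature and the synthetic hourly profile below are illustrative assumptions, not values from any particular crop or station:

    import numpy as np

    BASE = 10.0  # degree-day base temperature, C (crop/application specific)

    def gdd_minmax(tmin, tmax, base=BASE):
        # Classic method: midrange of the two daily extremes, floored at zero
        return max((tmin + tmax) / 2.0 - base, 0.0)

    def gdd_integral(hourly, base=BASE):
        # Integration method: accumulate the hourly exceedance above the base
        return np.maximum(np.asarray(hourly) - base, 0.0).sum() / 24.0

    # Synthetic day: cold night, short warm afternoon
    hours = np.arange(24)
    hourly = 8.0 + 10.0 * np.exp(-((hours - 15.0) / 4.0) ** 2)

    print(f"min-max method:  {gdd_minmax(hourly.min(), hourly.max()):.2f} degree-days")
    print(f"integral method: {gdd_integral(hourly):.2f} degree-days")

On a day like this, where the warmth is concentrated in a few afternoon hours, the min-max method credits noticeably more degree-days than the integral does.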

Jim Gorman
Reply to  Tim Gorman
March 13, 2023 2:45 pm

I don’t know if night is the correct time for the decay to start. According to the graph, the decay in air temp starts around 4pm to 5pm.

There is heat storage both in the soil and in the atmosphere. This will produce some hangover before the change really gets going.

John Hultquist
Reply to  Wayne Raymond
March 11, 2023 4:24 pm

I recall Roy Spencer examined CA temperatures a few years ago (6?) and related warmer night-time temperatures to irrigation being used on increasing acreage. Maybe that was reposted on WUWT, but I do not recall. It might be searchable. Not saying urban heat island effects do not contribute.

ni4et
March 11, 2023 3:44 pm

One for Willis to ponder.

On any given night at my simple little AcuRite weather station, the dew point and the temperature will lock together and look as if the low temperature is being limited by the moisture in the air. The temperature drops until it hits the dew point, and then they track together all night while dropping only a couple of degrees C more. It’s hard for me to see how a daily low temperature measurement is meaningful without also knowing what the dew point is. Alternatively, what would the temperature be if it were a function of radiation only, without being held up by the heat released as water vapor condenses to liquid?
I just don’t see how this obvious phenomenon is being considered. I’ve been wondering about it for a while.

My location is in East Tennessee.
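What ni4et describes can be caricatured in a few lines of Python. This is a toy model with made-up coefficients, purely to show the “temperature hugs the dew point” behavior, not a real radiation budget:

    def night_cooling(T0, Td, hours=12, k=0.25):
        # Exponential relaxation toward a cold radiative target, with the
        # cooling sharply braked once air temperature reaches the dew point
        # (condensation releases latent heat). All numbers are illustrative.
        temps = [T0]
        for _ in range(hours):
            rate = k * (temps[-1] - (Td - 2.0))   # pull toward just below Td
            if temps[-1] <= Td:
                rate *= 0.1                        # latent-heat brake
            temps.append(temps[-1] - rate)
        return temps

    print(night_cooling(T0=25.0, Td=15.0))  # fast cooling, then hugs the dew point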

Reply to  ni4et
March 13, 2023 4:44 am

It’s why temperature is such a poor proxy for enthalpy, which is what you are actually describing.

Geoff Sherrington
March 11, 2023 3:48 pm

Willis,
Thank you for this – but – are the red dots actually different to the white dots, or do their uncertainties mostly overlap?
Some modern statistics programs provide easy-to-use analysis of data for quality control, even forensic analysis that looks for dud stuff. The program JMP is an example.
With Tom Berger, I did some analysis of Australian data on WUWT late last year. It would be interesting to see similar work on US data and that on many other countries. I cannot understand its (supposed) absence in IPCC reports, for example.
In science and metrology, it really is basic, accepted procedure to understand the limitations of your numbers before you use them for any serious purpose.
Our Australian work showed parties unable to agree on how uncertainty should be measured, unable to accept that “RAW” data have been fiddled with by unreported means and people, and more. Read it here, the last link in particular.
The middle link drew 839 comments, very large for WUWT, including regulars like bdgwx, Nick Stokes, and Bellman. Some are writing again on this Willis article as if they had never read the following; that is, they appear not to have learned from them.
Geoff S
https://wattsupwiththat.com/2022/08/24/uncertainty-estimates-for-routine-temperature-data-sets/
https://wattsupwiththat.com/2022/09/06/uncertainty-estimates-for-routine-temperature-data-sets-part-two/
https://wattsupwiththat.com/2022/10/14/uncertainty-of-measurement-of-routine-temperatures-part-iii/
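On the red-dot/white-dot question above, one simple first check is whether each station’s least-squares trend is distinguishable from zero. A minimal Python sketch, with an illustrative function name; note it captures only regression scatter and ignores autocorrelation in the residuals as well as instrument and siting uncertainty:

    import numpy as np
    from scipy import stats

    def trend_with_ci(years, temps, alpha=0.05):
        # OLS trend in degrees per century, with a (1 - alpha) confidence interval
        res = stats.linregress(years, temps)
        tcrit = stats.t.ppf(1.0 - alpha / 2.0, len(years) - 2)
        slope = res.slope * 100.0
        half = tcrit * res.stderr * 100.0
        return slope, slope - half, slope + half

A station whose interval straddles zero can’t honestly be colored either red or white.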

Bellman
Reply to  Geoff Sherrington
March 11, 2023 5:52 pm

The middle link drew 839 comments, very large for WUWT, including regulars like bdgwx, Nick Stokes, and Bellman. Some are writing again on this Willis article as if they had never read the following; that is, they appear not to have learned from them.

Huh? I didn’t comment on any of those articles, and all I’ve asked in this one is for clarification on what time period is being used for all these trends.

Geoff Sherrington
Reply to  Bellman
March 11, 2023 10:31 pm

Bellman,
My apologies. You did not post directly, but you were in my mind since bdgwx was mentioned alongside you a few times in the one sentence. Geoff S