NOAA U.S. Contiguous USCRN Temperature Anomaly September 2023 Data Shows No “Climate Emergency”

Guest essay by Larry Hamlin

NOAA has released its September 2023 contiguous U.S. USCRN temperature anomaly data, shown below. The Septembers of 2015, 2019, 2021 and 2022 all recorded USCRN temperature anomalies exceeding the September 2023 value of 1.93 F. This data comes from the state-of-the-art U.S. Climate Reference Network, whose stations are all, by design, properly sited away from heat sources and biases.

NOAA’s September 2023 contiguous U.S. temperature anomaly USCRN data result does not support alarmist propaganda claims of a “climate emergency”.

Additionally, NOAA’s National Time Series 3-month maximum temperatures for the contiguous U.S. for July through September 2023, shown below, do not support climate alarmist media propaganda claims of “record high summer temperatures in 2023”.

NOTE: You can see the USCRN data on the WUWT sidebar. There is an explanatory page to go with it here.

315 Comments
Ron Long
October 12, 2023 10:15 am

Looks like the Greenie Libtards are having panic attacks for nothing. Film at eleven. Wait for it.

barryjo
Reply to  Ron Long
October 12, 2023 10:40 am

Wait? Nope. I have better things to do.

Reply to  Ron Long
October 12, 2023 11:42 am

They will just claim the NOAA data is wrong and treble down

JC
October 12, 2023 10:20 am

Facts are indexed not on the veracity of the truth they represent but on political expediency.

Do the people who are pushing the political movement called climate change care about facts? Only the ones they conjure up.

The rest of us don’t give a wup.

There has been no climate emergency in PA this Summer or Fall. Weather happens and we deal…who gives a wup about the weather. Life is full of all sorts of nastiness from time to time and it keeps coming and there isn’t anything anyone can do about it. So we just need to fight the temptation to whine powerlessly like victims and get on with life.

The greater concern is the damage the climate change political movement is doing globally…. this should be a point of grave concern only for a minute… then move on..

Reply to  JC
October 12, 2023 11:43 am

Let’s say it out, loud and proud in capitals

THERE IS NO CLIMATE EMERGENCY – IT’S ALL A HOAX

JC
Reply to  Energywise
October 12, 2023 1:29 pm

There is something far worse than the Green Regime hoax. It’s the Green Regime and those who are politically and financially leveraging it. We should have a far greater concern for them and not give a wup about their BS. They don’t care about anything but political power and ruling the world.

Reply to  JC
October 12, 2023 1:26 pm

They redefined climate to only be around 30 years so it is always changing.

LT3
October 12, 2023 10:20 am

Ok, on the 2nd plot, what does that mean? It indicates that temperatures did not exceed 86 degrees F for July – September 2023. I do not understand the data if it is saying those are max temps in the contiguous US.

Reply to  LT3
October 12, 2023 10:43 am

The second plot is from NOAA’s USHCN (historic) records, known to be rife with issues, including missing data and urban heat island effects. The USCRN records only go back to 2005.

Apparently, the plotted data are the three month averages for July-September in the USHCN records. The heat in the 1930s was so well-established across the 48 contiguous US states that USHCN records confirm it to be the “hottest decade on record”.

LT3
Reply to  Bob Webster
October 13, 2023 5:42 am

Not many surface datasets go that far back, so I guess it’s the truth until something says otherwise.

Reply to  LT3
October 12, 2023 11:26 am

I believe it is the *average* max temp. Not every place has the same high temp each day.

Reply to  Tim Gorman
October 12, 2023 11:09 pm

Graphs and tables can be requested for (some) cities, any county, any state. They are far from reasonably correct.

LT3
Reply to  AndyHce
October 13, 2023 5:44 am

Yes indeed, I dug into it; I see a lot of edited time segments, such as the ’98 El Nino edited out of many stations. It’s a black box.

LT3
Reply to  AndyHce
October 13, 2023 6:20 am

All in all it looks like a good dataset: you can see the cooling of the ’60s, El Chichon, Pinatubo, the 1915 eruption, etc…

October 12, 2023 10:21 am

Worth noting that the 1930s are still the reigning champion years for US heat records.

Indeed, looking at US states’ all-time record highs, the vast majority (23 of 50 states) were set in the decade of the 1930s, while the second “hottest” decade (by this measure) was 1910-1919 (with 7 all-time highs set). That’s 60% of all-time US state high temperature records set prior to 1940, before the dramatic rise of atmospheric CO2 in the latter part of the 20th century!

But most important is that the computed correlation coefficients between changing atmospheric CO2 and changing global average surface temperatures are so small that no causation is possible. Consequently, any theory that claims climate change is linked to changing atmospheric CO2 is invalid by virtue of its being inconsistent with observation. Same is true when examining the geologic evidence going back 550 million years. Correlation coefficient between changing atmospheric CO2 and changing climate is virtually zero, at 0.10.

No correlation, no causation. No causation, no “climate crisis”. No “climate crisis”, then fossil fuels are the least expensive, most easily obtained and most universally available (24/7) of practical energy sources.

[attached image: Fig-C4D-11-50-state-records.png]
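For readers wanting to check such numbers themselves, a correlation coefficient of this kind is computed along the following lines. This is a minimal sketch only: the arrays are made-up placeholder values, not the actual geologic CO2/temperature records the comment refers to.

```python
import numpy as np

# Placeholder series standing in for CO2 and temperature; NOT the actual
# geologic records discussed in the comment above.
co2 = np.array([280.0, 300.0, 900.0, 1500.0, 4000.0, 2200.0, 600.0])
temp = np.array([14.0, 15.5, 13.8, 16.0, 14.2, 15.1, 13.9])

# Pearson correlation coefficient between the two series
r = np.corrcoef(co2, temp)[0, 1]
print(f"Pearson r = {r:.2f}")
```

A value of r near zero indicates little linear association; whether that holds for any real dataset depends entirely on the data fed in.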
Mikeyj
October 12, 2023 10:43 am

Southern Michigan reports in: We had a great summer; few 90 degree days, just the right amount of rain (not on golf days), and neither our A/C nor our furnace has come on in the last two weeks. Please, more global warming.

mleskovarsocalrrcom
October 12, 2023 10:44 am

I’m still waiting for more people to wake up and realize they’ve been duped by the MSM.

Reply to  mleskovarsocalrrcom
October 12, 2023 11:44 am

But who’s going to inform MSM?

Reply to  Energywise
October 12, 2023 12:09 pm

Well, we keep trying but they won’t or don’t listen.

Reply to  Oldseadog
October 12, 2023 12:55 pm

despite this site having a half billion hits

Reply to  Joseph Zorzin
October 12, 2023 7:06 pm

Despite science, facts, reality and half a billion hits on WUWT they still prefer to believe their fantasies.

wh
October 12, 2023 11:03 am

https://www.theclimatebrink.com/p/the-pause-vs-the-surge

https://www.theclimatebrink.com/p/the-most-accurate-record-of-us-temperatures

These two make it so hard not to question the narrative. They are literally just taking what they can get and running with it. They’re more activists than scientists, in my opinion.

October 12, 2023 11:14 am

It looks to me that the growth from 1900 to 1935 is about the same as from 1995 to 2023. If there is a cycle, we might expect some cooling in the near future.

Reply to  Jim Gorman
October 12, 2023 12:58 pm

The old saying here in New England and probably elsewhere is “if you don’t like the weather- just wait a minute”. It should be revised to: “if you don’t like the climate, just wait a year or so”. What goes up must go down.

Jeff Alberts
Reply to  Joseph Zorzin
October 15, 2023 9:12 am

“if you don’t like the weather- just wait a minute”

I’ve heard that saying just about everywhere I’ve been. And I’ve been to a LOT of states and a couple European countries.

AlanJ
October 12, 2023 11:18 am

The USCRN exhibits a strongly positive slope:

[image: USCRN anomaly series with linear trend line]

Not sure you want to be highlighting it if your goal is to convince people that the planet isn’t warming significantly.

Reply to  AlanJ
October 12, 2023 11:50 am

Analysing graphs isn’t a strong specialism of yours, is it?

AlanJ
Reply to  Energywise
October 12, 2023 11:57 am

Would you care to expand on that sentiment?

Reply to  AlanJ
October 12, 2023 12:50 pm

You’re nothing but an incompetent monkey with a ruler.

Is that expansive enough?

AlanJ
Reply to  bnice2000
October 12, 2023 1:32 pm

Your reply is nice and zesty, which is definitely fun, but somehow it’s even less information-dense than Energywise’s, so you need to try again. What I’m looking for is an earnest articulation of whatever errors I’ve made in my analysis above.

Reply to  AlanJ
October 12, 2023 1:54 pm

I explained it to you elsewhere.

If you are so zero-competence you can’t understand… not my problem.

Why are you so supportive of the lies and deceits from a climate cabal that wants to destroy western society?

That is the real underlying question.

What despicable immorality drives you?

Reply to  AlanJ
October 12, 2023 1:26 pm

Only stupid people like you fail to see why your misuse of the chart exposes your lack of rational thinking.

Reply to  Sunsettommy
October 12, 2023 11:16 pm

Come on! Do you believe the slope isn’t positive or that there is some calculation error or that there is some other kind of error (please specify!)? All sorts of nasty emotional statements do not provide enlightenment or even a tiny bit of insight.

Reply to  AndyHce
October 13, 2023 9:41 am

He has been told many times already. Why should I have to repeat what others say? What is missing on the chart exposes his lack of understanding of how a chart is validated: where is the source for the chart and data?

AlanJ
Reply to  Sunsettommy
October 13, 2023 10:05 am

I made the chart in Excel, the source for the data is quite literally in the post we are all commenting on.

Reply to  AlanJ
October 13, 2023 6:57 pm

SO WHAT!

You still haven’t realised/admitted that the tiny trend comes ONLY from the El Ninos in 2015 and 2020.

When will you recognise that FACT?

Never is my guess, because it proves there has been no human caused warming whatsoever in the US since at least 2005.

Any monkey can use a basic linear trend in Excel.. doesn’t mean they understand what the data is showing.

Reply to  bnice2000
October 14, 2023 5:21 am

The fact that the El Ninos are getting *cooler*, meaning the oceans are releasing less heat, just seems to escape notice by the alarmists. It’s why linear trends of time-based cyclical processes lead to incorrect interpretation of what is going on. If you want to forecast the future then you must understand *all* of the data.
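The pitfall described here — fitting a linear trend to a short window of a cyclical process — is easy to demonstrate with a purely synthetic example: a 60-year cycle with no underlying trend at all, sampled over an 18-year window.

```python
import numpy as np

# A pure 60-"year" cycle with no underlying secular trend.
t = np.arange(18.0)                     # an 18-year observation window
cycle = np.sin(2.0 * np.pi * t / 60.0)  # rising phase of the cycle

slope = np.polyfit(t, cycle, 1)[0]      # OLS slope over the window
print(f"apparent trend: {slope:.4f} per year")
```

The fit reports a positive “trend” even though, over a full cycle, the process trends nowhere; the slope is an artifact of where the window sits in the cycle.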

Mr.
Reply to  AlanJ
October 12, 2023 12:04 pm

Looks like we’re definitely leaving that Little Ice Age in our dust now, thank gaia.

But I bet she / they / them / whatever will bring a Not-So-Little Ice Age upon us again when the mood next takes her / them / they / it / whatever.

wh
Reply to  AlanJ
October 12, 2023 12:05 pm

What’s the 95% confidence interval on those numbers? Also the p-value?

AlanJ
Reply to  wh
October 12, 2023 12:49 pm

The p-value is 0.025 and the standard error of the estimated slope is 0.024.

Reply to  AlanJ
October 12, 2023 12:13 pm

AlanJ,

Why is it that you chose to not present the goodness-of-fit (the R^2 value) of the linear curve in your posted graph?

Might it possibly be (as I suspect) that doing so would reveal that the R^2 value was so low as to admit that the line could statistically have a good probability of having a zero or even negative slope?

Just look at the amplitudes of variations in the data represented by the blue line versus the total rise in slope of the orange line over the total data set . . . something like an 11:1 ratio. That’s a very low signal-to-noise ratio . . . not at all consistent with your claim of a “strong positive slope”.

AlanJ
Reply to  ToldYouSo
October 12, 2023 12:55 pm

The r-squared value only tells you how well the model (linear regression) is doing at predicting the variance in the data. Since we don’t expect our model to predict variance in the dataset, there is no reason to expect a high r-squared value. As you point out, the data are quite noisy. But the calculated trend is significant at the 95% level, which means our result cannot be explained by the null hypothesis, thus there is a warming trend expressed despite the general background noise.
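The statistical point being argued — that a slope can clear a 95% significance test while R² stays small — can be illustrated with purely synthetic numbers (a small trend buried in large noise; these are not USCRN values):

```python
import numpy as np

# Synthetic series: small trend plus large noise (made-up numbers).
rng = np.random.default_rng(0)
t = np.arange(228.0)                          # ~19 years of monthly steps
y = 0.006 * t + rng.normal(0.0, 1.0, t.size)

# Ordinary least squares, computed by hand
tm, ym = t.mean(), y.mean()
sxx = ((t - tm) ** 2).sum()
slope = ((t - tm) * (y - ym)).sum() / sxx
resid = y - (ym + slope * (t - tm))
se = np.sqrt((resid ** 2).sum() / (t.size - 2) / sxx)   # std. error of slope
r2 = 1.0 - (resid ** 2).sum() / ((y - ym) ** 2).sum()

print(f"slope = {slope:.4f}, t-statistic = {slope / se:.1f}, R^2 = {r2:.2f}")
```

With these settings the slope sits several standard errors above zero (well past the ~1.97 cutoff for 95% confidence at these degrees of freedom) while R² remains well under 0.5 — both statements are true of the same fit, which is exactly the distinction in dispute.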

Rud Istvan
Reply to  AlanJ
October 12, 2023 2:02 pm

As there should be. It has been in evidence since the end of the LIA, which I put at 1819, the last Thames Ice Fair. But it is tiny. Less than half of what models predict, and those models are tuned to best hindcast USHCN, not CRN (not a long enough record yet). We know since the original Surface Stations project that HCN is contaminated by local site issues and UHI.

In the big picture, you quibble when basic facts speak volumes:

  1. None of the definitively asserted past climate disasters have occurred—sea level rise did not accelerate, Arctic summer sea ice did not disappear, Glacier National Park still has glaciers.
  2. Almost none of the CMIP5/6 climate models have credibility. They produce a non-existent tropical troposphere hotspot, and an ECS twice observational EBM estimates.
  3. The preferred ‘solution’, renewables, at any meaningful grid penetration are ruinables. They raise costs by 2-3x, and have no non-FF backup solution for intermittency.
  4. India and China won’t play, so nothing US or EU does matters. Only harms their economies while benefitting India and China.
  5. The more warmunists try to disparage/silence such fact based skeptical observations, the worse it goes for them. Lots of good recent examples.
MarkW
Reply to  Rud Istvan
October 12, 2023 3:25 pm

Glacier National Park didn’t have any glaciers during the Holocene Optimum.

Reply to  AlanJ
October 12, 2023 2:08 pm

The trend is explained by the presence and location of the 2015 El Nino bulge.

Why are you so dumb that you cannot see that ?

Look at what the data is actually doing, don’t blindly apply linear trends and regression like a semi-educated monkey.

The fact is that it is no warmer now, than it was in 2005..

… and there was no warming before the 2015 El Nino.

There is absolutely no evidence of any human caused warming.

Reply to  AlanJ
October 12, 2023 5:26 pm

“But the calculated trend is significant at the 95% level . . .”

I see that you totally missed the main point: a calculated trend being statistically significant is not the same thing as it being “a strong positive slope” (your own words).

AlanJ
Reply to  ToldYouSo
October 13, 2023 5:08 am

The trend is about 0.55 degrees F per decade. The trend for the globe is about 0.32 degrees F per decade. I would call that a strongly positive trend.

Reply to  AlanJ
October 13, 2023 6:49 am

“The trend is about 0.55 degrees F per decade” . . . yeah, measured over less than two consecutive decades.

And both NOAA and NASA define climate to be weather averaged over a specified geographical area for a continuous interval of 30 or more years.

Your graph and its L.S. linear trend line are thus seen to represent something other than climate change over the US.

Your implication (in your sentence under the graph you posted) that such shows the planet to be warming is thus quite amusing.

BTW, the USCRN wasn’t completed until 2008, so data reported prior to that really could be biased high or low compared to post-2008 data.

AlanJ
Reply to  ToldYouSo
October 13, 2023 10:14 am

I don’t think the trend of the past two decades reflects only the long term temperature increase in the United States – as you say, climate is typically defined over periods of thirty years or more. I think the 0.55 degrees per decade trend reflects both the underlying long term trend and some short term variability. But the trend is the trend, and the claim is being made that the USCRN indicates no “climate emergency” because, we are told, September wasn’t the hottest on record. My point is simply that the USCRN shows a degree of warming that is considerably higher than other indexes, so it isn’t the best choice for trying to dismiss claims of dangerous warming.

Reply to  AlanJ
October 13, 2023 11:39 am

Since when did anyone other than you and climate alarmists decide/declare that 0.55 deg-F per decade (equivalent to 3 C/century) is “dangerous warming” (your phrase)? And where is the objective, scientific evidence of such?

To the contrary, here is a portion of what the “prestigious” (hah!) IPCC has to say about abrupt and rapid change in climate:
“The central Greenland ice core record (GRIP and GISP2) has a near annual resolution across the entire glacial to Holocene transition, and reveals episodes of very rapid change. The return to the cold conditions of the Younger Dryas from the incipient inter-glacial warming 13,000 years ago took place within a few decades or less (Alley et al., 1993). The warming phase, that took place about 11,500 years ago, at the end of the Younger Dryas was also very abrupt and central Greenland temperatures increased by 7°C or more in a few decades (Johnsen et al., 1992; Grootes et al., 1993; Severinghaus et al., 1998)”
http://www.ipcc.ch/ipccreports/tar/wg1/074.htm
(my bold emphasis added)

Earth withstood the Younger Dryas warming without suffering any tipping point or climate catastrophe, and that warming rate is scientifically estimated to be more than 10 times higher than the 0.55 deg-F per decade rate that you derived from USCRN data and are so concerned over.

AlanJ
Reply to  ToldYouSo
October 13, 2023 12:15 pm

Again, the trend in the USCRN is higher than the trend for the globe for this period. I would call that a “strong” warming trend. If you want to claim that there isn’t a “climate emergency” because global warming isn’t strong, pointing to the dataset that shows one of the strongest warming trends of any index is probably a dumb move.

Is my point clearer for you now? Or should I try to simplify it more?

Reply to  AlanJ
October 13, 2023 5:19 pm

First, I don’t need to “claim” there isn’t a climate emergency. The preponderance of all objective, scientific evidence relevant to climate, including that from paleoclimatology, establishes that there isn’t.

Second, yes, you should try to simplify your “point” further, as that would be of great help to you.

Lastly, speaking of dumb moves, there is an adage for you to consider that is quite apropos here:
“When you find you are digging yourself deeper and deeper into a hole, the first rule is to stop digging.”

AlanJ
Reply to  ToldYouSo
October 14, 2023 6:33 am

Second, yes, you should try to simplify your “point” further

Happy to help you.

Larry Hamlin: The world is not warming quickly, look at the USCRN dataset.

Me: The USCRN dataset shows a stronger rate of warming than other datasets, not a great choice for supporting your argument.

Does that help you at all? I can break it down to a second grade reading level if you need, just let me know.

Reply to  AlanJ
October 14, 2023 10:26 am

Good grief! . . . it looks like instead of you helping me to understand your “point” behind your posting of a USCRN data plot and your straight line linear regression fit to its limited temporal range of data, I need to inform you of your limited understanding of what that data actually represents, and what it does not represent.

1) You present a series of data points from USCRN, but fail to appreciate their limitations with respect to having any meaningful relationship to global warming:
— USCRN data is from land surface stations only, measuring atmospheric temperature in close proximity to the ground elevation (above MSL) at each station
— USCRN has 114 measuring sites in the continental US plus 23 measuring sites in Alaska plus 2 measuring sites in Hawaii, a total of only 139 sites; these stations are not uniformly distributed over the regions they cover nor are all at the same elevation (ref: https://www.ncei.noaa.gov/access/crn/map.html )
— the land surface area of the continental US, Alaska and Hawaii is 3,532,000 square miles, meaning that on average, each USCRN measuring site represents some 25,000 square miles
— the blue line in your posted graph connects successive single USCRN temperature data points that supposedly represent a combined “average” of this widespread collection of near-surface, above-ground atmospheric temperature measurements, and the sampling rate in your graph appears to be approximately monthly, yet USCRN station data is logged at 5-minute intervals (ref: https://www.ncei.noaa.gov/access/crn/qcdatasets.html )
— there are no scientific details provided for the procedure(s) used to distill such lat/long-varying, elevation-varying and temporally-varying collections of data down into a single number claimed to represent the US “average temperature” over any stated interval of time (of course, it’s a fool’s errand to even attempt to do so, but NOAA seems oblivious to this fact)
— the USCRN data in the graph you presented (blue line) does not reflect the summer-winter temperature variation cycle that one would expect, given that all such stations reside in the northern hemisphere of Earth; this implies that some “adjustments” have been made to the raw data to “normalize” it in some manner . . . leading one to question what other “adjustments” NOAA has applied/is applying to USCRN data and the exact scientific justification for doing so.

2) Your repeated statements to the effect of
“The trend in the USCRN is higher than the trend for the globe for this period. I would call that a “strong” warming trend.”
reflect your apparent lack of understanding that:
— the surface of planet Earth is covered mostly (about 71%) by water. More specifically, in Earth’s northern hemisphere the ratio of land surface area compared to water surface area is 0.39:0.61 (in Earth’s SH the ratio is quite different at 0.19:0.81).
— water has a much higher heat capacity than does dry soil (about 20% that of water) or solid rock (ranging from 60–90% that of water, depending on rock composition)
— the combination of larger total surface area and greater specific heat for water compared to dry land means that the thermal “inertia” inherent in NH and SH oceans will always cause them to be slower to respond to heating or cooling power flux forcings than will land surfaces in those hemispheres; thus, it is an “apples-to-oranges”, sophomoric comparison to state that USCRN has a higher, and therefore “stronger”, warming trend than does “the globe” (i.e., Earth) on average.
— then too, “the globe” includes the year-round-ice-covered North and South poles whereas USCRN stations do not extend north of the highest latitude of Alaska, Point Barrow, at 71° 23′ N; inclusion of all that ice in computing a “global average temperature” at any point in time (again fully acknowledging such a calculation to a single temperature value is rather meaningless) does further serve to lessen “global warming” compared to USCRN trending . . . mainly via the thermophysics associated with the heats of fusion and sublimation of water-ice.

So, considering all of the above misinformation or lack of information by you in this subject matter, you should realize how pointless it would be for you to attempt to “break it down to a second grade reading level” for me (your words) . . . but thank you for that kind offer 🙂
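As an aside on the averaging step questioned above: the simplest conceivable reduction of 5-minute station readings to a single monthly mean would look like the sketch below. The values and the station model are entirely synthetic; real USCRN files have their own formats, QC flags, and processing steps — the opacity of which is precisely the commenter’s complaint.

```python
import numpy as np
import pandas as pd

# Synthetic 5-minute "readings" for September 2023: a 20-degree base with a
# daily sinusoidal cycle (288 five-minute steps per day). Illustration only;
# not actual USCRN data or NOAA's actual processing.
idx = pd.date_range("2023-09-01", "2023-09-30 23:55", freq="5min")
temps = pd.Series(
    20.0 + 5.0 * np.sin(2.0 * np.pi * np.arange(idx.size) / 288.0),
    index=idx,
)

monthly = temps.resample("MS").mean()  # collapse ~8,640 readings to one value
print(monthly)
```

Because the synthetic daily cycle averages out over whole days, the monthly figure lands on the 20-degree base — which also illustrates how much information a single monthly number discards.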

AlanJ
Reply to  ToldYouSo
October 14, 2023 11:11 am

You raise some really wonderful points explaining why the USCRN is not suitable for supporting the claim that global warming is not significant (limited coverage area, land-only). So thanks for bolstering my argument so beautifully. I’ll look forward to you explaining this to the rest of the folks in the thread who have been arguing with me.

Reply to  AlanJ
October 14, 2023 12:10 pm

“So thanks for bolstering my argument so beautifully.”

WTF?

AlanJ
Reply to  ToldYouSo
October 14, 2023 12:33 pm

Me: it’s not appropriate or compelling to use the USCRN dataset to make claims about the rate of global warming.

You: No! It’s NOT appropriate or compelling to use USCRN to make claims about the rate of global warming.

I’ll give you whatever time you need to mull that over on your own.

Reply to  AlanJ
October 14, 2023 1:01 pm

“Me: it’s not appropriate or compelling to use the USCRN dataset to make claims about the rate of global warming.”

You posted previously on October 12, 2023 11:18 am:
“The USCRN exhibits a strongly positive slope:
. . . Not sure you want to be highlighting it if your goal is to convince people that the planet isn’t warming significantly.”

So, it was immediately evident that you have difficulty understanding a complex topic, but you now reveal you also have difficulty remembering what you’ve previously stated.

My condolences and sympathy to you.

Obviously, further discourse between us would be pointless. Good luck to you.

AlanJ
Reply to  ToldYouSo
October 14, 2023 1:25 pm

“The USCRN exhibits a strongly positive slope:

. . . Not sure you want to be highlighting it if your goal is to convince people that the planet isn’t warming significantly.”

That statement is completely true – the USCRN exhibits a warming trend that is larger than the trend for the whole globe, expressed by other indexes. It is not appropriate or compelling to use the USCRN dataset to substantiate a claim about the rate of global warming. You seem to agree with me, so I’m happy to end the discussion here with us both on the same page.

Reply to  AlanJ
October 14, 2023 1:57 pm

Tell us what is warming in CRN. Tmax or Tmin? If Tmax, why no state re

Reply to  Jim Gorman
October 14, 2023 1:58 pm

why few state high temp records being set?

AlanJ
Reply to  Jim Gorman
October 14, 2023 2:17 pm

I already answered this question for the other Gorman elsewhere in the thread, but he promptly ignored it. Both T-Max and T-Min are warming in USCRN:

[image: USCRN Tmax and Tmin series with linear trends]

Reply to  AlanJ
October 14, 2023 3:49 pm

Tmax has been decreasing since 2006 according to the graph. Tmax is *not* warming, it is cooling.

Live by the short-term linear regression line then die by the short term linear regression line.

AlanJ
Reply to  Tim Gorman
October 14, 2023 3:59 pm

[image: chart]

Trust yourself, but verify.

Reply to  AlanJ
October 13, 2023 6:59 pm

Again, you REFUSE TO ADMIT that the trend comes ONLY from the El Nino bulge in 2015 and 2020.

You cannot allow yourself to admit that basic fact…

.. because you know it means that there is no human causation for this very slight mathematical warming trend.

And of course, we know it has cooled significantly faster since the 2015 El Nino.

Another fact you ignorantly deny.

Reply to  AlanJ
October 13, 2023 7:28 pm

You really are showing your mathematical ignorance now.

UAH is a global average, hence its range is a lot less than USCRN’s.

The Stdev of UAH data from 2005 is 0.209

The Stdev of USCRN data from 2005 is 1.98

It is totally meaningless to compare the trends.

The normalised trend of USCRN is about 1/6 that of the globe.
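The normalisation step being described — scaling each series’ trend by its own standard deviation before comparing — can be sketched with synthetic stand-ins. These are made-up series chosen only to show the effect, not the actual UAH or USCRN data.

```python
import numpy as np

def ols_slope(y):
    """OLS slope of y against its index."""
    t = np.arange(y.size, dtype=float)
    return np.polyfit(t, y, 1)[0]

rng = np.random.default_rng(1)
n = 216                                                # 18 years of monthly values
t = np.arange(n, dtype=float)
global_like = 0.0015 * t + rng.normal(0.0, 0.1, n)     # small trend, small spread
regional_like = 0.0090 * t + rng.normal(0.0, 2.0, n)   # bigger trend, big spread

for name, y in (("global-like", global_like), ("regional-like", regional_like)):
    print(f"{name}: raw slope {ols_slope(y):+.4f}, "
          f"slope/stdev {ols_slope(y) / y.std():+.5f}")
```

The higher-variance series shows the larger raw slope yet the smaller normalised slope — the direction of the comparison the commenter is drawing. Whether normalising by standard deviation is the right way to compare a regional and a global index is itself part of the dispute.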

Reply to  bnice2000
October 14, 2023 4:22 am

A standard deviation of 1.98 and we’re faced with warnings of 0.55 per decade? Ho, ho, ho!

I hope AlanJ recognizes that means ±1.98.

Reply to  Jim Gorman
October 14, 2023 5:38 am

It’s not obvious that AlanJ, as an MSc in the physical sciences, understands what std dev actually *means* when it comes to measurements.

AlanJ
Reply to  Tim Gorman
October 14, 2023 7:38 am

Oh, I know what it means, but it isn’t clear that any of you three do. The standard deviation is larger for a regional mean than for the global mean because the variance is larger. The trend is still statistically significant at the 95% level in USCRN.

Reply to  AlanJ
October 14, 2023 12:56 pm

If all the regional means have a large variance, then how do you make the leap that the global mean will have a reduced variance?

You do realize the uncertainty adds even if by RSS, right? You are probably wanting to divide the uncertainty to make it an average as many others do too. Tell us how you average the tropics with Antarctica and get a small variance!

AlanJ
Reply to  Jim Gorman
October 14, 2023 1:32 pm

Can you provide a simple statement describing the relationship between the standard deviation and sample size?

Reply to  AlanJ
October 14, 2023 1:55 pm

I can, but before I do you need to inform us what the size of a sample of daily Tmax and Tmin is.

AlanJ
Reply to  Jim Gorman
October 14, 2023 2:25 pm

If I take the minimum and maximum temperature for a single day at a single station and use those two measurements to estimate the average daily temperature, the sample size is two. Tavg would be given as (Tmax+Tmin)/2.

Reply to  AlanJ
October 14, 2023 3:13 pm

Do you have a copy of the Guide to the Uncertainty of Measurement? I think not.

You have two samples of size 1. You can not calculate a mean or variance for either temperature. Therefore, you are assuming they have no uncertainty.

If you want the variance of the random variable you have just created, you need to calculate the variance of the two data points in your random variable. What is that variance?

Read this from the GUM.

“C.3.2 Variance
The variance of the arithmetic mean or average of the observations, rather than the variance of the individual observations, is the proper measure of the uncertainty of a measurement result. The variance of a variable z should be carefully distinguished from the variance of the mean z̄.”

THE VARIANCE OF THE ARITHMETIC MEAN OR AVERAGE OF THE OBSERVATIONS IS THE PROPER MEASURE OF THE UNCERTAINTY OF A MEASUREMENT RESULT.

I’ll ask again, what is the variance of Tavg? That is the uncertainty of that so-called measurement. How does that combine with other Tavg “measurements”?

Fundamentally Tavg is unfit for determining anything in climate!
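The quantity being asked for can be made concrete with a minimal sketch, treating one day’s Tmin and Tmax (made-up example values in deg F) as two observations and computing the mean, the variance of the observations, and the variance of the mean (GUM C.3.2):

```python
import statistics

# One day's made-up readings, treated as two observations of daily temperature.
tmin, tmax = 55.0, 85.0
obs = [tmin, tmax]

tavg = statistics.mean(obs)         # (Tmin + Tmax) / 2 = 70.0
var_obs = statistics.variance(obs)  # sum of squared deviations / (n - 1) = 450.0
var_mean = var_obs / len(obs)       # variance of the mean = 225.0

print(tavg, var_obs, var_mean)
```

With a 30-degree Tmin/Tmax spread, the variance of the two observations is 450 and the variance of the mean is 225 (a standard deviation of 15 degrees on Tavg) — numbers of the size that motivate the commenter’s objection, though whether this is the right uncertainty model for Tavg is exactly what the two sides dispute.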

AlanJ
Reply to  Jim Gorman
October 14, 2023 3:51 pm

You have two samples of size 1. You can not calculate a mean or variance for either temperature. Therefore, you are assuming they have no uncertainty.

Well, no, you have two samples of a variable (daily temperature). You can calculate the mean and variance of those two samples. Tmin and Tmax are just temperatures at the weather station.

THE VARIANCE OF THE ARITHMETIC MEAN OR AVERAGE OF THE OBSERVATIONS IS THE PROPER MEASURE OF THE UNCERTAINTY OF A MEASUREMENT RESULT.

I’m not sure why this needs to be in all caps as if it’s a disagreement. If the thing you’re measuring is the mean, the variance of the mean is the appropriate measure of uncertainty (you want to know how the mean varies as you draw different samples from the population).

But of course you’ve neglected to answer my question, after I generously obliged by answering yours.

Reply to  AlanJ
October 14, 2023 4:09 pm

Well, no, you have two samples of a variable (daily temperature). You can calculate the mean and variance of those two samples. Tmin and Tmax are just temperatures at the weather station

Tavg as a measurement doesn’t even meet the repeatability conditions for measurements. From the GUM.

B.2.15 repeatability (of results of measurements)

Closeness of the agreement between the results of successive measurements of the same measurand carried out under the same conditions of measurement

NOTE 1 These conditions are called repeatability conditions. 

NOTE 2 Repeatability conditions include:

—   The same measurement procedure

—   The same observer

—   The same measuring instrument, used under the same conditions

—   The same location

—   Repetition over a short period of time.

NOTE 3 Repeatability may be expressed quantitatively in terms of the dispersion characteristics of the results.

You obviously feel you are a statistics expert but you also obviously know nothing about measurements.

Tell us what the variance of those two temperatures is and what it means from the standpoint of uncertainty in measurement.

AlanJ
Reply to  Jim Gorman
October 14, 2023 4:26 pm

For a daily temp readings at the same surface station:

—   The same measurement procedure Check
—   The same observer Check
—   The same measuring instrument, used under the same conditions Check
—   The same location Check
—   Repetition over a short period of time. Check

All looks good.

Tell us what the variance of those two temperatures is and what it means from the standpoint of uncertainty in measurement.

The thing we are measuring is Tavg, and we have taken two measurements out of the infinite population of possible measurements. The variance in the mean is the expected variance we would achieve by drawing numerous different samples from this same population to determine the mean. For the daily average, all else being equal, the largest variance we could achieve would be to take a measurement at the hottest part of the day and a measurement at the coldest part of the day, as both values achieve the greatest possible deviation from the arithmetic mean. All other possible samples are between them.

Reply to  AlanJ
October 15, 2023 5:48 am

The thing we are measuring is Tavg

You are NOT measuring Tavg, you are CALCULATING IT. If you wish to declare Tavg as the measurand, then you must meet all the requirements of creating a measurement.

You can not reduce the measurement uncertainty by averaging with other daily Tavg’s since they each stand on their own. Their uncertainties add either directly or via Root Sum Square if you can justify it.

You forgot to address the following GUM definition of measurement uncertainty by using the variance.

C.2.20 variance

A measure of dispersion, which is the sum of the squared deviations of observations from their average divided by one less than the number of observations 

C.3.2 Variance

The variance of the arithmetic mean or average of the observations, rather than the variance of the individual observations, is the proper measure of the uncertainty of a measurement result. The variance of a variable z should be carefully distinguished from the variance of the mean z̄.

4.2.2 The individual observations qk differ in value because of random variations in the influence quantities, or random effects (see 3.2.2). The experimental variance of the observations, which estimates the variance σ2 of the probability distribution of q, is given by

s²(qk) = (1/(n−1)) · Σ (qj − q̄)²   (sum over j = 1 to n)

This estimate of variance and its positive square root s(qk), termed the experimental standard deviation (B.2.17), characterize the variability of the observed values qk, or more specifically, their dispersion about their mean q̅.

Why don’t you want to address the measurement uncertainty by using the variance? It isn’t hard to do, there are numerous online sites that can do it quickly.

Tell everyone here what you get for the variance and experimental standard deviation of a Tavg measurement. Then discuss how such a large measurement uncertainty affects where the measurand’s value might actually be.
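
To make this concrete, here is a small Python sketch applying the GUM 4.2.2 experimental-variance formula to a two-reading (Tmax, Tmin) sample. The temperatures are hypothetical, not from any station record:

```python
import math

def experimental_variance(obs):
    """GUM 4.2.2: s^2(qk) = (1/(n-1)) * sum over j of (qj - qbar)^2."""
    n = len(obs)
    qbar = sum(obs) / n
    return sum((q - qbar) ** 2 for q in obs) / (n - 1)

# Hypothetical daily readings in deg F (illustrative only)
tmax, tmin = 85.0, 55.0
tavg = (tmax + tmin) / 2                  # 70.0
s2 = experimental_variance([tmax, tmin])  # ((+15)^2 + (-15)^2) / 1 = 450.0
s = math.sqrt(s2)                         # experimental standard deviation, ~21.2
print(tavg, s2, round(s, 1))
```

With only two observations, n − 1 = 1, so the experimental variance is just the sum of the two squared deviations, which is why the resulting standard deviation is so large relative to the mean.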

AlanJ
Reply to  Jim Gorman
October 15, 2023 8:02 pm

The variance of the arithmetic mean or average of the observations, rather than the variance of the individual observations, is the proper measure of the uncertainty of a measurement result.

This is from the section of the GUM you quoted above. Reflect on what the implications are and get back to us when you have it figured out. I’ll be happy to continue the discussion from there.

Reply to  AlanJ
October 16, 2023 6:06 am

You didn’t do as I asked.

“””””Tell everyone here what you get for the variance and experimental standard deviation of a Tavg measurement. Then discuss how such a large measurement uncertainty affects where the measurand’s value might actually be.”””””

Try using this standard deviation calculator. It only requires two entries, Tmax and Tmin.

https://www.calculator.net/standard-deviation-calculator.html

AlanJ
Reply to  Jim Gorman
October 16, 2023 9:48 am

You didn’t do as I asked first 😉 So you owe me. How does the standard deviation of the sampling distribution change as the number of samples grows?

Reply to  AlanJ
October 16, 2023 11:07 am

If all samples are IID with the same μ and σ, then the sampling distribution will not vary. If the samples have varying mean values, then sampling error occurs and the distribution of sample means may widen as individual sample means fall further from the population mean; that is, the SEM will grow.

Basically, the only real way to reduce the SEM is to increase the sample size (NOT the number of samples) such that a better and better representation of the population distribution is obtained in each sample.
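
A quick Python sketch of this point (population SD and sample sizes are hypothetical): the empirical SD of the sample means tracks σ/√n, so it is the sample size n, not the number of samples drawn, that shrinks the SEM:

```python
import random
import statistics

random.seed(0)
SIGMA = 10.0  # hypothetical population standard deviation

def sd_of_sample_means(n, trials=20000):
    """Empirical SD of the mean of samples of size n drawn from N(0, SIGMA)."""
    means = [statistics.fmean(random.gauss(0, SIGMA) for _ in range(n))
             for _ in range(trials)]
    return statistics.pstdev(means)

# The empirical SD of sample means falls as SIGMA / sqrt(n)
for n in (1, 4, 16, 64):
    print(n, round(sd_of_sample_means(n), 2), round(SIGMA / n ** 0.5, 2))
```

Increasing `trials` (the number of samples) only sharpens the estimate of the SEM; it does not reduce it.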

Now your turn.

Define the population of a single Tavg calculation?

What is the experimental standard variance of a single daily Tavg calculation?

What is the experimental standard deviation of a single daily Tavg calculation?

What does this uncertainty tell you about the value of a single daily Tavg calculation?

How do you propagate this single daily uncertainty of Tavg into a monthly Tavg calculation?

Reply to  Jim Gorman
October 16, 2023 11:48 am

You won’t get an answer. He doesn’t even understand that daytime temps have a different variance than nighttime temps. Thus the total variance of the combination is the sum of the variances of the daytime temps and nighttime temps. Since variance is a measure of the uncertainty of the mid-range value, the uncertainty of the mid-range value is larger than either separate component.

When you combine MR1 (mid-range value for day 1) with all the other MR values you add all their variances. At the end of the month the variance of the total becomes large, meaning its uncertainty is large.

Since climate science can’t stand having their conclusions questioned they just ignore the growth of uncertainty at each stage of averaging. Thus the meme: “all measurement uncertainty is random, Gaussian, and cancels”. Meaning they fool themselves into thinking they don’t have to consider variance and its growth.

AlanJ
Reply to  Tim Gorman
October 16, 2023 1:56 pm

all measurement uncertainty is random

Random measurement error tends to cancel when averaged (if it’s random then the + is equally as likely as the – so you’re adding together values that are equally too high and too low), and the more measurements you include the more the random element cancels. That is all that is said about averaging.

Not one single climate scientist believes that there is no systematic error in temperature measurements, else there would not be an enormous body of literature discussing the subject. Your stubborn inability to accept that no one is disagreeing with you on this point seems to be the very heart of your persistent confusion on this topic.
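
The claim about averaging random error can be illustrated with a small simulation (the true value, bias, and noise level are hypothetical): the zero-mean random component shrinks as more readings are averaged, while a constant systematic offset survives intact:

```python
import random
import statistics

random.seed(1)
TRUE_VALUE, BIAS, NOISE_SD = 20.0, 0.5, 0.3  # all hypothetical

def reading():
    """One simulated measurement: truth + systematic offset + random error."""
    return TRUE_VALUE + BIAS + random.gauss(0, NOISE_SD)

# Averaging shrinks the random component, but the constant bias survives
for n in (10, 1000, 100000):
    avg_error = statistics.fmean(reading() for _ in range(n)) - TRUE_VALUE
    print(n, round(avg_error, 3))
```

The printed error converges toward BIAS, not toward zero, which is exactly the distinction between the random and systematic components being argued over here.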

Reply to  AlanJ
October 16, 2023 4:28 pm

Don’t know where you got your information on “error” but it is out of date, far out of date.

From the GUM:

“””””0.2 The concept of uncertainty as a quantifiable attribute is relatively new in the history of measurement, although error and error analysis have long been a part of the practice of measurement science or metrology. It is now widely recognized that, when all of the known or suspected components of error have been evaluated and the appropriate corrections have been applied, there still remains an uncertainty about the correctness of the stated result, that is, a doubt about how well the result of the measurement represents the value of the quantity being measured.”””””

“””””F.1.1.1 Uncertainties determined from repeated observations are often contrasted with those evaluated by other means as being “objective”, “statistically rigorous”, etc. That incorrectly implies that they can be evaluated merely by the application of statistical formulae to the observations and that their evaluation does not require the application of some judgement. “””””

You do realize that each and every measurement, even of the exact same thing, has uncertainty. It is an interval surrounding each and every measurement being made. Those intervals do not cancel in their entirety and sometimes not at all. They are not “errors” that have a consistent value that cancel if the distribution is Gaussian.

There are differing influence quantities in every measurement of different experiments. That is one reason the GUM expects an experimental standard deviation to be used for measurement uncertainty.

From the GUM:

“””””B.2.10 influence quantity: quantity that is not the measurand but that affects the result of the measurement

EXAMPLE 1. Temperature of a micrometer used to measure length.

EXAMPLE 2. Frequency in the measurement of the amplitude of an alternating electric potential difference.

EXAMPLE 3. Bilirubin concentration in the measurement of haemoglobin concentration in a sample of human blood plasma.

[VIM:1993, definition 2.7]

Guide Comment: The definition of influence quantity is understood to include values associated with measurement standards, reference materials, and reference data upon which the result of a measurement may depend, as well as phenomena such as short-term measuring instrument fluctuations and quantities such as ambient temperature, barometric pressure and humidity.”””””

AlanJ
Reply to  Jim Gorman
October 17, 2023 9:00 am

You do realize that each and every measurement, even of the exact same thing, has uncertainty. It is an interval surrounding each and every measurement being made. Those intervals do not cancel in their entirety and sometimes not at all. They are not “errors” that have a consistent value that cancel if the distribution is Gaussian. 

Again, you hold the misguided belief that people disagree with you on the existence of non-random elements of error in temperature measurements. Nobody does. Until you dispel yourself of this incorrect notion, you are going to continue to struggle in these discussions.

Reply to  AlanJ
October 17, 2023 12:03 pm

You keep using the word and concept of ERROR. That no longer exists. Read this again.

“””””although error and error analysis have long been a part of the practice of measurement science or metrology. It is now widely recognized that, when all of the known or suspected components of error have been evaluated and the appropriate corrections have been applied, there still remains an uncertainty about the correctness of the stated result, that is, a doubt about how well the result of the measurement represents the value of the quantity being measured.”””””

Uncertainty IS NOT ERROR. Until you learn this you are stuck in the mud.

AlanJ
Reply to  Jim Gorman
October 17, 2023 12:27 pm

The concept of measurement error still exists, I am not sure what you are smoking but it must be wild.

Reply to  AlanJ
October 17, 2023 1:14 pm

I have given a quote from the GUM. I’ll do it one more time.

From the GUM:

“””””0.2 The concept of uncertainty as a quantifiable attribute is relatively new in the history of measurement, although error and error analysis have long been a part of the practice of measurement science or metrology. It is now widely recognized that, when all of the known or suspected components of error have been evaluated and the appropriate corrections have been applied, there still remains an uncertainty about the correctness of the stated result, that is, a doubt about how well the result of the measurement represents the value of the quantity being measured.”””””

You are obviously uneducated about uncertainty. It replaces “error” like it or not. Uncertainty adds, and cancels only under a very specific set of assumptions.

If you want to be considered a physical scientist or engineer, you should obtain copies if all the JCGM’s documents from the Internet and educate yourself.

When you begin to provide references from these internationally accepted references that support your assertions, people may give them some credence. Until then, no go!

AlanJ
Reply to  Jim Gorman
October 17, 2023 2:02 pm

The quotation you provide does not indicate that the concept of error has been abandoned, it merely indicates that a broader concept of uncertainty has more recently been introduced. Measurement error is a component of the uncertainty, which consists of both random and systematic elements.

Reply to  AlanJ
October 17, 2023 3:33 pm

Do you just make this crap up as you go along?

Reply to  karlomonte
October 17, 2023 5:12 pm

I swear this is Bellman under another name. His knowledge of sampling is just as bad. One sample means can give you a sample means distribution – Ho, Ho, Ho.

Reply to  AlanJ
October 17, 2023 4:32 pm

Ok, how about this reference from the GUM.

Annex D

“True” value, error, and uncertainty

The term true value (B.2.3) has traditionally been used in publications on uncertainty but not in this Guide for the reasons presented in this annex.

True value as traditionally been taught has been internationally deprecated. You should download the JCGM documents and learn from them.

AlanJ
Reply to  Jim Gorman
October 17, 2023 6:05 pm

So we are jumping from “the concept of errors is deprecated” to, “the term true value not used in the GUM.” Do the goal posts get heavier each time you have to lift them?

Reply to  AlanJ
October 17, 2023 6:34 pm

Work on the GUM began in 1977. It is not new by any means. The old method of “errors” canceling and revealing a true value disappeared a long time ago. Read the beginning of the GUM for an explanation.

For someone with an MSc in science you sure haven’t kept up with metrology very well. What book on metrology are you using?

Your attempt to justify current metrology in climate science is indicative of your knowledge of uncertainty.

Reply to  AlanJ
October 17, 2023 4:38 am

Random measurement error tends to cancel when averaged (if it’s random then the + is equally as likely as the – so you’re adding together values that are equally too high and too low), and the more measurements you include the more the random element cancels. That is all that is said about averaging.”

Malarky! Error only cancels when you are measuring the same thing multiple times using the same device under the same environmental conditions – and even then you have to show that you have a true Gaussian distribution of measurement error.

Temperature measurements totally violate this restriction. First, the same thing is not being measured – Tmax is *not* Tmin. Second, multiple measurements of the same thing are, therefore, not being taken. And third, each measurement is taken under different environmental conditions by definition.

Your assertion totally ignores the fact that systematic bias in readings can’t be statistically analyzed yet systematic bias is a definite component of measurement uncertainty, especially in uncalibrated field measuring devices subject to component drift, sensor drift, micro-climate changes, and external influences such as insect infestation or ice/snow.

Averaging doesn’t help ANYTHING. The average uncertainty is *NOT* the uncertainty of the average. The SEM is not the measurement uncertainty. Systematic bias cannot be eliminated by averaging.

Each temperature measurement is a random variable with a sample size of 1 and whose variance is its measurement uncertainty. When you add all those random variables all those variances add. The total from adding those individual variances is the variance of the resulting data set and is a direct indication of the uncertainty of the average value, *not* the average value of the individual variances. If you believe that there is *some* cancellation of the variances then you add them in quadrature instead of direct addition – but you will *never* get complete cancellation except under the one possible situation above.
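
The claim that variances add for independent random variables, and what that implies for the mean, can be checked numerically. This is a sketch with hypothetical per-reading variances, not real station uncertainties:

```python
import random
import statistics

random.seed(2)
VARIANCES = [1.0, 4.0, 9.0]  # hypothetical per-reading variances

def one_sum():
    """Sum of independent zero-mean readings with the given variances."""
    return sum(random.gauss(0, v ** 0.5) for v in VARIANCES)

sums = [one_sum() for _ in range(200000)]
var_of_sum = statistics.pvariance(sums)                     # ~ 1 + 4 + 9 = 14
var_of_mean = statistics.pvariance([s / 3 for s in sums])   # ~ 14 / 9
print(round(var_of_sum, 1), round(var_of_mean, 2))
```

For independent readings, Var(ΣXi) = ΣVar(Xi), and the variance of the average is that sum divided by n²; under correlation (e.g. a shared systematic bias) the simulation above would understate the true variance.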

“Not one single climate scientist believes that there is no systematic error in temperature measurements, else there would not be an enormous body of literature discussing the subject. Your stubborn inability to accept that no one is disagreeing with you on this point seems to be the very heart of your persistent confusion on this topic.”

Then why do you *NEVER* see it mentioned anywhere concerning the “global average temperature? Even if you assume the measurement uncertainty of the GAT is no more than the average uncertainty of the measuring devices, that average measurement uncertainty ranges from +/- 0.2C to +/- 1.0C for the temperature measurements used to calculate the GAT. That means the measurement uncertainty is somewhere in the tenths digit! That overwhelms the ability to tell what is happening in the hundredths digit – that hundredths digit is part of the GREAT UNKNOWN.

It’s the reason that climate science likes to say that the SEM is the measurement uncertainty of the GAT. You can make the SEM as small as you want just by increasing the sample size!

Medicine recognized the falsity of that assumption a long time ago, they differentiate between the SEM of a set of data and the actual measurement uncertainty (the standard deviation of the population data set). Yet climate science has yet to recognize the same thing!

Climate science is so far behind other disciplines in their methodology that is it actually sad. It’s like they are still living in the 1700’s when the daily mid-range temperature was only used for local purposes (such as for agricultural use) and wasn’t extended to regional or global use.

AlanJ
Reply to  Tim Gorman
October 17, 2023 9:11 am

Your assertion totally ignores the fact that systematic bias in readings can’t be statistically analyzed yet systematic bias is a definite component of measurement uncertainty, especially in uncalibrated field measuring devices subject to component drift, sensor drift, micro-climate changes, and external influences such as insect infestation or ice/snow. 

Nothing about my assertion ignores this. There is an enormous amount of effort spent identifying and removing systematic bias from temperature measurements.

Then why do you *NEVER* see it mentioned anywhere concerning the “global average temperature?

You don’t see it because you are completely ignorant of the body of work in this field. Refer to any of the multitude of references provided alongside any of the major temperature indexes.

Reply to  AlanJ
October 17, 2023 11:56 am

I’m going to die laughing today.

Show me a study where each location was visited to determine the extent of systematic uncertainty!

You didn’t read the Hubbard & Lin study did you? You can not determine systematic error remotely through statistics. The fact that you need this explained at all indicates your lack of knowledge about making measurements. If the frame of my micrometer is bent, how do you tell that from the measurements I take? If mites leave detritus on a thermistor, how do you tell from temperatures it reads?

AlanJ
Reply to  Jim Gorman
October 17, 2023 12:25 pm

You didn’t read the Hubbard & Lin study did you? You can not determine systematic error remotely through statistics.

Again, no one has ever disagreed with this point. You’re arguing with your own hallucinations. This single misunderstanding of yours is the primary driver of your continual confusion whenever you engage in conversations about this topic with others.

Reply to  AlanJ
October 17, 2023 4:10 pm

What you said.

There is an enormous amount of effort spent identifying and removing systematic bias from temperature measurements.

What I said.

Show me a study where each location was visited to determine the extent of systematic uncertainty!

You didn’t respond with any study or reference whereby visits are routinely made to every individual station to monitor and resolve systematic uncertainty. Therefore, your statement must have meant that lots of computing time and statistical analysis is done to resolve systematic uncertainty.

It is not wrong to point out assertions that are incorrect. You can resolve the issue by pointing to papers showing how systematic uncertainty at individual stations is isolated and corrected.

Reply to  AlanJ
October 17, 2023 12:13 pm

There is an enormous amount of effort spent identifying and removing systematic bias from temperature measurements.

And it is all data FRAUD.

Reply to  karlomonte
October 17, 2023 12:57 pm

Systematic uncertainty can not be determined thru statistical analysis. Beware anyone who thinks it can.

Can anomalous readings be recognized? Only if they exceed a set range.

This guy is a neophyte.

Reply to  Jim Gorman
October 17, 2023 3:37 pm

Yep. He thinks the climate science data adjustments via goat entrails is a valid procedure, and all he can do to back them up is generating word salads.

Reply to  karlomonte
October 17, 2023 5:15 pm

You got that right!

Reply to  AlanJ
October 17, 2023 1:15 pm

You claimed measurement uncertainty cancels. How does it cancel when it is systematic? LIG thermometers have similar systematic bias due solely to their design. See Pat Frank’s paper on this. That systematic bias in LIG thermometers can’t cancel since it is the same for every LIG thermometer.

You don’t see it because you are completely ignorant of the body of work in this field.”

There is *NO* body of work in climate science on how to correct systematic bias in field temperature measuring devices. There can’t be because it would require each station to be calibrated in a lab environment prior to each measurement.

There is no way to identify if increasing temperature readings at a station are due to global warming or due to calibration drift, or due to a new contract with a new landscaper! There is a reason why Hubbard and Lin found that you can’t do regional temperature reading adjustments because of the unknown impact of the microclimates at each station. You would need to do adjustments on a station-by-station basis and *that* would require you to have a co-located, calibrated thermometer at each station prior to each measurement!

I suspect there is a reason why no climate scientist ever gets hired to design a load carrying bridge for the public.

AlanJ
Reply to  Tim Gorman
October 17, 2023 2:15 pm

You claimed measurement uncertainty cancels. How does it cancel when it is systematic?

I didn’t claim that, I claimed that random measurement error tends to cancel when multiple observations are combined by averaging. The systematic component of the uncertainty will not cancel.

All of the adjustments applied to the surface station data are designed to remove bias and systematic error from the network, and all such adjustments are meticulously described in the literature. The random component of the uncertainty is not explicitly addressed via analytical methods because it is minimized by combining a large number of observations. Does this mean that every single possible conceivable element of systematic uncertainty is addressed? Of course not, otherwise scientists would not continually publish updates to the surface temp indexes.

Again, in your replies you ignore what is actually being said to you. You’re fighting your own shadow.

Reply to  AlanJ
October 17, 2023 3:38 pm

All of the adjustments applied to the surface station data are

FRAUDULENT, and you sit wherever you are and defend this practice.

Reply to  AlanJ
October 17, 2023 4:59 pm

All of the adjustments applied to the surface station data are designed to removed bias and systematic error from the network, and all such adjustments are meticulously described in the literature.

Show one reference document that describes how systematic uncertainty is both determined and addressed.

The random component of the uncertainty is not explicitly addressed via analytical methods because it is minimized by combining a large number of observations.

Again, you display your lack of knowledge of statistics.

I have asked you before to answer some questions about the statistics used for temperature and you failed to answer them. Here they are again.

When you have:

Var(X) = monthly variance,

Var(Y) = baseline variance,

and:

Var (X – Y) = Var(X) + Var(Y)

Pick any month, what is the value of the variance of this subtraction? What is the value of the variance for an annual average of monthly anomalies? Show your work.

A large number of observations doesn’t mean shite if they are not grouped and analyzed properly. As I have tried to point out, unless you know the population σ, dividing by the √n is an improper calculation. Even worse, it doesn’t use the experimental standard deviation of data as outlined in the GUM.
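
The Var(X − Y) = Var(X) + Var(Y) identity stated above holds for independent X and Y, and is easy to verify numerically. The two variances here are hypothetical stand-ins for the monthly and baseline variances:

```python
import random
import statistics

random.seed(3)
VAR_X, VAR_Y = 2.0, 3.0  # hypothetical monthly and baseline variances

# Var(X - Y) = Var(X) + Var(Y) when X and Y are independent
diffs = [random.gauss(0, VAR_X ** 0.5) - random.gauss(0, VAR_Y ** 0.5)
         for _ in range(200000)]
print(round(statistics.pvariance(diffs), 2))
```

Note this is the independence case only; if the monthly value and the baseline were correlated, a covariance term would enter.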

Reply to  AlanJ
October 18, 2023 4:38 am

The systematic component of the uncertainty will not cancel.”

If systematic bias does not cancel, and if u(total) = u(random) + u(systematic) then what portion of u(total) is u(systematic)?

All of the adjustments applied to the surface station data are designed to removed bias and systematic error from the network”

How do you eliminate systematic bias when you don’t know what it is? Hubbard and Lin showed in 2002 that station adjustments have to be done on a station-by-station basis. You cannot apply the same adjustment to a group of stations because of differences in microclimate across stations. We already went over this once. Is your memory *that* short?

“The random component of the uncertainty is not explicitly addressed via analytical methods because it is minimized by combining a large number of observations.”

If u(total) = u(random) + u(systematic) then you cannot assume that all random error cancels because you don’t know what it is! It will be different for each measurement as well as for each station.

“Does this mean that every single possible conceiveable element of systematic uncertainty is addressed? Of course not, otherwise scientists would not continually publish updates to the surface temp indexes.”

As I have already pointed out, climate science totally ignores the literature on measurement uncertainty that is applicable to temperature measurements.

Recap: 1) you can’t adjust for systematic bias since it is different for every measurement stations and, 2) you can’t assume cancellation of random error because you don’t know what proportion of a total station uncertainty the random component is.

This just all boils down to what I’ve been saying for two years. Climate science just uses the meme of “all measurement uncertainty is random, Gaussian, and cancels”.

That’s *exactly* what you are saying. “all measurement uncertainty is random, Gaussian, and cancels” along with “you can adjust systematic bias away using a common adjustment for all stations”.

Meaning you can just ignore measurement uncertainty and assume the stated values of the temperature measurement are 100% accurate.

It’s how climate science justifies being able to discern temperature differences in the hundredths digit using field measurement devices with uncertainty in the tenths digit – just ignore the measurement uncertainty!

Reply to  Tim Gorman
October 18, 2023 5:39 am

And what remains astounding to me is the near-universal belief that these “adjustments” are ethical. Not one of them that I can see will pause for even a millisecond to consider they are not ethical.

Reply to  AlanJ
October 18, 2023 2:48 pm

I didn’t claim that, I claimed that random measurement error tends to cancel when multiple observations are combined by averaging. “

ONLY WHEN YOU ARE MEASURING THE SAME MEASURAND USING THE SAME DEVICE UNDER THE SAME ENVIRONMENTAL CONDITIONS.

Please explain how this applies to temperature measurements made by different devices under different environmental conditions! And then explain how the SAME MEASURAND is involved in temperature measurements.

AlanJ
Reply to  Jim Gorman
October 16, 2023 1:26 pm

Basically, the only real way to reduce the SEM is to increase the sample size (NOT the number of samples) such that a better and better representation of the population distribution is obtained in each sample. 

That is correct. Our sample is the set of observations of Tavg used to calculate the mean. The more observations we include in the sample, the lower the SEM.

Define the population of a single Tavg calculation?

For an estimate of Tavg at a single location on a single day using a single instrument, the population is the set of possible values of temperature we could measure throughout the day. Our sample will be the set of measurements taken (T-max and T-min).

How do you propagate this single daily uncertainty of Tavg into a monthly Tavg calculation?

The uncertainty is propagated via the variances: you determine the standard deviation of the daily Tavgs. It will generally be true that the uncertainty of the monthly estimate of the daily Tavg will be lower than the uncertainty of any individual daily Tavg estimate. This will certainly be true for the case where you are combining multiple daily Tavg estimates for each day of the month, as when determining the global mean from a station network.
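
As a sketch of the monthly calculation being described (the daily Tavg values are simulated, purely illustrative): the SD of the daily values measures their spread, and dividing by √31 gives the SD attached to the monthly mean:

```python
import random
import statistics

random.seed(5)
# One hypothetical month of daily Tavg values in deg F (illustrative only)
daily_tavg = [70.0 + random.gauss(0, 5.0) for _ in range(31)]

monthly_mean = statistics.fmean(daily_tavg)
daily_sd = statistics.stdev(daily_tavg)      # spread of the daily values
sem = daily_sd / len(daily_tavg) ** 0.5      # SD attributed to the monthly mean
print(round(monthly_mean, 1), round(daily_sd, 1), round(sem, 2))
```

Whether the σ/√n step is legitimate here is exactly the point the two sides are disputing; the code only shows the arithmetic, not its justification.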

Reply to  AlanJ
October 16, 2023 4:54 pm

That is correct. Our sample is the set of observations of Tavg used to calculate the mean. The more observations we include in the sample, the lower the SEM.

So you have one sample with 31 days? Or do you have 31 samples with 1 data point in each sample?

With one sample you can not have a sample means distribution. You only have one sample mean, and that doesn’t make a distribution to use with the CLT.

With 31 samples your sample size is 1. That makes σ = SEM using the formula SEM= σ / √n.

You haven’t thought any of this through have you?

Why don’t you download these and really, really study them?

JCGM 100:2008
https://www.iso.org/sites/JCGM/GUM/JCGM100/C045315e-html/C045315e.html?csnumber=50461

JCGM 200:2012 https://www.iso.org/sites/JCGM/VIM/JCGM_200e.html

AlanJ
Reply to  Jim Gorman
October 17, 2023 10:14 am

So you have one sample with 31 days? Or do you have 31 samples with 1 data points in each sample.

You have a sample with a size of 31 observations. There is a sample means distribution, it is reflective of the distribution of means obtained by randomly sampling the total population.

Reply to  AlanJ
October 17, 2023 11:11 am

“””””You have a sample with a size of 31 observations. There is a sample means distribution, it is reflective of the distribution of means obtained by randomly sampling the total population.”””””

Rotflmao!

You have “a sample”! That is one sample. That is ONE sample mean. One sample mean does not make a sample means distribution that will allow a standard deviation of the sample means distribution to be calculated.

The SD of a sample is not the SEM either. The SEM is the SD of a SAMPLE MEANS distribution that is normal by the CLT, which you don’t have. You also have no proof that your sample is representative of the population.

Remember, the sampling theory you learned? Sampling allows you to calculate an estimated mean and to estimate a standard deviation by “SEM•√n”, but first you need sampleS to calculate the SEM.

You need to find a better answer!

AlanJ
Reply to  Jim Gorman
October 17, 2023 12:21 pm

The SD of a sample is not the SEM either. 

You are conflating how the SEM is calculated with what it represents. The SEM represents the deviation of the sample mean from the population mean, it is calculated by dividing the sample standard deviation by the square root of the sample size. In effect you are scaling the sample standard deviation to be appropriate for the sample mean.

Reply to  AlanJ
October 17, 2023 1:23 pm

the sample standard deviation”

This implies one sample. How do you prove that this one sample is representative of the population? Especially when it is a very small sample. The CLT *requires* multiple samples in order to have an SEM since the multiple samples will coalesce into a Gaussian distribution. The SD of *ONE* sample is insufficient for proving anything.

AlanJ
Reply to  Tim Gorman
October 17, 2023 2:35 pm

You explicitly don’t assume this. In fact the smaller the sample is, the less likely it is to be representative of the population, and you can expect a broad distribution of sample means for same-sized samples. For very large sample sizes, the distribution of sample means around the population mean narrows. The CLT does not “require” multiple sample means to show a normal distribution, it says that the distribution will be normal whether we have measured it or not. The calculation of the SEM is based on the standard deviation of a single sample. It’s better if the sample is very large.
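
The disputed claim here can be put to a numerical test (population SD and sample size are hypothetical): compare s/√n computed from a single sample against the empirical SD of many sample means, i.e. the sampling distribution itself:

```python
import random
import statistics

random.seed(4)
POP_SIGMA, N = 8.0, 50  # hypothetical population SD and sample size

# SEM estimated from a single sample: s / sqrt(n)
sample = [random.gauss(0, POP_SIGMA) for _ in range(N)]
sem_from_one_sample = statistics.stdev(sample) / N ** 0.5

# Empirical SD of many sample means (the sampling distribution itself)
many_means = [statistics.fmean(random.gauss(0, POP_SIGMA) for _ in range(N))
              for _ in range(20000)]
print(round(sem_from_one_sample, 2), round(statistics.pstdev(many_means), 2))
```

Both numbers estimate σ/√n (≈ 1.13 here), though the single-sample estimate carries its own sampling noise; whether this captures *measurement* uncertainty, as opposed to sampling uncertainty, is the Gormans' objection.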

Reply to  AlanJ
October 17, 2023 3:40 pm

More word salad that doesn’t apply to GAT calculations.

Reply to  AlanJ
October 17, 2023 5:09 pm

The CLT does not “require” multiple sample means to show a normal distribution, it says that the distribution will be normal whether we have measured it or not. The calculation of the SEM is based on the standard deviation of a single sample. It’s better if the sample is very large.

Bull crap. Your knowledge of sampling is extremely deficient. You CAN NOT obtain an SEM based on a single sample. Are you Bellman under another name?

Read the chapter I referenced and the preceding chapter about point estimation and sampling distribution.

Just exactly how do you get a sample means distribution without having multiple samples? For sure, ONE sample, as you claimed, a distribution does not make. Tell me how ONE SAMPLE MEAN can make a distribution.

From the reference I gave you.

The sampling distribution of a sample statistic is the distribution of the point estimates based on samples of a fixed size, n, from a certain population. It is useful to think of a particular point estimate as being drawn from a sampling distribution.

You probably don’t even know what connotation “sample statistic” has.

Reply to  AlanJ
October 18, 2023 2:26 pm

“For very large sample sizes, the distribution of sample means around the population mean narrows.” (bolding mine, tpg)

But you don’t have multiple sample MEANS! You have ONE sample mean.

“The CLT does not “require” multiple sample means to show a normal distribution, it says that the distribution will be normal whether we have measured it or not.”

The CLT says the distribution of the SAMPLE means will be normal! Not the sample distribution. Or the population distribution.

The SEM is *NOT* based on the SD of the sample.

You *really* don’t understand any of this at all do you?

Why do you think the more accurate name for the SEM is “standard deviation of the sample means”?

Reply to  AlanJ
October 17, 2023 3:47 pm

The SEM represents the deviation of the sample mean from the population mean, it is calculated by dividing the sample standard deviation by the square root of the sample size.

You need to show a reference that supports your assertion. It needs to address specifically what assumptions are needed to allow its use.

And no, it is not calculated by dividing the sample standard deviation by the √n. It is calculated by dividing the population σ by the √n.

The CLT will provide a normal distribution if a sufficient number of samples, each of large enough size, is drawn and their means are collected into the distribution of the sample means. Now you tell me what this means.

Let’s make it more basic. I take 100 samples each of which has 30 measurements. I then take the mean of each sample and create a random variable. I then plot a histogram with the 100 means of the 100 samples and according to the CLT, I will have a normal distribution regardless of the shape of the original population. The mean of the sample means distribution will estimate the mean of the population. The standard deviation of the sample means (standard error) can estimate the population σ by using the formula SD = SEM * √n.
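The 100-samples-of-30 experiment described above can be sketched in a few lines. The population here is a deliberately non-normal (exponential) distribution with σ = 10; both the distribution and the seed are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw 100 independent samples of size n = 30 from a non-normal population
# (exponential with scale 10, so the population sigma is exactly 10)
n = 30
sample_means = rng.exponential(scale=10.0, size=(100, n)).mean(axis=1)

# The SD of the sample-means distribution is the standard error (SEM);
# multiplying by sqrt(n) recovers an estimate of the population sigma
sem = sample_means.std(ddof=1)
sigma_est = sem * np.sqrt(n)

print(f"SEM = {sem:.2f}, estimated population sigma = SEM*sqrt(n) = {sigma_est:.2f}")
```

With 100 sample means the recovered σ lands near 10, despite the skewed parent distribution, which is the CLT behaviour the quoted reference describes.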

You have admitted that you only have one sample. THAT PROVIDES ONE SAMPLE MEAN. To obtain a sample means distribution that meets the CLT requirement for both estimating the population mean and the population standard deviation, you need a lot more samples.

I’ll bet you can’t even decipher what assumption NIST made in TN 1900 Example 2 that let them assume they could divide the standard deviation of the data by √n. Even they admitted that it might be a false assumption. If you truly understand sampling and the CLT you might be able to figure it out.

For your perusal here is a reference. There are many more on the internet.

From: 5.2 The Sampling Distribution of the Sample Mean (σ Known) – Significant Statistics (vt.edu)

To summarize, the central limit theorem for sample means says that if you keep drawing larger and larger samples (such as rolling one, two, five, and finally, ten dice) and calculating their means, the sample means form their own normal distribution (the sampling distribution). The normal distribution has the same mean as the original distribution and a variance that equals the original variance divided by the sample size. Standard deviation is the square root of variance, so the standard deviation of the sampling distribution (aka standard error) is the standard deviation of the original distribution divided by the square root of n. The variable n is the number of values that are averaged together, not the number of times the experiment is done.

AlanJ
Reply to  Jim Gorman
October 17, 2023 6:00 pm

I understand that the concept is confusing for you, but it is a fundamental part of statistics so wrapping your head around it is probably a good idea. We typically do not know the population standard deviation, but we do know the standard deviation of our sample. This standard deviation is an observation of the distribution of the sample means, because it is a set of n observations of the population distribution, from which we can determine the SEM.

You are conflating the definition of the SEM with the mathematical equality stating that it is equivalent to the sample standard deviation divided by the square root of the sample size.

Reply to  AlanJ
October 17, 2023 6:16 pm

“we do know the standard deviation of our sample. This standard deviation is an observation of the distribution of the sample means”

You are wrong! The standard deviation of a sample is in no way related to the sample means distribution. Where did you learn statistics, a Cracker Jack box! The sample means distribution is made up of the mean values of all the samples.

Show a reference that calculates an SEM from the standard deviation of the samples.

You are making up a word salad hoping it comes out right. It isn’t!

Dude, you said you had one sample with 31 temps in it. Do I need to show you what you said?

One sample provides one sample mean. You can not make a sample means distribution with just one sample mean.

Reply to  AlanJ
October 18, 2023 2:43 pm

“…our sample. This standard deviation is an observation of the distribution of the sample means, because it is a set of n observations of the population distribution, from which we can determine the SEM.” (bolding mine, tpg)

You can’t even be consistent in two consecutive sentences! In the first sentence you speak of ONE sample, singular. In the next sentence you talk about having *means*, plural, and a “set”, plural!

Reply to  AlanJ
October 18, 2023 2:55 pm

If you don’t know the population SD it is easy to find. Take multiple samples of size n, find the SEM, and then multiply the SEM by sqrt(n). You’ll be close to the population SD.

You do *NOT* just assume the SD of one sample is the population SD.

Reply to  AlanJ
October 14, 2023 3:42 pm

Tmax is taken from a sinusoidal daytime temp curve. Tmin is taken from an exponential decay nighttime temp curve. (Tmax+Tmin)/2 is *NOT* an average temp, it is a mid-range temp.

That mid-range temp can be created from many different combinations of Tmax and Tmin. That means it cannot be used as an index to define climate. Two different climates should not give the same index value if you are trying to discern climate impacts. The climate of Las Vegas and of Miami are quite different. Yet daily mid-range values can be very similar. So what does the mid-range value tell you about the climate in Las Vegas vs Miami?

AlanJ
Reply to  Tim Gorman
October 14, 2023 4:08 pm

The thing we want to know is whether the climates are changing, whether that change is occurring in Tmin or Tmax we don’t initially care, so Tavg is the perfect metric.

Malarky! If the variance of regional temps is high, and the random variables known as the regional temp data are added together to create a global data set then the variances of the regional temp data ADDS – meaning the variance of the global data will be larger than the variance of any regional data.

That isn’t how variance works. Variance is defined as the squared sum of the differences of the individual observations and the mean of the observations divided by the number of observations. The variance is determined by how close the observations typically are to the mean.

Reply to  AlanJ
October 14, 2023 4:15 pm

You are the person going on and on about high temperatures. How do you tell what the high temperatures were from a global anomaly ΔT? You don’t even know what the actual temperature is that is associated with the anomaly. If Tmin grows by 1 degree and Tmax doesn’t change at all, what happens to the average? It grows.

You are not worth spending any more time with. You have no background in measurements, what they mean, nor how uncertainty applies.

Reply to  AlanJ
October 15, 2023 4:01 am

“The thing we want to know is whether the climates are changing, whether that change is occurring in Tmin or Tmax we don’t initially care, so Tavg is the perfect metric.”

Again, what you are calculating is *NOT* an average. It is a mid-range value. The mid-range value is not a good index to use in determining if climates are changing since different climates can give you the same mid-range value. If the mid-range value can’t discern different climates then how can it tell you “whether the climates are changing”?

That isn’t how variance works.”

Of course it is. When adding random variables the variances add. You just defined what variance is, you did *NOT* define what the combined variance of two random variables is.

The northern hemisphere in summer has a different variance in temperatures than the southern hemisphere in winter. Both are a random, independent variable. Yet to determine the “global climate” you add variables together.

Var(A ± B) = Var(A) + Var(B)

Variance is a measure of the uncertainty of a random variable. The higher the variance the less certain you are about what actual value the random variable can assume. Therefore the uncertainty of the global climate is GREATER than the uncertainties of the component parts used to determine the global climate.
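The variance-addition rule quoted above holds for independent (uncorrelated) random variables and is easy to verify numerically. The two distributions below are arbitrary illustrations, not temperature data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two independent random variables with different variances
a = rng.normal(0.0, 2.0, 1_000_000)   # Var(A) = 4
b = rng.normal(0.0, 3.0, 1_000_000)   # Var(B) = 9

# For independent A and B: Var(A + B) = Var(A) + Var(B) = 13
print(np.var(a + b))   # close to 13
print(np.var(a - b))   # also close to 13: Var(A - B) = Var(A) + Var(B)
```

Note the independence caveat: for correlated variables a covariance term must be added, so the sum rule is the minimum-information case.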

You really don’t know what you are talking about, do you? Your word salads are proof.

AlanJ
Reply to  Tim Gorman
October 15, 2023 8:06 pm

Again, what you are calculating is *NOT* an average. It is a mid-range value. The mid-range value is not a good index to use in determining if climates are changing since different climates can give you the same mid-range value. If the mid-range value can’t discern different climates then how can it tell you “whether the climates are changing”?

It very much is an average, by definition. If the mid-range values are changing then the climate is changing, that is how Tavg can tell us whether the climate is changing. If I know the height of two people’s belly buttons, I maybe can’t distinguish whose head is farther from the floor, but if the mean height of their belly buttons is changing over time I know that there is a change in the body proportions of at least one person. Now I can start investigating why the change is occurring.

Reply to  AlanJ
October 16, 2023 5:17 am

It very much is an average, by definition.”

No, it isn’t. The average of a half-cycle of a sinusoid is 0.63 x Tmax. The average of an exponential decay is about the half-life, i.e. ln(2)/λ where λ is the exponential decay factor.

The average of the daily curve is the average of the two averages, i.e. daytime and nighttime: [0.63·Tmax + ln(2)/λ] / 2

These values *will* differentiate between climates. Las Vegas and Miami will have a different average value, even if the mid-range value is the same, because the exponential decay factor at night is different for each location.

It is why climate science needs to join agriculture science and HVAC engineering in using integrative degree-day values instead of the mid-range value which is solely based on TRADITION – shades of Tevye!
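The 0.63 factor claimed above for the half-cycle sinusoid can be checked numerically: the mean of sin(x) over [0, π] is 2/π ≈ 0.637. The peak temperature of 30 below is a hypothetical value, not a station reading:

```python
import numpy as np

# Mean of a half-cycle sine (daytime curve peaking at Tmax):
# analytically (1/pi) * integral of sin(x) over [0, pi] = 2/pi ~ 0.637
x = np.linspace(0.0, np.pi, 100_001)
tmax = 30.0                       # hypothetical peak temperature
daytime = tmax * np.sin(x)
print(daytime.mean())             # ~ 0.637 * 30 ~ 19.1
```

So two sites with the same (Tmax+Tmin)/2 mid-range can still have different true daily averages once the shapes of the day and night curves differ.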

“If the mid-range values are changing then the climate is changing, that is how Tavg can tell us whether the climate is changing.”

Malarky. Inherent in this, whether you recognize it or not, is the assumption that the climates of Las Vegas and Miami are the same. They aren’t. The climate in Miami could be warming/cooling at a different rate than in Las Vegas. Using the mid-range values simply won’t be able to tell you that.

 but if the mean height of their belly buttons is changing over time I know that there is a change in the body proportions of at least one person.”

But the “global average temperature”, derived from all these mid-range values assumes the same change is happening to BOTH! If one person is getting taller while the other is getting shorter the average of the mid-range values will come out the same and you’ll totally miss the actual changes!

Climate science uses a fatally flawed index at its base and it makes all the other averages derived from that index fatally flawed as well.

The truth is that averages lose information. Every time an average of an average is calculated more information is lost. By the time climate science calculates the “global average temperature” so much information has been lost that it is no longer possible to actually tell *what* or *why* the global average temperature has changed.

AlanJ
Reply to  Tim Gorman
October 16, 2023 6:26 am

These values *will* differentiate between climates. Las Vegas and Miami will have a different average value, even if the mid-range value is the same, because the exponential decay factor at night is different for each location. 

The thing that we are trying to track is change in the climate, we are not trying to characterize the climates of these places – that is one reason why scientists report values as anomalies. We don’t retain information about the specific climates by design. If we wanted to understand how the climates of Las Vegas and Miami were individually responding, we would be conducting a more detailed regional study.

But the “global average temperature”, derived from all these mid-range values assumes the same change is happening to BOTH! If one person is getting taller while the other is getting shorter the average of the mid-range values will come out the same and you’ll totally miss the actual changes!

The global population of belly button havers isn’t changing, then. That’s the whole point. Global climate change represents a change in the energy state of the system, internal variability represents unforced movement of energy from one part of the system to another. The former is the thing we are trying to track.

Reply to  AlanJ
October 16, 2023 11:10 am

The thing that we are trying to track is change in the climate,”

You didn’t even read what I posted or you didn’t understand what the mid-range value is.

If the mid-range value goes up in one location and down in another then how do you identify that a change even happened? The average of the two mid-range values will be the SAME – *no discernable change*.

“that is one reason why scientists report values as anomalies.”

Anomalies are a joke. Winter temps have a different variance than summer temps. How does using anomalies overcome this fact? A random variable, i.e. winter temps, that has a different variance than another random variable, i.e. summer temps, can’t be directly compared. And that includes their averages and anomalies. But climate science does exactly that!

Anomalies are supposedly a difference between a baseline average temp and the current temp. One major problem with that is that it assumes the baseline average temp somehow describes *today* as well as yesterday. The change in temp today, compared to the average temp from 2006, is different than comparing it to the average temp from 1950-1980. So which anomaly would better define what is happening today?

Climate science suffers from a major blind spot – tradition.

AlanJ
Reply to  Tim Gorman
October 16, 2023 1:32 pm

If the mid-range value goes up in one location and down in another then how do you identify that a change even happened? The average of the two mid-range values will be the SAME – *no discernable change*.

A change did not happen if a change in one place is exactly offset by a change in another. This is the entire concept of global climate change. If warming in one region is offset by cooling in another, all that is happening is energy being moved from one region to another. There is not a forced change in the energy state of the system occurring (net forcing is zero).

Anomalies are supposedly a difference between a baseline average temp and the current temp. One major problem with that is that it assumes the baseline average temp somehow describes *today* as well as yesterday. The change in temp today, compared to the average temp from 2006, is different than comparing it to the average temp from 1950-1980. So which anomaly would better define what is happening today?

That distinction is arbitrary – the choice in baseline doesn’t affect how much temperature change has occurred between any two points in time, it merely determines where the zero is. If 0 is in 1900 then the present day anomaly is around 1.2, and 1.2 degrees of warming has occurred since 1900. If zero is in 1980 then 1900 is -0.5 degrees and present day is 0.7 degrees, and 1.2 degrees of warming has occurred since 1900.

Reply to  AlanJ
October 16, 2023 4:38 pm

You missed the whole issue. Why am I not surprised. Anomalies don’t capture absolute reasons for variations. Averaging them just doesn’t take into account annual global weather conditions.

AlanJ
Reply to  Jim Gorman
October 17, 2023 10:15 am

If we wanted to study annual global weather we would not be looking at a global mean anomaly index.

Reply to  AlanJ
October 17, 2023 10:33 am

Your shoes are going to catch fire because you are dancing so hard and fast.

I suppose in your world a warmer global anomaly index doesn’t portend any change in global weather at all!

Funny how you and other warmists continue to forecast more extreme WEATHER because global anomaly growth means raised temperatures globally.

AlanJ
Reply to  Jim Gorman
October 17, 2023 12:33 pm

My position has remained completely consistent at every step of the discussion, what has been transient is your own interpretation of what is being said to you, which is intangible and fleeting as the breeze.

The global mean temperature tracks the energy content of the climate system, and the energy content of the climate system has implications for weather patterns. But if we want to study those patterns themselves we will look deeper than just the simple global mean temperature anomaly.

Reply to  AlanJ
October 17, 2023 1:00 pm

You must be trying to convince yourself. All who have tried to read your word salads know better.

Reply to  Jim Gorman
October 17, 2023 3:42 pm

I’ll give him one thing, he keeps going and going and going and going and going and going and …

Jim Masterson
Reply to  karlomonte
October 17, 2023 5:04 pm

“. . . he keeps going and going . . . .”

Yes he does. After losing this argument, then he’s going to have to explain how you measure a thermodynamic system that isn’t in equilibrium. Then he needs to explain how you average intensive properties. Of course, you can average phone numbers in a phone book, but I doubt the result will have any meaningful value.

Reply to  karlomonte
October 17, 2023 5:23 pm

Bellman!

Hey do you have an email I can use to contact you?

Reply to  AlanJ
October 17, 2023 1:28 pm

“The global mean temperature tracks the energy content of the climate system”

Energy content is tracked using enthalpy, not temperature. Temperature is a very poor proxy for enthalpy. The fact that you would make this assertion is just further proof that you are nothing more than a troll, you don’t actually have any real knowledge of thermodynamics at all.

AlanJ
Reply to  Tim Gorman
October 17, 2023 2:37 pm

How shall one track the enthalpy of the whole earth system?

Reply to  AlanJ
October 17, 2023 3:43 pm

Why did you not read what Tim wrote?

Reply to  AlanJ
October 17, 2023 4:15 pm

How shall one track the enthalpy of the whole earth system?

You are the guy with all the knowledge. Why are you asking others?

Reply to  AlanJ
October 18, 2023 2:31 pm

How shall one track the enthalpy of the whole earth system?

If you know the specific heat of the medium and its temperature why can’t you determine the enthalpy? Land-based temp measuring stations have recorded pressure and humidity for a long time. Knowing the humidity, pressure, and the temperature is all you need for enthalpy – how do you think you calculate the energy in a gas like steam? Have you *ever* looked at the steam tables?

How do you calculate the enthalpy of a unit of water? Do you have any idea?
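The calculation alluded to above can be sketched with the standard psychrometric approximation for moist air (h ≈ 1.006·T + w·(2501 + 1.86·T), in kJ per kg of dry air) plus the Magnus formula for saturation vapour pressure. The function name and the two example humidities are illustrative assumptions, not from any station record:

```python
import math

def moist_air_enthalpy(t_c, rh, p_hpa=1013.25):
    """Approximate specific enthalpy of moist air, kJ per kg of dry air.

    t_c: dry-bulb temperature in deg C; rh: relative humidity (0..1);
    p_hpa: station pressure in hPa. Uses the Magnus saturation formula
    and the standard psychrometric enthalpy approximation.
    """
    es = 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))  # saturation vapour pressure, hPa
    pw = rh * es                                          # actual vapour pressure, hPa
    w = 0.622 * pw / (p_hpa - pw)                         # humidity ratio, kg/kg dry air
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)

# Same 30 C temperature, very different enthalpy (the Las Vegas vs Miami point):
print(moist_air_enthalpy(30.0, 0.20))  # dry air
print(moist_air_enthalpy(30.0, 0.80))  # humid air
```

The spread between the two printed values shows why temperature alone is a lossy proxy for the energy content of the air.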

Reply to  AlanJ
October 17, 2023 4:55 am

Except a negative anomaly represents cooling from zero and a positive anomaly represents warming. The warming is calculated from the zero point, not the minimum point, so the warming from zero would only be +0.7deg, not 1.2deg. See the attached graph.

image_2023-10-17_065439261.png
AlanJ
Reply to  Tim Gorman
October 17, 2023 10:52 am

A cooling back in time is the same as a warming forward in time. So a negative anomaly prior to the baseline period represents a warming from that negative up to the baseline. So you need to add that to warming occurring after the baseline period.

Reply to  AlanJ
October 17, 2023 1:29 pm

More word salad trying to rationalize your incorrect assertion. Anomalies are not calculated from the minimum but from zero. Check your graph.

AlanJ
Reply to  Tim Gorman
October 17, 2023 3:21 pm

Anomalies are calculated relative to the baseline period, which means the mean anomaly of the baseline period is zero. That doesn’t mean we are prohibited from comparing periods before and after the baseline period.

If Jim’s height is 175cm and Tim’s is 185cm, and we decide to measure their relative heights against a baseline of 180cm, then Jim’s anomaly is -5cm and Tim’s is +5cm, and the difference in height between Tim and Jim is 5cm – (-5cm) = 10cm. In this simple example we can see that this is exactly the result we would have obtained by taking the difference between Jim and Tim’s heights directly.

So in my example the amount of warming that had taken place between 1900 and present day would be 1.2 degrees, exactly as I said initially.
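The height example above reduces to simple arithmetic: a baseline shift adds the same constant to every anomaly, so differences between anomalies are unchanged. A minimal check, with the heights taken from the example:

```python
# Baseline choice shifts every anomaly by the same constant,
# so the difference between any two anomalies is invariant.
heights = {"jim": 175.0, "tim": 185.0}  # cm, from the example above

for baseline in (170.0, 180.0, 200.0):
    anomalies = {k: v - baseline for k, v in heights.items()}
    diff = anomalies["tim"] - anomalies["jim"]
    print(baseline, diff)   # diff is 10.0 for every baseline
```

This shows the invariance claim only; it says nothing about the separate measurement-uncertainty objection raised in the replies below it in the thread.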

Reply to  AlanJ
October 17, 2023 3:46 pm

Anomalies are calculated relative to the baseline

And what you and the vast majority of climate science types ignore is that the baseline also has a measurement uncertainty, and the subtraction calculation increases uncertainty.

Reply to  karlomonte
October 17, 2023 5:21 pm

Yep, I’ve asked him twice what happens with variance when you subtract a baseline from a monthly value. No answer yet. In my own checks, I have seen SD of 0.7 to 2+° C. Far, far beyond what is being expressed as a perfectly good average value in the millikelvins. BTW, all my calcs were in Kelvin. Not having “-” values to deal with is simpler. Nobody ever discusses how that affects any of the averages of averages. Squared values are used for a reason.

Reply to  Jim Gorman
October 18, 2023 4:30 pm

And using absolute temperature allows you to do relative uncertainty calculations.

Reply to  karlomonte
October 18, 2023 5:22 pm

Anything dealing with thermodynamics, i.e., heat must be done in a zero based scale like Kelvin.

It is one reason climate scientists like anomalies. They can justify increments in Celsius being the same as increments in Kelvin. They ignore that temperatures should be translated into Kelvin prior to doing calculations.

Jim Masterson
Reply to  Jim Gorman
October 18, 2023 6:44 pm

“Anything dealing with thermodynamics, i.e., heat must be done in a zero based scale like Kelvin.”

Some calculations in thermodynamics don’t need a zero based scale. But if you always use a zero based scale, you don’t need to worry about those times when you should have.

Jim Masterson
Reply to  Jim Gorman
October 18, 2023 6:38 pm

“BTW, all my calcs were in Kelvin.”

Yes, I suffer from old, slow brain cells. If I had converted Mr. J’s 0 and 1 to an absolute temperature scale, the incorrect addition of another significant digit would have become obvious. He left off the units–a major no-no in engineering. I assume that’s true in other scientific disciplines too.

Reply to  Jim Masterson
October 19, 2023 5:32 am

Converting everything to Kelvin just makes life easier especially when you are dealing with T³ and T⁴ variables.

Reply to  AlanJ
October 18, 2023 2:35 pm

That doesn’t mean we are prohibited from comparing periods before and after the baseline period.”

You really don’t understand *any* of this do you? You are a troll. Do you have the faintest clue as to why baselines are used?

So in my example the amount of warming that had taken place between 1900 and present day would be 1.2 degrees, exactly as I said initially.”

Except the measurement uncertainty back in 1900 was +/- 1C! Do you have even the faintest clue as to what that means for that anomaly of 1.2deg? It means the difference will have a measurement uncertainty of at least +/- 1C. So what exactly is the anomaly?

Reply to  Tim Gorman
October 18, 2023 2:41 pm

The real problem with baselines is the inherent assumption that they are the perfect temperature.

Says who and why!

Reply to  AlanJ
October 16, 2023 5:38 am

“”””It very much is an average, by definition. “”””

Is an arithmetic mean useful for a non-normal distribution? Is a median a better value?

Do you know why the GUM describes the experimental standard deviation as the measurement uncertainty? Might it have something to do with non-normal distributions of experimental measurements?

You keep wanting to quote statistical stuff out of a textbook. To make those useful, you need to carefully define what the population is, what the size of samples is, how many IID samples you take, and whether the distribution of the sample means represents a normal distribution. Please tell us your assessment of these items.

Reply to  AlanJ
October 14, 2023 3:38 pm

Malarky! If the variance of regional temps is high, and the random variables known as the regional temp data are added together to create a global data set then the variances of the regional temp data ADDS – meaning the variance of the global data will be larger than the variance of any regional data.

Reply to  Tim Gorman
October 14, 2023 6:06 pm

Careful . . . it was only that AlanJ claimed to have a MSc in the physical sciences (his post of October 13, 2023 8:42 am). And we’ve all seen what his claims are worth.

From all I observe, AlanJ’s degree—assuming he really has one—is a Masters of Scams.

Reply to  AlanJ
October 12, 2023 12:17 pm

What is your definition of “strongly”?

Reply to  AlanJ
October 12, 2023 12:49 pm

And the climate loony is drawn out of its slimy hidey hole once again.

The only reason for the very slight trend is the presence of the 2015/16 El Nino bulge in the latter half of the data.

Before that, there was no trend at all.

Reply to  AlanJ
October 12, 2023 12:55 pm

And of course, since that El Nino, the USA has been STRONGLY cooling.

combined USA since 2015.png
AlanJ
Reply to  bnice2000
October 12, 2023 1:04 pm

The trend in USCRN since 2015 is not significant at the 95% level, with a p-value of 0.15. We cannot reject the likelihood of the calculated trend simply being the result of variance in the dataset.
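A significance test of the kind quoted here (slope, t-statistic, p-value for a linear trend) can be sketched as follows. The series below is synthetic noise with a tiny trend, not USCRN data, so the printed p-value is illustrative only:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(7)

# Synthetic stand-in for 8 years of monthly anomalies:
# a small trend buried in large month-to-month variance
x = np.arange(96, dtype=float)
y = 0.002 * x + rng.normal(0.0, 1.0, size=96)

# OLS slope and its standard error
m, b = np.polyfit(x, y, 1)
resid = y - (m * x + b)
s2 = (resid @ resid) / (len(x) - 2)               # residual variance
se_m = sqrt(s2 / ((x - x.mean()) @ (x - x.mean())))

# Two-sided p-value (normal approximation, adequate for n = 96)
t_stat = m / se_m
p = erfc(abs(t_stat) / sqrt(2))
print(f"slope = {m:.4f}/month, t = {t_stat:.2f}, p = {p:.2f}")
# If p > 0.05 we cannot reject the null of zero trend at the 95% level
```

The test asks only whether the fitted slope is distinguishable from the dataset’s own variance; it attributes no cause to the trend either way.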

Reply to  AlanJ
October 12, 2023 1:58 pm

Poor Anal.. cannot allow himself to see the COOLING trend,

We cannot reject the fact that the ONLY trend in the USA data is from the El Nino bulge.. because it is. !

There is absolutely ZERO evidence of any human causation in the TINY positive mathematical trend.

The real question is:

Why are you so supportive of the lies and deceits from a climate cabal that wants to destroy western society?

What despicable immorality and hatred of western society drives you?

Reply to  AlanJ
October 12, 2023 3:22 pm

There are three sets of data saying exactly the same thing…

Sorry you are deliberately blind to that fact and don’t have a clue what it means.

Keep digging deeper into your pits of semi-education, AJ.. It is funny to watch. 😉

JC
Reply to  bnice2000
October 12, 2023 2:11 pm

One brief minimal trend compared to another brief minimal trend doesn’t prove much. There are lots of possible variables related to cooling and heating, which the world does in cycles much bigger than a couple of decades: SC 24 minimum, 5 years of ENSO-neutral or La Nina prior to the El Nino, a weak El Nino, Tonga and many more that are unknown or unclear.

The good news is that this post points to variables other than just human civilization, which is only one variable in the mix of many, and the green regime hates it.

Reply to  AlanJ
October 12, 2023 1:00 pm

I never understood how people come up with a line like that. Are you saying there is one and only one solution? I see this sort of thing often and if I wasn’t so lazy and old I’d study the method. But if you can explain it I’d appreciate it.

AlanJ
Reply to  Joseph Zorzin
October 12, 2023 1:08 pm

The line plotted is determined via a method called “linear regression.” It is a way of fitting a line to a set of data that minimizes the distance between the individual data points and the line, also called a “line of best fit.” In other words, it is the line (y=mx+b) that lies the closest to all of the individual points. It is useful for characterizing the general tendency of a time series like this one as either trending up or down.

Depending on the nature of the dataset, a linear fit might not be the best choice, and there are many other regression types that might be better suited (e.g. if the data are actually exhibiting an exponential increase, the “best fit line” will be described by an exponential function).
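The ordinary least-squares fit described above can be sketched in a few lines; the data here are synthetic with a known slope of 0.5 and intercept of 2.0, invented purely to show the mechanics:

```python
import numpy as np

# Hypothetical noisy series with a known underlying slope
rng = np.random.default_rng(3)
x = np.arange(50, dtype=float)
y = 0.5 * x + 2.0 + rng.normal(0.0, 1.0, size=50)

# Ordinary least squares: find m, b minimizing the sum of squared residuals
m, b = np.polyfit(x, y, deg=1)
print(f"fit: y = {m:.3f}x + {b:.3f}")   # close to the true 0.5 and 2.0
```

`np.polyfit` returns coefficients highest degree first, so for `deg=1` the slope comes back before the intercept.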

Reply to  AlanJ
October 12, 2023 1:16 pm

OK- I guess now I’ll have to get out a text book and see how it’s actually done. Funny, but I was cut out to be a math major. I had talent for it but it was the late ’60s and math seemed a boring way to spend those years. 🙂

Reply to  Joseph Zorzin
October 12, 2023 2:29 pm

It is a way for alarmists to convince people that CO2 is driving temperature growth. It is a joke.

A linear regression is supposed to relate an independent variable to a dependent variable through a functional relationship. Time is not a variable that causes temperature. A regression against time is no better than using a simple index of 1 to n on the x-axis.

Stock brokers and budget folks used to rely solely on this. No more. One must understand what variables drive the dependent variable and develop a relationship.

Reply to  AlanJ
October 12, 2023 1:17 pm

Linear regression on a cyclical time series doesn’t work well. It is *not* useful for characterizing the general tendency of a CYCLICAL time series. You’ve totally missed the significance of the cycles shown in the data.

AlanJ
Reply to  Tim Gorman
October 12, 2023 1:29 pm

Of course it does, provided there is an underlying trend. If you want to characterize the behavior of the cycles, then, no, it is not appropriate, and a trigonometric regression would be required.

Reply to  AlanJ
October 12, 2023 1:43 pm

Exactly how do you determine the trend on a cyclical process that can cover years, decades, and even centuries? What you have captured is like trying to say the rising portion of a sine wave is a “trend”.

AlanJ
Reply to  Tim Gorman
October 12, 2023 1:50 pm

If you have 18 years of data capturing a centuries long cyclic oscillation, the trend over 18 years will indeed be fit very well by a linear regression. I think what you’re trying to steer us towards is the notion that a regression is not predictive, and I agree, you have to have underlying physical theory to anticipate how the trend is going to evolve. But the trend is the trend is the trend.

Reply to  AlanJ
October 12, 2023 1:56 pm

Malarky! You have cycle periods in the data as short as five years. They are CYCLICAL. You can’t just dismiss them as natural variation or random noise, or even short term oscillations! They repeat.

A residual plot will show the oscillations around your “trend line”.

And you *still* haven’t addressed why the maximums are trending down and the minimums are trending up since 2010! That alone should tell you that a linear regression line is useless in explaining what is going on.

Stop being willfully ignorant and trying to defend the indefensible using hand waving.

AlanJ
Reply to  Tim Gorman
October 12, 2023 2:15 pm

A residual plot will show the oscillations around your “trend line”.

Exactly, if we want to isolate the oscillations, it might help us to detrend the series by determining the linear best fit and subtracting that from the series. But then we won’t be able to see the underlying long term trend.

For the USCRN, both the max and min temperatures are increasing since 2010:

comment image

And also further back since USCRN records begin in 2005:

comment image
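The detrending step described above (fit a line, subtract it, inspect what remains) can be sketched with a toy series, not the USCRN data:

```python
import numpy as np

# Hypothetical series: linear trend plus a ~5-year oscillation (monthly steps).
rng = np.random.default_rng(1)
t = np.arange(216) / 12.0                       # 18 years, in years
series = 0.03 * t + 0.4 * np.sin(2 * np.pi * t / 5.0) + rng.normal(0, 0.2, t.size)

# Detrend: fit a line and subtract it, leaving the oscillation + noise.
slope, intercept = np.polyfit(t, series, 1)
residuals = series - (slope * t + intercept)

# With an intercept in the fit, the residuals sum to (essentially) zero,
# while the cycle's amplitude remains visible in them.
assert abs(residuals.mean()) < 1e-9
```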

Thomas
Reply to  AlanJ
October 12, 2023 2:49 pm

One will find cycles and trends in a series of random numbers from 1 to 100. What causes them? Nothing other than the fact that we live in a random universe.

Reply to  Thomas
October 12, 2023 3:16 pm

You don’t usually find *repeating* cycles of the same approximate period in random numbers. Trends – yes, since the trend basically depends on the start and the end values. They can have either direction of slope.

Reply to  AlanJ
October 12, 2023 3:05 pm

ROFLMAO.

Always relying on the El Nino effect.

and you are so thick that you don’t realise it.

Digging deeper and deeper into your lack of understanding.

Just keep those monkey-like linear trends coming.

It’s HILARIOUS !

Reply to  AlanJ
October 12, 2023 3:10 pm

Do you DENY that there were El Ninos in 2015/16 and 2020?

Or don’t you understand how they affect the temperature.

Your continued gormless IGNORANCE of those El Ninos, is a wonder to behold.

Everything you have graphed PROVES that the very tiny trend is TOTALLY NATURAL…

.. and that there is ZERO HUMAN CAUSATION.

Or are you going to take a deeper dive into your ignorance, and say that El Ninos are caused by humans.

Anything to back your disgusting, anti-human, anti-western-society agenda. !

Reply to  Tim Gorman
October 14, 2023 6:27 pm

“You have cycle periods in the data as short as five years.”

IMHO, it is extremely curious to see that there are no one-year-period cycles obvious in the USCRN plots that AlanJ has posted, all of which appear to have monthly increments in successive data points (each connected by a straight line).

One would certainly expect such to be visible given the wide temperature swings between summer and winter, and the fact that all USCRN temperature monitoring stations are located in the northern hemisphere.

Somebody’s been messing with the raw data.

Reply to  ToldYouSo
October 15, 2023 5:42 am

If you look between 2013 and 2015 most of the temp change values are negative, meaning both summers and winters were colder than the baseline. The big question is what baseline the changes were measured against.

AlanJ
Reply to  ToldYouSo
October 15, 2023 8:11 pm

The data from USCRN are expressed as anomalies (deviation from mean climatology), that is why they do not contain a seasonal expression. There is no tampering with the raw data whatsoever. The CRN is a Reference Network by design.

Reply to  AlanJ
October 16, 2023 4:27 am

Horse hockey. CRN uses thermometers just like all other weather stations. Calculations are made on the data to obtain ΔT values.

Tell us what the measurement uncertainty is for CRN stations. Do you know where to find that information?

Have you figured out how to calculate the monthly Tavg measurement uncertainty yet? Tell us what steps you used.

AlanJ
Reply to  Jim Gorman
October 16, 2023 6:30 am

Horse hockey. CRN uses thermometers just like all other weather stations.

And instead of presenting the thermometer readings as absolute temperatures, which would show seasonal variation, the values are expressed as anomalies, which do not show seasonal variation. There is no tampering of the data, and the CRN network is unadjusted (because by design it contains no systematic biases).

Reply to  AlanJ
October 16, 2023 6:53 am

“because by design it contains no systematic biases”

Tell us where you learned that there are no systematic biases.

This statement illustrates your lack of knowledge about measurements.

Can green grass versus brown grass cause a systematic error?

Can a difference in latitude cause a systematic error when averaging?

Tell us what NOAA/NWS specifies as the accuracy for CRN stations.

Do you think the uncertainty value specified by NOAA/NWS includes systematic uncertainty?

AlanJ
Reply to  Jim Gorman
October 16, 2023 9:56 am

Tell us where you learned that there are no systematic biases.

That’s why the USCRN was built. It was designed to provide a reference network of stable, well sited, well maintained monitoring stations, free of site-specific biases or contamination, for long-term climate monitoring in the US. There are no instrumentation changes, no time-of-observation changes, no station moves, no urbanization, etc. at any of the sites in the network. Thus there is no need to adjust the station data when compiling a nationwide index.

Do you think the uncertainty value specified by NOAA/NWS includes systematic uncertainty?

In the full station network (ClimDiv), systematic bias is identified and removed prior to compiling the index. The fact that ClimDiv is almost identical to the reference network illustrates that the set of adjustments used to remove systematic error and bias are doing a good job:

comment image

Reply to  AlanJ
October 16, 2023 10:43 am

“Tell us what NOAA/NWS specifies as the accuracy for CRN stations.”

You didn’t bother answering my request! Why?

Here is another request.

Tell us how systematic uncertainty is identified and corrected.

There are many, many sites that inform me that systematic error is not amenable to statistical analysis because there is no variance. Perhaps you know better.

Reply to  AlanJ
October 16, 2023 11:19 am

That’s why the USCRN was built.”

That is *NOT* why CRN was built. CRN was built in an attempt to avoid things like UHI, changes in microclimate of the measuring device, and to use the most advanced measuring devices.

Hubbard and Lin analyzed those devices, e.g. PRT sensors, and found that their measurement uncertainty is *still* approaching +/- 0.5C because of calibration drift and microclimate variation. This was clear back around 2002 and climate science just ignored their findings – just as you are doing!

Microclimate changes *all* the time. Variation in mowing intervals for stations above grass can change the calibration of the instrument. Grass changing from green to brown or brown to green causes calibration changes. Insect infestations leaving detritus in the measuring device can impact air flow and cause calibration drift.

It’s not obvious that you even have a clue as to what systematic uncertainty is. And you have an MSc?

AlanJ
Reply to  Tim Gorman
October 16, 2023 1:44 pm

Hubbard and Lin analyzed those devices, e.g. PRT sensors, and found that their measurement uncertainty is *still* approaching +/- 0.5C because of calibration drift and microclimate variation. This was clear back around 2002 and climate science just ignored their findings – just as you are doing!

You’ll need to provide that citation, as this is the first time you’ve mentioned it, else you shouldn’t claim that I’ve ignored anything.

Reply to  AlanJ
October 16, 2023 4:31 pm

So you are commenting about things you have no knowledge of.

Who would’a thought that!

Reply to  AlanJ
October 17, 2023 4:09 am

You don’t even realize that you just proved my point for me. Climate science just totally ignores anything having to do with measurement uncertainty.

I’m not your research assistant either. Go search the internet for “Hubbard Lin prt uncertainty”. The document was actually published in 2004.

Here’s an excerpt: “The MMTS sensor and the HO-1088 sensor use the ratiometric method to eliminate voltage reference errors. However, the RSS errors in the MMTS sensor can reach 0.3–0.6°C under temperatures beyond −40°C to +40°C. Only under yearly replacement of the MMTS thermistor with the calibrated MMTS readout can errors be constrained within ±0.2°C under the temperature range from −40°C to +40°C. Because the MMTS is a calibration-free device (National Weather Service 1983), testing of one or a few fixed resistors for the MMTS is unable to guarantee the nonlinear temperature relations of the MMTS thermistor. For the HO-1088 sensor, the self-heating error is quite serious and can make temperature 0.5°C higher under 1 m/s airflow, which is slightly less than the actual normal ventilation rate in the ASOS shield (Lin et al. 2001a).”

You come on here and spout garbage about measurement uncertainty in climate science and you aren’t even knowledgeable about the literature having to do with the subject. That’s the problem with too many so-called “climate scientists”. They seem to be basically statisticians who have never in their life seen a +/- value following a data point. All they know is 100% accurate stated values for their data sets. It’s why they always use the SEM as the uncertainty of the average instead of the population standard deviation, which is SEM × √n.
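For reference, the two quantities at issue here are related by SEM = SD/√n, so recovering the spread of the underlying data from a reported SEM requires multiplying by √n. A quick numerical check on synthetic data:

```python
import numpy as np

# Synthetic sample: 10,000 draws from a distribution with SD = 2.
rng = np.random.default_rng(2)
x = rng.normal(10.0, 2.0, 10_000)

n = x.size
sd = x.std(ddof=0)        # spread of the individual values
sem = sd / np.sqrt(n)     # standard error of the sample mean

# SEM shrinks with n; the data's spread does not.
assert np.isclose(sem * np.sqrt(n), sd)
```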

AlanJ
Reply to  Tim Gorman
October 17, 2023 10:59 am

You don’t even realize that you just proved my point for me. Climate science just totally ignores anything having to do with measurement uncertainty.

He says glibly, citing a climate science paper dealing with measurement uncertainty.

Hubbard & Lin, 2005 are providing a recommendation for improving USCRN measurements. You seem to be under the impression that their recommendations were not followed. Can you prove this?

Reply to  AlanJ
October 17, 2023 12:11 pm

You never answered my question about the NOAA/NWS published uncertainty value. Why don’t you do so now?

With that uncertainty on each and every measurement, along with the experimental standard deviation of readings, how do you obtain averaged values much, much smaller than the uncertainty?

Do I need to list all the questions you have failed to answer?

Get with the program!

Reply to  AlanJ
October 17, 2023 1:35 pm

The paper was not about climate, global warming, climate change, or anything else. It was about the capabilities of the temperature measurement devices. Your reading comprehension is as bad as bellman’s. Is poor reading skills a requirement to be a CAGW advocate?

Hubbard & Lin, 2005 are providing a recommendation for improving USCRN measurements.”

EXACTLY! They made *NO* conclusions about climate, only the measuring devices!

“You seem to be under the impression that their recommendations were not followed. Can you prove this?”

You can’t prove a negative. Can *YOU* prove they were followed? That’s a positive that *is* subject to being proved!

Reply to  Jim Gorman
October 16, 2023 8:22 am

Horse hockey indeed! AlanJ’s sophomoric statement “The data from USCRN are expressed as anomalies (deviation from mean climatology), that is why they do not contain a seasonal expression.” is pure BS.

In almost all cases discussing anomalies, the derived “anomalies” are referenced to a single numerical value (whether such value be for an arbitrary reference number or for a single numerical average obtained from a dataset over a stated time/sample interval).

In fact, I am not aware of any scientific plot related to climatology that has “anomalies” referenced to a periodic (or semi-periodic) time-varying waveform, such as would be required to represent seasonal temperature variations in the northern hemisphere. Cycles in nature are just not so repeatable as to allow such.

AlanJ’s assertion below, in another response to you, that “. . .  the values are expressed as anomalies, which do not show seasonal variation.” is thus seen to be absurd.

AlanJ
Reply to  ToldYouSo
October 16, 2023 9:58 am

An anomaly is the deviation from a reference baseline. If monthly anomalies are used, the baseline is the mean monthly climatology over the reference period. This means there will not be seasonal expression in the anomalies (they are showing how different a particular season is from the seasonal mean during the reference period).
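A minimal sketch of the monthly-anomaly computation described above, using made-up data (not the USCRN record), shows how subtracting each calendar month's climatological mean removes the seasonal cycle:

```python
import numpy as np

# Toy data: 20 years of monthly temperatures with a strong annual cycle.
rng = np.random.default_rng(3)
months = np.arange(240)
seasonal = 12.0 * np.sin(2 * np.pi * months / 12.0)
temps = 10.0 + seasonal + rng.normal(0, 1.0, months.size)

# Baseline: the mean for each calendar month over a reference period
# (here, the first 10 years).
baseline = np.array([temps[m::12][:10].mean() for m in range(12)])

# Anomaly = observation minus that calendar month's climatological mean.
anoms = temps - baseline[months % 12]

# The annual cycle (amplitude ~12) is gone from the anomalies.
assert anoms.std() < 2.0
```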

Reply to  AlanJ
October 16, 2023 11:25 am

Yes. And when you have:

Var(X) = monthly variance,
Var(Y) = baseline variance,

and:

Var (X – Y) = Var(X) + Var(Y)

Pick any month, what is the value of the variance of this subtraction? What is the value of the variance for an annual average of monthly anomalies? Show your work.
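The variance identity invoked here, Var(X − Y) = Var(X) + Var(Y), holds for independent X and Y and can be checked numerically with synthetic data:

```python
import numpy as np

# Independent synthetic samples: Var(X) = 9, Var(Y) = 4.
rng = np.random.default_rng(4)
x = rng.normal(0, 3.0, 1_000_000)   # stand-in for monthly values
y = rng.normal(0, 2.0, 1_000_000)   # stand-in for baseline values

# For independent variables, variances add under subtraction (~13 here).
var_diff = (x - y).var()
assert np.isclose(var_diff, x.var() + y.var(), rtol=0.01)
```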

Reply to  AlanJ
October 16, 2023 11:33 am

Malarky! Ag science has found that growing season length has changed since 1998; analysis of degree-days shows an increase. That means that any reference period including years before 1998 is WRONG since it doesn’t include this change. An increase in the length of the growing season means that SEASONALITY impacts have changed, and baselines that don’t recognize this give anomalies that are WRONG.

If you are an example of climate scientists today it’s no wonder that climate science gets so much *wrong* today!

AlanJ
Reply to  Tim Gorman
October 16, 2023 2:45 pm

Ag science has found that growing season length has changed since 1998, analysis of degree-days show an increase. That means that any reference period including years before 1998 is WRONG since it doesn’t include this change. An increase in the length of the growing season means that SEASONALITY impacts have changed and baselines that don’t recognize this give anomalies that are WRONG.

The change in the growing season would be reflected in the magnitude of the anomaly relative to the baseline – if the growing season is getting longer because the climate is warming, then the monthly anomalies will be getting larger relative to a baseline established prior to the lengthening of the growing season. 

You seem to carry serious misunderstandings of the various concepts you’re trying to cobble together in defense of your kooky ideas about statistics.

Reply to  AlanJ
October 16, 2023 3:54 pm

Do you even know what determines GDD?

Reply to  AlanJ
October 17, 2023 4:45 am

The change in the growing season would be reflected in the magnitude of the anomaly relative to the baseline”

If the baseline is wrong then the anomaly is wrong as well! Logic isn’t your strong suit is it?

“if the growing season is getting longer because the climate is warming,”

How do you know what is warming? Tell us EXACTLY what causes the growing season to get longer! Ag science knows. Do you? My guess is that you are no more knowledgeable on this subject than on measurement uncertainty of temperature measuring devices.

You seem to carry serious misunderstandings of the various concepts you’re trying to cobble together in defense of your kooky ideas about statistics.”

Except I *know* that growing seasons are expanding – while you don’t. I know *why* they are expanding – while you don’t.

So tell me again who has serious misunderstandings?

Reply to  ToldYouSo
October 16, 2023 11:36 am

This whole discussion is becoming absurd. AlanJ has no obvious experience with making physical measurements with uncertainty. Like every statistician, he treats all the numbers as exact: no uncertainty, and everything always has a normal distribution.

Jim Masterson
Reply to  Jim Gorman
October 16, 2023 2:37 pm

“This whole discussion is becoming absurd.”

I discovered Mr. J’s being absurd when he increased precision by averaging zero and one.

Reply to  Jim Masterson
October 17, 2023 4:46 am

bellman’s no better. He thinks the same thing.

Reply to  Jim Gorman
October 16, 2023 5:11 pm

It’s worse than just the measurement uncertainty, which you are right to question.

Despite AlanJ’s claim that “There is no tampering with the raw [USCRN] data whatsoever”, he clearly doesn’t know what he’s taking about.

Here is what NOAA itself admits regarding USCRN temperature measurement data under “IMPORTANT NOTES”: (ref: https://www.ncei.noaa.gov/pub/data/uscrn/products/monthly01/readme.txt ):
“I. On 2013-01-07 at 1500 UTC, USCRN began reporting corrected surface temperature measurements for some stations. These changes impact previous users of the data because the corrected values differ from uncorrected values. To distinguish between uncorrected (raw) and corrected surface temperature measurements, a surface temperature type field was added to the monthly01 product. The possible values of the this field are “R” to denote raw surface  temperature measurements, “C” to denote corrected surface temperature measurements, and “U” for unknown/missing.” 
(my bold emphasis added)

The truth is out there.

Reply to  ToldYouSo
October 17, 2023 5:40 am

Thanks for the notice, I had not seen that!

Reply to  AlanJ
October 12, 2023 2:02 pm

provided there is an underlying trend.”

Which THERE ISN’T !

The only reason there is a positive mathematical trend is the El Nino bulge.

Very sad that you are incapable of seeing that… shows that you are deliberately blinding yourself to reality.

Why do that ?

What drives you to support a cabal of fraudsters that want to bring down western society ??

AlanJ
Reply to  bnice2000
October 12, 2023 3:15 pm

So there isn’t a positive trend but there is but it’s caused by El Niño? Do you ever think before you type? It might be a cool new thing you could start doing.

Reply to  AlanJ
October 12, 2023 3:39 pm

Thanks for agreeing that the slight meaningless positive trend is there ONLY BECAUSE OF EL NINOs.

Backing up everything I have said.

Do you ever think… at all !

Reply to  bnice2000
October 13, 2023 9:47 am

He is blind to the step-warming sections caused by El Niños, which is why his charting methods are misleading and dishonest.

Reply to  Sunsettommy
October 13, 2023 7:01 pm

Just basically ignorant of anything real.

As I said, a monkey with a ruler can be taught to draw a straight line.

Reply to  AlanJ
October 12, 2023 4:04 pm

A graph is supposed to educate about the underlying source data – all your graph proves is that you have a basic knowledge of statistics. Don’t be an effing idjit and actually look at the way the temperatures are behaving, don’t just find a clever way to draw a straight line. Have you looked at the source data? Have you tried different approaches, curve-fitting or scatter plots as the best way to present the data? Or have you just stuck some numbers into a program and now you want a banana?

Phil R
Reply to  Richard Page
October 12, 2023 6:07 pm

It doesn’t even prove that. All it proves is that he can cherry pick data and draw a line through it and convince himself that it means something.

Reply to  AlanJ
October 12, 2023 1:11 pm

This is what you get when you try to put a linear fit on a cyclical time series. You should do a residuals-vs-fit plot. My guess is that you will find a distinct oscillation in the graph. BTW, the standard error of the slope is meaningless unless the data is truly linear.

You have distinct positive peaks in 2006, 2012, 2017, and 2022. Somewhere around a 5–6 year cycle. You have distinct negative peaks in 2010, 2015, 2019, and maybe in 2023. Again, about a five-year cycle.

The positive peaks appear to be on a downward trend since 2012 while the negative peaks have an upward trend since 2010.

Will the two continue to get closer together? Exactly what would that mean for the earth? To me it would mean longer growing seasons, more food, and a nicer climate!

AlanJ
Reply to  Tim Gorman
October 12, 2023 1:21 pm

You should do a residuals vs fit plot. My guess is that you will find a distinct oscillation in the graph. 

This is just saying that if I remove the long-term warming trend, I am left with an oscillating series, which I think is exactly the case. Here is a plot of the residuals:

comment image

I’ve fit a 2nd order polynomial curve to the series for illustration. Basically, the US is warming, and over the warming trend are superimposed short-term oscillations + random variability.

Reply to  AlanJ
October 12, 2023 1:48 pm

Where is the rest of the sine wave? What happened before your plot starts?

Why didn’t you address the fact that the maximums are trending down and the minimums are trending up? The cycles are *NOT* just short term oscillations or random variability, they are an indicator of what is happening. You are just trying to use hand waving to dismiss what the data is telling you.

What you see should tell you that a linear regression is *NOT* a good way to characterize a cyclical process such as this one.

AlanJ
Reply to  Tim Gorman
October 12, 2023 2:00 pm

A cycle is by definition not random – it is cyclic. There are many modes of internal variability within the earth’s climate that operate in cyclical modes, such as ENSO and the PDO. This variability is not random, but it does not produce a long term trend in the system, because it represents a movement of heat from one place to another, not a change in the planet’s energy balance.

What you see should tell you that a linear regression is *NOT* a good way to characterize a cyclical process such as this one.

It’s not a good way to characterize the cycles, but it is a good way to characterize the underlying trend, which is the thing we care about for long term climate change.

Thomas
Reply to  AlanJ
October 12, 2023 2:59 pm

During an El Niño, heat that had been trapped in the Pacific warm pool is released to the atmosphere, which causes the atmosphere to warm until the heat radiates to deep space. The ocean/atmosphere system cooled, but the atmosphere warmed as part of that cooling process. The average temperature of the atmosphere 2 meters above the ground tells you nothing about the heat content of the system. Temperature doesn’t even measure the heat content of atmospheric air—you need enthalpy. Recent warming is mostly due to reduced cloud cover, which might be due to reduced particulate pollution, or long-term ocean(s) oscillation(s) or even solar activity. Almost certainly not CO2.

Reply to  AlanJ
October 12, 2023 3:09 pm

but it does not produce a long term trend in the system, because it represents a movement of heat from one place to another, not a change in the planet’s energy balance.”

How do you *KNOW* it doesn’t produce a long term trend? Glacial and inter-glacial periods are certainly long term! And it is cycles and how they interact in the system that produce those periods!

“It’s not a good way to characterize the cycles, but it is a god way to characterize the underling trend, which is the thing we care about for long term climate change.”

ONE MORE TIME – linear regression lines are *NOT* a good way to characterize cyclical time series. Your own analysis of the data should be telling you this. A cyclical time series can have almost *any* trend depending on where you pick the start and end dates! A sinusoidal signal will have both uptrends and downtrends!

There is a *reason* why you need to use a residual vs fit graph. If the graph shows a sinusoid around the horizontal line then a linear regression is *NOT* a good fit to the data! As you said earlier, some kind of sinusoidal curve needs to be fit to the data, probably from a multi-element equation!

YOU STILL HAVEN’T EXPLAINED THE DOWN TREND IN MAX AND THE UP TREND IN MIN SINCE 2010!

All you did was use the argumentative fallacy of Argument by Dismissal. You didn’t actually explain the trends, you just dismissed them as “short term oscillations” (what the hell is that anyway?) or as “natural variation”. NATURAL VARIATION *CAN* BE CYCLICAL!

You are just one more trendologist trying desperately to defend the use of linear regression for *everything*.
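The point about start and end dates can be illustrated with a toy example: a pure sine has no long-term trend at all, yet a linear fit over part of one cycle reports a "trend" whose sign depends entirely on the window chosen.

```python
import numpy as np

# One full 10-"year" cycle of a trendless sine wave.
t = np.linspace(0, 10, 1000)
signal = np.sin(2 * np.pi * t / 10.0)

rising = slice(0, 250)     # first quarter-cycle: sine is increasing
falling = slice(250, 750)  # middle half-cycle: sine is decreasing

slope_up, _ = np.polyfit(t[rising], signal[rising], 1)
slope_dn, _ = np.polyfit(t[falling], signal[falling], 1)

# Opposite "trends" from the same trendless series.
assert slope_up > 0 and slope_dn < 0
```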

AlanJ
Reply to  Tim Gorman
October 12, 2023 3:25 pm

How do you *KNOW* it doesn’t produce a long term trend?

Because we understand the underlying mechanism for these modes of variability, and they are not forced.

Glacial and inter-glacial periods are certainly long term!

Those cycles are forced.

ONE MORE TIME – linear regression lines are *NOT* a good way to characterize cyclical time series. Your own analysis of the data should be telling you this. A cyclical time series can have almost *any* trend depending on where you pick the start and end dates! A sinusoidal signal will have both uptrends and downtrends!

It’s a good way to describe the trend. That’s what we care about, we don’t really care about the particular characteristics of quasi-cyclic modes of internal variability (I mean, we do, but mainly we are trying to determine if the region is warming over time). I agree that you can’t tell from the trend line whether or not you are in the middle of a large-scale cycle, no one ever suggested otherwise. That doesn’t mean the trend line does not describe the range of observations made. We first need to observe the change, then we start working on the problem of what is causing it.

YOU STILL HAVEN’T EXPLAINED THE DOWN TREND IN MAX AND THE UP TREND IN MIN SINCE 2010!

I addressed it explicitly above. Both min and max show positive trends since 2010 in USCRN.

Reply to  AlanJ
October 12, 2023 3:55 pm

It’s a good way to describe the trend.”

WRONG. If you don’t look at what is happening physically…

… and the underlying cause of any trend is deliberately ignored…

… it is a sloppy, un-scientific methodology that can lead to an erroneous unthinking, lack of understanding.

As you keep showing us.

Reply to  AlanJ
October 12, 2023 4:19 pm

“but mainly we are trying to determine if the region is warming over time”

Nobody is denying some warming is occurring. We would still be at Little Ice Age temperatures if it wasn’t warming. The real question is attributing the correct factors to natural variation and human caused variation.

But, as I have said previously, saying warming is occurring is all your trend will allow. Attempting to assign a cause will never be justified by simple linear regression. There are all kinds of time series analysis you can do while mixing multiple variables like clouds, CO2, etc. Linear regression is not one of those that allow attribution without a formal functional relationship.



AlanJ
Reply to  Jim Gorman
October 13, 2023 5:14 am

But, as I have said previously, saying warming is occurring is all your trend will allow.

I’ve never said anything different. In fact, elsewhere in the thread I rather explicitly said:

“I think what you’re trying to steer us towards is the notion that a regression is not predictive, and I agree, you have to have underlying physical theory to anticipate how the trend is going to evolve.”

No one is trying to establish attribution. All of this arguing and gnashing of teeth and you guys don’t even disagree with me. You’re just being contrarians… because.

Nobody is denying some warming is occurring. 

I expect you’ll go let all your peers know they can stop arguing with me now.

Reply to  AlanJ
October 13, 2023 6:08 am

I expect you’ll go let all your peers know they can stop arguing with me now.

The problem is your interpretation. You define the trend as the warming caused by human intervention, with a naturally caused “noise” factor. That is simply not a provable interpretation.

The “trend” you are using has no ability to accomplish the deciphering of any physical functional relationship between the individual variables that make up “climate”.

Therefore, your trend is nothing more than a curve fitting exercise. A linear regression of time versus a dependent variable which is not related to time is a simpleton’s attempt to show what will happen in the future. It is not a scientific endeavor to determine the cause of what is occurring.

Look up piecewise function treatment. It is what people are trying to tell you is required for analyzing temperature changes. Pauses are great indicators for realizing that a separate functional relationship is required to describe what is happening. Periods of anomalous growth (+ or -) are other periods that must be separately analyzed.

Just like you, many, many climate scientists are trying to prove that CO2 is a driver of a simple time series trend of temperature because of the “trend”. I have never seen a climate paper where a scientist has broken up the temperature changes into coherent periods and attempted to analyze why that period happened.

AlanJ
Reply to  Jim Gorman
October 13, 2023 6:34 am

The problem is your interpretation. You define the trend as the warming caused by human intervention, with a naturally caused “noise” factor. That is simply not a provable interpretation. 

I’ve never once defined it as such, and I challenge you to point to a single comment in this thread where I even inadvertently hinted at such a thing.

The trend is the trend. We can use regression to determine the rate of warming. If we want to understand what is driving the warming, we need to apply physical theory. No scientist, ever, has suggested that we need only examine temperature time series to establish the cause of the warming.

Reply to  AlanJ
October 12, 2023 3:14 pm

“characterize the underlying trend”

Which is nonexistent.

You have proven that by your continued use of El Ninos to create a trend.

Reply to  AlanJ
October 12, 2023 2:11 pm

Thanks for showing ABSOLUTELY that the linear trend comes from the El Nino

Game set and match.. No human induced warming

AnalJ.. batting ZERO. !

And so dumb he doesn’t realise he has backed up exactly what I was saying.

Reply to  AlanJ
October 12, 2023 2:12 pm

Even shows the COOLING since the 2015 El Nino

Well done AnalJ.. you have finally figured it out ! 🙂

Reply to  AlanJ
October 12, 2023 2:15 pm

In your efforts to prove your cleverness.

You have TOTALLY BECLOWNED yourself.

HILARIOUS. !

AlanJ
Reply to  AlanJ
October 12, 2023 2:21 pm

I think this plot of the residuals is incorrect; it should look more like this:

comment image

You can more easily see the cyclic oscillations present in the series.

Reply to  AlanJ
October 12, 2023 3:11 pm

And linear regression is *not* a good characterization of a cyclical time series!

Reply to  AlanJ
October 12, 2023 2:35 pm

Recently there have been lower highs and lower lows. The recent trend is downward. The last solar cycle was very weak and this one isn’t much either.

Reply to  AlanJ
October 12, 2023 4:29 pm

 “I’ve fit a 2nd order polynomial curve” 

ERROR !!!

Reply to  bnice2000
October 13, 2023 7:04 pm

…. why the red thumbs?

Are you so DUMB that you can’t see it is not a 2nd order polynomial !

Obviously you are mathematical morons.

Reply to  bnice2000
October 14, 2023 5:32 am

The fact that this data is sinusoidal should indicate that you simply can’t identify a non-sinusoidal fit that explains it.

The *FIRST* thing that should be done with this data is a Fourier or Wavelet analysis to determine the actual components that exist. Then you have to determine the coefficients for each component in order to lay out why the data has increases and decreases over time. It should be intuitively obvious that there is a longer term sinusoid that is modulating the shorter cycles.

It’s why linear regression of cyclical processes can be so misleading – the trend line depends totally on where in the cycles the beginning and end points are.

AlanJ says: “I have an MSc in physical sciences, mostly focused on the computational analysis of large datasets.”

You would think that his training should have included analysis of cyclical processes.

(p.s. I’ve been busy working on customer projects so I’ve missed a lot of this thread. Keep up the good work!)
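The Fourier-analysis first step suggested above can be sketched with synthetic data: a toy monthly series with a known 5-year cycle, from which an FFT recovers the dominant period.

```python
import numpy as np

# Toy series: 50 years of monthly samples with a 5-year cycle plus noise.
rng = np.random.default_rng(5)
t = np.arange(600) / 12.0
series = np.sin(2 * np.pi * t / 5.0) + rng.normal(0, 0.3, t.size)

# Remove the mean so the zero-frequency bin doesn't dominate, then FFT.
spectrum = np.abs(np.fft.rfft(series - series.mean()))
freqs = np.fft.rfftfreq(t.size, d=1.0 / 12.0)   # cycles per year

# The strongest spectral peak sits at ~0.2 cycles/year, i.e. a 5-year period.
peak = freqs[spectrum.argmax()]
assert np.isclose(1.0 / peak, 5.0, atol=0.3)
```

With the dominant periods identified this way, one can then fit component amplitudes rather than forcing a straight line through the whole record.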

AlanJ
Reply to  Tim Gorman
October 14, 2023 10:30 am

The fact that this data is sinusoidal should indicate that you simply can’t identify a non-sinusoidal data match that explains it. 

You can’t explain the sinusoidal pattern with a linear fit, but you can explain the long-term behavior of the series with a linear fit if the series exhibits a linear change over time, which the USCRN series does.

I agree that you can’t tell if you’re in the middle of some very large oscillation by using a linear fit, because no form of curve-fitting will give you information about what is going to happen beyond the bounds of your dataset without physical theory underpinning it. All we are trying to do is describe the behavior of the data that we see, and the USCRN is exhibiting a statistically robust increase over time.

Reply to  AlanJ
October 14, 2023 1:45 pm

“you can explain the long-term behavior of the series with a linear fit”

No you can’t. You are a trendologist with no knowledge, regardless of your education.

First, the trend is not the series. How many times do you need to be told that? Your trend has time as the independent variable. Time has nothing to do with temperature. It explains nothing.

The temperatures are not noise. That insinuates that temperature is not what is changing but something else that is related to time. It is trying to show temperatures are simply extraneous information. That is stupid. You need to get your act together.

AlanJ
Reply to  Jim Gorman
October 14, 2023 2:12 pm

The temperatures are not noise. That insinuates that temperature is not what is changing but something else that is related to time. It is trying to show temperatures are simply extraneous information. That is stupid. You need to get your act together.

By modeling temperature as the dependent variable and time as the independent variable, we aren’t supposing that there is a causal relationship between time and temperature. What we are doing is saying that our statistical model is a predictor of temperature values for given time values (within the bounds of the time series – we shouldn’t assume that our model can extrapolate successfully without underlying physical theory).

And yes, by fitting more and more parameters, we will find that we can capture more and more observations in our statistical model. But we are not necessarily increasing the predictive power of the model by doing this. For our purposes here, in establishing how temperatures are tending to evolve over the USCRN time span, a linear model is perfectly sufficient. We know there is a lot of up and down year over year variability, but we care about the decadal+ trends, not those annual ups and downs.

This is also the case for quasi-cyclic patterns such as ENSO – we know those are in the series, but we know that they aren’t contributing to the long-term behavior of the series (because we have physical theory), so we aren’t worried about capturing that cyclic behavior (we do need to worry about that behavior if we are looking at too short a timespan, however).

Reply to  AlanJ
October 14, 2023 3:23 pm

“By modeling temperature as the dependent variable and time as the independent variable, we aren’t supposing that there is a causal relationship between time and temperature. What we are doing is saying that our statistical model is a predictor of temperature values for given time values (within the bounds of the time series – we shouldn’t assume that our model can extrapolate successfully without underlying physical theory).”

Word salad.

You are modeling nothing. If there isn’t a causal relationship between the independent and dependent variables, you are only doing a basic curve fitting exercise that means nothing!

Quit trying to sound like a modeling expert!

AlanJ
Reply to  Jim Gorman
October 14, 2023 4:14 pm

You are modeling nothing.

You’re modeling how temperature is changing over time. That doesn’t mean you’re assuming that time is causing temperature to change. If I observe life expectancy over time, that doesn’t mean I think time is the thing driving changes in life expectancy, but it sure is helpful to know if it’s going up or down. Once I know that, I can start exploring the underlying drivers of the observed trends.

Reply to  AlanJ
October 15, 2023 4:10 am

If time doesn’t determine temperature then you are “modeling” nothing. You are merely curve matching a set of data points. “Modeling” requires some knowledge of the underlying physical processes, curve matching doesn’t. “Modeling” can give you some information about why things happen as they do, curve matching cannot.

“it sure is helpful to know if it’s going up or down”

In order to tell what is going to happen to life expectancy you must know what determines life expectancy. Curve matching life expectancy against time can’t tell you that. If you don’t know what determines life expectancy then knowing whether it is going up or down is mostly just useless information because you can’t do anything about it!

You can start exploring the underlying drivers of life expectancy without knowing whether it is going up or down at any point in time. I don’t need to know whether corn harvests are going up or down today in order to explore what drives the growth of corn kernels.

AlanJ
Reply to  Tim Gorman
October 15, 2023 7:52 pm

If time doesn’t determine temperature then you are “modeling” nothing. You are merely curve matching a set of data points. “Modeling” requires some knowledge of the underlying physical processes, curve matching doesn’t. “Modeling” can give you some information about why things happen as they do, curve matching cannot.

You are modeling the behavior of the observations. You are not attempting to say anything about why the observations are behaving as they are, only describing the behavior. I think we agree on this point, you’re saying the same thing as me in different words.

If you don’t know what determines life expectancy then knowing whether it is going up or down is mostly just useless information because you can’t do anything about it!

On the other hand, if I don’t observe how life expectancy is changing, I don’t know that there’s anything to do anything about in the first place. That’s why we make observations, then apply theory to understand what is driving the observations. Again, I don’t think we actually disagree here, I think you’re being a contrarian.

Reply to  AlanJ
October 16, 2023 4:49 am

“You are modeling the behavior of the observations. You are not attempting to say anything about why the observations are behaving as they are, only describing the behavior.”

You *HAVE* to be kidding, right? If the curve matching doesn’t say anything about why the observations are behaving as they are then what is the use of the curve matching?

The way modeling *should* be done is to create a hypothesis about how the observations should act and then compare the model to the observations to see if the model works or not. First you create a curve using a model and then you see if the observed curve is the same.

If you use the observations as a spur for generating a model then you *are* using the curve matching to say something about why the observations are behaving as they are.

Face it, you are trying to avoid attaching any importance to the observations. That is *NOT* the same thing as saying the observations you have don’t tell the whole story.

AlanJ
Reply to  Tim Gorman
October 16, 2023 6:36 am

The curve matching is how we make an observation about the behavior of the measurements. Otherwise we are just eyeballing the dataset. Observation is typically the beginning of scientific investigation – you have to observe a phenomenon you want to try to develop a theory to explain. If we didn’t see the climate changing, we wouldn’t know there was anything to understand about it.

Why would we develop a theory about the causes of the ice ages, for instance, if we never observed evidence of ice ages?

Reply to  AlanJ
October 16, 2023 7:08 am

Who are “we”?

You and the Queen?

Reply to  AlanJ
October 16, 2023 7:17 am

First, stop using the term climate, it is a misnomer designed by propagandists to elicit a feral response. Temperature change is what you are discussing.

Secondly, anomalies are not temperatures. They are at best, a ΔT, a differential at a location. Averaging differentials of various functions is perilous.

Lastly, temperature changes do not result in climate changes. Climate is much more than temperature.

Reply to  AlanJ
October 16, 2023 5:54 am

“Again, I don’t think we actually disagree here, I think you’re being a contrarian.”

No, you are the one saying that an increase in Tavg indicates a hotter world with more people dying from higher temperatures. You are doing so because your trend of Tavg is increasing. Yet, you have no idea what Tmax, Tmin, or both are doing.

Are you now dismissing your original statements about heat related deaths?

AlanJ
Reply to  Jim Gorman
October 16, 2023 6:46 am

We often look at the Tavg because it is a convenient metric for tracking global climate change, but Tmin and Tmax are being tracked, and the datasets are all available, facilitating studies of exactly the sort you suggest here (e.g. is daytime or nighttime warming faster, and where).

The study describing projected increases in heat related deaths is a modeling study – i.e. the authors analyze climate model projections and observe regions of the world which exhibit an increase in conditions conducive to heat stress. They aren’t simply saying, “the world has been warming ergo more heat stroke will occur.” That’s your own misunderstanding.

Reply to  AlanJ
October 16, 2023 7:35 am

No misunderstanding by me. Do I need to show all the headlines from all over the world where everywhere is warming faster than the global anomaly?

Where do you think this misconception comes from?

BTW, I haven’t seen an answer from you describing the experimental standard deviation for Tavg yet, nor what that means for uncertainty.

AlanJ
Reply to  Jim Gorman
October 16, 2023 10:36 am

It isn’t a misconception – land surfaces are warming faster than the ocean surface, thus you can trivially point to almost any country on earth and truthfully say it is warming faster than the global mean – doubly so if it is a region in the Arctic.

Reply to  AlanJ
October 16, 2023 11:16 am

You are dancing!

The oceans are boiling everywhere due to the heat.

The coral reefs are bleaching due to the heat in the oceans.

The polar ice caps are melting due to the heat.

Record rain is occurring everywhere due to the heat.

Severe storms are increasing everywhere due to the heat.

Glaciers are melting everywhere due to the heat.

These are happening everywhere according to you and your warmist friends. Are you now denying this?

Reply to  AlanJ
October 14, 2023 3:56 pm

You don’t *KNOW* if the sinusoidal signal is exhibiting a linear change over time. Modulate a 10 MHz signal with a 1 kHz signal using a non-linear element. You’ll get a whole plethora of sinusoidal signals as a result. The envelope of the resultant varies from positive to negative and back again. If you look at one 10 MHz cycle, depending on where you look, you can get a positive trend or a negative trend. But you’ll totally miss the ups and downs in the 10 MHz signal resulting from the 1 kHz signal.

Describing the data you *see* based on curve fitting a subset of the data is a fool’s errand. If you don’t have all the data no amount of “curve fitting” is going to tell you what is going on.
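The sideband effect described above can be sketched at audio-range frequencies (a 10 kHz carrier and 1 kHz modulation standing in for the 10 MHz / 1 kHz example; simple amplitude modulation assumed, all names illustrative):

```python
import numpy as np

fs = 100_000                   # sample rate, Hz
fc, fm = 10_000, 1_000         # carrier and modulating frequencies, Hz
t = np.arange(10_000) / fs     # 0.1 s = whole periods of both tones (no leakage)

# Amplitude-modulate the carrier with the low-frequency tone.
am = (1 + 0.5 * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)

spec = np.abs(np.fft.rfft(am))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Energy appears at the carrier AND at the sidebands fc - fm and fc + fm.
peaks = freqs[spec > 0.1 * spec.max()]
```

The spectrum shows three lines, near 9, 10 and 11 kHz: more sinusoids came out than went in, which is the point about modulation products.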

AlanJ
Reply to  Tim Gorman
October 14, 2023 4:16 pm

You don’t *KNOW* if the sinusoidal signal is exhibiting a linear change over time

Of course you do. You know that over the time you’ve observed it, it has changed by y amount.

If you don’t have all the data no amount of “curve fitting” is going to tell you what is going on.

Well we can’t have all of the data unless we start at the beginning of time and stop when time ceases. But we can observe what is going on during the time that we have made observations.

Reply to  AlanJ
October 15, 2023 4:18 am

“Of course you do. You know that over the time you’ve observed it, it has changed by y amount.”

You know what happened in the time interval you looked at. So what? From that you can’t even determine if the signal is sinusoidal or not. If you look at the signal from pi/16 to pi/8 it will look like the signal has a linear growth rate.

What you seem to be saying is that you can describe a horse merely by measuring the size of its hoofprint. You *do* recognize the absurdity of that don’t you?

“Well we can’t have all of the data unless we start at the beginning of time and stop when time ceases.”

Have you *ever* bothered to study argumentative fallacies? Do you have even the slightest clue as to which argumentative fallacy this is?

AlanJ
Reply to  Tim Gorman
October 15, 2023 7:56 pm

What you seem to be saying is that you can describe a horse merely by measuring the size of its hoofprint. You *do* recognize the absurdity of that don’t you?

Not at all. I’m saying that I need to observe a hoofprint to know that there’s a horse nearby that I should look out for.

Reply to  AlanJ
October 16, 2023 4:53 am

Horse Feathers! You don’t need a hoofprint to know horses are around. Road apples will suffice. Smell will suffice. Sound will suffice. Sight will suffice.

This is just one more argumentative fallacy from you, the one known as a Red Herring. The issue isn’t “knowing” if there are horses around. The issue is describing the horse that left the footprint from only the hoofprint!

AlanJ
Reply to  Tim Gorman
October 16, 2023 6:49 am

The beginning of our investigation will be establishing that there is a horse nearby – either from observations of hoof prints (global temperature), horse droppings (say, declining glaciers), or smell (sea level rise), then we can start trying to determine other aspects of the horse we now know is nearby. We can start asking questions like, “why is the horse here? Where is it going? What kind of horse is it?” But we have to first observe evidence of a horse.

Reply to  AlanJ
October 12, 2023 1:30 pm

The Earth is in a 2.56 million-year ice age. Warming is good.

This recent study shows that the cold weather we have every year causes about 4.6 million deaths per year globally mainly through increased strokes and heart attacks, compared with about 500,000 deaths a year from hot weather. We can’t easily protect our lungs from the cold air in the winter and that causes our blood vessels to constrict causing blood pressure to increase leading to heart attacks and strokes.
‘Global, regional and national burden of mortality associated with nonoptimal ambient temperatures from 2000 to 2019: a three-stage modelling study’
https://www.thelancet.com/journals/lanplh/article/PIIS2542-5196(21)00081-4/fulltext

JC
Reply to  AlanJ
October 12, 2023 1:47 pm

OH NO! A nearly flat 20 year trend. We’re all doomed! We must immediately reconstruct the world’s democracies, control their markets and shame them all. Then make them all pay extra for being alive. We must convince them that life in the world in not worth raising children. If it ain’t, it’s because of these numbskulls.

Rick C
Reply to  AlanJ
October 12, 2023 2:51 pm

Please look up the correlation coefficient for your data and report back. Hint: if it’s below 0.70 the trend is not significant.

Rick C
Reply to  Rick C
October 12, 2023 3:51 pm

Never mind, I did it for you. The R-Squared is 0.0223 and the correlation coefficient (R) is 0.15. These are values that would be in the range you’d expect from a set of randomly generated numbers. That means that the small trend indicated by regression is insignificant and likely a result of random variability.

You should take your statistical data analysis skills to Las Vegas and play roulette. The casinos will be delighted to see you.

Reply to  Rick C
October 12, 2023 8:43 pm

The trend comes from having the El Nino bulge effect of the 2015/16 and 2020 El Ninos in the latter half of the data.

Climate loony-zealots always have to rely on El Ninos…

… this shows that they know there is no human causation.

AlanJ
Reply to  Rick C
October 13, 2023 5:28 am

The r-squared value does not indicate statistical significance, it indicates how well the model captures the variance in the data (i.e. how close the points are to the line of best fit). You can have a low r-squared even when the data exhibit a strongly linear trend if the variance is high.

A statistical significance test is the appropriate way to determine whether a given result might be produced by random noise (p-value). In the case of USCRN, the p-value is 0.025, indicating that the trend is not likely to result from a series consisting of nothing but noise, despite the fact that the series exhibits high variability (low r-squared).

To illustrate, here are two made-up time series containing the same underlying trend, one with high noise and the other with low noise:

comment image

comment image

Both series exhibit a trend that is statistically significant at the 95% level, but one has a high r-squared value and the other a low r-squared value. You would not be able to tell from the r-squared alone that both series exhibit a trend.
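A minimal sketch of this distinction (synthetic series, not USCRN; significance is judged here via the slope’s t-statistic, a stand-in for a full p-value calculation, and all names are illustrative):

```python
import numpy as np

def fit_stats(x, y):
    """OLS slope, r-squared, and the slope's t-statistic."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    r2 = 1 - resid.var() / y.var()
    t = slope / np.sqrt(resid.var(ddof=2) / (x.size * x.var()))
    return slope, r2, t

rng = np.random.default_rng(42)
x = np.arange(120.0)                       # e.g. 120 monthly steps
signal = 0.1 * x                           # identical underlying trend in both

quiet = signal + rng.normal(0, 0.5, 120)   # low noise  -> high r-squared
noisy = signal + rng.normal(0, 5.0, 120)   # high noise -> low r-squared

_, r2_q, t_q = fit_stats(x, quiet)
_, r2_n, t_n = fit_stats(x, noisy)
# Both slopes are significant (t well above ~2 for n = 120),
# but only the quiet series has a high r-squared.
```

This is the same trend buried under two different noise levels: the r-squared values differ wildly while the significance of the slope survives in both.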

Reply to  AlanJ
October 13, 2023 7:16 am

You appear to be a basically trained statistics guy. Have you ever had any 300-500 level physical science classes or business management? Something where multivariate analysis of various cyclical components was needed?

AlanJ
Reply to  Jim Gorman
October 13, 2023 8:42 am

I have an MSc in physical sciences, mostly focused on the computational analysis of large datasets. Lots of multivariate analysis including regression and PCA.

Reply to  AlanJ
October 14, 2023 5:18 am

If you know multivariate analysis, then why are you using a simple linear regression as a tool to forecast that CO2 will kill us all with higher temperatures?

Before you deny that you are blaming CO2, please tell us what you are blaming for the increase.

You obviously haven’t figured out yet that a linear regression based on time as the independent variable is doomed to failure as time is not related to temperature. I’m glad you aren’t in control of forecasting business related requirements.

AlanJ
Reply to  Jim Gorman
October 14, 2023 10:21 am

If you know multivariate analysis, then why are you using a simple linear regression as a tool to forecast that CO2 will kill us all with higher temperatures? 

I’m not forecasting, nor am I making any kind of attribution of the observed trend. You’re doing that on my behalf, and then arguing with yourself about the words you put in my mouth.

Reply to  AlanJ
October 14, 2023 2:09 pm

Then tell everyone what good your trend is! If it’s not for forecasting nor for showing CO2-caused temperature growth versus time, what exactly is it good for?

AlanJ
Reply to  Jim Gorman
October 14, 2023 2:29 pm

To know how the mean monthly temperature trended between 2005 and September 2023 in the continental US.

There is this persistent problem in WUWT threads where people are making assumptions about what I’m saying and then arguing about the assumptions they’ve made instead of the things I’ve actually said.

Reply to  AlanJ
October 14, 2023 2:34 pm

You realize the insane person will say it is not them, it is everyone else that is insane.

Rick C
Reply to  AlanJ
October 13, 2023 10:01 pm

Do a scatter plot of the USCRN data. See if you can spot the trend. Looks like the proverbial shotgun pattern to me. The argument is that A (increasing CO2) causes B (temperature increase). But the trend in A is quite clear, very nearly linear and has a very high correlation coefficient vs time. B on the other hand shows a small weak trend with a very minimal correlation vs time. The obvious conclusion is that the A causes B hypothesis is unsupported by the data, i.e. falsified.

MarkW
Reply to  AlanJ
October 12, 2023 3:22 pm

Whenever someone picks a small piece of a much larger dataset to prove a point, the odds are the larger data set refutes his point.

AlanJ
Reply to  MarkW
October 12, 2023 3:27 pm

I only wish you had been singing this song to our dear Lord Monckton during his latest Pauses era.

Reply to  AlanJ
October 12, 2023 3:43 pm

ROFLMAO..

Except he is NOT picking a portion of the data.

The mathematical calculation is what does the picking.

Your lack of understanding is really quite bizarre for someone that likes to pretend he is clever…

AlanJ
Reply to  bnice2000
October 13, 2023 8:43 am

Oh, right, it’s not cherry picking because he’s using math to select the ripest cherries for him.

Reply to  AlanJ
October 13, 2023 7:05 pm

So you admit you are IGNORANT of the method.

Of Course. !

Phil.
Reply to  bnice2000
October 14, 2023 6:41 pm

Cherry picking is exactly what he does, he chooses the portion of the data which shows the longest decline to the present.

Reply to  AlanJ
October 12, 2023 3:48 pm

Even with the latest El Nino spike, there is still a non-negative trend back to June 2015 in the UAH TLT data.

Until this El Nino spike, there was RAPID cooling, even with the 2020 El Nino effect.

Reply to  bnice2000
October 12, 2023 8:40 pm

typo…. “non-positive”

Reply to  AlanJ
October 12, 2023 4:24 pm

You just showed how little you know about time series analysis. Lord Monckton’s purpose is to show that even though CO2 has been on a continuous increase, temperature has not. That pretty much rules out CO2 as THE control knob of temperature.

Don’t you wonder why time series analysis is never shown by climate scientists? Why only linear trends of temperature? Have you ever heard of ARIMA? There are other methods also. Why don’t you try some?

Reply to  Jim Gorman
October 12, 2023 4:31 pm

Give a monkey a ruler…

… with some basic training, it may be able to draw a straight line. 😉

October 12, 2023 11:40 am

Whhaaaatttt?! How dare they release unmanipulated data showing the boiling globe racing towards Pandemonium is a lie – get Schwab & Kerry on the phone immediately!

Joking aside, heads will roll in that organisation – the narrative must be upheld, no matter what

October 12, 2023 11:50 am

Yeah . . . but USCRN is just land-based temperature measurements. What about them “boiling seas” that were in the news recently?

“Hottest July ever signals ‘era of global boiling has arrived’ says UN chief”
https://news.un.org/en/story/2023/07/1139162

“We’ve reached the ‘boiling seas’ part of the climate crisis”
https://www.msnbc.com/opinion/msnbc-opinion/atlantic-ocean-temperature-florida-climate-change-rcna96292

/sarc

Reply to  ToldYouSo
October 12, 2023 12:58 pm

Yep, it reached 100 degrees in a shallow corner of a lagoon near a power plant water outlet.

100 degrees is “boiling” isn’t it ! 😉 😉

Reply to  bnice2000
October 12, 2023 5:29 pm

Well, not if expressed as 100 degrees Fahrenheit . . . or Kelvin.

Reply to  ToldYouSo
October 12, 2023 8:38 pm

Sorry, was just trying to lower my thinking ability to that of the UN boss… or a 2 year old. 🙂

Bob
October 12, 2023 12:04 pm

Looks like good news.

strativarius
October 12, 2023 12:31 pm

Alarmist media weaves a narrative and inconvenient facts go down the memory hole

Yesterday it was beer. Today there’s no ice at Antarctica for Martha’s cocktail

Rud Istvan
October 12, 2023 12:32 pm

CRN is just CONUS. But we have ARGO for all the oceans starting in 2006. It shows pretty much the same thing—CMIP6 models wrong (except INM CM5), no cause for alarm. Interestingly, INM CM5 ocean rainfall was parameterized using ARGO, and it is the only CMIP6 model that does not produce a spurious tropical troposphere hotspot.

October 12, 2023 1:05 pm

Off topic- but- Tony Heller’s latest:

Washington D C

#1 climate scamster Michael Mann is the wealthiest university teacher in the US. President Eisenhower warned about this problem over sixty years ago.

I’m not surprised to see that he was born in Wokeachusetts. I believe his hockey stick paper was coauthored by a prof at U. Mass., Amherst. (my alma mater)

Phil.
Reply to  Joseph Zorzin
October 13, 2023 9:56 am

What a load of nonsense, typical of Heller! Mann isn’t close to being the wealthiest university teacher in the US.

Reply to  Phil.
October 13, 2023 10:48 am

Is it relevant if Mann isn’t the wealthiest teacher? Perhaps Heller should have given the source of that info. I doubt that he wrote it though he may have. So you think Heller’s work is mostly nonsense? Perhaps you’ll go to his channel and make that comment on his videos.

AlanJ
Reply to  Joseph Zorzin
October 13, 2023 12:17 pm

I’ve tried in the past to point out the myriad errors Heller has made on his channel and website, and was promptly banned for my effort. Heller himself has been banned from this very website for his egregious dishonesty, by Anthony Watts himself.

Reply to  AlanJ
October 13, 2023 7:07 pm

AlanJ pointing out errors.

ROFLMAO !!!

Reply to  AlanJ
October 14, 2023 3:10 am

Nobody’s perfect- but much of what Heller has to say makes sense- especially when he shows old newspaper articles discussing severe weather. And he often shows video clips- of for example, Al Gore crying that the oceans are boiling. If his deep understanding of climate science is less than perfect- that can be said of everyone – since nobody really understands it- which is why we shouldn’t be spending trillions to fix the climate. You claim he’s dishonest- if in fact he erred in a comment- that doesn’t mean he’s dishonest. Who could be more dishonest than the climategate crowd? And Al Gore and Greta Thunberg?

AlanJ
Reply to  Joseph Zorzin
October 14, 2023 10:42 am

It makes sense because Heller is very well-practiced at saying it. He’s been making the same arguments for more than a decade. Again, I am not alone in claiming he is dishonest, Anthony Watts has banned him from this website for his lies. It’s not that he made simple errors, it’s that he made the same errors repeatedly, after numerous people pointed them out to him. That is dishonesty (or it’s sheer stupidity, but I don’t think Heller is stupid, so it leaves only the alternative).

Phil.
Reply to  Joseph Zorzin
October 13, 2023 2:16 pm

I have done on multiple occasions, his temper tantrum after I pointed out his error on here about CO2 freezing at the S Pole was one of the reasons for his being banned from here.

October 12, 2023 2:30 pm

I find it odd that if it was the hottest September (or even summer) in the US that only 4 record highs are listed for my little spot on the globe in 2023. The last one on March 1st.

Reply to  Gunga Din
October 12, 2023 4:28 pm

USCRN says 4th warmest September since 2005

UAH USA48 says 15th warmest September out of 45 years

Gino
Reply to  Gunga Din
October 13, 2023 5:04 pm

Well it didn’t happen in CA that’s for danged sure:

–As Published in the Wine Industry Advisor on 9/27/23:

Overall, Central Coast winemakers and growers are reporting 2023 marks the latest harvest start time in decades, with major picks occurring an average of five weeks later than normal.

“To put it in perspective, this is first year I’ve had Labor Day off,” said Dieter Cronje, Winemaker at Presqu’ile Winery in Santa Maria, Calif. “We’ve just begun and are currently four to five weeks behind last year – the extended timeline could increase with the mild weather we’re experiencing.”
 
The springtime’s cold temperatures and record rains delayed the grapes from “setting” in the spring, and the summer’s cooler-than-normal temperatures significantly slowed ripening.
“The 2023 growing season is the coolest since 2011,” said Bob Tillman, Founder of Alta Colina in Paso Robles. “In fact, heat accumulation (Growing Degree Days) trailed 2011 until mid-July when normal summer temperatures first appeared.”

and there is still harvest equipment in the vineyards along my county highway. It’s been a VERY cool summer.

Reply to  Gino
October 14, 2023 5:17 am

The term “global” is at the very base of the misleading propaganda put forth by the alarmists.

It leads to a “one size fits all” type of solution which is total idiocy. E.g. “Everyone on earth must move to wind and solar power or the Earth will become an unlivable cinder”.

What this really does is absolve climate science of developing a truly comprehensive physical theory of why different regions are *different* when it comes to climate. Until this is done the “global” temperature is truly unusable for any purpose whatsoever.

It’s like the assumption that CO2 is well-mixed globally and is the same everywhere, which leads to the conclusion that anything CO2 has to do with temperature has the same effect everywhere. It’s just a crazy assumption from the beginning.

It’s why climate science depends totally on “averages” calculated by combining temperatures that have different variances without accounting for the different variances. It’s why climate science ignores the simple fact that different climates can give exactly the same daily mid-range value – meaning the mid-range value provides no information on actual climate so how can it be used to determine a “global climate”?
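The mid-range point can be made concrete with two invented station days (purely hypothetical numbers, chosen only to illustrate the argument):

```python
# Two very different hypothetical climates, same daily mid-range.
desert = (0.0, 40.0)      # (Tmin, Tmax), deg C: large diurnal swing
coastal = (15.0, 25.0)    # mild maritime day, small swing

def midrange(tmin, tmax):
    """The daily 'average' many temperature indices actually use: (Tmin + Tmax) / 2."""
    return (tmin + tmax) / 2

same = midrange(*desert) == midrange(*coastal)   # both 20.0
```

Both days report the identical mid-range value even though the underlying conditions differ drastically, which is the information-loss complaint being made above.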

Reply to  Tim Gorman
October 14, 2023 8:28 am

“The term “global” is at the very base of the misleading propaganda put forth by by the alarmists.”

Yep. They’re quick to claim local weather events or wildfires are because of “Climate Change” but when local data doesn’t fit the narrative then they fall back on “Global”.

October 12, 2023 10:39 pm

The weirdest thing about this is that over their joint period of measurement, the supposedly ‘pristine’ USCRN, the data set featured in this article and on this site’s side-bar, is actually running warmer than the so-called ‘heavily adjusted’ ClimDiv data set that NOAA runs concurrently and uses for the US input to its global updates.

As of September 2023, the warming trend in ClimDiv is +0.41F per decade, whereas in USCRN it’s +0.55 F per decade, over their joint period of record (since Jan 2005).

So if ClimDiv is being heavily adjusted then the only conclusion we can reach is that the adjustments are actually cooling it!
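For reference, per-decade trends like these are ordinarily computed as an OLS slope on the monthly anomaly series, scaled by 120 months per decade. A sketch with a made-up stand-in series (the slope is chosen so the result lands near the quoted +0.55 F figure; this is not the actual USCRN or ClimDiv data):

```python
import numpy as np

# Hypothetical monthly anomalies, Jan 2005 .. Sep 2023 = 225 months.
months = np.arange(225)
anoms = 0.0046 * months                    # stand-in series with a built-in slope

slope_per_month, _ = np.polyfit(months, anoms, 1)
trend_per_decade = slope_per_month * 120   # 120 months per decade, deg F
```

Running the same computation on both official series over their common record is what produces the two per-decade figures being compared.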

Reply to  TheFinalNail
October 12, 2023 11:12 pm

How stupid would they look if they were not controlling ClimDiv so it matched USCRN closely?

How STUPID do you look not understanding what is happening !!

EXTREMELY is the word. !!

Reply to  TheFinalNail
October 12, 2023 11:15 pm

You do realise the very slightly positive linear trends are purely an artefact of the position of the El Ninos (2015/16 and 2020) in the time series.

Or are you blind as well as ignorant ?

Now:

How much warmer do you think it must have been for forests to grow where there are now glaciers?

Duck and run, little child-mind !!

October 12, 2023 11:11 pm

Larry Hamlin,
Since you are reporting on the NOAA data series, in this article and in another recent WUWT article you authored,
https://wattsupwiththat.com/2023/09/22/l-a-times-article-falsely-asserts-u-s-had-record-high-summer-temperatures-in-2023/
perhaps you believe you know what they mean by “Maximum Temperature”?

I can’t speak to the anomaly data but the maximum temperature data reports seem like pure nonsense.
The label is “Maximum Temperature” but the temperatures shown in the graphs, and in the table by year, for some areas with which I am familiar, are sometimes more than 25F lower than actual summer highs. Clearly the “Maximum Temperature” of these graphs means something quite different than what thermometers measure, or they are a blatant lie.

To offer a few examples, having lived in the CA central valley for many years, summer temperatures, June, July, and August, are often above 100F and sometimes above 110F. In Nevada, where I’ve lived for the last two summers (very rural at 3500 feet), local high temperatures have been above 100F from time to time and on my occasional trips to Las Vegas and Mesquite (to the east of Las Vegas) the afternoons have been at 116F and 117F. Even if I’ve somehow managed to only visit those locations on the very hottest days, it was more than 25F hotter than those NOAA sites claim – unless their “Maximum Temperature” label means something quite different than actual temperature measurements.

And NO, I am not supporting alarmist positions. The day I moved to CA, Labor Day 1962, the temperature in the small town northwest of Sacramento in which we settled was 110F. Hot but liveable.

CampsieFellow
October 13, 2023 4:24 am

I appreciate that the USA is a very important country but there is a difference between ‘the USA’ and ‘the world’. Just giving data about the USA doesn’t tell us what is going on in the world as a whole. There is no climate emergency but data which is confined to the USA can’t be used to prove that. Perhaps the author meant, “No climate emergency in the USA.”