Fun with Trends

Brief Note by Kip Hansen – 17 August 2022

I have been doing research for other people’s book projects (I do not write books).  One of the topics I looked at recently was the USCRN (U.S. Surface Climate Observing Reference Networks, noaa.gov), self-described as follows: “The U.S. Climate Reference Network (USCRN) is a systematic and sustained network of climate monitoring stations with sites across the conterminous U.S., Alaska, and Hawaii. These stations use high-quality instruments to measure temperature, precipitation, wind speed, soil conditions, and more.”

One of the main temperature data products produced by USCRN is the Average Temperature Anomaly for the entire network over its full length of about 17 years.  It is shown, kept up to date, here at WUWT in the Reference Pages section as “Surface Temperature, U.S. Climate Reference Network, 2005 to present,” where it looks like this:

Now, a lot of people would like to jump in and start figuring out trend lines and telling us that the US Average Temperature Anomaly is either “going up” or “going down” and how quickly it is doing so.

But let’s start with a more pragmatic approach and ask first:  “What do we see here?” 

I suggest the following:

1.  What is the range over the time period presented (2005-2022)?

          Highest to lowest, the range is about 11 °F or 6 °C.  This range represents not a rise or fall of the metric but rather the variability (natural or forced).  Look at the difference between the high in late 2005 and the low in early 2021.  If this graph had been unlabeled, I would have identified it as semi-chaotic. 

2.  Is the anomaly visually going up or down?

          Well, for me, it was hard to say.  Oddly, the anomaly seems to run a bit above “0” – which tells us that the base period for the anomaly must come from some other time period.  And it does: USCRN uses a 1981-2010 base period for “0” when figuring these anomalies, so the base period lies entirely outside the time range of this particular time-series data set.

We can, however, ask Excel to tell us mathematically what the trend is over the whole time period.

There, now you know.  Or do you?  MS Excel says that the USCRN Average Temperature Anomaly is trending up, quite a bit: about 1 °F (0.6 °C) over 17 years.
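Excel’s trendline is just an ordinary least squares fit, and the same number can be reproduced in a few lines.  A minimal sketch, using made-up monthly anomalies (not the actual USCRN data):

```python
import numpy as np

# Hypothetical monthly anomalies spanning 17 years -- NOT the real USCRN
# series, just noise around a slight upward slope for illustration.
rng = np.random.default_rng(0)
months = np.arange(17 * 12)                       # month index 0..203
anoms = (0.6 / 204) * months + rng.normal(0, 2.0, months.size)

# Equivalent of Excel's SLOPE()/trendline: a first-degree least squares fit.
slope, intercept = np.polyfit(months, anoms, 1)
trend_over_period = slope * months.size           # total change over 17 years
print(f"OLS trend over the period: {trend_over_period:+.2f} degrees")
```

The point is only that the trendline is a single least-squares slope; everything that follows is about whether that slope means anything.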

~ ~ ~

Now comes the FUN!

I’ve arbitrarily picked five-year time increments as they are about 1/3 of the whole period.  Three five-year trends (the last one, slightly longer) which are all down-trending, add up to one up-trending graph when placed end to end in date order.
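The effect is easy to reproduce with toy numbers.  In this sketch (synthetic data, not USCRN) each of three equal segments trends down, yet the concatenated series trends up, because each segment starts at a higher level than the last:

```python
import numpy as np

# Three segments, each falling at -0.01 per step, but each starting
# one unit higher than the previous one.
t = np.arange(60)
segment = lambda level: level - 0.01 * t
series = np.concatenate([segment(0.0), segment(1.0), segment(2.0)])
time = np.arange(series.size)

overall_slope = np.polyfit(time, series, 1)[0]
segment_slopes = [np.polyfit(t, series[i * 60:(i + 1) * 60], 1)[0]
                  for i in range(3)]

print("segment slopes:", segment_slopes)   # all negative
print("overall slope :", overall_slope)    # positive
```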

Lessons We Might Learn:

a.  Don’t use short time periods when determining trends in a time series.  Trends are always sensitive to start and end dates.

b.  This phenomenon is somewhat akin to Simpson’s Paradox: “a phenomenon in probability and statistics in which a trend appears in several groups of data but disappears or reverses when the groups are combined.”

“In his 2021 book Shape: The Hidden Geometry of Information, Biology, Strategy, Democracy and Everything Else, Jordan Ellenberg argues that Simpson’s paradox is misnamed:”

“‘Paradox’ isn’t the right name for it, really, because there’s no contradiction involved, just two different ways to think about the same data. … The lesson of Simpson’s paradox isn’t really to tell us which viewpoint to take but to insist that we keep both the parts and the whole in mind at once.”  [source]

c.  It does bring to mind other data sets whose trends change magnitude (or even sign) when looked at over differing time lengths.  Sea level rise comes to mind, with the short satellite record running roughly double the century-long tide-gauge SLR rate.

d.  Why look at trends that are obviously not reliable over different time scales?   This is a philosophical question.  Can a longer trend be real if all the shorter components of the trend have the opposite sign?  Can three shorter down-trends add up to a longer up-trend that has applicability in the Real World?  Or is it just an artifact of the time scale chosen?  Or is the opposite true?  Are three shorter down-trends real if they add up to an up-trend? (When I say “real” I do not mean just mathematically correct – but physically correct.)

e.  Are we dealing with a Simpson’s-like aberration here?  Is there something important to learn from this?  Both views are mathematically valid, yet each makes the other seem improbable.

f.  Or, is what we see here just a matter of attempting to force a short highly variable data set to have a real world trend?   Are we fooling ourselves with the interpretation of the USCRN Average Temperature Anomaly as having an upward trend – when the physical reality is that this rather short data set is better described as simply “highly variable”?

# # # # #

Author’s Comment:

I hope some readers will find this Brief Note interesting and that it might lead to some deeper thought than “the average and its trend have to be correct – they are simply maths”.

Many metrics of CliSci are viewed at an artificially assigned time scale of “since the beginning of the modern Industrial Era,” usually interpreted as the late 19th century, roughly 1860 to 1890.  Judith Curry, in her recent interview at Mind and Matter, suggests that this is literally “short-sighted” and that for many metrics a much longer time period should be considered.

I hope I have time to keep up with your comments; I try to answer all that are addressed expressly to me.

Thanks for reading.

# # # # #

tgasloli
August 17, 2022 9:25 am

Or is it a matter of forcing highly variable data to show a trend?

Yes.

August 17, 2022 9:47 am

Kip,
“Can a longer trend be real if all the shorter components of the trend have the opposite sign? “
The purpose of calculating a trend is that you think your data is underlain by something with a steady rate of change, and trend calculation is how you estimate it. Breaking into several segments usually makes no sense. You don’t usually believe that a set of linear segments separated by jumps is a reasonable model. So you should stick with your model for the whole period.

That is what is wrong with the Moncktonian notion that you can describe GAT as a series of pauses. There are discontinuous rises between the pauses, and that is where the change then occurs, but of course the no rise narrative hopes that is forgotten. Linear trend is a plausible model for temperature behaviour. Jumps and pauses is much less so.

Geoff Sherrington
Reply to  Nick Stokes
August 18, 2022 4:46 am

Nick,
Off the cuff, I cannot recall Lord Monckton discussing the parts between the pauses in any serious way.
BTW, do you consider classical statistics, sometimes involving approximation to Gaussian curves, better than later bootstrap methods for uncertainty analysis of customary data like daily max and min temperature time series?
Geoff S

Reply to  Geoff Sherrington
August 18, 2022 4:51 pm

“Off the cuff, I cannot recall Lord Monckton discussing the parts between the pauses in any serious way.”
Well, he wouldn’t, would he?

“sometimes involving approximation to Gaussian curves is better than later bootstrap methods”
People here tend to overrate the importance of Gaussian. A lot of reasoning that depends, say, on adding variances, doesn’t require Gaussian. As for bootstrap, the basic idea of resampling is much used in treating ensembles. It isn’t often useful to explicitly form an approximating distribution, but I suppose you could.

Reply to  Geoff Sherrington
August 18, 2022 4:55 pm

I cannot recall Lord Monckton discussing the parts between the pauses

There aren’t any parts between the pauses. The last one ended several months after the new one began.

Rossmore
August 17, 2022 10:07 am
Reply to  Rossmore
August 17, 2022 5:05 pm

Very interesting. I would like to see something like this for other random locations on the globe.

Reply to  Jim Gorman
August 18, 2022 4:13 am

here are three widely spaced locations in the US

Merged_document.png
Reply to  Tim Gorman
August 18, 2022 4:25 am

here’s another

arequipa_peru_cdd.png
Reply to  Tim Gorman
August 18, 2022 4:31 am

here is the heating degree day data. Seems like it’s minimum temps going up!

arequipa_peru_HDD_65F.png
Reply to  Tim Gorman
August 18, 2022 4:40 am

the minimums are going up with a slope of .16, the maximums are going down with a slope of .05.

This would make the average temps in Arequipa, Peru go up. The CAGW crowd would trumpet this as the earth is burning up. When it’s actually doing nothing like that.

Where is the matching location showing CDD going up?

Paul Penrose
August 17, 2022 10:13 am

Kip,
There’s another possibility: when you have high variability and such a small trend, you should be very skeptical about that trend, especially when the data exhibits strong serial auto-correlation as this set does.
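Paul’s point can be made concrete.  A common rule of thumb is that lag-1 autocorrelation r shrinks the effective number of independent observations by roughly (1 − r)/(1 + r), which widens any trend confidence interval accordingly.  A sketch on synthetic AR(1) data (not the USCRN series):

```python
import numpy as np

# Synthetic AR(1) series with phi = 0.7, standing in for a serially
# autocorrelated monthly anomaly record.
rng = np.random.default_rng(1)
n = 204
x = np.empty(n)
x[0] = rng.normal()
for i in range(1, n):
    x[i] = 0.7 * x[i - 1] + rng.normal()

d = x - x.mean()
r1 = np.dot(d[:-1], d[1:]) / np.dot(d, d)     # lag-1 autocorrelation estimate
n_eff = n * (1 - r1) / (1 + r1)               # rule-of-thumb effective sample size
print(f"lag-1 r = {r1:.2f}; roughly {n_eff:.0f} effectively independent points of {n}")
```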

Geoff Sherrington
Reply to  Paul Penrose
August 18, 2022 4:49 am

Paul,
Would it not be better to make your point with a worked example? Geoff S

fah
August 17, 2022 10:19 am

In many sets of data one sees what one looks for, and the more ways one looks, the more one sees things, but it is more an indication of the mind of the looker perhaps than anything underlying the data. Linear trends of this data invariably have low goodness of fit and correlation coefficients. Many other functions (than a linear trend) do much better with low numbers of coefficients; for example, a simple sum of several sines does quite well with the USCRN data. Polynomials do relatively poorly. Testing the data with a variance-ratio test does not reject the hypothesis that it is a random walk. If one wants to force a linear trend on it go ahead, but remember that one is making the data tell a story one wants to hear. Torturing it until it confesses, so to speak.
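For readers unfamiliar with it, the variance-ratio test mentioned above rests on a simple identity: for a random walk, the variance of q-step differences is q times the variance of 1-step differences, so the ratio is near 1.  A bare-bones sketch on synthetic series (the overlapping-difference version, without the usual bias corrections):

```python
import numpy as np

def variance_ratio(x, q):
    """Var of q-step differences divided by q times the var of 1-step diffs."""
    d1 = np.diff(x)
    dq = x[q:] - x[:-q]
    return dq.var(ddof=1) / (q * d1.var(ddof=1))

rng = np.random.default_rng(2)
walk = np.cumsum(rng.normal(size=2000))    # a true random walk: VR near 1
noise = rng.normal(size=2000)              # mean-reverting white noise: VR near 1/q

print("VR(12), random walk:", round(variance_ratio(walk, 12), 2))
print("VR(12), white noise:", round(variance_ratio(noise, 12), 2))
```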

fah
Reply to  Kip Hansen
August 17, 2022 11:09 am

There is an old psychiatrist joke illustrating the point that I like. It goes like this:

A psychiatrist was interviewing a new patient and was showing him a series of ink blot pictures.

The psychiatrist asked what the first one looked like. The patient replied, “It is two people having sex in a bed.”

After the second one, the man replied, “It is a group of people having sex on a beach.”

After the third one, the man replied, “It is several people having sex in a forest.”

This went on a bit more and finally the psychiatrist leaned back and said, “It appears you are obsessed with sex.”

The man replied, “Me? You’re the one showing me all the dirty pictures!!!”

ResourceGuy
August 17, 2022 10:40 am

Here are a few other ways to look at it:

1) How much asphalt was in place in 1860, rounded to the nearest 1,000 square miles?
1b) What were the regulatory requirements from “good planning” for number of asphalt parking spaces per business site application in 1860?
2) What were China’s exports in 1860 as a share of the global economy?
3) How many climate satellites and ocean buoys were measuring global temps in 1860?
4) How many tons of wood pellets were shipped internationally to burn for subsidy payments?
5) How many climate science articles got through peer review in 1860?
6) And how many climate advocacy political donations were there in 1860? Related question: How many snake oil salesmen were there in 1860?

Clyde Spencer
August 17, 2022 10:50 am

Kip

This range represents not a rise of [or] fall of the metric …

Clyde Spencer
Reply to  Kip Hansen
August 17, 2022 7:55 pm

I try to keep it calibrated for the day it is really needed. 🙂

August 17, 2022 10:56 am

I just happened to be reading up on this over at the NOAA National Temperature Index Page, because I was wondering what was being used as a baseline for a temperature series only 17 years long. This is what I found in the Background section:

So as not to compare apples and oranges, the departures of nClimDiv and the U.S. Climate Reference Network (USCRN) values from normal values for each network are compared rather than the absolute values. The 30 years from 1991 through 2020 provide the basis for the normal period for each month and network. Data exist for nClimDiv from 1895 to present, so a normal is simply the 30-year average of the gridded data. USCRN observations since commissioning to the present were used to find relationships to nearby COOP stations and estimate normals at the USCRN sites using the normals at the surrounding COOP sites derived from full 1991-2020 records (Sun and Peterson, 2005)

Correct me if I’m wrong, but COOP sites are volunteer-manned National Weather Service sites and are not automated. COOP data isn’t available at the https://www.ncei.noaa.gov/pub/data site.

It seems that NOAA is using the interpolation method again to create “data” where there is none. They’re using volunteer-collected data from stations near to the USCRN stations to create a baseline where none exists, and then using it to create anomalies.
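Whatever one makes of how the normals themselves are derived, the anomaly step is plain subtraction: each month’s observation minus that month’s normal.  Illustrative numbers only:

```python
import numpy as np

# Made-up monthly means and made-up 1991-2020 "normals" for the same months.
obs = np.array([1.2, 3.5, 8.9, 14.1])        # observed monthly means, degC
normals = np.array([0.4, 2.8, 8.1, 13.6])    # baseline normals, degC
anomalies = obs - normals                    # departure from normal
print(anomalies)
```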

Reply to  Kip Hansen
August 19, 2022 8:55 am

The quote in my post above is from the NOAA page explaining how they got the US surface anomaly graph from data that doesn’t begin much before 2004. I think the first stations were in 2002, but there weren’t enough of them until 2005 to be useful — if they are now.

They do use the 1991-2020 period, but it’s derived for the USCRN data by correlating it with temperatures from nearby COOP stations and interpolating the measurements to create data again.

Carlo, Monte
Reply to  James Schrumpf
August 19, 2022 10:36 am

What do they mean by “the normals at the surrounding COOP sites”?

August 17, 2022 11:00 am

Because of the 87-year Gleissberg (GB) cycle you must consider that delta T must be zero over that period of time. If it is more, it means it is not coming from the sun. See here.
https://breadonthewater.co.za/2022/03/08/who-or-what-turned-up-the-heat/

Reply to  HenryP
August 17, 2022 6:19 pm

There are significant changes in Solar EMR intensity on Earth due to orbital changes. For example, the April solar EMR over USA has increased by 1W/m^2 over the past 70 years. The integrated increase from February through June sums to 3W/m^2.

If you look at the monthly trends you will find months when the temperature has declined. There is no universal warming as climate models predict. The warming follows the sun and the solar intensity in the Northern Hemisphere has been increasing for the last 500 years. The increased solar intensity is having an impact. On the flip side, NH winters are getting less sunlight and that will eventually result in ice accumulation. Once the ice mountains reform, the NH will be cooler on average despite solar intensity increasing for the next 9,500 years.

August 17, 2022 11:10 am

Here’s what all the different trends look like in context.

The confidence intervals do not include any adjustment for auto-correlation, so should probably be quite a bit wider.

Edit:
(Sorry, made a mistake, and there isn’t an option to delete an image.)

20220817wuwt1.png
Reply to  Bellman
August 17, 2022 11:16 am

Here’s the graph using the dates in this article.

Note, although it’s claimed these are 5 year increments, the middle trend is only 4 years long, and the third is 7 years.

20220817wuwt1.png
Reply to  Bellman
August 17, 2022 11:21 am

Making the middle section 5 years, i.e. 2011 – 2015, you can see it pretty much follows the overall trend.

20220817wuwt2.png
Reply to  Kip Hansen
August 17, 2022 3:36 pm

2011 through 2015 is warming at the rate of 0.28°C / decade.

But I agree we shouldn’t be looking at short periods in a small area. As I said, my only interest in showing that USCRN does not show any signs of proving the global long term warming trend wrong.

Reply to  Bellman
August 17, 2022 2:31 pm

So a possible trend line could look like my poor freehand line?

bellman shor trend.png
Reply to  Jim Gorman
August 17, 2022 3:36 pm

Could be.

August 17, 2022 11:33 am

Just casual inspection of the data reveals that it does not conform to the conditions for OLS estimation, which require residuals (the differences between measurement and trend) to be random and normally distributed. It’s quite clear that there is a high level of autocorrelation – that is, each residual value is a good predictor of the next, rather than being random. It gives rise to an apparent cyclical behaviour (also easily visible), which means that trend estimation is more reliable only when measured between similar points in the cycle, or when using rather more sophisticated techniques that handle autocorrelated data. Those techniques do tend to give rather wider confidence intervals, which are missing from your charts. Indeed, you don’t even cite the error margins of the estimated slopes under OLS. There are also statistical measurements of residuals, such as the Durbin-Watson statistic, that warn of problems in the data.

Of course, if you only examine a short section of data the cyclical elements within it may simply not be readily observed.
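The Durbin-Watson statistic mentioned above is simple to compute from the OLS residuals: values near 2 indicate no lag-1 autocorrelation, while values well below 2 flag positive autocorrelation.  A sketch on synthetic data (not the USCRN series):

```python
import numpy as np

def durbin_watson(resid):
    """Durbin-Watson statistic: ~2 for independent residuals, <2 for positive AC."""
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(3)
t = np.arange(204)

# Case 1: independent residuals around a small trend.
y1 = 0.003 * t + rng.normal(size=t.size)
resid1 = y1 - np.polyval(np.polyfit(t, y1, 1), t)

# Case 2: AR(1) residuals (phi = 0.8), i.e. strong positive autocorrelation.
e = np.empty(t.size)
e[0] = rng.normal()
for i in range(1, t.size):
    e[i] = 0.8 * e[i - 1] + rng.normal()
y2 = 0.003 * t + e
resid2 = y2 - np.polyval(np.polyfit(t, y2, 1), t)

print("DW, independent residuals   :", round(durbin_watson(resid1), 2))
print("DW, autocorrelated residuals:", round(durbin_watson(resid2), 2))
```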

Geoff Sherrington
Reply to  It doesn't add up...
August 18, 2022 5:00 am

There is a need for a defining article with examples that define the type of data, like is it a random walk or not, whether it is autocorrelated, whether it is IID, what statics the data allow and which statistics used in the past have been overtaken by better approaches.
Such an article could reduce the quantity of repetition of wonder on WUWT sites.
I am drafting one, but it is getting very long and meeting far too many examples of questionable assumptions in past work.
Geoff S

August 17, 2022 12:40 pm

I wonder what approach a professional statistician would take on discovering that changing the end date of his data set by one month causes the trend line to change dramatically?

The trend line function delivers the trend and fit, but what about the overall sensitivity to end-point changes?

If there is not an ‘official’ method of dealing with end point sensitivity, should we dismiss all trend analysis on overly sensitive data?
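Absent an ‘official’ method, one informal check is simply to recompute the trend while trimming months off the end and watch how much the slope moves.  A sketch with synthetic noisy data (not USCRN):

```python
import numpy as np

# A tiny trend buried in large noise: the fitted slope shifts as the
# end point is moved back month by month.
rng = np.random.default_rng(7)
t = np.arange(204)
y = 0.001 * t + rng.normal(0, 1.0, t.size)

slopes = [np.polyfit(t[:t.size - k], y[:y.size - k], 1)[0] for k in range(12)]
print(f"slope range over 12 end-point choices: {min(slopes):.5f} to {max(slopes):.5f}")
```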

Reply to  Steve Richards
August 17, 2022 2:42 pm

Take my word for it, don’t go past the end point! Worked for many years in the telephone company projecting equipment needs, people, budgets, and various sundry things. You are better off taking the end point and adding some reasonable percent growth to it for the next 5 years. Linear regressions place so much weight on distant past information that it will distort what is going to happen.

The feedbacks in climate models are so big that you end up with exponential growth. Plus the fact that they turn into linear projections after a few years. They are not likely and that is why they need to tune them.

Does anyone ever wonder about cooling that may happen? The models don’t show that as a possibility, ever! Look at the attached image from UAH. The temps go up and the temps go down. The temps go down and the temps come up. Does anyone see anything that leads them to think that warming is going to continuously increase?

UAH july 2022.png
Clyde Spencer
Reply to  Kip Hansen
August 17, 2022 8:15 pm

… the trend does not, cannot, must not be drawn into the past (before the first data point) or the future (past the last data point). Never Ever.

Then what utility does a regression line have? It may improve interpolation for noisy data. However, it seems to me that the point of a linear regression is to provide a first-order estimate of the relationship between two variables, and be useful for a prediction, with the caveat that it assumes the relationship is linear, and no changes will take place to invalidate that assumption. Generally, auto-correlation makes this a safe assumption for short times or distances beyond the last data point.

If you are talking about polynomial fits, there is great risk that the function will take off for infinity immediately after the last (or before the first) data point. Polynomial extrapolations are fraught with much more danger than first-order fits, and should be used very judiciously. That is, there has to be a good physical justification for the order of the fit, not just a high R^2 value. Over-fitting is always to be avoided for other than special cases of interpolation.
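Clyde’s warning about polynomial fits is easy to demonstrate.  In this synthetic sketch a 12th-order polynomial beats a straight line in-sample, then typically diverges as soon as it is extrapolated past the data:

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 40)
y = np.sin(t) + rng.normal(0, 0.2, t.size)      # noisy, wavy synthetic data

lin = Polynomial.fit(t, y, 1)                   # first-order fit
poly = Polynomial.fit(t, y, 12)                 # over-fitted 12th-order fit

rms = lambda p: np.sqrt(np.mean((p(t) - y) ** 2))
print("in-sample RMS, linear :", round(rms(lin), 3))
print("in-sample RMS, deg 12 :", round(rms(poly), 3))
print("extrapolated to t=15  :", round(lin(15), 2), "(linear) vs",
      round(poly(15), 2), "(deg 12)")
```

The nested model is guaranteed to fit at least as well in-sample; the interesting part is how badly it behaves outside the data range.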

Old Cocky
Reply to  Clyde Spencer
August 18, 2022 1:59 am

That’s pretty much the approach. The hope is that a trend line with good explanatory power and low noise is an approximation to the “real” trend for the period analysed, and if nothing else changes (the economists’ [in]famous ceteris paribus) it might continue for a little longer.

That is rather a heroic assumption with a data set like this with rather large excursions. Those excursions are the interesting feature of the data set, and would warrant further investigation in much greater depth.

The amplitude of the excursions appears to have decreased – why? The excursions seem to exhibit pseudo-periodicity – why?

Reply to  Clyde Spencer
August 18, 2022 12:10 pm

The linear regression is usually done with a predictor variable and a dependent variable. The two should be related in some fashion. It is best used to show a signal from a noisy measurement, and by noisy, I mean real interfering noise. It does not mean large variability in an actual signal. What would I consider noise in temperature? UHI and poor siting.

If you plot a measurement against time, like climate science, you are hoping time somehow affects the dependent variable directly. Time is not part of the physical determination of the temperature.

What people are doing when plotting something against time is hoping they can predict into the future. That is fraught with danger if time is not a predictor. Climate science has pinned their hopes on CO2 since it has risen for a brief time in the history of the world. The fact that CO2 lags temperature should have shattered that illusion a long time ago.

bdgwx has given an example of a model that includes using some cyclic ocean current indexes. That may be a valiant start but it also indicates that predictions of the future rely on accurate predictions of the piece parts. But more importantly, it shows that temperature against time is not a good predictor. Linear regressions of temperature vs time should not then be used to say that we know where temperature is headed. Those regressions are only accurate for past data.

Reply to  Clyde Spencer
August 18, 2022 2:18 pm

A statistically durable linear trend is merely a starting point. It provides a basis to justify further technical study. Money is not being budgeted or spent on trend extensions before this additional work is completed – and often not even then.

Reply to  bigoilbob
August 19, 2022 5:51 am

You have to be careful when trying to define a “statistically durable linear trend” from a cyclical process.

See the attached UAH anomaly graph. Anomalies go up and anomalies go down. The anomalies have been trending down since 2016. Will they return to the base line? Not sure how a linear trend line will tell you. The anomalies in 2016 and 2020 appear to be larger than any prior ones but does that mean they won’t return to the baseline? I certainly can’t tell, does anyone know?

uah_anomaly_graph.png
Reply to  Tim Gorman
August 19, 2022 6:53 am

“You have to be careful when trying to define a “statistically durable linear trend” from a cyclical process.”

Agree. But you can have both cyclicity and an overriding statistically durable trend. You seem to be showing us that. Please provide the data.

Reply to  bigoilbob
August 19, 2022 9:50 am

I’ll repeat – with a cyclic process around a baseline like in the UAH anomaly graph, it is difficult to establish a “statistically durable linear trend”. Such a trend would generally lead the anomalies to gradually move away from the baseline in one direction or another.

The data is at the UAH site. That’s where I grabbed the graph from.

Reply to  Tim Gorman
August 20, 2022 8:49 am

Baselines matter not to trended data. Here’s the trend of that same data:

1.34 +/- 0.06 degC/century

Yes, proper consideration of cyclicity might make it even more statistically durable, but I’ll take this win. FYI, the probability that that trend is flat or down is 1.64E-103.

And thanks for making me realize that I read my laptop specs carelessly. It actually calculates more than the 96 places to the right side of the decimal point I previously thought….

https://www.woodfortrees.org/data/uah6

Reply to  bigoilbob
August 20, 2022 10:16 am

Sorry for the laziness. Reworking the trend to use data from the start to the end of the last month that would make the number of years even, I get:

1.35 +/- 0.06 degC/century.

Per the last civil discourse between Frank Semyon and Danny Santos “You can keep your rings on. Won’t matter to me.”

Reply to  bigoilbob
August 20, 2022 1:54 pm

How do you trend a sine wave? Using the peak value? The average value? The power value? Sometimes one and then the other? What do you use for a baseline?

Reply to  Tim Gorman
August 20, 2022 3:36 pm

“How do you trend a sine wave?”

Apparently, it gets done routinely.

http://www.nlreg.com/trend.htm

https://www.fsb.miamioh.edu/lij14/672_s16.pdf

You can find the trend of the data through the last completed sine wave period (or from the first data point that will result in a completed sine wave – your pick). You can then detrend the data, and fit the best sine wave to it. That sinusoidal function then gets added to the original trend.

For your UAH data, the best fit sine wave has a period of ~6.7 years and an amplitude of ~0.029 degC*. But when that is superimposed onto the previously trended data, the trend changes hardly at all.

The trend component of the sine waved trend is 1.33 degC/century. Unchanged…

*Yes, seems low, but that is what gives the best fit.
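The procedure bigoilbob describes can be sketched without any special tooling: for each candidate period, fit trend-plus-sinusoid by ordinary least squares (linear in its coefficients once the period is fixed) and keep the period with the smallest residual.  Synthetic data below, with made-up numbers loosely echoing the figures in this thread (a ~6.7-year period and a trend near 1.3 degC/century); this is not the actual UAH series:

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(0, 40, 1 / 12)                              # 40 "years", monthly
y = (0.013 * t + 0.2 * np.sin(2 * np.pi * t / 6.7)
     + rng.normal(0, 0.1, t.size))                        # trend + sine + noise

def fit(period):
    """Least squares fit of y = a + b*t + c*sin(wt) + d*cos(wt) for a fixed period."""
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t), t, np.sin(w * t), np.cos(w * t)])
    coef = np.linalg.lstsq(X, y, rcond=None)[0]
    return coef, np.sum((X @ coef - y) ** 2)

periods = np.arange(2.0, 15.0, 0.1)
best_p = min(periods, key=lambda p: fit(p)[1])            # grid-search the period
coef = fit(best_p)[0]
print(f"best period ~ {best_p:.1f} yr, trend ~ {coef[1] * 100:.2f} degC/century")
```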

Reply to  bigoilbob
August 21, 2022 4:31 am

Did you even bother to actually look at what was being graphed in the first link?

Function Y = p0 + p1*X + Amplitude*sin(2*pi*(X-Phase)/Period);

This isn’t a linear trend of a sine wave. It is a linear trend of p1*X with an oscillating component overlaid!

The second one is no different.

y = -2*cos(2*pi*tr/j)+ 3*sin(2*pi*tr/j)+0.5*tr+e

It’s a linear trend of 0.5tr with an oscillating component overlaid. See the attached graph.

I’m sure you didn’t mean to but you just undercut the entire climate models. After a few years they become the very same function:

T = m(CO2) + Asin(ɑ) where m >> A

Effectively a linear trend based on CO2 plus a small oscillation around the linear trend line.

No pauses, no bending, no real analysis of the non-linear, chaotic system that even the IPCC recognizes the atmosphere is.

image_2022-08-21_061813863.png
Reply to  Clyde Spencer
August 18, 2022 4:53 pm

“Generally, auto-correlation makes this a safe assumption for short times or distances beyond the last data point.”

This might be true for a cyclical process characterized by a single sine wave. If you have a complex wave form it gets more complicated to calculate the autocorrelation. If you have a cyclical process made up of three sine waves you might be able to write this as:

S(t) = A0 + A1cos(ω1t + φ1) + A2cos(ω2t + φ2) + A3cos(ω3t + φ3)

The autocorrelation is:

R(τ) = [A0^2 + (A1^2/2)cos(ω1τ) + (A2^2/2)cos(ω2τ) + (A3^2/2)cos(ω3τ)] / [A0^2 + A1^2/2 + A2^2/2 + A3^2/2]

(had to go back to an old Schaum’s outline to get this!)

Just one more reason why a trend line of an average calculated from a time series is far more complicated than the treatment used by climate scientists today.
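The closed-form result quoted from Schaum’s can be checked numerically.  For s(t) = A0 + Σ Ai·cos(ωi·t + φi), the normalized (raw, mean not removed) autocorrelation is R(τ) = [A0² + Σ (Ai²/2)·cos(ωi·τ)] / [A0² + Σ Ai²/2].  A sketch comparing that formula against a brute-force time average, using arbitrary made-up amplitudes and frequencies:

```python
import numpy as np

A0, A = 1.0, [0.8, 0.5, 0.3]          # DC level and component amplitudes
w = [0.7, 1.9, 3.1]                   # angular frequencies (arbitrary)
phi = [0.2, 1.1, 2.5]                 # phases (arbitrary)

dt = 0.1
t = np.arange(0, 20000, dt)
s = A0 + sum(a * np.cos(wi * t + p) for a, wi, p in zip(A, w, phi))

def r_empirical(tau):
    """Time-average estimate of the normalized autocorrelation at lag tau."""
    lag = int(round(tau / dt))
    return np.mean(s[:-lag] * s[lag:]) / np.mean(s * s)

def r_closed(tau):
    """Closed-form autocorrelation of a DC level plus a sum of cosines."""
    num = A0**2 + sum(a**2 / 2 * np.cos(wi * tau) for a, wi in zip(A, w))
    return num / (A0**2 + sum(a**2 / 2 for a in A))

for tau in (0.5, 2.0, 5.0):
    print(tau, round(r_empirical(tau), 3), round(r_closed(tau), 3))
```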

Reply to  Tim Gorman
August 19, 2022 2:29 pm

ROFL!! Downchecks with absolutely no refutation offered.

My guess is that the down checkers don’t even understand the math!

Reply to  Tim Gorman
August 20, 2022 9:27 am

Even with cyclicity and autocorrelation you can still calculate, and determine the statistical durability of, the underlying trend. Please finish what you claim to have started, and show your work…

Reply to  bigoilbob
August 20, 2022 2:01 pm

I showed you how to calculate it. Please note carefully that the frequencies of the components do *NOT* have to be the same. When you add them you get a very complex wave. See attached.

How do you trend this complex waveform with a baseline of zero?

multiple_sine_waves.png
Carlo, Monte
Reply to  Tim Gorman
August 20, 2022 3:10 pm

These people have trends on the brain, sheesh.

Reply to  Tim Gorman
August 20, 2022 3:48 pm

No, you didn’t. AGAIN, the baseline matters not at all for trending. That’s why it was so easy for me to find you linked examples where others do it routinely. Not to mention how I did it from scratch for YOUR referenced UAH data.

Reply to  bigoilbob
August 21, 2022 4:34 am

What you found was a linear component overlaid with a sinusoid. In other words you must believe that T = m(CO2) plus a small oscillation.

That is *NOT* what UAH anomalies show.

They show a cyclical process around a baseline.

uah_anomaly_graph.png
Carlo, Monte
Reply to  Tim Gorman
August 21, 2022 7:27 am

The only thing they know to do is lay an arrow shaft on the graph paper.

Reply to  Jim Gorman
August 17, 2022 5:08 pm

Based on my understanding of the physics of stars I believe the very long prospect is for some extreme global warming before the earth gets swallowed into the expanding red giant of the sun. But not anything to worry about just yet….

August 17, 2022 2:44 pm

How to make temperature data mo’ better
using arithmetic and statistics:

Infill numbers
Use poorly sited weather stations
Adjust numbers
Homogenize numbers
Pasteurize numbers
Do not report temperature changes by latitude
Do not report temperature changes by month of the year
Do not report temperature changes by time of day (TMIN versus TMAX)
“Adjust” historical inconvenient data to fit narrative
Report only a global average temperature that not one person lives in
Show a trend of that global temperature rather than individual years
Predict a steeper future trend never before observed in the past 10,000 years
Wave arms and get hysterical while predicting climate doom
Claim this is a certainty because “scientists say so”
Collect government paycheck.
Hope for promotion.

August 17, 2022 4:03 pm

Starting with an anomaly hides reality. It gives a false impression of any situation.

The attached is what the GHCN temperature looks like for central North America; 30N to 50N, 74 to 123W.

It has an annual range of 32C (60F). If you look deeper into the trends, you will find some months are cooling, some months have little change and most months are warming. All this is consistent with the increasing ToA sunlight due to changing orbit.

Anyone stating the ToA solar EMR is constant is deluded. It changes substantially from day-to-night, day-to-day, month-to-month and on longer time scales. It is NEVER the same over any cycle.

Since 1959 the April solar EMR over north America has increased almost 1W/m^2 to 2020. In fact it can swing almost 0.5W/m^2 over a few years. On average, it has increased by only 0.05W/m^2 over the last 70 years but the monthly trends are much more significant.

ighcn_cams_10_-123--74E_30-50N_n.png
ferdberple
August 17, 2022 4:57 pm

The error is in assuming a trend is contained within the data. It isn’t. The trend is a physical thing. The temp is getting hotter/colder.

The Excel trendline is not a real thing. It is an abstract symbol to represent the actual trend.

The actual physical change is the trend. This does not change regardless of the math.

See: “this is not a pipe”

By the same logic, a drawing of the trend is not the trend. It is a symbolic representation of the trend.

KTM
August 17, 2022 9:40 pm

“Oddly, the anomaly seems to run a bit above “0” – which tells us that the base period for the anomaly must be from some other time period.  And it is, USCRN uses a 1981-2010 base period for “0” when figuring these anomalies, the base period is not inside the time range of this particular time-series data set.”

Is there any other example where a “Reference Network” is set up and then begins at a baseline of +1.75?

Climate ‘science’ at it again.

bdgwx
Reply to  KTM
August 18, 2022 6:23 am

The choice of baseline is arbitrary and has no effect on the shape or trend of the observations. It’s just another scale not unlike how Fahrenheit and Celsius themselves are scales with arbitrary baselines. For example, a value of 32 F means the same thing as 0 C. Similarly a value of 0.36 C on the 1991-2020 UAH TLT baseline means the same thing as 0.49 C on the 1981-2010 baseline for the month of July. UAH could have arbitrarily decided to use the full dataset average as the baseline and nothing would have changed with any of the conclusions. It’s the same with USCRN or any other dataset. The choice of baseline is still arbitrary either way.
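bdgwx’s point is trivially checkable: adding a constant to every anomaly (which is all a baseline change does) shifts the intercept and leaves the slope untouched.  A sketch with synthetic anomalies:

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(204)
anoms = 0.002 * t + rng.normal(0, 0.5, t.size)   # arbitrary anomaly series

slope_a = np.polyfit(t, anoms, 1)[0]             # original baseline
slope_b = np.polyfit(t, anoms - 0.36, 1)[0]      # re-baselined by 0.36 C

print("slopes:", slope_a, slope_b)               # identical to rounding
```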

Carlo, Monte
Reply to  bdgwx
August 18, 2022 6:58 am

Maybe you should lecture him about how 1/1000 equals 0.001.

bdgwx
Reply to  Carlo, Monte
August 18, 2022 7:33 am

If KTM thinks 0.10 C = 0.001 K then we’ll cross that bridge when we get there.

Carlo, Monte
Reply to  bdgwx
August 18, 2022 7:36 am

who are “we”, mr. disingenuous?

Reply to  KTM
August 19, 2022 9:07 am

NOAA says: “The 30 years from 1991 through 2020 provide the basis for the normal period for each month and network. Data exist for nClimDiv from 1895 to present, so a normal is simply the 30-year average of the gridded data. USCRN observations since commissioning to the present were used to find relationships to nearby COOP stations and estimate normals at the USCRN sites using the normals at the surrounding COOP sites derived from full 1991-2020 records (Sun and Peterson, 2005).”

Joseph Walker
August 18, 2022 7:27 am

For non-analog data, it is all in the sample timing.

In the chem/oil/food/aviation industries, and for feedback control loops that are not analog, I’ve seen uncontrollable loops which turned out to be only sample timing (too long OR too short) problems. Fix the sample timing and get great control. Take away the closed loop and you get REAL process variable info… your temperature and your anomalies.

Of course we have all the analog data we need for our digital or hybrid loops, and you have 50%+ of the data missing, ‘skooched’ by East Anglia et al., or lied about.

Thanks for the great article Kip Hansen.

john harmsworth
August 18, 2022 9:54 am

I’m having trouble spotting the “crisis”!