The Button Collector Revisited: Graphs, Trends and Hypotheses

Guest Essay by Kip Hansen

 

Prologue:  This essay is a follow-up to two previous essays on the topic of the usefulness of trend lines [trends] in prediction.  Readers may not be familiar with these two essays, as they were written years ago; if you wish, you should read them through first:

  1. Your Dot: On Walking Dogs and Warming Trends posted in Oct 2013 at Andy Revkin’s NY Times Opinion Section blog, Dot Earth. Make sure to watch the original Doggie Walkin’ Man animation, it is only 1 minute long.
  2. The Button Collector or When does trend predict future values? posted a few days later here at WUWT (but 4 years ago!)

Trigger Warning:  This post contains the message “Trends do not and cannot predict future values”.  If this idea is threatening or potentially distressing, please stop reading now.

If the trigger warning confuses you, please first read the two items above and all my comments and answers [to the same questions you will have] in those two essays; it will save us both a lot of time.

I’ll begin this post by commenting on an ancient comment on a follow-up Dot Earth column by Andrew C. Revkin, “Warming Trend and Variations on a Greenhouse-Heated Planet” (Dec. 8, 2014). [Alas, while the link is still good, Dot Earth is no longer; it has gone the way of my old blog, The Bad Science Times, which faded away in the early 1990s.]  Revkin’s piece repeated the Doggie Walking animation and contained a link to my response.  This comment, from Dr. Eric Steig, Professor of Earth & Space Sciences at the University of Washington, where he is Director of the IsoLab and is listed on his faculty page as a founding member of and a contributor to the influential [their word] climate science web site RealClimate.org, says:

“Kip Hansen’s “critique” of the dog-walking cartoon, is clever, and completely missing the point. Yes, the commentators of the original cartoon should not have said “the trend determines the future”; that was poorly worded. But climate forcing (CO2, mostly) does determine the trend, and the trend (where the man is walking) does determine where the dog will go, on average.”  (my emphasis)

Dr. Steig, I believe, has simply “poorly worded” this response.  He surely means that the climate forcings, which themselves are trending upwards, do/will determine (cause) future temperatures to be higher (“where the dog will go, on average”).  He is entitled to that opinion, but he errs when he insists “the trend (where the man is walking) does determine where the dog will go”.  It is this repeated, almost universally used, imprecise choice of language that causes a great deal of misunderstanding and trouble for the rest of the English-speaking world (and, I suppose, for others after literal translations) when dealing with numbers, statistics, graphs and trend lines.  People, students, journalists, readers, audiences…begin to actually believe that it is the trend itself that is causing (determining) future values.

Many of you will say to yourselves, “Stuff and nonsense!  Nobody believes such a thing.”  I didn’t think so either…but read the comments to either of my two essays…you will be astonished.

 

Data points, lines and graphs:

[Warning:  These are all very simple points. If you are in a hurry, just scroll down and look at the images.]

Let’s look at the definition of a trend line:  “A line on a graph showing the general direction that a group of points seem to be heading.”  Or another version “A trend line (also called the line of best fit) is a line we add to a graph to show the general direction in which points seem to be going.”

Here’s an example (mostly in pictures):

[Figure: What’s wrong with this graph?]

Trend lines are added to graphs of existing data to show “the general direction in which [the data] points seem to be going.”  Now, let’s clarify that a little bit — more precisely, the trend line only actually shows “the general direction in which [the data] points have gone” — and one could add — “so far”.

[Figure: trends are only valid for the data they summarize]

That seems awfully picky, doesn’t it?  But it is very important to our correct understanding of what a data graph is — it is a visualization of existing data — the data that we actually have — what has actually been measured.   We would all agree that adding data points to either end of the graph — data that we just made up that had not actually been measured or found experimentally — would be fraudulent.  Yet we hardly ever see anyone object to “trend lines” that extend far beyond the actual data shown on a graph — usually in both directions.  Sometimes this is just lazy graphics work.   Sometimes it is intentional to imply [unjustifiably] that past data and future data would be in line with the trend line.  However, just to be clear, if there is no data for “before” and “after” then that assumption cannot and should not be made.
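To make the point concrete, here is a minimal sketch (in Python, with made-up numbers; nothing here comes from any real dataset) of what a fitted trend line is, and why extending it past the data is an assumption rather than a measurement:

```python
# A least-squares "line of best fit" summarizes only the data we actually have.
import numpy as np

x = np.arange(1, 11)                          # ten observation times (invented)
y = np.array([2, 3, 5, 4, 6, 7, 6, 8, 9, 9])  # ten measured values (invented)

slope, intercept = np.polyfit(x, y, 1)        # ordinary least-squares fit
print(f"trend over the data: y = {slope:.2f}x + {intercept:.2f}")

# Evaluating the line outside x = 1..10 is extrapolation, not measurement:
print("value 'predicted' at x = 20:", slope * 20 + intercept)  # a guess; no data there
```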

Now, one (or two) more little points:

[Figure: What’s wrong with this eggnog sales graph?]

I’ll answer in a graphic:

[Figure: the eggnog sales graph, annotated with the answer]

But (isn’t there always a ‘but’?):

[Figure: the eggnog data points shown without connecting traces]

Traces added to join data points on a graph can sometimes be misunderstood to represent the data that might exist between the data points shown.  More properly, the graph would ONLY show the data points if that is all the data we have — but, as illustrated above, we are not really used to seeing time series graphs that way – we like to see the little lines march across time connecting the values.  That’s fine as long as we don’t let ourselves be fooled into thinking that the lines represent any data — they do not — or that the intervening data lies along those little lines.  It might…it might not…but there is no data, at least on the graph, to support that idea.

For eggnog sales, I have modified part of the graph to match reality:

[Figure: eggnog sales, monthly data]

This is one of the reasons that graphing something like “annual average data” can present wildly misleading information — the trace lines between the annual average points are easily mistaken for how the data behaved during the intermediate time — between year-end totals or yearly averages.  Graphing just annual averages or global averages easily obscures important information about the dynamics of the system that generates the data.  In some cases, like eggnog, looking only at individual monthly sales, like July sales figures (which are traditionally near zero),  would be very discouraging and could cause an eggnog producer to vastly underestimate yearly sales potential.
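A toy calculation makes the averaging problem plain.  The monthly figures below are invented, but shaped like seasonal eggnog sales:

```python
# Invented eggnog-style monthly sales: nearly all units move in Nov/Dec.
import numpy as np

monthly = np.array([5, 2, 1, 0, 0, 0, 0, 0, 1, 4, 40, 60])  # Jan..Dec, made up

print("July sales:     ", monthly[6])                # ~zero: 'discouraging'
print("December sales: ", monthly[11])               # the real story
print("annual average: ", round(float(monthly.mean()), 1))  # flattens both extremes
```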

There are several good information sources on the proper use of graphs — and the common ways in which graphs are misused and malformed – either out of ignorance or to intentionally spin the message for propaganda purposes.  We see them almost everywhere, not just in CliSci.

Here are two classic examples:

[Figure: temperature graph, rescaled]

[Figure: Global Average Surface Temperature, rescaled]

On both of the above graphs, there is another invisible feature — error bars (or confidence intervals even) — invisible because they are entirely missing.    In reality, values before 1900 are “vague wild guesses”, confidence increases from 1900 – 1950 to “guesses based on some very imprecise, spatially thin data”,  confidence increases again 1950-1990s to “educated guesses”, and finally, in the satellite era, “educated guesses based on computational hubris.”

That’s the intro — a few “we all already knew all that!” [“Wha’da’ya think we are?  Stupid?”] points — of which we all need to remind ourselves every once in a while.

 

The Button Collector:  Revisited

My two previous essays on Trends focused on “The Button Collector”  — let me re-introduce him:

I have an acquaintance [actually, I have to admit, he is a relative] who is a fanatical button collector. He collects buttons at every chance, stores them away, thinks about them every day, reads about buttons and button collecting, spends hours every day sorting his buttons into different little boxes and bins and worries about safeguarding his buttons. Let’s call him simply The Button Collector.  Of course, he doesn’t really collect buttons; he collects dollars, yen, lira, British pounds sterling, escudos, pesos…you get the idea. But he never puts them to any useful purpose, neither really helping himself nor helping others, so they might as well just be buttons.

He has, at latest count, millions and millions of buttons, exactly, as of Sunday night.  So, to make things easy, we can ignore the “millions and millions” part and just say he has zero buttons on Monday morning to start his week.  (See, there is some advantage to the idea of “anomalies”.)  Monday, Tuesday and Wednesday pass, and on Wednesday evening, his accountant shows him this graph:

[Figure: The Button Collection, Days 1–3]

As in my previous essay, I ask, “How many buttons will BC have at the end of day on Friday, Day 5?”

Before we answer, let’s discuss what has to be done even to attempt an answer.   We have to formulate an idea of what the process is that is being modeled by this little dataset.  [By “modeled” we simply mean that the daily results of some system are being visually represented.]

“No, we don’t!”, some will say.  We just grab our little rulers and draw a little line like this (or use our complicated maths program on our laptops to do it for us) and Voilà!  The answer is revealed:

[Figure: a trend line drawn through the Day 1–3 data]

And our answer is “10”……(and will be wrong, of course).

There is no mathematical or statistical or physical reason or justification to believe we have suggested the correct answer.  We skipped a very important step.  Well, actually, we rushed right over it.  We have to first try to guess what the process is (mathematically, what function is being graphed) that produces the numbers we see.  This guess is more scientifically called a “hypothesis” but is no different, at this point, from any other guess.  We can safely guess that the process (the function) is “Tomorrow’s Total will be Today’s Total plus 2”.  This is, in fact, the only reasonable guess given the first three days’ data – and it even complies with formal forecasting principles (when you know next to nothing, predict more of the same).
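Stated as a function, the guess is trivial to write down.  A sketch (assuming, as the straight-line answer of “10” implies, counts of 2, 4 and 6 buttons on Days 1 through 3):

```python
def predicted_buttons(day, start=2, step=2):
    """Hypothesized process: Day 1 ends with 2 buttons, plus 2 each day after."""
    return start + step * (day - 1)

print([predicted_buttons(d) for d in (1, 2, 3)])  # [2, 4, 6] -- matches Days 1-3
print(predicted_buttons(5))                       # 10 -- the ruler-and-line answer
```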

Let’s check Thursday’s graph:

[Figure: The Button Collection, Day 4]

We’re rocking! – right on target – now Friday:

[Figure: The Button Collection, Day 5]

Shucks!  What happened?  Surely our hypothesis is correct.  Maybe a glitch…?  Try Saturday (we’re working the weekend to make up for lost time):

[Figure: The Button Collection, Day 6]

Well, that looks better.  Let’s move our little trend line over to reassure ourselves….:

[Figure: Day 6 with the original hypothesis trend line moved over]

Well, we say, still pretty close…darn those glitches!

But wait a minute…what was our original hypothesis, our guess about the system, the process, the function that produced the first three days of results?   It was:  “Tomorrow’s Total will be Today’s Total plus 2”.   Do our results (these results are a simple matter of counting the buttons – that’s our data gathering method – counting) support our original hypothesis, our first guess, as of Day 6?   No, they do not.  No amount of dissembling – saying “Up is Up”, or “The Trend is still going up” makes the current results support the original hypothesis.

What’s a self-respecting scientist to do at this point?   There are a lot of things not to do:  1)  Fudge the results to make them agree with the hypothesis,  2) Pretend that “close” is the same as supporting the hypothesis – “see how closely the trends correlate?”,  3) Adopt the “Wait until tomorrow, we’re sure this glitch will clear up” approach,  4) Order a button recount, making sure the button counters understand the numbers that they are supposed to find, 5) Try re-analysis, incremental hourly in-filling, kriging, de-trending and anything else until the results come into line like “they should”.

While our colleagues try these ploys, let’s see what happens on Day 7:

[Figure: The Button Collection, Day 7]

Oh, my…amidst the “Still going up” mantra, we see that the data can really no longer be used to support our original hypothesis – something else, other than what we guessed, must be going on here.

What a real scientist does at this point is:

Make a new hypothesis that more correctly explains the actual results, usually by modifying the original hypothesis.

This is hard – it requires admitting that one’s first pass was incorrect.  It may mean giving up a really neat idea, one that has professional or political or social value apart from solving the question at hand.  But – it MUST be done at this point.

Day 8, despite being “in the right direction”,  does not help our original guess either:

[Figure: The Button Collection, Day 8]

 

The whole week trend is still “going up” – but that is not what the trend line is for.

What is that trend line for?

  • To help us visualize and understand the system or process that is creating (causing) the numbers (daily button counts) that we see – particularly useful with data much messier than this.
  • To help us judge whether or not our hypothesis is correct.

Until we understand what is going on, what the process is, we will not be able to make meaningful predictions about what the daily button counts will be in the future.  At this point, we have to admit, we do not know because we do not understand clearly the process(es) involved.

Trend lines are useful in hypothesis testing – they can show researchers – visually or numerically —  when they have correctly “guessed” the system or process underlying their results or, on the other hand, expose where they have missed the mark and give them opportunities to re-formulate hypotheses or even to “go back to the drawing board” altogether if necessary.
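In code, that check is nothing more than comparing the hypothesis against each day’s count and watching the misses.  The daily counts below are invented stand-ins for the numbers behind the graphs above:

```python
# Testing the guess against the data, not against the 'direction' of the trend.
hypothesis = lambda day: 2 * day          # "plus 2 per day" gives 2, 4, 6, 8, ...
observed = [2, 4, 6, 8, 7, 9, 6, 8]       # stand-in button counts, Days 1-8

misses = [obs - hypothesis(day) for day, obs in enumerate(observed, start=1)]
print(misses)  # near zero early, then growing shortfalls: the guess has failed
```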

Discussion:

My example above is unfair to you, the reader, because by Day 10, there is no apparent answer to the question we need to answer:  What is the process or function that is producing these results?

That is the whole point of this essay.

Let me make a confession:    This week’s results were picked at random – there is no underlying system to discover in them.

This is much more common in research results than is generally admitted – one sees seemingly random results caused by poor study design, too small a sample, improper metric selection and “hypothesis way off base”.  This has resulted in untold suffering of innocent data being unrelentingly tortured to reveal secrets it does not contain.
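It is easy to demonstrate how readily pure noise yields a “trend”.  A quick sketch, assuming nothing but a random number generator:

```python
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(10)

# Fit a trend line to 1,000 sets of ten purely random 'daily counts'.
slopes = [np.polyfit(days, rng.uniform(0, 10, 10), 1)[0] for _ in range(1000)]
print("mean |slope| of fits to pure noise:", round(float(np.mean(np.abs(slopes))), 2))
# Nearly every fit has a visibly non-zero slope: a 'trend' with no process behind it.
```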

We often think we see quite plainly and obviously what various visualizations of numerical results have to tell us.  We combine these with our understanding of things and we make bold statements, often overly certain.  Once made, we are tempted to stick with our first guesses out of misplaced pride.  If our time periods in the example above had been years instead of days, this temptation would have become even stronger, maybe irresistible – irresistible if we had spent ten years trying to show how correct our hypothesis was, only to have the data betray us.

When our hypotheses fail to predict or explain the data coming out of our experiments or observations of real world systems, we need new hypotheses — new guesses — modified guesses.  We have to admit that we don’t have it quite right — or maybe worse, not right at all.

Linus Pauling, the brilliant Nobel Prize-winning chemist, is commonly believed to have chased, late in life, the unicorn of a Vitamin C cancer cure for far too many years, refusing to re-evaluate his hypothesis when the data failed to support it and other groups failed to replicate his findings.  Dick Feynman blamed this sort of thing on what he called, in his homey way, “fooling one’s self”.  On the other hand, Pauling may have been right about Vitamin C’s ability to ward off the common cold or, at least, to shorten its duration — the question still has not been subject to enough good experimentation to be conclusive.

When, as in our little Button Collector example above, our hypotheses don’t match the data and there doesn’t seem to be any reasonable, workable answer, then we have to go back to basics in testing our hypotheses:

1) Is our experimental design valid?

2) Are our measurement techniques adequate?

3)  Have we picked the right metrics to measure? Do our chosen metrics actually (physically) represent/reflect the thing we think they do?

4)  Have we taken into account all the possible confounders?  Are the confounders orders-of-magnitude larger than the thing we are trying to measure?  (see here for an example.)

5)  Do we understand the larger picture well enough to properly design an experiment of this type?

That’s our real topic today — the list of questions that a researcher must ask when his/her/their results just won’t come in line with their hypotheses regardless of repeated attempts and modifications of the original hypothesis.

I have started the list off above and I’d like you, the readers, to suggest additional items and supply your personal professional (or student era) experiences and stories in line with the topic.

# # # # #

“Wait”, you may say.  What about trends and predictions?

  • Trends are simply visualizations — graphical or mental — of the change of past, existing results.  Let me repeat that – they are results of results, effects of effects; they are not and cannot be causes.
  • As we see above, even obvious trends cannot be used to predict (no less cause or determine) future values in the absence of a true [or at least, “fairly true”] and clear understanding of the processes, systems and functions (causes) that are producing the results, the data points, which form the basis of your trend.
  • If one does have a clear and full-enough understanding of the underlying systems and processes, and if the trend of results fully supports your understanding (your hypothesis), and if you are using a metric that mirrors the processes closely enough, then you could possibly use the trend to suggest possible future values, within bounds – almost certainly so if probabilities alone are acceptable as predictions.  But it is your understanding of the process, the function, that allows you to produce the prediction, not the trend – and the actual causative agent is always the underlying process itself.
  • If one is forced by circumstance, public pressure, political pressure or just plain hubris to make a prediction (a forecast) in the absence of understanding — under deep uncertainty — the safest bet is to predict “more of the same” and allow plenty of latitude even in that forecast.

 

Notes:

To those of you who feel you have wasted your time reading these admittedly simplistic examples: You are right: if you already have a firm grasp of these points and never ever let yourself be fooled by them, you may have wasted your time.

Recent studies on trends in non-linear systems [NB: “Amongst the dynamical systems of nature, nonlinearity is the general rule, and linearity is the rare exception.”  — James Gleick, CHAOS: Making a New Science] don’t offer much hope for using derived trends in a predictive manner — no more than “maybe things will go on as they have in the past — and maybe there will be a change.”  Climate processes are almost certainly non-linear – thus, for metrics of physical outputs of climate processes [temperatures, precipitation, atmospheric circulations, ENSO/AMO/PDO metrics], drawing straight lines (or curves) across graphs of the numerical results of these nonlinear systems in order to make projections is apt to lead to non-physical conclusions and is illogical.
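A one-line chaotic system shows the problem.  The logistic map below is a standard textbook example, not a climate model; the point is only that a straight line fitted to one window of a nonlinear system’s output tells you nothing about the next window:

```python
import numpy as np

r, x = 3.9, 0.5                    # logistic map x -> r*x*(1-x), chaotic at r = 3.9
series = []
for _ in range(60):
    x = r * x * (1 - x)
    series.append(x)

t = np.arange(30)
print("trend of steps  1-30:", round(np.polyfit(t, series[:30], 1)[0], 3))
print("trend of steps 31-60:", round(np.polyfit(t, series[30:], 1)[0], 3))
# The two slopes bear no useful relation; each 'trend' is an artifact of its window.
```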

There is a growing body of knowledge on the subject of Forecasting. [Hint: drawing straight lines on graphs is not part of it.]  Scott Armstrong has been heading an effort for many years to build a set of Forecasting Principles “intended to make scientific forecasting accessible to all researchers, practitioners, clients, and other stakeholders who care about forecast accuracy.”  His work is found at ForecastingPrinciples.com.  His site has many articles on the troubles of forecasting climate and global warming (PgDn at the link).

# # # # #

Author’s Comment Policy:

I enjoy reading your input to the discussion — positive or negative.

The subject in this essay is really “What questions must  a researcher ask when his/her/their results just won’t come in line with their hypotheses regardless of repeated experimental attempts and modifications of the original hypothesis?”

Most readers here are skeptical of mainstream, IPCC-consensus Climate Science, which, in my opinion, has fallen prey to desperate attempts to shore up the failed hypotheses collectively called “CO2-induced catastrophic global warming” — GHGs will generally induce some warming, but how much, how fast, how long, and whether beneficial or harmful are all questions very much unanswered.  Still up in the air is whether or not the Earth’s climate is self-regulating despite changing atmospheric concentrations of GHGs and solar fluctuations.

I’d like to read your suggestions on what questions CliSci needs to ask itself to get out of this “failed hypotheses” mode and back on track.

[Re: Trends — I know it seems impossible that some people actually believe that trends cause future results, but I have been through two very rough post-and-comment battles on the subject — and the number of believers (all very vocal) is quite large.  Unfortunately, this concept runs up against a lot of the training of academic statisticians — who, in their own way, are among the most vocal believers.  Let’s try not to fight that battle here again – you can read all the comments and my replies at the two posts linked at the very beginning of this essay.]

[NB: 5 Jan 2018 — several minor typos that have been helpfully pointed out by readers have been corrected — since publication.  Details are in the comments section where pointed out. –kh]

# # # # #

 

Comments:
R. Shearer
January 4, 2018 6:26 pm

What would a projection of temperatures based on actual observations of previous natural climate variation and uncertainty look like? This could be for periods of 100, 500, 1000 years, etc.

D. J. Hawkins
Reply to  R. Shearer
January 4, 2018 7:25 pm

This would be similar to someone who looks at the stock market from a technical aspect, plotting volumes, price runs and various crossings to determine the future price or even just the future trend, and triggering buy and sell orders based on these metrics.  It reminds me of the theory of epicycles needed to make the geocentric universe model work.

In any event, I believe any number of people have attempted to use Fourier analysis to do just what you suggest.  One critical problem is the lack of sufficient high-quality data to make a really good go of it.  The Nyquist theorem suggests your sampling frequency has to be at least 2f to detect a sinusoidal signal of frequency f.  By that requirement, we have, almost, enough data to look at a period of 100 years.  And that’s generously assuming that data going back to the 1880s or so is as good as the data we have today.  In my opinion, today’s data is still none too good.
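As a quick illustration of the sampling point (a sketch with arbitrary numbers, not temperature data): sample a sine wave below twice its frequency and the samples masquerade as a much slower signal.

```python
import numpy as np

f = 1.0                                    # signal frequency, cycles per unit time
t_ok = np.arange(0, 1, 1 / (8 * f))        # 8 samples per cycle: adequate
t_bad = np.arange(0, 4, 1 / (1.25 * f))    # 1.25 samples per cycle: below Nyquist

print(np.round(np.sin(2 * np.pi * f * t_ok), 2))   # traces one real cycle
print(np.round(np.sin(2 * np.pi * f * t_bad), 2))  # a slow, bogus 'trend' (an alias)
```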

D. J. Hawkins
Reply to  D. J. Hawkins
January 4, 2018 8:04 pm

@Kip Hansen

Very long term, the market has provided an 8% return year-over-year.  If you invest in a mutual fund that reflects the broad market, such as the S&P 500, the Wilshire 5000, or even the Dow-Jones average, you will, in the long haul, out-perform every boutique fund out there.  In fact, in any given year you will out-perform 85% of them.  The key issue is always: when do I get out?  If you use dollar-cost averaging to buy equities, then start to switch to bonds around 50, using your age to determine the percent bonds you hold, you can do pretty well, as bonds and stocks tend to move in opposite directions, but bonds are normally lower in volatility with lower returns.  This strategy minimizes risk as you move to retirement, and certainly needs to be tailored to your specific circumstance.  As always, YMMV.

Ray Boorman
Reply to  D. J. Hawkins
January 4, 2018 8:38 pm

“In my opinion, today’s data is still none too good”. In my opinion, you are overly generous to the available temperature data for any number of reasons. We are talking about taking the average temperature of Earth’s atmosphere, or at least the lower few metres of it where people live. Considering the huge spatial area covered by the atmosphere, it is an impossible task that is being attempted. The unsolvable problems include the massive range in the altitude of the surface the atmosphere sits on; the fact 70% of the planet’s surface is covered by water, with close to zero temperature observations; & the vast areas of land surface which are uninhabited, again with almost zero temperature observations.

Personally, I don’t have any faith in the huge number of assumptions used by climate scientists in their computations of what is claimed to be Earth’s average temperature. And let’s not start on the guesswork involved in their attempts to calculate what the average temperature was 100 years ago – especially as they change the claimed historical temperatures on a regular basis, without bothering to explain why their previous calculation was wrong.

If today’s climate scientists are the cream of the crop, humankind is doomed in the near future.

KTM
Reply to  D. J. Hawkins
January 4, 2018 8:59 pm

When I first discovered technical analysis, I was fascinated. I would visit a message board every day to see day traders and swing traders share their technical analysis of different markets.

I did a bit of swing trading myself, and made a small amount of profit over time. But after a few years as a regular observer and hobbyist, it became obvious that whatever technical analysis was done could be read in different ways to give either an upward move or a downward move. It can be used to very convincingly describe the past, but the predictive ability seemed to be zero.

I finally got a letter from the IRS saying I owed them tens of thousands of dollars of unpaid capital gains taxes because I hadn’t included any cost basis in the transactions. I corrected my tax return, paid the extra $10 or whatever that I actually owed them, and stopped visiting the technical analysis forum.

Reply to  D. J. Hawkins
January 5, 2018 12:09 am

“Very long term, the market has provided an 8% return year-over-year. If you invest in a mutual fund that reflects the broad market such as the S&P 500…”

What a bizarre statement to make… From 1871-1971 the annualized return adjusted for inflation was 1.9%.

From 1971-1981 it was -1.76%

From 2000-2016 it was 3.2%

From 1999-2009 it was -4.9%

From 1980-2010 it was 4.2%

What does it mean to say there is an 8% return on average? What if you want to retire during a downturn? Then it’s all a disaster, isn’t it?

rbabcock
Reply to  D. J. Hawkins
January 5, 2018 5:33 am

Over 50% of all trades in the US are now done autonomously by algorithm driven computers and the number is increasing. A game changer to say the least. The things driving a stock price in the past (valuation, PE, dividends, etc) are not necessarily what is driving the stock price today.

Unless you are privy to the algorithm logic and can trade in milliseconds, I would think all the TA charts you used to use go out the window.

Kind of like calculating the global temps. Reading thermometers and using actual data has been replaced by homogenized garbage.

Scott
Reply to  D. J. Hawkins
January 5, 2018 6:54 am

Another example, different from stocks, is valuing fantasy football players from year to year. Todd Gurley is a great example. He had a very poor 2016, and the Gurley “traders” had him valued very low for 2017. But the Gurley “investors” looked beyond the one bad data-point year, which included bad coaching, poor offensive line play, etc., and decided to look at Mr. Gurley’s past body of work and overall talent. Gurley ended up as one of the best players this year and won many fantasy championships for his owners. The funny thing is that the “traders” who were dead wrong will generally not admit they were wrong…even though their opinion cost them championships, they just say they made the right decision on him at the time, have new information now, and will value him higher next year.

Btw, I find traders to be analogous to global warming alarmists; they are very poor forecasters and get all excited and emotional during every El Niño. You don’t need to be well informed when the only thing that matters is the last data point and your ruler. The global warming skeptics are generally much better informed and more level headed.

Michael S. Kelly
Reply to  D. J. Hawkins
January 5, 2018 2:26 pm

@D. J. Hawkins: Fourier analysis is one of the most misused of all mathematical tools, and it’s just as useless for predicting global temperature as the linear trend line. In fact, what most people who try Fourier analysis for climate studies don’t think about (if they ever knew it) is that it is exactly the same thing as least-squares fitting a linear trend line to a collection of data. The only difference is that a Fourier series can exactly represent any function whatsoever if enough terms are used. But that is all it does: represent. It isn’t the same thing as the function.

A Fourier series is literally a least-squares fit of a sum of discrete sine and cosine functions (instead of a polynomial), and a Fourier transform is the continuous analog. It is very useful in the analysis of data known to be the sum of periodic functions, even if noise is present. Outside of that, it is just a more mathematically sophisticated version of linear (or polynomial) regression, whose very sophistication lulls many (most?) users into relying on it to identify information in the data which may not be there at all.
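That “represent, not explain” point can be shown in a few lines, assuming nothing beyond numpy’s standard FFT: the inverse transform reconstructs any finite series exactly, even pure noise, so a perfect Fourier fit carries no explanatory weight by itself.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=16)          # arbitrary data, no periodic process behind it

coeffs = np.fft.fft(data)           # a full set of sine/cosine coefficients
rebuilt = np.fft.ifft(coeffs).real  # the inverse transform
print(np.allclose(data, rebuilt))   # True: an exact 'fit' of meaningless 'cycles'
```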

Now, one might think that climate data ought to contain a number of periodic functions. The Milankovitch Cycle frequencies are known a priori, and ought to show up in Fourier analysis. But Willis has done a lot of such analysis, and doesn’t seem to find any such thing. So it’s likely that we just don’t know all of the processes at work. If anything, that is the most useful takeaway of the application of Fourier methods.

This webpage gives an excellent overview of the pitfalls of Fourier analysis. (The rest of the site has very useful explanations of a number of subjects of interest to readers here.)

Tom in Florida
Reply to  D. J. Hawkins
January 6, 2018 1:12 pm

Here is the problem with using stock technical analysis as an example. The stock market is subject to emotional responses. The other day Jeff Sessions announced that the DOJ was going to start to enforce the marijuana laws that the Obama administration was ignoring. That sent the marijuana stocks plummeting, not based on what actually happened but based on what might happen. So a lot of the trends in the stock charts go up and down via emotional responses to news, whether it is real or fake.

D. J. Hawkins
Reply to  D. J. Hawkins
January 9, 2018 10:07 am

@Will
Two sources:

https://www.cnbc.com/2017/06/18/the-sp-500-has-already-met-its-average-return-for-a-full-year.html

https://www.thesimpledollar.com/where-does-7-come-from-when-it-comes-to-long-term-stock-returns/

One says 7%, the other almost 10%. Not adjusted for inflation, as such indices hardly ever are. These returns also include dividends, not just capital appreciation.

Hivemind
Reply to  R. Shearer
January 5, 2018 3:22 am

First, you need to know what the previous variability was. This graph off Wikipedia is a good start for the layman. It’s easy to get to and free of obvious warmist propaganda (i.e. it uses ‘differences’ instead of ‘anomalies’).

It clearly shows that the temperature has changed frequently by large margins, and although it has often been higher than it is today, it spends most of its time in the bitterly cold range. In fact, the temperature frequently crashes just after reaching a peak. Far from being able to assume that temperatures will keep rising linearly, the only safe assumption is that they will drop again soon. By a lot.

François
Reply to  Hivemind
January 5, 2018 6:43 am

As usual: when is year 0? Date, please.

rbabcock
Reply to  Hivemind
January 5, 2018 8:36 am

“Thousands of years ago” explicitly implies the 0 on the graph is NOW…or am I missing something?

Latitude
January 4, 2018 6:29 pm

Kip…very good….excellent…and I enjoyed reading it too….you’re spot on
Trigger warming < made me laugh

Reply to  Kip Hansen
January 9, 2018 12:36 pm

Predictions about the future climate are impossible.

Predictions of the past climate are difficult too.

The temperature “history books” keep changing !

In the US the 1930’s were actually a hot dust bowl.

After a few more decades of “adjustments”
the 1930’s, I believe, will be in the leftist
history books as a snow bowl.

I think the most important lesson, by far,
learned from the climate cult in the past 50
years is:

” No one can predict the future climate. ”

We’ve got 30+ years of inaccurate climate model
predictions / projections / simulations / bull-shirt
as proof of that !

And you, Mr. Smarty Pants Hansen,
just violated that key lesson !

So, just because i thought you wrote
the best article on this website in 2017,
doesn’t mean I’m going to be nice to
you this month.

Where do you live, Hansen?

I’m coming over to slap you upside
the head for making a climate prediction.

The three basic lessons
of “climate change science” are:

(1) No one can predict the future climate,

(2) The GCM climate models are failed models
because they make wrong predictions, and

(3) Leftists are stupid heads for believing
a coming global warming catastrophe
from CO2 that will end life on Earth.

There will be a test, so study!

http://www.elOnionBloggle.Blogspot.com

Komrade Kuma
January 4, 2018 6:35 pm

My favorite example in this area is that of a set of data points generated at intervals, regular or irregular, by a sine formula, i.e. x = A·sin(θ) (no need for a phase angle in this example).

If you start the data generation on a ‘trough’ and finish on a ‘crest’, i.e. if you do not include data generated over an integer number of full cycles, then a line of ‘best fit’ will slope upwards, and downwards if vice versa.

In the real world things are reversed, and you have to know the period of any cyclical component of the data generator before you can meaningfully start doing ‘best fit’ number crunching, irrespective of whether the ‘curve’ is a line, a parabola or whatever. I actually saw this most basic stuff-up of data analysis done in a paper about sea level rise affected by the Pacific Decadal Oscillation. The data started on a trough and finished on a crest, and the trend was upwards!! It just makes you wonder about the calibre of people working in this area.
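The trough-to-crest trap is easy to reproduce.  A sketch with a pure, noise-free sine (invented, of course; real series are far messier):

```python
import numpy as np

half = np.linspace(-np.pi / 2, np.pi / 2, 50)                # trough to crest
full = np.linspace(-np.pi / 2, -np.pi / 2 + 4 * np.pi, 200)  # two complete cycles

print("half-cycle slope:", round(np.polyfit(half, np.sin(half), 1)[0], 2))  # steeply positive
print("full-cycle slope:", round(np.polyfit(full, np.sin(full), 1)[0], 2))  # near zero
```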

AndyG55
Reply to  Komrade Kuma
January 4, 2018 6:45 pm

Like, what is the linear trend calculated on this simple sine graph?

Reply to  AndyG55
January 4, 2018 7:27 pm

The sine wave trend is established using just the maxima or minima. Otherwise there is no trend, as trends are not repetitive/cyclical; they are unidirectional.

Peter Sable
Reply to  AndyG55
January 4, 2018 8:26 pm

The sine wave trend is established using just the maxima or minima. Otherwise there is no trend as trends are not repetitive/cyclical, they are unidirectional

That seems a reasonable way to do this. In which case you have a trendline consisting of a whole 3 points.

Which is also correct from a signal processing standpoint. With that sine wave, you literally have 3 samples. Barely above Nyquist.

Any sort of noise or overlapping signals would of course render those three samples nowhere near statistically significant. You could only achieve statistical significance about the well-constructed system that created this (y = mx + sin(ax)). You could not detect anything useful about a natural system with a mere 3 data points.

Which of course, with 70-year overlapping multi-decadal cycles and likely 200- and 1000-year cycles, means our measly 38 years of satellite record is too short by a factor of at least 10x to tell us anything useful about temperature trends.

Peter

michael hart
January 4, 2018 6:56 pm

It seems you can’t teach an old dog-walker new tricks

J Mac
January 4, 2018 7:15 pm

6. Do we have the ethical humility to accept the null hypothesis, when our personal hypothesis is not supported by honestly collected and valid data?

Reply to  J Mac
January 4, 2018 7:26 pm

It is generally impossible for an idea to be invalidated by the data of others as they collect their data differently, use different equipment and methods, and have different samples for study. Someone doing a different experiment with different data to compare to your experiment does not disprove your experiment. You just point out that the methods and data are different and they may be seeing another effect.

jclarke341
January 4, 2018 7:26 pm

“Until we understand what is going on, what the process is, we will not be able to make meaningful predictions…”

And with this simple example demonstrating the truth of the above statement, the idea of catastrophic anthropogenic global warming is rendered null and void.

It doesn’t matter if the science is ‘settled’.
It doesn’t matter if all the world’s academies of science support it.
It doesn’t matter if 97% of all scientists agree.
It doesn’t matter if you believe something must be done in order to save the children.
It doesn’t matter if you believe something must be done to save the planet.
It doesn’t matter if trillions of dollars are at stake
It doesn’t matter if the Pope himself swears that God told him so.

If the hypothesis does not explain the observations, and we do not understand why that is so, any prediction made with the hypothesis is meaningless. Case closed. We are done.

jclarke341
Reply to  Kip Hansen
January 5, 2018 5:51 am

Totally agree!

jclarke341
Reply to  Kip Hansen
January 5, 2018 5:55 am

The “We are done!” statement was referring to catastrophic global warming, as in “We are done with this unfounded climate crisis propaganda.”

January 4, 2018 7:31 pm

There is no error at all in the yellow graph. The trend is a line, not a line segment. Linear regression gives the equation for a line, not a line segment. The idea is that X and Y are dependent, and therefore you have a sample range that is not the full range of their interaction. In effect, the equation of the line makes a prediction as a model of data you have not yet collected.

Reply to  Donald Kasper
January 5, 2018 9:48 am

This is the same mistake “scientists” make when they don’t address precision and accuracy. For what values is the “trend line” valid? The equation of a linear regression is only valid between the data points used to calculate it. You can make a hypothesis that a backcast or forecast will follow the same linearity, but it is only a guess. You must wait to see if your hypothesis is valid or not. Believe me, in the real world, the next period will likely result in a change to your hypothesis.

January 4, 2018 7:36 pm

The GSTA graph shows one linear regression, one other (probably parabolic) regression, and a moving average. The moving average is not a trend.

jorgekafkazar
Reply to  Kip Hansen
January 4, 2018 9:00 pm

“How To Lie With Statistics” by Huff is worth reading.

January 4, 2018 7:41 pm

“Trends are simply visualizations — graphical or mental — of the change of past, existing results. Let me repeat that – they are results of results – effects of effects — they are not and cannot be causes.” Linear trend is the formula y = mx + b. Y is a function of X, or y = f(x). X is the cause and Y is the effect; they are dependent variables. Trends predict results in the future only for dependent variables. One graph has time on the X-axis. Time does not make eggnog commodity prices, therefore such a trend fit would have no predictive meaning. Climate time series have no predictive meaning. CO2 versus global mean temperature with a high correlation coefficient (fit to the line) would infer predictive capability. CO2 versus time is the attempt to say that to lower CO2 you must go back in time, which has no social or coherent meaning.

Clyde Spencer
January 4, 2018 7:43 pm

Kip,
The remarks about the dog walking cartoon reminded me of a TV program narrated by Neil deGrasse Tyson, where he was walking a dog on a long leash on the beach to explain climate. I thought that it was a poor analogy because the claim was made that the dog represented the weather and the average track of the dog (controlled by the human holding the leash) represented climate. That is, the ‘climate’ was not the average of where the dog wandered, but was actually controlled by the human, who had the free will to walk wherever he wanted. So, the dog was not really determining the climate! Where the dog went (weather) was actually determined by where the human walked (climate). One might say the tail was wagging the dog in that explanation!

Ragnaar
Reply to  Clyde Spencer
January 4, 2018 7:55 pm

Assume the dog sees another dog off to the right. It stays to the right. The dog is then the climate, and the man is…who knows?

D. J. Hawkins
Reply to  Ragnaar
January 4, 2018 8:10 pm

Especially if the dog is an ill-trained Great Dane. 😀

Kurt in Switzerland
Reply to  Ragnaar
January 5, 2018 5:52 am

This is actually a very good representation of the belief-driven “science” of human-driven climate change. In this worldview, the [CO2-forced] “climate” actually dictates the long-term path to which “weather” is constrained.

A funny thing is that NdgT’s walk is roughly constrained by the extent of the surf on the sandy beach, which is fairly constant. Now he wouldn’t want to get his feet (or his ankles) wet, would he? I suppose if he did, he could always “blame” it on the weather…

D. J. Hawkins
Reply to  Kip Hansen
January 4, 2018 8:13 pm

@Kip
Here you go:

Published on May 28, 2014.

jclarke341
Reply to  Kip Hansen
January 5, 2018 6:34 am

In the video, NDGT says: “All that additional heat has to go somewhere. Some of it goes into the air. Most of it goes into the oceans.”

That was the point where my head nearly exploded! ALL OF IT GOES INTO THE AIR! Every last bit of additional heat brought into the system by increasing atmospheric CO2 must originate in the atmosphere, where the additional CO2 resides! From there, and in time, some of it can go into space or the Earth’s surfaces, but all of it starts in the air. If we aren’t finding it in the air, then it is either smaller than we thought, being offset by unknown phenomena, or most likely both. Either one of these falsifies the hypothesis. It is impossible for increasing CO2 in the atmosphere to currently be the primary driver of the climate.

The video would be much more scientifically accurate if Mr. Tyson, representing CO2, was the size of a Ken doll. It would also be much more fun to watch.

jclarke341
Reply to  Kip Hansen
January 5, 2018 8:40 am

Of course, the assumption in these examples is that we know where the owner is going, why he is going there and that he will walk in a straight line, which is a complete fallacy. In truth, we are unsure of where he is going, or why, and must recognize that the owner has the same ability to wander as the dog.

Yirgach
Reply to  Kip Hansen
January 6, 2018 8:01 am

The Teddy Talk Dog walking video was published on YouTube on Jan 4, 2012.

Yirgach
Reply to  Kip Hansen
January 6, 2018 10:01 am

But wait, there’s more: the original video was posted on Vimeo, from the Norwegian TV series Siffer:

Title Siffer: Klima
Uploader Ole Christoffer Haga
Uploaded Saturday, March 12, 2011 at 2:47 PM EST

Dave Fair
Reply to  Clyde Spencer
January 4, 2018 8:42 pm

I own dogs large enough to jerk Tyson’s ass into the surf. The same for real life science.

Reply to  Dave Fair
January 5, 2018 3:32 pm

King German Shepherds, by any chance? My ass has been jerked to the ground on more than one occasion by such a dog.

And this could be a metaphor for how we just never quite know where the beast will end up, even given the straight line of his master’s previous walking path.

Suppose the leash breaks? Suppose the master falls? What if the dog gets stronger and pulls greater distances off his master’s former course? Or, heaven forbid, what if the dog spots a cat? — say bye bye to upright posture and predictable paths.

Never think, for sure, that you know the dog you are walking.

Oh, I forgot this is a statistical discussion way over my head.

Rick C PE
Reply to  Clyde Spencer
January 4, 2018 9:10 pm

Now I’m more confused than before. I have understood that ‘climate’ is defined as the average weather over a long time period – typically 30 years. If so, then doesn’t the weather have to change in some consistent way over several decades for there to be a change in climate?

When I look up ‘climate’ in old (pre-AGW debate) references I find mainly information on climate zones based essentially on what plants will grow where. These zones appear to be quite consistent with current gardening and agricultural references. When are these going to be updated to reflect the climate change that is supposed to have occurred? And more importantly will I be able to grow oranges in Wisconsin soon?

paqyfelyc
Reply to  Rick C PE
January 5, 2018 2:08 am

Wisconsin? What the Hell is that? A place where they pile up whiskey on sin?

[Only by Packing the stadium in Green Bay. .mod]

rd50
Reply to  Rick C PE
January 5, 2018 3:10 am

You are right. Climate zones in the US, Canada, and UK have not changed since their conception; they are the basis of what can be planted in these zones with a good possibility of survival. Not 100% sure, but a good possibility.

François
Reply to  Rick C PE
January 5, 2018 6:54 am

Well, many years ago, I used to live in Wisconsin. I have no idea about the possibility of growing oranges there nowadays, though the idea seems a bit far-fetched. Closer to home, I can tell you about France: sixty years ago, it would have been ridiculous to even try growing olive trees in Paris; everyone does it now, and they bloom, and you can even get some ripe olives, every year…

Old England
Reply to  Rick C PE
January 5, 2018 8:57 am

@François – growing olives in Paris – one of the ‘benefits’ or, more accurately, symptoms of UHI.

Kurt in Switzerland
Reply to  Clyde Spencer
January 5, 2018 4:41 am

I think this was the original animation which was broadly distributed publicly:
https://www.youtube.com/watch?v=e0vj-0imOLw (Norway’s TeddyTV, 04 Jan 2012)

Co-opted by SkS’ Tom Curtis 07 Jan 2012:
https://skepticalscience.com/trend_and_variation.html

Nick Stokes
January 4, 2018 7:55 pm

It’s true that a trend is just an arithmetic construct from a set of numbers. The trend value could be described as the first moment (zeroth being mean). To relate it to forecasting, you need a model, which should take into account what you know. And trend is one of those things.

Forecasting is important. It’s a routine part of budgeting. How much should we allow for fixing roads next year? The first things you’d ask are: how much was needed in recent years, and what are the mean and trend? Of course, you might also ask whether there are particular special things happening. But if it seems like an ordinary year, what to do?

One model is random walk, which is the one that says you might as well allocate the same as this year. But if costs have been going steadily up, that is information you should allow for. That would be a random walk with drift, and then the best estimator is to extend using the trend value. It isn’t guaranteed to be right; it’s the best you can infer from the information.
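For what it’s worth, the two models Nick names are one line of arithmetic apart.  A sketch with invented yearly repair costs:

```python
costs = [100, 108, 115, 121, 130]   # hypothetical spend per year, oldest first

rw_forecast = costs[-1]             # random walk: next year = this year
steps = [b - a for a, b in zip(costs, costs[1:])]
drift_forecast = costs[-1] + sum(steps) / len(steps)  # random walk with drift

print(rw_forecast, drift_forecast)  # 130 vs 137.5
```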

paqyfelyc
Reply to  Nick Stokes
January 5, 2018 2:34 am

Bad example. Fixing roads is not “needed”; it is decided upon, with close to nil decent reasoning. Done properly, it would be the result of a balance between the cost of fixing the road and the cost of NOT fixing the road (slower speed, increased deterioration of vehicles, etc.).
Obviously, the previous year’s budget is a very poor indicator of the current balance. Some new technology making road fixing cheaper would shift a balance previously set at “don’t fix the road, incur higher vehicle cost” toward “fix the road, incur lower vehicle cost”, so that the road-fixing budget would rise just because fixing got cheaper! Or road fixing could cost more and more per unit, just because of rising pay of workers or whatever, while the average vehicle turned cheaper, so that you care less about its maintenance, doing the very opposite.

There are all sorts of models. Some include stars, eagles in the sky, chicken entrails, and spurious correlations.
Following the trend is NOT “the best you can infer from the information”.

Nick Stokes
Reply to  paqyfelyc
January 5, 2018 3:45 am

‘bad example. fixing road is not “needed”, ‘
Often it is – I said urgent road repairs. Landslides etc. But it doesn’t matter; it could be any one of the myriad things where a budget provision has to be made, and the only guidance of what it should be is past experience. You have to come up with a number, and past expenditures and their rate of change are the obvious ways.

paqyfelyc
Reply to  paqyfelyc
January 5, 2018 8:09 am

@kip
although it wasn’t about roads, I did this kind of budgeting, and the answer is quite simple:
1) If this is rare enough, you budget zero. This is off budget. If and when it occurs, you cry for money from some others, you cancel a few planned things (some now irrelevant, some because you wanted to kill them ASAP and now have a plausible reason that makes it possible, some because you so badly need the money), you take on some new debt, and voilà. In other words: you rebudget.
2) If this is frequent enough, you budget the max that will be accepted by your control bureau. This is reserve money you will use for whatever you want but couldn’t dare to put in the budget to begin with. And, if you are unlucky enough for the event to occur, you can bet this max will still be far from enough, so case 1) applies again.

Nick Stokes
Reply to  paqyfelyc
January 5, 2018 11:42 am

“How do you budget for “urgent” “
Maybe road repair wasn’t a good example; it depends on the size of the authority. But there are myriad things for which people budget based on past numbers, basically last year plus trend. How much for printing ink? How much for Christmas cards? Coffee?

Nick Stokes
Reply to  paqyfelyc
January 5, 2018 12:56 pm

Kip,
“Using examples of common-sense everyday forward thinking is not not not the same as depending on trends of existing data to predict the future values. “
Why not? The point is, it refutes your absolute rejection of trend as a predictor. The question of how far forward it will work is just a matter of scale, and the need for a prediction. If you really need a prediction in twenty years time, well, that’s a problem. But a trend-based prediction may well still be optimal, unless you can bring more knowledge (eg GCM) to bear.

A C Osborn
Reply to  Nick Stokes
January 5, 2018 4:47 am

Which is exactly how the UK ran out of salt and grit, and didn’t have enough gritters and snow plows, about 4 or 5 years ago.
They listened to the so-called experts who said it hasn’t snowed much lately, so it will snow even less with globull warming.
Enough said.

You also forgot “Contingency” which should always be built in.

Frank
Reply to  Nick Stokes
January 5, 2018 5:44 am

Nick: If you apply a random walk to 20th-century warming, you conclude that the “drift” is not statistically significant. Suppose 0.5 or 1 degC is the typical “century step size” in a random-walk climate model; then the Holocene would be 100 steps long. A random walk with such large century-long steps is a lousy model for the observed variation in the Holocene: we would expect, on average, to end up 5-10 degC from where we started. And if we skip over the last 2 million years of oscillations between glacial and interglacial periods, presumably driven by orbital changes, then we have a million century-long steps over 100 million years, and an expected change of 500-1000 degC in either direction. Mentioning random walk plus drift models at a climate blog is a bad idea.

One needs a model where deviations tend to return to a mean value. That mean can be changed (by forcing). Such a model implies that feedback is negative, i.e. that -3.2 W/m2/K of Planck feedback is not overwhelmed by positive feedbacks. If one wants to speculate, in the colder direction slow surface-albedo (and CO2-from-the-ocean) feedbacks may be big enough that total feedback is zero until temperature has fallen about 5 degC.
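Frank’s arithmetic follows from the standard random-walk result that the typical net displacement after N steps of size s is about s·sqrt(N); a sketch:

```python
import math

# Expected drift magnitude of an unbiased random walk: step_size * sqrt(n_steps).
for n_steps, step in [(100, 0.5), (100, 1.0), (1_000_000, 0.5), (1_000_000, 1.0)]:
    print(f"{n_steps:>9,} steps of {step} degC -> ~{step * math.sqrt(n_steps):,.0f} degC")
# 100 century-steps (the Holocene): ~5-10 degC; a million steps: ~500-1,000 degC.
```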

Nick Stokes
Reply to  Frank
January 5, 2018 12:03 pm

Frank,
“you conclude that the “drift” is not statistically significant”
That is an issue of statistical inference, rather than prediction. The problem with random walk there is that it uses the past information too inefficiently to make proper inference. But that isn’t an issue for finding the best estimate for the coming year.

“Mentioning random walk plus drift models at a climate blog is a bad idea.”
Point taken 🙁

TheLastDemocrat
January 4, 2018 8:28 pm

We know that there is order in the universe. Some of this orderliness can be described by “laws.” Like Boyle’s Law.

A flaw is when we can figure out a mathematical model that matches some observed data, and begin to believe that the observed phenomenon is following some law. For example, what percent of the population will get the flu this flu season? There is predictability, and this can be modeled. But that mathematical expression is not a law like Boyle’s Law.

Flynn
Reply to  TheLastDemocrat
January 4, 2018 11:53 pm

I learned that there is mostly chaos in the Universe. Climate more than anything else.

Peter Sable
January 4, 2018 8:35 pm

However, just to be clear, if there is no data for “before” and “after” then that assumption cannot and should not be made.

I would put very strong emphasis on before.

Especially in any signal that has oscillations, you have to be able to see at least 2 oscillations (for a single frequency only signal), or 5 or more oscillations for a more complex signal like say one influenced by overlapping multi-decadal oscillations.

Drawing a trend line on some random subset of an oscillation gives a horribly wrong indication of what’s going on. Without seeing a couple of cycles, you don’t even know what the phase or frequency of the oscillation is, which means any trendline is extremely misleading.

IMHO, trendlines over an entire window of data that is potentially from an oscillating source should never, ever be used. They give a very false sense of the low-frequency information in the signal, a sense that signal processing theory simply doesn’t support.

A trendline fundamentally violates the Nyquist criterion.

Peter

Reply to  Peter Sable
January 4, 2018 8:58 pm

I always envision global temps as a DC slow ramp voltage with an AC voltage on top with a quasi-period of about 60-70 years.

The slow DC ramp can be positive or negative, but is rarely zero for any significant period.
The take-away: GMST is always changing. The long-term average, though, is declining as the Holocene slowly closes out over the next few millennia.

The climateers are merely exploiting the past 30 years AC ascending node for paychecks, grants, and reputations.

Peter Sable
Reply to  Joel O’Bryan
January 5, 2018 8:00 am

I always envision global temps as a DC slow ramp voltage with an AC voltage on top with a quasi-period of about 60-70 years.

There’s far more than a mere 60-70 year period.

There’s the AMO, the PDO, and I think a couple of other oscillations, all in the 50-80 year range. The beat frequency between these is its own low-frequency signal and is also subject to Nyquist.

I’ve seen some evidence that there are 200- and 1000-year periods of some sort in the temperature. Not conclusive, but enough to worry about Nyquist. The warming since the Little Ice Age is a much clearer signal, and of course that has been mostly ignored by the climate priests.

The “DC slow ramp” is basically all the signals that have a lower frequency than is viewable in the record window. All those frequencies alias to DC.

Peter

Peter Sable
January 4, 2018 8:38 pm

TL;DR – you cannot currently invalidate the Null Hypothesis that the temperature changes are natural, because we don’t have enough data, and won’t until about year 2119, when we have two 70 year cycles worth of satellite data to look at.

Peter

jorgekafkazar
January 4, 2018 8:47 pm

I would like to look under the hood of a climate model. I have a feeling that some rather obvious factors have been omitted, glossed over with crude empirical/stochastic techniques, or otherwise mishandled.

I worked in research/testing labs early in my career and things did not always go smoothly. Tracking down the cause was best done like this: get away from the workplace and ask:
(1) What are the most fundamental constants in the experiment? Do they really sound right? What are the units? Draw diagrams of the underlying physics. Look the constants up as if starting from scratch.
(2) What are the sensitivities and limitations of the instruments AS DELIVERED and INSTALLED? Read the manual.
(3) What shortcuts or simplifying assumptions were made? What is the effect of errors in these assumptions?
(4) Measure everything and see if it all meets specs.
(5) If we had no computer, how would we do this?
(6) Is there a way to bypass the computer? If so, try it.
(7) Did we ignore any input we received?
(8) Are some runs giving bizarre results? Is there a pattern to this? Were we too quick to attribute a cause? Did we stop at the first possible explanation?
(9) Explain the experiment and its problems to a colleague who hasn't been part of the project.
(10) Look at sensitivities, error bars, plotting, and charts for classical glitches.

I remember comparing our “abysmal” results to another researcher’s. We’d seen his report before, with its nice, neat plot lines. But when we looked at it for the fourth or fifth time, it finally sank in that the width of his lines was greater than our scatter. He’d used a logarithmic ordinate.

Nick Stokes
Reply to  jorgekafkazar
January 4, 2018 10:13 pm

Here is the NCAR CAM documentation page. Here is the user's guide to CAM 5. The basic structure of the code is set out here (CAM 3). The code is accessible through the first link.

Bartleby
January 4, 2018 9:25 pm

I won’t bother with this interface any longer. I’m very tired of spending time writing a considered response to these articles only to have them disappear.

Reply to  Bartleby
January 4, 2018 10:21 pm

You got put in moderation for your nasty outbursts the other day.

Alan Tomalty
January 4, 2018 9:31 pm

I have stopped commenting on this site because most of my posts get deleted.

Nick Stokes
January 4, 2018 10:23 pm

Numbers like mean and trend are just properties of a set of numbers – call them, maybe, the zeroth and first moments. They exist independently of whether they might be a good predictor. So does the trend line.

The need to predict is very common. Say you are making a local budget and have to allocate funds for urgent road repairs next year. How much? Well, you'd start from last year, and then look at previous years to see if there was a trend. Then you figure out a model. Random walk is the model that says allocate the same as last year. But if you know that the amount has been regularly increasing each year, that would be unwise. The usual thing would be to add in an average increase (RW with drift). That is predicting using a trend. It isn't infallible (budgeting isn't expected to be) but it's the best you can do given what you know.
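
Nick's "RW with drift" rule is simple enough to write down. A minimal sketch (the repair costs below are invented for illustration):

```python
import numpy as np

past_costs = np.array([100, 112, 118, 131, 140])   # hypothetical annual repair spend

drift = np.diff(past_costs).mean()                 # average year-on-year increase
rw_budget = past_costs[-1]                         # plain random walk: same as last year
rw_drift_budget = past_costs[-1] + drift           # random walk with drift: add the trend

print(f"average increase (drift): {drift:.1f}")
print(f"random-walk budget:       {rw_budget}")
print(f"RW-with-drift budget:     {rw_drift_budget:.1f}")
```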

Reply to  Nick Stokes
January 5, 2018 1:28 am

“Numbers like mean and trend are just a property of a set of numbers” – and this is the way one starts advocacy for a pseudo-science named numerology. You can invent many ‘properties’ of sets of numbers, many of them having no physical significance, many being anti-physical. A lot of them will not help you in predictions, but will mislead you heavily.

Nick Stokes
Reply to  Adrian Roman
January 5, 2018 3:51 am

“many of them having no physical significance”
Many numbers don't have physical significance, but people still want summary statistics. As in the Dow Jones average, for example. Or trend, for that matter.

Reply to  Adrian Roman
January 6, 2018 1:31 am

” but people still want summary statistics” Many people want religion, astrology, homeopathy, and so on (even numerology); that does not make them science, or anything more than a pile of bullshit.

paqyfelyc
Reply to  Nick Stokes
January 5, 2018 3:17 am

This is a horrible way to budget: a sure way to maximize both the chance of having unspent budget AND the chance of having too low a budget to cope with the situation. Certainly NOT "the best you can do given what you know."
The proper budget procedure for a low-chance, high-stakes, known-cost event like "urgent road repairs" is:
* pre-select a contractor that will do the job, if called, for a price agreed beforehand;
* put a provision aside, at the level of the MAX (not average!) possible cost.
Estimation of this max is based on observed past events, but not necessarily the observed max, and it is not subject to the trend, which always exists but is irrelevant.
You would certainly use insurance instead of a provision, if possible. This way you don't have to worry about the budget anymore.

Nick Stokes
Reply to  paqyfelyc
January 5, 2018 3:49 am

Using max estimates in budgeting is no way to make budgets balance. And you can't use insurance to cover every uncertain expenditure. At some stage you just have to make your own estimates.

paqyfelyc
Reply to  paqyfelyc
January 5, 2018 4:50 am

Basically, budgeting only requires common sense, so I certainly won't forbid a layman to share his thoughts about it, but obviously you have never budgeted anything, and wouldn't do it properly if you had to. I did. Fellows from the budgeting team nicknamed a million after my surname, for that was the smallest unit I bothered about. This joke was still in use among them last time I checked (last month).

paqyfelyc
Reply to  paqyfelyc
January 5, 2018 7:09 am

@kip
“urgent road repairs” are Nick’s word, not mine. I indeed interpreted these as unplanned, as opposed to “normal road maintenance” which would be planned, often years ahead and at a known budget.

Hivemind
Reply to  Nick Stokes
January 5, 2018 3:54 am

It is perfectly legitimate to project past the existing data. I studied this in my 1st-year statistics unit. The trouble is that the error bars go exponential after just a short distance.

This is one reason I don’t trust the IPCC and related computer models. They make projections way past the danger zone, hundreds of years, where you’re really just looking at noise.
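
For what it's worth, for an ordinary straight-line fit the 95% prediction interval grows roughly linearly with distance from the data (not literally exponentially, but fast enough to make Hivemind's point). A sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(30, dtype=float)               # 30 observed "years"
y = 0.1 * x + rng.normal(0, 1.0, x.size)     # linear signal plus noise

n = x.size
slope, intercept = np.polyfit(x, y, 1)
resid = y - (intercept + slope * x)
s = np.sqrt(resid @ resid / (n - 2))         # residual standard error
sxx = ((x - x.mean()) ** 2).sum()

for x0 in (30, 60, 120, 300):                # extrapolation targets past the data
    se_pred = s * np.sqrt(1 + 1 / n + (x0 - x.mean()) ** 2 / sxx)
    print(f"x = {x0:3d}: 95% prediction band = +/- {1.96 * se_pred:6.2f}")
```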

Nick Stokes
Reply to  Hivemind
January 5, 2018 5:32 am

GCMs do not use fitted lines, or any statistical prediction.

Reply to  Hivemind
January 6, 2018 1:38 pm

From the essay Kip linked to:

What nobody is acknowledging is that current climate models, for all of their computational complexity and enormous size and expense, are still no more than toys, countless orders of magnitude away from the integration scale where we might have some reasonable hope of success. They are being used with gay abandon to generate countless climate trajectories, none of which particularly resemble the climate, and then they are averaged in ways that are an absolute statistical obscenity as if the linearized average of a Feigenbaum tree of chaotic behavior is somehow a good predictor of the behavior of a chaotic system!

This isn’t just dumb, it is beyond dumb. It is literally betraying the roots of the entire discipline for manna.

Reply to  Nick Stokes
January 5, 2018 5:13 am

Nick writes

The need to predict is very common. Say you are making a local budget and have to allocate funds for urgent road repairs next year. How much? Well, you’d start from last year, and then look at previous years to see if there was a trend.

And on that basis, how accurate would the budget be when you’re predicting it out 100 years?

Some things can be predicted out a short distance: annual budgets, weather, even the stock market to some extent. But they all have a best-before date, and in some cases that is very short indeed.

Nick Stokes
Reply to  TimTheToolMan
January 5, 2018 5:30 am

I don’t recommend predicting using linear extrapolation for 100 years. But this post pronounces baldly:

“This post contains the message “Trends do not and cannot predict future values” .”

And that just isn’t true. People use trends all the time, as in these mundane situations. It is ignoring trend that makes for bad predictions.

jclarke341
Reply to  TimTheToolMan
January 5, 2018 7:16 am

“It is ignoring trend that makes for bad predictions.” And it is having complete faith in trends that makes for worse predictions. A trend is simply a fraction of a pattern. Looking only at an isolated trend without any knowledge of the overall pattern is disastrous. Even knowing the pattern without knowing the underlying cause is a poor predictor, but it is far better than just knowing the trend.

In climate science, we currently have a partial understanding of a fraction of the underlying causes, suppress or ignore apparent patterns, and steadfastly adhere to the extension of a trend that is not happening! Could the 'science' be any weaker?

paqyfelyc
Reply to  TimTheToolMan
January 5, 2018 7:47 am

@kip
” See, some people do actually believe that the TREND predicts the FUTURE. who would’a thought? ”
Actually, completely expected from Nick.

jclarke341
Reply to  TimTheToolMan
January 5, 2018 8:17 am

Yes… Consider the stock market. The ones who have the most success in the market are those who understand the underlying causes and recognize the general economic and investment patterns. They grow their wealth from the many investors who only bank on the trends and generally lose their money.

I learned this the hard way after purchasing investment software that was entirely based on identifying trends. I didn't want to have to study the market or individual companies. That was too complex. I wanted to make money the easy way, by following the trends. The software identified trends with great accuracy, and it only took me a year to lose my retirement nest egg… about $100K. Of course, this was in 2007, when everyone betting on the real estate trends lost much of their wealth.

Don K
Reply to  TimTheToolMan
January 5, 2018 8:32 am

“I don’t recommend predicting using linear extrapolation for 100 years.”

Quibble. The timespan where linear projections may have some validity depends on context. For example, geologists assume constant motion of plates for very long time spans. Ask a geologist and they’ll probably tell you that, given current slip rates along the San Andreas fault, the Los Angeles Basin should be arriving at San Francisco in about 47 million years. They might be right.

gnomish
Reply to  TimTheToolMan
January 5, 2018 4:17 pm

the casinos LURVE a player who has a system.
they have a special name for those.
gambler’s fallacy is the foundation of statistical prediction.

Peter Sable
Reply to  Nick Stokes
January 5, 2018 8:14 am

…that would be unwise. The usual thing would be to add in an average increase (RW with drift). That is predicting using a trend. It isn't infallible (budgeting isn't expected to be) but it's the best you can do given what you know.

I have a budget for house repairs. What you are proposing would be a terrible way to run it.

The roof needs to be done every 15 years. Same with kitchens, bathrooms, and anything else involving water. (I live in the wet Pacific North West…)

If I did a kitchen remodel last year for $60k, I don't plan for a $60k budget the next year. I plan for one in 15 years, which means I save $4k each year towards the next remodel, or use some other suitable savings method. (Okay, panicking is one such method…)

Notice what I have here is a periodic signal. Every 15 years something involving water needs a remodel. I just hope they don’t happen on the same year…

A trendline aliases to DC every signal whose frequency is lower than twice the frequency implied by the window length the trendline is drawn over – that is, any signal that completes fewer than two full cycles inside the window.

For the temperature record we have that’s reliable, the AMO, the PDO, the alleged 200 and 1000 year cycles, etc. are all aliasing to DC in some unknown way since we don’t know the phase or amplitude of those cycles very well.

Trendlines should never be used when the underlying signal is oscillatory and there is a valid hypothesis that signals are present whose frequency is lower than twice the frequency implied by the window length of your data.

And 2x is barely adequate. You really need 5x when you have multiple signals near each other's frequency, like, say, the AMO and the PDO, because the signals beat against each other.

The trendline is worse than meaningless, it’s misleading.

Peter
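
The beat effect is worth seeing in numbers. A sketch with two oscillations at invented, loosely AMO/PDO-like periods: neither component has any trend, yet a 140-year window reports one, and the beat envelope is far slower than either component.

```python
import numpy as np

p1, p2 = 65.0, 55.0                              # hypothetical periods, in years
beat_period = 1 / abs(1 / p1 - 1 / p2)
print(f"beat period = {beat_period:.0f} years")  # roughly 358 years

t = np.arange(140)                               # a 140-year observation window
y = np.sin(2 * np.pi * t / p1) + np.sin(2 * np.pi * t / p2)
slope = np.polyfit(t, y, 1)[0]
print(f"fitted 140-year 'trend' of the trendless signal: {slope:+.4f}/yr")
```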

Nick Stokes
Reply to  Peter Sable
January 5, 2018 4:34 pm

“I have a budget for house repairs. What your are proposing would be a terrible way to run it.”
Yes. But suppose you were managing 100 houses.

“Trendlines should never be used when the underlying signal is oscillatory”
You should use your best knowledge of the data and its basis. If you mistake oscillation for trend, that’s bad for your forecast. If you mistake trend for oscillation, that’s bad too.

Trend is basically a differentiating filter, with averaging (so average derivative over a period). It’s actually a Welch smooth of a derivative (more here and links). It works well as a derivative for oscillations of long period relative to regression length. As the period gets shorter, it doesn’t follow the oscillations. But it gets closer to the mean, which is the best estimator if you can’t establish the oscillation.
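
Nick's characterization can be checked numerically. A minimal sketch: for a smooth signal, the OLS slope over a window equals the derivative averaged with Welch (parabolic) weights across that window.

```python
import numpy as np

t = np.linspace(-1.0, 1.0, 2001)          # a window centred at 0, half-width 1
y = np.sin(3.0 * t) + 0.5 * t ** 2        # any smooth test signal

ols_slope = np.polyfit(t, y, 1)[0]        # the ordinary regression trend

w = 0.75 * (1.0 - t ** 2)                 # Welch window; integrates to 1 on [-1, 1]
dydt = np.gradient(y, t)                  # numerical derivative of the signal
dt = t[1] - t[0]
welch_avg_derivative = np.sum(w * dydt) * dt

print(f"OLS slope:                 {ols_slope:.6f}")
print(f"Welch-averaged derivative: {welch_avg_derivative:.6f}")   # agrees closely
```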

Peter Sable
Reply to  Peter Sable
January 5, 2018 10:57 pm

It works well as a derivative for oscillations of long period relative to regression length.

No, a trendline doesn't work well at all in this case. Oscillations with a period longer than the regression length just randomly give you an upslope or a downslope that's completely meaningless, because it's random, depending on where in the oscillation your smaller window was sampled.

Such a trendline is worse than meaningless, it’s misleading.

Nick Stokes
Reply to  Peter Sable
January 6, 2018 1:21 am

“that’s completely meaningless, because it’s random”
It isn't meaningless. It is a derivative, which is all a trend ever claimed to be. And it isn't random; it is made of sinusoids with the usual 90° phase shift.

John F. Hultquist
January 4, 2018 11:33 pm

Let’s take the US stock market for Jan 2, Jan 3, and Jan 4 (2018).
Project this in a linear fashion.
Okay, maybe not! Button collectors beware.

Javier
January 5, 2018 12:57 am

Yes, Kip, all of this is very basic and well known, and it is very surprising that most people appear to have got the idea that the trend will determine future values.

All we can say is that trends have a tendency to continue until they don't. We should prepare and act accordingly.

Far more interesting is the study of inflection points. Are we prepared to recognize them when they come? For a time they are not easy to spot. Are we capable of predicting them?

Do we use a linear or a polynomial trend to predict future September Arctic sea ice extent?
[image: September Arctic sea ice extent with linear and polynomial trend fits]

Javier
Reply to  Kip Hansen
January 5, 2018 6:30 am

Kip,

However we do have some success at predicting the future in certain cases. We can predict the coming of the next summer, or the occurrence of the next eclipse.

Seasons can be predicted even without any knowledge due to their high repeatability. Eclipses can be predicted due to a clear understanding of their causes.

Other things we know will happen, but we don’t know when. The present interglacial will eventually come to an end.

gnomish
Reply to  Kip Hansen
January 5, 2018 7:37 pm

that’s right Javier- cause and effect are not statistical. logic pertains.
stochastic processes are, by definition, not predictable.
so if a proposition can not resolve to true/false, then it is unreasonable and not scientific.
it’s just that simple.

Coeur de Lion
January 5, 2018 1:17 am

Read 'The Black Swan' by Nassim Nicholas Taleb. (Turkeys had a nice life until, say, 23 Dec.)

A C Osborn
Reply to  Coeur de Lion
January 5, 2018 4:53 am

Most turkeys wouldn't have any kind of life if they weren't needed for Christmas.
Just like all other livestock.

Don K
Reply to  Kip Hansen
January 5, 2018 8:01 am

I think perhaps you’ve missed Taleb’s point which may well be valid. What Taleb seems to be telling us is that a lot of stuff is well-behaved most of the time, but the math that works most of the time can and sometimes does grossly underestimate the number and magnitude of outliers.

That is to say that “two-sigma (or more)” events may well only occur 2.5% of the time. But “ten sigma” or “twenty sigma” events — while rare — are way more common than one might expect.

If I’ve got this right, the impact on routine climate analysis is very small. But when, for example, one of the “Supervolcanoes” decides to blow up, climate models won’t predict it and will make incorrect predictions for a very long time after the event.

Reply to  Coeur de Lion
January 9, 2018 12:16 pm

Fooled by Randomness was a good book.

The Black Swan was not in my opinion.

I have no idea what happened to Taleb
between those two books, but it was not good.

Dr. S. Jeevananda Reddy
January 5, 2018 3:14 am

Let me present a case of vested groups manipulating data series to prove a preconceived hypothesis – "there is plenty of water in Krishna River" [the whole story I brought to the notice of the Prime Minister of India through a mail]:

The Bachawat Tribunal used the 78-year data series [1894-95 to 1971-72], which was what was available to it at the time – all three riparian states accepted the data series. In this series, 41 years fall in the poor-rains part and 37 years in the good-rains part of the 132-year cycle.

The Brijesh Kumar Tribunal used a 47-year data series [1961-62 to 2007-08], even though the 114-year data series [1894-95 to 2007-08] was available; of these 47 years, 40 fall in the good-rains part and seven in the poor-rains part of the 132-year cycle. Undivided AP disagreed with using this high-rainfall-period data. Because of this, the mean available water is 185 tmcft more than that of the previous Tribunal.

Though it selected a 47-year data series, in reality the Tribunal used five different data sets to support its inferences. For example: to serve the vested interests, the Brijesh Kumar Tribunal followed a manipulated path with reference to raising the Almatti Dam height, and the same was adopted by the Central Water Commission (CWC). The former used 26 years of data from the 47; the CWC used a 30-year data series, of which 22 years form part of the 26-year series. Some statistics of these computations, along with those for the 114-year data series, are given below:

1. Bachawat Tribunal [April 1969 – 27 May 1976] used the 78-year data series [1894-95 to 1971-72]:
The Lowest — 1007 tmcft [1007, 1125, 1273, 1451]
The Highest — 4166 tmcft [4166, 3760, 3721, 3482]
The 75% probability value — 2060 tmcft + 70 tmcft of return flows
The mean [43% probability value] — 2393 tmcft

2. Brijesh Kumar Tribunal [August 2006 – 30 November 2013] used the 47-year data series [1961-62 to 2007-08]:
The Lowest — 1239 tmcft [1239, 1253, 1512, 1649, 1836, 1840]
The Highest — 4194 tmcft [4194, 3760, 3624, 3519, 3397, 3318]
The 75% probability value — 2173 tmcft – but used 2130 tmcft only
The mean [58% probability value] — 2578 tmcft

3. The 114-year data series [1894-95 to 2007-08] – 78+47-10 (overlap period):
The Lowest — 1007 tmcft [1007, 1125, 1239, 1253]
The Highest — 4194 tmcft [4194, 4166, 3760, 3721]
The 75% probability value — 2173 tmcft – but used 2130 tmcft only
The mean [58% probability value] — 2578 tmcft

4. Brijesh Kumar Tribunal, to justify raising the Almatti Dam height, used the 26-year data series [1981-82 to 2006-07] drawn from the 47-year data series:
The Lowest — 1239 tmcft [1239, 1253, 1649, 1836, 1840, 1842, 1868, 1934]
The Highest — 3624 tmcft [3624, 3318, 3239, 3187, 3185]
The 75% probability value — 2000 tmcft
The mean [50% probability value] — 2400 tmcft

5. CWC, to justify plenty of water in Krishna, used the 30-year data series [1985-86 to 2014-15]:
The Lowest — 1934.89 tmcft
The Highest — 4165.42 tmcft
The 75% probability value — 2522 tmcft
The mean [50% probability value] — 3144 tmcft

The data-series periods in the five systems, along with the 132-year cycle periods in the rainfall of AP, are represented schematically below.

Figure 35: Schematic presentation of the different data sets within the 132-year cycle periods. The cycle runs 1858 (B) to 1935 (A) to 2001 (B) to 2066; the five data series are:

1. 1894-95 to 1971-72 (78 years)
2. 1961-62 to 2007-08 (47 years)
3. 1894-95 to 2007-08 (114 years)
4. 1981-82 to 2006-07 (26 years)
5. 1985-86 to 2014-15 (30 years)

A = above-the-average part of 66 years [24 drought years, 12 flood years]; B = below-the-average part of 66 years [12 drought years, 24 flood years]

According to the CWC, the highest water availability, 4165.42 tmcft, was in 2010-11. However, an even higher value of 4194 tmcft was recorded in 1975-76.

They say the lowest, 1934.89 tmcft, was recorded in 2002-03. According to the Brijesh Kumar Tribunal this value was shown against 2004-05, while 2002-03 and 2003-04 recorded 1239 & 1253 tmcft [the lowest observed was 1007 in 1918-19; also 1125 in 1899-1900 and 1273 in 1905-06]. That means the CWC showed around 700 tmcft more than the real lowest value.

How can this be possible when rainfall, water received into Nagarjunasagar Dam, water used in the Delta against the allocated 181.2 tmcft, water entering the sea, etc., all contradict this inference of the CWC? For example:

In undivided AP, out of 23 districts, deficit rainfall [< 90% of the average] was received in 2002-03 in 16 districts during the southwest monsoon season and in 15 districts during the northeast monsoon season. In 2002 and 2009, severe drought conditions, with 81% and 79% of average rainfall, came with rises of 0.7 and 0.9 °C in temperature, respectively, at the all-India level.

It is also a fact that in many years after 2001 Nagarjunasagar Dam did not reach its full capacity, and was pumped below the dead storage level to supply drinking water to Hyderabad. Also, during 2001-02 to 2005-06 the water availability was 1836, 1239, 1253, 1934 & 3624 tmcft; the Delta received 190, 118, 84, 137 & 187 tmcft against the allocated 181.2 tmcft; and the water entering the sea was, respectively, 111, 13, 12, 23 & 1273 tmcft.

All of these clearly show it is a false alarm created by the CWC to serve vested interests, and the intention finally appears to be to get the Supreme Court stay order on the Brijesh Kumar Tribunal Award vacated. The 75% probability and mean values on the probability curve relate primarily to the lowest and highest water-availability values per year. The CWC raised the lowest value [1935 tmcft] by around 700 tmcft and arrived at very high values of 2523 tmcft for the 75% level and 3144 tmcft for the mean, which are abnormally high, and thus created a sensation by saying "Plenty of Water Available in Krishna". These values sit on the 114-year probability curve at 42% and 12%, not at 75% and the mean. Scientific institutions and judges on tribunals presenting such poor-quality assessments affect the common man for generations.

Dr. S. Jeevananda Reddy
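
For readers unfamiliar with the terminology above: the "75% probability value" of a river's annual yield is the flow equalled or exceeded in 75% of years, i.e. the 25th percentile of the annual series. A minimal sketch with invented numbers shows how choosing a wetter subset of years raises both that value and the mean:

```python
import numpy as np

rng = np.random.default_rng(1)
full_series = rng.normal(2400, 600, 114)    # hypothetical 114 years of annual flow, tmcft
wet_subset = full_series[-30:] + 300        # a deliberately wetter 30-year slice

for name, series in (("114-year series", full_series),
                     ("wet 30-year subset", wet_subset)):
    dependable = np.percentile(series, 25)  # flow exceeded in ~75% of years
    print(f"{name}: 75% dependable yield = {dependable:7.0f} tmcft, "
          f"mean = {series.mean():7.0f} tmcft")
```

Picking the wet subset raises both figures, which is the selection effect Dr. Reddy alleges.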

Kurt in Switzerland
Reply to  Dr. S. Jeevananda Reddy
January 5, 2018 7:26 am

1) lies
2) damn lies
3) statistics

Unattributed adage: never trust a study which you didn’t personally skew. 🙂

Dr. S. Jeevananda Reddy
Reply to  Kurt in Switzerland
January 5, 2018 4:16 pm

Kurt in Switzerland — please tell us what you want to say, so that I can respond.

Dr. S. Jeevananda Reddy

D. J. Hawkins
Reply to  Kurt in Switzerland
January 9, 2018 10:26 am

@Dr. S. Jeevananda Reddy;

I believe that Kurt was supporting your position by quoting an aphorism that was popularized by Mark Twain. It is supposed to have originated with Benjamin Disraeli according to Twain, per Wikipedia.

A C Osborn
January 5, 2018 4:57 am

Kip, I have one small niggle with the Button Collector: how do you physically count the negative button on day 9?
Was it a miscount? In which case the previous count needs correcting, etc.

A C Osborn
Reply to  Kip Hansen
January 5, 2018 7:55 am

So it's not, as I first thought, cumulative?
Each count is for that day. Got it now.

Old England
Reply to  Kip Hansen
January 5, 2018 9:03 am

But if the Button Count was being run by Climate Scientists there would be no real problem with the numbers reaching 10 million by the end of the week …. I can just imagine it…..

100 Button counters counting the pile of buttons pushed to them … but by half way through the week some of the button counters aren’t getting high enough counts …. must be something wrong with their counting ( a bit like temperature stations that show colder than hoped-for temperatures) …. simple we’ll choose which button counters to get rid of and then extrapolate the numbers from the button counters who have regularly had higher counts …… end of week No Problem we have the ‘correct’ amount of buttons (warming).

Or am I too cynical?

Kurt in Switzerland
January 5, 2018 5:27 am

Typos:
Data pints, lines and graphs
Linus Pauling, brilliant Noble Prize winning chemist.

Clyde Spencer
Reply to  Kip Hansen
January 5, 2018 11:26 am

Kip,
While we're on the topic of 'typos,' in your prologue: "Readers may not familiar with…"

nutso fasst
January 5, 2018 6:02 am

Most dog walkers eventually wind up back where they started.

Don K
January 5, 2018 6:49 am

“As we see above, even obvious trends cannot be used to predict (no less cause or determine) future values in the absence of a true [or at least, “fairly true”] and clear understanding of the processes, systems and functions (causes) that are producing the results, data points, which form the basis of your trend.”

Counterexample: The Ptolemaic (geocentric) model of the universe with its cycles and epicycles was dead wrong. And by the 15th Century AD there were lots of folks who were pretty sure it was wrong. But it did make correct predictions. You could navigate using the star and planet position predictions it made.

Dan Evens
January 5, 2018 6:49 am

Let me restate this entire post: If you have no clue what the process is that produces the data, then any trend lines you may draw on a graph are not justified. They could very easily be completely different from reality.

You could have said this and saved so much time.

Steve Case
January 5, 2018 6:53 am

…graphing something like “annual average data” can present wildly misleading information…

Yes, all the “hottest year ever” claims are based on averages. NOAA’s Climate at a Glance allows you to track the Minimum and Maximum temperatures. Those of course are averages too, but the picture that emerges is a little different:

http://oi66.tinypic.com/bbjue.jpg

From my funny quotes and tag line file there’s this one on averages:

“Be careful of averages, the average person has one breast and one testicle” – Dixy Lee Ray

Don K
Reply to  Steve Case
January 5, 2018 7:33 am

The problem with “hottest year ever” is that the metric being used is basically the temperature of the tropical Pacific Ocean. That’ll work I suppose … If you have a century or three to collect data. Shorter term, you’re looking at the balance between ENSO and the Humboldt Current — which may not be all that informative about temperatures elsewhere on the planet.

paqyfelyc
Reply to  Steve Case
January 5, 2018 7:36 am

I (like lots of people) live in a place where the weather alternates between humid + mild temperatures for a few days, and dry + extreme (hot in summer / cold in winter) for the next few days. So, every day, the weatherman states that temperatures are either "higher than average [for the season]" or "colder than average", except for a few days in the whole year when he says that "temperatures are average". Most people do not get it, but these are the ABNORMAL days, the days with one breast and one testicle.

Reply to  Steve Case
January 5, 2018 1:11 pm

Time and temperature are independent variables. As such, a trend of U.S. temperature against time has no meaning. The purple trend predicts that in order to lower U.S. temperature, we must go back in time.

Don K
January 5, 2018 7:12 am

A thought: Perhaps the best known example of deriving an important theory took place about 1600. And the order was NOT (predict-observe-adjust) until the answers converged. It was observe (Tycho Brahe)-analyze and predict(Kepler)-explain(Newton).

Maybe there’s a message there. First get good data. Second produce (a) model(s) that matches the data. Third verify that the model makes correct predictions. And only then explain why the model works.

BTW, it seems to me that quantum physics is following the observe-analyze-explain path. They've collected lots of data. They have a bunch of models that make good predictions. And they don't really have the slightest idea why/how it all works.

Climate “science” OTOH seems to be following an observe-predict-change_the_observations_to better_fit_the_predictions-fire_off_insults_at_anyone_who_questions_the_process approach.

paqyfelyc
Reply to  Don K
January 5, 2018 7:43 am

+1
One thing most people forget is that the geocentric model DID work very well, giving close-to-perfect predictions. Astronomers were able to predict where and when eclipses would occur years ahead.
Climate "science" cannot even do that…

Thomas Homer
Reply to  paqyfelyc
January 5, 2018 12:45 pm

paqyfelyc – "the geocentric model DID work very well"

Indeed. However, the geocentric model had to introduce “celestial spheres” to complete the model for planetary orbits within our solar system. Does the correctness of the geocentric model then prove the existence of “celestial spheres”?

CAGW models have introduced a non-measurable property of Carbon Dioxide, and then claim that their model proves the existence of it. Even when their models don’t work very well.

Reply to  Don K
January 5, 2018 4:44 pm

And the order was NOT (predict-observe-adjust) until the answers converged. It was observe (Tycho Brahe)-analyze and predict(Kepler)-explain(Newton).

But would Kepler have gotten anywhere if he hadn’t started with Copernicus’s model?

Svend Ferdinandsen
January 5, 2018 7:47 am

Even with no real correlation, you can get a very good correlation between two data sets by averaging, smoothing, and trendlines. The trendlines are bound to have 100% correlation, unless one of them is level.
Anomalies are another trick to make results look different than they are. Sometimes this improves the understanding; at other times it hides the reality. Think of temperature anomalies and ice cover and snow.
When you really want to make things up, you can use matrix operations.
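
Svend's claim is easy to verify. A minimal sketch: two independent noise series have near-zero raw correlation, typically a much larger (spurious) correlation after heavy smoothing, and their fitted trend lines correlate at exactly ±1.

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.normal(size=200)
b = rng.normal(size=200)                       # independent of a

def smooth(x, k=25):                           # simple moving average
    return np.convolve(x, np.ones(k) / k, mode="valid")

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

t = np.arange(200)
trend_a = np.polyval(np.polyfit(t, a, 1), t)   # straight-line version of each series
trend_b = np.polyval(np.polyfit(t, b, 1), t)

print(f"raw correlation:        {corr(a, b):+.3f}")                  # near zero
print(f"smoothed correlation:   {corr(smooth(a), smooth(b)):+.3f}")  # typically inflated
print(f"trend-line correlation: {corr(trend_a, trend_b):+.3f}")      # exactly +1 or -1
```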

JimG1
January 5, 2018 8:09 am

Kip,
Excellent post. I always consider temperature information to be based upon very nebulous, if not nefarious, data due to a variety of factors, including but not limited to instrumental precision, sampling techniques, proxies, changes over time in proxies and equipment and algorithms, and sample sizes. The relatively small variations which are usually cited are ridiculous, and the error bars do not include all of the potential error in the foregoing.
JimG1

vukcevic
January 5, 2018 9:37 am

These two guys were jogging along at a sedate pace until the mid-1970s, when they discovered steroids.
http://www.vukcevic.talktalk.net/CT4tl.gif

Don K
January 5, 2018 10:44 am

Kip

As usual, your essay is well written, but I'm a little confused. Is your point that the map (a linear data fit) is not the territory? Of course it isn't.

Or are you saying that one can’t ever make and act on projections of anything that isn’t fully understood? Seems to me that the latter is a non-starter. Taken literally, humanity would never have moved beyond Eurasia and Africa. After all, there’s no way to be sure there’s any land beyond the horizon.

Bill Powers
Reply to  Don K
January 5, 2018 10:55 am

Don, I didn’t interpret what he was saying to mean that one can’t act on projections. I understood him to mean that one must understand his data and act accordingly.

Reply to  Don K
January 5, 2018 11:13 am

Don't think that was what Kip was implying. "Fully understood" may not be necessary, but you do need to have some understanding of what is underlying your trend before using it to predict. Just using a trend with no knowledge of what is going on means you're a mathematician (lol) with confidence in your skill with numbers. If it turns out badly, told you so!

Don K
Reply to  Kip Hansen
January 5, 2018 1:28 pm

Columbus-types were a bit more adventurous…

Completely tangential, but I've long suspected that the reason Columbus had trouble getting his project funded was that most Court Wise Men in Southern Europe knew that Eratosthenes had measured the size of the Earth back in 200 BC, and they told their monarchs that there was no way Columbus could get to the East Indies in a reasonable-sized ship. Columbus was just lucky that the Americas got in his way, or he'd have been in serious trouble. Or maybe he wasn't so lucky, since he ended up suffering from a number of maladies probably related to his voyages and also did some jail time.

Bill Powers
January 5, 2018 10:52 am

Kip, Highly informative! My father, who was in the insurance business, used to remind me growing up that "figures lie and liars figure." That would make an apt subtitle to your article.

January 5, 2018 10:54 am

Kip -> Very good article. I just wish the essays you and Dave have written over the last few months would become required reading for many scientists (not just CliSci) publishing papers. I wish every editor required that accuracy and precision be addressed in every paper. And, I do know the saying about wishes in one hand, … .

Clyde Spencer
January 5, 2018 11:50 am

Kip,

It is well known that humans have the uncanny ability to see patterns where there are none, such as a ‘face’ in a cloud formation. I suspect it had survival value for our ancestors trying to find a predator hiding in the tall grass. We also would like to know what the future has to hold for us. It is a very unpleasant experience to be walking in the dark and discover that you have just walked off the edge of a high cliff. Therefore, we try to grasp at anything that might help us discern future events. Some even resort to auto-correlation to hazard a guess as to what the temperature might be tomorrow. I agree that a model based on physical principles is the best approach. However, in the absence of any reliable models, and a strong desire not to have the future always be a complete surprise, relying on auto-correlation to extrapolate a trend for a short distance might be rationalized.

I think that a good analogy might be a fighter pilot attempting to shoot down an enemy aircraft. The pilot leads the enemy aircraft, attempting to guess where the plane will be a couple of seconds later, in the hope that the bullets and the plane will intersect. Of course, the enemy pilot is trying hard to prevent that from happening, so the target is continually changing its 'trend.' While it may not be the optimal strategy for destroying an enemy aircraft, it was the best we could do until the invention of heat-seeking or radar-guided missiles. That is to say, projecting a trend may be useful some of the time for short extrapolations, even if it is often wrong. After all, planes did get shot down with simple machine guns. However, projecting out 100 years is a whole different ball game!

Clyde Spencer
Reply to  Kip Hansen
January 5, 2018 2:18 pm

Kip,
Your Little Leaguers are unknowingly projecting the path of a parabola, which is quite predictable, even with numerical trends — unless there are strong gusty winds at the time. However, that reinforces the point about auto-correlation. The ball may come down in a place different from the ideal parabola when there are winds, but it won't be out of the ballpark.

Don K
January 5, 2018 12:53 pm

To make scientifically valid predictions or projections from a data set, one has to understand what the system is that is producing the data

Sorry. I can't quite buy into that. One can understand fairly well without having all the details pinned down. In fact, I think that's pretty usual. When that's the case, you make estimates based on your best guess. Given the tendency for polynomials to zoom off to infinity or crash to zero when projected outside the data range, and the fact that exponential growth doesn't usually last that long, your best guess on non-cyclic data will likely be a linear projection. Of course it needs realistic uncertainty estimates, and those typically get large very quickly. That seems to be really hard for a lot of people to deal with … including many who really should know better.

Nick Stokes
Reply to  Kip Hansen
January 5, 2018 3:32 pm

“Anyone can guess, even five-year-olds.”
They don’t usually do linear regression. That is an evidence-based forecast. It is of course advisable to use more evidence if you can. You never get certainty. You are drawing a non-existent distinction.

Nick Stokes
Reply to  Don K
January 5, 2018 3:29 pm

“Given the tendency for polynomials to zoom off to infinity or crash to zero when projected outside the data range”
That is really the key here (though polynomials don't go to infinity in finite time). I remember in the 60s when people were trying to fit polynomials to stock prices for short-term prediction. They weren't necessarily bad fits. But higher order meant more variance, and that carries a cost. Most people have a nonlinear response to variance (e.g. going broke). So the frequent conclusion, that tomorrow's prices would be the same as today's, was as good as any in practice.

And that is the quantitative issue re the use of linear projection. The trend may well be optimal as a probability peak of outcomes, but it is uncertain, and the uncertainty is magnified by the leverage effect of projecting over long times. So it isn't that trend is useless in prediction. It's important. But at some point the variance will be too much. That depends on how far you project relative to the extent of data you have and the goodness of fit, and on the cost to you of variance.

That is all familiar. I've mentioned budgeting. People normally budget for a year. That is a period for which experience is a useful guide, and trend is part of that. But three-year budgets are also quite common. It's a tradeoff between the uncertainty of forecasts and the planning benefits of having a budget to work to.

January 5, 2018 1:09 pm

Time does not make button sales. Since the two are independent, the trend is constantly changing with each data point. In commodity trading, traders only use trends for breakouts from the trends to trigger buy/sell decisions, they do not use the trend itself to predict anything, as it predicts nothing. Least squares is based on functionally related x and y values in a graph, that is, they are dependent.

January 5, 2018 3:57 pm

Past data must be stored (archived).
There are programs for that which allow upper and lower limits to be set on just which values will be stored.
Some values are not stored because they don't "break" the upper or lower limit. The idea is to reduce the disk space required and/or allow for more rapid retrieval.
Old data stored by an older program can be run through a new program. Old values may be dropped (lost).
The upper and lower limits might be set to, say, +/- 0.5 MGD (Million Gallons per Day), or they could be set to +0.6 MGD and -0.4 MGD.
No valid reason I can think of for doing that, but it could be done.
Past temperature?
PS: The time frame over which particular +/- limits are applied can also be set.
Past values can be changed en masse.
Present values going into the archive can also be … handled … in such a manner as they are being stored.
Most would want an accurate (enough) record of the past on which to base future plans.
Or "adjust" the past to trend the future.

Mark - Helsinki
January 6, 2018 8:27 am

I've been downing vitamin C powder for colds for years; it does work.

Reply to  Mark - Helsinki
January 6, 2018 11:14 am

Vitamin C became a 'miracle tablet' after Linus Pauling wrote a book about vitamin C and the common cold. Pauling got the Nobel Prize for chemistry and, some years later, the Nobel Peace Prize; a great scientist, academic, educator and peace campaigner but, according to a member of his wider family whom I met in the UK some three decades ago, a very difficult person to live with.

robinedwards36
January 8, 2018 4:31 am

I read stuff on linear fitting and "valid" predictions made from such fits. There's a lot of this about! Linear fits (in time-series calculations) are reasonable enough provided that the underlying model is indeed linear in nature. In the esoteric world of climate data fitting (and therefore, by implication, predictions – what is a model for?), climatologists and others routinely compute linear fits to data that are grossly non-linear in nature, for which simple scatter plots are adequate to reveal this.
Now, climate data are affected by countless forms of influence, and indeed errors of various sorts, which are typically ignored by the technique of allocating them to "noise". Nevertheless, these influences can have the effect of swamping any attempt to make a useful or valid "prediction" or "projection".
How is one to appreciate this underlying problem? Well, a simple way is to plot all the data "as is" together with the linear fit, including its equation and inferential statistics based on presumed underlying normality of residuals, PLUS confidence intervals for the regression line AND for an individual future observation from the same population. Choose your probability level at a sensible value.
This plot would illustrate to all the amateur statistical detectives who try to infer conclusions from the simple, single straight line that is all that is usually published that their confident assertions are, to say the very least, subject to some uncertainty.
My own analyses /always/ include these essential details.
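
The plot robinedwards36 describes is straightforward to produce. A minimal sketch with synthetic data: the fitted line, the 95% confidence band for the mean line, and the much wider 95% prediction band for an individual future observation, both widening past the data.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(3)
x = np.linspace(0, 30, 31)
y = 0.05 * x + rng.normal(0, 0.5, x.size)        # a weak trend buried in noise

n = x.size
slope, intercept = np.polyfit(x, y, 1)
s = np.sqrt(((y - (intercept + slope * x)) ** 2).sum() / (n - 2))
sxx = ((x - x.mean()) ** 2).sum()
tcrit = stats.t.ppf(0.975, n - 2)                # two-sided 95% critical value

grid = np.linspace(0, 45, 200)                   # extends beyond the data
fit = intercept + slope * grid
se_line = s * np.sqrt(1 / n + (grid - x.mean()) ** 2 / sxx)      # CI of the mean line
se_pred = s * np.sqrt(1 + 1 / n + (grid - x.mean()) ** 2 / sxx)  # PI of a new point

plt.plot(x, y, "o", label="data")
plt.plot(grid, fit, label="OLS fit")
plt.fill_between(grid, fit - tcrit * se_line, fit + tcrit * se_line,
                 alpha=0.3, label="95% CI (regression line)")
plt.fill_between(grid, fit - tcrit * se_pred, fit + tcrit * se_pred,
                 alpha=0.15, label="95% PI (future observation)")
plt.legend()
plt.show()
```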

Reply to  Kip Hansen
January 9, 2018 12:21 pm

So all that blathering and commenting
and re-commenting
and you never got around to telling us
when all life on earth
is going to end
from runaway global warming ?

Or is that in your next article?

http://www.elOnionBloggle.Blogspot.com