Why You Shouldn’t Draw Trend Lines on Graphs

Guest essay by Kip Hansen

What we call a graph is more properly referred to as “a graphical representation of data.”  One very common form of graphical representation is “a diagram showing the relation between variable quantities, typically of two variables, each measured along one of a pair of axes at right angles.”

Here at WUWT we see a lot of graphs —  all sorts of graphs of a lot of different data sets.  Here is a commonly shown graph offered by NOAA taken from a piece at Climate.gov called “Did global warming stop in 1998?” by Rebecca Lindsey published on September 4, 2018.

[Figure: NOAA Climate.gov global surface temperature graphs — top panel with a single red trend line, bottom panel with five mini-trend lines]

I am not interested in the details of this graphic representation — the whole thing qualifies as “silliness”.  The vertical scale is in degrees Fahrenheit, and the entire change over the 140 years shown is about 2.5 °F, or roughly a degree and a half Celsius.  The interesting thing about the graph is the drawing of “trend lines” on top of the data to convey to the reader something about the data that the author of the graphic representation wants to communicate.  This “something” is an opinion — it is always an opinion — it is not part of the data.

The data is the data.  Turning the data into a graphical representation (all right, I’ll just use “graph” from here on) already injects opinion and personal judgement into the presentation — through the choice of start and end dates, the vertical and horizontal scales and, in this case, the shading of a 15-year period at one end.  Sometimes the decisions about vertical and horizontal scale are made by software — not by rational humans — causing even further confusion and sometimes gross misrepresentation.
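The scale effect is easy to quantify. Here is a sketch with hypothetical axis ranges (the 2.5 °F figure is from the NOAA graph above; the axis limits are invented for illustration):

```python
# How much of the plot's vertical extent the same data occupies depends
# entirely on the y-axis range the plotter (human or software) chooses.
# Illustrative numbers: a 2.5 degF total change, as in the NOAA graph.

def fraction_of_axis(data_range, axis_min, axis_max):
    """Fraction of the vertical axis spanned by the data."""
    return data_range / (axis_max - axis_min)

data_range_f = 2.5  # degF, total change over ~140 years

# Axis cropped tightly to the data: the change fills the whole plot.
tight = fraction_of_axis(data_range_f, 56.5, 59.0)

# Axis spanning a human-comfort range of, say, 50-95 degF:
wide = fraction_of_axis(data_range_f, 50.0, 95.0)
```

The identical data fills 100% of the first plot and under 6% of the second; neither plot is “wrong,” but they leave very different impressions.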

Anyone who cannot see the data clearly in the top graph without the aid of the red trend line should find another field of study (or see their optometrist).  The bottom graph has been turned into a propaganda statement by the addition of five opinions in the form of mini-trend lines.

Trend lines do not change the data — they can only change the perception of the data.  Trends can be useful at times [ add a big maybe here, please ], but they do nothing for the graphs above from NOAA other than attempt to denigrate the IPCC-sanctioned idea of “The Pause”, reinforcing the desired opinion of the author and her editors at Climate.gov (who, you will notice from the date of publication, are still hard at it, hammer-and-tongs, promoting climate alarm). To give Rebecca Lindsey the tiniest bit of credit, she does write “How much slower [ the rise was ] depends on the fine print: which global temperature dataset you look at”.  She certainly has that right.  Here is Spencer’s UAH global average lower tropospheric temperature:

[Figure: UAH global average lower tropospheric temperature anomaly, showing The Pause without any trend lines]

One doesn’t need any trend lines to be able to see The Pause, which runs from the aftermath of the 1998 Super El Niño to the advent of the 2015–2016 El Niño.  This illustrates two issues.  First, drawing trend lines on graphs adds information that is not part of the data set.  Second, for any scientific concept there is more than one set of data — more than one measurement — and it is critically important to ask “What Are They Really Counting?”, the central point of which is:

So, for all measurements offered to us as information — especially if accompanied by a claimed significance, when we are told that this measurement/number means this-or-that — we have the same essential question: What exactly are they really counting?

Naturally, there is a corollary question: Is the thing they counted really a measure of the thing being reported?

I recently came across an example in another field of just how intellectually dangerous the cognitive dependence (almost an addiction) on trend lines can be for scientific research.  Remember, trend lines on modern graphs are often calculated and drawn by statistical software packages, and the outputs of those packages are far too often taken to be some sort of revealed truth.

I have no desire to get into any controversy about the actual subject matter of the paper that produced the following graphs.  I have abbreviated the diagnosed condition on the graphs to gently disguise it.  Try to stay with me and focus not on the medical issue but on the way in which trend lines have affected the conclusions of the researchers.

Here’s the big data graph set from the supplemental information for the paper:

Note that these are graphs of Incidence Rates, which can be read as “how many cases of this disease are reported per 100,000 population?”, here grouped by 10-year age groups.  The researchers have added colored trend lines where they think (opinion) significant changes have occurred in incidence rates.

[Figure: age-specific incidence rates in men, seven panels by 10-year age group, with colored trend lines]


IMPORTANT NOTE:  The condition being studied in this paper is not something seasonal or annual, like flu epidemics.  It is a condition that develops, in most cases, for years before being discovered and reported, sometimes only being discovered when it becomes debilitating.  It can also be discovered and reported through regular medical screening, which is normally done only in older people.  So “annual incidence” may not be a proper description of what has been measured — it is actually a measure of “annual cases discovered and reported”, which is quite a different thing.

The published paper uses a condensed version of the graphs:

[Figure: condensed incidence-trend graphs from the published paper]

The older men and women are shown in the top panels, thankfully with incidence rates declining from the 1980s to the present.  However, as considerately reinforced by the addition of colored trend lines, the incidence rates in men and women younger than 50 years are rising rather steeply.  Based on this (and a lot of other considerations), the researchers draw this conclusion:

[Image: the “Conclusions” section of the paper]

Again, I have no particular opinion on the medical issues involved…they may be right for reasons not apparent.  But here’s the point I hope to communicate:

[Figure: annotated incidence-rate panels for Men > 50 and Men < 50]

I have annotated the two panels concerning incidence rates in men older than 50 and men younger than 50.  Over the 45 years of data, the rate in men older than 50 runs in a range of 170 to 220 cases reported per 100,000 per year, varying over a 50-cases-per-year band.  For men < 50, incidence rates were very steady, from 8.5 to 11 cases per 100,000 per year, for 40 years; only recently, in the last four data points, have they risen to 12 and 13 cases per 100,000 per year — an increase of one or two cases per 100,000 population per year. It may be the trend line alone that creates a sense of significance. For men > 50, between 1970 and the early 1980s, there was an increase of 60 cases per 100,000 population.  Yet, for men < 50, the increased discovery and reporting of an additional one or two cases per 100,000 is concluded to be a matter of “highest priority” — when, in reality, it may or may not be significant in a public health sense, and it may well be within the normal variance in discovery and reporting of this type of disease.

The range of incidence among men < 50 remained the same from the late 1970s to the early 2010s — that’s pretty stable.  Then there are four slightly higher outliers in a row, with increases of 1 or 2 cases per 100,000.  That’s the data.
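Here is a short sketch of what a least-squares trend line would say about data shaped like this. The numbers below are invented to mimic the description above (steady near 10 cases per 100,000 for 40 years, then four points at 12 and 13), not the paper’s actual data:

```python
# A least-squares "trend" fitted to hypothetical incidence data shaped
# like the series described above. Illustrative numbers only.

def ols_slope(xs, ys):
    """Ordinary least-squares slope: cov(x, y) / var(x)."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    return sxy / sxx

years = list(range(44))
rates = [10.0] * 40 + [12.0, 12.0, 13.0, 13.0]  # steady, then four higher points

slope = ols_slope(years, rates)          # cases/100,000 per year
total_change = slope * (len(years) - 1)  # implied change over the record
```

The fitted “trend” works out to a few hundredths of a case per 100,000 per year — roughly one extra case per 100,000 over the whole record — which is the sense in which the drawn line, not the data, supplies the drama.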

If it were my data — and my topic — say, the number of Monarch butterflies visiting my garden each year, I would notice from the panel of seven graphs further above that the trend lines confuse the issues.  Here it is again:

[Figure: age-specific incidence rates in men, repeated]

If we try to ignore the trend lines, we can see in the first panel (20–29y) that incidence rates are the same in the current decade as they were in the 1970s — there is no change. The range represented in this panel, from lowest to highest data point, is less than 1.5 cases/100,000/year.

Skipping one panel and looking at 40–49y, we see the range has perhaps dropped a bit, but the entire range is less than 5 cases/100,000/year.  In this age group a trend line has been drawn showing an increase over the last 12–13 years, yet the values are currently lower than in the 1970s.

In the remaining four panels, we see “hump-shaped” data which, over the 50 years, remains in the same range within each age group.

It is important to remember that this is not an illness or disease for which a cause is known or for which there is a method of prevention, although there is a treatment if the condition is discovered early enough.  It is a class of cancers, and its incidence is not controlled by public health actions to prevent the disease — public health actions are not causing the change in incidence.  It is known to be age-related and occurs increasingly often in men and women as they age.

It is the one panel, 30–39y, showing an increase in incidence of just over 2 cases/100,000/year, that is the controlling factor pushing the Men < 50 graph to show this increase (the 40–49y panel may be having the same effect).  Here is the image again, to save readers scrolling up the page:

[Figure: age-specific incidence rates in men, repeated]

Recall how the Conclusions and Relevance section of the paper put it: “This increase in incidence among a low-risk population calls for additional research on possible risk factors that may be affecting these younger cohorts. It appears that primary prevention should be the highest priority to reduce the number of younger adults developing CRC in the future.”

This essay is not about the incidence of this class of cancer among various age groups — it is about how having statistical software packages draw trend lines on top of your data can lead to confusion and possibly misunderstandings of the data itself.   I will admit that it is also possible to draw trend lines on top of one’s data for rhetorical reasons [ “expressed in terms intended to persuade or impress” ], as in our Climate.gov example (and millions of other examples in all fields of science).

In this medical case, there are additional findings and reasoning behind the researchers’ conclusions — none of which change the basic point of this essay about statistical packages discovering and drawing trend lines over the top of data on graphs.

Bottom Lines:

  1. Trend lines are NOT part of the data. The data is the data.
  2. Trend lines are always opinions and interpretations added to the data, and they depend on the definition (model, statistical formula, software package, whatever) one is using for “trend”. These opinions and interpretations can be valid, invalid, or nonsensical (and everything in between).
  3. Trend lines are NOT evidence — the data can be evidence, but not necessarily evidence of what it is claimed to be evidence for.
  4. Trends are not causes, they are effects. Past trends did not cause the present data. Present data trends will not cause future data.
  5. If your data needs to be run through a statistical software package to determine a “trend”, then I would suggest that you need to do more or different research on your topic, or that your data is so noisy or random that any trend may be irrelevant.
  6. Assigning “significance” to calculated trends based on P-value is statistically invalid.
  7. Don’t draw trend lines on graphs of your data. If your data is valid, to the best of your knowledge, it does not need trend lines to “explain” it to others.
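The point that a “trend” depends on the definition used can be demonstrated in a few lines. Three common definitions, applied to the same five invented data points (the last one an outlier), give three different answers:

```python
# Three definitions of "trend" on the same toy data set.
from statistics import median

xs = [0, 1, 2, 3, 4]
ys = [0.0, 1.0, 2.0, 3.0, 100.0]  # invented data with one outlier

# 1. Ordinary least-squares slope.
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
sxx = sum((x - xbar) ** 2 for x in xs)
ols = sxy / sxx

# 2. Endpoint-to-endpoint slope.
endpoint = (ys[-1] - ys[0]) / (xs[-1] - xs[0])

# 3. Theil-Sen estimator: the median of all pairwise slopes (outlier-robust).
pairwise = [(ys[j] - ys[i]) / (xs[j] - xs[i])
            for i in range(n) for j in range(i + 1, n)]
theil_sen = median(pairwise)
```

The least-squares “trend” is 20.2, the endpoint “trend” is 25, and the Theil–Sen “trend” is 1.0. All three are defensible calculations; none of them is the data.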

# # # # #

 

Author’s Comment Policy:

I always enjoy your comments and am happy to reply, answer questions, or offer further explanations.  Begin your comment with “Kip…” so I know you are speaking to me.

As a usage note, it is always better to indicate who you are speaking to, as comment threads can get complicated and comments do not always appear in the order one expects.  So, if you are replying to Joe, start your comment with “Joe”.  Some online periodicals and blogs (such as the NY Times) now automatically pre-add “@Joe” to the comment field if you hit the reply button below a comment from Joe.

Apologies in advance to the [unfortunately] statistically over-educated who may have entirely different definitions of common English words used in this essay and thus arrive at contrary conclusions.

Trends and trend lines are a topic not always agreed upon — some people think trends have special meaning, are significant, or can even be causes.   Let’s hear from you.

# # # # #


252 thoughts on “Why You Shouldn’t Draw Trend Lines on Graphs”

  1. It would be interesting to see these temperature graphs presented on a couple of other charts: one would be a chart where the y-axis presents temperature over the range of what humans might call “comfortable.” My definition of a comfortable range would be in the neighborhood of 50degF to 95degF.

    Another graph would have the y-axis presenting temperature over the range within which life at/near the surface of Earth lives well, omitting outliers like life/temperatures at the sea bottom near volcanic vents.

    Eric Hines

      • Since I have lived just outside of Wichita most of my life, I can add a few qualifiers to those graphs.
        I assume the data was taken from the weather station at what is now Eisenhower airport. Nowadays the airport is basically “in town”; pre-1970s it was not. The area around the airport is now nearly surrounded by malls and suburbs. I would postulate that the data presented is contaminated by a large UHI effect due to the urban development over the time frame shown. I am about 12 miles from downtown and see differences of as much as 5°F. Unfortunately, local TV and radio started reporting temperatures from either their studio backyards “in town” or from the “Old Town” area, which is basically a refurbished downtown brick-warehouse district.

        al

  2. A couple of years ago I realised that the words and meanings used in statistics today were not the same as I had learned 50 years ago. Seeing as how I was reading a lot about climate here and elsewhere I decided the best thing would be to do a course in R2 and get up to date.

    This was my conclusion too :-

    “If your data needs to be run through a statistical software package to determine a “trend” — then I would suggest that you need to do more or different research on your topic or that your data is so noisy or random that trend maybe irrelevant.”

    Since then I have not taken much notice of anything that requires “nuanced statistical manipulation” to make its point.

    Thanks for the reinforcement Kip.

    • Ernest Rutherford may or may not have said “If your experiment needs statistics, you ought to have done a better experiment”. Regardless of who said it, I took it to heart through my career. One problem that much research in general and medical research in particular suffers from is the failure to define what constitutes a biologically significant effect before the experiment is undertaken. This contributes to the silliness of an increase from 2 cases per 100,000 to 3 cases per 100,000 being described as a 50% increase when in fact it is a 0.001% increase. A favourite trick of alarmists everywhere.
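The relative-versus-absolute arithmetic in that example looks like this:

```python
# The same change, 2 cases per 100,000 rising to 3 per 100,000,
# expressed two ways (numbers from the comment above).

old_rate = 2 / 100_000
new_rate = 3 / 100_000

relative_increase = (new_rate - old_rate) / old_rate  # a "50% increase"
absolute_increase = new_rate - old_rate               # 1 per 100,000,
                                                      # i.e. 0.001% of the population
```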

      • My brother used to work at a bank, and their customer service survey results were the same. It would infuriate my brother when they would get lectured about poor customer service when the ‘poor’ results were caused by a single responder giving a “3 out of 5” star rating.

      • Exactly right. Which leads to very unlikely and not very important things being given huge prominence because it’s far easier to “double” the risk of a rare thing than a common thing.

      • I once knew someone who had worked in education research. Her comment was “if you need statistics to detect an educational effect, it isn’t worth having.”

    • Keitho ==> Thanks for that…it is not that statistics are “bad” or invalid — it is that far too many researchers are using very complex statistical software packages without really understanding what the software is doing — then “believing the results”. William Briggs has a lot to say on this topic.

    • And as a humble layperson, I just have to be guided by my sense of whether someone is trying to make a confident case for decisions based on “poofteenths” of variations.

      Hence my CAGW skepticism.

  3. As a physician and a former member of a screening committee (where efforts were made to rein in over-zealous screening programs that can actually do more harm than good), I can propose one potential reason for the perceived increased incidence in younger people. If the tendency to screen for a disease (look for it in people systematically when they have no symptoms), even in lower risk populations, is increasing, there will always be more of the “disease” found, and in many cases what is found is actually a less aggressive and in some cases benign version of the disease that would never lead to illness, yet now leads to medical/surgical interventions because screeners can’t tell “benign” from dangerous cases. I fully agree with the point that adding trend lines and trying to find patterns that may just be random noise is a problem, and one which is rife in medical literature. Researchers are rewarded for finding things, getting grants, publishing papers, and attaining prominence, but not necessarily for accurately portraying the limitations of their data.

    • Andy Pattullo ==> The researchers on Canadian CRC incidence rates did try to address these issues; at least they were willing to discuss them in the paper — but I do still feel they have been “fooled” into alarm by trend lines. Note that they were LOOKING for trends all along.

    • I had the same thought. True believers only accept ONE trend line—the one that shows what they want to believe.

    • Wow. With very little training and spreadsheet skills, I could cause the exact same data to turn that graph on its head to show how alarmism is used in graphic representation.

  4. Regarding “Trend lines do not change the data — they can only change the perception of the data. Trends can be useful at times [ add a big maybe here, please ] but they do nothing for the graphs above from NOAA other than attempt to denigrate the IPCC-sanctioned idea of “The Pause””: The trend line drawn for 1998-2012 is not hiding or distracting from The Pause. NOAA’s choice of data is. It uses the infamous ERSSTv4 or a successor thereof for its SST subset, thanks to Thomas Karl. A much flatter trend line would be the result if HadCRUT4, the dataset preferred by IPCC , was used instead.

    • Donald ==> I used Spencer’s lower trop as the foil for NOAAs propaganda. There are other data sets, and there was quite a kerfuffle over Karl’s “Pause Buster” paper. — Start here and follow the links back in time.

  5. Did you hear about the mathematician who was pulled over for DWI? He was told never to drink and derive.

  6. We were warned in school that derivatives (first differentials or dy/dx’s) were inherently less accurate than the underlying data–“there’s slop in the slope.” What we’re looking at above, incident rates, are already derivatives. Assigning trend lines to the derivatives is compounding the felony, taking the slopes of the slopes. What does that say about the meaningfulness of the trend lines?
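The “slop in the slope” point is easy to demonstrate numerically. A toy example (invented signal and error sizes), showing a fixed ±0.01 error in the data becoming a twenty-times-larger error in a finite-difference derivative:

```python
# "Slop in the slope": a small, fixed error in the data becomes a much
# larger error in a finite-difference derivative. Toy example: y = t^2
# sampled every 0.1 with a deterministic +/-0.01 measurement error.

dt = 0.1
t = [i * dt for i in range(21)]
y_true = [ti ** 2 for ti in t]
y_meas = [yt + (0.01 if i % 2 == 0 else -0.01) for i, yt in enumerate(y_true)]

# Forward-difference estimate of dy/dt from the measured data, compared
# with the exact slope of y = t^2 over each interval.
deriv_meas = [(y_meas[i + 1] - y_meas[i]) / dt for i in range(len(t) - 1)]
deriv_true = [t[i] + t[i + 1] for i in range(len(t) - 1)]  # exact for y = t^2

data_error = max(abs(m - yt) for m, yt in zip(y_meas, y_true))        # 0.01
deriv_error = max(abs(m, ) if False else abs(m - d) for m, d in zip(deriv_meas, deriv_true))
```

The data error is 0.01, while the derivative error is 0.2: differencing divides the error by the (small) sampling step and so amplifies it.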

    • Jorge: “We were warned in school that derivatives (first differentials or dy/dx’s) were inherently less accurate than the underlying data”

      I don’t agree. Certainly velocity (i.e. the first derivative of position) is not less accurate than the underlying position data. And acceleration (i.e. the second derivative of position) is not less accurate than the underlying position data.

      A derivative is a trend line all on its own. It gives you the trend of the underlying data. Velocity is a trend of position. Acceleration is a trend of velocity.

      • don’t agree. Certainly velocity (i.e. the first derivative of position) is not less accurate than the underlying position data.
        </blockquote

        I don't agree with that. The first derivative with position is with respect to *time*, and time has errors as well as position.

        Furthermore, there are dead-reckoning errors with measuring the changes (deltas) in position.

        • “I don’t agree with that. The first derivative with position is with respect to *time*, and time has errors as well as position.”

          Then the error lies in the measurements of time and position, not with the derivative itself. Read what I said again.

          “Certainly velocity (i.e. the first derivative of position) is not less accurate than the underlying position data.”

          The same thing applies to time. The first derivative is not less accurate than the underlying time data.

          • If there is uncertainty in the “time” dimension ordinary line regression will give faulty results. Google “regression dilution”.

            This is one of many pitfalls in statistics that climate scientists are happily ignorant of.
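Regression dilution is simple to reproduce. A toy example (invented numbers): y is exactly 2x, but x is observed with error, and the ordinary least-squares slope comes out biased toward zero:

```python
# Regression dilution: error in the x variable biases an OLS slope
# toward zero, even when y depends on the true x exactly.

def ols_slope(xs, ys):
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    return sxy / sum((x - xbar) ** 2 for x in xs)

x_true = list(range(20))
y = [2.0 * x for x in x_true]  # exact relationship: slope 2

# Deterministic error pattern +2, -2, -2, +2 repeating: zero mean and
# uncorrelated with x over each block of four, so only the attenuation
# effect of inflated x-variance remains.
err = [(+2, -2, -2, +2)[i % 4] for i in range(20)]
x_obs = [x + e for x, e in zip(x_true, err)]

slope_clean = ols_slope(x_true, y)  # exactly 2
slope_noisy = ols_slope(x_obs, y)   # attenuated below 2
```

With error-free x the slope is exactly 2; with the noisy x it drops to about 1.79, purely because the error inflates the apparent variance of x.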

    • jorgekafkazar ==> I think it means that they have used too many statistical procedures to arrive at slopes they feel are valid because of (what Briggs calls) “wee-Ps”. They may have fooled themselves into being alarmed about rising rates.

      In this context, see the latest “sea level rise acceleration” paper .

  7. Kip, I am someone who has used statistics at university and at work, and is rather passionate about it.
    For this essay, I just want to thank you, and virtually shake your hands. Can’t be written better than that. KUTGW!!!

        • Clyde ==> see any of my links to older essays on trends and predictions, the Button Collector series, etc. Plenty of evidence of lack of support….such lack doesn’t mean I’m not right, of course, nor does it discourage me.

  8. Excellent discussion, thanks. But I don’t agree 100%. Sometimes you need to draw trend lines, but how and when should depend on a specific purpose for a specific audience. I run into client situations where, if I don’t draw a trend, they will fixate on specific data points. But then again, I’m hired to make conclusions from the data, so it does have a specific purpose, for a specific audience. (FYI, I work in telecom).

    • Mark ==> Ah, you are using trend lines “rhetorically” — to convince others of your opinion about the data (your opinion may be perfectly valid, of course).

      When I was in the web design business, I often used clashing blocks of color when I wanted to demonstrate a new structured page — otherwise the VPs would focus on the color scheme or the titles — I wanted them to see the shapes!

      • Kip:

        Did you want them to see just shapes, or did you want them to see spatial relationships among objects?

        Actually I’m using trend lines to argue for a specific information latency contained in the data, so it’s not so much trend-line vs not trend-line that is valid as it is the selection of variables, and frequently it’s not just variables, it is latent factors

        again, great article, thanks for the work in pulling it together

  9. “One doesn’t need any trend lines to be able to see The Pause”
    Never mind numbers – “I know it when I see it”. The problem is, what do you do when others don’t see it? Especially if they don’t look only at UAH?

    • The whole point about the pause is that it exists if you want it to exist – just like the whole “manmade warming” trend exists ONLY if you want it to exist.

      People who understand data and 1/f-type noise know that neither is significant. But people like you who don’t understand 1/f-type noise can’t help seeing the pause (when we show it to you) … just as you can’t help seeing the “manmade warming” … neither is there in any meaningful sense; that’s the whole point of the pause.

    • What does it matter … it’s a correlation on a graph of data …. it tells you next to nothing in science terms.
      You can argue whatever you like from it but just don’t expect a real scientist or engineer type to believe you.

    • Nick ==> The point is that there are many data sets that purport to reveal the changes in “global temperatures”.

      There is no sense re-fighting “The Pause Wars” here in comments.

      UAH is one of the many well-accepted data sets of global temperatures, and I use it to show that, “Yes, Rebecca”, there is more than one data set — and UAH Lower Trop shows The Pause — no artificial trend line needed.

      • Kip,
        I agree with Nick. If you want to be able to claim that the Pause existed then you need a better proof than “I can see it in the data”. Trend lines and all the other statistical methods you seem to dislike exist so that there is a semi-objective answer to questions like “was there a pause”. Trends are also extremely useful if you want to make predictions using the available data. If somebody asks you how many people will be born in the next 10 years in a particular area (needed for planning schools, hospitals, etc.), how would you answer that question without looking at trends and extrapolating? The answer wouldn’t be perfect but it would be better than just guessing.

        • Actually, trying to predict population in a given area *is* pretty much guesswork. I used to work in long range planning for a major telephone company. We tried all kinds of tricks to guess population in an area in order to determine central office siting and sizing. You were far better off just asking local real estate agents and construction permit people where the population was going to grow and where it wasn’t. If you don’t believe me just ask the city planners for Detroit how badly they missed their population guesses for the various areas in the city.

          Plotting population growth only allows you to make a “projection” for the future and projections are not predictions. The past is not the future in almost all of reality.

          • I moved on from that job long ago. Neither of my two sons have landlines nor do any of their friends. I’m sure, however, that central office siting still has to be done because populations do move. And some percentage of that population will still want landlines. It still boils down to past trends are terrible at forecasting in the face of confounding variables.

          • Tim ==> Yes, glad to hear it. My point in asking is that the trends of past demand for land lines could not have predicted the massive shift to cellular telephones. Drawing trend lines from the past into the future would have resulted in horribly failed predictions.

        • Izaak ==> There are valid Forecasting Principles and they have little to do with simplistic trend lines. Using straight “trend lines” projecting into the future is invalid and unlikely to produce a valid forecast — see all of the failed forecasts of The Club of Rome, etc.

          Forecasting future population numbers is a very complex undertaking — and linear trend lines are absolutely useless for the task.

          • Kip,
            Using straight lines to predict the future is the basis of all modelling — it is just the same as a first order Taylor series expansion. And again such models become more and more accurate as the time period becomes smaller and worse and worse as the time period becomes larger. Using a linear trend I can get a reasonable guess for the population of a region next year but it is not going to be accurate for next century.
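That claim — a linear trend as a first-order Taylor expansion, accurate nearby and failing far away — can be checked directly. A sketch using e^t (chosen only for convenience):

```python
# A linear trend is a first-order Taylor expansion: fine locally,
# increasingly wrong as you extrapolate. Example: the tangent line
# 1 + t to e^t at t = 0.
import math

def extrapolation_error(t):
    linear = 1.0 + t  # first-order Taylor series of exp at 0
    return abs(math.exp(t) - linear)

near = extrapolation_error(0.1)  # small: the line is fine locally
far = extrapolation_error(2.0)   # large: the extrapolation has failed
```

The error at t = 0.1 is about 0.005; at t = 2 it is about 4.4, several hundred times larger.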

          • Izaak: “Using a linear trend I can get a reasonable guess for the population of a region next year but it is not going to be accurate for next century.”

            Actually you can’t. Ask the city planners in Detroit how well that works. Ask the city planners in New Orleans how well that works. There are too many confounding external factors to even predict a year into the future for any specific population area. Take a look at the population loss in New York. That is not linear at all, it is of a higher order. If you based your population forecast for next year based on the population loss this year you would not get a reasonable guess at all.

          • Izaak ==> “:Using straight lines to predict the future is the basis of all modelling —” yes, and that is where things go very very wrong.

        • Izaak,
          If all trends are linear, drawing a line to project future values may have some chance of giving an accurate prediction. But what if the data records values of a parameter that varies in a cyclical way? Under this scenario, drawing a trend line out into the future, or even to make sense of what has been recorded in the past, is guaranteed to be wrong. Perhaps the most glaringly obvious example of this is when trends in sea ice are drawn on graphs that all begin in 1979. If one instead examines a graph of the AMO, or of unadjusted temperatures recorded at Reykjavik, Iceland, it becomes quite apparent that one is looking at a portion of a cyclical oscillation. Forcing a linear fit to a sine curve is an inanity.

    • Nick, it’s hard to see or not see a meaningful trend in less-than-perfect data when we don’t have a model of the system that resembles reality. Most of the climate models overpredict the warming. Most of the models cannot reproduce today’s climate using eight decades of historical data as the training set. You also can’t take the predictions from all these different models and average them and pretend that it is meaningful. They all use different physics, parameters, and assumptions. You can’t average apples and oranges and expect a result that will have predictive value, which is what we really are after. To do so is intentionally misrepresenting the results of the models.

  10. Trend lines are bad, we definitely agree on that. One point that you miss is that all data points should have error bars of some sort. Coming from physics the standard is one sigma, it may be different in other fields, but they absolutely need to be included. How else can you determine if there is anything like a real signal and not just fluctuations due to random noise.
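One standard way to use that sigma is to compare a fitted slope with its own standard error. A sketch with invented flat-but-wiggly data, using the textbook OLS formulas:

```python
# Is a fitted "trend" distinguishable from noise? Compare the OLS slope
# with its standard error. Toy data: a flat series with a +/-0.1 wiggle.
import math

xs = list(range(10))
ys = [10.0 + (0.1 if i % 2 == 0 else -0.1) for i in xs]

n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
intercept = ybar - slope * xbar

# Residual variance and the standard error of the slope.
sse = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
se_slope = math.sqrt(sse / (n - 2) / sxx)

# |slope| < 2 * SE: nothing here rises above the noise, whatever a
# drawn trend line might suggest.
trend_detectable = abs(slope) > 2 * se_slope
```

Here the fitted slope is an order of magnitude smaller than twice its standard error, so any line drawn through this data is decoration, not detection.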

    • I suspect if they put reasonable error bars on their attempts to derive average global temperatures, they would show the possibility of no global warming at all for the last thirty years, just outliers during the El Niño years.

    • One sigma is only a measure of statistical variance. Each data point should also include bars indicating a measurement error component. I think in most cases those errors will far exceed the trend lines.

    • Paul ==> No argument there. In regards to the CRC paper itself, there are a lot more stats and sigmas and all that than shown here in this essay. The authors do discuss possible confounders, etc, and have CIs on some of the other numbers they produce.

      In CliSci — graphs tend to be “errorless” — sea level to 0.05 mms, no error bars, etc.

      • Funny how the case confidently being made is to spend more money studying the issue. The whole “belief strongest when living depends on it” thing.

  11. What gets me are those amorphous clouds of data that then get lines drawn through them. One wonders what the point is. The trend line has approximately zero predictive ability. Are they trying to demonstrate causality? Just because you can calculate a non-zero correlation doesn’t mean anything in the face of a data blob.

    If you have some a priori knowledge of the system you’re looking at, things change. Then, given enough data, you can extract a signal. That’s not what I’m objecting to though. What I’m objecting to are the papers that generate a slope and a correlation from a data blob. If they gather more data, it will have a different slope and a different correlation. That’s why we have the replication crisis. Calculating a trend and a correlation from a data blob proves nothing except possibly that the researcher has a copy of Matlab.

  12. Trend lines are fine if they are displayed correctly and add useful information, such as smoothing out seasonal variation or showing a long-term trend in linear data. The problem arises when people abuse them, such as applying linear trend lines to non-linear data, or extending trend lines beyond the data points, implying a forecast that is likely very poorly thought through and has no confidence intervals.

    • WR2 ==> While I think you are pragmatically correct: There are those (possibly me among them) who think that trend lines are simply a form of Ultimate Smoothing — smoothing an entire data set into one single straight (or, in some cases, curved) line. The scientific value of smoothing data sets is in question. In the end, the “smoothed” data must not be confused with the real data — it is not the data — the data is the data. The smoothed version is something else altogether.

      Smoothing does not “add useful information” — smoothing actually takes away informational detail and fools us into believing that the resulting “trend” better represents the data. Not everyone agrees with this view.

      Trend lines can be used to illustrate a rhetorical point.

  13. As William M. Briggs (statistician to the stars) wrote a few years ago “just LOOK at the data”.

    Looking at it, I can see the so called global mean temperature is lower now than it was in 1998, and the world didn’t go to hell in a hand-cart then either.

    I also have a general rule of thumb which states that the more advanced the level of statistics needed to make a claim, the less significant that claim is. It is a rule that works as well in climate science as it does in pharmaceutical development. No fancy statistics were needed to show that penicillin worked.

  14. Kip
    I disagree. The trend line gives an indication of where we are going when looking at a particular parameter, especially versus time. In fact, everything we measure in chemistry and physics depends a lot on the correlation coefficient and the particulars of the trendline.
    e.g. in AAS (atomic absorption spectrophotometry) we feed the instrument with standards of known concentration and read the absorption at a certain chosen wavelength. Depending on the strength of the correlation coefficient at the chosen wavelength we decide to go – or no go – with the trend line for the analysis of a sample of unknown concentration….
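HenryP’s go/no-go procedure can be sketched in a few lines of Python. The concentrations, absorbances, and the r threshold below are hypothetical, purely for illustration:

```python
import numpy as np

# Hypothetical calibration standards (concentration in ppm) and their
# measured absorbances -- illustrative numbers, not real AAS data.
conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
absorbance = np.array([0.002, 0.105, 0.198, 0.402, 0.797])

r = np.corrcoef(conc, absorbance)[0, 1]
slope, intercept = np.polyfit(conc, absorbance, 1)

if r >= 0.999:                       # a typical "go" threshold; lab-specific
    # invert the calibration line to read an unknown sample
    unknown_abs = 0.300
    unknown_conc = (unknown_abs - intercept) / slope
    print(f"r = {r:.5f}, unknown sample ≈ {unknown_conc:.2f} ppm")
else:
    print(f"r = {r:.5f}: recalibrate before analyzing samples")
```

Note the “go” decision and the inversion are only trusted within the range of the standards, which is the point made in the reply below HenryP’s comment.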

    • HenryP ==> We certainly disagree — but probably because we are talking of different things that we are both calling “trend lines”. This issue is pretty well covered in the essay, in the following comments (search the comments for my answers), and in The Button Collector; there is a follow-up, The Button Collector Revisited.

      You are talking about the mostly-linear output of known physical/chemical processes — it is the process that produces the future values based on its internal physics/mathematics. What you “see” from the known, predictable process when turned into a graphical representation you are calling a trend line . . . though it is far from a statistically determined trend.

    • ..but you do it within the range of the known calibration standards. Extending beyond the range of available data is perilous.

    • “The trend line gives an indication of where we are going when looking at a particular parameter”
      Not with cyclic data it doesn’t if it is a straight line.

      • Osborn ==> Trend lines do not apply to the future….only to the past. It is possible that a trend might continue….and then again, it might not… which of those — continues or does not continue — is not determined by the current trend.

        • “Trend lines do not apply to the future….only to the past. It is possible that a trend might continue….and then again, it might not… which of those — continues or does not continue — is not determined by the current trend”

          People who trade stocks learn this lesson early. 🙂

    • @Kip/ Michael/A C Osborn

      I show another example:
      https://i2.wp.com/oi64.tinypic.com/vyxdld.jpg

      Rainfall versus time. Measured at a particular place on earth, it is of course looking highly erratic measured from year to year. Not a high correlation…

      But the point in showing the straight trend line was to prove a relationship over time i.e. the 87 year Gleissberg cycle.

      • Henry ==> Your data points already show what is really there — no long-term trend. Further analysis (by whatever method you’ve used) seems to reveal a cycle represented by the points in the lower graph — adding the line overlays your opinion or your hypothesis, which the data may or may not support. The line is not evidence of any sort — it is just an illustration of what you wish others to see.

  15. Unfortunately I never had a statistics course in college in the ’50s when I took electrical engineering. However, it had become the trend to teach it when I got out and went to work at a large computer company in 1960. They had an excellent 6-month “new engineer” training program for all newly hired engineers at the time that took up half of each day with various subjects they thought important to their business. One of them was statistics. The instructor was excellent and cautioned us about drawing conclusions from statistical analysis by emphasizing assumptions and also confidence limits, something I don’t hear mentioned much today. The textbook he used (I don’t remember the title) had an anonymous quote on the title page that I have remembered all these years:

    “Figures don’t lie, but liars figure”

      • I did read your “Uncertainty — ” et al. A great article.

        Judging from all the comments that article and this one created, it appears that a wide “Uncertainty Band” exists around the science community as to how valid any data analysis is that has been analyzed using statistics!

        Keep up the good work.

        Fergie

    • Paul ==> Yes, all sorts of tricks used to fool themselves and others. It is the desperate need to show how things are getting worse that drives many of them.

  16. If you start in a time that was called “the little ice age” and you can’t drag anything better out of the numbers over 150 odd years than a wobbly line that goes up a bit maybe, then I suggest that we don’t have much to worry about.

  17. “A chart is an inaccurate representation of a partially understood truth.”
    – Shoghi Effendi

    • In the case of most of what passes for “climate science,” I think we would need to correct that to read:

      “A chart is a grossly inaccurate misrepresentation of a mostly misunderstood reality.”

  18. In opposition to Kip Hansen again, and a lot of others, again.

    In Spectroscopy, the parameter of interest is proportional to the slope of the line, in many cases. The more points we have, the more accurately the line can be determined and the better our analysis. This is true in many other fields as well. Kip Hansen’s prohibition rules out large swaths of experimental chemistry, and drives a truck through a bunch of other fields.
    Of course we know our systems, and are well past trying to draw inferences about cause and effect based on a trend line. We are using our lines for quantitative information about our system.
    Kip Hansen says we should not do this.
    OK, henceforth I will never again draw a trend line over my data. Instead, I shall draw a regression line, and go on my merry way.
    Analytical spectroscopy has been around longer than Kip Hansen has, so I think I will keep with the standards of analytical spectroscopy.

    If somebody else wants to misuse or abuse their data or mislead their readers, that is on them. (Heaven Forbid, someone attempts to mislead us using poor data techniques! Who would have thought???)

    Am I really supposed to change the way I do things because some clown is allegedly misusing trend lines?
    The data is the data, we are told. But we also see that The Trend Is The Trend and The Quest Is The Quest!

    • TonyL ==> It is possible that you are confusing/conflating two different things that “look” the same but are quite different — the trajectory of a cannonball can be fairly confidently predicted based on simple Newtonian physics and, when drawn on a graph, can be confused with a “trend line” (a curving one). You speak of the “slope of the line”, which is a parameter of how your data is changing over some other parameter — all this based on a known physical process that has (at least quasi-)linear output. I suspect that you are not calculating a trend line — but something that looks similar.

      See my comment above

  19. On our SCADA system at work I’ve made lots of trends. But I’ve never made one to prove a point. I’ve made them to accurately give a picture of what was happening. Some are only referred to once or twice a year but they are useful.
    I’ve never tried to put a trend line on them. Not sure if I could. I can’t imagine what good one would do.

    Now, I have seen trends here of models’ projections vs actual observations with a line on them. But that line was the average of all the models, not a “trend line” as you’re talking about.

  20. Trend lines are NOT part of the data. The data is the data.

    Technically, the data *are* the data.  Datum is singular, data are plural… 😉

    Trend lines are always opinions and interpretations added to the data and depend on the definition (model, statistical formula, software package, whatever) one is using for “trend”. These opinions and interpretations can be valid, invalid, or nonsensical (and everything in between).

    Trend lines are just equations.  Some data sets are amenable to linear regressions, some clearly exhibit a logarithmic or exponential function.  Some data sets exhibit no trends at all.

    The biggest pitfall is over-fitting, particularly with polynomial functions – Often irresistible.

    Trend lines are NOT evidence — the data can be evidence, but not necessarily evidence of what it is claimed to be evidence for.

    Again, it all depends on the time series.  Some sequences clearly can be linearly extrapolated or interpolated… Some can’t be.

    Trends are not causes, they are effects. Past trends did not cause the present data. Present data trends will not cause future data.

    Again, this depends on the data set.  The trend line can actually be the function of how one variable affects another… AKA the cause.

    If your data needs to be run through a statistical software package to determine a “trend” — then I would suggest that you need to do more or different research on your topic, or that your data is so noisy or random that a trend may be irrelevant.

    That’s not usually done to define a trend.  It’s done to make the signal more coherent.  Sometimes it works, sometimes it makes hockey sticks.

    Assigning “significance” to calculated trends based on P-value is statistically invalid.

    P-value is used to assess marginal significance, the probability of a given event.  Again, it all depends on the data.

    Don’t draw trend lines on graphs of your data. If your data is valid, to the best of your knowledge, it does not need trend lines to “explain” it to others.

    The purpose of graphing data is to demonstrate the mathematical relationship between two or more variables, or lack thereof.  The equation generated by the trend line is the mathematical relationship.  The r2 value (“goodness of fit” or “explained variance”) tells you how well the equation explains the data.
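The over-fitting pitfall flagged above (“often irresistible”) is easy to demonstrate. A hypothetical Python sketch: the data below are truly linear plus noise, the 9th-degree polynomial fits the sample “better”, and extrapolation just past the data shows which fit was real.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 12)
y = 2.0 * x + rng.normal(0, 2.0, x.size)     # truly linear data plus noise

lin = Polynomial.fit(x, y, 1)                # honest model
p9  = Polynomial.fit(x, y, 9)                # over-fitted model

# In-sample, the 9th-degree fit always has the smaller residuals...
res_lin = np.sum((lin(x) - y) ** 2)
res_p9  = np.sum((p9(x) - y) ** 2)

# ...but only the line stays sensible just beyond the data (true y(12) = 24);
# the high-degree polynomial typically swings wildly there.
print(res_lin, res_p9, lin(12), p9(12))
```

An r² or residual comparison alone would pick the wrong model here, which is why “goodness of fit” by itself is not a defense of a fitted curve.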

        • The beauty of all those fitted trend lines is that they are all through the same data points!
          Look closely.

          XKCD nails it — as usual. (Though I think he is misguided on AGW.)

    • Dave ==> Too much to disagree with — the essay stands on its own merits regarding trend lines.

      See my other comments to those who insist they use trend lines for some useful and sound purpose, and the Button Collector series here at WUWT.

      • Kip,

        You made some interesting points… But almost all of your conclusions were either flat-out wrong or over-generalizations.

      • Here are a couple of real world examples of why “eye-balling it” isn’t as good as a mathematical trend line:

        Ranges of fluid density and gradient variation

        Oil-field liquids and gases occur in a wide range of compositions. The table below shows typical density ranges and gradients for gas, oil, and water. However, because exceptions occur, have some idea of the type of fluid(s) expected in the area being studied and use appropriate values.

        AAPG Wiki

        The pressure gradient is a linear regression – a trend line. You shoot at least 3 pressure points in a reservoir, plot them on a graph, and plot a linear regression. The slope of the trend line is the pressure gradient. While it’s easy to distinguish salt water from oil/gas on a resistivity log, it’s not always easy to distinguish oil from gas. That’s one of the reasons we take pressures in potential pay sands.

        Fluid          Normal density range (g/cm3)   Gradient range (psi/ft)
        Gas (gaseous)  0.007-0.30                     0.003-1.130
        Gas (liquid)   0.200-0.40                     0.090-0.174
        Oil            0.400-1.12                     0.174-0.486
        Water          1.000-1.15                     0.433-0.500

        Here’s a check shot survey from a well in the Gulf of Mexico…

        Two-Way Time (ms)   Depth (ft below sea level)
                        0                            0
                      365                          564
                      665                        1,654
                      965                        2,684
                    1,265                        3,674
                    1,565                        4,794
                    1,865                        5,944
                    2,165                        7,169
                    2,465                        8,454
                    2,765                        9,804
                    3,065                       11,244
                    3,365                       12,754
                    3,665                       14,309
                    3,965                       15,834

        Let’s say you’re drilling a well to a seismic anomaly near this well, but deeper. Let’s say your target is at 4,355 ms… What’s the most accurate way to forecast the depth at 4,355 ms? You’ll be shocked to learn that it’s a linear regression: a trend line.
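For what it’s worth, the forecast described can be reproduced directly from the check-shot table above. A Python sketch (numpy assumed):

```python
import numpy as np

# Check-shot pairs from the table above: two-way time (ms) vs depth (ft).
t = np.array([0, 365, 665, 965, 1265, 1565, 1865, 2165, 2465,
              2765, 3065, 3365, 3665, 3965], dtype=float)
d = np.array([0, 564, 1654, 2684, 3674, 4794, 5944, 7169, 8454,
              9804, 11244, 12754, 14309, 15834], dtype=float)

slope, intercept = np.polyfit(t, d, 1)       # linear regression (trend line)
depth_at_target = slope * 4355 + intercept   # extrapolate to 4,355 ms
print(f"{depth_at_target:,.0f} ft")
```

One caution that fits the thread’s theme: because velocity generally increases with depth, the time-depth pairs are gently curved, so a straight-line extrapolation below the deepest point will tend to underestimate; a low-order polynomial fit may track the data better.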

  21. A trend is only useful when measuring a physical cause, e.g. I did a physics experiment at university to calculate the heat conductivity of sand.

    • Also… Velocity gradients, production decline curves, any variables that exhibit mathematical functions.

      • Placing/analyzing a particular sample within a well-defined relationship, based on understood, consistent physical properties is somewhat different than making a claim, or projection, about something dependent upon many poorly understood variables, such as wide-scale surface temperatures, sea-level changes, and other sacred tenets of climate science, no?

  22. Kip

    Makes you wonder why Excel has a trend function (-:

    Here are two graphs, one with and one without a trend line:

    NASA 2019 minus 2009 Data
    https://i.postimg.cc/L8BZjZhq/image.png

    NASA Trend Comparison 1997-2018
    https://i.postimg.cc/sD1ZKVF3/image.png

    One graph requires a trend line to see the difference, and it looks like your point is that an increase of 0.25 deg C in 100 years is just that, 0.25 deg, and so what.

    The other graph shows a pattern that looks suspicious.

      • Kip

        One problem seems to be that one shouldn’t make a big deal about 0.25 deg/century. OK

        The issue not addressed is the pattern of changes. That graph don’t need no stinking trend line.

    Both graphs address the same issue: every month NASA’s LOTI changes over 40% of its monthly entries (over the 1st 6 months of 2019) and over time it has produced that pattern. So, is that a problem?

  23. You could readily create an ‘uptrend’ by taking data calculated from a sinusoidal expression, i.e. y = a sin x, start it at a trough and finish it near a crest and hey presto, the end of the world is upon us!!
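That trap takes only a few lines of code to spring. A hypothetical Python sketch, fitting a straight “trend” to the trough-to-crest arm of a pure sine wave:

```python
import numpy as np

x = np.linspace(-np.pi / 2, np.pi / 2, 50)   # start at a trough, end at a crest
y = np.sin(x)                                # purely cyclic, zero-mean "data"

slope, intercept = np.polyfit(x, y, 1)
print(slope)   # a strongly positive "trend" conjured out of a cycle
```

The fitted slope is large and positive even though the underlying process has no trend at all; choosing different start and end points within the cycle can make the “trend” point either way.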

  24. Don’t draw trend lines on graphs of your data. If your data is valid, to the best of your knowledge, it does not need trend lines to “explain” it to others.

    Are you opposed to publishing calibration curves? Using the functions fit to instrument calibration data? How about age-related reference regions for diagnostic interpretation of lab results?

    To me, this is roughly equivalent to staying home when the advice is to “drive carefully!”

    What do you think about nonlinear transforms of the data, for example to make log-log plots?

    • Matthew R Marler ==> “Calibration curves” are not “trend lines” — they just look like a trend line.

      See some of my replies above, and the links to the Button Collector series and my replies below those essays.

      • Kip Hansen: “Calibration curves” are not “trend lines”

        I disagree. They are generally estimated via least squares, same as most other trend lines. There may or may not be tests of linearity and homogeneity of variance, but they are trend lines.

        • None of my electronic equipment is evaluated using least-squares curve fitting. The calibration curve is determined by comparison of my instruments against a known calibration instrument or element. The curve thus generated is the calibration curve. If at some point my instrument deviates a large amount from the calibration standard, due to non-linear factors just happening to coincide, then the calibration curve goes through that point; it is not ignored.

          • Tim Gorman: None of my electronic equipment is evaluated using least square curve fitting. The calibration curve is determined by comparison of my instruments against a known calibration instrument or element.

            Measuring instruments are always calibrated against something. It is the calibration curve that allows the value of the measured quantity (pH, volts, temperature) to be calculated from the indicator (usually current flow, but could be mercury extent in a common thermometer) — from the inverse of the calibration curve. Your phrase “comparison of my instruments” hides all the details of what “comparison” is actually carried out — most likely a least squares fit to the (possibly log or log-log transformed) calibration data. Your electronic measuring instruments have a calibration curve (or curves, along with expected accuracy) that may be supplied in the paperwork that is in the boxes the devices came in, or may be obtainable from the manufacturer.

          • “It is the calibration curve that allows the value of the measured quantity (pH, volts, temperature) to be calculated from the indicator (usually current flow, but could be mercury extent in a common thermometer) — from the inverse of the calibration curve.”

            I’m not even sure what you are saying here. The value of the measured quantity is not calculated from the inverse of the calibration curve. The calibration curve gives an adjustment factor to be applied to the instrument reading. If my power meter reads 1 dB low at 144 MHz according to the calibration curve, then I must add 1 dB to my reading in order to have an accurate measurement.

            “Your phrase “comparison of my instruments” hides all the details of what “comparison” is actually carried out — most likely a least squares fit to the (possibly log or log-log transformed) calibration data.”

            I’m not even sure we are talking about the same thing. Calibration is done by applying a standard input to my equipment and to the calibration standard equipment. The difference then becomes the next point on the calibration curve. There is no “least squares fit”. The calibration curve goes through all the points that are measured for calibration. If you don’t do that then there isn’t much reason to calibrate your instrument against a standard.

          • Tim Gorman: I’m not even sure we are talking about the same thing. Calibration is done by applying a standard input to my equipment and to the calibration standard equipment.

            I am sure that we are not talking about the same thing. My Micronta 43-Range Multitester displays the result of the measurement via a needle that swings in an arc from left to right. What is literally driving the needle is the magnetic field generated by the current flow through a coil. To get the numbers on the dial, the developers put known standards to the test, and for the known standards marked the appropriate values on the dial (actually, the process is more complicated, but that’s the gist of it). They used 10 – 20 standards in the range. Points on the dial between the marked standards were interpolated. There is then a function relating the standard value to the magnetic field strength. Developing that function is what I mean by “calibration” of the measurement instrument. Inferring the value of the measured attribute from the position of the needle is effectively reading the inverse of the value of the magnetic field strength.

            For an HPLC to measure a blood constituent, the standards are vials filled with aliquots that have known quantities of the analyte. The result of the HPLC run is the function relating the known concentrations to the area under the curve of the absorption function (or perhaps the maximum of the absorption function — n.b. it is the absorption function, smoothed data, not the raw absorption data). Then the area is computed for a sample of unknown concentration, and the functional inverse of that is used as the estimate of the measured quantity.

            What you have described to me reads like “adjustments to the calibration curve”, which is probably perfectly well shortened, in this context, to “calibration”. Or you have some other well-defined usage for your setting.

          • “There is then a function relating the standard value to the magnetic field strength. Developing that function is what I mean by “calibration” of the measurement instrument.”

            The magnetic field strength will vary from instrument to instrument for various physical reasons. You *must* develop a calibration curve for each instrument individually by determining the difference between the instrument readings and the readings of a standard.

            This has *nothing* to do with doing a linear regression of data points as was initially proposed. Not even *your* description has anything to do with developing a linear regression of the data points! Interpolation is not regression.

            If your absorption function does not match the data exactly then integrating the function to find the area under the curve winds up with an error bar all of its own related to the error bars associated with the measurement of the data.

            “estimate of the measured quantity.” The operative word here is “estimate”. Calibration of measurement devices is meant to provide for eliminating “estimating”.

          • Tim Gorman: If your absorption function does not match the data exactly then integrating the function to find the area under the curve winds up with an error bar all of its own related to the error bars associated with the measurement of the data.

            That is true. You have error either way, and the maximum or area computed from the fitted curve will have smaller mean square error than the area computed from only the data — if the model fits well enough, which can generally be checked.

            But if you only use data in calibration, not the model, you restrict your calibration to the subset of values actually used in the calibration, not the full range of expected values.

            Interpolation is not regression.

            Do you do “linear” interpolation, based on a straight line fit to two data points?

            You *must* develop a calibration curve for each instrument individually by determining the difference between the instrument readings and the readings of a standard.

            If you do this with a subset of values, and then apply it to the full range of values expected to be encountered, then you are developing a calibration curve relating the new instrument to the “gold” standard. What you use in practice is the full calibration curve (which may be a higher order polynomial or sum of exponentials or b-spline approximation, or something from other families of functions), not just the relatively few values tested.

      • Kip Hansen: “A trend line (also called the line of best fit) is a line we add to a graph to show the general direction in which points seem to be going.”

        “The line of best fit” gives away the story. How is “best” determined, least squares? “Best” among what — best low order polynomial, best among 2-compartment diffeqn models, best exponential decay model, best subject to continuity or parsimony constraints, etc?

        Trend lines can not be depended on to predict the future accurately, but if you are trying to predict the future at least approximately, then you depend on the trend line, not the data.

        Your cautionary remarks are the usual cautionary remarks about “believing” the trend line of a single data set without testing. There is no good case that trend lines should always be avoided.

        • Matthew ==> If you are following comments here, you see many examples in which trend lines obscure data — there are plenty of reasons for avoiding simplistic trend lines, given in this essay and in the essays I have linked in comments.

          • Kip Hansen: you see many examples in which trend lines obscure data

            Sure.

            Everything that can be done well can be done badly, including graphing trend lines. The solution is not to abjure trend lines altogether, but to use them thoughtfully.

            What you are doing is “throwing the baby out with the bathwater.”

          • Kip Hansen: I have specifically denied “throwing the baby out with the bathwater”

            Then you need to rewrite this: Don’t draw trend lines on graphs of your data. If your data is valid, to the best of your knowledge, it does not need trend lines to “explain” it to others.

            Otherwise the denial is hollow.

    • My advisor told me, that absent very good data, in special cases, anyone making log-log plots should be shot.

      It is too easy to hide the noise or scatter.

      • a_scientist: absent very good data, in special cases, anyone making log-log plots should be shot.

        That’s absurd. They are just one of many transforms of the data that may be informative beyond merely eyeballing the raw data.

  25. Kip Hansen said:

    Drawing trend lines on graphs is adding information that is not part of the data set

    Exactly 100% Absolutely Wrong!
    The trend line is derived 100% exactly from the data. The information represented by the line comes from nowhere else.
    Allow me to explain.

    The Situation In One Dimension:
    We have a set of numbers, all measurements of the same thing. We have simply conducted our measurement multiple times so as to generate an average of some sort. There is much information in the whole data set. Maybe we care, maybe not.
    A) We report the mean. It is the figure of merit and all we care about. What is lost is any information about the dispersion of the individual points about the mean. We do not care.
    B) We report the mean and standard deviation. Some information about the dispersion is retained, some is lost. For instance, there may be a tendency for the measurements to systematically increase as the experiment proceeds. This will be lost.
    C) We report the mean, standard deviation, and sample size, which is the best condensation we can typically do. If this is still not good enough, there is no sense trying to condense the data; back to the raw data you go.
    In all cases, no information is magically added to the system. Just the opposite, information is discarded for the sake of brevity.

    The Situation In Two Dimensions:
    The case, and issues are the same. The mean becomes the slope and intercept. Two values instead of one.
    A) Just the same as above, slope and intercept reported, dispersion lost.
    B) Just the same as above, slope, intercept, and confidence intervals reported.
    C) A Little Different! Show all the Raw Data AND the summary Trend Line. Keep everybody happy.
    100% of the data and the information contained within is displayed. No information is discarded, and none is magically conjured up and added.
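TonyL’s case (B) in two dimensions — slope, intercept, and confidence intervals — can be sketched like this. The data are illustrative, and the intervals use a rough normal approximation from numpy’s covariance output:

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.arange(30, dtype=float)
y = 0.5 * x + 3.0 + rng.normal(0, 1.0, x.size)   # made-up linear data + noise

# Slope and intercept are the 2-D analogue of the mean; the covariance
# matrix describes the dispersion about them (TonyL's case B).
(slope, intercept), cov = np.polyfit(x, y, 1, cov=True)
se_slope, se_intercept = np.sqrt(np.diag(cov))

# Rough 95% confidence interval for the slope (normal approximation):
ci = (slope - 1.96 * se_slope, slope + 1.96 * se_slope)
print(slope, ci)
```

Reporting the interval alongside the slope is what distinguishes case (B) from the bare slope-and-intercept of case (A).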

    • TonyL ==> If only it was that simple. Of course, the trend is derived from the data.

      The practical problem is we never have all the data, there are start points and end points — there is the matter of scale — which definition and formula for trend we might use — there are a lot of variables and thus your decision to use and show one of them is an OPINION about the data, even if it is derived from the data.

      The statistical mean is not a trend line — it is the statistical mean. It may or may not be useful. In multiple measurements of one unchanging thing — it is how we reduce measurement error.

      Your 2-D example only works for trivial situations — like mechanical processes with short simple data sets.

      For physics problems, sometimes a “slope” can be useful information — a “slope” is not exactly the same as statistical trend line laid over a set of data — but looks like one.

    • The trend line is derived 100% exactly from the data. The information represented by the line comes from nowhere else.

      So which bit of the data describes the beginning and end of the trend lines?

      • I honestly do not even know what you are attempting to say, you seem to be so far off the point.
        So how do we determine the end points of a trend line?
        The mathematician tells us that a point is infinitely small, and also that a line is an infinite collection of points. We also find out that a line extends infinitely in both directions.
        We know that a line is fully and uniquely described by the slope and the intercept. Nothing else is needed. A line does not have endpoints, nor does it need them.

        If your trend line is truly a line, it has the above described properties of a line. If your trend line has endpoints, it may be because you added them.

        • TonyL ==> Any trend line derived for an existing data set is ONLY valid for the data points known. The trend is the trend of the existing data — not of unknown past data and not of unknown future data.

          Fooling around with the geometrical definition of “a line” does not change that. That’s just silliness.

          • Past performance is no guarantee of future results. Corollary: short-term trends can become long-term trends; how does linear regression capture this?

        • I’ll make it simpler.
          There are numerous trend lines on the temperature graph. It is an editorial decision – a political decision – to choose that many trend lines.

          You could just draw a straight line over the whole data set. But that is making a choice that the data is homogenous and no new factor has come into play over that period. That choice is not in the data.

          You could draw inflexions when something changes, such as the moon entering the house of Aquarius. But choosing to prioritise astrology is again, not in the data set.

          Now, you are clearly fluent in the language of statistics; you know your stuff. But that means you can understand the ‘words’ even when they are untrue.
          And you seem to not realise that you are being misled.

          • Ah, I see. I honestly did not see what you were getting at.

            You could just draw a straight line over the whole data set. But that is making a choice that the data is homogeneous

            If you want to be this strict about it, then still no: I am merely accepting the null hypothesis of no change. This is not making a decision and adding information to the data set.
            If I were to break the data set into two or more portions, and plot two or more trend lines, then yes, I have added to the system. I am well aware of that. Whenever anybody does that, the howls and cries of “Cherry Picking” bombard your ears. You have to be ready for that. So you are correct for a split graph. But I would never claim that all the information is contained within the data for a split graph.

    • Nick: “A) We report the mean. It is the figure of merit and all we care about. What is lost is any information about the dispersion of the individual points about the mean. We do not care.”

      Of course we care. If what you have determined is the mean length of 1000 steel girders then the dispersion of the individual points about the mean is *very* important and we *very* much care about it!

    • TonyL The trend line is derived 100% exactly from the data. The information represented by the line comes from nowhere else.

      I would not say it that way. Information is provided from other research showing that some member of a parameterized family will fit the data well, and that the specific estimated parameter is physically meaningful. With radioactivity counts, for example, the linear trend estimated from the log-transformed data can be transformed into the half-life of the sample. Likewise for the terminal portion of a concentration curve from a pharmacokinetic experiment; parameters from the fit of a compartment model can be used to devise a dosing schedule for experiments on the efficacy of the drug.

      Kip Hansen’s essay does not distinguish between fields where such evidence of applicable models has been verified, fields where verification has barely begun, and fields where no such verification has been done but the experiment under consideration may be a pathfinder in the field.
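      The radioactivity example above can be sketched in a few lines. This is purely illustrative — the counts, the times, and the 10-unit half-life are hypothetical, not from any real experiment:

```python
import math

# Simulated counts from a sample with a (made-up) true half-life of 10 time units.
true_half_life = 10.0
decay_const = math.log(2) / true_half_life            # lambda in N(t) = N0 * exp(-lambda * t)
times = list(range(0, 50, 5))
counts = [1000.0 * math.exp(-decay_const * t) for t in times]

# Ordinary least squares on (t, ln N): the slope estimates -lambda.
logs = [math.log(c) for c in counts]
n = len(times)
t_bar = sum(times) / n
l_bar = sum(logs) / n
slope = sum((t - t_bar) * (l - l_bar) for t, l in zip(times, logs)) / \
        sum((t - t_bar) ** 2 for t in times)

half_life = math.log(2) / -slope
print(round(half_life, 3))   # recovers 10.0 on this noise-free data
```

Here the “trend line” is meaningful precisely because an underlying physical model (exponential decay) justifies the log-linear fit — which is the distinction this comment is drawing.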

  26. Before even arriving at meaningful trends, you need accurate enough and consistent enough data over a “long enough” interval.

    Prior to the 1940’s we don’t know what Global Average Temperatures were with a high enough accuracy to calculate trends as accurately as they are often stated.

    Prior to the 1940’s there wasn’t enough data to know the 1° grid temperatures of over 75% of the world to better than +/- 2 C accuracy…if that. For 3/4 of the world, it was all extrapolation and guesswork. Prior to 1920 it was considerably worse. With that level of uncertainty there is no way to append the old data onto the better new data to generate meaningful trends for the last century.

    The minimum temporal interval for discussing climate is 30-50 years FOR INDIVIDUAL DATA POINTS. Noise from decades long ocean and air circulation cycles and decades long energy transfer lags define about a half century interval FOR Detecting ANY CLIMATIC CHANGES…let alone trends…else we are really only talking about weather trends.

    UAH GATs have risen 0.28 C over the last 30 years, or 0.09 C per decade. For the previous 30 years (kludged together from far less certain data sources) the GATs rose 0.19 +/- 0.09 C. For the 30 years prior to that (1928-1958), the data is too sparse and too uncertain to produce any meaningful trend values. “Meaningful” here being trends accurate to +/- 0.25 C per decade for INCLUSION IN THE DISCUSSION about trends in the GAT.

    So, we really only have 2 useful data points, and one of those is “shaky”, for CLIMATIC trends (60 to 100 years for 2 or 3 points) in GATs. The average of this insufficient data set indicates a century trend in GAT of less than +1 C.

    • DocSiders ==> Shaky is a good word — and I agree. Wait till you read the new SLR Acceleration paper!

    Temperature can be considered an output variable of the climate system and is dependent on many input variables. Any scientific analysis of the trend in an output variable must include analysis of the key input variables (KIVs) having the most impact on the output variable to provide a complete picture of the observed trend.

    While determining statistical trends in data is sometimes necessary, it is important to keep in mind the confidence in the estimated trend. Usually when working with continuous data (e.g. temperature), 25 to 30 data points are necessary to establish 95% confidence that you are seeing a true behavior in the output data. In the temperature data example, the use of 15-point trends significantly reduces the confidence that you’re seeing a true behavior in the data (as this particular data set clearly shows).
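    The point about short trends carrying low confidence can be illustrated numerically: the standard error of an OLS slope grows sharply as the fitting window shrinks. A sketch with made-up numbers (a small trend of 0.01 per step plus Gaussian noise — nothing here is real temperature data):

```python
import math, random

def slope_and_se(y):
    """OLS slope of y against 0..n-1, plus the slope's standard error."""
    x = list(range(len(y)))
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    sxx = sum((xi - xb) ** 2 for xi in x)
    slope = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / sxx
    intercept = yb - slope * xb
    sse = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    return slope, math.sqrt(sse / (n - 2)) / math.sqrt(sxx)

random.seed(0)                                    # reproducible fake data
series = [0.01 * i + random.gauss(0, 0.2) for i in range(120)]

s15, se15 = slope_and_se(series[:15])             # short window, like a 15-point trend
s120, se120 = slope_and_se(series)                # full record

print(se15 > se120)   # True: the short-window slope is far less certain
```

The same noise level produces a much wider uncertainty band on the 15-point slope, which is the commenter's caution in miniature.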

    • Joel Brown ==> When one does not know the cause of the measured data — then the trend of the existing data does not inform us of anything not already apparent in the data — the set starts at 1 then 2 then 3 then 10 then 5 then 8 then 9 then 2……one could draw a trend line — which will tell you nothing other than what the data did in the past seven measurement periods — but you already knew that. You only need look at the graph….

      The trend, calculated from sufficient data points, only tells you what the system being measured did over the period covered by those data points — and I repeat — that is something you already know because you have the data points.

  28. Trend lines are useful for addressing clearly defined questions.
    The questions need to be clearly defined so that the viewer can then consider whether the trend line is:

    A) Important.
    B) Pertinent.
    C) Significant.

    If the question isn’t important the trend won’t be.
    If the trend line doesn’t relate to the question then the trend line won’t be pertinent.
    If the data doesn’t support the trend line sufficiently for the question being addressed then it’s meaningless.

    The question being asked in the climate data is about the Pause and can be written as:
    “Has there been any Global Warming in Greta Thunberg’s lifetime?”

    When you know what the question is you can judge whether the trend lines are fit for purpose.

    • M Courtney ==> Your Greta question is answered simply by listing or graphing the data from her birth date to the present — then looking at it. It is either warmer now or not. But your real question is not so simple….you want some kind of statistically determined “trend” line to tell you “has it been getting warmer?”… even though you have access to the original data.

      If the data itself can’t tell you by simple looking — then you are asking a question that maybe the data can’t really honestly tell you.

      By most measures, the “global average surface temperature” (as measured and manipulated) is higher now than 16 years ago…marginally.

      • What you have noticed is the irony of the word “Significant”.
        Statistically the graph is asking a question that the data can’t really honestly tell you.

        But that assumes the graph was created with “really honestly” being a consideration. Not necessarily so.

        • M Courtney ==> The issues involved in “most research findings are false” and that some CliSci practitioners are advocates first is not the topic of this essay…. 🙂

          • Point taken. I wasn’t directly on the point of the essay. Nor was I first though.

            Yet, as the topic of this essay was why trend lines are didactic and illustrative, I thought that summarising when being didactic is acceptable would be helpful.

            Not in the case you used for example.

  29. Hey Kip,

    Part of the ‘replication crisis’ in some branches of science is the misuse of statistical techniques. For instance, a small change in the treatment of outliers may lead to different conclusions.

    This ‘pause’ in global warming is interesting. As such it contradicts the severity of the greenhouse effect: despite a steady increase in CO2, which supposedly should lead to proportional retention of thermal energy in the atmosphere, we did not observe an increase in averaged ‘global’ temperature. I can see at least two possible explanations:

    1. Another large-scale physical effect temporarily counteracted the greenhouse effect. The question is then why not attribute most of the observed warming to such naturally occurring effects, other than the greenhouse effect?
    2. Local events skewed the averaged ‘global’ temperatures, dragging them down. In that case we should ask for the detailed ‘heat map’. Again, the question would be why a chain of such local events is not responsible for most of the observed warming, pushing ‘global’ averages up?

    • Paramenter ==> You’ve got good questions — always the most important part of thinking about something.

      The majority of CliSci researchers are trying to figure out the answers — some minority are just pushing alarmism.

  30. Kip,

    I will agree that trend lines are often misused or abused. However, they do have utility in data analysis. Perhaps the greatest value is to emphasize the “opinion” of someone making conjectures about the meaning of the data.

    Succinctly, if one has a data set with a lot of scatter, and there is a need for a best guess for the Y-value at a certain X-value for which there is no measurement, the equation for the regression line gives a quantitative interpolation that is better than a visual guess. Extrapolations are more risky, but it may be the only way of predicting. Linear extrapolations are better behaved than polynomial extrapolations and may be acceptable if one has other reasons to believe that the relationship is actually linear.

    Calibration curves ARE a form of regression that minimizes the effect of scatter in a set of measurements.
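    Clyde’s interpolation point can be sketched as follows: a least-squares line supplies a quantitative best guess at an unmeasured X-value. The numbers are invented for illustration only:

```python
# Hypothetical scattered measurements, roughly y = 2x.
x = [0, 1, 2, 3, 4, 5]
y = [0.1, 2.2, 3.9, 6.1, 8.0, 9.9]

# Ordinary least squares fit.
n = len(x)
xb, yb = sum(x) / n, sum(y) / n
slope = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / \
        sum((xi - xb) ** 2 for xi in x)
intercept = yb - slope * xb

def predict(xv):
    """Interpolated (or, riskier, extrapolated) estimate from the fit."""
    return intercept + slope * xv

print(round(predict(2.5), 2))   # 5.03: equals the mean of y, since 2.5 is the mean of x
```

Inside the measured range this is a reasonable quantitative interpolation; calling `predict(50.0)` would be the risky extrapolation Clyde warns about, defensible only if there is an outside reason to believe the relationship stays linear.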

    • Clyde Spencer ==> Guessing may be important in some fields — fooling oneself that the guesses are “scientifically provided answers” is rife and unfortunate.

      That said, guessing for hypothesis formulation is an important part of science — but we mustn’t confuse our guesses, no matter how much math is involved, with actual measurements and reality in the physical world.

      Guessing that the Global Average Surface Temperature anomaly is XX.xx degrees C is pretentious and dangerously foolhardy. (not that you did it, of course, just an example.)

  31. @Kip/ Michael/A C Osborn

    I show another example:
    https://i2.wp.com/oi64.tinypic.com/vyxdld.jpg

    Rainfall versus time. Measured at a particular place on earth, it of course looks highly erratic from year to year. Not a high correlation…

    But the point in showing the straight trend line was to prove a relationship over time i.e. the 87 year Gleissberg cycle.

  32. This topic of trendlines needs a qualifier. Sometimes the data is following known physical laws and its continuation by extrapolation with trend lines is predictable. For example, oil well production follows a predictable exponential decline, as does groundwater contaminant extraction, natural attenuation and many other things. Drawing a trendline and extrapolating helps to obtain the “K” factor or half-life of the decline curve and forecast the economic limit of extraction.
    S curves are common in technology waves, for example a new smart phone or digital TV resolution. At first sales are slow, then accelerate, then taper off. The trouble comes when a trend line extrapolation is used to extrapolate a phenomenon that has no tangible reason to continue to follow the trendline. To put it simply, “statistics are descriptive, not explanatory”.
    Even the Hubbert curve or “peak oil” is useful as long as we keep in mind that it applies only to current technology. The old Hubbert Curve was for oil production from sandstone and limestone. Hydraulic fracturing in oil shales in horizontal wells was a major technology change, new source of oil reserves and curve disrupter. Nevertheless a new Hubbert curve should form for this relatively new technology and the old curve is still good if limited to sandstone and limestone reserves.
    On the other hand, extrapolation of high-dose toxicology studies to low dose remains controversial. The most difficult of all is known as the “single fiber theory” with asbestos fibers. According to the theory, if a million people were each exposed to a single asbestos fiber, there is someone out there so sensitive that they will develop mesothelioma. Or so the theory goes. This enables extrapolation down to zero exposure level. The reality is the body has defenses to keep particulates out of the lungs (sinuses, mucus, nasal hair, etc.), which must be overwhelmed to induce disease, so most likely there is a minimum threshold exposure value and extrapolation to zero is invalid. The fact we can’t identify a threshold value does not mean it does not exist.

    Extrapolation of high-dose exposures to low dose and trans-species studies (mice as surrogates for people) are done all the time. This enables us to make at least conservative estimates of toxicity. But it should be remembered that these are nothing but worst-case estimates. Occupational exposure studies are the most trustworthy toxicological studies, and even then should be limited at low dose to interpolation, not extrapolation. The most ridiculous thing we do is apply a risk value of 1:100,000 adverse health consequence to a rural area where no one is exposed. Or a maximum contaminant limit to an aquifer that no one drinks.
    Extrapolating a temperature trend that is known to be cyclical is risky business. Complex data usually has three components, all scale dependent, a trend (not necessarily linear), a cyclical component, and a random variation. Unfortunately, random variation (variance) is extremely high with weather compared to the others. Put simply, if the average high is 80 and the record is 110, and the average low is 50 and the record is 20, what difference will it make if the average low changes to 51? When making trend lines, different trends should be tried (linear, exponential, quadratic, step change, etc) and the highest correlation coefficient should be found. Correlation coefficients should always be stated with trend analysis.
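    The exponential-decline case described above is one of the few places where extrapolating a fitted line is defensible, because the functional form is known in advance. A sketch with hypothetical production numbers (the initial rate, decline constant, and economic limit are all invented):

```python
import math

# Made-up monthly production rates following an exponential decline.
months = list(range(12))
rate0, k = 500.0, 0.05                        # initial rate, true decline constant
rates = [rate0 * math.exp(-k * m) for m in months]

# OLS on (t, ln q) recovers the decline constant "K" from the data.
logs = [math.log(q) for q in rates]
n = len(months)
tb, lb = sum(months) / n, sum(logs) / n
slope = sum((t - tb) * (l - lb) for t, l in zip(months, logs)) / \
        sum((t - tb) ** 2 for t in months)
k_hat = -slope

# Forecast the time at which production hits a hypothetical economic limit.
econ_limit = 50.0
t_limit = math.log(rate0 / econ_limit) / k_hat
print(round(t_limit, 1))                      # 46.1 months, on these invented numbers
```

The extrapolation is legitimate here only because the physics (decline behavior) supplies the model; the same arithmetic applied to a series with no known underlying law is exactly the misuse the comment warns against.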

    • Jim ==> I am not saying there can be no extrapolation — that is an important part of many science-based fields.

      But those extrapolations are not simple “trend lines” — they are projections of future values based on known and well-understood underlying “systems” (how oil wells work, how contaminants move through soils, etc.), like the projection of the trajectory of a cannon ball. Those are not TRENDS.

  33. This “something” is an opinion — it is always an opinion — it is not part of the data.

    While fitting a trend, linear or otherwise, to data often involves some subjective choices, it definitely is NOT just an opinion. It is entirely an objective mathematical exercise performed upon the data. The onerous part is that few have the mathematical capability to recognize the appropriateness of their choices. Bad ones can indeed produce grossly misleading results. Contrary to William Briggs’ misguided admonition, however, good ones can reveal quantitative features that are not apparent simply by looking at the data. In either event, trend-fitting is not a proper time-series analysis tool. Almost invariably, there’s no recognition that linear (regressional) trends are very crude band-pass filters, whose desired function can be achieved far more objectively and powerfully through other analytic methods.
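    1sky1’s “crude band-pass filter” characterization can be made concrete: the OLS slope is a fixed weighted sum of the y-values, with weights (an antisymmetric ramp) that depend only on the time axis, exactly like a filter kernel applied to the data. A minimal demonstration:

```python
# The OLS slope is a linear functional of the data: slope = sum(w_i * y_i),
# with weights w_i = (x_i - mean(x)) / Sxx that depend only on the time axis.
n = 11
x = list(range(n))
xb = sum(x) / n
sxx = sum((xi - xb) ** 2 for xi in x)
weights = [(xi - xb) / sxx for xi in x]          # antisymmetric ramp "kernel"

def ols_slope(y):
    return sum(w * yi for w, yi in zip(weights, y))

# Check against the textbook regression formula on arbitrary data.
y = [3.0, 1.5, 4.1, 2.2, 5.0, 3.3, 6.1, 4.4, 7.2, 5.5, 8.0]
yb = sum(y) / n
textbook = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / sxx
print(abs(ols_slope(y) - textbook) < 1e-9)       # True
```

Viewed this way, a linear trend is just one particular (and rather blunt) filter applied to a time series, which is why signal-processing methods can do the same job with more control.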

    • 1sky1 ==> See https://www.xkcd.com/2048/.

      You’ve read Briggs — that is not to say there are no valid analysis tools that can be used to take a closer look at data sets.

      But what we have been referring to as “trend lines” is pretty specific (with expressed examples) and not to be confused with all of that.

      • What we have been referring to as “trend lines” is pretty specific …

        While your link shows various examples of curve-fitting, you deal only with linear ones. Nothing presented here truly specifies the exact nature or source of the pitfalls.

        • 1sky1 ==> Read the links given in the essay if you’d like more background material — some are to my previous work, some to that of others.

          • 1sky1 ==> This blog is for the general public. If you want more technical discussion, you need to look elsewhere. Remember, I am talking of simple straight (sometimes curved) trend lines drawn over times series graphs.

          • 1sky1 ==> Well, I agree with Peter

            1) “Trend lines are very bad signal processing technique, and belong in the same category as Mann’s attempt at signal processing resulting in bogus hockey sticks.”

            2) Signal processing engineers don’t ‘draw’ trend lines, we ‘draw’ sine waves. (We probably haven’t drawn them since Fourier’s time; we calculate them.)

            He seems to have understood what I was talking about, and signals his agreement.

        • Say what you will, but Peter specifies the nature of the pitfalls in a rigorous way that you never did.

  34. Kip,
    The claim that “The data is the data” is nonsense. Take for example the temperature graph
    from Dr. Spencer that you like and ask what was actually measured. The answer is the number
    of electrons flowing through a wire over a period of time. If anything that is the data. Then to
    get a temperature there are a large number of processing steps. Firstly the number of electrons
    is converted to a voltage using a trend line (i.e. Ohm’s law) then that is converted to a photon
    flux. That photon flux is then converted to a ratio of intensities at different wavelengths. Then
    using another calibration curve you get a “temperature”. You then need to correct for the speed,
    and height of the satellite, the angle of inclination of the sensor (so you know how much of the atmosphere
    you are looking at) the time of day etc. Then the UAH group averages that over a month and releases
    the final result.

    Compared to all of that processing complaining about someone else drawing a straight line on top of
    part of the graph seems a little silly. So unless you want Dr. Spencer to release graphs showing the
    number of electrons versus time (and even time is measured in electrons if you get right down to it) you
    have to accept that scientists process data all the time for valid reasons. And saying “the data is the data” is nonsense.

    • Izaak ==> There are a lot of reasons to question CliSci data of all sorts, and the more they have been derived from other sorts of data, the more suspect they are.

      That does not change the simple fact that once presented — it is what it is. Drawing lines on it will not change it and will not (usually) give us any more information than we already had.

      Trend lines are just a form of SMOOTHING data sets.

      • Trend lines are just a form of SMOOTHING data sets.

        Then explain how these trend lines “smooth” the data sets.

        The development of AVO crossplot analysis has been the subject of much discussion over the past decade and has provided interpreters with new tools for meeting exploration objectives. Papers by Ross (2000) and Simm et al. (2000) provide blueprints for performing AVO crossplot interpretation. These articles refer to the Castagna and Swan (1997) paper which laid the foundation for AVO crossplotting. The AVO classification scheme presented by Castagna and Swan, which was expanded from the work of Rutherford and Williams (1989), has become the industry standard. Castagna and Swan also investigated the behavior of constant Vp/Vs trends, concluding that for significant variations in Vp/Vs many different trends may be superimposed within AVO crossplot space, making it difficult to differentiate a single background trend. A misapplication of this concept has been to infer a direct correlation between these changing background Vp/Vs trends with rotating intercept/gradient crossplot slopes (what Gidlow and Smith (2003) call the fluid factor angle, and Foster et al. (1997) the fluid line) observed in seismic data. The central question of this paper is: when is this background trend (or fluid line) rotation a representation of real geology and when is it a processing-related phenomenon? To answer this question I review the theoretical expectations regarding rotating AVO crossplot trends, the role of seismic gather calibration (or lack thereof), and the value of various compensating methods. Furthermore, I investigate the appropriateness of using constant Vp/Vs lines in a crossplot template by examining both the mathematical and modeled AVO crossplot responses which incorporate an established compaction trend. I conclude that when exploring in a reasonably compacted environment (Vp/Vs ratios of 1.6-2.4) a relatively small range of fluid angles (background trend rotations) can be expected. 
Large variations of fluid angle observed in seismic data can be attributed to the difficulty in preconditioning gathers for AVO analysis.

        • David ==> You are conflating sophisticated data analysis methods, which even the authors have their doubts about, with the concept of simple trend lines discussed in this essay.


          • There’s nothing “sophisticated” about AVO trend lines. They are all linear regressions and can be used to differentiate oil from gas from brine on seismic data.

        • David ==> You are talking about something entirely different, and I think you know it.

          Advanced statistical techniques for squeezing guesses out of existing data based on well-understood physical systems (like cannon ball trajectories) are not “trend lines”, even if they use the word “trend” to describe them.

          I’m sure you can come up with an endless list of things that are not trend lines to support your position that ….whatever your position is….

          • All of the examples I cited were trend lines, Kip. Trend lines, without which the graphs would be useless.

            The purpose of cross-plotting two or more variables on a graph is to determine if a mathematical relationship exists. The trend line is the mathematical relationship.

            While trend lines are often misused and/or intended to deceive, this is simply ridiculous generalization: “Don’t draw trend lines on graphs of your data. If your data is valid, to the best of your knowledge, it does not need trend lines to “explain” it to others.”

      • Kip,
        Suppose I get an undergraduate student to plot voltage against current for a
        simple resistor. They will get a good approximation to a straight line. Surely in
        that case plotting a trend line will tell them what the value of the resistor is. And
        using that information the student will be able to predict what the value of the voltage
        will be for new values of the current. Do you really want to claim that in that case a trend line is useless or that it doesn’t add anything?

        Now of course not everything is as simple as a resistor. But trend lines can still be used to
        extract useful information from a graph. They can also be used to mislead or can be complete nonsense (see http://www.tylervigen.com/spurious-correlations for excellent examples). But if Dr. Spencer chooses to average the UAH data over time periods of one month, why shouldn’t I choose a different averaging period? What makes monthly smoothing correct while decadal smoothing is wrong?

        • Izaak: “Suppose I get an undergraduate student to plot voltage against current for a
          simple resistor. They will get a good approximation to a straight line. Surely in
          that case plotting a trend line will tell them what the value of the resistor is.”

          Why do you have to plot a line? R = V/I. All you need is one measurement, i.e. a POINT, to determine the value of the resistor.

          The trend line *is* useless and doesn’t add anything.

          “Now of course not everything is as simple as a resistor. But trend lines can still be used to
          extract useful information from a graph.”

          Really? What useful information can be extracted from a trend line of temperature, for instance? Can it tell you what is happening in the atmosphere? What if the data used to generate the trend line is an average? Does the trend line tell you what is happening in reality? Can the trend line be extended past the last data point to give a reliable prediction?

          • Tim,
            do you really believe that one point is sufficient for measuring the value of a
            resistor? All real measurements have errors and those need to be accounted for.
            Which can be done by taking different measurements and fitting a curve to them.

            Again with temperatures trend lines are useful if applied correctly but can be
            misleading if extended too far. Knowing the temperature in May and June might
            allow me to predict the temperature in July but I would get it wrong if I tried to
            extrapolate through to December.
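            Izaak’s point can be sketched numerically: with noisy readings, a least-squares fit through the origin pools all the measurements, while a single reading carries the full noise of one sample. The resistance, currents, and noise values below are invented for illustration:

```python
# Hypothetical experiment: true resistance 100 ohms, noisy voltmeter readings.
true_r = 100.0
currents = [0.01 * k for k in range(1, 11)]                 # amps
noise = [0.04, -0.03, 0.05, -0.02, 0.01, -0.05, 0.03, -0.01, 0.02, -0.04]
volts = [true_r * i + e for i, e in zip(currents, noise)]   # noisy V readings

single_point = volts[0] / currents[0]                       # one reading: R = V/I
fitted = sum(v * i for v, i in zip(volts, currents)) / \
         sum(i * i for i in currents)                       # least squares through the origin

print(abs(fitted - true_r) < abs(single_point - true_r))    # True
```

On these made-up numbers the fitted estimate lands within a tenth of an ohm while the single reading is off by several ohms, which is the averaging benefit Izaak describes (though, as Tim notes, no amount of fitting removes a systematic instrument error).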

          • Izaak,

            Yes, I believe one point is sufficient to determine the value of a resistor. If your measuring devices have a systematic error built in, then no quantity of measurements at different points will result in a more accurate measurement of the value of the resistor. If the error is in reading the device’s output (i.e. an analog scale), then you just read it multiple times using the same values for voltage and current.

            “Knowing the temperature in May and June *might* allow me to predict the temperature in July” (asterisks mine, tim)

            The word *might* is quite telling in your statement! If extending the trend line only *might* allow future predictions then of what use is extending the trend line? Extending the trend line and expecting it to come true just becomes an article of faith, not a scientific article of fact.

        • Izaak ==> Your line on the resistor graph is not a trend line — your resistor is not a time series. It is simply a visual representation of the known physical relationships involved in the formula for voltage and resistance.

          The overall topic and discussion of smoothing is not part of this essay. The only link is that drawing linear trend lines across a time series graph is Ultimate Smoothing….
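          The “Ultimate Smoothing” remark can be demonstrated: fitting a straight line compresses a whole series down to two numbers, a slope and an intercept, discarding every other feature. A sketch on an invented wiggly series:

```python
import math

# Hypothetical series: a small drift plus a large oscillation.
x = list(range(40))
y = [math.sin(0.8 * i) + 0.05 * i for i in x]

# OLS fit, then replace the data with its fitted line.
n = len(x)
xb, yb = sum(x) / n, sum(y) / n
slope = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / \
        sum((xi - xb) ** 2 for xi in x)
line = [yb + slope * (xi - xb) for xi in x]

# The line keeps the mean and drift but throws away the oscillation's variance.
var_data = sum((yi - yb) ** 2 for yi in y) / n
var_line = sum((li - yb) ** 2 for li in line) / n
print(var_line < var_data)   # True: only the drift survives the "smoothing"
```

Everything the oscillation was doing is gone from the fitted line, which is the sense in which a linear trend is the most aggressive smoother of all.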

  35. Dear Readers:
    Depending on a trend line cost Dr Richard Feynman a Nobel Prize.
    You can read about it in his biography.
    He was attempting to participate in the discussion about what happens in “weak” decay of nuclear particles. He extrapolated from published data and ignored the last two known points.
    Had he not extrapolated, he would have been first off the mark and gotten a second Nobel.
    DON’T DO IT!
    Graphed data is a valid approximation when used between known data points.
    Almost no physical phenomena are linear. Nature isn’t like that.
    The error bars outside of known measured data are all over the place.
    Best story of extrapolation: Mark Twain, “Life on the Mississippi”, describing the length of the Mississippi River. Bleed-through of oxbows had resulted in a shortening of 200 miles in 100 years. So Twain projected forward and back. He ended up with New Orleans a suburb of Chicago!

    • New Orleans and Cairo actually, Chicago not being on the Mississippi even in Twain’s day. His observation on science in this connection is well worth remembering:

      “There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.”

  36. Two books:

    How to Lie with Maps
    How to Lie with Charts

    Both have multiple editions

    And Tufte’s books, on selling your position with attractive charts and graphs.

  37. EPILOGUE:

    Well, that was fun. I even correctly predicted that statistic-ish folks would not like it and I was right. Stats-heads had all sorts of irrelevant examples of how some lines drawn by some advanced statistical shenanigans might produce some sort of useful something. I’m pretty sure most of them knew I wasn’t talking about their little preciousnesses.

    Most readers correctly read the essay as dealing with simplistic trend lines as used in the examples and thus hopefully gained something, even if just reinforcement of their trust in their own understanding.

    Never fear — there is no need to throw the baby out with the bathwater — there are times and uses for even simplistic linear trend lines — most of them rhetorical, used in persuasion — and all of them based on the opinion of the “drawer of the lines”.

    In truth, I have often imported graphs from journal papers and then used Photoshop to remove the trend lines so as not to influence my perception of the data — it is easy if they are colored lines.

    Thanks for Reading.

    # # # # #

    • To paraphrase Benjamin Disraeli…

      There are lies.
      There are damn lies.
      And there are people who don’t understand statistics.

      😉

  38. Lost or created information. Inference, certainly, but inference from a limited, circumstantial set of data, is especially prone to misinterpretation and interpretation.

    • n.n. ==> You got this “especially prone to misinterpretation and interpretation.” right!

  39. Kip ==> Good job. You pulled up one of the three types of alarmist “science” (as I classify them).

    This is a Type One: Take your data set concerning a bad thing and plot a trend line (it doesn’t matter whether it is linear or some other kind). Report your mathematically valid, but in reality meaningless trend as something to be alarmed about (exaggeration is sometimes necessary in your graphical presentation, as here, but not always). Summarized as “This bad thing is increasing, be afraid. Very, very afraid. (Send us money.)”

    Type Two is: Take your Type One trend line and correlate it with another thing that you think is bad. Whether it is the burning of fossil fuels, or consumption of sugary soft drinks, or eating asparagus – you can probably find a correlation between the acknowledged bad thing and what you want other people to agree with you is bad.

    Type Three, of course, is: “Alter the contents of one (or both) data sets as necessary to create the required correlation between the acknowledged bad thing and what I think is a bad thing.”

    The first two are mathematically defensible, and even when deceptively presented (as in your source study) can be detected by a thinking and reasonably well educated person. The third entirely fraudulent alarmism is where we have real trouble.

      • Meant to add to that comment, but it was rather long already…

        The disease in question (um, your hiding didn’t work all that well) – all of those incidence changes are completely due to medical advances.

        The younger cohorts show an increasing “trend” thanks to a lowering of the cost of a biochemical test for the disease markers. Insurers (whether private or the government run one in Canada) have hit the “magic point” where paying for wider deployment of the test is equal to or less than treatment for the small number of cases that were previously missed. (The test is only indicative, not conclusive – but it allows the insurer to target a much smaller group for more expensive testing).

        In the older cohorts, the drop is thanks to an outpatient surgical procedure that detects and removes the disease precursors and is also becoming more routine. (Again, having the precursors doesn’t absolutely mean that you will develop the disease – but removing them vastly increases the odds that you will not. If you are in one of those older age cohorts, talk to your doctor!)

        • WO ==> The original authors discuss some of these points, but are bamboozled by the trend lines.

  40. Additionally I’d point out that a straight line isn’t a “trend” at all; it’s a slope forced by the compilation of anecdotes, because the margin of error for each datum is not only different but varies with the sensitivity and response of the device.

    In atmospherics that means that a spurious electrical discharge suddenly becomes a 135°F “high” for the day.

    There is no ongoing margin-of-error study for ANY of the sensing devices used in meteorological data collection. That means their readings are no more accurate than the old 2°F-marked thermometers in boxes covered with snow. Their margin of error was +- 2.5F+(.1F per year of use), and many were used until they broke… making history colder due to diminishing heat response.

    • Prjindigo ==> Do you have a handy reference for that last bit “Their margin of error was +- 2.5F+(.1F per year of use) and many were used until they broke…”? If so, can you post a link or a cite? Thanks….

  41. Kip,
    Thank you for this much needed essay.
    Relatedly, another failure that needs to be stressed is the improper use, or lack of use, of statistics that demonstrate error bounds.
    For over a year now, I have been trying to extract from our BOM an estimate of the uncertainty of measurement for routine daily Tmax temperatures. I have even simplified my ask to frame it as “How far apart should temperatures from 2 stations be so that their difference is not because of statistical noise, but can be stated confidently as a real difference?” or words to that effect.
    The BOM gives me semantic lectures with few useful figures. I think that some of their authors, like many others in the climate domain, simply do not know enough of the basics of statistics, errors and uncertainty to give a useful response.
    The subjects of your essay are intertwined with error measurement, so there are two problems needing repair.
    Geoff S

    • Geoff Sherrington ==> Thanks, Geoff. Yes, the failure to report, and often to even consider, real measurement error is huge for human-reported temperatures — and needs to be considered scientifically for Automated Weather Stations.

      I have done a couple of pieces on measurement error — which the stats-men complain about (they seem to believe that stats will make measurement error disappear).

      The latest trick is anomalization of data sets, pretending that by reducing the data set to anomalies of means they eliminate (by an order of magnitude) the original measurement error.
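A quick error-propagation sketch of this point (with a made-up instrument error, not any station’s actual specification): subtracting a baseline mean adds the baseline’s own uncertainty, so an anomaly can never be less uncertain than the raw reading it came from.

```python
import math

# Hypothetical 1-sigma error for a single reading (illustrative only).
sigma_reading = 0.5  # degrees C

# Baseline climatology: the mean of n independent readings with the
# same error, so its standard error is sigma / sqrt(n).
n_baseline = 30
sigma_baseline = sigma_reading / math.sqrt(n_baseline)

# Anomaly = reading - baseline mean; independent errors add in
# quadrature, so the anomaly is noisier than the raw reading.
sigma_anomaly = math.sqrt(sigma_reading**2 + sigma_baseline**2)

print(round(sigma_anomaly, 4))  # 0.5083 -- larger than 0.5, not smaller
```

Averaging many anomalies shrinks the *random* part of the error, but it cannot touch any systematic bias in the instruments, which is the part usually glossed over.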

  42. Nice Kip

    I will expect less grief here when I remind people that the data is the data and the trend is in the model selected, not the data

  43. Trend lines work for dependent variables only. X makes Y. Then you get a trend, which is y = mx + b. Since time does not make climate, time-series trends have no meaning.

  44. Signal processing engineers don’t ‘draw’ trend lines, we ‘draw’ sine waves. (we probably haven’t drawn them since Fourier’s time, we calculate them).

    A trend line as drawn in the above diagrams is simply a sine wave whose period is far far longer than the data window we are looking at.

    IOW, a trend line is guessing about data we don’t have, both in the future and in the past.

    Formally, we know nothing about any sine wave whose period is more than 1/2 that of the length of the data we are looking at.

    Trend lines are very bad signal processing technique, and belong in the same category as Mann’s attempt at signal processing resulting in bogus hockey sticks.
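The point about long-period sine waves can be illustrated with a short sketch (arbitrary numbers, not any real climate series): fit a straight line to a 40-year window of a pure sine wave with a 200-year period and you get a confident-looking “trend” from a process that has no trend at all over a full cycle.

```python
import numpy as np

# Pure sine wave, 200-year period, no long-term trend at all.
years = np.arange(1980, 2020)             # a 40-year data window
signal = np.sin(2 * np.pi * (years - 1970) / 200.0)

# Ordinary least-squares line fitted to just that window:
slope, intercept = np.polyfit(years, signal, 1)

# The window sits on the rising flank of the sine wave, so the fitted
# slope is positive: a "trend" manufactured by the short data window.
print(slope > 0)  # True
```

Shift the window to the falling flank of the cycle and the same exercise produces an equally confident downward “trend”.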

  45. Trying to explain the pause, I used this to show why the pause was real and important.
    https://i.postimg.cc/WbLbpph9/image003.png
    Even in this simple model (a constant warming rate plus a sinusoid with a 60-year period and a third of a degree C of warming due to acceleration since 1950), the fit is poorest exactly where one third of human emissions occurred. That makes it less reasonable to claim that, since both human CO2 and temperature are going up with time, the warming must be due to human emissions.

    PS: There is a similar issue with the term “modelling”. It’s a quantitative argument, not divining.

  46. Kip, I’m sorry, I didn’t read all the comments, so maybe someone else said this.

    Regarding the cancer incidence graphs, did you read the methods? The authors used a statistical program, Joinpoint Regression, which is specifically designed to test for changes in trends. Unless you want to tell us how this tool is being wrongly used, or that it’s inadequate for the purpose, I’m not sure what your point is.

    There’s nothing wrong with trend lines per se, as long as they are used carefully with the correct statistic (regression vs. correlation, for example), and the assumptions of that statistic are met. Interpreting the line is the next step. Of course, if the wrong beginning and endpoints are used, or not enough points, or if the data need to be transformed, or whatever, it can weaken the statistic or the argument. But unless one simply draws a line where one feels it’s appropriate, it’s not an “opinion,” it represents a statistical relationship among the points, and additional numeric information about the quality (slope and its direction) and strength of the relationship between variables.

    You are attacking something that has commonly been used for at least 150 years to aid in the visual communication of a statistic. Why?

    • Kristi ==> If you don’t understand why I have objections to the practice, I must assume I failed to communicate it properly.

      The problem is that trend lines are being drawn where “one feels appropriate”.

      Having statistical packages draw one’s lines is worse, not better.

      The better question is “Are the trends found by the stats software Minimally Clinically (in this case, public health-wise) Important?”

      As I stated several times, I am not involved in the medical issue, only in the drawing and use of trend lines.

      Good to see comments from you here again.

      • Kip,

        You stated that trend lines were a matter of opinion. When you use statistics properly, it removes the “opinion” part. The statistics to use are often decided before the data are even collected, so where does the “opinion” come in? Opinion may influence the best statistical test or program to use, but that is not the same as deciding where the trend lines fall. “Joinpoint fits the selected trend data (e.g., cancer rates) into the simplest joinpoint model that the data allow. The resulting graph is like the figure below, where several different lines are connected together at the ‘joinpoints.’” The figure is here: https://surveillance.cancer.gov/joinpoint/Joinpoint_Help_4.5.0.1.pdf, which also explains the options one can select in the program, its mathematical basis, etc. The program selects where to draw the lines.
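The core idea behind such a program can be sketched in a few lines (a toy illustration only; the actual NCI Joinpoint software also runs permutation tests to decide how many joinpoints the data statistically justify):

```python
import numpy as np

def best_joinpoint(x, y, min_seg=3):
    """Brute-force search for the single break position that minimizes
    the combined least-squares error of two separate line fits."""
    best_k, best_sse = None, np.inf
    for k in range(min_seg, len(x) - min_seg):
        sse = 0.0
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            coef = np.polyfit(xs, ys, 1)          # fit each segment
            sse += float(np.sum((ys - np.polyval(coef, xs)) ** 2))
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k

# Synthetic "incidence rates": flat for ten periods, then declining.
x = np.arange(20, dtype=float)
y = np.where(x < 10, 5.0, 5.0 - 0.4 * (x - 10))
print(best_joinpoint(x, y))  # the break lands at (or next to) index 10
```

So the break position is chosen by an error-minimization criterion, not drawn by hand; the dispute in this thread is over whether the breaks found that way *mean* anything.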

        Whether the trends are “minimally clinically important” is about the interpretation of the trends, not about drawing the lines themselves.

        In your climate example, presumably the trend lines for the upper graph were selected based on the dataset used and the years of the so-called “pause,” for comparison. The lower graph then uses the same length of the pause trend line to look at previous blocks of the same length as the “pause.” One could argue that this is not the right length of time, or doesn’t cover the right start and stop point of the “pause,” or whatever, but the point is that the sequential trend lines are not of arbitrary length, and not merely opinion.

        “The interesting thing about the graph is the effort of drawing of “trend lines” on top of the data to convey to the reader something about the data that the author of the graphic representation wants to communicate. This “something” is an opinion — it is always an opinion — it is not part of the data.”

        The trend line is not part of the data, no, but there would be no trend line without the data. The line is meaningful, a visual and mathematical representation of the data, so how is that “opinion”? An opinion would be, “The trend is very steep.” Where is the opinion in, “Based on this data, between 1880 and 2017 the global surface temperature has risen an average of 0.13 F per decade”? Are all statistics simply opinions?

        “If your data needs to be run through a statistical software package to determine a “trend” — then I would suggest that you need to do more or different research on your topic or that your data is so noisy or random that trend maybe irrelevant.”

        What would you suggest as a replacement? Eye-balling it, and saying, “Well, it seems to go up”?

        While there are plenty of examples of abuse of trend lines, it’s true that I honestly don’t see your underlying motive for attacking their use in general. Just like a histogram or a pie chart, trend lines are a useful way of visually communicating mathematical relationships. You are right, you haven’t (to my mind) adequately explained why you think trend lines are “opinions.”

        I stopped commenting (and come here seldom in general) because I got so tired of seeing the same old deprecation of large groups of people, the same disparagement of most of the scientific community, the same old specious arguments… it got frustrating and boring when there are so few interesting discussions. And one gets tired of being personally attacked for one’s views. WUWT is a waste of my time.

        But you’ve usually been civil to me, and I’ve appreciated that. Take care, Kip!

        • Kristi ==> Always glad to see you here — I was quite serious that I was not taking up the medical issues of the CRC paper, only their dependence on (or insistence on) trends produced by a stats package.

          Not intended to critique their paper in general — just that one point. Yes, they started off looking for “trends” and my opinion is that it led them astray because their trends were “significant”… etc., etc.

          There is a lot of trend nonsense in many fields, CliSci is trend crazy.

          You need to read my whole linked series on the Button Collector and the piece I did on Andy Revkin’s NY Times blog to see the big picture.

          Trend lines are opinions because they depend so much on the start and end points (chosen to match the opinion of the line drawer) and on the type of trend calculated.

          If you missed it, see https://www.xkcd.com/2048/
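The dependence on start and end points is easy to demonstrate with a few made-up anomaly numbers: the same series yields a rising or a falling trend depending solely on the chosen start year.

```python
import numpy as np

# Made-up "anomaly" series: a warming run-up, then a flat-to-cooling
# stretch (illustrative numbers only, not real data).
years = np.arange(2000, 2015)
temps = np.array([0.00, 0.10, 0.20, 0.30, 0.40, 0.50, 0.60,
                  0.60, 0.55, 0.50, 0.55, 0.50, 0.45, 0.50, 0.45])

full_slope = np.polyfit(years, temps, 1)[0]          # start at 2000
late_slope = np.polyfit(years[7:], temps[7:], 1)[0]  # start at 2007

# Same data, opposite "trends", purely from the choice of start point.
print(full_slope > 0, late_slope < 0)  # True True
```

Neither slope is wrong arithmetically; the choice of which one to show is where the opinion enters.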

          • Kip,

            As I said, trends can be abused. All statistics can be improperly applied and interpreted. That’s why there are rules in science about the way statistics are used. I’ve seen on WUWT trend lines applied to data that look to me as if the assumptions weren’t met (e.g. normal distribution, homoscedasticity, etc.), but that doesn’t mean everyone does it. Sometimes it’s not even clear that people know the difference between a correlation and a regression or what inferences can be drawn from each. That’s the problem when anyone can stick some data into Excel and run a test on it without knowing anything about statistics. But it’s not safe to assume that’s the case across the board, for all scientists who use trend lines. Do you know, for example, that the author of the climate paper chose the data to use after she saw the trends different starting/ending dates would produce? After all, she could have chosen to start the main trend at 1910 or extended the “pause” to 2016 and would have gotten more of a slope.

          • Kristi: “Where is the opinion in, “Based on this data, between 1880 and 2017 the global surface temperature has risen an average of 0.13 F per decade”? Are all statistics simply opinions?”

          The trend line does *not* show that the global surface temperature has risen since it is based on an average value which loses the data needed to understand if the global surface temperature has risen at all. The average hides whether Tmax is going up, whether Tmin is going up, or if a combination of the two is happening. Tmax going up might be bad and Tmin going up might be good. The trend line of the average can’t tell you either.

          That’s one of the pitfalls of trend lines. If you don’t understand the data then the trend line can “fool” you into an interpretation that is totally at odds with the data.

          • Tim,

            It depends on what one wants to assess. If one is trying to find out if the globe is warming as a whole over time, one would use averages and anomalies. Tmax and Tmin don’t matter to the whole except through their impact on the average at each weather station. If Tmax is rising on average at the same rate Tmin is falling, the average will stay the same. This, too, may be meaningful, but looking at one or the other will not tell you whether the globe as a whole is getting warmer through the decades. If, on the other hand, one is trying to find out if the minimum temperature for February at Casper, WY is going up or down over the years, one could use the absolute minimum temp in Feb. for each year, or the average of the Feb. daily minimums for each year.

            Tmax going down and Tmin going up might be bad, too, in some areas, depending on what effect one is interested in. For example, minimum winter temperatures control how far north some destructive insects are able to survive. In some/most cities, extreme heat kills more people per degree rise than extreme cold kills per degree fall (because it is cold in general that tends to kill through effects on disease, while extreme heat more often causes acute bodily harm), in others the opposite is true. It is up to the researcher to choose what is most meaningful for the study.

            A trend in average temperature change over 150 years suggests the globe’s heat budget is changing even if the trend is down in some periods as long as the changes from year to year are not so great that the signal is lost in the noise. A trend line alone is not enough to assess this, and sometimes adding a trend line can be improper, which is why there are rules for using and reporting trend lines (which are simply graphical representations of statistics). I don’t know if the author of the climate research above followed the rules, and I’m not going to simply assume she did or didn’t (unless it was peer-reviewed and published in a reputable journal, and even then I might try to confirm it myself). Assuming she did something wrong is a mistake, too. There are far too many incidents of assuming researchers did poor work just because it didn’t support readers’ biases, or good work because it did.

            It’s not just understanding the data that’s important, it’s also understanding the proper use of statistics. Often readers have to trust that scientists know what they are doing. Unless there is documented evidence for widespread scientific misconduct or laxity within a field (which has not been demonstrated in climate science), I see little reason to mistrust most researchers – though even the best may make errors or get erroneous results by chance, which is why assessing a body of research is better than relying on any one paper. On the other hand, there is reason to be skeptical of how science is reported by the media and those without expertise in the field.

          • Kristi: “If one is trying to find out if the globe is warming as a whole over time, one would use averages and anomalies. Tmax and Tmin don’t matter to the whole except through their impact on the average at each weather station.”

            If the increase in Tmin keeps below the point where it causes ice and snow to melt over wide ranges of the globe then Tmin certainly matters. If Tmax is actually going down then this negates all the claims of the AGW alarmists that we are going to see a decrease in food production, especially in grains which is a large part of the global food supply. Grains are negatively impacted by Tmax going above 90degF to 95degF. If Tmax is going down or even staying stable it is a *big* deal.

            “Tmax going down and Tmin going up might be bad, too, in some areas, depending on what effect one is interested in. For example, minimum winter temperatures control how far north some destructive insects are able to survive.”

            Can you point to someplace on the globe where destructive insects have made the place inhabitable? With today’s ag capability the ability to control destructive insects is far greater than our ability to control CO2 production.

            “A trend in average temperature change over 150 years suggests the globe’s heat budget is changing even if the trend is down in some periods as long as the changes from year to year are not so great that the signal is lost in the noise.”

            If some regions are seeing cooling and some are seeing warming then is the *globe’s” heat budget changing? Or just some regions? How do you tell from a “global” average?

            “It’s not just understanding the data that’s important, it’s also understanding the proper use of statistics.”

            Applying statistics to an “average” is, in almost every situation, an improper use of statistics. Once you take the average then you have absolutely no idea of what is actually happening in reality. You can calculate all the statistics from the average that you want, the statistics only tell you what is happening to the average.

            It’s like taking 10 groups of 1000 steel girders and calculating an average length for each group. Plotting that average and developing a trend line from a regression analysis will tell you what? You will still have no idea what the longest and shortest lengths are so you can’t design a fish plate to connect them that will work in all situations. It’s the same with using an “average” temperature for the globe. You can plot and calculate all you want with that “average” global temperature, you still won’t know what is going on around the globe.

          • Tim,

            “If the increase in Tmin keeps below the point where it causes ice and snow to melt over wide ranges of the globe then Tmin certainly matters. ”

            “Budget” was the wrong word. How about “index”?

            When you are trying to find out if the global temperature is increasing over time, as in the first graph, the processes don’t matter. What you do is look at anomalies for every station, and average them over the course of a year, then average all those to find a global average (that’s oversimplifying, obviously, but I’m not going to get into a long explanation here). Plot the yearly global averages, run a regression, and you have a trend.

            “If Tmax is actually going down then this negates all the claims of the AGW alarmists that we are going to see a decrease in food production, especially in grains which is a large part of the global food supply. ”

            It’s far more complicated than that. For example, if Tmax goes down, you may have a shorter growing season in some areas if the ground takes longer to thaw and freezes early. You could also see a change in precipitation patterns if the air holds less water. The systems are variable and complex.

            “Can you point to someplace on the globe where destructive insects have made the place inhabitable? With today’s ag capability the ability to control destructive insects is far greater than our ability to control CO2 production.”

            I assume you mean uninhabitable, but that’s not the point. My point was that there’s potentially a very high economic cost when destructive insects expand their range and/or become more prolific. The bark boring beetles that killed thousands of acres of forest in the Rockies are a good example. Emerald ash borer populations are limited by cold in the north. These insects cause billions of dollars’ worth of damage every year. I’m not as familiar with food crop pests, but I imagine there exist similar environmental limits. Yes, to some extent it’s possible to control many pests, but control, too, comes at a cost, and for some, particularly subsistence or small-scale farmers in the developing world, these costs are simply too high. Again, it’s a complex subject.

            An extremely wide array of plants and animals are changing their ranges and behavior in response to climate change. That is well-established.

            “If some regions are seeing cooling and some are seeing warming then is the *globe’s” heat budget changing? Or just some regions? How do you tell from a “global” average?”

            The global average is important because that’s what we look at to see if the planet is warming. As more heat is trapped, the planet warms. (Of course, much of the extra heat is absorbed by the oceans, and we don’t have a very good understanding of where it goes – but we do know that they are generally warming.)

            “Applying statistics to an “average” is, in almost every situation, an improper use of statistics.”

            I disagree. An average is simply a number. Looking at a trend in a time series of averages is perfectly legitimate. Here, in Figure 2, is an example for an introductory course on statistics, published in Journal of Statistics Education http://jse.amstat.org/v21n1/witt.pdf. (A good example for Kip, too!) Note Table 1. It turns out that a quadratic equation for the regression is better than linear, as the rate of decline in Sept. Arctic sea ice extent is increasing.

            “You can plot and calculate all you want with that “average” global temperature, you still won’t know what is going on around the globe.”

            If you mean the regional variation, that’s perfectly true. That’s a whole nuther ball game!
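The linear-versus-quadratic comparison in that sea-ice example can be sketched as follows (with a made-up accelerating decline standing in for the actual NSIDC numbers):

```python
import numpy as np

# Illustrative extent series with an accelerating decline.
years = np.arange(1979, 2013)
t = (years - 1979).astype(float)
extent = 7.5 - 0.002 * t**2  # million km^2, curving downward

lin = np.polyfit(t, extent, 1)   # straight-line fit
quad = np.polyfit(t, extent, 2)  # quadratic fit

sse_lin = float(np.sum((extent - np.polyval(lin, t)) ** 2))
sse_quad = float(np.sum((extent - np.polyval(quad, t)) ** 2))

# The quadratic captures the curvature the straight line cannot.
print(sse_quad < sse_lin)  # True
```

A lower sum of squared errors alone doesn’t settle which model is *right*, of course; a higher-order polynomial always fits at least as well, which is why model-selection criteria (or an F-test on the extra term) are used in practice.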

          • Kristi:

            “The global average is important because that’s what we look at to see if the planet is warming.”

            But it does *not* tell you if the globe is warming. If only one region is warming, say central Africa, and the rest are static, you will still see an increase in the global average. So exactly what does the global average actually tell you? Again, taking an average *loses* valuable data. It would be far better to come up with solutions for the areas that are warming than to try to shoehorn every place into a one-size-fits-all solution.

            ” Looking at a trend in a time series of averages is perfectly legitimate. ”

            Taking an average of MINIMUM sea ice extent in Sept and doing a regression of a time series only tells you something about the minimum extent in Sept. What does it tell you about the *maximum* extent? Nothing.

            “If you mean the regional variation, that’s perfectly true. That’s a whole nuther ball game!”

            The globe is made up of regions. If you don’t know what is going on in the various regions then you really don’t know what is going on with the globe.

          • Tim,

            No one proposes that the whole planet is warming uniformly. Some regions may not be warming at all, and some may even be cooling due to changed weather patterns (due, for example, to increased Arctic ice melt and its effects on ocean currents – mind you, I don’t know if there is regional cooling happening). It IS the average that one has to look at to see if there is an effect of increased atmospheric CO2 on the TOTAL global energy exchange with outer space. I don’t know how that can be made clearer.

          • Kristi: ” It IS the average that one has to look at to see if there is an effect of increased atmospheric CO2 on the TOTAL global energy exchange with outer space. I don’t know how that can be made clearer.”

            If there is regional cooling and warming then the *average* global energy exchange with space is meaningless. And if it is weather that is causing the cooling trends in some places then why isn’t it weather that is causing heating in other places?

            If you don’t believe regional cooling is happening then google the term “global warming hole”.

            As far as the energy budget, as the Earth warms it radiates *more* IR (see the S-B equation). Yet we are being asked to believe that doesn’t happen with the Earth. We are to believe that as the Earth warms it radiates at a fixed rate that doesn’t change so that the Earth’s temperature can continually go up. There is something wrong with that belief.

          • Tim,

            When I say “the planet,” I include the atmosphere. Without it, you are right: the Earth would simply radiate heat back into space. Fortunately, we have an atmosphere, making the planet habitable. But adding CO2 changes the atmosphere, making it retain more heat. That’s the whole point. I thought you’d heard.

            I’m not going to argue about this any more. It’s going nowhere. The physics is sound, the observational evidence is there…the planet is warming.

          • Kristi:

            “But adding CO2 changes the atmosphere, making it retain more heat”

            How does an atmosphere retain heat? If a molecule in the atmosphere gains energy, i.e. heat, then it will radiate that extra energy, it will lose the heat in a collision with another molecule, or it will change position (usually rising – e.g. hot air). All of these will contribute to the heat being lost, e.g. hot air rising to where it is colder.

            “I’m not going to argue about this any more. It’s going nowhere. The physics is sound, the observational evidence is there…the planet is warming.”

            In other words, don’t question the religious dogma. The physics simply do not work. It requires believing that the Earth can warm without losing that extra heat to a colder body (space). Pardon me if I know enough physics to question the religious dogma.

  47. Kip! I agree that trend lines are not the data. But unfortunately the data is not the data either. To illustrate this:
    Divide the UAH data point stream into 12 different data point streams, one for each month. If you calculate the standard deviation of each stream you will find that the November stream has a much lower standard deviation than the rest and the February stream has the highest. November points are simply more trustworthy than February points. If you calculate a trend line that treats these two streams as comparable you are making an error. The warming trends in these monthly streams seem to fall into three categories: 0.0147-0.0149 K/a(nnum) for the Jan-, Feb-, Sep- and Oct-series; 0.0110-0.0114 K/a for the May-, June- and August-series; and 0.0121-0.0131 K/a for the Mar-, Apr-, Jul-, Nov- and Dec-series. Adding to this complexity are volcanoes, ENSO and other disturbances. But if we are willing to examine the data, they seem capable of telling us more than if we are not prepared to do so. There is simply no way any single graph can show all the details.
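That month-by-month split is easy to reproduce in code. Below is a sketch on synthetic data (a fixed trend plus month-dependent noise standing in for the real UAH series, which would have to be downloaded separately); the hypothetical numbers make February noisier by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 40-year monthly anomaly record: one common trend plus
# noise whose size depends on the calendar month (February noisier).
n_years = 40
months = np.tile(np.arange(12), n_years)
years = np.repeat(np.arange(n_years, dtype=float), 12)
noise_sd = 0.05 + 0.05 * (months == 1)
anoms = 0.013 * years + rng.normal(0.0, noise_sd)

# Twelve separate streams: each gets its own scatter and its own trend.
for m in range(12):
    sel = months == m
    slope, intercept = np.polyfit(years[sel], anoms[sel], 1)
    resid = anoms[sel] - (slope * years[sel] + intercept)
    print(m + 1, round(float(resid.std()), 3), round(float(slope), 4))
```

The per-month residual scatter differs while the per-month slopes all hover near the common trend, which is exactly why pooling streams of unequal reliability into one trend line quietly misweights the data.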

    • Johan ==> The data is not always “the” data — and it does not always represent the thing it is claimed to represent…..you are quite right.

  48. Just a final remark
    In climate science you often come across certain known functions,
    e.g. I found the speed of warming of Tmax at specific places on earth following a sinusoid, with wavelength 87 years. Similarly, I found rainfall patterns at certain places on earth following the pendulum of a clock: from top to bottom 43 years (2 successive Hale cycles) and from bottom to top another 43 years (for the next 2 Hale cycles), making 1 full Gleissberg cycle.
    In both the above samples, it follows that you can use a linear trend line for the original to show ‘no trend’ over the relevant wavelength period, i.e. 86.5 years.
    I hope this makes sense to some here, like A C Osborn?
    Obviously, if you don’t know the cycle time you will get rubbish from a trend line, it might even scare you…

  49. I have been closely examining tide charts for the past several weeks, and applying what Kip is saying in this article to those tide charts, shows clearly the wisdom of what Kip is saying.
    Trends are drawn on these charts which completely obscure what is actually shown by the data.

  50. In cartography and geography there is a practice called “vertical exaggeration,” wherein the vertical scale is exaggerated in relation to the horizontal scale, so that a 10-km cross-section with 100 m of relief doesn’t look like a straight line like it would if the two scales were equal.

    Instead, the horizontal scale might be 1 cm = 1 km, while the vertical scale might be 1 cm = 10 m. Since 1 cm then represents 1,000 m horizontally but only 10 m vertically, the vertical exaggeration would be 1000 m : 10 m, or 100 to 1.

    Does the graphing of time series have a similar concept? If the appropriate scales are used in these plots of anomalies, they can be made to look like a steep mountain of global warming or a smooth surface of pause.

    If there is no such name for the concept, one should be invented.

    • James ==> There are two problems in this category with CliSci graphs. One is an exaggeratedly compressed vertical scale; by that I mean the scale from top to bottom covers a minuscule magnitude, making tiny changes appear to be large changes. This is often done intentionally.

      The second problem is a software problem — if you have used any software package (MatLab, etc.) or an online graphing engine such as plot.ly or VisMe, you know they often auto-magically set the scale to be some little bit larger than the extent of the data…. This is a reasonable default, but it can badly distort data with small changes over time.
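The effect of that auto-scaling is pure arithmetic. The sketch below assumes a library that pads the axis by 5% of the data range on each side (matplotlib’s default margin), with made-up temperature values:

```python
# Made-up series endpoints: a "global temperature" spanning 2.5 F.
lo, hi = 56.0, 58.5
data_range = hi - lo  # 2.5 degrees F

# Auto-scaled axis: data extent plus a 5% margin on each side,
# so the visible axis spans only 2.75 degrees.
margin = 0.05 * data_range
auto_span = data_range + 2 * margin

# Fraction of the plot height the 2.5-degree change occupies:
auto_fraction = data_range / auto_span    # ~0.91: fills the chart
fixed_fraction = data_range / 100.0       # 0.025 on a 0-100 F axis

print(round(auto_fraction, 2), fixed_fraction)  # 0.91 0.025
```

The same 2.5-degree change fills about 91% of an auto-scaled chart but 2.5% of a fixed 0–100°F chart, which is the whole visual argument in a nutshell.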

  51. Kip, you may have generalized too much, but as respects “climate” graphs, “trends” are meaningless where:

    1. The CAUSE of the “trend” is unknown.

    2. There are MULTIPLE “causes” of the trend, which have not been separately identified and quantified.

    3. The quality of the DATA is NOT FIT for the purpose of identifying meaningful “trends.”

    4. The subject being measured is subject to CYCLIC variations that render the available data too short in terms of time to be meaningful, in terms of identifying “trends.”

    ALL of these issues apply to virtually every climate “graph” in existence, yet the so-called “climate scientists” like to pretend their “data” has TENTH of a degree precision, when (a) most of what they call “data” ISN’T EVEN DATA, and (b) the precision is orders of magnitude worse than they represent it to be.

    • AGW ==> I like to take strong positions and let others work through them … some of my “strong” is rhetorical….

  52. And yet trend lines over very short periods of time are the climate “sceptics” bread and butter.

    • Very few skeptics will put a trend line to a graph. They mostly see the graph as noise. Take 10 data points, one Y value for each year X. Give a value of 1 to each of the 5 successive years at the beginning, then give a value of 10 to each of the last 5 years, up to the present at the far right of the graph. Everyone would say the graph shows a tipping point at the end of the 5th year and thus definitely has an upward trend.

      Now consider a completely different data set, a reordering of the yearly data so that every other year the value 10 follows a 1, like this: 1, 10, 1, 10, 1, 10, 1, 10, 1, 10.

      Now there is no tipping point, but the standard deviation is the same in both. Skeptics would say the second data set is all noise, whereas climate scientists would calculate the trend line by least-squares regression. Had they done the same for the first set, both would come out with an upward trend line (steeper for the first, but still positive for the second). However, both trend lines are invalid and both graphs are noise.

      • I should have said only the last graph is noise, but even that can be argued because of some underlying cause of the 1 to 10 cycle of each paired successive year.
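For what it’s worth, ordinary least squares can be run on both ten-point arrangements directly. The two slopes differ in size, but both come out positive, which is the trap: the regression hands back an upward “trend” even for the pure alternation.

```python
import numpy as np

x = np.arange(1, 11, dtype=float)
stepped     = np.array([1, 1, 1, 1, 1, 10, 10, 10, 10, 10], dtype=float)
alternating = np.array([1, 10, 1, 10, 1, 10, 1, 10, 1, 10], dtype=float)

# Identical values and standard deviation, different ordering.
slope_step = np.polyfit(x, stepped, 1)[0]      # 15/11 ~ 1.364
slope_alt = np.polyfit(x, alternating, 1)[0]   # 3/11  ~ 0.273

# Least squares returns a positive "trend" for both orderings.
print(round(float(slope_step), 3), round(float(slope_alt), 3))  # 1.364 0.273
```

The regression machinery has no way to know that one arrangement is a step change and the other plain oscillation; that judgment has to come from understanding the data.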
