A new way of looking at 'The Pause'. Why Karl et al. got it wrong about 'The Pause'. (Part 1)

Much has been written about the Karl et al “pause buster” paper published this past summer. This essay suggests that Karl et al actually shot themselves in the foot with the paper.

Guest essay by Sheldon Walker

In this article we will:

1) look at an interesting new technique for analyzing global warming

2) use the new technique to analyze the time interval [January 1950 to December 1999]

3) use the results of 2) to show why Karl et al got it wrong, in their paper about “The Pause”

Most people are familiar with the use of linear regression in global warming.

Pick a start time, pick an end time, and calculate the slope of the regression line from the dates and temperature anomalies in the data series. What could possibly go wrong?

One of the common accusations made about global warming trends is that the start time and/or end time were cherry-picked to give a particular result.

Accusations are also made that the length of the trend was too short to give a significant result (e.g. trends less than 10 years, or even trends less than 30 years).

What if there was a technique that we could use to get around these accusations?

To overcome the problem of cherry-picking, we use all possible start and end times from the time interval being investigated.

To overcome the problem of short trends, we only look at trends which are at least 10 years in length.

For example, imagine that we were interested in the time interval from [January 1975 to December 1999]. This interval is 24 years and 11 months in length. We divide [January 1975 to December 1999] up into EVERY possible trend of at least 10 years.

When I say EVERY possible trend of at least 10 years, I mean EVERY possible trend of at least 10 years. For the time interval [January 1975 to December 1999] there are 16,920 possible trends, and this method uses them ALL.

Example trends from [January 1975 to December 1999]:


10 year              trends:  e.g. January   1975 to January   1985

10 year              trends:  e.g. February  1976 to February  1986

10 year and  1 month trends:  e.g. February  1980 to March     1990

10 year and  2 month trends:  e.g. January   1985 to March     1995

20 year              trends:  e.g. September 1979 to September 1999

20 year and  3 month trends:  e.g. September 1979 to December  1999

24 year              trends:  e.g. June      1975 to June      1999

24 year and  5 month trends:  e.g. May       1975 to October   1999

24 year and 11 month trends:  e.g. January   1975 to December  1999  (the entire interval)

plus the other 16,911 trends.

This might appear to be overwhelming, but with Excel and a modern computer, it can be calculated quite easily.
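For readers who prefer code to spreadsheets, the all-trends calculation can be sketched in a few lines of Python. This is a hypothetical `all_trends` helper (not the author's spreadsheet), assuming the data arrive as a NumPy array of monthly anomalies:

```python
import numpy as np

def all_trends(anoms, min_len_months=120):
    """Fit an OLS slope over every start/end pair of a monthly anomaly
    series that spans at least min_len_months months.
    Returns (length_in_months, warming_rate_degC_per_century) pairs."""
    n = len(anoms)
    t = np.arange(n) / 12.0                       # time in years
    out = []
    for start in range(n):
        for end in range(start + min_len_months, n):
            slope = np.polyfit(t[start:end + 1], anoms[start:end + 1], 1)[0]
            out.append((end - start, slope * 100.0))   # degC/yr -> degC/century
    return out
```

Plotting warming rate against trend length for every pair gives scatter graphs like those below. Note that the exact number of trends depends on the endpoint convention: with this one, a 300-month interval and a 120-month minimum give 16,290 pairs, so counts quoted elsewhere may differ slightly.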

There are several ways to present the results. The simplest way is to plot a “scatter” graph of the warming rate versus the trend length.

What does the graph of every possible combination of warming rate and trend length for [January 1975 to December 1999] look like? Have a look at Graph 1.

Graph 1

This graph holds a lot of valuable information, but it needs a little interpretation.

For example, how does the warming rate change with the trend length?

From the graph:

The warming rate for 10 year trends varies from -0.20 to +2.80 degC/century

The warming rate for 15 year trends varies from +0.65 to +2.20 degC/century

The warming rate for 20 year trends varies from +1.02 to +1.61 degC/century

The warming rate for 24 year and 11 month trends doesn’t vary at all, because there is only one, which is for the entire period, and it equals +1.71 degC/century

These results probably agree quite well with most people’s expectations. One lesson is: be wary of 10 year trends. You can get just about any warming rate that you want from a 10 year trend. Note that in certain circumstances a 10 year trend can be meaningful, but in general, 10 year trends are all over the place.

In general, warming rates become more stable with increasing trend length. But not always. Look at the warming rates for trend length = 22 years. There is a very small range of warming rates varying from +1.43 to +1.52 degC/century. But as the trend length increases to 23 years, the range of warming rates widens considerably. Why?
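The narrowing and widening described above can be checked directly by computing, for one fixed trend length, the spread of slopes as the window slides across the interval. A sketch with a hypothetical `slope_range` helper, assuming a NumPy array of monthly anomalies:

```python
import numpy as np

def slope_range(anoms, window_months):
    """Min and max OLS slope (degC/century) over all windows of one
    fixed length slid across a monthly anomaly series."""
    t = np.arange(len(anoms)) / 12.0              # time in years
    slopes = [
        np.polyfit(t[s:s + window_months + 1],
                   anoms[s:s + window_months + 1], 1)[0] * 100.0
        for s in range(len(anoms) - window_months)
    ]
    return min(slopes), max(slopes)
```

Calling this for `window_months = 264` (22 years) versus `276` (23 years) on the same series reproduces the kind of range jump discussed above.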

Also, after having a fairly stable warming rate of about +1.48 degC/century at trend length 22 years, the interval ends up with a warming rate of +1.71 degC/century for the entire interval. What made the warming rate suddenly increase by over 15%, as the trend length increased by just 3 years?

I am going to guess the answer to these 2 questions, using the “scatter” graph, and a graph of the temperature anomalies over the interval. If you disagree with my quick guess then let me know what you think the answer is. At the start of the interval there is a La Nina type event from about 1975 to 1977. At the other end of the interval there is the large 1998 El Nino, from about 1997 to 1999. Once the trend length gets long enough to be influenced by both of these at the same time, the slope of the regression line is increased by the El Nino at one end, and also increased by the La Nina at the other end (a low start and a high finish both steepen the fitted line). So as the trend length exceeds 22 years, there is a double boost to the warming rate, which the “scatter” graph shows quite nicely.

Looking at the “scatter” graph for a single time interval is only one possible use for this technique. Comparing the “scatter” graphs from different time intervals is another exciting possibility. It is this method that I will use to prove that Karl et al got it wrong in their paper about “The Pause” (“Possible artifacts of data biases in the recent global surface warming hiatus”).

To start, have a look at Graph 2. This is similar to Graph 1, but shows every possible combination of warming rate and trend length for a different time interval, this time [January 1950 to December 1974]. This graph looks a bit like the one for [January 1975 to December 1999], but it is also a bit different.

Graph 2

To make it easier to compare these scatter graphs, I will put them onto the same graph. This means that one of the graphs hides some of the other graph, where they overlap. If necessary, this can be improved by plotting only the perimeters of each graph, but I am more interested in where the graphs don’t overlap at the moment, so we will ignore the overlap for now.

See Graph 3 – All combinations of warming rate and trend length that exist in the periods [1975 to 1999] and [1950 to 1974], for trends of at least 10 years.

Graph 3

Now it is easier to appreciate the differences between the 2 graphs. They are sort of similar in shape, but the green curve is translated down from the orange curve. Why is this? Looking at the warming rate for the entire interval for each graph gives the answer.

The orange curve has a 24 year and 11 month trend of +1.71 degC/century. A rate of global warming which is NOT low.

The green curve has a 24 year and 11 month trend of +0.28 degC/century. There is not much global warming in this interval.

Note how there is no overlap between the 2 graphs for trend lengths greater than about 15 years. This reinforces the idea that these 2 time intervals have very different warming rate profiles.

Now, the BIG question. If you add together these 2 periods, [1950 to 1974] and [1975 to 1999], and calculate the warming rate for the combined interval [1950 to 1999], what would the warming rate be? I have done this, and a linear regression over the combined interval has a warming rate of +1.12 degC/century. OK, but what does this value of +1.12 degC/century actually represent?

It is NOT the warming rate for normal anthropogenic global warming.

It is NOT the warming rate for when there is NO anthropogenic global warming.

It is an artificial average rate of warming, for an interval when anthropogenic global warming was present for about 1/2 the time, and absent for about 1/2 the time.
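The “half strength” point can be illustrated with an idealized, hypothetical series: flat for the first half of the interval, then warming at a steady rate for the second half. A single regression over the whole thing recovers almost exactly half the true warming rate:

```python
import numpy as np

t = np.arange(600) / 12.0            # 50 years of monthly time steps
rate = 1.5 / 100.0                   # "true" warming rate: +1.5 degC/century
# No warming for 25 years, then steady warming for 25 years (continuous join).
y = np.where(t < 25.0, 0.0, (t - 25.0) * rate)

slope_per_century = np.polyfit(t, y, 1)[0] * 100.0
print(round(slope_per_century, 2))   # prints 0.75 -- half the true rate
```

The real intervals are messier (the [1950 to 1974] trend is +0.28 degC/century, not zero), but the dilution effect on the combined [1950 to 1999] trend is the same.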

Unfortunately, Karl et al used this value as their “normal” anthropogenic warming rate, and based on this value, they concluded that the warming rate for [2000 to 2014] did NOT support the notion of a global warming “hiatus”.

Recapping quickly on the Karl et al paper:

Karl et al adjusted the NOAA data to account for the 0.12 degC average difference between buoy and ship SSTs. This “correction” had an impact on temperature trends, with the largest impact being on trends from 2000 to 2014 (which is where “The Pause” was meant to be).

So Karl et al calculated the new warming rates for [1950 to 1999] and [2000 to 2014]. They got:

Warming rate [1950 to 1999] = +1.13 degC/century

Warming rate [2000 to 2014] = +1.16 degC/century

Karl et al concluded that since the warming rate from [2000 to 2014] was virtually indistinguishable from the warming rate from [1950 to 1999], it does NOT support the notion of a global warming “hiatus”.

I am NOT questioning the adjustments that Karl et al made to Sea Surface Temperatures (SSTs). I am not qualified to dispute these adjustments, so I will use the adjusted NOAA data as it stands. Special note – I am using the NOAA data. If I find a “Pause” in the NOAA data, then they cannot accuse me of using the wrong data.

I am also NOT disputing the calculation results from Karl et al. I get very similar results to theirs.

My issue is with the use of the Warming rate for [1950 to 1999]. Karl et al said this:

“Our new analysis now shows the trend over the period 1950-1999, a time widely agreed as having significant anthropogenic global warming (1), is 0.113°C dec−1, which is virtually indistinguishable with the trend over the period 2000-2014 (0.116°C dec−1).”

Now [1975 to 1999] is an interval having significant anthropogenic global warming.

But [1950 to 1974] is an interval having very little anthropogenic global warming.

By joining these 2 intervals together to form [1950 to 1999], Karl et al have created an interval that basically has half strength anthropogenic global warming (half with warming, and half without warming). But Karl et al used this value as their “normal” anthropogenic warming rate, when they compared it to [2000 to 2014].

If the warming rate for [2000 to 2014] matches the warming rate for [1950 to 1999] (which it does), then that means that [2000 to 2014] also has half strength anthropogenic global warming.

There are 2 simple ways to explain how [2000 to 2014] could have half strength anthropogenic global warming.

1) The period [2000 to 2014] could consist of 2 parts, one part which has anthropogenic global warming, and one part which does NOT have anthropogenic global warming (like [1950 to 1999]). But I do not think that this is the case.

2) The more reasonable explanation is that the period [2000 to 2014] has a lower warming rate than “normal” anthropogenic global warming. The warming rate would be about 50% of the “normal” warming rate. This could be called a “Slowdown”, a “Hiatus”, or a “Pause”. Whichever name you prefer, the data shows that it exists.

So Karl et al, while trying to convince everybody that there is NO Pause, have actually provided strong evidence that “The Pause” does exist (once their error concerning [1950 to 1999] is corrected).


In part 2 of this article, I will analyse [2000 to 2015] using the new technique described in this article.



Nice number-crunching, but it still uses dodgy data as an input. I do wonder what the same sort of analysis would look like with raw (uncorrected) temperatures.


GIGO (garbage in, garbage out) is a concept common to computer science and mathematics: the quality of output is determined by the quality of the input

Walt D.

GIGOGIB (Garbage in, garbage out, and garbage in between) – the models are also no good.

Gary Kerkin

No, no, no! GIGO was transformed a long time ago: Garbage In, Gospel Out. (It appeared on a lapel badge at a “Computer Conference” in Sydney, Australia, around 1972. Took out the prize for the best badge. The runner up was “On a clear disk you can seek forever”!)

Eugene WR Gallun

Gary Kerkin
GIGO — GARBAGE IN GOSPEL OUT — never heard it before. Thankyou.
Eugene WR Gallun

Walt D.

Once you start changing temperatures, your results are only a property of your model and not the real world. The temperatures close to the Argo buoys did not suddenly jump as a result of Karl et al. changing the data.


Science is not about being controversial, or hypotheses, or computer science, but about facts…. Show me the FACTS.


Here are a few Facts: these planes and the radar station are now stuck in solid ice 400 feet deep on Greenland, 10 miles from the coast, and haven’t moved in 70 years except for gaining more ice cover. I rest my case http://www.nytimes.com/1988/08/04/us/world-war-ii-planes-found-in-greenland-in-ice-260-feet-deep.html one other example http://lswilson.dewlineadventures.com/dye2pics.htm

A C Osborn

I would also like to see what happens with Raw data, including the 1920-1940 period.


How raw would you like it sir, rare, medium or bloody?
Without Karl et al’s ridiculous use of temporally biased air temps to “correct” SST, certainly. If you want everything raw there will be errors. It may be interesting to see how much it “matters”.
The main problem here is the whole concept of adding land and sea data to get an ‘average temperature’. That will bias to more warming because it over-weights the land data producing a spurious additional warming.
Graph 1 is interesting since it looks like there are two datasets present, with out of sync changes. Actually close to anti-phase. What is this from? Is this NH/SH split or is it land vs sea?
I would guess it is N vs S. perhaps Sheldon could re-run his analysis with just NH and just SH data to see whether this separates the apparent patterns.
I also note in both graphs a ‘nip off’ point at around 12 years and again about 9 1/2 years later.
Each pattern has three smaller cycles within it. Looks like the circa 9y periodicity found in ACE and SST.

Evan Jones

I would also like to see what happens with Raw data, including the 1920-1940 period.
Adjustments. Are. Necessary. They are.
And that makes it all the more important to get the adjustments right.

Chip Javert

Tom, Russell, Walt, et al
Walker didn’t claim he was going to clean up bad data – he simply claimed to have a new and rather straight-forward way of using Karl et al’s data (whatever the quality) and demonstrating they may have reached the wrong conclusion.
I think he did a pretty good job of doing just that (thank you, Walker).
It’s hard to understand the hissy fit about the quality of the data when this had absolutely nothing (zippo, nada, zilch) to do with the quality of the data.

What I was thinking is that some part of the results of the data analysis is a statistical artifact of the “corrections” done by Karl and friends, particularly the section showing 1975-1999 warming.

@ Chris Javert…my thought also!


A similar idea was done here on temp acceleration (plotted with the longest on the left).


The description for that graph is similar ‘all possible trends’ fitting but it was done on dT/dt so it shows temp acceleration. Note how starting in the post war years gives the largest acceleration since it is a cooling followed by warming. Going back further averages out the early 20th c. warming, the post war cooling and recent warming. Net result is of lesser magnitude.

John Brisbin

When you first described your method, I expected a visualization of the results with the date as the x axis. This approach exposes entirely different information. Thanks for your efforts.

Reblogged this on Climate Collections and commented:
Interesting treatment of trend lengths.


Sheldon Walker:
Thankyou for your fine analysis and clear graphical display of its results.
Your report says,

2) The more reasonable explanation is that the period [2000 to 2014] has a lower warming rate than “normal” anthropogenic global warming. The warming rate would be about 50% of the “normal” warming rate. This could be called a “Slowdown”, a “Hiatus”, or a “Pause”. Whichever name you prefer, the data shows that it exists.

Yes, and that is what the UN Intergovernmental Panel on Climate Change (IPCC) said prior to the Karl et al. paper.
So, your conclusion says the IPCC was right and Karl et al. has not made a change to that.
Box 9.2 on page 769 of Chapter 9 of IPCC the AR5 Working Group 1 (i.e. the most recent IPCC so-called science report) is here and says

Box 9.2 | Climate Models and the Hiatus in Global Mean Surface Warming of the Past 15 Years
Figure 9.8 demonstrates that 15-year-long hiatus periods are common in both the observed and CMIP5 historical GMST time series (see also Section 2.4.3, Figure 2.20; Easterling and Wehner, 2009; Liebmann et al., 2010). However, an analysis of the full suite of CMIP5 historical simulations (augmented for the period 2006–2012 by RCP4.5 simulations, Section 9.3.2) reveals that 111 out of 114 realizations show a GMST trend over 1998–2012 that is higher than the entire HadCRUT4 trend ensemble (Box 9.2 Figure 1a; CMIP5 ensemble mean trend is 0.21ºC per decade). This difference between simulated and observed trends could be caused by some combination of (a) internal climate variability, (b) missing or incorrect radiative forcing and (c) model response error. These potential sources of the difference, which are not mutually exclusive, are assessed below, as is the cause of the observed GMST trend hiatus.

GMST trend is global mean surface temperature trend.
A “hiatus” is a stop.
And this was from the IPCC, the body that is tasked to provide information supportive of the AGW hypothesis; it was published three years ago, and the Hiatus is now over 18 years long.

I enjoyed that. The graphs are attractive also.
The main problem with Karl et al is that they adjusted Argo temps to more closely match the temps from ocean-going vessels, instead of adjusting the less-reliable (steamship) records to match the more reliable (Argo) records.

Just how in the name of all that is sane does anyone get away with doing that? If someone were to suggest adjusting Planck cosmic microwave background data to more closely match WMAP data they would be retired and put on strong medication. How is this blatant data fudging being allowed to continue?

Maybe it won’t. Smith’s Congressional oversight committee has subpoenaed all the relevant emails, based in part on whistleblowers (yes, plural) from within NOAA. NOAA committed contempt of Congress by not complying. This suggests the whistleblowers are correct that something dodgy went on.
You have to ‘adjust’ ARGO and other modern surface float data up to erase the surface pause. So the data got Karlized.


It is allowed because it gives the politicians, the media and the grant awarders the arguments they need to keep the fear flowing and the taxes coming in and the subsidies flowing.

Chip Javert

In 3 words: lack of ethics

Your ‘half and half’ demonstration and logic rightly refute Karl’s conclusion.
Please consider a third post beyond part 2, performing your analysis on the NCEI interval 1925-1950 (equivalent in duration to figures 1-3). Lindzen pointed out (as did Curry in the Cruz hearing) that the temperature increase in that interval is essentially indistinguishable from 1975-2000. Your method will therefore produce a ‘blue’ shape that largely overlaps figure 1. Overlay this ‘blue’ shape onto your figure 3, and it should obscure most of the yellow 1975-2000 shape.
What you will have done is provide a visual proof other than Lindzen’s (which side is the earlier rise when the anomalies are offset to ‘0 start’?) that the IPCC AGW attribution to 1975-2000 is wrong. There was by far not enough increase in CO2 concentration in the 1925-1950 interval to mirror 1975-2000. It had to be largely natural variation. Which means 1975-2000 could be also, and NOT anthropogenic as in the IPCC attribution. Just as 2000-2015 may be considered a partial reprise of 1950-1975.

The theory put forward by the United Nations climate panel doesn’t leave much room for natural variation. Change in solar irradiance (the small one at the bottom) is the only natural factor they count in.
I can’t see how the theory is not falsified by such different trends over two 25 year periods as shown in Graph 3.


Based on this careful refutation of their analysis, they’ll withdraw the paper now, right?

Mark from the Midwest

Nice method, which I will shamelessly rip-off and apply to a different substantive domain…

What Trofim Karl and the rest of them don’t get, but the general public does, is the fact that if they have to spend their lives adjusting (i.e. faking) the data, it shows that there is no dangerous AGW. If there was, they would be able to point at it without chicanery. After all, we have had a half-doubling+ of the purported effects of an atmospheric CO2 increase.

Nice point. Trofim Lysenkoism and Tom Karlization have a lot in common. Politicized dishonesty supported by supreme leaders Stalin and Obummer.

Indeed, and this will be Obama’s real legacy – a combination of clown science and scientific f****, sponsored by the government (with a small”g”).


This might appear to be overwhelming, but with Excel and a modern computer, it can be calculated quite easily.

Perhaps Sheldon could share his spreadsheet so that others can play. He provided a good description of his method but it would save reinventing the wheel if he posted a spreadsheet to do it.

Chris 4692

Computationally it’s rather trivial, though it requires one column for each series length. The more interesting part is how he got all those columns to plot the same color.

Bill Illis

Great analysis. Keep it coming.


The narrowing of the trend range at 22 years for both periods is intriguing (35 trends by my calculation). Very curious?


Yes, I commented on that above. There is also a narrowing about 9.3 years earlier. Probably lunar in origin.
What I find most interesting in graph 1 is the appearance that we are looking at two different datasets, with out of phase peaks. I suspect this is NH vs SH. Perhaps Sheldon could try separating the data for each hemisphere.


Is the 22 yr neck (low variance) in both graphs a reflection of the Hale cycle?


I don’t see any reason for that being the case. Most of the necking is due to the small number of points. The datasets are 25y long so the number of possible 22y trends in monthly data is just 36.
If you select one length, like 22y, what you have is a sliding window on the data, i.e. it is effectively a running mean of dT/dt. So all the neck shows is that over a limited 3y range the 22y running mean is flat.
I don’t think that is particularly interesting or informative. Especially since running means distort the data, and it may not even be flat if looked at with better filtering.


Could well be an indication of the Hale cycle.
Is this method perhaps an indirect way of doing Fourier analysis ?


Nice visualization, but I can’t help feeling it demonstrates the lack of a pause. You are talking about a 15 year period (2000-2014) where the rate of warming was +1.16 degC/century. That falls very well within the orange part of your graph, indicating it was not significantly different to other periods between 1975 and 1999.

Ah, but AR4 and AR5 both said it would be about twice this rate. Karl manufacturing a pause buster also busted IPCC ‘projections’.

Willis Eschenbach

I got this far and came to a halt:
“It is an artificial average rate of warming, for an interval when anthropogenic global warming was present for about 1/2 the time, and absent for about 1/2 the time.”
I don’t buy that claim, and it has no support in the paper. How do we know that anthro warming is there for “about half the time”??
Since the rest of the paper depends on that claim, I fear I can’t support this.
However, I give the author full marks for a very interesting and revealing graphic technique.

Willis, I agree with your remark, and my comment upthread suggests a way to use this technique to further illustrate your point that 1975-2000 might be natural rather than anthro.
But not in the context of this post, since Karl clearly assumes it is anthro. The half and half argument shows Karl’s reasoning about equivalence based on Karl’s anthro assumption 1975-2000 is wrong.

Where the paper says “It is an artificial average rate of warming, for an interval when anthropogenic global warming was present for about 1/2 the time, and absent for about 1/2 the time.”, it perhaps should be reworded to reference the anthropogenic CO2 increase, which is relatively uncontroversial, instead of accepting the claimed consequence of that increase.


Since the rest of the paper depends on that claim, I fear I can’t support this.

What “paper”? This is an, as yet, unreviewed blog post from someone who declares himself not competent to comment on Karl et al’s rigging of the data.
Pretty graphics but this is not a paper in any sense of the term.

Willis Eschenbach

Mike February 21, 2016 at 10:14 pm

Since the rest of the paper depends on that claim, I fear I can’t support this.

What “paper”? This is an, as yet, unreviewed blog post from someone who declares himself not competent to comment on Karl et al’s rigging of the data.
Pretty graphics but this is not a paper in any sense of the term.

Dear heavens, Mike, do you have a new definition for “paper” now? The dictionary says:

2: an essay or thesis, especially one read at an academic lecture or seminar or published in an academic journal.

Note that it does NOT say “exclusively one … published in an academic journal”. The definition includes any essay or thesis. Since this is an “essay or thesis”, it is indeed a paper.
Don’t like it?
Take it up with Merriam-Webster, not me.
Finally, so what? So what if I called it a paper? This is a total red herring, it has nothing to do with the substance of my comment. If you dislike the word “paper”, mentally substitute your preferred noun and keep going. How tough can that be?
We now return you to your regularly scheduled programming.

Good work. Good comments.
I see something expected. Natural cooling followed the high temperatures 1926 through 1945 until about 1975 (based on NWS observations). Natural warming occurred from the late 70s through 1998. Pretty darned close to the prediction one would make based on “oscillation” information showing (loosely guys) a 64 year cycle. If we now get a slight cooling trend I will be nearly convinced the Oceanic effect controls.

JH, that same quasi oscillation, only slightly phase shifted, is qualitatively evident in Arctic sea ice. Larsen made the first single season Northwest Passage transit in summer 1944, near the peak of the last warming phase. Akasofu’s 2010 paper also posited this oscillation around a rising natural trend from the LIA. His paper predicted that the CMIP5 climate models would run hot, since the parameterization tuning for hindcasting was 1975-2005, basically the rising half phase. He showed that the CMIP3 models were too, for essentially the same reason. He had, with the benefit of more hindsight, a very prescient figure on this, reproduced in essay Unsettling Science.

Tom Henman

Ah nice, obscure cherry picking by cherry picking. Why the 2 separate 25 yr periods? Why not 1950-2000 in total? Why assign anthropogenic warming as a cause, claiming it was there 1/2 the time and not there 1/2 the time? Where is the proof it was or was not there?


@ Tom Henman: ask Karl et al. This article is using Karl’s data and Karl’s assumptions to show that Karl et al drew the wrong conclusions from their own stuff. It does not comment on the inherent value of that stuff.


Question: Why is there a similarity in the shapes of the two graphs, although they pertain to two entirely different data sets from two different time periods?
Is it due to some inherent mathematical property for all data subjected to this analysis?
Why is there a neck around the 22 year period? This obviously shows that the La Nina / El Nino effects are not an explanation, unless they occurred at the same times in those different time periods, which seems highly improbable.

Hocus Locus

Why is there a neck around the 22 year period?

No answers here, but what a natural question. Why is it there and why so ‘thin’? I am reading the bottleneck as a relative uptick in certainty as to whether a Fourier spike might exist within the data for the 21.8~22.3y interval. Or since we are in phase-space, could it be a visualization of the guaranteed-to-exist single hair or ‘whorl’ standing straight up in the Hairy Ball Theorem? Holy Great Red Spot of Jupiter, Batman!
I’d be curious to see how the first half ‘morphs’ into the other, by bumping it forward by month in an animated sliding window. Sorta like Willis’ periodogram of 1stHalf/2ndHalf/Full dataset come to life.

Joe Crawford

HC said: “I’d be curious to see how the first half ‘morphs’ into the other, by bumping it forward by month in an animated sliding window.”
I Agree. Throw in a method to start/stop the animation and it might help to sort out the multiple signals/shapes that appear to be contained in the data

Don K

“Why is there a neck around the 22 year period?” Good question, but you may end up wishing you hadn’t asked it. 22 years is two solar cycles. I think that’s probably coincidence, but a lot of people won’t.

Hocus Locus

As a teenager with a moderate interest in astronomy I first read about the sunspot cycle and how its true mechanics and true cause were unknown. It was strange to think of something churning inside the sun like a washing machine that ‘repeats’ on such a long period. It just didn’t make sense. Though the article did not suggest it, within a minute I had an idea and grabbed an Encyclopedia, you know one of those old things with lots of books, and started looking at planet-orbit-years. Stopped on Jupiter at ~11.8 Earth years. I thought aloud to myself, “And they call this a mystery…?” It seemed to me that if this wasn’t the answer somehow it must be the most evil coincidence in the Universe. I have since learned that it’s more complicated than that and nothing makes sense and people change and you can’t fool Mother Nature and Dawn gets grease out of your way.


Why is there a neck around the 22 year period?

I created databases using a version of a random walk and analyzed them per the article. The wavy structure seen in the graphs above was common as were ‘necks’ near the end.
I suspect that what you’re seeing is an artifact of the analysis rather than anything to do with the data (because random data often gives a similar result).
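A minimal sketch of the experiment this comment describes, assuming a synthetic random walk stands in for real anomalies (the seed and step size are arbitrary choices, not from the original analysis):

```python
import numpy as np

rng = np.random.default_rng(42)
anoms = np.cumsum(rng.normal(0.0, 0.1, 300))   # 25 "years" of monthly random walk
t = np.arange(300) / 12.0

# Spread (max - min) of OLS slopes for every trend length of >= 10 years.
spread = {}
for length in range(120, 300):
    slopes = [np.polyfit(t[s:s + length + 1], anoms[s:s + length + 1], 1)[0]
              for s in range(300 - length)]
    spread[length] = max(slopes) - min(slopes)

# The full-interval length has only one window, so its spread is exactly zero:
# a "neck" at the long end is guaranteed by construction, data or no data.
```

Running this for several seeds shows how much of the wavy, necked structure appears even in trendless noise.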

Hocus Locus

Why do monkeys at typewriters have to go and ruin everything?
Their hysterical chatter reduces Shakespeare to gibberish.
The drunkards are walking all over our delicious mysteries,
but we’re the ones bumping into the lamp post again and again.
One more spin of the Monte Carlo method. I can stop at any time.


Hocus Locus says:
February 22, 2016 at 6:16 pm
Why do monkeys at typewriters have to go and ruin everything?

I think we are the monkeys and we make progress by a process that looks pretty random.

… innovation is not driven by narrowly focused heroic effort; and that we would be wiser (and the outcomes better) if instead we whole-heartedly embraced serendipitous discovery and playful creativity. Why Greatness Cannot be Planned


@Hocus Locus “It was strange to think of something churning inside the sun like a washing machine that ‘repeats’ on such a long period. It just didn’t make sense. Though the article did not suggest it, within a minute I had an idea ..and started looking at planet-orbit-years. Stopped on Jupiter at ~11.8 Earth years. I thought aloud to myself, “And they call this a mystery…?”
You are suggesting that the solar cycles are linked to the orbit of Jupiter. Your hypothesis would have some credence if they coincided with the perihelion of its orbit. At that point Jupiter would exert a gravitational force about 13 times that exerted by the Earth, on the sun.


Sheldon: Interesting analysis. Unfortunately, splitting the record into 1950-1975 and 1975-2000 is somewhat arbitrary.
Your presentation might be improved by distinguishing between forced and unforced temperature change/warming/variability, and between anthropogenic and natural forcing, which together cause forced warming. For the short periods you are discussing, forcing can be translated into forced warming using TCR. (Skeptics like Lewis & Curry (2014) and climate models don’t disagree that much about TCR, so forced warming has modest uncertainty associated with it.) Whatever isn’t forced warming is unforced variability: ENSO, AMO, PDO, etc. When you discuss anthropogenic warming over a period without mentioning its cause – anthropogenic forcing over that period – and make no mention of unforced variability, it is hard to judge the value of your post.


That might be an improvement.
But then it wouldn’t be following the logic of Karl et al.
By accepting all the assumptions of the Pausebuster paper this article convincingly disputes the busting of the Pause.

Frank, IMO his split is legitimate based on NCEI temp. There was virtually no T rise in the first period, and a distinct rise in the second. Karl uses the first period to lower the rate for the two combined, then karlized the SST to create a rise during the pause, then used the similarity in rates to claim no pause. What he forgot is that his new artificial rate is significantly less (about half) than what the climate models claim will happen. TCR ~1.7-1.8 over 70 years gives about a 2.1-2.2C rise by 2100, not ~1.1C. And BAU (A1B in CMIP3, RCP6.0 in CMIP5) both reach about 750-800 ppm by 2100. So model TCR is a good approximation.


This post isn’t concerned with TCR over 70 years. What it calls “anthropogenic warming” is TCR times the anthropogenic forcing change for 1950-1975 and 1975-2000.
Yes, Karl moved warming from the earlier to the later period – meaningless to me since the 50 year trend is more meaningful than any 25 year trend. Worse, Karl proved that there are many different and equally good answers to global warming for any one decade.

Evan Jones

This essay is a remarkable concatenation of ideas.


The graphs resemble a damped oscillation or damped sinusoidal wave, which is probably what you get when you do that type of analysis.


No, it’s just the changing frequency response as the length of the regression increases. It’s basically a numerical differentiation followed by a low pass filter, with decreasing bandwidth and shifting nodes as the length increases. Sometimes the frequency components in the data line up such that they are attenuated, sometimes relatively amplified. All except the lowest get progressively more attenuated as the length increases.
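The low-pass description above can be checked numerically (a minimal sketch with hypothetical window lengths, not tied to any particular dataset): the OLS slope over an n-point window is a fixed linear filter, so its gain at any frequency follows directly from the filter weights.

```python
import numpy as np

def slope_filter_weights(n):
    """The OLS slope over an n-point window equals sum(w * y)
    for these fixed weights, i.e. it is a linear FIR filter."""
    t = np.arange(n)
    tc = t - t.mean()
    return tc / np.dot(tc, tc)

def gain(weights, freq):
    """Magnitude of the filter's response at `freq` cycles per sample."""
    n = len(weights)
    z = np.exp(-2j * np.pi * freq * np.arange(n))
    return abs(np.dot(weights, z))

# Gain at a 60-month (5-year) cycle for 10-year vs 20-year regressions:
g10 = gain(slope_filter_weights(120), 1 / 60)
g20 = gain(slope_filter_weights(240), 1 / 60)
print(g10, g20)  # the longer window attenuates the cycle more
```

Note the weights sum to zero (any constant offset is rejected) and applying them to y = t returns exactly 1 – consistent with a differentiator followed by a low-pass whose bandwidth shrinks as the regression length grows.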


So the changing frequency response as the length of the regression increases looks like a damped sinusoidal wave. If one took the mean of the points, one would probably recover the linear trend.


interesting…now if we could just find some real numbers to work with
The pause is only in adjusted numbers…..otherwise it’s a decline

Larry Wirth

Why not compare 1984-1999 with 2000-2014? Apples to apples, so to speak?


Exactly. I think that is the whole point. Karl et al. justify their result by comparing apples to tennis balls. Anyone who professes to believe their arm-waving is either dumb as rocks, or cynically going along with it to further an entirely different agenda. I.e., they are either fools or knaves.


This type of analysis still suffers from cherry-picking: the beginning and end date temperatures will define the convergence point. If an extreme anomaly occurs, say 5 years before the end date, it will bias the positive trends and select against negative trends. This analysis can only be properly done with datasets that span more than, say, 10 times the longest tested interval. Otherwise, extremes close to the boundaries will dictate the majority of the trends.

Geoff Sherrington

Thank you for this novel (to me) and interesting way to analyse data.
You write “In general, warming rates become more stable with increasing trend length.”
This has been generally agreed for a long time, so it is good to move to quantifying the effect.
What does this mean for the BEST temperature/time series, whose basic rationale is to use the scalpel to create shorter and shorter intervals? Do you gain accuracy by homogenising after the scalpel, only to have it offset by the greater variation of shorter intervals? Where is the compromise point?
Jeff Id, some years ago at The Air Vent, showed another effect of shortening data. He cut a trend into two parts, averaged them, then recombined. The former rising trend became a staircase with one step and a much lower averaged trend. I just searched for it but could not find it. It bears quite a lot on the interpretation of the shape of your figures as graphed.

Robert of Texas

I follow the argument but there is a huge assumption – that anthropogenic warming accounted for a large part of global warming for about half the interval. I am sorry, but claiming that the hiatus resulted in half the amount of man-made warming is not substantiated by the argument. I get it that the point is there was a slow down in warming – but there is no way to attribute this to natural or unnatural causes so you shouldn’t try.
Here is the FACT I cannot reconcile – estimated CO2 emissions have grown over 30% since 2000, while the RATE of temperature increase declined. Does that sound like CO2 drives global warming to you?

RoT, see upthread. This post does not assume Karl’s assumptions are correct. More potently, it shows that EVEN IF they were (not at all conceded), Karl’s conclusion is STILL wrong. Elegant.

Forget about mechanical trend-setters – interesting or not. Put your brain to work and analyze what you see. This is how it was done before everyone got a computer.

Chris 4692

Conquer one issue at a time.

Chris 4692

That would not refute Karl et al. as directly.

I would like to make two points in response to your excellent analysis.
1. The surface temperature record is known to contain memory and persistence. This condition violates OLS assumptions; OLS trends are spurious under it, and robust tests for trends are necessary.
2. Karl, Nieves, Hansen, Lacis, Mann, Trenberth and their camp have been very successful in limiting the debate to temperature, whereas the real question in AGW is not whether it is warming but whether warming is related to fossil fuel emissions.
The only empirical evidence of this alleged relationship is a correlation between cumulative emissions and surface temperature (i.e. cumulative warming), and this correlation is spurious.


Indeed. Temperature is autoregressive ( current value is mainly determined by previous value, +/- a small change ). Much of the change is supposed, by climatologists, to be random or “stochastic” change. The autoregressive accumulation of random change is called a random walk.
Slopes in a random walk will happen all the time, and on all time scales. Most correlations will be spurious.
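The spurious-trend point can be demonstrated with a quick Monte Carlo sketch (illustrative only; the sample sizes are arbitrary assumptions): fit OLS trends to pure random walks, which contain no true trend, and count how often the naive t-test calls the slope significant.

```python
import numpy as np

def ols_t_stat(y):
    """t-statistic of the OLS slope, under the (here violated)
    assumption of i.i.d. errors."""
    n = len(y)
    t = np.arange(n)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    s2 = np.dot(resid, resid) / (n - 2)          # residual variance
    tc = t - t.mean()
    se = np.sqrt(s2 / np.dot(tc, tc))            # standard error of slope
    return slope / se

rng = np.random.default_rng(1)
n_sims, n_obs = 500, 120
hits = 0
for _ in range(n_sims):
    walk = np.cumsum(rng.normal(size=n_obs))     # random walk: no true trend
    if abs(ols_t_stat(walk)) > 1.96:
        hits += 1
print(hits / n_sims)  # far above the nominal 5% false-positive rate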

While statistics can be useful refuting bad statistics (Karl), they do not seem useful in separating human from natural warming. For that, we must understand the constraints on what CO2 can possibly do, and subtract this from observed trends.

Dr. S. Jeevananda Reddy

Though the author tried to show there is a pause, this is not the right way of doing it. On one side we state the fact that the data was adjusted, and still we show a pause. In fact, prior to 1997/98 [volcanic activities] and after 1997/98 [El Niño activities] present a zero trend from 1979/80, with a shift of 0.2 °C. The best way is to derive a sine curve through an iterative regression technique.
Dr. S. Jeevananda Reddy


It is interesting to realise that this graph is like a 3D plot of running means on dT/dt.
Plotting all the points on a vertical cut through the graph would give a running-mean filter of that period. The OLS slope provides an estimate of the average rate of change over that period. The different dots are all the slopes as the X-year-long window scans the data.
If each year-month were colour-coded, this would give a 3D graph of the running mean of dT/dt.
We can see that some running-mean transects, like 14.5y, have large variability, while others have less. This is probably more to do with how the distortions of running means interact with the data than anything more helpful.
Anyone not familiar with that distortion should read the following:

Peter Sable

The signal processing folks need to step up and start doing this work, not statisticians. IMHO almost all standard statistics tools are wrong for studying an autoregressive, non-stationary, quasi-periodic signal, a.k.a. global temperature.


I must have missed it, but I couldn’t find anywhere that the source of the data was given. Was it original data, or “homogenized”? I.e., real or fake?

Adam Gallon

To refute the methodology of a paper, you use the same data, right or wrong.
The argument isn’t “Is this the right/true/unreasonably tampered-with data?”, but whether the methodology applied to it is correct.

johann wundersamer

The refugees to Germany are neither disinfected nor high-temperature treated, whatsoever.
That’s a REAL environmental problem for Europe.

Frederik Michiels

Very interesting, but it would be even better if the warming of the ’30s were added, as then you could compare the last warming with the first warming episode. I would not be surprised to see them “overlap nearly perfectly”, which would totally debunk the global warming theory…

Proud Skeptic

Sheldon…Thanks for this. The graphic you came up with is inspired. As for whether it refutes Karl or not, I cannot say but it certainly provided me with a picture that will stick in my mind.
Now…if only “climate science” could come up with a reliable temperature record that 1) goes back far enough to be meaningful (where is that time machine when you need it?) and 2) actually covered the Earth, its oceans, and its atmosphere with enough direct measurements distributed uniformly, we might have something to support all of the half baked hypotheses we are constantly being exposed to.


Interesting technique and discussion. It seems that sampling all trends will necessarily over-represent any trend in the middle of the dataset. For example, when the minimum trend length exceeds half the dataset length, the middle portion of the dataset will be part of every trend plotted. How might this influence the interpretation?
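The over-representation concern raised above is easy to quantify (a sketch using the essay's example dimensions – 300 months with a 120-month minimum – which are assumptions here): count how many qualifying windows contain each month.

```python
import numpy as np

def coverage_counts(n, min_len):
    """For each of n points, count the windows of length >= min_len
    that contain it."""
    counts = np.zeros(n, dtype=int)
    for start in range(n - min_len + 1):
        for end in range(start + min_len, n + 1):
            counts[start:end] += 1
    return counts

# 300 months, minimum 120-month (10-year) trends:
c = coverage_counts(300, 120)
print(c[0], c[150], c[-1])  # middle months sit in far more windows than the ends
```

Any all-trends summary therefore weights the centre of the interval much more heavily than the endpoints, which is worth keeping in mind when interpreting the plots.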


Couldn’t you have skipped all the trend charts, and just pointed out that they used a double length time period to cheat?


Very clearly written and illustrated explanation of this novel way to compare warming rates in various periods, for an educated but non-expert reader like me. Bravo!
Question – how does the temperature-data massaging the weather bureaus have done affect this?


Our new analysis now shows the trend over the period 1950-1999, a time widely agreed as having significant anthropogenic global warming (1), is 0.113°C dec−1, which is virtually indistinguishable from the trend over the period 2000-2014 (0.116°C dec−1).

So, 25 years of ever more rapidly increasing atmospheric CO2 levels had ZERO effect on the “global warming” rate.
Cool. Time to stop worrying about CO2.

Geoff Sherrington

This is the reference I was chasing in an earlier note here.
If you have not read it, it could help because it shows a way in which your graphs are shaped the way they are.
In 2014, Jeff Id from The Air Vent, plus Steve McIntyre from Climate Audit, with substantial input from a regular CA blogger named “Roman”, a high-level statistician, worked on this matter together. It is all about segments – changes in trends after creating more segments in a time series.
I hope this helps.

“But as the trend length increases to 23 years, the range of warming rates widens considerably. Why?”
I wondered about that too. But you can check that out using this gadget. I’ve set it up to show that max 23 year trend:
It runs from the red dot to the blue. And there is a peak at 1998, and a dip at 1976. 23 years spans this nicely and gets the benefit of both, for a big trend. But if you cut to 22 years, it can only contain one of these features, wherever you slide it to in the range. So the max trend is smaller.

It’s pretty easy to show what Karl et al. did to ‘bust’ the dreaded “Pause”. They simply lifted the global SST anomaly up en bloc by ~0.05K across the May-June 2006 interval and voilà! That’s all it took.
No “buoy/ship correction” argument justifies such a move …


They claim that they avoid cherry-picking by choosing to calculate every 10+ year trend in the dataset, and then they only focus on two specific trends they calculated, the ~25 year trends from 1950-1974 and 1975-1999, and use these two trends exclusively to argue that Karl et al. did not correctly identify a “normal” AGW warming rate.
You can plot 17,000 data points, but if you then only select two and build your entire case on it, the other 16,998 data points aren’t being used and you’re simply back to cherry-picking.
What the data actually shows is that 15-year trends within a 25-year dataset can vary such that their prediction of the 25-year trend has a huge margin of error – something like ±25-50% of the 25-year trend, based on the plots provided. Which produces a conclusion: a sub-multi-decadal trend does a poor job of resolving the multi-decadal trend. This is probably due to the competing influence of natural climate variability on the trend over sub-multi-decadal periods, and is something we already know.
It also means that all of this bickering about whether there was/is or wasn’t/isn’t a hiatus over the last 15 years is a largely pointless exercise. It’s a sub-multi-decadal time period, so it doesn’t do a good job of resolving the multi-decadal (AGW) trend. So it doesn’t say very much about the multi-decadal (AGW) trend.
There’s no reason to be having this fight. It’s just pulling statistics out of noise and trying to argue that they represent signal.


I am trying to figure out what this graph is really showing. Is this something commonly used in statistics?