Another impartial look at global warming…

Guest essay by M. S. Hodgart (Visiting Reader, Surrey Space Centre, University of Surrey)

 

A feature of the politicised debate – if such it may be called – over AGW (anthropogenic global warming) and so-called ‘climate change’ is the tendency on both sides to cite only the evidence supporting their views and to ignore what does not. Scientists of course are supposed to be above this sort of thing and to take all relevant evidence into account.

One finds a lot of partiality in the interpretation of the trend in climate data – particularly in the available time series of average temperature measurements at the surface of this planet. Is temperature going up or down, or has it paused? What is happening?

Sceptical commentators were the first to draw attention to a recent pause or hiatus in global temperatures, and are naturally tempted to see it as persisting for as long as possible. The ‘warmist’ climate scientists – those who compiled the IPCC reports, including those who work for, or presumably get their research funding from, the UK Meteorological Office – have tended the other way. For a long time they were in a state of denial about any pause – not even conceding a reduction in the warming rate – presumably because anything that detracted from the sacred dogma that an uncontested increase in atmospheric CO2 must entail a rise in temperature was very unwelcome.

But where both sides of the debate are often referring to the same data one must ask why it is not possible to come to a more objective conclusion.

I focus first on the time series of remotely sensed TLT satellite measurements released by Remote Sensing Systems. I also look again at the HadCRUT4 data, which were the object of my analysis on WUWT in September 2013. It should be emphasised that the physical accuracy of these data is not under review here; that is a separate issue.

Plotted either as monthly or annual updates, the time series of globally averaged temperature measurements shows a substantial random-looking scatter from one month (or year) to the next. This scatter, and a general lack of knowledge as to what exactly drives the temperatures, makes it difficult to determine the trend. Yet so many people debate, write and comment as if the trend in these data were entirely obvious. They think they know – ignoring the fact that the scatter in the data poses a significant problem, not least in establishing what a trend means. The distinguished econometrician Phillips has memorably written (see his introduction):

No one understands trends. Everyone sees them in data.

also (and not altogether ironically)

A statistician is a fellow that draws a line through a set of points based on unwarranted assumptions with a foregone conclusion.

In other words, be careful if you run a linear regression on data like these. In the spirit of impartiality, and with all respect for his warning, I try here to draw reliable conclusions about the trend from these particular data. I must however put on record that, like our ‘climate lord’ Matt Ridley, I am a ‘luke-warmist’. My sympathies are with the ‘sceptics’, because there seems to have arisen an officially sponsored global warming industry and a general scare-mongering by, and of, the scientifically ignorant. It has for example become a political ‘fact’ – contrary to all biology and chemistry – that CO2 in the atmosphere, at present or worst-case future concentrations, is or will be a pollutant, i.e. a poison. It is not; its presence is essential to plant growth and therefore to our survival. The material bulk of all trees and crops is converted from CO2 in the air. Trees and crops grow out of the air, not the ground! See the brilliant “Fun to Imagine” TV series by Feynman. It is difficult to take seriously an unremitting propaganda that is prepared to distort the science as badly as this.

Lord Monckton and the RSS data

Viscount Monckton of Brenchley is a prominent climate sceptic. In a recent release to WUWT he emphasises what seems to him an obvious fact: that global surface temperatures have paused for almost two decades. He is not alone in this view, but let us see how he comes to this conclusion. He appeals first to the TLT satellite measurements released by Remote Sensing Systems (RSS). By the simple procedure of linear regression on their monthly data he finds an effectively zero slope (his last cited month was September 2015) going back to February 1997. I replicate his result in my fig 1 (the red line). In consequence it seems obvious to him – and to so many others – that global warming has indeed stopped for all this time. But has it?

His problem

The problem is that he has chosen to disregard all the prior months of available measurements going back to January 1979. A linear regression over all these months yields a line (brown) with a slope of 0.12 deg C/decade. Although he acknowledges this effect, he does not seem to realise that this longer regression makes his conclusion untenable, whatever assumptions are made as to what the linear regression achieves.

He probably assumes that the slope resulting from linear regression determines the trend in global temperature. In other words: “whatever I choose to calculate, and the way I do it, defines the observed effect”. If so then he runs into a flat contradiction. The red line gives him his “Pause” (he uses a capital letter); but the brown line says that over the same time interval temperatures continued to rise. So which is it? The trend can’t be doing both. The RSS web-site plots only the longer-span regression. For them there is no pause.

If however he were to make the more orthodox assumption that linear regression estimates a linear trend, there are still difficulties. It could be that the data back to 1997 conform to a classical signal + noise model: a straight line of some slope and offset (the signal) which one cannot see because of an obscuring random variation (the noise). The standard model is

z[k] = (a + b·k) + v[k]                (1)
           (i)       (ii)

where z[k] is the time series, the variable k is a count in months or years (it is easiest to start at zero), and the signal or trend in (i) is defined by the offset a and rate b. The noise terms v[k] in (ii) are introduced to account for the random-looking fluctuation we can see in the time series. Ideally they would answer to a description of ‘white’ noise, but the terms here exhibit some limited correlation – approximating what electrical engineers call ‘low-pass noise’. Linear regression estimates an offset â and slope b̂ which are in error from the true a and b because of that scatter. There are then two problems – the minor one being that his zero slope is at best a likely estimate; it is not definite.

More importantly, it is difficult to decide over just what span of years this model (1) could be valid. We could postulate that model (1) applies over a limited span. But it is asking a lot of Nature to oblige Monckton with even an approximation to a linear model which just happens to start in Feb 1997. If it applies over all the years, then the two regressions are estimating the same trend, and the flat red regression is a ‘freak’ due to a chance combination of noise terms. Again one would conclude that only the longer regression had any validity.

[Fig 1 image]

Fig 1 RSS monthly data and linear regressions. Red line: Monckton’s regression from Feb 1997 to September 2015. Blue line: regression from mid-1993. Brown line: regression through all data from Jan 1979.

But there is hope for Lord Monckton still. It can be shown that the assumption that a single linear trend runs over the whole record is unlikely to be true. The difference in slope between the two regressions, 0.12 deg C/decade, is too large to be attributable to ‘chance’ – as one can readily determine. The two regressions, together with a third regression (blue line) calculated from mid-1993 with an intermediate slope, strongly suggest that beneath the noise the trend is not following a straight line.

All three lines can be reconciled if we allow that there is a non-linear trend – as indeed the IPCC scientists readily concede in ‘Box 2.2’ of their latest report AR5. There has to be something more complicated than a straight line beneath the noise. A generalisation of (1) is the classic

z[k] = s[k] + v[k]                (2)

where z[k] are again the data points, and the signal = trend s[k] follows an assumed but unknown curve. The v[k] are again noise terms. The curve hidden in the data can be assumed to cover the whole span of years. Model (1) is at best an approximation over a limited span.

A linear regression is not invalidated by this model but the computed slope has to be interpreted differently. It will have to be seen as an average of a trend with some actual variation within the span of years.

Accordingly the overall regression (brown line) computes an average of a trend which is non-linear between the years 1979 and 2015. But Monckton’s regression is in principle also no more than an average trend. So yes: there is a ‘Pause’, but its strict interpretation is that “an estimate of the average trend from Feb 1997 to Sept 2015 happens to have a zero slope”. But no: he has not demonstrated the most likely actual trend over this time.

As I show below it is much more likely that temperatures were still rising past 1997 and that Monckton only gets his Pause from a later date. As many others have pointed out it is easy to get fooled in statistical analysis by an apparent pattern suggested by what turns out to be the influence of a random component in the data.

Monckton’s construction does have one useful consequence: he has shown that none of these linear regressions (including his own) is likely to be estimating a straight line.

Alternative stochastic model?

In the deterministic trend model (2) there is assumed to be some unknown but well-defined curve or line concealed by low-pass noise – strictly, a weak-sense stationary stochastic process. We need to be aware of a substantial literature which views the entire time series as a generalised non-stationary stochastic process. It is ‘all noise’. This approach is the preferred choice of econometricians who have taken a look at climate data. In his extensive publications Professor Terence Mills has looked at both approaches but favours the all-stochastic. If identification of ARIMA processes is your meat then there is plenty to work on. I wish you luck! In my opinion the stochastic approach leads to paradox and a terminological confusion. The data series has to be regarded as the output of a feed-forward and feed-back machine whose input is white noise. If this were true then every possible time series would be ‘random’. So where is your anthropogenic global warming? I will follow the climate scientists and stay with deterministic trend estimation in general, and (2) in particular.

Estimating a non-linear trend

If we have to fall back on the generalisation which is (2), then we shall have to estimate s[k] while only having access to the data z[k]. This is an exercise in curve fitting – for which there is a plethora of methods.

The difficulty with all methods of curve fitting is that there are essentially two kinds of error to contend with: the random error or variance due to the omnipresent noise v[k]; and a systematic error or bias due to the poor fit of a proposed fitting function to the unknown hidden signal s[k]. Whatever method is adopted, the unavoidable problem is to decide whether the computed curve is over-fitting (too much random error) or under-fitting (too much bias error). There is a model selection problem.

In my earlier release to WUWT, in the 2013 analysis of the HadCRUT4 data, I proposed using a cubic loess – which Mills shows is superior to quadratic or linear loess – and also a polynomial regression. In the case of loess the problem is to decide on the effective window width; with a polynomial, it is to decide on the degree.

For loess, if the window width is too narrow the random error dominates the systematic, and if too wide, vice versa. For a polynomial regression, if the degree is too high the random error dominates the systematic, and if too low, vice versa. There are many model identification methods designed to guide a choice – starting perhaps with the Akaike Information Criterion, modifications such as that by Hurvich and Tsai, and many more. There are also various forms of cross-validation technique. But they seem to me (having tried some of them) to be uncertain and unreliable. Statistical experts may disagree.
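As an illustration of what such a criterion involves, here is a sketch of polynomial degree selection by the small-sample-corrected AIC of Hurvich and Tsai. The data are synthetic; this shows the mechanics only, not an endorsement of the method.

```python
import numpy as np

def aicc(x, y, degree):
    """Corrected Akaike criterion (Hurvich & Tsai) for a polynomial fit,
    assuming Gaussian residuals."""
    resid = y - np.polyval(np.polyfit(x, y, degree), x)
    n, p = len(y), degree + 1                   # p = number of parameters
    rss = float(resid @ resid)
    return n * np.log(rss / n) + 2 * p + 2 * p * (p + 1) / (n - p - 1)

def best_degree(x, y, max_degree=8):
    return min(range(1, max_degree + 1), key=lambda d: aicc(x, y, d))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = 0.5 * x**2 + rng.normal(0.0, 0.02, x.size)  # quadratic truth + noise
print(best_degree(x, y))                         # a low degree should win
```

The correction term matters when p is not small relative to n; with plain AIC the selection drifts towards over-fitting.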

Corroborating curve fitting

Whatever the procedure, the would-be statistician is left with a degree of freedom in allocating a crucial parameter. Some years ago however I stumbled on the fact that a combination of cubic polynomial loess and a standard polynomial regression offers a unique choice of window width for the former and degree for the latter: the pair which gives the least disparity between the two generated curves. The one selects the other. The combination is self-selective. This idea seemed to work well on the HadCRUT4 data. This serendipitous result is now found to apply to the RSS data. In fig 2 a (half) window width of 168 months for a cubic polynomial loess and a polynomial degree of 5 give the closest agreement to each other (shown in blue dashed lines with no attempt to distinguish between them).
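The self-selection idea can be sketched as follows. A simple tricube-weighted local cubic fit stands in for the author’s cubic loess, the data are synthetic, and the parameter grid is invented: the point is only that the (window, degree) pair whose two curves agree best selects itself.

```python
import numpy as np

def cubic_loess(y, half_window):
    """Local cubic regression with tricube weights -- a simple stand-in
    for a full loess implementation."""
    n = len(y)
    x = np.arange(n, dtype=float)
    fitted = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        xs, ys = x[lo:hi] - i, y[lo:hi]
        d = np.abs(xs) / max(abs(xs[0]), abs(xs[-1]), 1.0)
        w = (1.0 - np.minimum(d, 1.0) ** 3) ** 3      # tricube weights
        coeffs = np.polyfit(xs, ys, 3, w=np.sqrt(w) + 1e-9)
        fitted[i] = coeffs[-1]                         # value at window centre
    return fitted

def disparity(y, half_window, degree):
    """Mean squared gap between the loess curve and a global polynomial."""
    t = np.linspace(0.0, 1.0, len(y))
    poly = np.polyval(np.polyfit(t, y, degree), t)
    return float(np.mean((cubic_loess(y, half_window) - poly) ** 2))

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 300)
z = 0.5 * t + 0.1 * np.sin(6.0 * t) + rng.normal(0.0, 0.05, t.size)

# The pair of smoothing parameters whose two curves agree best selects itself:
best = min(((w, d) for w in (20, 40, 60) for d in (3, 4, 5)),
           key=lambda wd: disparity(z, *wd))
print(best)
```

On the RSS monthly data the analogous grid search reportedly lands on a 168-month half window and degree 5.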

These very similar curves are perhaps the most likely deterministic estimates of the trend, but they cannot be the exact truth. The uncertainty is again due to the noise present in the data. Assuming however that they are ‘close enough’, what they have in common – if we disregard the discernible oscillation – is a depiction of a rising trend followed by a pause effectively starting around 2003, and not 1997.

Alternative segmented linear regression

The shape of these curves also provides the motivation for a different idea: to apply a split or segmented regression. The idea is to run two regressions over all the data years, with a break point chosen to give the least discontinuity between the two segments.

The break point is found, after a trial-and-error search, to be September 2003. Monckton still gets his pause, but it is now reduced to the last 12 years. The first segment of the proposed regression in fig 2, from 1979 to 2003, finds a computable rate of 0.16 deg C/decade. There is a pause after that, over which the trend is indeed flat. The trend does not literally switch in slope in the month of September 2003; the purpose is to provide a meaningful computable rate.

[Fig 2 image]

Fig 2 RSS monthly data Jan 1979 to September 2015. Dashed blue curves: cubic polynomial loess with 168 month half window width; polynomial regression with degree 5. Continuous red lines: segmented linear regression with break point September 2003.

However, each regression is seen by comparison with the loess and polynomial curves to be an acceptable approximation. The two segments are plausible averages over respectively separate ranges of data. The apparently contradictory or competing regressions in fig 1 are now explained by more than just positing average slopes of a non-linear trend. Some information has been gleaned as to what that trend consists of.

Application to HadCRUT4 data

The RSS data tell us nothing about global trends before 1979, and one has to turn to the publicly available land- and sea-based surface measurements. The UK compilation HadCRUT4 goes back to 1850, but the two US series go back only to 1880. It is not my intention to try to assess the accuracy and reliability of any of these compilations. It is clearly a difficult exercise, relying on measurements which were never intended for a systematic global experiment. Particular difficulties must be associated with sea temperature measurements, which historically were very crude indeed. The series is of course under continual review from both its compilers and from sceptical critics – which can only be a good thing. Avoiding the very important issue of measurement error, what can be inferred about the trend in global temperature if we should decide to trust HadCRUT4? To repeat: in my previous submission to WUWT in September 2013 I used the self-checking combination of a high-degree polynomial fit and a cubic loess. But now let us try something simpler – a succession of split linear regressions. We will need more than one break year. The same criterion will be adopted: that there needs to be the least discontinuity between successive regressions. All the break years meeting this requirement have to be searched for and discovered by trial and error.

The result of this exercise is shown in fig.3 on the annually updated time series.

[Fig 3 image]

Fig 3 HadCRUT 4.4 annual boxed connected points to 2014. Discrete heavy spots are Met Office approved discrete decadal averages. Brown lines are sequential regression segments. Arbitrary start from 1870; break point years 1910, 1942, 1975, 2005. Estimated r.m.s. noise σ = 0.098 deg C. Red lines estimate the average trend; discovered break point year 1941; post-war average trend 0.087 ± 0.012 (2 s.d.) deg C/decade from 1941 to 2014.

I start in the same year, 1870, as in my previous report to WUWT. We need four break years – splitting the trend estimate into five segments (see brown lines). It should be noted that these break years are discovered, not arbitrary choices. The heavy points also depicted are discrete decadal averages of temperature located in the middle of each decade – a simple statistic which the UK Met Office has long favoured and which was adopted for the first time by the IPCC in their AR5 report (see part 2.4.3 of AR5).
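The decadal-average statistic is trivial to reproduce (the numbers below are invented; the Met Office values would come from the HadCRUT4 annual series):

```python
def decadal_means(years, values):
    """Average annual values over each complete decade and key the result
    by the decade's midpoint year (the mid-decade convention described above)."""
    buckets = {}
    for year, value in zip(years, values):
        buckets.setdefault(year // 10 * 10, []).append(value)
    return {decade + 5: sum(vs) / len(vs)
            for decade, vs in buckets.items() if len(vs) == 10}

years = list(range(1870, 1890))
anoms = [0.25] * 10 + [0.5] * 10          # two invented flat decades
print(decadal_means(years, anoms))        # -> {1875: 0.25, 1885: 0.5}
```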

As can be seen, the proposed regression lines are in excellent agreement with these averages. This agreement surely promotes confidence in both procedures. Comparison with my earlier presentation also shows a good agreement with the optimally chosen cubic loess and polynomial regression. One can see a broad similarity with the RSS time series from the 80s onwards. The temperatures started rising from 1975, and no pause is found until a break year of 2005 (two years later than for the RSS data). With this latest version of HadCRUT4 (now issue 4.4) we now get a low warming rate (of about 0.01 deg C/decade) from 2005 (compare the flat response of the RSS data). I have not included the year 2015, which was not complete when running all these calculations.

One should emphasise that (i) these computed lines are probable estimates, not certainties; (ii) they are not meant to be taken literally but to be seen as approximants to some postulated smooth curve which is hidden from view, and for which the loess and polynomial regressions may be better estimates.

The split regression segments graphically convey the impression that there were two long periods when temperatures were actually falling. Temperatures fell from at least 1870 to 1910, but rose from 1910 to 1942. They were then falling again from 1942 to 1975. From 1975 to 2005 warming resumed at a probable rate of 0.20 deg C/decade. But the warming did not persist at this rate. It seems to me probable that a third such half-period has begun, in which there is now a pause (though with the revised HadCRUT4.4 it appears as a very slow warming).

This recent pause looks to be the continuation of an oscillation of global temperatures, with a period of slightly more than 60 years, going right back through the record and imposed on a generally rising mean trend. I am not of course the first ‘sceptic’ to point this out.

I come to much the same conclusion as in my 2013 report. It seems that the much simpler sequential regressions are as convincing a way of specifying the trend in the data as my previous effort using polynomial regression and cubic loess.

What is the matter with the UK Met Office and the IPCC scientists?

In the summer of 2013 the UK Met Office, and the academics whom they support, called a press conference in London to concede (reluctantly) a pause or ‘hiatus’ in global temperatures, and also to confess that they hadn’t a clue as to why it was happening. The rather critical BBC journalist David Shukman, who was present, noted that

….the scientists say .. pauses in warming were always to be expected. This is new – at least to me…I asked why this had not come up in earlier presentations. No one really had an answer, except to say that this “message” about pauses had not been communicated widely..

Indeed! The press conference coincided with reports by the Met Office (report 1, report 2, report 3) on the same theme. What the Met Office scientists did not discuss, or even concede, in that 3-part report is the presence of a substantial oscillation over the historical record. This oscillation surely cannot be attributed to the increasing concentration of atmospheric CO2, and it accounts for half of the faster rate of warming in the 80s and 90s.

I find it troubling that presumably intelligent scientists (and they have competent statisticians too) cannot bring themselves to acknowledge – let alone explain or even properly discuss – the statistical fact that two extended cooling periods have featured in the past while CO2 levels were presumably always rising.

The reader will find the same statistical obfuscation in the two most recent reports (AR4 and AR5) released by the IPCC. A pause (or hiatus, or standstill) is most unwelcome. Yet there is surely something to explain here for those who believe in a dominant anthropogenic effect on global warming. Since at least 1958, with the Keeling measurements (Mauna Loa etc.) – and no doubt long before that – atmospheric CO2 levels have been rising monotonically (after seasonal averaging). It is hard to avoid the impression that there has been political pressure not to acknowledge the obvious: that an ever-rising concentration of atmospheric CO2 cannot be the only effect determining global surface temperature.

Trend v. average trend

In principle an oscillation does not have a trend. There is a need therefore to identify a mean trend which discounts that obvious oscillation. As suggested before one can differentiate

trend in the data = mean trend in the data + quasi-periodic oscillation

How then to estimate this mean trend? My previous effort was perhaps too elaborate. The following may be more convincing. One can construct a split regression with just two segments (the two red lines in fig. 3). To my mind these lines steer a convincing middle course through the oscillating trend conveyed by the multiple split regressions. They may be about right. The break year of 1941 is again not an arbitrary choice: it has to be searched for, in order to ensure the least discontinuity between the two regressions with this construction. This notional mean trend is being estimated by two average trends computed by linear regression between favourable years. The post-war average trend is found to be 0.087 ± 0.011 (2 s.d.) deg C/decade, i.e. less than 0.1 deg C/decade, which is about half the rate of the actual trend that peaked (temporarily) in the 80s and 90s. The error limits are computed after first estimating the standard deviation of the noise, σ = 0.098 deg C.
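One alternative way to see the same kind of number (my sketch, not the author’s two-segment construction) is to regress jointly on a line plus a roughly 60-year sinusoid, so that the oscillation is absorbed by the sin/cos terms and the linear coefficient estimates the mean trend. The series below is synthetic, built with rates of the order quoted in the text.

```python
import numpy as np

def mean_trend_slope(y, period):
    """Slope of the linear term when y is regressed on
    [1, t, sin(2*pi*t/period), cos(2*pi*t/period)]."""
    t = np.arange(len(y), dtype=float)
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1]

rng = np.random.default_rng(0)
t = np.arange(145)                       # annual steps, 1870..2014
truth = 0.0087 * t + 0.1 * np.sin(2 * np.pi * t / 62)
z = truth + rng.normal(0.0, 0.098, t.size)

print(round(10 * mean_trend_slope(z, 62), 3))   # deg C/decade, near 0.087
```

The period (62 years here) is itself an assumption; in practice one would scan it, or prefer the break-year construction in the text, which does not fix a waveform.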

It is extraordinary that in their various releases neither the UK Met Office nor the IPCC seem to want to confront these statistical facts in their own data. It is of course unwise to make a projection into the future, but if we trust neither the elaborate computer climate models favoured by the Met Office nor the projections of Mills-type all-stochastic models, this is all we have got. One can only note that over the 85 years from now to 2100 the projected increase could be around 0.0087 × 85 ≈ 0.74 degrees. Could this be realistic, and if so is that a cause for alarm? I only ask.

Couldn't B. Cruz-ier (@CouldntBRighter)
January 21, 2016 12:46 pm

Lord Monckton does not simply fit a line to the data starting in 1997 and show that the slope to the present is close to zero. He CLEARLY states in his posts that he CALCULATES the endpoints based on the hypothesis that the regression trend is close to zero or negative. He is not cherry-picking the endpoints, he is deriving them from the time series.

Reply to  Couldn't B. Cruz-ier (@CouldntBRighter)
January 21, 2016 1:01 pm

My understanding of Monckton’s argument is that the climate scientists themselves have said that if there is a 15-year pause, then something is wrong with the models. He has found a 15-year pause. Ergo, something is wrong with the models. The fact that if you go back more than 15 years you no longer have a zero trend is irrelevant to the argument he is making.

Reply to  tim maguire
January 21, 2016 1:11 pm

Yes, but worse.
First they said 10 years would invalidate the models (Jones)
Then they said no, it would take 15 years (Santer)
Then they said 17 years (Santer again)
After moving the goal posts several times, they’ve now taken the position that the playing field doesn’t exist.

Reply to  tim maguire
January 21, 2016 2:21 pm

“First they said 10 years would invalidate the models (Jones)
Then they said no, it would take 15 years (Santer)
Then they said 17 years (Santer again)”

You should quote properly. None of those people said any of those things.

Latitude
Reply to  tim maguire
January 21, 2016 3:37 pm

They moved the playing field to space…it’s the satellites
Problem is, they have tweaked the temp history so badly…that no computer game will ever be right
The games are predicting sky high…based on an unreal past…but they are right in line with that unreal past

TRM
Reply to  tim maguire
January 21, 2016 7:52 pm

Just for you Nick. You are technically correct. Happy now? 😉
Dr. Phil Jones – CRU emails – 7th May, 2009: ‘Bottom line: the ‘no upward trend’ has to continue for a total of 15 years before we get worried.’
Santer 17 year: http://nldr.library.ucar.edu/repository/assets/osgc/OSGC-000-000-010-476.pdf

Reply to  tim maguire
January 21, 2016 10:33 pm

Shows the importance of actually quoting what was said. Turns out PJ spoke of 15 yrs, not 10. Or was that Santer talking about ENSO-adjusted numbers?
But then, so was Jones. The full quote from the email was:
” Bottom line – the no upward trend has to continue for a total of 15 years before we get worried. We’re really counting this from about 2004/5 and not 1998. 1998 was warm due to the El Nino.”

Reply to  tim maguire
January 21, 2016 10:39 pm

And yet…
Global warming stopped 18 years, 8 months ago.
The alarmist crowd bends itself into pretzels attempting to deflect from the fact that global warming has been STOPPED for many years.
Now I see why martyrs will die to be right…

Reply to  tim maguire
January 21, 2016 10:44 pm

Besides, Nick, why should we listen to anything Jones said? He proved himself to be a thoroughly corrupt, dishonest individual. So of course he will ‘Say Anything’ that is self-serving.
Why do you make an unethical rascal like that your HE-RO? Jones lied for money, for status, and to support the UK’s good old boy network. But you approve.
I don’t see very many credible people on your side of the fence. Certainly there’s a lack of honesty. And ZERO scientific skepticism…

Chris Wright
Reply to  tim maguire
January 22, 2016 2:48 am

Yes, I think the OP’s argument about Monckton doesn’t make sense.
Here’s an example:
Suppose the 20th century warming was as follows:
A constant rise from 1900 to 1950
Zero change from 1950 to 2000
I then make the claim that there was zero global warming from 1950 to 2000.
The claim is obviously true: the graph for that period is perfectly flat.
The fact that there was warming prior to 1950 is completely irrelevant. My statement was specifically for the period 1950 to 2000.
I think Christopher Monckton is right.
Chris

Bob Boder
Reply to  tim maguire
January 22, 2016 3:36 am

Nick
Flat out did the models predict the pause?

Reply to  tim maguire
January 22, 2016 8:17 am

Bob Boder,
Their models couldn’t predict the sun rising in the morning.

TYoke
Reply to  tim maguire
January 22, 2016 10:57 pm

“The fact that if you go back more than 15 years, you don’t have zero trend anymore is irrelevant to the argument he is making.”
All statistical analysis is directed towards answering a single question: does the imperfect observational data support or falsify some given model? Note well that it is the statistician/scientist who supplies the model(s) in EVERY case. Choice of some particular model against which the observational data is to be compared is an intrinsic part of statistics.
Mr. Hodgart quarrels with Lord Monckton’s model. That model is that a “pause” is a useful measure of what recent global temperatures are doing when compared with the GCMs. Monckton carefully defines a “pause” as the maximum length of time measured backwards from the present with zero trend. Objections could certainly be raised to that particular model, but it is entirely wrong to think that a statistical analysis could somehow DISPENSE with a model.
Mr. Hodgart for instance speaks of trends back to 1977, break points, polynomial regressions, and cubic polynomial loess, but all of those choices are entirely his own. They don’t somehow arise magically out of the observational record itself, and they are certainly modeling choices that are every bit as disputable as Monckton’s.

DAV
Reply to  Couldn't B. Cruz-ier (@CouldntBRighter)
January 21, 2016 2:35 pm

Agreed. Monckton doesn’t help things, though, by placing a left-to-right arrow over the graph. It makes it look like he picked the left point.

Reply to  DAV
February 2, 2016 1:33 am

To TYoke. There is a general problem in identifying the trend in a time series of no known theoretical structure. There are fundamentally two different approaches – the deterministic and the stochastic. The former is perhaps easier to follow. The classic introductory text of Kendall and Ord [REF] gives some idea of what it entails. I follow these experts by regarding the trend as whatever in principle could be removed from the time series so as to leave no trend. All may surely agree that a white noise sequence (or a filtered approximation – technically a weakly stationary stochastic process) shows no trend.
This consideration leads automatically to the signal + noise deterministic model of eq 2 to account for the time series, where the trend is more precisely identified with the slope of that signal component. On the RSS data this signal seems to be following some kind of curve. In general trends can be expected to be non-linear – a point which is now conceded by IPCC scientists – see Box 2.2 in ch 2 of the AR5. The signal + noise model is widely adopted throughout the science and engineering world. What other model would you suggest? Ultimately perhaps it is a matter of arriving at a convention acceptable to a participating and informed community.
I made at least one typo in referring to the start of the RSS data as 1977 rather than 1979. Apologies.
[REF Kendall, M.G. & Ord, J.K. (1990) Time Series, 3rd edn ]

Anne Ominous
Reply to  Couldn't B. Cruz-ier (@CouldntBRighter)
January 21, 2016 2:51 pm

Which means, in the same sense as Hodgart states, that the points are not arbitrary but calculated based on the assumption of a certain class of curve. No less valid than his own method, for the purposes that were clearly stated.
Another point Hodgart glosses over: it would be very interesting to see how his Fig. 3 would change if the chosen dark brown points were averages at the beginning of each decade, rather than the middle. It would certainly affect the point at which the slope changes on the far right. But would it be earlier, or later?
I don’t know the answer. But it illustrates the very kind of arbitrary choices Hodgart appears to be railing against.

Proud Skeptic
Reply to  Couldn't B. Cruz-ier (@CouldntBRighter)
January 22, 2016 2:36 pm

I probably understand this the least of anyone commenting, but I understand that Monckton’s methodology is to run his calculation starting from the present, then rerun it, extending the period by one month, until he gets to a point where the trend is no longer flat (or statistically flat). At that point he stops. He has been very up front about this and it makes sense to me.
That said, for something that doesn’t exist, the Pause has sure had a lot of people (ones who claim to know something) trying to explain it away. My understanding is that they have even gone back and messed with the data in order to send the Pause into the same memory hole as the Medieval Warm Period.
If Monckton doesn’t know what he is talking about then he sure has a lot of company in this on the other side.

Monckton of Brenchley
Reply to  Couldn't B. Cruz-ier (@CouldntBRighter)
January 22, 2016 2:43 pm

I am very grateful to a supporter of the splendid Ted Cruz, and to many other commenters here, for their kindness in drawing attention to the fact that the word “impartial” that the author of the head posting has awarded to himself is more than somewhat of a misnomer.
Given that Dr Hodgart is a Reader (a sort of senior lecturer), he ought perhaps to have taken the trouble actually to read my monthly updates to the global temperature record.
Had Dr Hodgart read my temperature updates, he would have realized a number of points. First, as Couldn’t-be-Cruzier has rightly pointed out, I state quite clearly every month that the length of the Pause in my RSS graph is calculated as the longest period of months, ending in the most recent month for which data are available, during which no global warming has occurred.
Dr Hodgart accuses me of having ignored the rest of the RSS dataset since 1979. Had he read my material, he would have seen that I frequently – and never less than six-monthly – show the full RSS dataset, along with the full UAH, HadCRUT, GISS and NCEI datasets. If I remember rightly, I showed the full RSS dataset not more than a month back.
In his preachy tone, he presumes to lecture me on various matters on which he is simply wrong. He objects to taking linear trends on stochastic data, but linear trends are what the IPCC uses and Phil Jones recommends; everyone (except Dr Hodgart) understands them, and the IPCC’s 1990 predictions for warming this century are themselves almost linear. I use linear trends because that removes one source of potential disagreement between me and the IPCC. They can hardly complain if I use their own method to see whether their predictions are coming true. Indeed, any genuinely impartial analysis of global temperature trends would compare those trends with IPCC predictions over various periods, as I did a couple of weeks ago.
Dr Hodgart says one cannot or should not use linear trends on stochastic data. Nonsense. It is precisely when data are stochastic that calculating a linear trend gives us some idea of whether there has been warming, cooling or no change over some selected or calculated period.
Dr Hodgart – without the slightest justification – says, in effect, that I am assuming that the trend on past data is a prediction of a trend on future data. Again, if he had done me the credit of actually reading my monthly postings before commenting on them, he would have seen that just about every month I include a warning that a trend is not a prediction.
Much of Dr Hodgart’s posting is devoted to a rambling and somewhat inexpert proposal to use several methods to determine – or not to determine – trends on the temperature data. Most textbooks of statistics contain warnings about what seems to be Dr Hodgart’s favorite proposed method – juggling with linear trends over multiple periods to see where they join up. That is a fool’s game, and one which has led to the IPCC being reported to the Swiss fraud authorities. It also contradicts Dr Hodgart’s assertion (albeit a nonsensical one) that the full dataset is better for determining trends than a partial dataset.
What Dr Hodgart and many others who have clumsily tried to challenge my surely simple monthly graphs fail to appreciate is that, with CO2 concentration increasing rapidly, the likelihood of very long pauses such as the near 19 years shown by the RSS dataset is supposed to be vanishingly small: yet even the IPCC admits the existence of the Pause and, in consequence, has greatly reduced its predictions of near-term global warming – a fact that the “impartial” Dr Hodgart somehow failed to mention.
The reason why the past couple of decades are important is that during those two decades the rate of increase in CO2 concentration rose. Models tell us that there should be an instantaneous and quite strong warming response. But that response is not happening. And that raises questions – in the minds of genuinely impartial observers, but not, perhaps, in the mind of Dr Hodgart – about whether the models are all they are cracked up to be, and whether the “science” is truly “settled”.
I suspect that Dr Hodgart is part of a concerted effort – noticeable in recent months – to try to do away with the Pause. There was the ludicrous Tom Karl paper tampering with the ARGO bathythermograph dataset because it inconveniently showed no warming of the surface strata of the ocean, and what little warming it does show (equivalent to a terrifying 1 degree every 430 years) is coming from below, and not from above.
Then the ERSST temperature data were tampered with in a manner that conforms to Karl’s paper. Then came the 20-lies video from the usual suspects. Now Dr Hodgart comes along, self-evidently not having read or understood my monthly analyses, and does his level best to cast doubt upon them and nasturtiums at me, presenting himself as though he were as skeptical as my noble friend Matt Ridley.
Well, it won’t wash.
If Dr Hodgart wants to do away with the Pause, all he has to do is wait. The el Nino persists in region 3.4, and that is likely to keep the temperature rising for some months, and perhaps to extinguish the Pause for a time, and perhaps for good – after all, one would expect some warming as a result of our enriching the atmosphere with greenhouse gases. Even if rising temperatures do not eradicate the Pause, the unspeakable Dr sMears of RSS, who participated all too enthusiastically in the “20 lies” video attacking the rival satellite dataset, looks as though he is gearing up to rewrite the RSS data to ensure that the Pause does not return in any event.
However, notwithstanding the vast amounts of data tampering in which the keepers of the terrestrial temperature datasets have already indulged, the rate of global warming over just about all timescales and on all datasets has proven to be, and continues to be, very considerably less than what was predicted. That is the central fact that emerges from my temperature analyses. It is a fact that Dr Hodgart barely addresses.
Finally, one of the many points on which Dr Hodgart is simply wrong is his assertion that, in principle, an oscillation does not have a trend. It is, however, perfectly possible for an oscillation to occur either side of a rising trend – indeed, that is what seems to have happened to global temperatures in the 20th century, as Syun-Ichi Akasofu has pointed out.
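Monckton’s closing point – an oscillation riding either side of a rising trend – is easy to demonstrate numerically. The toy series below is purely illustrative (the 0.01-per-step trend and 60-step sine period are invented numbers, not fitted to any temperature dataset): sub-window trends flip sign while the full-record trend stays positive.

```python
import math

def ols_slope(y):
    """Ordinary least-squares slope of y against its index 0, 1, ..., n-1."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    num = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

# Deterministic toy series: linear trend + sinusoidal oscillation (no noise).
series = [0.01 * t + math.sin(2 * math.pi * t / 60) for t in range(240)]

full_trend = ols_slope(series)       # positive: recovers the underlying trend
down_leg = ols_slope(series[15:46])  # negative: window runs peak-to-trough of the cycle
up_leg = ols_slope(series[45:76])    # positive, and steeper than the full-record trend
```

Pick the downswing and the “trend” is negative; pick the upswing and it exceeds the true underlying rate – the window-choice dispute in miniature.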
I hope that this balancing comment will go some way to restore the impartiality claimed by Dr Hodgart in the title but lamentably absent from his posting.

Proud Skeptic
Reply to  Monckton of Brenchley
January 23, 2016 5:40 am

Thanks, Dude (Lord Dude?) I really enjoy reading your stuff!

spock2009
Reply to  Monckton of Brenchley
January 23, 2016 9:26 am

C.M. stated, ” I state quite clearly every month that the length of the Pause in my RSS graph is calculated as the longest period of months, ending in the most recent month for which data are available, during which no global warming has occurred.”
As I read through the Dr. Hodgart article, I wondered why such an obvious point has seemingly been overlooked. However, at the time I wanted to post something to that effect, but I’m now glad that I held off as you’ve (re)explained it much better than my feeble attempts could have ever accomplished.
Thank you again for the points and clarification. Keep up the excellent work.

jim
Reply to  Monckton of Brenchley
January 24, 2016 3:30 pm

Both Monckton’s and Hodgart’s methods are flawed. Monckton’s method, in which the pause is “calculated as the longest period of months, ending in the most recent month for which data are available, during which no global warming has occurred”, is the very definition of cherry picking. He’s picking the month for which the OLS slope is zero, because that is the result he wants to find. When calculating the slope he ignores all the data before each start date – which means the intercept for his fitted line is incorrect – it assumes temperatures magically jumped from the pre-pause levels to the level at which they “paused”, all within a single month. It makes no sense. I can use the exact same method to ‘show’ that there has been a surge in the rate of warming since Feb 2007 (using the same RSS data). That is the longest period, ending in the most recent month, during which the rate of warming has exceeded the highest rate observed prior to May 1997 (the current start of Monckton’s pause). So using Monckton’s method of ‘calculating’ the start date of climate periods, there has been a pause in warming since May 1997 but a surge of very rapid warming since Feb 2007 – i.e. the last half of “the pause” has been a “surge”. This is the sort of logical absurdity that arises from flawed methods that ignore prior data (the fact that you cherry-pick from “all available” data does not change the fact that you ignore earlier data when calculating trends for any specific period).
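jim’s description of the tail-length procedure – slide the start month back and keep the longest tail, ending at the latest month, whose OLS slope is not positive – can be sketched in a few lines of Python. This is a reconstruction from his wording only, not Monckton’s actual calculation, and the helper names and synthetic series are illustrative:

```python
def ols_slope(y):
    """Ordinary least-squares slope of y against its index 0, 1, ..., n-1."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    num = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def pause_length(series):
    """Length of the longest tail (ending at the latest month) whose OLS slope <= 0."""
    best = 0
    for start in range(len(series) - 1):
        if ols_slope(series[start:]) <= 0:
            best = max(best, len(series) - start)
    return best
```

Note that, exactly as jim says, every candidate fit discards all data before its start month; nothing in the procedure constrains the fitted line to join up with the earlier record.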
Hodgart’s piecewise regression at least uses all the data and fits a continuous line – avoiding the magical jumps implicit in Monckton’s approach – but despite mentioning the “model selection” issue, he ignores it completely when choosing his break point(s). A “process of trial and error” is also known as “fishing”, and is very poor statistical practice. With 400+ months to choose from, you are almost guaranteed to find a piecewise model that fits the data better than a simple linear model. This must be accounted for in deciding whether the 2-line model is better supported by the data than the simple linear model. When you do change-point models properly – using methods that account for the model selection uncertainty (i.e. methods that penalize the extra freedom associated with 1) additional parameters, and 2) many possible change-points) – you find no evidence of a pause in global warming – in any dataset.

Proud Skeptic
Reply to  Monckton of Brenchley
January 25, 2016 6:21 am

Jim (and pretty much everyone else) – Wasn’t it Ernest Rutherford who said something like, “If your experiment relies heavily on statistics then you need a better experiment.”?
If you sort through all of the BS and get right down to it, the climate change debate revolves heavily around how you analyze the numbers. Everyone seems to think everyone else’s statistical methods are wrong. The argument seems to go in circles, and ultimately to us outside observers it just looks like people are still trying to figure out some pretty fundamental stuff here.
You can argue ad infinitum about how you crunch the numbers but if the numbers are questionable to begin with then it seems to me that you are just compounding garbage. I’ll stick with my position that you shouldn’t even be talking about any of this stuff until someone establishes the following two things…
1. That we can accurately measure the current temperature of the Earth. Of this I am skeptical.
2. That we can accurately establish the same value for 75 or 80 years ago. On this point I am virtually certain we cannot.
IMHO, until both of these things are established beyond a doubt, the rest of this is masturbation.
To Lord Monckton’s credit, he is using the exact methodology of the opposing view to deflate their own argument. This is smart. But mostly I think it is a mistake to let all of the rest of this stuff get debated when you can cut the legs out from under the whole thing by disagreeing with the underlying premise…that we can measure these things…”Prove to me that you can measure the temperature of the Earth to within a tenth of a degree C. Also, prove to me that you can do the same with data from 1920.”

Reply to  Monckton of Brenchley
January 25, 2016 8:30 am

For Jim (Jan 24 2016 3.30 pm)
At last some informed criticism and a chance to have a reasoned argument! ‘Jim’ very clearly identifies what is so wrong with Monckton’s procedure. He explains all this much better than I could. But he also objects to my piecewise regression which he regards as “fishing” and “very poor statistical practice”. He sees “no evidence of a pause in global warming”.
I have never been a professional statistician but I have always been interested in random variables, estimation theory and the delicate art of what constitutes valid statistical inference. I try to be aware of the multiple traps which lie in wait for the unwary. I really do not think that I was ‘fishing’.
If he reads my text carefully he will see that first I checked what the result of a curve-fitting exercise would be (the two blue curves by a cubic loess [Mills] and polynomial regression). Visitors to WUWT do not want to see a lot of maths, but model selection by the principle of joint corroboration was my priority consideration in selecting these curves (no – I have not tried to publish this methodology). It then seemed to me, looking at these plotted curves, that “if we disregard the discernible oscillation, (there) is a depiction of a rising trend followed by a pause effectively starting around 2003 – and not 1997”. That is what I see – don’t you?
The two segmented linear regressions are now justified as piece-wise linear approximations to these curves – a perfectly respectable mathematical technique. Locating the one break point is most easily achieved by trial and error (also an entirely respectable technique and indeed the fundamental principle behind all science and engineering). One could of course try to fit more segments with more break points but I do not see any reason to do so. Occam’s Razor should apply just as much in statistics as in science. I am of course aware of the notion of degrees of freedom, and the need when estimating the mean square error (MSE – fitting error, goodness of fit) to lose another degree of freedom per break point after dividing into the sum of squares of the residuals. If I follow these rules and compare with the MSE computed for just one regression (respecting model 1) I find no improvement. So by that criterion he is quite right. However the MSE is not everything in my opinion. There is such a big difference in slope (by 0.12 deg C/decade) between Monckton’s linear fit on his subset of data from 1997 and the slope of the linear fit all the way from 1979 as to make the ‘null hypothesis’ of that model 1 highly unlikely. I proved this to my satisfaction both by simulation and direct analysis. I would be glad to let him see the details if he is interested. So there are two strands of evidence for a pause: one positive and the other negative.
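The construction Hodgart describes can be sketched generically. With the break c fixed, a continuous two-segment line is linear in its parameters (intercept, slope, and a hinge term max(0, x - c)), so each candidate break is an ordinary least-squares problem, and the break itself is found by the trial-and-error grid search he mentions. This is an illustrative implementation of the general technique, not his actual code:

```python
def solve(A, b):
    """Solve the small dense system A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][k] * x[k] for k in range(i + 1, n))) / M[i][i]
    return x

def segmented_fit(y, candidate_breaks):
    """Continuous two-segment fit  y ~ a + b*x + d*max(0, x - c),
    with the single break c chosen by grid search over candidate_breaks.
    Returns (best break, [a, b, d], residual sum of squares)."""
    n = len(y)
    best = None
    for c in candidate_breaks:
        # With c fixed the model is linear in (a, b, d): ordinary least
        # squares via the normal equations on the basis (1, x, hinge).
        basis = [(1.0, float(i), max(0.0, i - c)) for i in range(n)]
        A = [[sum(row[p] * row[q] for row in basis) for q in range(3)] for p in range(3)]
        rhs = [sum(basis[i][p] * y[i] for i in range(n)) for p in range(3)]
        a, b, d = solve(A, rhs)
        sse = sum((y[i] - (a + b * i + d * max(0.0, i - c))) ** 2 for i in range(n))
        if best is None or sse < best[2]:
            best = (c, [a, b, d], sse)
    return best
```

To make the comparison Hodgart describes against a single straight line, divide each model’s residual sum of squares by its degrees of freedom – n-2 for the one-line fit, n-4 for the segmented fit (three parameters plus one for the estimated break).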
[ REF Mills (2009) “Modelling current temperature trends” – except that I use the standard tri-cube weighting kernel ]
MSH

Reply to  Couldn't B. Cruz-ier (@CouldntBRighter)
January 25, 2016 8:11 am

A 10-year pause is roughly a 2-sigma probability event (5%); 15 years is 3 sigma (2%); 19+ years is 4 sigma (<1%). This is the probability, if the theory is correct, that the event would occur. So either the record of the last 20 years is a freak once-in-a-hundred incidence (a 1-in-10,000-years type event) or the theory is wrong.
Umm, I think the theory is proven wrong.
Climate scientists seem to be the only scientists who go with the idea that as long as there is one in 10,000 chance they are right then they are right (not just could be right). Normally scientists I have read have the opposite criteria which is unless there is 1 in 10,000 chance they are right they are right.
A real scientist would also, as the probability grew that their theory wasn’t going to hit the numbers, withhold publishing, and would admit they were looking “bad” in the sense that the theory was not confirmed, i.e. they would admit it wasn’t certain.
Of course climate science isn’t like physics or the hard sciences, I guess. They don’t / can’t hold themselves to such a high standard. The data is imprecise, the models aren’t worked out entirely, there is hardly any way to do experimentation, so this is as good as they can do. Right? Admitting that would mean that it isn’t settled and that they aren’t real scientists but more like sociologists who interview people and get wildly differing results depending on lots of conflating factors.
Any real science would have admitted that the theory was in trouble 10 years ago, when the data seemed to be moving away and numerous “worrisome” things were not conforming – humidity, clouds, temperature in the lower troposphere, the unpredicted accumulation of heat in the ocean, the inability to model the PDO/AMO and the clear MISS of the 60-year PDO/AMO cycle in the first place, the inability to nail down attribution for other factors – all of these bothersome things, and now on top of all this the variance between satellite and land temperatures which invalidates the CO2 hypothesis.
You see, by the theory of CO2 the heat should be growing in the lower troposphere over the land FASTER than at the land surface. We have, according to Hansen/Mann, temperature growing faster on the land than in the lower troposphere. The clear conclusion is that the excess heat on the land is NOT caused by CO2. It can’t be. This leaves them with the thorny explanation of where the heat on the surface they manufacture with their adjustments is actually coming from. It can’t be CO2 or humidity or clouds, as those are not confirming the theory either.
On the other hand we have the assurance from Hansen and Mann that the theory is solid, the temperatures this last year were hotter than EVER and that we are the cause of it all for sure. So, now I am reassured. I will wait for the explanations in the next life I guess.
Please check out my articles:
https://logiclogiclogic.wordpress.com/2016/01/21/48-inconvenient-truth-nytimes-lies-2015-wasnt-the-hottest-year-on-record/
and
https://logiclogiclogic.wordpress.com/2015/12/21/failures-of-global-warming-models-and-climate-scientists/

Reply to  logiclogiclogic
January 25, 2016 8:13 am

I meant for most scientists 1 in 10,000 they are wrong then they can claim to be correct.

Monckton of Brenchley
Reply to  Couldn't B. Cruz-ier (@CouldntBRighter)
January 26, 2016 5:50 am

Dr Hodgart, in his replies here as in the head posting, continues to fail to see the elephant in the room, which is that according to IPCC predictions there should be, at the very least, a continuing warming at about twice or thrice what the datasets show, but instead the rate of warming has been declining and is now at just about its least value since the satellites began watching in 1979.
The simplest way to illustrate this decline in the warming rate – a decline not predicted by the models – is to calculate each month how far back one can go without finding any global warming at all. No amount of desperate statistical prestidigitation on Dr Hodgart’s part can conceal that fact.
And he should not have repeatedly misrepresented what my monthly temperature reports actually say. His posting was not science but mere politics.

January 21, 2016 1:10 pm

The point of highlighting the ‘Pause’ to the AGW campers is to show, as an outcome, that man-made CO2 is (not) the cause and driver of a runaway temperature – one that we should throw billions of dollars at now. It’s really a question of opportunistic use of resources. E.g. use the money to clean up soot, replace wood or dung fires, and reduce poverty now instead of jetsetting around the globe restricting fossil fuel use, and the results will be better than what’s going on at present.

Reply to  macha
January 21, 2016 1:16 pm

Oops. Did not check for typos. Ps. Any statistics or ploting makes no difference to temperatures. Know what I mean?

Dave in Canmore
January 21, 2016 1:12 pm

“The red line gives him his “Pause” (he uses a capital letter); but the brown line says that over the same time interval temperatures continued to rise. So which ? The trend can’t be doing both.”
As has been pointed out here at WUWT, you could make the same claim about a 40 year time series of my height. Yet I clearly stopped growing taller 20 years ago! With this example, one can see the fallacy of claiming the trend of my height can’t be doing both (pausing and increasing.)
If Monckton (by way of observations) shows that a significant period contains no trend in the temperature while the CO2 trend rises, then this is in fact a very significant observation. I’m not sure why that isn’t a crucial exercise in theory testing regardless of where that period falls. I don’t understand the problem.

James Hein
Reply to  Dave in Canmore
January 21, 2016 6:48 pm

It may also be worth noting that the slope of the brown line has been decreasing for the entire length of the pause.

noaaprogrammer
Reply to  Dave in Canmore
January 22, 2016 8:07 pm

It’s also worth noting that Dave’s height will most likely decline if his spine experiences degenerative disc disease as he ages. (Sorry Dave!)

Matt
January 21, 2016 1:13 pm

I am sorry, but being a ‘visiting reader’ of anything has never been a credential for, well, anything. You must know that around here…

Mark
Reply to  Matt
January 21, 2016 1:15 pm

What?

Reply to  Matt
January 21, 2016 1:21 pm

Huh?

Dave N
Reply to  Matt
January 21, 2016 1:33 pm

I’m always amused by comments that seek to question credentials over addressing the actual content. It strikes me as either being too lazy, or incapable of finding fault with it, or both.
That is not to say the content can’t be faulted: I’m still ruminating over the “can’t be both” remark, since it’s a comparison of trends over different time periods (in which case, of course they can be different, i.e. “doing both”)

getitright
Reply to  Dave N
January 21, 2016 2:05 pm

“I’m always amused by comments that seek to question credentials”
Likewise I am always amused by articles where the authors seek to display credentials in an effort to buttress their content. (argumentum ad verecundiam) anyone….anyone

john harmsworth
Reply to  Dave N
January 22, 2016 3:38 pm

As an interested amateur-thanks!

Reply to  Matt
January 21, 2016 1:46 pm

@Matt – what is your point? Please re-write your sentence(s).
I prefer this graph to view the “drastic” global temperature change since 1880:
The source is the GISS data.
https://suyts.wordpress.com/2013/02/22/how-the-earths-temperature-looks-on-a-mercury-thermometer/

Smoking Frog
Reply to  Matt
January 21, 2016 7:48 pm

“Reader” is UK-ese for “professor.” The fact that “visiting reader” would be bizarre if it meant reader of this website should have tipped you off that you should look it up.

Reply to  Smoking Frog
January 21, 2016 8:58 pm

In the UK, if “reader” meant “janitor” or “village idiot” or “prince”, the post still doesn’t make any sense…

trafamadore
January 21, 2016 1:13 pm

Isn’t this analysis similar to break point analysis, like tamino does?

Reply to  trafamadore
January 21, 2016 2:13 pm

piecewise regression, perhaps … with such auto-correlated time series sets and no independent observations possible, I’m not clear what this means for ‘trends’ aside from intriguing ways of exploring

January 21, 2016 1:24 pm

“Any statistics or plotting makes no difference to temperatures.” Unless the temperatures have been adjusted prior to plotting to show the desired outcome and trend.

DayHay
January 21, 2016 1:25 pm

And if I look at the temperature trend for the entire HOLOCENE I see a definite trend. Trends are all over the place. So pick your timeframe and then ask: IS IT SIGNIFICANT? If you cannot see a trend change in global temp graphs starting around 1998-2000 then you are blind, or a warmist, or a liar. The CO2 trend, however, looks much more consistent, with no concurrent decline like the temp trend. So, is it significant or not? If not, then neither are your “other” 18-year trends, wherever they may be. Be anything you like, but damn, be consistent please.

Phil's Dad
January 21, 2016 1:28 pm

Lovely

Tom Halla
January 21, 2016 1:34 pm

I am not at all good at math, but the various paleoclimate and historical temperature climate reports (other than Michael Mann’s) all looked like something producing an oscillation with a great deal of noise. Presumably, any curve can be described mathematically, and this looks, in my ignorance, to be a decent effort.
Equally notably, the curve does not fit my understanding of the IPCC models expected results.

Phil's Dad
January 21, 2016 1:36 pm

The comments above my last seem to be angrily agreeing with the author (who says it’s probably not a straight line)

Harold
January 21, 2016 1:37 pm

“The problem is that he has chosen to disregard all the prior months of available measurements going back to January 1977…”
Data is not information, information is not knowledge, knowledge is not understanding, understanding is not wisdom – Clifford Stoll

Andrew
January 21, 2016 1:41 pm

You produce a cubic fit using a 168-month half window. So shouldn’t your resulting curve begin at 168 months after the first data point and end 168 months before the last data point? Just wondering.
[168 months at either end – not both. Could be 84 months on both ends – but that’s a different smoothing process. .mod]

Reply to  Andrew
January 26, 2016 2:56 am

Andrew, you have hit a considerable nail on the head: the problem of computing a smoothed curve all the way from start to stop of a finite time series when using data both before and after each computed point. You are right. This cannot be done when getting close to either end if you are trying to use the whole of an averaging window. This is sometimes known as the end-point problem (you get exactly the same problem with a simple running average).
There are 440(+1) data months, so that in principle using the full-width smoothing window one can only generate a smoothed curve with a very limited duration of 440 – 2*168 = 104 months. To extend the curve ‘my’ solution is to allow the window to ‘run off the ends’. The shape and width of the window does not change but it has to work from less data. In the limit the very first smoothed month is using only the following 168 data months while the very last smoothed month is using only the preceding 168 data months. You may complain that this gives less effective smoothing and greater random error in the estimating curve towards either end, and you would be right, but it can’t be helped. Exactly the same problem exists when fitting high degree polynomials, and (I believe) with any process designed to smooth a finite length time series.
Admittedly there are other ways but I believe this to be the ‘natural’ solution. I follow the methodology implied by Terry Mills in one of his clever papers [ ] and what he calls a ‘non-parametric local trend fit’. One is running a weighted least square fit within whatever data is available (I use the standard tri-cube weighting rather than his choice of ‘Gaussian kernel’ but I don’t think it makes much difference).
[ ] Mills, TC (2009) Modelling current temperature trends, Journal of Data Science, 7, pp.89-97.
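The run-off-the-ends smoother described here can be sketched as a local weighted linear fit with the tri-cube kernel. This is a generic local-regression illustration in the spirit of the description above; the half-window scaling inside the kernel is my assumption, not Hodgart’s exact setting:

```python
def tricube(u):
    """Tri-cube kernel: (1 - |u|^3)^3 for |u| < 1, else 0."""
    au = abs(u)
    return (1 - au ** 3) ** 3 if au < 1 else 0.0

def local_linear_smooth(y, half_window):
    """Weighted local linear fit at each index t.  Near either end the window
    simply runs off the edge and the fit uses whatever data remain, which is
    the 'natural' end-point fix described above."""
    n = len(y)
    out = []
    for t in range(n):
        lo, hi = max(0, t - half_window), min(n - 1, t + half_window)
        pts = [(i, y[i], tricube((i - t) / (half_window + 1))) for i in range(lo, hi + 1)]
        sw = sum(w for _, _, w in pts)
        xm = sum(w * i for i, _, w in pts) / sw
        ym = sum(w * yi for _, yi, w in pts) / sw
        num = sum(w * (i - xm) * (yi - ym) for i, yi, w in pts)
        den = sum(w * (i - xm) ** 2 for i, _, w in pts)
        slope = num / den if den > 0 else 0.0
        out.append(ym + slope * (t - xm))
    return out
```

Because each point is fitted by a local straight line, the smoother reproduces exactly linear data even where the window is truncated at the ends; the price of the truncation is extra variance there, as the comment concedes.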

robinedwards36
Reply to  M.S.Hodgart
January 26, 2016 2:45 pm

I have to ask what is the purpose of fitting a high order polynomial to /any/ data that arise from a time series? It can surely only be with the intention of projecting the fitted curve beyond the range of the actual data. Fitting such a model implies a degree of belief in its existence, otherwise why use it?
The ubiquitous and naive linear model beloved by climatologists is certainly used in the hope of guessing something fairly reasonable regarding a potential extrapolation. Despite its inappropriateness it is usually the safest one to extrapolate. Its confidence intervals are simply two hyperbolae on either side of the least squares fit.
If you go to the trouble of computing the equivalent intervals for a second order fit – the simplest polynomial – you will find that any extrapolation produces rapidly diverging curves if you compute them outside the actual data range. With a cubic model these divergences become spectacular and for quartic and quintic models are ridiculous.
It always irritates me that people who display least squares fits to time series data seem never to bother with this vital piece of inferential statistics. The software I use (my own) carries out these operations on request. It was always very sobering for the scientists who presented their data to me, and so I developed the habit of demonstrating these properties of polynomials to my clients before they began their experimental work.
Please be careful with polynomials and /never/ extrapolate them.
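robinedwards36’s warning can be quantified with the standard leverage term x0ᵀ(XᵀX)⁻¹x0: the confidence-interval half-width at a point x0 is proportional to its square root, so comparing it inside and far outside the data range shows how fast the bands diverge as the polynomial degree rises. A generic sketch (the 20-point centred design and the evaluation point x0 = 30 are arbitrary choices, not taken from any dataset in the thread):

```python
def solve(A, b):
    """Solve the small dense system A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][k] * x[k] for k in range(i + 1, n))) / M[i][i]
    return x

def leverage(xs, degree, x0):
    """x0^T (X^T X)^{-1} x0 for a polynomial design of the given degree.
    The variance of the fitted value at x0 is sigma^2 times this, so the
    confidence-interval half-width there grows with its square root."""
    n, k = len(xs), degree + 1
    V = [[x ** p for p in range(k)] for x in xs]
    XtX = [[sum(V[i][p] * V[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    v = [x0 ** p for p in range(k)]
    w = solve(XtX, v)
    return sum(vp * wp for vp, wp in zip(v, w))
```

For this design the quadratic band at x0 = 30 is already several times wider than the linear one; cubic and quintic fits are worse still, which is the spectacular divergence described above.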

Reply to  Andrew
January 29, 2016 2:47 am

(To RobinEdwards36). Polynomials etc… I disagree with your first proposition following your leading rhetorical question. You assert that the only point of fitting polynomials is to enable projection “..beyond the range of the actual data”. I do not think so. My purpose was to estimate the historical unknown smooth curve posited to exist IN the RSS data assuming signal + noise model (2). Nowhere did I imply or state that I had any intention of extrapolating the 5th degree fitting polynomial outside the range. I entirely agree that one gets ridiculous results using it outside. So I don’t.
But I do agree that the ‘ubiquitous and naive linear model… is usually the safest one to extrapolate”. That is exactly what I was suggesting (reluctantly) might be done with split regression on the HadCRUT4 data in fig 3 (second red line from 1941 onwards) according to the famous principle of insufficient reason (do it if you do not know any better). I am in fact highly dubious about any projections into the future. My final words were “It is of course unwise to make a projection into the future but if we trust neither the elaborate computer climate models favoured by the Met Office nor the projection of Mills- type all-stochastic models this is all we have got….”
I also disagree (totally) with your second proposition. I did not employ a 5th order polynomial in my fig. 2 on the RSS data because of any belief “in its existence”. It would be a travesty of the immensely complex physics to suppose that I did. Polynomials have desirable mathematical properties in approximating an unknown presumed smooth curve (Weierstrass theorem etc). Polynomial regression is one of many deterministic ‘parametric’ methods.
I used TWO methods if you noticed – the other being the popular non-parametric method known as loess. This assumes very little as to ‘what exists’ in the data. It is striking that with care the two smoothing techniques can be made to agree very closely – see also fig 2.
It seems to me that I have been as careful with using a polynomial regression as you would like. So why the lecture?
MSH

Marcus
January 21, 2016 1:48 pm

The reason to point out the Pause is to show that CO2 has continued to rise but the global temperature has not, so CO2 does not cause temperature rise !!

AB
Reply to  Marcus
January 21, 2016 5:10 pm

+100

Roger ayotte
January 21, 2016 1:48 pm

At least the author has discussed the issue of increasing CO2 during a time of apparent temperature pause. The other issue he brings up is the two demonstrated periods of temperature decline while CO2 is increasing.
These are the issues that most skeptics argue. The length of time of zero slope in temperature may or may not be significant; what IS significant is that it is NOT following the trend in CO2.
Roger

Robert of Texas
January 21, 2016 1:51 pm

Why is picking all the available satellite data better for determining a pause than picking the data that shows a pause? Temperature has definitely been around longer than our satellites, so picking “all” the satellite data still seems arbitrary to me. Pick the data that makes the point. If the point is that the trend appears to be near zero for a length of time, then pick that data… Jeez. If the point is not a valid one, picking data is a pointless exercise. The point WAS NOT to show the trend since satellites became available.
Let’s go back to the original claims: CO2 drives most of the observed warming, CO2 is being released in ever greater amounts, therefore the RATE of warming has to increase. Since CO2 is the main driver and accounts for most warming, variability can slow it but not stop it (or the claim that CO2 accounts for most observed warming is FALSE). These are the claims of AGW.
If the temperature warming rate slows to almost zero over a long enough period of time, SOMETHING is wrong in the claims – PERIOD. Either CO2 does not drive most of the warming, or CO2 has not been released in greater amounts. Pick one. I choose to pick CO2 does not account for most of the observed warming since there is no evidence that burning fuels that produce CO2 has declined.
Lord Monckton is 100% correct for the limited point he is trying to make.
NOTE: I did not now or ever say CO2 does not cause some amount of warming. I believe it does. But it is not overwhelming the natural variability, does not account for most warming, and in general does more good than harm. Statistics THAT! LOL

Marcus
Reply to  Robert of Texas
January 21, 2016 2:26 pm

You get two gold stars !! ….. Well said !!

Richard M
Reply to  Robert of Texas
January 21, 2016 6:25 pm

Precisely, it is all about whether the temperatures are behaving as AGW science demands.
According to Santer et al 2011, we should be able to determine “human effects” within periods of 17 years or less. This gives us the ability to experimentally test AGW science. Isn’t that what scientists are supposed to do? That is the reason to look at subsets of the satellite data. As it turns out the 17-year criterion was first met in the summer of 2013. Fact is, no more work is necessary to state that “human effects” must not be nearly as strong as AGW science requires. But, the fact we continue to meet this criterion month after month after month just strengthens the case. That is what the pause is telling us. AGW (as defined by current climate models) has been scientifically falsified.
No one is trying to extrapolate the pause into the future. Lord Monckton makes this very clear.

son of mulder
January 21, 2016 1:54 pm

What do the varying trends discussed above tell us about measuring methods/processes and changes in weather station inclusion/exclusion and adjustments for whatever reasons?

Joe Born
January 21, 2016 2:00 pm

I care little for Lord Monckton’s clarity of thought or grasp of logic in general and even less for his character, but on this issue he was quite clear about what he was doing. He merely offered the length of time over which one can measure a non-positive least-mean-squares trend, for whatever value readers may see in it, as an indication of how important natural variation is in comparison with CO2 forcing.
It is self-evident that whether one sees a pause depends on the period over which one is measuring. I did not read Lord Monckton to suggest otherwise. In contrast, it is not clear what Dr. Hodgart is doing. It looks as though he thinks that at any given time there is only one true measure of trend or only one true criterion for whether there’s a pause, and that if one were only to hit upon the right technique he would be able to answer the Ultimate Question. I recognize that plenty of people talk that way, but Dr. Hodgart gives the impression that he really believes it and is looking for a way to compute the unique true quantity:

These very similar curves are perhaps the most likely deterministic estimates of the trend but they cannot be the exact truth.

Perhaps I am misunderstanding him. If so, some clarification of what exactly he thinks he’s looking for would be helpful.

Reply to  Joe Born
January 21, 2016 6:31 pm

Well said. +10

ristvan
January 21, 2016 2:04 pm

No matter the method, statistics has to deal with the fact that temperature series are autocorrelated. This, at a minimum, reduces effective degrees of freedom and increases uncertainty in whatever result. The best treatment IMO is McKitrick’s 2014? paper analyzing three of the main 6 series. He found no statistically significant trends for 16, 19, and 24 years respectively. That is sufficient to falsify climate models.
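The autocorrelation point can be made concrete with a small sketch. This is a generic lag-1 (AR(1)) effective-sample-size adjustment, a common rough fix, not the specific HAC-robust method of McKitrick’s paper; the demo series at the bottom uses invented parameters:

```python
import numpy as np

def ar1_adjusted_trend(y):
    """OLS trend of a series, with the slope's standard error inflated
    for lag-1 autocorrelation of the residuals. Effective sample size
    n_eff = n * (1 - r1) / (1 + r1)."""
    n = len(y)
    t = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    # lag-1 autocorrelation of the residuals
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    # naive OLS standard error of the slope
    se_naive = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((t - t.mean())**2))
    # inflate for persistence
    inflation = np.sqrt((1 + r1) / (1 - r1)) if r1 < 1 else np.inf
    n_eff = n * (1 - r1) / (1 + r1)
    return slope, se_naive * inflation, n_eff

# Demo on a synthetic persistent series (assumed AR(1), phi = 0.7)
rng = np.random.default_rng(2)
y = np.zeros(240)
for i in range(1, 240):
    y[i] = 0.7 * y[i - 1] + rng.normal()
slope, se_adj, n_eff = ar1_adjusted_trend(y)
print(f"slope={slope:.4f}  adjusted se={se_adj:.4f}  n_eff={n_eff:.0f}")
```

With r1 near zero the adjustment does nothing; with the strong month-to-month persistence typical of temperature series it can double or triple the error bars, which is how a trend that looks significant under naive OLS can lose significance.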

Reply to  ristvan
January 21, 2016 2:16 pm

“That is sufficient to falsify climate models.”
A lack of statistical significance can’t falsify anything. It just means that your test did not have sufficient power to resolve the matter. It failed. The observed trend is still there.
And of course, as with McKitrick, you can if you wish design a test that almost guarantees a fail. That says something about the test, not the data.

Richard M
Reply to  Nick Stokes
January 21, 2016 6:29 pm

Nonsense, when the science **requires** significant warming trends within a specified time frame and time frame is reached, you have falsification.

Reply to  Nick Stokes
January 21, 2016 9:09 pm

“when the science **requires** significant warming trends within a specified time frame and time frame is reached”
So what are those time frames? Who specified? Where do you get this stuff?
Neither you nor Rud understand how statistical tests work. Say you have data and think your hypothesis Y is supported. But first you should check if there is an “obvious” alternative explainer, proposition N (your Null Hypothesis, often a variant of ‘happened by chance’). Here Y is positive trend, and N is zero trend with random variation.
A stat test tests N, not Y. If N can be rejected as making the result too improbable, then Y is strengthened. But if N is not rejected, Y is not weakened; it wasn’t tested. Either N or Y remains possible. The test yields nothing.
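The asymmetry being described can be seen in a toy simulation (all numbers here are invented for illustration): a genuinely positive trend Y buried in large scatter usually fails a significance test against N, and that failure says nothing against Y:

```python
import numpy as np

def rejects_zero_trend(y, crit=2.0):
    """Rough 95%-level t-test of the OLS slope against N (zero trend)."""
    n = len(y)
    t = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((t - t.mean())**2))
    return abs(slope / se) > crit

# A real but small trend under heavy noise: N survives most of the time,
# yet Y (the positive trend) is present in every sample -- the tests
# simply lack power, and their failure does not weaken Y.
rng = np.random.default_rng(0)
n, trend = 60, 0.002
hits = sum(
    rejects_zero_trend(trend * np.arange(n) + rng.normal(0, 1.0, n))
    for _ in range(1000)
)
print(f"zero trend rejected in only {hits} of 1000 samples")
```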

richardscourtney
Reply to  Nick Stokes
January 22, 2016 3:26 am

Nick Stokes:
You ask Richard M

“when the science **requires** significant warming trends within a specified time frame and time frame is reached”
So what are those time frames? Who specified? Where do you get this stuff?

You don’t know? I am astonished. I will enlighten you.
In 2008 the US Government’s National Oceanic and Atmospheric Administration (NOAA) reported

Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.

Ref. NOAA, ‘The State of the Climate’, 2008
http://www1.ncdc.noaa.gov/pub/data/cmb/bams-sotc/climate-assessment-2008-lo-rez.pdf
Please don’t try the usual warmunist evasions of
(a) pretending the finding only concerns times of absence of ENSO
or
(b) pretending the 95% means pauses of “15 yr or more” happen 1 in 20 times.
ENSO is always present so accepting claim (a) would be an acceptance that all the predictions of warming are wrong.
Anyway, the present lack of warming exists whether it is assessed as having started in 1997 before the Great el Niño of 1998 or started in 2000 after the 1998 el Niño ended.
The 95% refers to the confidence in the finding that “The simulations rule out zero trends for intervals of 15 yr or more” which is why the finding suggests “an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”
However, in 2012 when warming had ceased for seemingly 15 years, Phil Jones of the Climate Research Unit (CRU) in the UK insisted that

15 or 16 years is not a significant period: pauses of such length had always been expected

This was a flagrant falsehood because in 2009 (when the ‘pause’ was already becoming apparent and being discussed by scientists) he had written an email (leaked as part of ‘Climategate’) in which he said of model projections,

Bottom line: the ‘no upward trend’ has to continue for a total of 15 years before we get worried.

Clearly, as recently as 2008 both NOAA in the US and the CRU in the UK agreed that “observed absence of warming” for 15 or more years would “create a discrepancy with the expected present-day warming rate” indicated by climate models. And this was a decade into the ‘pause’ which has now existed for probably more than 18 years.
Richard

Bob Boder
Reply to  Nick Stokes
January 22, 2016 3:39 am

Nick
You are correct Y has never been tested.

Reply to  Nick Stokes
January 22, 2016 8:52 am

Richard Courtney,
“I will enlighten you.”
No enlightenment there. Your response is totally irrelevant. Richard M claims that ‘science **requires** significant warming trends’. You respond with a statement about observed warming trends.
But there is no pretence about the NOAA statement being about ENSO-corrected data. It’s a simple fact, and your quote, like most others, is designed to erase this essential requirement. The box containing the quote you provide begins
“Observations indicate that global temperature rise has slowed in the last decade (Fig. 2.8a). The least squares trend for January 1999 to December 2008 calculated from the HadCRUT3 dataset (Brohan et al. 2006) is +0.07±0.07°C decade⁻¹ —much less than the 0.18°C decade⁻¹ recorded between 1979 and 2005 and the 0.2°C decade⁻¹ expected in the next decade (IPCC; Solomon et al. 2007).”
They then say
“The trend after removing ENSO (the “ENSO-adjusted” trend) is 0.00°±0.05°C decade⁻¹, implying much greater disagreement with anticipated global temperature rise.”
In fact their concern then was that the ENSO-adjusted trend was zero, implying much greater disagreement. The reason why the adjustment is important is that ENSO spikes can indeed induce a long following period of negative trend, as Lord M has been repetitively emphasising. But ENSO spikes cannot be easily fitted in to a random noise framework, and in any case do not tell anything about climate trend. As WUWT readers are about to find out, the resulting “pause” just vanishes when the next big ENSO spike comes along. Removing the ENSO effect makes occurrence of a long period of zero trend during warming a much less likely and more significant observation. That is why they set out to discover the duration of that sequence that might be significant, and they suggested fifteen years (for surface data). But only after removing ENSO.

Richard M
Reply to  Nick Stokes
January 22, 2016 9:31 am

Nick, Santer et al 2011 … “Our results show that temperature records of at least 17 years in length are required for identifying human effects on global‐mean tropospheric temperature.”
They obtained that value by testing climate models and 95% of the time they found statistically significant warming in 17 years or less. This is the 95% criteria normally used for scientific falsification.

Reply to  Nick Stokes
January 22, 2016 10:02 am

“This is the 95% criteria normally used for scientific falsification.”
No, you’re turning the logic upside down. He’s saying that you need at least 17 years for attribution. He’s not saying that just 17 years of zero trend will suffice for falsification. He isn’t talking about falsification at all.

Monckton of Brenchley
Reply to  Nick Stokes
January 22, 2016 9:51 pm

Mr Stokes’ standard response to the NOAA quote about 15 years of Pause indicating a discrepancy between prediction and reality is to bleat that they are talking about ENSO-corrected data – in other words, data tortured by yet another subjective, Humpty-Dumptyish tampering.
The truth is that over periods of 15 years or more the only correction needed to take account of the synoptic – i.e. self-cancelling – southern oscillation is to ensure that the period of study includes only complete El Niño+La Niña events.
The Pause starts before the event of 1997-2000 and at present remains a Pause notwithstanding the current El Niño spike with no countervailing La Niña yet. The discrepancy, therefore, is real. The models’ predictions have been falsified according to NOAA’s criterion – on the satellite datasets, at any rate. Best to admit that rather than trying to pretend otherwise.

richardscourtney
Reply to  Nick Stokes
January 23, 2016 2:20 am

Nick S.:
I give you full marks for comedy in your reply to me!
You write

No enlightenment there. Your response is totally irrelevant. Richard M claims that ‘science **requires** significant warming trends’. You respond with a statement about observed warming trends.
But there is no pretence about the NOAA statement being about ENSO-corrected data. It’s a simple fact, and your quote, like most others, is designed to erase this essential requirement. The box containing the quote you provide begins …

I did not mention “observed warming trends”: I mentioned the observed LACK of warming trend for about 18 years.
The ‘science **requires** significant warming trends’ but there has been no warming trend so THE ‘SCIENCE’ IS WRONG.
And my quote was NOT “designed to erase this essential requirement”. On the contrary, I raised the issue of ENSO when I quoted the NOAA assessment then wrote

Ref. NOAA, ‘The State of the Climate’, 2008
http://www1.ncdc.noaa.gov/pub/data/cmb/bams-sotc/climate-assessment-2008-lo-rez.pdf
Please don’t try the usual warmunist evasions of
(a) pretending the finding only concerns times of absence of ENSO
or
(b) pretending the 95% means pauses of “15 yr or more” happen 1 in 20 times.
ENSO is always present so accepting claim (a) would be an acceptance that all the predictions of warming are wrong.
Anyway, the present lack of warming exists whether it is assessed as having started in 1997 before the Great el Niño of 1998 or started in 2000 after the 1998 el Niño ended.
The 95% refers to the confidence in the finding that “The simulations rule out zero trends for intervals of 15 yr or more” which is why the finding suggests “an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”

You have not addressed those points in your reply and, therefore, your reply claims that “all the predictions of warming are wrong”!
Richard

Reply to  Nick Stokes
January 23, 2016 11:35 am

Lord M,
“Mr Stokes’ standard response to the NOAA quote about 15 years of Pause indicating a discrepancy between prediction and reality is to bleat that they are talking about ENSO-corrected data – in other words, data tortured by yet another subjective, Humpty-Dumptyish tampering.”
This is a pattern at WUWT. Someone says, say, “NOAA says 15 years is enough. Told you so”. I point out that NOAA actually said something different. Response – why would you quote “data tortured by yet another subjective, Humpty-Dumptyish tampering”? The NOAA is the authority that you quoted for the fifteen year claim.
“Bleat?”. I’m just wearily pointing out that conditionals matter. “Passengers with parachutes may safely exit the plane” is not the same as “Passengers may safely exit the plane”. Here in that box from which the 15 year quote is cherry-picked it says:
“El Niño–Southern Oscillation is a strong driver of interannual global mean temperature variations. ENSO and non-ENSO contributions can be separated by the method of Thompson et al. (2008)(Fig. 2.8a).”
They are talking about surface data, but ENSO is an especially strong driver in the troposphere. 1998 is still the warmest year in RSS, and trend calculations for following periods will be negative for quite some time. The NOAA is taking care to remove this effect, to see what trend remains. That is where their 15 years comes from.

richardscourtney
Reply to  Nick Stokes
January 23, 2016 11:33 pm

Nick S:
I am offended by your reply to Viscount Monckton.
You say

This is a pattern at WUWT. Someone says, say, “NOAA says 15 years is enough. Told you so”. I point out that NOAA actually said something different. Response – why would you quote “data tortured by yet another subjective, Humpty-Dumptyish tampering”? The NOAA is the authority that you quoted for the fifteen year claim.

NO! How dare you misrepresent what I wrote in such a manner!
I quoted the NOAA statement to you here because you claimed not to know of it. I referenced it and I linked to it.
Importantly, very importantly, as part of my explanation of what you had claimed to not know, I wrote

Please don’t try the usual warmunist evasions of
(a) pretending the finding only concerns times of absence of ENSO

and

ENSO is always present so accepting claim (a) would be an acceptance that all the predictions of warming are wrong.
Anyway, the present lack of warming exists whether it is assessed as having started in 1997 before the Great el Niño of 1998 or started in 2000 after the 1998 el Niño ended.

You replied by trying to pretend that the ENSO issue negates what I had reported but your reply made no mention of my explanations of why ENSO is an evasion and claimed

But there is no pretence about the NOAA statement being about ENSO-corrected data. It’s a simple fact, and your quote, like most others, is designed to erase this essential requirement. The box containing the quote you provide begins…

My quote together with the explanations I provided were “designed” to inform you of the truth.
Viscount Mockton responded to your nonsense by informing you of why your inferences that ENSO provided the ‘Pause’ are wrong. His explanation is an expansion of – and addition to – my having told you “the present lack of warming exists whether it is assessed as having started in 1997 before the Great el Niño of 1998 or started in 2000 after the 1998 el Niño ended”.
And I responded by pointing out that you had ignored my having said “ENSO is always present so (pretending the finding only concerns times of absence of ENSO) would be an acceptance that all the predictions of warming are wrong”. And your ignoring that point is tacit acceptance of it, so you have accepted that all the predictions of warming are wrong!
And – in attempt to pretend your behaviour is not reprehensible – you have only answered Viscount Monckton and not me when I was the one who tried to help you overcome the ignorance you had claimed.
Richard

richardscourtney
Reply to  Nick Stokes
January 24, 2016 12:04 am

Nick S.:
This post is a deliberate addendum because it is intended to avoid its point being confused with discussion of your obfuscations about ENSO which you provide to evade the significance of ‘the NOAA 15-year limit’ having been broken by the ~18 year ‘Pause’.
I also told you

in 2009 (when the ‘pause’ was already becoming apparent and being discussed by scientists) he had written an email (leaked as part of ‘Climategate’) in which he said of model projections,

Bottom line: the ‘no upward trend’ has to continue for a total of 15 years before we get worried.

‘They’ must now be very “worried” that the model predictions have been shown to be wrong by providing the ‘Pause’ for more than 18 years.
And I infer that your response to this information implies you are among the “worried” ‘they’.
Richard

Reply to  Nick Stokes
January 24, 2016 12:18 am

Richard,
“How dare you misrepresent what I wrote in such a manner!”
I was entirely responding to what Lord M wrote, as I said. I thought I had replied adequately to you earlier. I did not refer to what you wrote. However, to address:
““ENSO is always present so (pretending the finding only concerns times of absence of ENSO) would be an acceptance that all the predictions of warming are wrong””
It’s just nonsense. NOAA are addressing the probability of getting a 15 year zero trend, with a general warming perturbed by various sorts of noise. The probability rises with the amount of noise. NOAA are concerned with a specific perturbation, ENSO, which because of its large and fairly frequent peaks, greatly increases the probability of a “pause”. They want a stabler picture so they eliminate the ENSO perturbation. That greatly decreases the probability, or more aptly here, decreases the length at which a pause becomes significant. You want to use that ENSO-decreased length criterion, but keep the noise of ENSO in testing.
And OK, going back to
“I did not mention “observed warming trends”:”
I didn’t respond to that because the falsity is obvious. The core of your response was the quote from NOAA, and the core of that was:
“The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”
Why that makes your original response irrelevant is that both Richard M and ristvan were talking about observed nonzero trends of a duration that lacked statistical significance (relative to 0). And my response to them explained why that was just a failed test from which nothing could be deduced. The NOAA quote concerned the likelihood of an observed zero trend.

richardscourtney
Reply to  Nick Stokes
January 24, 2016 12:52 am

Nick S.:
In response to my repeatedly telling you

ENSO is always present so (pretending the finding only concerns times of absence of ENSO) would be an acceptance that all the predictions of warming are wrong

you now say to me

It’s just nonsense. NOAA are addressing the probability of getting a 15 year zero trend, with a general warming perturbed by various sorts of noise. The probability rises with the amount of noise. NOAA are concerned with a specific perturbation, ENSO, which because of its large and fairly frequent peaks, greatly increases the probability of a “pause”. They want a stabler picture so they eliminate the ENSO perturbation. That greatly decreases the probability, or more aptly here, decreases the length at which a pause becomes significant. You want to use that ENSO-decreased length criterion, but keep the noise of ENSO in testing.

Rubbish!
Firstly, the finding was that

The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more,

That clearly was a finding (n.b. NOT a tested parameter) because the sentence continues saying

suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.

This “suggestion” could not exist if – as you assert – “NOAA are addressing the probability of getting a 15 year zero trend”.
In other words, your assertion is plain wrong.
Secondly, your claim of what they did is merely an expansion of your erroneous assertion that “NOAA are addressing the probability of getting a 15 year zero trend”. They did NOT – as you assert – investigate ENSO and, therefore, NOAA were NOT “concerned with a specific perturbation, ENSO, … They want a stabler picture so they eliminate the ENSO perturbation.” They observed that “The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more” but they report that ENSO could alter this finding because THE MODELS CANNOT EMULATE ENSO.
What cannot be modeled cannot be in a model so cannot be eliminated from a model for any specific test.
Thirdly, if – as you assert – the “noise” of ENSO means the “general warming” becomes zero for more than 15 years then – as I said – ENSO is always present so (pretending the finding only concerns times of absence of ENSO) would be an acceptance that all the predictions of warming are wrong.
Finally, as both I and Lord Monckton have explained to you, the recent 18-year-Pause is observed to NOT be an effect of ENSO (which you call “noise”).
Richard

Reply to  Nick Stokes
January 24, 2016 2:08 am

Richard, some of this is just mystifying. You quote
“The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more,”
and say “That clearly was a finding”.
Well, yes. So? And you seem to think that that contradicts “NOAA are addressing the probability of getting a 15 year zero trend”
But they set out exactly how they got that probability:
“The 10 model simulations (a total of 700 years of simulation) possess 17 nonoverlapping decades with trends in ENSO-adjusted global mean temperature within the uncertainty range of the observed 1999–2008 trend (−0.05° to 0.05°C decade –1). “
It’s a frequency count from a Monte Carlo (ENSO-adjusted). 17/700=2.5%. You say
“they report that ENSO could alter this finding because THE MODELS CANNOT EMULATE ENSO”
I can’t find that in the text. You say they can’t – they say they did. And GCM’s could, even then. Just a few days ago, I linked the video (made in 2008).
“would be an acceptance that all the predictions of warming are wrong”
Sorry, I just don’t understand that at all.
“the recent 18-year-Pause is observed to NOT be an effect of ENSO”
Well, you can’t observe that – you have to do some analysis, which you rarely do. It is a simple matter of arithmetic that a large peak at the starting point of the trend period will tip the trend down.
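The arithmetic in question is easy to check with invented numbers: a flat anomaly series with a single warm year at the very start acquires a negative least-squares trend purely from the spike’s leverage at the left end:

```python
import numpy as np

# 15 years of monthly anomalies, all flat except one warm year (0.6 degC)
# at the very beginning. The spike sits at the far left of the window,
# so it tips the whole least-squares trend downward.
y = np.zeros(180)
y[0:12] = 0.6
t = np.arange(len(y))
slope = np.polyfit(t, y, 1)[0]          # degC per month
print(f"trend = {slope * 120:+.3f} degC/decade")  # comes out negative
```

Everything after the first year is perfectly level, yet the fitted trend over the full period is about −0.15 °C/decade.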

Reply to  Nick Stokes
January 24, 2016 4:34 am

“17/700=2.5%.”
Actually, there aren’t 700 possible non-overlapping decades, so the chance of a decade “pause” will be a lot higher. But they would have done a similar count for 15 year pauses.
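The kind of frequency count being discussed can be sketched generically. All parameters below are invented for illustration and are not the NOAA simulation’s: impose a steady warming trend, perturb it with AR(1) noise, and count non-overlapping 15-year windows whose fitted trend is zero or negative:

```python
import numpy as np

rng = np.random.default_rng(0)
win = 15                         # window length, years
n_windows = 700 // win           # non-overlapping windows in 700 years
warming = 0.02                   # degC/yr, assumed underlying trend
phi, sigma = 0.6, 0.1            # assumed AR(1) noise parameters

def window_trend():
    """Fitted OLS trend of one noisy 15-year window."""
    noise = np.zeros(win)
    for i in range(1, win):
        noise[i] = phi * noise[i - 1] + rng.normal(0, sigma)
    y = warming * np.arange(win) + noise
    return np.polyfit(np.arange(win, dtype=float), y, 1)[0]

flat = sum(window_trend() <= 0 for _ in range(n_windows))
print(f"{flat} of {n_windows} windows show a zero-or-negative 15-yr trend")
```

Dividing the count by the number of windows gives the kind of empirical probability being argued over; stronger or more persistent noise raises it, removing a noise component lowers it.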

richardscourtney
Reply to  Nick Stokes
January 24, 2016 6:34 am

Nick S.:
You say in response to my last post

Richard, some of this is just mystifying.

Your mystification is because you have filled your mind with nonsense and left no room for sense.
I write to clear some of the nonsense so you can see sense.
Firstly, ENSO is an emergent property of climate behaviour. It is NOT in the climate models because it cannot be: a climate model would exhibit ENSO behaviour if it was sufficiently good but none is.
However, you are claiming

They want a stabler picture so they eliminate the ENSO perturbation.

NO! They cannot “eliminate” from a model what is not in the model and, they cannot “eliminate” from a model behaviour it does not exhibit. I repeat, ENSO being an emergent property of climate behaviour is NOT in the climate models because it cannot be, and a climate model would exhibit ENSO behaviour if it was sufficiently good but none is.
As I said

Secondly, your claim of what they did is merely an expansion of your erroneous assertion that “NOAA are addressing the probability of getting a 15 year zero trend”. They did NOT – as you assert – investigate ENSO and, therefore, NOAA were NOT “concerned with a specific perturbation, ENSO, … They want a stabler picture so they eliminate the ENSO perturbation.” They observed that “The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more” but they report that ENSO could alter this finding because THE MODELS CANNOT EMULATE ENSO.
What cannot be modeled cannot be in a model so cannot be eliminated from a model for any specific test.

I remind you of the rhyme
I met a man who wasn’t there
by
Hughes Mearns
Last night I saw upon the stair
A little man who wasn’t there
He wasn’t there again today
Oh, how I wish he’d go away…

You are claiming the modellers ‘wished away’ an emulation of ENSO that was not in their models.
And all your arguments are based on your believing that impossible idea.

NOAA did not do the impossible that you claim. And NOAA did not claim to do the impossible that you claim.
Secondly, you are plain wrong when you assert

NOAA are addressing the probability of getting a 15 year zero trend, with a general warming perturbed by various sorts of noise.

What NOAA actually did is what they said they did; viz.

Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.

i.e. NOAA said they were reporting the behaviour of their model(s).
NOAA were NOT doing as you suggest and addressing the probability of getting a 15 year zero trend.
1.
If they had done that then they would not have found “Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability.” They would not have been examining “intervals of a decade or less in the simulations”.
And 2.
They would not have reported “The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more”. A probability does not “rule out” anything.
And 3.
They would not have made the suggestion that “an observed absence of warming of {15 years} duration is needed to create a discrepancy with the expected present-day warming rate”: They would have reported the probability of “an observed absence of warming of {15 years}”.
I anticipated your confusion of (a) the simulations having 95% confidence with (b) the 95% confidence in their finding. I wrote

Please don’t try the usual warmunist evasions of

(b) pretending the 95% means pauses of “15 yr or more” happen 1 in 20 times.

I hope that removes some of your (deliberate?) “mystification”.
Richard

Reply to  Nick Stokes
January 24, 2016 10:01 am

Richard,
“I repeat, ENSO being an emergent property of climate behaviour is NOT in the climate models because it cannot be, and a climate model would exhibit ENSO behaviour if it was sufficiently good but none is.”
You claim, absurdly, that models can’t model ENSO. The NOAA doc says that they can identify and remove the ENSO effect. I referred to one video I posted showing modelled ENSO behaviour; here is another:

As to all your nonsense about how NOAA is not addressing the probability of getting a fifteen year trend etc – if that is true, then why is the report being quoted? Because, it says, after suitable filtering (mention omitted) the occurrence of a fifteen year stretch of zero trend would be sufficiently improbable (5%) that it would create a contradiction with the model.

richardscourtney
Reply to  Nick Stokes
January 24, 2016 11:28 am

Nick S:
I again repeat,
ENSO being an emergent property of climate behaviour is NOT in the climate models because it cannot be, and a climate model would exhibit ENSO behaviour if it was sufficiently good but none is.
But you say to me

You claim, absurdly, that models can’t model ENSO. The NOAA doc says that they can identify and remove the ENSO effect. I referred to one video I posted showing modelled ENSO behaviour; here is another:

Firstly, the NOAA doc does NOT say “that they can identify and remove the ENSO effect” FROM THE MODEL SIMULATIONS. You wish it did but it does not, and that is why you don’t quote it saying that.
The NOAA doc says

El Niño–Southern Oscillation is a strong driver of interannual global mean temperature variations. ENSO and non-ENSO contributions can be separated by the method of Thompson et al. (2008)(Fig. 2.8a).

The dubious method of Thompson et al. (2008)(Fig. 2.8a) supposedly separates ENSO effects from climate observations of global temperature to enable the global temperature trend to be compared to model indications of global temperature because the models do NOT emulate ENSO. It is because the models do not emulate ENSO that use of the method of Thompson et al. (2008) is suggested in the NOAA doc.
The flows of water that comprise ENSO events are known so, yes, computer video games can generate pretty pictures of ENSO events. But that is NOT the same as a climate model emulating ENSO. Perhaps you do not know what is meant by emergent behaviour?
Past ENSO events are historical so the data used to generate the computer video games of ENSO could be included in a climate model making hindcasts of climate. But nobody knows the timing and magnitudes of future ENSO events and the climate models do not generate that behaviour so ENSO is NOT in – and cannot be in – the forecast models under discussion.
In hope that you will – at last – understand, I again repeat;
ENSO being an emergent property of climate behaviour is NOT in the climate models because it cannot be, and a climate model would exhibit ENSO behaviour if it was sufficiently good but none is.
Having got that out the way, please understand that your major problem is your refusal to read what the NOAA criterion is.
The NOAA criterion is an indication of their model indications. It says

Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.

It provides a reasonable caveat that ENSO effects could alter that finding because their models do not generate ENSO effects. And it points out that when ENSO effects are removed from climate data the ‘Pause’ becomes a greater discrepancy with anticipated (i.e. model predicted) temperature rise when it says

The trend after removing ENSO (the “ENSO-adjusted” trend) is 0.00°±0.05°C per decade, implying much greater disagreement with anticipated global temperature rise.

But you pretend the NOAA criterion is not what NOAA says it is and you claim

NOAA are addressing the probability of getting a 15 year zero trend

That is patent nonsense! A “probability” does not “rule out” anything.
And a single “probability” does not apply to “15 yr or more”: e.g. different probabilities would exist for 15 years and for 150 years.
Richard

Reply to  Nick Stokes
January 24, 2016 1:08 pm

“A “probability” does not “rule out” anything.”
This comes back to the original quote from NOAA that you and Lord M have been brandishing
“The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more”
So what is 95% if not a probability?

richardscourtney
Reply to  Nick Stokes
January 24, 2016 11:22 pm

Nick S.:
At last you have started to ask questions instead of asserting ignorant errors. You ask me

“A “probability” does not “rule out” anything.”

This comes back to the original quote from NOAA that you and Lord M have been brandishing

“The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more”

So what is 95% if not a probability?

I answered that repeatedly saying

I anticipated your confusion of (a) the simulations having 95% confidence with (b) the 95% confidence in their finding. I wrote

Please don’t try the usual warmunist evasions of

(b) pretending the 95% means pauses of “15 yr or more” happen 1 in 20 times.
Nobody has been “brandishing” anything. You asked

“when the science **requires** significant warming trends within a specified time frame and time frame is reached”

So what are those time frames? Who specified? Where do you get this stuff?

And I answered by citing and quoting the NOAA 2008 criterion and Phil Jones, then I concluded

Clearly, as recently as 2008 both NOAA in the US and the CRU in the UK agreed that “observed absence of warming” for 15 or more years would “create a discrepancy with the expected present-day warming rate” indicated by climate models. And this was a decade into the ‘pause’ which has now existed for probably more than 18 years.

Your response was to blatantly misrepresent the 2008 NOAA criterion and to ignore the Phil Jones comment despite being reminded of it.
Both Viscount Monckton and I corrected your misrepresentations and you have been attempting to justify your misrepresentations of the NOAA 2008 criterion by posting loads of bollocks which I have been refuting. Neither I nor Viscount Monckton has “brandished” anything.
Part of my refutations has been my repeatedly supporting what NOAA said they did (i.e. NOAA reported indications of climate models) and refuting your nonsensical assertion that they did something else. You claim

NOAA are addressing the probability of getting a 15 year zero trend

I have repeatedly pointed out
(a) NOAA did NOT say they addressed “the probability of getting a 15 year zero trend”,
(b) NOAA did NOT report “the probability of getting a 15 year zero trend”
And
(c) What NOAA did report is NOT a “probability of getting a 15 year zero trend”.
And I have repeatedly pointed out to you that your pretending NOAA addressed the probability of getting a 15 year zero trend is daft because if NOAA had done that then
1.
NOAA would not have found “Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability.” They would not have been examining “for intervals of a decade or less in the simulations”.
And 2.
NOAA would not have reported “The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more”. A probability does not “rule out” anything (as I said, “the 95% level” is the confidence they have in their simulations and not the indications of their simulations).
And 3.
They would not have made the suggestion that “an observed absence of warming of {15 years} duration is needed to create a discrepancy with the expected present-day warming rate”: They would have reported the probability of “an observed absence of warming of {15 years}”.
And 4.
A single “probability” does not apply to “an observed absence of warming of” “15 yr or more”: e.g. different probabilities would exist for 15 years and for 150 years.
The NOAA criterion says

Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.

Ref. NOAA, ‘The State of the Climate’, 2008
http://www1.ncdc.noaa.gov/pub/data/cmb/bams-sotc/climate-assessment-2008-lo-rez.pdf
An “absence of warming” has existed for more than 15 years.
Richard

JohnKnight
Reply to  Nick Stokes
January 25, 2016 4:05 am

Nick Stokes,
Well, in a proverbial nutshell, it’s a 95% chance those models are not modeling the planet Earth. What has happened was “ruled out” at 95% certainty by the simulations’ behaviour, so they don’t behave like the real climate, in a rather pronounced way. (The attribution aspect is another layer of hokum-pokum down still ; )

Reply to  Nick Stokes
January 25, 2016 12:13 pm

Richard,
You keep quoting your little excerpt from the report, with context excised. Here’s the context:
“ENSO-adjusted warming in the three surface temperature datasets over the last 2–25 yr continually lies within the 90% range of all similar-length ENSO-adjusted temperature changes in these simulations (Fig. 2.8b). Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate. “
You keep trying to hide ENSO-adjusted. It’s essential. And you claim 95% is a confidence, as if that is something other than probability. The context shows how they are calculating it. They count. “lies within the 90% range”. Frequency.
Here is the start of the box in which your quote appears:
http://www.moyhu.org.s3.amazonaws.com/2016/1/box.png
In the top plot they show the temperature series with and without ENSO, and the component removed. They show how after removal it has zero trend. In the second plot, they show how the ENSO-filtered trends run negative for up to 10 years for HADCRUT (but not for others). And they show the simulations, with the 70%, 90% and 95% levels clearly described as “range of trends” in the simulations. Frequencies.

JohnKnight
Reply to  Nick Stokes
January 25, 2016 6:53 pm

Nick Stokes,
This;
“Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more . . ”
Is not the equivalent of this;
*Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations show zero trends for intervals of 15 yr or more, five percent of the time.*
Do you know how often (if at all) the simulation runs showed “pauses” of 15 yr or longer, or are you just assuming that happened 5% of the time? That’s not how people generally speak of something that happens 5% of the time: “ruled out at the 95% level”.
It looks to me like a sneaky way of implying that, perhaps, but not a clear indication it actually happened exactly five percent of the time.

Reply to  Nick Stokes
January 26, 2016 9:35 am

JohnKnight
Yes, I should have said “less than 5% of the time”. They are basically doing a Monte Carlo test. You do lots of runs and see how often some event happens.
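The Monte Carlo idea described here is easy to sketch. The snippet below is purely illustrative (a linear warming trend plus AR(1) noise, with made-up parameter values rather than anything taken from a climate model): it does many runs and counts how often a run of a given length shows a non-positive least-squares trend.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_slope(y):
    """Least-squares slope of y against 0, 1, ..., n-1."""
    return np.polyfit(np.arange(len(y)), y, 1)[0]

def pause_frequency(n_runs=2000, years=15, trend=0.02, phi=0.6, sigma=0.1):
    """Fraction of simulated runs of length `years` whose OLS trend is <= 0.

    Each run is a linear trend plus AR(1) noise; every parameter value
    here is illustrative, not taken from any climate model.
    """
    hits = 0
    for _ in range(n_runs):
        noise = np.empty(years)
        noise[0] = rng.normal(0.0, sigma)
        for t in range(1, years):
            noise[t] = phi * noise[t - 1] + rng.normal(0.0, sigma)
        y = trend * np.arange(years) + noise
        if ols_slope(y) <= 0:
            hits += 1
    return hits / n_runs

print(pause_frequency())
```

With no underlying trend the fraction sits near one half; the stronger the imposed trend, the rarer a zero-or-negative 15-year slope becomes, which is the sense in which counting runs can “rule out” an outcome at some level.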

richardscourtney
Reply to  Nick Stokes
January 31, 2016 12:47 am

NickS:
I had thought this sub-thread was over but I have just now discovered you are still clinging to your straw. I write to remove your straw in hope you will notice you are sunk.
You say

JohnKnight
Yes, I should have said “less than 5% of the time”. They are basically doing a Monte Carlo test. You do lots of runs and see how often some event happens

Please state why you think
(a) NOAA did NOT say they addressed “the probability of getting a 15 year zero trend”,
(b) NOAA did NOT report “the probability of getting a 15 year zero trend”
And
(c) What NOAA did report is NOT a “probability of getting a 15 year zero trend”.
When – according to you – NOAA were not reporting the behaviour of their model(s) but had conducted a Monte Carlo Test to determine the probability of “an observed absence of warming of” “15 yr or more”.

To help your reply, I remind that I had written

I have repeatedly pointed out
(a) NOAA did NOT say they addressed “the probability of getting a 15 year zero trend”,
(b) NOAA did NOT report “the probability of getting a 15 year zero trend”
And
(c) What NOAA did report is NOT a “probability of getting a 15 year zero trend”.
And I have repeatedly pointed out to you that your pretending NOAA addressed the probability of getting a 15 year zero trend is daft because if NOAA had done that then
1.
NOAA would not have found “Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability.” They would not have been examining “for intervals of a decade or less in the simulations”.
And 2.
NOAA would not have reported “The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more”. A probability does not “rule out” anything (as I said, “the 95% level” is the confidence they have in their simulations and not the indications of their simulations).
And 3.
They would not have made the suggestion that “an observed absence of warming of {15 years} duration is needed to create a discrepancy with the expected present-day warming rate”: They would have reported the probability of “an observed absence of warming of {15 years}”.
And 4.
A single “probability” does not apply to “an observed absence of warming of” “15 yr or more”: e.g. different probabilities would exist for 15 years and for 150 years.

Richard

Marcus
Reply to  ristvan
January 21, 2016 2:32 pm

” significantbtrens ” ?? CC’s gonna jump on you for that !! LOL

Bernard Lodge
January 21, 2016 2:09 pm

Lord Monckton is not saying there is a trend. He is simply calculating how many months you can go back in time and still show a zero temperature increase when compared to today. It is not a trend, it is a simple mathematical computation which is not open to interpretation – in other words, it is a fact.

Marcus
Reply to  Bernard Lodge
January 21, 2016 2:33 pm

.. A very inconvenient fact that alarmists hate !!

Reply to  Bernard Lodge
January 22, 2016 12:58 am

“Lord Monckton is not saying there is a trend.”
From the words of the Lord here:
“The hiatus period of 18 years 8 months is the farthest back one can go in the RSS satellite temperature record and still show a sub-zero trend.”

January 21, 2016 2:09 pm

what is up with the beginning of the blue dashed RSS (Figure 2) in that 5th degree polynomial ??
and it seems to me that despite some interesting data ‘digging’, calculations of error deviations from the fitted functions would be revealing …

January 21, 2016 2:11 pm

“With this latest version of HadCRUT4 (now issue 4.4) we now get a low warming rate (of about 0.01 deg C/decade) from 2005 (compare flat response with the RSS data). I have not included the year 2015 which was not completed when running all these calculations.”
You should include 2015. It makes a very substantial difference. Using monthly data, the trend from Jan 2005 to Dec 2014 is 0.0169 C/dec. But to Dec 2015, it is 0.1291 C/decade (Hadcrut 4). This shows the fragility of relying on such short length trends. But it also casts doubt on whether 2005 is then a break year at all. And I think it shows the overall weakness of your model here.
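The sensitivity described here is easy to demonstrate on synthetic data. Everything below is invented for illustration (a flat decade of monthly anomalies followed by one markedly warmer year; the levels 0.4 and 0.7 are arbitrary), but it shows how a single added year can move a short trend substantially.

```python
import numpy as np

def trend_per_decade(monthly):
    """OLS slope of a monthly series, expressed in degrees per decade."""
    t = np.arange(len(monthly)) / 12.0      # time in years
    return np.polyfit(t, monthly, 1)[0] * 10.0

rng = np.random.default_rng(1)
# Ten synthetic flat years of monthly anomalies standing in for 2005-2014...
flat = rng.normal(0.4, 0.05, 120)
# ...plus one markedly warmer year standing in for 2015.
warm_year = rng.normal(0.7, 0.05, 12)

print(trend_per_decade(flat))                                # near zero
print(trend_per_decade(np.concatenate([flat, warm_year])))   # clearly positive
```

The point is not the particular numbers but the fragility: over a ten-to-eleven-year window, one warm year at the end dominates the fitted slope.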

Richard M
Reply to  Nick Stokes
January 21, 2016 6:31 pm

No, you should never use half of an ENSO pair. You won’t know anything until both halves are complete.

Ged
Reply to  Nick Stokes
January 22, 2016 9:52 am

Despite all your pontificating about avoiding ENSO when doing trend analysis in your comments above, you then go on and try to use a fragment of an ENSO event to do trend analysis in accordance with your bias (the worst possible act). Doesn’t this irony come across as just a tad hypocritical to you?

Reply to  Ged
January 22, 2016 10:08 am

“you then go on and try to use a fragment of an ENSO event”
I’m not trying to use a fragment. I’m not advocating calculating 10 year trends at all. And I do think that for this analysis it would be better to remove ENSO effects. I’m simply pointing out that the article is not using all the data, and if you do, it makes a big difference.

Martin Hertzberg
January 21, 2016 2:15 pm

What difference does it make whether the Earth’s temperature (whatever that means) is going up or down or staying constant? If it goes up in parallel with CO2, nothing is proven by that. That doesn’t prove causation. The proof that the AGW theory is false comes from the complete absence of a physically sound mechanism. The precedence of temperature changes prior to CO2 changes invalidates the IPCC paradigm. Too much is being made of temperature trends even by realists.

richardscourtney
January 21, 2016 2:15 pm

M.S.Hodgart:
I am surprised you have so clearly misunderstood the trend analysis of Viscount Monckton that you write

The problem is that he has chosen to disregard all the prior months of available measurements going back to January 1977.

Not so, he has not “chosen” anything and he does not “disregard” anything.
He addresses the question of “How long has no linear trend been discernible up to now in the RSS time series of global average temperature?”
He considers the linear trends which exist for months prior to now. And the longest linear trend prior to now which has no discernible positive slope is his determination of the length of the ‘Pause’.
It turns out that his most recent determination is 18 years 8 months prior to the end of December 2015 and this is one month less than his determination for the length of the ‘Pause’ prior to the end of November 2015.
This is important because (as davidmhoffer lists above) all the model predictions of global temperature indicate that a ‘Pause’ of this length is not possible. Hence, there is empirical evidence of the models’ failing to provide useful indications of future global temperatures.
You say

A statistician is a fellow that draws a line through a set of points based on unwarranted assumptions with a foregone conclusion.
In other words be careful if you run a linear regression on data like these.

Well, Viscount Monckton cannot be a “statistician” according to that definition because he makes no assumptions about his line (he assesses the linear rise predicted by modellers) and his analysis is conducted because he does not have a foregone conclusion.
Importantly, linear regression is the required analysis to discern if there has been a consistent rise of any form. Also, and relatively trivially, assumption of any form of trend other than linear requires a justified model of the form but no such model exists.
Richard

Marcus
Reply to  richardscourtney
January 21, 2016 2:36 pm

I find it hard to believe that M.S.Hodgart cannot understand such a simple concept ..I think he’s really a warmist at heart !

Janice Moore
Reply to  Marcus
January 21, 2016 3:16 pm

I agree, Marcus. He is to be congratulated for admitting to being a “lukewarmist” above, but, this self-awareness did not correct his bias. His belief in the lukewarm conjecture (for lukewarmism is still a BELIEF, a FEELING (usually based on assuming that the properties of CO2 in highly controlled laboratory conditions justify the belief that human CO2 can drive the climate of the earth) about CO2) has clearly biased his reporting for the warmist view above.
One evidence of his bias is (as Gary Pearse does a fine job of elaborating upon in this thread) his unfounded assertion of moral equivalency here:

… the tendency on both sides to cite only the evidence supporting their views and to ignore what does not …

The documented history of the science realists’ writing and speaking does NOT reveal such a tendency on their part to selectively cite to any degree near the level that would make the realists come even CLOSE to the blatant bias displayed year after year by the AGWers.

Reply to  Marcus
January 21, 2016 6:41 pm

Agreed, Marcus. Janice, I think you’ve hit the nail on the head.

TonyN
Reply to  Marcus
January 23, 2016 2:41 am

Marcus, it is sadly all too easy to believe that you cannot understand more complex concepts. Look at Fig. 2.
And BTW did you see the blue “Lukewarmist” word?

TonyN
Reply to  richardscourtney
January 23, 2016 2:36 am

Richard, have a look at fig 2. Does it show a plausible negative trend within half of Monckton’s ‘pause’ period? If so, surely it ADDS to the evidence that the effect of anthropogenic CO2 emissions is minimal?

richardscourtney
Reply to  richardscourtney
January 23, 2016 9:15 am

TonyN:
You ask me and say

Richard, have a look at fig 2. Does it show a plausible negative trend within half of Monckton’s ‘pause’ period? If so, surely it ADDS to the evidence that the effect of anthropogenic CO2 emissions is minimal?

Yes, but that was not my point. I was explaining to M.S.Hodgart his misunderstanding of what Viscount Monckton has done.
Also, I would not want to discuss “evidence that the effect of anthropogenic CO2 emissions are minimal”. It is for those who claim anthropogenic CO2 emissions have an effect on global climate to provide evidence for their claim: to date they have provided no such evidence.
Discussion of whether there is evidence for minimal effect of anthropogenic CO2 emissions distracts from the fact that there is NO evidence of anthropogenic CO2 emissions having any effect on global climate, none, zilch, nada.
Richard

Reply to  richardscourtney
January 30, 2016 3:14 am

To richardscourtney
Clearly I did not manage to make my point. I have no difficulty in understanding Monckton’s procedure but I disagree with his interpretation. The calculated slope of a linear regression between two selected dates is assumed by him, you and many others – to define the trend in temperatures over that period. You just take this for granted. That is the “unwarranted assumption”. Accordingly when that slope is found to be zero (or near zero) over a selected period there has been a pause (or Pause) for all that time. But has there? There are real difficulties with this assumption.
Classically linear regression would be applied if there were a well-justified belief that Nature has created a straight line defined by an offset and slope buried in the data beneath additive noise v[k] – strictly a discrete zero-average weakly stationary stochastic sequence (model 1). The trend is then the slope of that line.
There is every reason to suppose that such a model is unrealistic here. You can easily see this if you run another regression over the years from Jan 1979 [ ] to Feb 1997, when Monckton starts his. You get a peculiar-looking graph where the ‘trend’ temperature jumps more than 0.2 deg C in just one month. Although he did not like my approach any better, jim (January 24, 2016 at 3:30 pm) says exactly this of Monckton: “When calculating the slope he ignores all the data before each start date – which means the intercept for his fitted line is incorrect – it assumes temperatures magically jumped from the pre-pause levels to the level at which they “paused”, all within a single month. It makes no sense”.
Quite. There is however no problem with linear regression on these data under a different interpretation and if applied to the appropriate months. To start making better sense of things and still use linear regression I suggested that we have to envisage that Nature has created a sampled smooth curve s[k] running through all the data, to which additive noise v[k] is again added (model 2). When running a linear regression over any period on this model the calculated slope now has to have a different interpretation – as an average trend, i.e. average slope and not the actual slope.
To estimate this s[k] one can resort to the most convenient of a battery of well-known techniques. The most powerful approach seemingly is to use two that are very different but whose results are closest to each other: polynomial regression and the popular loess method. See blue curves in fig 2. I then argued to Jim that “the two segmented linear regressions are now justified as piece-wise linear approximations to these curves ……”. Monckton still gets his pause but it starts later in September 2003. I elaborated my justification on these lines in my response to Jim – if you want to read on.
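For readers who want to experiment, both estimators are easy to sketch. The snippet below is illustrative only: a global polynomial regression and a minimal local-linear (loess-style) smoother with tricube weights, applied to an invented smooth curve s[k] plus noise v[k]. Production loess implementations (e.g. in R or statsmodels) add robustness iterations omitted here.

```python
import numpy as np

def poly_smooth(x, y, degree=5):
    """Global polynomial regression: one candidate estimate of s[k]."""
    xs = (x - x.mean()) / x.std()           # scale x for numerical stability
    return np.polyval(np.polyfit(xs, y, degree), xs)

def loess_smooth(x, y, frac=0.3):
    """Minimal local-linear (loess-style) smoother with tricube weights.
    A sketch only; real loess adds robustness iterations."""
    n = len(x)
    span = max(int(frac * n), 2)
    out = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:span]                    # nearest neighbours
        w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3   # tricube weights
        coef = np.polyfit(x[idx], y[idx], 1, w=np.sqrt(w))
        out[i] = np.polyval(coef, x[i])
    return out

rng = np.random.default_rng(2)
x = np.arange(200, dtype=float)
signal = 0.3 * np.sin(x / 40.0)          # stand-in for the smooth curve s[k]
y = signal + rng.normal(0.0, 0.1, 200)   # additive noise v[k]

p = poly_smooth(x, y)
l = loess_smooth(x, y)
```

When two such different smoothers agree, as argued above, that agreement is the practical justification for treating the common curve as the estimate of s[k].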
Is that a bit clearer? I hope so. [I apologise for occasional typos in sometimes wrongly putting the start of the RSS data in 1977.]
MSH

richardscourtney
Reply to  M.S.Hodgart
January 30, 2016 4:18 am

M.S.Hodgart:
Thank you for your reply. Please don’t worry about typos: I make them all the time.
You say

Clearly I did not manage to make my point. I have no difficulty in understanding Monckton’s procedure but I disagree with his interpretation. The calculated slope of a linear regression between two selected dates is assumed by him, you and many others – to define the trend in temperatures over that period. You just take this for granted. That is the “unwarranted assumption”. Accordingly when that slope is found to be zero (or near zero) over a selected period there has been a pause (or Pause) for all that time. But has there? There are real difficulties with this assumption.

Sorry, but your clarification says you do misunderstand Monckton’s procedure when it refers to “two selected dates”.
As I tried to tell you, Viscount Monckton does NOT select two dates.
He adopts ‘now’ as being the most recent monthly datum for global average temperature anomaly (GATA) and uses that as a starting point. He then considers each and every monthly datum of GATA from that starting point. Each datum provides an end point of a time series of GATA. In each case he determines the linear regression slope for the time series between the start and end points. The longest obtained time series whose slope is not positive (i.e. no indicated warming) is his result.
In other words Viscount Monckton does NOT select “two dates”: the start point of ‘now’ is “selected” by effluxion of time (n.b. not Viscount Monckton) and the end point is his result which is “selected” by the data (n.b. not Viscount Monckton).
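The procedure as Viscount Monckton himself states it (“the farthest back one can go … and still show a sub-zero trend”) can be sketched as follows. This is an illustrative reconstruction, not his actual code; the input is any monthly anomaly series, oldest month first, and scanning start points forward from the oldest month gives the same answer as stepping backwards from the present.

```python
import numpy as np

def pause_length_months(anomalies):
    """Length in months of the longest period ending at the most recent
    month whose OLS slope is not positive. The first (i.e. farthest-back)
    start point whose trend is <= 0 determines the result.
    """
    n = len(anomalies)
    for start in range(n - 1):
        y = anomalies[start:]
        slope = np.polyfit(np.arange(len(y)), y, 1)[0]
        if slope <= 0:
            return n - start        # farthest-back qualifying start found
    return 0                        # every trend positive: no pause at all

# Illustrative series: five years of warming, then ten flat-to-cooling years.
warming = np.linspace(-0.5, 0.0, 60)
plateau = np.linspace(0.0, -0.02, 120)
print(pause_length_months(np.concatenate([warming, plateau])))
```

Note that neither endpoint is chosen freely: ‘now’ is fixed by the calendar and the start point falls out of the data, which is the point being made above.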
Secondly, it is not an “assumption” that use of linear regression to determine a trend is appropriate. Use of linear regression to determine a trend is a standard practice of so-called ‘climate science’: Viscount Monckton’s method is to assess data provided by ‘climate science’ and, therefore, he has adopted the accepted practice of ‘climate science’. Use of any other practice would not be appropriate.
You continue saying

Classically linear regression would be applied if there were a well-justified belief that Nature has created a straight line defined by an offset and slope buried in the data beneath additive noise v[k] – strictly a discrete zero-average weakly stationary stochastic sequence (model 1). The trend is then the slope of that line.

Yes, but so what? The intention is not to model the time series of GATA.
Viscount Monckton’s method is intended to determine if there is discernible warming indicated by the time series of recent GATA according to practices of ‘climate science’.
Also, as I said

Importantly, linear regression is the required analysis to discern if there has been a consistent rise of any form. Also, and relatively trivially, assumption of any form of trend other than linear requires a justified model of the form but no such model exists.

Richard

TonyN
Reply to  M.S.Hodgart
January 30, 2016 7:34 am

@ richardscourtney – January 30, 2016 at 4:18 am
You say; “Viscount Monckton does NOT select two dates.”
YES HE DOES!
Date 1 is ‘now’. Date 2 is when he backtracks through the record and when the line between has a zero slope, he stops!
As he himself indicates, if future years produce higher or even lower numbers, his method will not produce a flat line AT ALL! … and his ‘pause’ will disappear.
You then say ” Use of linear regression to determine a trend is a standard practice of so-called ‘climate science’ …… he has adopted the accepted practice of ‘climate science’. Use of any other practice would not be appropriate.”
RIGHT if you are engaging in Politics (aka a zero-sum win/lose game) but WRONG if you are engaged in Science (aka a win/win game which seeks to increase the sum of human knowledge)
In essence, Monckton is engaged in a political argument and is using less than robust metaphysics to refute the AGW case. He really should look at Hodgart’s method, which apart from providing us all with more understanding of ‘what nature is up to’ ….. gives him a much better metaphysical weapon for his essentially political ’tilting’.

Peter Mullen
January 21, 2016 2:21 pm

There’s an error in the legend to Figure 1, namely, the stated “origin” years for the red and blue lines. Both are way too 1970s! 🙂

Melbourne Resident
January 21, 2016 2:22 pm

Why does anyone expect a straight line trend like a light switching on or off? With multiple inputs and factors that impact on global temperature, whether it be rising CO2 the AMO, the PDO, el Nino, ocean currents and layer mixing and sun output variations it must be a series of complex curves. Or am I just dumb?

Reply to  Melbourne Resident
January 21, 2016 2:46 pm

No-one expects a straight line trend, and it isn’t observed. What you see is a pattern that is the sum of various random and quasi-periodic processes, and possibly a steady rise (or fall). Regression in effect passes this through a smoothing filter, which attenuates the cancelling high frequency effects, and leaves a linear trend unchanged (and a non-linear trend averaged).
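The filtering view can be checked numerically. In the illustrative snippet below (all parameter values invented), a linear trend is buried under a quasi-periodic cycle and white noise, and the fitted regression slope still recovers the trend because the oscillating components largely cancel over the record.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(600) / 12.0         # fifty years of monthly samples, in years
true_trend = 0.01                 # deg C per year, purely illustrative

# Trend + quasi-periodic 11-year cycle + white noise.
y = true_trend * t + 0.2 * np.sin(2 * np.pi * t / 11.0) + rng.normal(0.0, 0.1, 600)

# Regression acts like a low-pass filter: the cycle and the noise
# largely average out, leaving an estimate close to the linear trend.
fitted_trend = np.polyfit(t, y, 1)[0]
print(fitted_trend)
```

Over records much shorter than the cycle the cancellation fails, which is one reason short-window trends are so volatile.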

AndyG55
Reply to  Melbourne Resident
January 21, 2016 3:33 pm

“Or am I just dumb?”
NO. !
Those who refuse to see the cycles and singular events in the NATURAL environment are likely to have much egg on their faces over the next few years.
AMO, PDO, Solar are all heading into a cooler phase.
Will be fun to watch. 🙂

David Wells
January 21, 2016 2:23 pm

Lies, damned lies, and statistics. I think the clue is ‘complex coupled non-linear chaotic system’, which to me means no trend unless you need to manufacture one after the fact. Up and down like a fiddler’s elbow: statisticians need to earn a living like alarmists need to author alarmism. Alarmists need to define a trend; sceptics have no need to create a trend where none exists, just to question the authenticity of alarmism, which is exactly what Christopher Monckton does very well. The planet has warmed, fact, but only a little, and most likely it will continue to warm a little until it cools a little or a lot. Does this essay add to the debate or argument? I think not. Does it expose any flaws in the proposition that CO2, if it does cause warming, does not cause a lot – an assertion that the pause illustrates quite well?
Christopher Monckton did not say the science is settled; that was the UN. They need certainty; those who question alarmism define uncertainty. Weather is unpredictable and climate chaotic, so trends are pure supposition and I certainly wouldn’t put money on one. Trying to define a trend is speculating that at some point in time we might be able to influence weather or climate, which beggars belief, as if.

Gary Pearse
January 21, 2016 2:26 pm

M. S. Hodgart: I fear you have come late to the party and you overestimate the reasonableness of the warming proponents. They don’t just disagree – they will go to any lengths, including bending and twisting data to hang on to the notion that CO2 alone is the explanation of the coming runaway warming disaster they divined about 30 years ago. You also grossly underestimate and underappreciate the position and awareness of skeptics. Some have pointed out a few things on this above for your edification. Your analysis is known to the world because of skeptics pointing out there are other things than CO2.
You seem to be unaware exactly why the ‘pause’ is such a battlefield. Briefly, the longer this period of statistically no global warming lasts, the smaller the effect of CO2 on the earth’s temperature. Climate sensitivity, previously thought to be (from the ‘theory’) 3-5 degrees per doubling of CO2 with feedbacks, has been shrinking to the ~1 to 1.5 level, a point at which CO2 has only a modest effect on temperature.
Indeed, skeptics are not saying the globe didn’t warm or even saying (Monckton states this in every one of his analyses, too) that we won’t have more global warming, nor that it couldn’t be harmful. Thinking skeptics have not simply been sniping contrarians. We have been holding feet to the fire of an enfranchised group that has been guilty of monstrous excesses in their zeal to prosecute their views, which would lead to enormous economic burdens on humankind and restrictions on freedoms. We point out that they entirely exclude the possibility of benefits of warming and CO2 (already greening the planet). You seem unaware of Climategate, or you believe that what was in it was just boys being boys or some such rationalization. You seem unaware of the egregious whitewashes of the behaviour of climate scientists in a half dozen so-called investigations. Indeed, I think you need to find the time to review all the issues before you come here and give us a gentle lecture on our schoolyard behavior.

Marcus
Reply to  Gary Pearse
January 21, 2016 2:50 pm

That’s one for the gold file..awesome..

Katherine
Reply to  Gary Pearse
January 21, 2016 4:31 pm

Hodgart appears to be one of those warming proponents you speak of. First he sets up a strawman he names “Lord Monckton,” and then he proceeds to bash said strawman, displaying utter incomprehension of what the real Lord Monckton had written. Time and time again, Lord Monckton has said his starting point is the latest available temperature data and he calculates the farthest month in the past where the trend is zero; in other words, he’s trying to find whether there is a Pause, based on the latest readings, and if there is, how long has there been a Pause.
Since Hodgart didn’t understand something that simple, I guess being a visiting reader doesn’t guarantee reading comprehension. Otherwise, he deliberately misrepresented the writing of the real Lord Monckton just to earn warmist points.

JohnKnight
Reply to  Katherine
January 21, 2016 6:47 pm

Katherine,
I concur, I’ve seen Mr. Monckton explain the matter several times, and this author seems to have bent over backwards trying to accuse him of something . .

Ryan Stephenson
Reply to  Katherine
January 23, 2016 9:25 am

I studied under Dr Hodgart and he is an exceptional engineer. He knows maths. What he is saying here is not incorrect mathematically, but it does misrepresent the scientific position, because Monckton only needs to prove that the exponential increase in CO2 has not resulted in any kind of reliable trend in increased temperatures. He doesn’t need to prove a pause – only that the trend over the last 17 years has not fitted the expectation based on AGW theory – the bar is thus much lower than Dr Hodgart presents it.

Rob
January 21, 2016 2:28 pm

As you very correctly say in your introduction, trends are a meaningless statistical artifice. If you use a trend to identify some causal mechanism then you can say that the trend has been useful (in a Kuhnian manner), but you have still not said that the trend actually “means” anything. In my opinion, however, using the trend to derive the mechanism is very much the wrong way round, as it pre-supposes that the trend itself has meaning, and what you are doing is subject to a great deal of confirmation bias.
Trend analysis, on the other hand, is a way to test your theories of underlying mechanisms. When postulating an atmospheric CO2 driven increase in global temperature, a trend analysis is a method to refute this (something our very own Willis does pretty regularly in addressing the various cycle-theories). One thing which comes out of the above very clearly – to me – is that a trend analysis of global temperatures has pretty well refuted the atmospheric CO2-driven temperature increase theory, at least as concerns CO2 (and other “greenhouse gases”) playing the dominant role in temperature change. The trends seen here simply do not match the known concentration changes of these gases in the atmosphere, either on the very simple (two-segment) model, or the more accurate multi-segment model.

January 21, 2016 2:30 pm

If you need statistical techniques to prove a trend or deny a trend, and both could be argued to be correct, then it seems to me that no real trend exists.

JohnKnight
Reply to  steverichards1984
January 21, 2016 9:10 pm

Well, then I guess the “precautionary principle” mandates we blow our brains out to save the planet, just in case, ya know? ; )

willhaas
January 21, 2016 2:37 pm

There is no reason to believe that global temperature as a function of time is a straight line function or even piecewise linear for that matter. The climate change we have been experiencing is caused by the sun and the oceans, and that cause is neither a linear nor a piecewise-linear function. Despite all the claims, there is no real evidence that CO2 has any effect on climate. There is no such evidence in the paleoclimate record. There is evidence that warmer temperatures cause more CO2 to enter the atmosphere, but there is no evidence that this additional CO2 causes any more warming. If additional greenhouse gases caused additional warming then the primary culprit would have to be H2O, which depends upon the warming of just the surfaces of bodies of water and not their volume, but such is not part of the AGW conjecture. In other words, CO2 increases in the atmosphere as huge volumes of water increase in temperature, but more H2O enters the atmosphere as just the surface of bodies of water warms. We live in a water world where the majority of the Earth’s surface is some form of water.
The AGW theory is that adding CO2 to the atmosphere causes an increase in its radiant thermal insulation properties, causing restrictions in heat flow which in turn cause warming at the Earth’s surface and the lower atmosphere. In itself the effect is small because we are talking about small changes in the CO2 content of the atmosphere, and CO2 would comprise only about .04% of the atmosphere if it were dry, but that is not the case. Actually H2O, which averages around 2%, is the primary greenhouse gas. The AGW conjecture is that the warming causes more H2O to enter the atmosphere, which further increases the radiant thermal insulation properties of the atmosphere and by doing so amplifies the effect of CO2 on climate. At first this sounds very plausible. This is where the AGW conjecture ends, but that is not all that must happen if CO2 actually causes any warming at all.
Besides being a greenhouse gas, H2O is also a primary coolant in the Earth’s atmosphere, transferring heat energy from the Earth’s surface to where clouds form via the heat of vaporization. More heat energy is moved by H2O via phase change than by both convection and LWIR absorption band radiation combined. More H2O means that more heat energy gets moved, which provides a negative feedback to any CO2 based warming that might occur. Then there is the issue of clouds. More H2O means more clouds. Clouds not only reflect incoming solar radiation but they radiate to space much more efficiently than the clear atmosphere they replace. Clouds provide another negative feedback. Then there is the issue of the upper atmosphere, which cools rather than warms. The cooling reduces the amount of H2O up there, which decreases any greenhouse gas effects that CO2 might have up there. In total, H2O provides negative feedbacks, which must be the case because negative feedback systems are inherently stable, as the Earth’s climate has been for at least the past 500 million years, long enough for life to evolve. We are here. The wet lapse rate being smaller than the dry lapse rate is further evidence of H2O’s cooling effects.
The entire so-called “greenhouse” effect that the AGW conjecture is based upon is at best very questionable. A real greenhouse does not stay warm because of the heat trapping effects of greenhouse gases. A real greenhouse stays warm because the glass reduces cooling by convection. This is a convective greenhouse effect. So too on Earth. The surface of the Earth is 33 degrees C warmer than it would be without an atmosphere because gravity limits cooling by convection. This convective greenhouse effect is observed on all planets in the solar system with thick atmospheres, and it has nothing to do with the LWIR absorption properties of greenhouse gases. The convective greenhouse effect is calculated from first principles and it accounts for all 33 degrees C. There is no room for an additional radiant greenhouse effect. Our sister planet Venus, with an atmosphere that is more than 90 times more massive than Earth’s and which is more than 96% CO2, shows no evidence of an additional radiant greenhouse effect. The high temperatures on the surface of Venus can all be explained by the planet’s proximity to the sun and its very dense atmosphere. The radiant greenhouse effect of the AGW conjecture has never been observed. If CO2 did affect climate then one would expect that the increase in CO2 over the past 30 years would have caused an increase in the natural lapse rate in the troposphere, but that has not happened. Considering how the natural lapse rate has changed as a function of an increase in CO2, the climate sensitivity of CO2 must equal 0.0.
The AGW conjecture talks about CO2 absorbing IR photons and then re-radiating them out in all directions. According to this, CO2 does not retain any of the IR heat energy it absorbs, so it cannot be heat trapping. What the AGW conjecture fails to mention is that typically, between the time of absorption and radiation, the same CO2 molecule in the lower troposphere undergoes roughly a billion physical interactions with other molecules, sharing heat related energy with each interaction. Heat transfer by conduction and convection dominates over heat transfer by LWIR absorption band radiation in the troposphere, which further renders CO2’s radiant greenhouse effect a piece of fiction. Above the troposphere, more CO2 enhances the efficiency of LWIR absorption band radiation to space, so more CO2 must have a cooling effect.
This is all a matter of science.

Janice Moore
Reply to  willhaas
January 21, 2016 3:35 pm

WillHaas! ANOTHER FINE COMMENT (other one I recently saw was on the “Gosh, a New Model…” thread, here: http://wattsupwiththat.com/2016/01/20/gosh-a-new-model-based-study-puts-temperature-increases-caused-by-co2-emissions-on-the-map/#comment-2124875 )
@ Other non-technical major readers like me:
Read the above Haas comment for a fine elaboration on this:

There is evidence that warmer temperatures cause more CO2 to enter the atmosphere but there is no evidence that this additional CO2 causes any more warming.

Reply to  willhaas
January 22, 2016 3:38 am

Thank you, willhaas! An excellent summary.

January 21, 2016 2:38 pm

“The post war average trend is found to be 0.087 ± 0.011 ( 2 s.d) deg C /decade i.e. less than 0.1 deg/decade which is half the rate of the actual trend which peaked (temporarily) in the 80s and 90s.”
And you have used this trend over about 70 years to suggest small change till 2100. Yet in discussing LOESS you say
“For loess if the window width is too narrow random error dominates over the systematic and if too wide vice versa.”
That is generally true, and for LOESS you seem to get to a “balance” scale of a decade or two, not 70 years. If we are looking for the CO2 effect on a trend, then the amount of CO2, and its rate of increase, changed hugely over those 70 years. As you say, regression is a means of estimating a trend. It is very unlikely that the estimated trend in 2015 is correctly determined by the postwar average. And even less likely that this trend will continue with ever increasing emissions.

robinedwards36
January 21, 2016 2:39 pm

M S Hodgart, This is a really welcome contribution to the “hiatus debate”. This may be because it is exactly what I have been doing with climate time series for many years! I have always called it segmented regression.
The fundamental theoretical problem is identifying the segments, or in other words the positions of step changes, and it may well be impossible to do so with statistical “certainty”. As you will know assorted techniques have been proposed for identifying step changes in time series, which often rely on non-linear and iterative methods, which are (for me at least) troublesome if not impossible to compute. I rely on an old and tested method used in SQC for identifying abrupt changes in output parameters from a production line, with the intention of intervening as swiftly as possible to avoid unacceptable product quality degradation.
The method I use is to form the cumulative sum of the series relative to a suitable base value. With historical data, which is what climate observations necessarily are, this base value is usually (and for good reasons) the mean of the observations over the time period of interest. When applied to climate parameters of various types these cusum profiles are often very striking. What is seemingly a jumbled mass of data points often transforms into a clear pattern that is characterised by approximately linear segments (with scatter) interrupted by sharp changes of slope. There are often also segments that seem to exhibit gradual curvature, also with scatter, and others where the cusum plot is clearly rather chaotic. Interpretation of these patterns is simple, though subjective. I’ll not elaborate on the properties of cusum patterns or curves – it would take more time than I have at the moment, except to say that a roughly linear cusum indicates a stable sequence of original values.
Applying this method to temperature series such as the CET data immediately reveals stable periods of various durations, from a few years to well over 100 years, as well as excursions from a linear pattern that can readily be associated with external forcings, particularly volcanoes. One very interesting outcome from this sort of analysis is in its application to single site data from northern Europe and Russia. Virtually every site has the same cusum pattern, whose most striking feature is a sharp angle change (thus a step change) that occurs mainly in late 1987. Subsequent to that date the temperatures are stable right up to the present. It looks as if “The Hiatus” began in late 1987 for this part of the world, with a gradually later onset as one goes eastwards towards Vladivostok, where the change began about a year later.
Anyone can verify this assertion regarding the hiatus by downloading temperature series from many sources, for example KNMI and the Met Office, and fitting a linear model to data from 1987 to 2015. The slopes that you will find are seldom statistically significant. Regressions starting some years before 1987 will show significant slopes. Fitting a linear model to data that are fundamentally not linear is ubiquitous in the global warming industry, resulting in the confusion that we are very aware of.
Sorry I can’t post diagrams – some help needed I suppose! I could provide endless examples using email.
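In the meantime, the cusum construction described above is easy to try. Below is a minimal sketch (my own illustration only, not Mr Edwards’s actual code or data): the cumulative sum of deviations from the period mean, applied to a toy series with a single step change.

```python
# Sketch of the cusum technique described in the comment above:
# accumulate deviations from a base value (here, the series mean).
# A stable stretch of data plots as a roughly straight cusum segment;
# a step change in the underlying mean shows up as a sharp kink.

def cusum(series):
    base = sum(series) / len(series)  # base value = mean over the period
    out, total = [], 0.0
    for x in series:
        total += x - base
        out.append(total)
    return out

# Toy series: ten points at 0.0, then a step up to 1.0.
series = [0.0] * 10 + [1.0] * 10
c = cusum(series)

# The change of slope sits at the cusum minimum (falling before the
# step, rising after it).
kink = min(range(len(c)), key=lambda i: c[i])
```

On this toy series the cusum descends linearly to its minimum at the last pre-step point, then climbs back to zero, which is the sharp angle change the comment describes.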

Janice Moore
Reply to  robinedwards36
January 21, 2016 3:27 pm

Great comment for a layperson like me, Mr. Edwards! Thank you. As has been said many times on WUWT, only a master of a given scientific subject can write with the clarity it takes to explain it to a non-technical major.
Lots of good stuff, like this:

Fitting a linear model to data that are fundamentally not linear is ubiquitous in the global warming industry, …

Re; your diagrams and examples
A suggestion (and a hope!): Write an article (doesn’t need to be long, you know) for WUWT, including your graphics, etc… in a Word doc — attach that to an e mail (or, just write it in the e mail body directly, if the attachment thing won’t work for you) and send it to Anthony. You are such a FINE, long-term, WUWT “colleague” of Anthony’s that I have no doubt that he would not mind you using his “fire hose” In Box. Might need to send it more than once, though, for he may overlook it. OR ask (use the word “moderat0r” spelled out with an “o” instead of a zero) a moderat0r for help in how to submit your article. When he or she sees that it is YOU who is the author, they will GLADLY help you, I think!
And if monitoring and replying to those who comment on your article in the thread below it is NOT appealing to you, just ignore it. Many authors do, never responding at all. Your choice.
With admiration,
Janice
Student in the back of the WUWT classroom (who sometimes runs up front and writes stuff on the board, bwah, ha, ha, ha, haaaaaaaa!)
#(:))

Editor
Reply to  robinedwards36
January 21, 2016 5:18 pm

M.S.Hodgart, in his balanced article, referred to “ … the tendency on both sides to cite only the evidence supporting their views and to ignore what does not. Scientists of course are supposed to be above this sort of thing and to take into account all relevant evidence.“. There’s one problem with this view : When a theory has been put forward, it matters not how much evidence there is supporting it – a single fact can disprove it. The burden is greater on proponents of a theory than it is on opponents (but that still doesn’t entitle anyone to ignore anything relevant).
MSH also says “The problem is that [Viscount Monckton of Brenchley ] has chosen to disregard all the prior months of available measurements going back to January 1977. “. That is incorrect, VMofB took ALL data into account, as has been explained thoroughly by other commenters.
The rest of MSH’s article, re segmented linear trends, oscillations, etc, makes sense.
robinedwards36 – I calculate segmented linear temperature trends using a very simple and relatively objective technique: I simply optimise using both date and temperature as variables. ie, I allow the intermediate segment ends to move horizontally as well as vertically. Here’s one I did a while ago:
http://members.iinet.net.au/~jonas1@westnet.com.au/Hadcrut4MultiPhaseTrend20140508.JPG
I say “relatively objective” because one still needs to decide how many segments.
And while we are talking about oscillations:
http://members.iinet.net.au/~jonas1@westnet.com.au/hadleycurvefit20111114.jpg
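As an illustration only (not the code behind the linked chart, which also lets the segment-join temperature move), a two-segment least-squares fit with the breakpoint chosen by grid search can be sketched like this:

```python
# Sketch of a segmented (two-piece) linear trend fit: fit ordinary
# least squares to each candidate left/right split and keep the
# breakpoint that minimises the combined squared error.

def ols(xs, ys):
    """Return (intercept, slope, sum of squared errors) for an OLS line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

def two_segment_fit(xs, ys, min_pts=3):
    """Grid-search the breakpoint index minimising total SSE of two fits."""
    best = None
    for k in range(min_pts, len(xs) - min_pts + 1):
        sse = ols(xs[:k], ys[:k])[2] + ols(xs[k:], ys[k:])[2]
        if best is None or sse < best[1]:
            best = (k, sse)
    return best  # (breakpoint index, combined SSE)

# Toy data: a flat decade, then a jump and a rising decade.
x = list(range(20))
y = [0.0] * 10 + [1.0 + 0.5 * i for i in range(10)]
k, _ = two_segment_fit(x, y)
```

Deciding how many segments to allow remains a judgment call, exactly as noted above; more segments always reduce the residual error, so some penalty or prior choice is unavoidable.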

rd50
Reply to  robinedwards36
January 21, 2016 8:21 pm

Indeed, the CUSUM technique. I love it. Used it. Most appropriate for these “anomalies”. Do you know who developed it and for whom it was developed? An interesting story. Taken up by engineers in the USA during WWII but I am not aware that it was published, in England, until the mid 1950s. The original CUSUM technique was given to the USA department of Defense by a very famous mathematician, John ….. fill in the blanks. He decided not to publish it because it was so “simple” to detect a change in a trend!

Phaedrus
January 21, 2016 2:42 pm

CO2 has gone up, temperature has not followed.
And it definitely hasn’t followed the Hockey Stick. As such the theory CO2 leads to warming is wrong!
It really is that simple.

Reply to  Phaedrus
January 21, 2016 3:42 pm

CO2 has gone up, temperature has not followed.
…….. As such the theory CO2 leads to warming is wrong!

Why? Why, for example, is it not possible that natural factors have offset the CO2 warming over the past decade or so?

Janice Moore
Reply to  John Finn
January 21, 2016 3:59 pm

1. According to AGWers, CO2 drives climate. If natural drivers, such as the oceans, can negate CO2, CO2 is not the controlling driver. If those negating forces have overwhelmed the increase in atmospheric CO2 (given it is, indeed, driving anything in climate), then, CO2 (and it may all be net natural, too, you know) is not likely to lead to warming. That is, the null hypothesis, that natural (non-CO2) drivers control the climate of the earth, stands. The burden of proof still lies at the feet of those who assert CO2 can do ANY-thing to change the climate of the earth. Not proven. Moreover, given: CO2 UP. WARMING STOPPED. — the evidence is running against AGW. The IPCC models assume CO2 as a controlling climate driver and have been proven unfit for purpose.
2. In the past, CO2 levels have been significantly lower, yet the temperature on earth was high enough that you can find palm tree fossils just south of the Canadian border in the U.S…. and tree stumps where glaciers now reign… and Oetzi climbed an Alp in what became Italy around 3,000 B.C., died, then was buried under meters of snow and ice… .
3. Ice core proxies reveal that CO2 levels lag temperature increases by a quarter cycle.

richardscourtney
Reply to  John Finn
January 21, 2016 4:00 pm

John Finn:
You ask

CO2 has gone up, temperature has not followed.
…….. As such the theory CO2 leads to warming is wrong!

Why? Why, for example, is it not possible that natural factors have offset the CO2 warming over the past decade or so?

It is wrong because it is observed to be wrong: i.e. CO2 rose but temperature did not.
CO2 may sometimes lead to warming, but that is not demonstrated.
That CO2 does not always lead to warming is demonstrated.
Why CO2 does not always lead to warming is another matter. Perhaps temperature would have risen recently were it not for being offset by “natural factors”. If so, then that would mean “natural factors” have effects of as great a magnitude as the CO2 warming. And, why would one assume “natural factors” did not provide all of the warming before the ‘Pause’ when “natural factors” certainly did contribute nearly 100% of the warming from the LIA prior to the industrial revolution?
Richard

Editor
Reply to  John Finn
January 21, 2016 5:22 pm

JohnFinn – if you ask “is it not possible that natural factors have offset the CO2 warming over the past decade or so?” then you must also ask “is it not possible that natural factors caused the warming over the previous decades?”.

Editor
Reply to  John Finn
January 21, 2016 5:25 pm

richardscourtney – apologies, I hadn’t read your comment when I wrote mine.

Brian H
Reply to  John Finn
January 21, 2016 9:56 pm

I once opined: “If natural variation is occasionally in charge (at random), it is always in charge.”

Bob Boder
Reply to  John Finn
January 22, 2016 3:43 am

Finn
Why weren’t natural factors the cause of the warming in the first place? Every time you post you invalidate yourself.

rd50
Reply to  Phaedrus
January 21, 2016 8:53 pm

It is indeed that simple. But this site simply refuses having this graph available here.
So why don’t we have a graph of the CO2 increase since the measurements started in 1958 vs temperature on this site? Forget the CO2 of a million years ago always quoted here.
Such a graph exists at climate4you.com. Why not here?
If it is that simple, have this graph here: it shows the CO2 increased from 1958 to about 1978 while temperature was about the same or even decreased slightly, then temperature increased until about 2000 (with obviously the El Niño of 1998) and then no more increase until just now.
Indeed it is that simple. So simple that when Judith Curry and others are asked to testify they NEVER show this graph, EVER. You are correct, it is that simple. This site should show the graph, it is available.
But NO, this site will not show it!

January 21, 2016 2:42 pm

Mr. Hodgart, it’s always possible to get a better fit to a data set using polynomials, but that’s hardly the point of a statistical approach to analyzing the data. Absent some sort of theory, curve/line fitting is a pretty useless (and largely automated) activity. Why choose 1, 2, 3, or 4 factors if you have no idea what those factors might be and no way to measure them?
The purpose of Monckton’s analysis is to point out that during the past 19-odd years CO2 has increased dramatically and AGT has not. That’s the entire point; there is no other. We have one factor, atmospheric CO2 fraction, with one hypothetical effect, increased AGT. Data collected by RSS clearly falsifies that hypothesis. It’s demonstrated that prior to 1997, both CO2 and AGT were rising together. After 1997 that relationship stopped.
There’s nothing more to be said for it. Clearly there is some factor that was driving changes in AGT, but the data in hand demonstrate it isn’t CO2.
If I were to throw a bone to the loons who selectively ignore historic decreases in AGT that occurred during times of increasing CO2, I might suggest that the “Pause” could actually be a stunted “decline” if it weren’t for accumulated CO2, but that would require them to have some clue as to what the factors driving temperature actually are, and they’ve never bothered to even suggest one other than CO2, even after consuming billions in research funding clearly squandered on a useless fantasy.

January 21, 2016 2:45 pm

The biggest problem is the belief that any “trend” to be found looking backwards has some meaning in the future.

NW sage
Reply to  wickedwenchfan
January 21, 2016 5:25 pm

I am again reminded that my old statistics professor said “Statistics is an attempt to find meaning when there is none!” Still true today. What has happened in the past, linear or not, does NOT predict the future. No amount of statistical gobbledygook will ever change that.
I appreciate the points Mr. Hodgart made about the fallacies of using linear regression statistical methods when the break points cannot be known (or assumed). He shed a lot of light on this issue. Thank you!

bobfj
January 21, 2016 2:47 pm

Perhaps part of the problem is pedantry in the application of ‘pause’ and ‘hiatus’ in the debate.
In Fig 3 (HadCRUT 4.4) above, putting aside the controversy over its recent “SST corrections etcetera” versus HadCRUT 3, one could just as validly say that there are ‘plateaus’ centred around 1945 and developing around 1910 by a simple process known as “eyeball”. (A plateau means a relative flatness compared with the typically steeper sides as seen on some mountains). The earliest study that I know of to imply this is by two Russians, Lyubushin & Klyastorin (2003):
http://www.biokurs.de/treibhaus/180CO2/Fuel_Consumption_and_Global_dT-1.pdf
See their figure 5. (using their 2003 temperature data)
Both plateaus are preceded by somewhat similar warming periods via the aforementioned eyeball method (whereas if CO2 was a major driver, the more recent warming should be steeper).
As an analogy to ‘linear trend pause’, the average height of a mountain plateau can be determined or alternatively it might be approximated to the apex of a sine wave….. take your pick or expert statistical opinion.

bobfj
Reply to  bobfj
January 21, 2016 3:08 pm

Sorry, should be ‘developing around 2010’

Dodgy Geezer
January 21, 2016 2:47 pm

….I find it troubling that presumably intelligent scientists (and they have competent statisticians also) cannot bring themselves to acknowledge – let alone explain or even properly discuss – the statistical fact that two extended cooling periods have featured in the past while CO2 levels were presumably always rising …
Their jobs are on the line. What would you do?

Marcus
Reply to  Dodgy Geezer
January 21, 2016 3:01 pm

Be honorable..get a new job !!

Janice Moore
Reply to  Marcus
January 21, 2016 3:03 pm

You go, Marcus! Amen.

Marcus
Reply to  Marcus
January 21, 2016 3:36 pm

Hi Janice !! No more CC ?? LOL

Janice Moore
Reply to  Marcus
January 21, 2016 3:40 pm

Ugh. Don’t even mention that sickening thing’s name… likely to summon it from the underworld… .
And, Hi!

ironicman
Reply to  Dodgy Geezer
January 21, 2016 3:22 pm

“If you want to keep a secret, you must also hide it from yourself.”
― George Orwell, 1984

January 21, 2016 3:29 pm

All this sound and fury over temperatures is beside the point. Arguing alleged effects without even proving the cause.
According to IPCC AR5 the atmospheric CO2 concentration increased by 40%, from 278 ppm around 1750 to 390.5 ppm in 2011, a difference of about 240 GtC, aka the hockey stick/blade. How they know this is based on WAGs, SWAGs, assumptions, and “expert” opinions. The foregone assumption is that this increase cannot possibly be caused by natural variations therefore it must be due to mankind, i.e anthropogenic sources.
In the same time frame IPCC estimates/WAGs/SWAGs/assumes/opines that anthropogenic sources added about 555 +/- 85 GtC (+/- 15%!!). That’s twice the increase, and a problem IPCC et al have been trying to kick under the rug.
IPCC AR5 Table 6.1 partitions this 555 GtC anthropogenic sources (375 +/- 30 FF & Cement, 180 +/- 80 land use) among the various allegedly invariable natural sinks (rugs) and sources.
IPCC AR5 Table 6.1      GtC    +/- GtC    +/- %    % of anthro
Anthro Generation       555       85      15.3%
FF & Cement             375       30       8.0%      67.6%
Net land use            180       80      44.4%      32.4%
Anthro Retained         240       10       4.2%      43.2%
Anthro Sequestered     -315                         -56.8%
Ocean to atmos         -155       30     -19.4%
Residual land sink     -160       90     -56.3%
So the CO2 increase between 1750 & 2011 supposedly cannot be ‘splained by natural processes (considering the huge uncertainties, how would they even know?), yet natural processes can easily ‘splain the sinking and sweeping of precisely 56.8% of the anthro contribution under the rug.

TonyN
Reply to  Nicholas Schroeder
January 28, 2016 1:41 am

@Nicholas Schroeder
AFAIK according to Henry’s Law there ought to be around fifty times more CO2 absorbed in the oceans than there is in the atmosphere.
Now given your quotation:
“According to IPCC AR5 the atmospheric CO2 concentration increased by 40%, from 278 ppm around 1750 to 390.5 ppm in 2011, a difference of about 240 GtC,”
Applying the Henry’s Law factor of 50, this must require a net source of CO2 of around 50 times 240 GtC … or 12,000 GtC.
As the IPCC claim that anthropogenic CO2 emissions were 555 GtC over the same period, this only accounts for ~ 5% of the reported increase …. leaving the other 95% to come from natural sources.
IF Henry’s Law works as stated, then only a fraction of the claimed increase in global temperature can be Anthropogenic!
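The comment’s arithmetic can be checked under its own (contested) assumption that a fixed ocean:atmosphere partition factor of ~50 applies to the whole atmospheric increase; the sketch below only reproduces the numbers as stated in the comment, not an endorsement of the physics:

```python
# Reproducing the comment's arithmetic under its stated assumption
# that an equilibrium partition factor of ~50 (attributed to Henry's
# Law) applies to the entire 240 GtC atmospheric increase. This is
# the comment's assumption, shown here only to verify its numbers.

atmos_increase_gtc = 240    # IPCC AR5 increase quoted in the thread
partition_factor = 50       # comment's assumed ocean:atmosphere ratio
anthro_emissions_gtc = 555  # IPCC AR5 anthropogenic total, 1750-2011

# Total net source required if 50x the atmospheric rise must also
# have entered the ocean: 50 * 240 = 12,000 GtC.
required_source = partition_factor * atmos_increase_gtc

# Fraction of that total accounted for by anthropogenic emissions.
anthro_share = anthro_emissions_gtc / required_source
```

The share works out to 555/12,000 ≈ 4.6%, which matches the “~5%” figure in the comment; whether the partition factor can be applied this way is a separate question.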

Jim G1
January 21, 2016 4:04 pm

Bullseye. Causality is the real issue. CO2 is still going up; temperatures, not so much. The lines are diverging. Causality has never been proven and never will be, as there are too many exogenous variables and the actual mechanism is still unknown.

Robert B
January 21, 2016 4:18 pm

Could be summed up as: there is a pause in the data, which doesn’t necessarily mean a pause in AGW (the mean of the last 10 years in the RSS (-2015.92) is only 0.03°C more than the 10 years before). It just highlights the hubris about the science being robust and settled, and alarmists downplaying uncertainty until it’s needed to debunk something like the pause.

wyzelli
January 21, 2016 4:54 pm

As an interesting exercise, I have looked at the trend from 1979 to each year from 2000 onwards, and the trends become progressively smaller (with some minor variation) from about 2006 onwards. I have also looked at the successive 30-year trends from 1979-2009 through 1984-2014, and those 30-year trends also become successively lower.
I have not continued past that since WoodForTrees does not seem to have data past mid 2014. This means that the sample I have, particularly for the sequential 30-year trends, is inadequately small.
Of note, though, is that whilst the temperature trend has been mostly lowering (around the 0.015 range) and certainly fluctuating, the Mauna Loa CO2 trend (around 1.7) has been steadily increasing, and the anthropogenic CO2 output has been increasing exponentially. I find it hard to see any correlation between these at all.
But just looking at the raw temp data, it seems obvious to me that it is not linear, whereas the raw CO2 data seems very linear (with an overlaid seasonal sinusoid).
These data can be easily accessed at http://woodfortrees.org/ for anyone who wants to play with them.
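The fixed-start, moving-end trend exercise described above can be sketched as follows (entirely synthetic data for illustration; real series would come from WoodForTrees or the source agencies):

```python
# Sketch of the exercise above: an OLS trend computed from a fixed
# 1979 start to each successive end year. On data that warm and then
# flatten, the fixed-start trend shrinks as flat years accumulate.

def trend_per_decade(xs, ys):
    """OLS slope of ys against xs (years), scaled to deg C per decade."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return 10.0 * sxy / sxx

# Synthetic series: warms at 0.02 C/yr through 1999, then flat.
years = list(range(1979, 2015))
temps = [0.02 * (y - 1979) if y < 2000 else 0.42 for y in years]

# Trend from 1979 to each end year from 2000 onwards.
ends = range(2000, 2015)
slopes = [trend_per_decade(years[: e - 1979 + 1],
                           temps[: e - 1979 + 1]) for e in ends]
```

On this toy series the earliest window gives 0.2 deg C/decade and the later windows progressively less, which is the shrinking-trend pattern the comment reports.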

Paul
January 21, 2016 4:59 pm

“to cite only the evidence supporting their views”
I’m not a scientist and barely remember it and math from 45 years ago, but I ask the question
What evidence? Warmists don’t believe in their data without “homogenization”, deniers question that.
Even now scientists are starting to question satellite data; governments spent hundreds of millions if not billions on climate satellites that scientists said were a good idea and would produce meaningful data.
Obama said “trust the science”, what he really meant was “trust the scientist that he trusts”.
The upshot is we can’t trust our politicians, why should we trust the scientists, let alone the science?
You can do all the charts and graphs you like, but if no one trusts the original data anymore, what’s the point? So is AGW real? I don’t think anyone really knows, the only reality is some are making a lot of money and obtaining a great deal of prestige from it. I certainly don’t believe it, unless someone can actually prove it.

Marcus
Reply to  Paul
January 21, 2016 6:37 pm

The problem is …Obama and his liberal socialist circle…Just look at what the Democratic nominees for president are..(1) a socialist that can’t add two plus two ..and (2) a socialite non Madonna that believes if you erase the word ”classified” from a document, it is no longer ”classified” !!

Kaiser Derden
January 21, 2016 6:30 pm

statistical nerd fight … using mostly made up data … seems like a waste of time …

Paul Coppin
Reply to  Kaiser Derden
January 23, 2016 5:07 pm

Yup. If, as RD50 asserts, it really is that simple, then the discussion is what is chaotic, perhaps not the CO2 story.

jmarshs
January 21, 2016 6:32 pm

I’m trying to think of an analogy to the way much Climate Science modeling is practiced, and this is the best I can come up with:
Imagine you are flying a plane from New York to Los Angeles with a computer tracking all the minute course adjustments that you make to keep the plane aloft due to turbulence.
Over Kansas City, you program the plane to use the New York to Kansas City “trend” to finish the remaining Kansas City to Los Angeles leg of the flight. You leave the cockpit and go flirt with a flight attendant.
Anyone care to bet if you’ll make it to your destination?

Janice Moore
Reply to  jmarshs
January 21, 2016 7:03 pm

Lol. I like it!
I think… your final words will be: Hey! Where did all that water come from?
… sorta like how water (the only proven-effective greenhouse gas) and the oceans overwhelm AGW fantasy science, in which it sank about 10 years ago (and really, it never got off the ground; it never made a prima facie case shifting the burden of proof to its detractors, the science realists; the null hypothesis that nature drives climate stands).

Patrick MJD
Reply to  jmarshs
January 21, 2016 8:11 pm

“jmarshs says: January 21, 2016 at 6:32
Anyone care to bet if you’ll make it to your destination?”
I would say yes you would. Planes can pretty much take off and land themselves these days. An automatic landing system was implemented at London Heathrow in 1965 I think.
When I lived in New Zealand (NZ) I knew someone who worked for the NZ Govn’t negotiating air routes to various countries. He used to joke that a new security device was being introduced in the cockpit, a pitbull dog. It was to keep the pilots away from the controls.

jmarshs
Reply to  Patrick MJD
January 21, 2016 8:59 pm

You completely missed the point. The “autopilot” does not exist in my analogy.
The issue has to do with finding so called “trends” in chaotic, non-linear systems.
The belief of many in the Church of Climatology is that they can — from first principles tuned to (faulty) historical temperature data — determine future “trends” in the climate system.
The piloting of an airplane requires instantaneous feedback and continuous corrections.
If we believe that we can control the temperature of the Earth, then we need two things: 1) enough power to affect the system and 2) timely feedbacks to make corrections.
Neither of which is available to us.
And besides, what is the optimal temperature of the Earth anyway?
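The point about trend-fitting in chaotic, non-linear systems can be sketched with a toy example (the logistic map here is just a stand-in for "a deterministic chaotic system", not a climate model, and all parameters are arbitrary): a straight line fitted to the first half of the series says essentially nothing about the second half.

```python
import numpy as np

# Logistic map with r = 3.9: fully deterministic, yet chaotic.
def logistic_series(x0=0.2, r=3.9, n=200):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

x = logistic_series()
t = np.arange(len(x))
half = len(x) // 2

# Fit a "trend" to the first half and extrapolate it over the second half.
slope, intercept = np.polyfit(t[:half], x[:half], 1)
pred = slope * t[half:] + intercept
rmse = float(np.sqrt(np.mean((pred - x[half:]) ** 2)))

print(f"fitted slope: {slope:.4f}, extrapolation RMSE: {rmse:.3f}")
```

The fitted slope is tiny and the extrapolation error is of the same order as the series' own variability, i.e. the "trend" has no predictive content.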

Janice Moore
Reply to  Patrick MJD
January 21, 2016 9:36 pm

Oh. (shrug) I guess I did, too. I thought you had to be on autopilot to leave the cockpit. I figured that you had programmed the AP via Kansas City and that Patrick (being from Australia) was just unfamiliar with U.S. geography, so the vector diff wasn’t registering with him. Well, jmarshs. Learn something new every day… .

jmarshs
Reply to  Patrick MJD
January 21, 2016 11:43 pm

@Janice
Maxwell’s Demon drove a steak through the heart of Laplace’s Demon.

jmarshs
Reply to  Patrick MJD
January 21, 2016 11:44 pm

lol,
Stake

January 21, 2016 6:59 pm

People often make a fundamental mistake in thinking about data. I see this article making it. It is a statistician’s professional job to avoid making this mistake. Stephen Jay Gould understood the issue very well and explained it pretty well.
Let me start with something else first, and come back to the big one. Fitting a straight line to this kind of data can be useful as a summary, but as a rule it’s worse than useless outside the range of the observed data. For example, suppose we discover that there is a warming trend of one degree per century. Since the Earth’s temperature is roughly 300 K, extrapolating that line backward implies that thirty thousand years ago the temperature was below absolute zero. Which is physically impossible. You can avoid that absurdity by fitting a straight line to the logarithms of the absolute temperatures, but you still get the absurdity at the other end: that the Earth will eventually outblaze the Sun, all due to CO2 no doubt. PROJECTING A TREND FROM A PHYSICALLY MEANINGLESS FORMULA INTO THE FUTURE IS ALWAYS FOOLISH.
But that’s not the big issue. The flaw in people’s thinking is that there is the signal, some real, true, underlying, Platonic, *smoothly varying* global temperature, and that this is hidden by meaningless pesky noise that just gets in the way, and if we want to understand what’s really going on we have to get rid of the noise to see the big picture.
But statisticians (and evolutionary biologists, who would have nothing to study without it) know that VARIATION IS JUST AS REAL as anything else in the data. And in the temperature data, the year-to-year data *is* the big picture. Figure 1 in this article shows a trend of about 0.1 degree/decade, 0.01 degree per year. That’s the “signal” that the article is looking at in various ways. But if you compute the absolute difference of the temperatures between successive years, you find the average is close to 0.1. To me, THAT is the real signal. Year to year, the variation is huge compared with the trend.
There is a lot to learn from this, but this comment is already too long. But one thing is clear: the air and sea are affected by many things operating on many time scales so this is a very complex system, and we should be trying to understand what can jerk the planetary temperature around 0.1 degree from year to year.
The point about variation being real and being interesting applies to most situations.
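The comparison of year-to-year variation against the trend is easy to reproduce on made-up numbers (a synthetic series built to match the figures quoted in the comment above: a ~0.01 deg/yr trend and jumps of order 0.1 deg; the data are invented, only the arithmetic is the point):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 37  # e.g. annual values 1979-2015 (purely illustrative)

# Synthetic anomalies: 0.01 deg/yr trend plus year-to-year noise.
years = np.arange(n)
temps = 0.01 * years + rng.normal(0.0, 0.08, n)

trend_per_year = float(np.polyfit(years, temps, 1)[0])
mean_jump = float(np.mean(np.abs(np.diff(temps))))

print(f"fitted trend: {trend_per_year:.3f} deg/yr")
print(f"mean year-to-year jump: {mean_jump:.3f} deg")
```

The mean absolute year-to-year change comes out several times larger than the fitted trend, which is exactly the comment's point: the variation dwarfs the signal most people argue about.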

AndyE
January 21, 2016 8:37 pm

Trend or no trend – and where does it (or doesn’t it) start. The whole debate is really futile, I think. We should simply stop debating it, because we simply cannot know with any scientific certainty. It all therefore becomes a matter of opinion – to which each of us is entitled to have his/her own. Let us accept the agreement reached by the International Meteorological Organisation’s conference in Warsaw in 1935 : “Climate” was to be represented at a particular site (and we may choose the whole globe, I suppose) by an averaged 30-years span of meteorological data, called “climate normal”. The first period was nominated to be between 1901 – 1930; then on to 1931 – 1960, 1961 – 1990 and (our present period) 1991 – 2020. Let the warmists be right : the global “climate normals” do show a temperature rise since 1901. But so what?? Let us look at it with detached interest and address the question again in 2020. And, most importantly, refuse to be alarmed – as it certainly doesn’t warrant alarm (yet!).

jmarshs
Reply to  AndyE
January 21, 2016 9:09 pm

The issue is not whether we understand the climate. Engineers constantly struggle (and succeed) to control systems that they don’t understand.
The issue is: do we possess the power to effect changes, and are timely feedback mechanisms in place to allow for corrections? And do we know what an optimal global temperature is?

JohnKnight
January 21, 2016 9:25 pm

M.S.Hodgart,
“His problem (Mr. Monckton’s)
The problem is that he has chosen to disregard all the prior months of available measurements going back to January 1977.”
Let me explain something, kid. When we are talking about yesterday, we grown-ups don’t include things that happened last week. See how that helps to sort of keep us from just rambling all over with no constraints on our considerations? . . Never mind, it’s hard to explain . .

TonyN
Reply to  JohnKnight
January 23, 2016 2:24 am

JohnKnight: Fig 2 should help you over your difficulty

JohnKnight
Reply to  TonyN
January 23, 2016 2:04 pm

Pfft

January 22, 2016 12:49 am

According to Greenland and other Ice Core data our Holocene Interglacial is in long-term decline.
When considering the scale of temperature changes that alarmists anticipate because of Man-made Global Warming and their view of the disastrous effects of additional Man-made Carbon Dioxide emissions, it is useful to look at climate change not from the point of view of annual or decadal changes but from a longer term, centennial or millennial perspective.
The current, warm Holocene interglacial has been the enabler of mankind’s civilisation for the last 10,000+ years. Its congenial climate spans from mankind’s earliest farming to the scientific and technological advances of the last 100 years.
But:
• the last millennium 1000AD – 2000AD encompassing the Medieval warm Period has been the coldest millennium of the current Holocene interglacial.
• each of the notable high points in the Holocene temperature record (the early Holocene Climate Optimum – Minoan – Roman – Medieval – Modern) has been progressively colder than the previous high point.
• for its first 7-8000 years the early Holocene, including its high point “Climate Optimum”, had virtually flat temperatures, an average drop of only ~0.007 °C per millennium.
• but the more recent Holocene, since a “tipping point” at ~1000BC, has seen a temperature diminution at more than 20 times that earlier rate at about 0.14 °C per millennium.
• the Holocene interglacial is already 10,000 – 11,000 years old and judging from the length of previous interglacials the Holocene epoch should be drawing to its close: in this century, the next century or this millennium.
• the beneficial warming at the end of the 20th century to the Modern high point has been falsely transmuted into being “the Great Man-made Global Warming Scare”.
• eventually this late 20th century modern temperature blip will come to be seen as just noise in the system in the longer term progress of comparatively rapid cooling over the last 3000+ years.
The much vaunted and much feared “fatal” tipping point of +2°C would only bring Global temperatures close to the level of the very congenial climate of “the Roman warm period”.
Were it possible to reach the “horrendous” level of +4°C postulated by Warmists, that extreme level of warming would still only bring temperatures to about the level of the previous Eemian maximum, a warm and abundant epoch, when hippopotami thrived in the Rhine delta.
Global warming protagonists should accept that our interglacial has been in long-term decline for the last 3000 years or so and that any action taken by mankind will make no difference whatsoever. And it’s implausible that any action by mankind could reverse the inexorable in the short period of the coming century.
Were mankind’s actions able to avert warming, they would eventually reinforce the catastrophic cooling that is bound to return relatively soon.
see
https://edmhdotme.wordpress.com/2015/06/01/the-holocene-context-for-anthropogenic-global-warming-2/

Hivemind
January 22, 2016 2:36 am

I have always had a problem with doing a linear curve-fitting exercise when the temperature record is far from linear. If you look at the record, you can see many discontinuous places where a linear curve will fit. Oddly enough, I have never seen anybody try to fit anything but a linear curve. If you were desperately trying to prove disaster from exponential temperature rise, you would be trying to fit exponential growth, aka Mann’s famous hockey stick.
But the discontinuities are the interesting parts. It would be naive in the extreme to assume that a linear curve on such a noisy plot can actually predict anything. In fact, no sooner does a trend line appear than it disappears. In basic statistics we learned how rapidly the error bars diverge, even with good data. The error bars on the temperature record make the predictive value of a straight line unusable within a couple of years.
The only worthwhile model I have ever seen took the recovery from the last glacial period, added the AMO/PDO and got surprisingly good results.
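The point above about diverging error bars can be sketched with the standard OLS prediction-error formula on invented data (all numbers are arbitrary; only the widening of the error band with distance from the data matters):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
t = np.arange(n, dtype=float)
y = 0.01 * t + rng.normal(0.0, 0.1, n)  # weak trend, realistic-looking noise

# Ordinary least squares, plus the standard error of the predicted mean
# at a new point t0:  se(t0) = s * sqrt(1/n + (t0 - tbar)^2 / Sxx)
slope, intercept = np.polyfit(t, y, 1)
resid = y - (slope * t + intercept)
s = np.sqrt(np.sum(resid ** 2) / (n - 2))
tbar = t.mean()
sxx = np.sum((t - tbar) ** 2)

def se_pred(t0):
    return float(s * np.sqrt(1.0 / n + (t0 - tbar) ** 2 / sxx))

inside, outside = se_pred(tbar), se_pred(n + 20)  # centre vs. extrapolated
print(f"se at centre: {inside:.3f}, se 20 steps past the data: {outside:.3f}")
```

The standard error a short distance beyond the data is already several times the in-sample error, which is why a straight line loses predictive value so quickly.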

dave
January 22, 2016 5:20 am

Any data set coming out of HadCRUT should be thrown away. They are cheats who were caught red-handed.

Mike
January 22, 2016 5:35 am

Trend v. average trend
In principle an oscillation does not have a trend.
Oh yes it does! If you pick your dates properly. 😉
http://climategrog.files.wordpress.com/2013/04/warming-cosine.png
There is a need therefore to identify a mean trend which discounts that obvious oscillation.
No there is a need to realise that neither the trend nor the mean trend have any meaning.
The random element is in the “forcings” ie in dT/dt. The temperature record is simply the integral of this randomness. That makes it a “random walk” which will have periods of time with trends. It’s that simple.
If much of the variation is assumed to be random or “stochastic” all the various “trends” are simply fitting to segments of a random walk , they mean NOTHING.
When I started reading I was hoping that the author was going to address this. Instead he seems to be buying into the trends game.
One can only note that in the 85 years from now to 2100 the projected increase could be around 0.0087 × 85 = 0.74 degrees. Could this be realistic and if so is that a cause for alarm? I only ask.
No, this is no more realistic than anyone else’s cherry-picked arbitrary trend fitting and totally spurious projection way outside the data fitting period.
Sorry , it’s garbage. I’d say that if a warmist did something similar and I’ll say it if M.S.Hodgart (Visiting Reader Surrey Space Centre University of Surrey) does it.
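The random-walk argument above is easy to demonstrate on synthetic data (the step size of 0.1 is arbitrary): a zero-drift walk, split into segments, yields "trends" in every segment even though the generating process has none.

```python
import numpy as np

rng = np.random.default_rng(42)
steps = rng.normal(0.0, 0.1, 400)   # zero-mean random "forcings" (dT/dt)
walk = np.cumsum(steps)             # "temperature" as the integral of the forcings

# Fit a straight line to each quarter of the walk.
slopes = []
for i in range(4):
    seg = walk[i * 100:(i + 1) * 100]
    slopes.append(float(np.polyfit(np.arange(100), seg, 1)[0]))

print("segment slopes:", [round(s, 4) for s in slopes])
```

The four fitted slopes differ substantially from one another despite the steps having zero mean by construction: every segment of a drift-free walk carries an apparent trend.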

TonyN
Reply to  Mike
January 22, 2016 6:55 am

Mike, from your post I’d say there is a need to re-read the OP. Especially the last para.

Gary Pearse
January 22, 2016 6:42 am

bobfj
January 21, 2016 at 2:47 pm
In Fig 3 (HadCRUT 4.4)… one could just as validly say that there are ‘plateaus’ centred around 1945 and developing around 1910 by a simple process known as “eyeball”… The earliest study that I know of to imply this is by two Russians, Lyubushin & Klyashtorin (2003):
http://www.biokurs.de/treibhaus/180CO2/Fuel_Consumption_and_Global_dT-1.pdf
They predicted cooling starting in 5-10 years (from 2003). Russian scientists seem to follow the data and science where it takes them – very refreshing. One bit of data that I’m surprised has not been brought forward over the years by skeptics – I seem to recall seeing something on WUWT once – is that of the Russian astronomer Abdussamatov. I love this news report from no less than National Geographic!!
” Habibullo Abdussamatov, head of space research at St. Petersburg’s Pulkovo Astronomical Observatory in Russia, says the Mars data (the data is from NASA!!!) is evidence that the current global warming on Earth is being caused by changes in the sun.
“The long-term increase in solar irradiance is heating both Earth and Mars,” he said.
NASA noticed in 2005 that the southern polar ice cap on Mars had been shrinking for 3 years (likely shrinking before but not so noticeable or ignored?)
I especially like the last sentence on page 1 leading to an outcry on page two that is worth reading:
“Abdussamatov’s work, however, has not been well received by other climate scientists….”
I urge everyone to read this.

Gary Pearse
Reply to  Gary Pearse
January 22, 2016 6:43 am

steveta_uk
January 22, 2016 8:56 am

Oh dear, Mr M.S.Hodgart, you’ve really misunderstood Monckton’s point, haven’t you?
Analogy: picture that you are walking along a plateau at the top of a hill. Your companion notices that you have not been climbing the hill for over 300 yards. He proves this by using his handy theodolite that shows how far back the slope has been flat.
And you respond by saying that you cannot measure just back to where the hill flattens out – you must measure right back to the base of the hill, and so you are clearly still rising, though at a reducing rate.
Can you not see just how silly that sounds?

TonyN
Reply to  steveta_uk
January 22, 2016 9:47 am

Clearly you know a lot. So, perhaps you would be good enough to do a series of five linear regressions on the past 5,10,15, 20, 25 years respectively, and show us what each of your regressions tell us about ‘The Pause’ ?
And then perhaps you could also tell us why some Climatologists have been claiming a recent series of ‘hottest ever’ years, during ‘The Pause’.
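For what it's worth, here is what that five-window regression exercise looks like on invented data (a made-up series with 15 years of warming followed by a flat decade, standing in for the real record): the trailing-window trend depends entirely on the window length chosen.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 25
years = np.arange(n, dtype=float)

# Invented anomalies: 15 years of warming at 0.02 deg/yr, then a flat decade.
truth = np.where(years < 15, 0.02 * years, 0.02 * 14)
temps = truth + rng.normal(0.0, 0.01, n)

# Linear regression over the trailing 5, 10, 15, 20 and 25 years.
slopes = {}
for window in (5, 10, 15, 20, 25):
    slopes[window] = float(np.polyfit(years[-window:], temps[-window:], 1)[0])
    print(f"trend over last {window:2d} years: {slopes[window]:+.4f} deg/yr")
```

On this series the 5-year regression shows essentially no trend while the 25-year regression shows clear warming: both are "true", and both coexist with the flat decade, which is how 'hottest ever' years and 'The Pause' can be reported from the same data.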

TonyN
Reply to  TonyN
January 22, 2016 10:34 am

Oops! The above post was meant for Steveta_UK

JohnKnight
Reply to  TonyN
January 22, 2016 4:24 pm

Tony,
Do you have a point?
“And then perhaps you could also tell us why some Climatologists have been claiming a recent series of ‘hottest ever’ years, during ‘The Pause’.”
Maybe they are lying, or are taken in by lies. That’s what it looks like to me . .

steveta_uk
Reply to  TonyN
January 23, 2016 6:27 am

TonyN, while walking along the plateau at the top of the hill, would you really expect your climatologist friends to repeatedly point out that, for the last 10 yards, you’ve never been higher, over and over again?

TonyN
Reply to  TonyN
January 23, 2016 10:45 am

steveta_uk
You do realise that your ‘plateau’ is riven with pinnacles and crevasses, and according to Monckton is shrinking, and may well disappear altogether ?

Janice Moore
Reply to  TonyN
January 23, 2016 10:56 am

TonyN, you misunderstood Monckton. He got into his Chevy Suburban and drove [from] the wall (a wall only God can see beyond) at the end of the 18-mile-long plateau, disregarding minor fluctuations (to make an issue of which, per Dr. Richard Lindzen, is, per se, dishonest), checked the odometer and saw that his earlier reading of the length of the plateau was off by a few feet.
In case you misunderstood the basics:
1. The Earth has been, in ups and downs, generally cooling for about 6,000 years.
2. The earth has been, apparently, warming slightly since the end of the LIA.
3. The earth is, now, not warming. That’s all we know. To call this stop in warming a “hiatus” (in warming) is presumption.
[if the “wall” represents a known elevation up the flank of a mountain whose final slope and height is unknown, Monckton in your example is driving back away from that “wall” into the past. .mod]

Janice Moore
Reply to  TonyN
January 23, 2016 10:56 am

“…drove to the wall.”

JohnKnight
Reply to  steveta_uk
January 22, 2016 6:09 pm

Do you mean those who have obviously asserted that we are headed for a climate meltdown? Or do you mean to imply that others must somehow prove a negative in that regard?

TonyN
Reply to  steveta_uk
January 23, 2016 1:30 pm

Janice,
I quote from Monckton’s first posting upthread;
“…… The el Nino persists in region 3.4, and that is likely to keep the temperature rising for some months, and perhaps to extinguish the Pause for a time, and perhaps for good”
Monckton’s plateau or Pause is, according to him, likely to be impermanent. It has already shrunk by a year!
If you want a case to show that Anthropogenic CO2 emissions are not a significant cause of warming when compared with other natural causes, you will need more robust evidence that does not melt away with temperature rises from natural causes.
Hodgart gives you that tool, which points to warming and cooling within the recent data series, and these indelible facts will not melt away, firstly with more warming, and secondly because it will not be possible for the Warmists to doctor these recent records.
Look at his Fig 2. And if you have the time, read his OP again.

JohnKnight
Reply to  TonyN
January 23, 2016 3:07 pm

Tony,
“If you want a case to show that Anthropogenic CO2 emissions are not a significant cause of warming when compared with other natural causes, you will need more robust evidence that does not melt away with temperature rises from natural causes.”
It’s the same record, nothing will melt away if temps start rising (or falling) in a sustained fashion, except the present tense in referring to this “pause” in temp change.
Temps have not risen in eighteen years
Temps didn’t rise significantly for eighteen years
See how that works? You just change the wording a bit, nothing melts away.
Do you understand that much?

brians356
Reply to  TonyN
January 25, 2016 9:41 am

Plot temperature for the past ~18 years against CO2 concentration. Gaze upon it. Think. Is CO2 really what’s forcing temperature? None of the plethora of IPCC models can account for the disconnect between temperature and CO2 for such an extended period. CO2 concentration climbing like a homesick angel – temperature essentially flat (the “plateau” discussed elsewhere.)

TonyN
Reply to  steveta_uk
January 24, 2016 12:07 am

John Knight, re your recent post.
Even Monckton acknowledges that the Pause may disappear.
“…… The el Nino persists in region 3.4, and that is likely to keep the temperature rising for some months, and perhaps to extinguish the Pause for a time, and perhaps for good”
You are right that it will remain in the record as a period within which you can get a flat line. As this may well be the case for other periods, it could then be argued that it is a ‘cherry-pick’. To guard against that criticism, look again at Hodgart’s paper.

JohnKnight
Reply to  TonyN
January 24, 2016 11:56 am

Is there something in Mr Hodgart’s paper that you feel is beyond dispute/criticism? Something that the same CAGW pushers who (ridiculously) dismiss treating the recent past as particularly significant in trying to determine what is currently happening as “cherry picking” will instead be left speechless by if Mr. Monckton mentions it?
Yet you don’t even mention what that might be as you tell people to read the paper again? Perhaps you mean he ought to just tell those CAGW clansmen to read Mr. Hodgart’s paper, and that alone will shut them up? ; )
Seriously, say it, please or I can’t hear it . .

Michael C
January 22, 2016 10:46 am

Thank you Mr Hodgart for this lucid summary
The obvious ‘trend’ in the first graph that strikes me is no trend, but two stable regimes intersected by a pulse increase in ’98 – the so-called ’98 El Nino. El Ninos, according to the current simplistic model, simply pump water around with wind. They cannot increase the global warmth status; they only redistribute heat. ’98 is the only obvious anomaly. Get to work on that, chaps, and you may make some inroads: was it an increase in incoming energy or a decrease in outgoing? – or (my arm-wave) an injection of tectonic heat?
Aside from this:
“ It should be emphasised that the physical accuracy of any of these data is not under review here and is a separate issue”.
The data are still within a field of 1 °C/century. Knowing how data were collected in the first half of the century, I would hate to be floundering around in the dark on a mountain with these odds.
Personally I feel we are ignoring the very high probability of negative feedback. This has to be the most important influence on the noise. It ain’t noise. It is the mechanism that has preserved our environment for so long.

J Martin
January 22, 2016 1:44 pm

I would have liked to have seen a comparison between the result obtained and the same result with any warming effects of El Ninos removed. Likely that the rate of warming would have been lower still.

brians356
January 22, 2016 2:21 pm

I think I heard His Ludship banging away with gust on a Smith Corona …

brians356
Reply to  brians356
January 22, 2016 2:22 pm

gusto
“Edit” button …please!

Proud Skeptic
January 22, 2016 2:40 pm

I still fail to understand why people even accept the underlying premise that we can accurately measure the average temperature of the Earth. Further, I reject the idea that we have anything of sufficient accuracy to compare it to in order to make a claim like…”the Earth has warmed 0.8 C over the last 100 years.”
Everything I have read on this and other sites leads me to the conclusion that we just don’t know this stuff.

Michael C
Reply to  Proud Skeptic
January 22, 2016 11:48 pm

You are right. No one knows this stuff, yet. I hope I live long enough to see some real understanding emerge

ImranCan
January 22, 2016 7:23 pm

I don’t think the writer has understood what Monckton has done. He also suffers from the illusion that everything started in 1979 (which he incorrectly states as 1977), just because the dataset starts there. Sure, there is a rising trend since then, just as there is a flat trend over the last 18+ years. And if you go back 10,000 years the trend will be declining; go back 50,000 years and the trend is increasing. The point is that whatever trend you identify is a function of the length of the time series. It’s not complicated.

TonyN
Reply to  ImranCan
January 23, 2016 2:21 am

ImranCan; Are you sure you understand what Hodgart has done? Look at Fig 2.

Ryan Stephenson
January 23, 2016 9:20 am

This seems to somewhat misrepresent the situation.
Monckton does not need to “prove” a particular trend nor to make some new prediction. Thus the exact trend that could be derived does not need to fit any particular trend line. It is up to the proponents of AGW theory to demonstrate that THEY can fit their data to a particular trend.
Up to now proponents of AGW theory have claimed that exponentially rising increases in CO2 concentration in the atmosphere have led to increasing global temperatures. They have pointed to temperatures from 1945 to 1997 from their own (dubious) data to demonstrate a trend – fair enough, but Monckton has demonstrated that this trend does not hold for the period 1997 to 2015. Thus the AGW theory remains unproven. This is all he needed to do.
Now the reason for the divergence of the initial trend from the trend over the last 18 years could be one of many. Perhaps there was AGW but we reached a saturation point? Perhaps the data massaged before 1997 were massaged in the wrong way, creating a false trend? Perhaps using thermometers in Stevenson screens isn’t an accurate way to measure climatic temperatures?
I’m going for the last one. Thermometers in Stevenson screens situated in the UK do not measure climate temperatures. They measure the impacts of [1] the season, [2] the level of cloud cover, and [3] the wind speed and direction. These can cause 20-degree differences in temperature over the course of 24 hours – much bigger than the signal you are looking for. Of course you can apply a simple low-pass filter to this “noise” signal to try and find a totally different measurement underneath – but you are far more likely to simply remove all the high-frequency random noise, leaving a low-frequency noise signal that gives the impression you are looking at a trend over a given period of interest when actually you are looking only at a low-frequency random signal. What climate scientists should have been concentrating on over the last 35 years is finding a way to measure climate temperatures with less of the confounding noise from cloud cover and wind – they need this not only to prove their theories might be correct but also to inform their computer models. Without accurate measurements of climate temperatures, how can they possibly test their own climate models? It’s a bit of a disgrace that the climate science community hasn’t spotted this flaw in its science and corrected it long before now.
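The low-pass-filter point above is a real statistical effect (sometimes called the Slutsky-Yule effect) and can be demonstrated on pure white noise; the moving-average window of 21 is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
noise = rng.normal(0.0, 1.0, 2000)  # pure white noise, no trend at all

# A moving average is a crude low-pass filter.
window = 21
kernel = np.ones(window) / window
smooth = np.convolve(noise, kernel, mode="valid")

def lag1_autocorr(x):
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

raw_ac, smooth_ac = lag1_autocorr(noise), lag1_autocorr(smooth)
print(f"lag-1 autocorrelation: raw {raw_ac:.3f}, smoothed {smooth_ac:.3f}")
```

The raw noise is essentially uncorrelated from point to point, but the smoothed version is strongly autocorrelated, so it drifts in long runs that the eye happily reads as "trends" even though nothing underlies them.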

January 23, 2016 3:42 pm

Lord Monckton proves the GREAT PAUSE, as he calls it, by his linear trend… for 17+ years.
This “GREAT PAUSE” as “TEMP PLATEAU” was first discovered back in 2003, at a time at which
this pause was barely (if at all) visible. Good sense and knowledge were needed to counter those hyped-up MILLENNIAL climate-warmist predictions of AR3, which came out in Mar 2003, followed only months afterwards by a PLATEAU study.
This plateau study, according to some blog replies further up, was:
“””” The earliest study that I know of to imply this is by two Russians, Lyubushin & Klyashtorin (2003):
http://www.biokurs.de/treibhaus/180CO2/Fuel_Consumption_and_Global_dT-1.pdf “”””
and Lord Monckton proves this study right, showing now, 13 years later, that the CO2 did not
increase global temps, and trashing AR3 as nonsense science.
Therefore……. the Lyubushin & Klyashtorin PLATEAU is the ONE AND ONLY correct study.
According to L&K, the PLATEAU will continue until 2040, and Lord Monckton will, wishing him a long life, continue to show each month the continuing temp plateau…
and I am sorry that the author Hodgart left this PLATEAU background of the Lord Monckton GREAT PAUSE unconsidered…. JS.

January 26, 2016 10:32 am

So I got curious and decided to evaluate the data another way. I picked a random data point – a weather station. I picked a month, January, and looked at each day, January 1 for instance, from 1990 to 2015. I did this for 12 weather stations. They track as a pause for a lot of years. Now if each weather station were biased, it would be biased the same way. So if they all track, even if they are off, they are off by the same amount. So by real logic, we must assume that the actual temperature variation at the stations is consistent with the pause. I wonder how they would argue against that?

January 27, 2016 2:19 pm

This microscopic view of short-term temperature trends ignores the rather significant macro trend: our little upswing since the Little Ice Age, which has now stalled, is but a minor upswing like prior upswings in temps that, smoothed, indicate we are still on a descent to the next ice age. Our current peak is lower than the prior ones, and those peak tops all trace an overall descending temperature curve. Cruz’s home country and much of the UK would be gone. Now there is climate change to be concerned about. There are real research issues to be dug into, as there are still no clear explanations for all this natural variability that surrounds the current slight increase in temps.
That being said, it’s still fun to see simulation models clearly disproved in this shorter time frame and everyone arguing against this obvious proof with their cherry-picking arguments. Faith is a religious attribute, and it’s pretty clear that the religion of ‘name of the moment’ has sucked in a number of adherents.
The sad state of affairs is that much of its priesthood is clearly aware of paleo-climate history and so is knowingly deceiving the masses.
If they are not aware of paleo-climate history then they are hardly worthy of the term climate scientist.

johann wundersamer
January 31, 2016 9:19 pm

That essay by M.S.Hodgart in no aspect adequately answers the following:
1. In the real world, up to the present time, global temperatures are stagnant.
2. Looking back from the present date, the pause in global warming has lasted more than 18 years.
3. During that more-than-18-year pause in global warming, the however minuscule share of CO2 in the mass of the atmosphere has grown continually.
4. Which says: for more than 18 years Mother Nature has falsified the theory of CO2 driving global temperatures.
Thankfully Monckton of Brenchley has the tools and continues the excellent work of giving a realistic, plainly true view of an aspect critical to climate science.
Best regards – Hans