Guest essay by M.S. Hodgart (Visiting Reader, Surrey Space Centre, University of Surrey)
A feature of the politicised debate – if such it may be called – over AGW (anthropogenic global warming) and so-called 'climate change' is the tendency on both sides to cite only the evidence supporting their views and to ignore what does not. Scientists, of course, are supposed to be above this sort of thing and to take all relevant evidence into account.
One finds a lot of partiality when it comes to interpretation of the trend in climate data – particularly the available time series of average temperature measurements on the surface of this planet. Is it going up or down or has it paused? What is happening?
Sceptical commentators were the first to draw attention to a recent pause or hiatus in global temperatures and are naturally tempted to see this as persisting for as long as possible. The 'warmist' climate scientists – those who compiled the IPCC reports, including those who work for or presumably get their research funding from the UK Meteorological Office – have tended the other way. For a long time they were in a state of denial about any pause, not even conceding any reduction in warming rate, presumably because anything that detracted from the sacred dogma that an uncontested increase in atmospheric CO2 must entail a rise in temperature was very unwelcome.
But where both sides of the debate are often referring to the same data one must ask why it is not possible to come to a more objective conclusion.
I focus first on the time series of remote-sensed TLT satellite measurements released by Remote Sensing Systems. I also look again at the HadCRUT4 data, which were the object of my analysis on WUWT in September 2013. It should be emphasised that the physical accuracy of any of these data is not under review here; that is a separate issue.
Plotted either as monthly or annual updates, the time series of globally averaged temperature measurements shows a substantial random-looking scatter from one month (or year) to the next. This scatter, and a general lack of knowledge as to what exactly drives the temperatures, makes it difficult to determine the trend. Yet so many people debate, write and comment as if the trend in these data were entirely obvious. They think they know – ignoring the fact that the scatter in the data poses a significant problem, not least in establishing what a trend means. The distinguished econometrician Phillips has memorably written (see his introduction):
No one understands trends. Everyone sees them in data.
also (and not altogether ironically)
A statistician is a fellow that draws a line through a set of points based on unwarranted assumptions with a foregone conclusion.
In other words, be careful if you run a linear regression on data like these. In the spirit of impartiality, and with all respect for his warning, I try here to draw reliable conclusions about the trend from these particular cited data. I must however put on record that, like our 'climate lord' Matt Ridley, I am a 'luke-warmist'. My sympathies are with the 'sceptics', because there seems to have arisen an officially-sponsored global warming industry and a general scare-mongering by and of the scientifically ignorant. It has for example become a political 'fact' – contrary to all biology and chemistry – that CO2 in the atmosphere, at present or worst-case future concentrations, is or will be a pollutant, i.e. a poison. It is not; its presence is essential to plant growth and therefore to our survival. The material bulk of all trees and crops derives from, and is converted out of, CO2 in the air. Trees and crops grow out of the air, not the ground! See the brilliant "Fun to imagine" TV series by Feynman. It is difficult to take seriously an unremitting propaganda that is prepared to distort the science as badly as this.
Lord Monckton and the RSS data
Viscount Monckton of Brenchley is a prominent climate sceptic. In a recent release to WUWT he emphasises what seems to him an obvious fact: that global surface temperatures have paused for almost two decades. He is not alone in this view, but let us see how he comes to this conclusion. He appeals first to the TLT satellite measurements released by Remote Sensing Systems (RSS). By the simple procedure of linear regression on their monthly data he finds an effectively zero slope going back to February 1997 (his last cited month being September 2015). I replicate his result in my fig 1 (the red line). In consequence it seems obvious to him – and to so many others – that global warming has indeed stopped for all this time. But has it?
His problem
The problem is that he has chosen to disregard all the prior months of available measurements going back to January 1979. A linear regression over all these months yields a line (brown) with a slope of 0.12 deg C/decade. Although he acknowledges this effect, he does not seem to realise that this longer regression makes his conclusion untenable, whatever assumptions are made as to what the linear regression achieves.
He probably assumes that the slope resulting from linear regression determines the trend in global temperature – in other words, "whatever I choose to calculate, and the way I do it, defines the observed effect". If so, he runs into a flat contradiction. The red line gives him his "Pause" (he uses a capital letter); but the brown line says that over the same time interval temperatures continued to rise. So which? The trend can't be doing both. The RSS web-site plots only the longer-span regression. For them there is no pause.
If however he were to make the more orthodox assumption that linear regression estimates a linear trend there are still difficulties. It could be that the data back to 1997 conforms to a classical signal + noise model with a straight line of some slope and offset (the signal) which one cannot see because of an obscuring random variation (the noise). The standard model is
s[k] = a + b·k    (i)
z[k] = s[k] + v[k]    (ii)
where z[k] is the time series, the variable k is a count in months or years (it is easiest to start at zero), and the signal = trend in (i) is defined by the offset a and rate b. The pair (i)–(ii) together constitute model (1). The noise terms v[k] in (ii) are introduced to account for the random-looking fluctuation we can see in the time series. Ideally they would answer to a description of 'white' noise, but the terms here exhibit some limited correlation – approximating what electrical engineers call 'low-pass noise'. Linear regression yields estimates of the offset a and slope b, which are in error from the true values because of that scatter. There are then two problems – the minor one being that his zero slope is at best a likely estimate; it is not definite.
More importantly, it is difficult to decide over just what span of years model (1) could be valid. We could postulate that it applies over a limited span, but it is asking a lot of Nature to oblige Monckton with even an approximation to a linear model which just happens to start in Feb 1997. If it applies over all the years, then the two regressions are estimating the same trend and the flat red regression is a 'freak' due to a chance combination of noise terms. Again one would conclude that only the longer regression had any validity.
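The point that any regression slope on such data is only a noisy estimate can be illustrated with a short sketch (Python; the series below is synthetic, with a slope and noise level invented purely for illustration – not the actual RSS values):

```python
import random

def ols(z):
    """Least-squares fit of z[k] = a + b*k + v[k]; returns the estimates (a_hat, b_hat)."""
    n = len(z)
    k_mean = (n - 1) / 2.0                       # mean of k = 0..n-1
    z_mean = sum(z) / n
    sxx = sum((k - k_mean) ** 2 for k in range(n))
    sxz = sum((k - k_mean) * (z[k] - z_mean) for k in range(n))
    b_hat = sxz / sxx                            # estimated rate (slope)
    a_hat = z_mean - b_hat * k_mean              # estimated offset
    return a_hat, b_hat

# Synthetic monthly series: a true slope of 0.001 deg C/month buried in
# Gaussian scatter of s.d. 0.1 deg C (both values invented for this sketch).
random.seed(1)
a_true, b_true = 0.0, 0.001
z = [a_true + b_true * k + random.gauss(0.0, 0.1) for k in range(440)]

a_hat, b_hat = ols(z)
# b_hat lands near b_true but never exactly on it: the scatter makes any
# regression slope an estimate, not a certainty.
```

Rerunning with a different seed moves b_hat around the true value – which is precisely the 'minor problem': a computed zero slope is at best a likely estimate.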
Fig 1 RSS monthly data and linear regressions. Red line: regression from Feb 1997 to September 2015 (Monckton's regression). Blue line: regression from mid-1993. Brown line: regression through all data.
But there is hope for Lord Monckton still. It can be shown that the assumption that a linear trend runs over the whole is unlikely to be true. The difference in slope between the two regressions of 0.12 deg C/decade is too large to be attributable to ‘chance’ – as one can readily determine. The two regressions and also a third regression (blue line) calculated from mid-1993 with an intermediate slope strongly suggests that beneath the noise the trend is not following a straight line.
All three lines can be reconciled if we allow that there is a non-linear trend – as indeed the IPCC scientists readily concede in 'Box 2.2' of their latest report AR5. There has to be something more complicated than a straight line beneath the noise. A generalisation of (1) is the classic

z[k] = s[k] + v[k]    (2)
where z[k] are again the data points, and the signal = trend s[k] follows an assumed but unknown curve. The v[k] are again noise terms. The curve hidden in the data can be assumed to cover the whole span of years. Model (1) is at best an approximation over a limited span.
A linear regression is not invalidated by this model but the computed slope has to be interpreted differently. It will have to be seen as an average of a trend with some actual variation within the span of years.
Accordingly the overall regression (brown line) computes an average trend of something which is non-linear between 1979 and 2015. But Monckton's regression is in principle also no more than an average trend. So yes: there is a 'Pause', but its strict interpretation is that "an estimate of the average trend from Feb 1997 to Sept 2015 happens to have a zero slope". And no: he has not demonstrated the most likely actual trend over this time.
As I show below it is much more likely that temperatures were still rising past 1997 and that Monckton only gets his Pause from a later date. As many others have pointed out it is easy to get fooled in statistical analysis by an apparent pattern suggested by what turns out to be the influence of a random component in the data.
Monckton’s construction does have one useful consequence: he has shown that none of these linear regressions (including his own) is likely to be estimating a straight line.
Alternative stochastic model?
In this deterministic trend model (2) there is assumed to be some unknown but well-defined curve or line concealed by low-pass noise – strictly, a weak-sense stationary stochastic process. We need to be aware of a substantial literature which views the entire time series as a generalised non-stationary stochastic process. It is 'all noise'. This approach is the preferred choice of econometricians who have taken a look at climate data. In his extensive publications Professor Terence Mills has looked at both approaches but favours the all-stochastic. If identification of ARIMA processes is your meat then there is plenty to work on. I wish you luck! In my opinion the stochastic approach leads to paradox and terminological confusion. The data series has to be regarded as the output of a feed-forward and feed-back machine whose input is white noise. If this were true then every possible time series is 'random'. So where is your anthropogenic global warming? I will follow the climate scientists and stay with deterministic trend estimation in general and (2) in particular.
Estimating a non-linear trend
If we have to fall back on the generalisation which is (2), then we shall have to estimate s[k] while only having access to the data z[k]. This is an exercise in curve fitting – for which there is a plethora of methods.
The difficulty with all methods of curve fitting is that there are essentially two kinds of error to contend with: the random error or variance due to the omnipresent noise v[k]; and a systematic error or bias due to the poor fit of a proposed fitting function to the unknown hidden signal s[k]. Whatever method is adopted, the unavoidable problem is to decide whether the computed curve is over-fitting (too much random error) or under-fitting (too much bias error). There is a model selection problem.
In my earlier release to WUWT back in 2013, an analysis of the HadCRUT4 data, I proposed using a cubic loess – which Mills shows is superior to quadratic or linear loess – and also a polynomial regression. In the case of loess the problem is to decide on the effective window width, and with a polynomial to decide on the degree.
For loess, if the window width is too narrow, random error dominates over the systematic; if too wide, vice versa. For a polynomial regression, if the degree is too high, random error dominates over the systematic; if too low, vice versa. There are many model identification methods designed to guide the choice – starting perhaps with the Akaike Information Criterion, modifications such as that by Hurvich and Tsai, and many more. There are also various forms of cross-validation. But they seem to me (having tried some of them) to be uncertain and unreliable. Statistical experts may disagree.
Corroborating curve fitting
Whatever the procedure, the would-be statistician is left with a degree of freedom in allocating a crucial parameter. Some years ago, however, I stumbled on the fact that a combination of cubic polynomial loess and a standard polynomial regression offers a unique choice of window width for the former and degree for the latter: the pair which gives the least disparity between the two generated curves. The one selects the other; the combination is self-selective. This idea seemed to work well on the HadCRUT4 data, and this serendipitous result is now found to apply to the RSS data. In fig 2 a (half) window width of 168 months for the cubic polynomial loess and a polynomial degree of 5 give the closest agreement to each other (shown as blue dashed lines with no attempt to distinguish between them).
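A minimal sketch of this self-selection idea follows (Python). Two simplifying assumptions are made: the 'loess' here is an unweighted moving cubic fit rather than a true tricube-weighted cubic loess, and the series is a smooth synthetic stand-in rather than the RSS data; the grid of candidate half-widths and degrees is likewise invented.

```python
import math

def polyfit(x, y, deg):
    """Least-squares polynomial fit via the normal equations (adequate for low degree)."""
    n = deg + 1
    A = [[sum(xi ** (i + j) for xi in x) for j in range(n)] for i in range(n)]
    r = [sum((xi ** i) * yi for xi, yi in zip(x, y)) for i in range(n)]
    for col in range(n):                          # Gaussian elimination, partial pivoting
        piv = max(range(col, n), key=lambda rw: abs(A[rw][col]))
        A[col], A[piv] = A[piv], A[col]
        r[col], r[piv] = r[piv], r[col]
        for rw in range(col + 1, n):
            f = A[rw][col] / A[col][col]
            for j in range(col, n):
                A[rw][j] -= f * A[col][j]
            r[rw] -= f * r[col]
    c = [0.0] * n
    for i in reversed(range(n)):
        c[i] = (r[i] - sum(A[i][j] * c[j] for j in range(i + 1, n))) / A[i][i]
    return c                                      # coefficients of c[0] + c[1]*x + ...

def polyval(c, xv):
    return sum(ci * xv ** i for i, ci in enumerate(c))

def moving_cubic(x, y, h):
    """Crude loess substitute: unweighted local cubic fit over a half-window h."""
    out = []
    for xc in x:
        pts = [(xi - xc, yi) for xi, yi in zip(x, y) if abs(xi - xc) <= h]
        out.append(polyfit([p[0] for p in pts], [p[1] for p in pts], 3)[0])
    return out

def self_select(x, y, half_widths, degrees):
    """Return the (h, degree) pair whose two curves show the least r.m.s. disparity."""
    best = None
    for h in half_widths:
        smooth = moving_cubic(x, y, h)
        for d in degrees:
            poly = [polyval(polyfit(x, y, d), xi) for xi in x]
            rms = (sum((u - v) ** 2 for u, v in zip(smooth, poly)) / len(x)) ** 0.5
            if best is None or rms < best[0]:
                best = (rms, h, d)
    return best

# Smooth synthetic 'trend' on a rescaled time axis [-1, 1].
n = 200
x = [2.0 * k / (n - 1) - 1.0 for k in range(n)]
y = [math.sin(1.5 * xi) + 0.3 * xi for xi in x]
rms, h, d = self_select(x, y, [0.2, 0.4, 0.6], [3, 4, 5])
# The selected pair is the one whose loess-like curve and polynomial agree best.
```

On real data one would scan a finer grid (the article's own result was a 168-month half window against a degree-5 polynomial); the design point is that neither curve is privileged – each validates the other.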
These very similar curves are perhaps the most likely deterministic estimates of the trend, but they cannot be the exact truth. The uncertainty is again due to the noise present in the data. Assuming however that they are 'close enough', what they have in common – if we disregard the discernible oscillation – is a depiction of a rising trend followed by a pause effectively starting around 2003, and not 1997.
Alternative segmented linear regression
The shape of these curves also provides motivation for a different idea: a split or segmented regression. The idea is to run two regressions over all the data years, with a break point chosen to give the least discontinuity between the two segments.
The break point is found after a trial-and-error search to be September 2003. Monckton still gets his pause, but it is now reduced to the last 12 years. The first segment of the proposed regression in fig 2, from 1979 to 2003, finds a computable rate of 0.16 deg C/decade. There is a pause after that, over which the trend is indeed flat. The trend does not literally switch slope in the month of September 2003; the purpose is to provide a meaningful computable rate.
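The 'least discontinuity' search can be sketched as follows (Python; the series is a synthetic kinked line, not the RSS data, so the numbers are purely illustrative):

```python
def fit_line(ks, zs):
    """Least-squares line through the points (ks, zs); returns (offset, slope)."""
    n = len(ks)
    km = sum(ks) / n
    zm = sum(zs) / n
    sxx = sum((k - km) ** 2 for k in ks)
    b = sum((k - km) * (z - zm) for k, z in zip(ks, zs)) / sxx
    return zm - b * km, b

def best_break(z, lo, hi):
    """Try each candidate break point m and keep the one at which the two
    regression segments meet with the smallest discontinuity."""
    best = None
    for m in range(lo, hi):
        a1, b1 = fit_line(range(0, m + 1), z[:m + 1])
        a2, b2 = fit_line(range(m, len(z)), z[m:])
        gap = abs((a1 + b1 * m) - (a2 + b2 * m))
        if best is None or gap < best[0]:
            best = (gap, m, b1, b2)
    return best

# Synthetic series: rising at 0.1 per step up to k = 50, flat thereafter.
z = [0.1 * k if k <= 50 else 5.0 for k in range(120)]
gap, m, b1, b2 = best_break(z, 10, 110)
# The search recovers the break at k = 50, slope ~0.1 before it and ~0 after.
```

With noise added the recovered break month shifts, which is why the article treats September 2003 as a computed convenience rather than a literal switching date.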
Fig 2 RSS monthly data Jan 1979 to September 2015. Dashed blue curves: cubic polynomial loess with 168 month half window width; polynomial regression with degree 5. Continuous red lines: segmented linear regression with break point September 2003.
However, each regression is seen by comparison with the loess and polynomial curves to be an acceptable approximation. The two segments are plausible averages over their respective ranges of data. The apparently contradictory or competing regressions in fig 1 are now explained by more than just positing average slopes of a non-linear trend: some information has been gleaned as to what that trend consists of.
Application to HadCRUT4 data
The RSS data tell us nothing about global trends before 1979, and one has to turn to the publicly available land and sea-based surface measurements. The UK compilation HadCRUT4 goes back to 1850, but the two US series go back only to 1880. It is not my intention to try to assess the accuracy and reliability of any of these compilations. It is clearly a difficult exercise, relying on measurements which were never intended for a systematic global experiment. Particular difficulties must be associated with sea temperature measurements, which historically were very crude indeed. The series is of course under continual review, from both its compilers and sceptical critics – which can only be a good thing.
Avoiding the very important issue of measurement error: what can be inferred about the trend in global temperature if we should decide to trust HadCRUT4? To repeat: in my previous submission to WUWT in September 2013 I used the self-checking combination of a high-degree polynomial fit and a cubic loess. But now let us try something simpler – a succession of split linear regressions. We will need more than one break year. The same criterion will be adopted: that there should be the least discontinuity between successive regressions. All the break years meeting this requirement have to be searched for and discovered by trial and error.
The result of this exercise is shown in fig.3 on the annually updated time series.
Fig 3 HadCRUT 4.4 annual boxed connected points to 2014. Discrete heavy spots are Met Office approved discrete decadal averages. Brown lines are sequential regression segments. Arbitrary start from 1870; break point years 1910, 1942, 1975, 2005. Estimated r.m.s. noise = 0.098 deg C. Red lines estimate average trend; discovered break point year 1941; post-war average trend 0.087 ± 0.012 (2 s.d.) deg C/decade from 1941 to 2014.
I start in the same year, 1870, as in my previous report to WUWT. We need four break years – splitting the trend estimate into five segments (see brown lines). It should be noted that these break years are discovered – not arbitrary choices. The heavy points also depicted are discrete decadal averages of temperature, located in the middle of each decade – a simple statistic which the UK Met Office has long favoured and which was adopted for the first time by the IPCC in their AR5 report (see part 2.4.3 of AR5).
As can be seen the proposed line regressions are in excellent agreement with these averages. This agreement surely promotes confidence in both procedures. Comparison with my earlier presentation also shows a good agreement with optimally chosen cubic loess and polynomial regression. One can see a broad similarity with the RSS time series from the 80s onwards. The temperatures started rising from 1975 and no pause is found until a break year of 2005 (two years later than for the RSS data). With this latest version of HadCRUT4 (now issue 4.4) we now get a low warming rate (of about 0.01 deg C/decade) from 2005 (compare flat response with the RSS data). I have not included the year 2015 which was not completed when running all these calculations.
One should emphasise that (i) these computed lines are probabilities not certainties; (ii) they are not meant to be taken literally but to be seen as approximants to some postulated smooth curve which is hidden from view and for which the loess and polynomial regressions may be better estimates.
The split regression segments graphically convey the impression that there were two long periods when temperatures were actually falling. Temperatures fell from at least 1870 to 1910, but rose from 1910 to 1942. They were then falling again from 1942 to 1975. From 1975 to 2005 warming resumed with a probable rate of 0.20 deg C/decade. But the warming did not persist at this rate. It seems to me probable that a third half-period has begun, in which there is now a pause (though with the revised HadCRUT4.4 it appears as a very slow warming).
This recent pause looks to be a continuation of an oscillation of global temperatures with a period of slightly more than 60 years going right back through the record imposed on a generally rising mean trend. I am not of course the first ‘sceptic’ to point this out.
I come to much the same conclusion as in my 2013 report. It seems that the much simpler sequential regressions are as convincing a way of specifying the trend in the data as my previous effort using polynomial regression and cubic loess.
What is the matter with the UK Met Office and the IPCC scientists?
In the summer of 2013 the UK Met Office, and the academics whom they support, called a press conference in London to concede (reluctantly) a pause or 'hiatus' in global temperatures, and also to confess that they hadn't a clue as to why it was happening. The rather critical BBC journalist David Shukman, who was present, noted that
…the scientists say … pauses in warming were always to be expected. This is new – at least to me… I asked why this had not come up in earlier presentations. No one really had an answer, except to say that this "message" about pauses had not been communicated widely…
Indeed! The press conference coincided with reports by the Met Office (report 1, report 2, report 3) on the same theme. What the Met Office scientists did not discuss, or even concede, in that 3-part report is the presence of substantial oscillation over the historical record. This oscillation surely cannot be attributed to increasing concentration of atmospheric CO2, and it accounts for half the faster rate of warming in the 80s and 90s.
I find it troubling that presumably intelligent scientists (and they have competent statisticians also) cannot bring themselves to acknowledge – let alone explain or even properly discuss – the statistical fact that two extended cooling periods have featured in the past while CO2 levels were presumably always rising.
The reader will find the same statistical obfuscation in the two most recent reports (AR4 and AR5) released by the IPCC. A pause (or hiatus or standstill) is most unwelcome. Yet there is surely something to explain here for those who believe in the dominant anthropogenic effect on global warming. Since at least 1958 with the Keeling measurements (Mauna Loa etc) – and no doubt long before that – atmospheric CO2 levels have been rising monotonically (after seasonal averaging). It is hard to avoid the impression that there has been political pressure not to acknowledge the obvious: that an ever-rising concentration of atmospheric CO2 cannot be the only effect determining global surface temperature.
Trend v. average trend
In principle an oscillation does not have a trend. There is a need therefore to identify a mean trend which discounts that obvious oscillation. As suggested before one can differentiate
trend in the data = mean trend in the data + quasi-periodic oscillation
How then to estimate this mean trend? My previous effort was perhaps too elaborate; the following may be more convincing. One can construct a split regression with just two segments (the two red lines in fig. 3). To my mind these lines steer a convincing middle course through the oscillating trend conveyed by the multiple split regressions. They may be about right. The break year of 1941 is again not an arbitrary choice: it has to be searched for, in order to ensure the least discontinuity between the two regressions with this construction. This notional mean trend is being estimated by two average trends computed by linear regression between favourable years. The post-war average trend is found to be 0.087 ± 0.011 (2 s.d.) deg C/decade, i.e. less than 0.1 deg/decade – half the rate of the actual trend which peaked (temporarily) in the 80s and 90s. The error limits are computed after first estimating the standard deviation of the noise as 0.098 deg C.
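For the error limits, the standard recipe is: estimate the noise s.d. from the regression residuals, then take twice sigma/sqrt(sum of squared index deviations) as the 2-s.d. limit on the slope. A sketch (Python, on an invented annual series whose slope and noise level merely stand in for the HadCRUT4 values):

```python
import random

def slope_with_error(z):
    """Least-squares slope, its 2-s.d. error limit, and the residual noise s.d.
    (estimated with n - 2 degrees of freedom)."""
    n = len(z)
    km = (n - 1) / 2.0
    zm = sum(z) / n
    sxx = sum((k - km) ** 2 for k in range(n))
    b = sum((k - km) * (z[k] - zm) for k in range(n)) / sxx
    a = zm - b * km
    ssr = sum((z[k] - (a + b * k)) ** 2 for k in range(n))
    sigma = (ssr / (n - 2)) ** 0.5               # estimated r.m.s. noise
    return b, 2.0 * sigma / sxx ** 0.5, sigma

# Stand-in series for 1941..2014: slope 0.0087 deg C/year plus noise of s.d. 0.1.
random.seed(7)
z = [0.0087 * k + random.gauss(0.0, 0.1) for k in range(74)]
b, err2sd, sigma = slope_with_error(z)
# b * 10 is the rate per decade and err2sd * 10 its 2-s.d. limit – the form
# in which the post-war trend is quoted in fig 3.
```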
It is extraordinary that in their various releases neither the UK Met Office nor the IPCC seem to want to confront these statistical facts in their own data. It is of course unwise to make a projection into the future, but if we trust neither the elaborate computer climate models favoured by the Met Office nor the projection of Mills-type all-stochastic models, this is all we have got. One can only note that in the 85 years from now to 2100 the projected increase could be around 0.0087 × 85 ≈ 0.74 degrees. Could this be realistic, and if so is that a cause for alarm? I only ask.
Lord Monckton does not merely fit a line to the data from 1997 to the present and show that the slope is close to zero. He CLEARLY states in his posts that he CALCULATES the endpoints based on the hypothesis that the regression trend is close to zero or negative. He is not cherry-picking the endpoints; he is deriving them from the time series.
My understanding of Monckton's argument is that the climate scientists themselves have said that if there is a 15-year pause, then something is wrong with the models. He has found a 15-year pause. Ergo, something is wrong with the models. The fact that if you go back more than 15 years you don't have zero trend anymore is irrelevant to the argument he is making.
Yes, but worse.
First they said 10 years would invalidate the models (Jones)
Then they said no, it would take 15 years (Santer)
Then they said 17 years (Santer again)
After moving the goal posts several times, they’ve now taken the position that the playing field doesn’t exist.
“First they said 10 years would invalidate the models (Jones)
Then they said no, it would take 15 years (Santer)
Then they said 17 years (Santer again)”
You should quote properly. None of those people said any of those things.
They moved the playing field to space…it’s the satellites
Problem is, they have tweaked the temp history so badly… that no computer game will ever be right
The games are predicting sky high… based on an unreal past… but they are right in line with that unreal past
Just for you Nick. You are technically correct. Happy now? 😉
Dr. Phil Jones – CRU emails – 7th May, 2009: 'Bottom line: the "no upward trend" has to continue for a total of 15 years before we get worried.'
Santer 17 year: http://nldr.library.ucar.edu/repository/assets/osgc/OSGC-000-000-010-476.pdf
Shows the importance of actually quoting what was said. Turns out PJ spoke of 15 yrs, not 10. Or was that Santer talking about ENSO-adjusted numbers?
But then, so was Jones. The full quote from the email was:
” Bottom line – the no upward trend has to continue for a total of 15 years before we get worried. We’re really counting this from about 2004/5 and not 1998. 1998 was warm due to the El Nino.”
And yet…
Global warming stopped 18 years, 8 months ago.
The alarmist crowd bends itself into pretzels attempting to deflect from the fact that global warming has been STOPPED for many years.
Now I see why martyrs will die to be right…
Besides, Nick, why should we listen to anything Jones said? He proved himself to be a thoroughly corrupt, dishonest individual. So of course he will ‘Say Anything’ that is self-serving.
Why do you make an unethical rascal like that your HE-RO? Jones lied for money, for status, and to support the UK’s good old boy network. But you approve.
I don’t see very many credible people on your side of the fence. Certainly there’s a lack of honesty. And ZERO scientific skepticism…
Yes, I think the OP’s argument about Monckton doesn’t make sense.
Here’s an example:
Suppose the 20th century warming was as follows:
A constant rise from 1900 to 1950
Zero change from 1950 to 2000
I then make the claim that there was zero global warming from 1950 to 2000.
The claim is obviously true: the graph for that period is perfectly flat.
The fact that there was warming prior to 1950 is completely irrelevant. My statement was specifically for the period 1950 to 2000.
I think Christopher Monckton is right.
Chris
Nick
Flat out did the models predict the pause?
Bob Boder,
Their models couldn’t predict the sun rising in the morning.
“The fact that if you go back more than 15 years, you don’t have zero trend anymore is irrelevant to the argument he is making.”
All statistical analysis is directed towards answering a single question: does the imperfect observational data support or falsify some given model? Note well that it is the statistician/scientist who supplies the model(s) in EVERY case. Choice of some particular model against which the observational data is to be compared is an intrinsic part of statistics.
Mr. Hodgart quarrels with Lord Monckton’s model. That model is that a “pause” is a useful measure of what recent global temperatures are doing when compared with the GCMs. Monckton carefully defines a “pause” as the maximum length of time measured backwards from the present with zero trend. Objections could certainly be raised to that particular model, but it is entirely wrong to think that a statistical analysis could somehow DISPENSE with a model.
Mr. Hodgart, for instance, speaks of trends back to 1977, break points, polynomial regressions, and cubic polynomial loess, but all of those choices are entirely his own. They don’t somehow arise magically out of the observational record itself, and they are certainly modeling choices that are every bit as disputable as Monckton’s.
Agreed. Monckton doesn’t help things, though, by placing a left-to-right arrow over the graph. Makes it look like he picked the left point.
To TYoke. There is a general problem in identifying the trend in a time series of no known theoretical structure. There are fundamentally two different approaches – the deterministic and the stochastic. The former is perhaps easier to follow. The classic introductory text Kendall and Ord [REF] gives some idea of what it entails. I follow these experts in regarding the trend as whatever in principle could be removed from the time series to leave no trend. All may surely agree that a white noise sequence (or a filtered approximation – technically a weakly stationary stochastic process) shows no trend.
This consideration leads automatically to the signal + noise deterministic model of eq 2 to account for the time series, where the trend is more precisely identified with the slope of that signal component. On the RSS data this signal seems to be following some kind of curve. In general trends can be expected to be non-linear – a point which is now conceded by IPCC scientists – see Box 2.2 in ch 2 of the AR5. The signal + noise model is widely adopted throughout the science and engineering world. What other model would you suggest? Ultimately perhaps it is a matter of arriving at a convention acceptable to a participating and informed community.
I made at least one typo in referring to the start of the RSS data as 1977 rather than 1979. Apologies.
[REF] Kendall, M.G. & Ord, J.K. (1990) Time Series, 3rd edn.
Which means, in the same sense as Hodgart states, that the points are not arbitrary but calculated based on the assumption of a certain class of curve. No less valid than his own method, for the purposes that were clearly stated.
Another point Hodgart glosses over: it would be very interesting to see how his Fig. 3 would change if the chosen dark brown points were averages at the beginning of each decade, rather than the middle. It would certainly affect the point at which the slope changes on the far right. But would it be earlier, or later?
I don’t know the answer. But it illustrates the very kind of arbitrary choices Hodgart appears to be railing against.
I probably understand this the least of anyone commenting, but I understand that Monckton’s methodology is to run his calculation starting from the present, then rerun it, extending the period by one month at a time, until he gets to a point where the trend is no longer flat (or statistically flat). At that point he stops. He has been very up front about this and it makes sense to me.
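That month-by-month procedure reduces to a simple search: step the start of the window back from the most recent month and keep the longest window whose least-squares slope is zero or negative. A sketch (Python; the 400-month series is synthetic – a ramp followed by a flat tail – not the RSS data):

```python
def trend_slope(z):
    """Least-squares slope of the series z[0..n-1] against its index."""
    n = len(z)
    km = (n - 1) / 2.0
    zm = sum(z) / n
    sxx = sum((k - km) ** 2 for k in range(n))
    return sum((k - km) * (z[k] - zm) for k in range(n)) / sxx

def pause_length(z):
    """Longest span of months, ending at the latest month, with trend <= 0."""
    for length in range(len(z), 1, -1):          # try the longest window first
        if trend_slope(z[-length:]) <= 0.0:
            return length
    return 0

# Synthetic series: 300 months of steady warming, then 100 months dead flat.
z = [min(k, 300) / 600.0 for k in range(400)]
# pause_length(z) reports the 100-month flat tail; any longer window picks up
# part of the ramp and so acquires a positive slope.
```

The returned length is thus derived from the series, as this comment says – though, as the head posting argues, it remains an average-trend statement over that window, not a reconstruction of the underlying curve.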
That said, for something that doesn’t exist, the Pause has sure had a lot of people (ones who claim to know something) trying to explain it away. My understanding is that they have even gone back and messed with the data in order to send the Pause into the same memory hole as the Medieval Warm Period.
If Monckton doesn’t know what he is talking about, then he sure has a lot of company in this on the other side.
I am very grateful to a supporter of the splendid Ted Cruz, and to many other commenters here, for their kindness in drawing attention to the fact that the word “impartial” that the author of the head posting has awarded to himself is more than somewhat of a misnomer.
Given that Dr Hodgart is a Reader (a sort of senior lecturer), he ought perhaps to have taken the trouble actually to read my monthly updates to the global temperature record.
Had Dr Hodgart read my temperature updates, he would have realized a number of points. First, as Couldn’t-be-Cruzier has rightly pointed out, I state quite clearly every month that the length of the Pause in my RSS graph is calculated as the longest period of months, ending in the most recent month for which data are available, during which no global warming has occurred.
Dr Hodgart accuses me of having ignored the rest of the RSS dataset since 1979. Had he read my material, he would have seen that I frequently – and never less than six-monthly – show the full RSS dataset, along with the full UAH, HadCRUT, GISS and NCEI datasets. If I remember rightly, I showed the full RSS dataset not more than a month back.
In his preachy tone, he presumes to lecture me on various matters on which he is simply wrong. He objects to taking linear trends on stochastic data, but linear trends are what the IPCC uses and Phil Jones recommends; everyone (except Dr Hodgart) understands them, and the IPCC’s 1990 predictions for warming this century are themselves almost linear. I use linear trends because that removes one source of potential disagreement between me and the IPCC. They can hardly complain if I use their own method to see whether their predictions are coming true. Indeed, any genuinely impartial analysis of global temperature trends would compare those trends with IPCC predictions over various periods, as I did a couple of weeks ago.
Dr Hodgart says one cannot or should not use linear trends on stochastic data. Nonsense. It is precisely when data are stochastic that calculating a linear trend gives us some idea of whether there has been warming, cooling or no change over some selected or calculated period.
Dr Hodgart – without the slightest justification – says, in effect, that I am assuming that the trend on past data is a prediction of a trend on future data. Again, if he had done me the credit of actually reading my monthly postings before commenting on them, he would have seen that just about every month I include a warning that a trend is not a prediction.
Much of Dr Hodgart’s posting is devoted to a rambling and somewhat inexpert proposal to use several methods to determine – or not to determine – trends on the temperature data. Most textbooks of statistics contain warnings about what seems to be Dr Hodgart’s favorite proposed method – juggling with linear trends over multiple periods to see where they join up. That is a fool’s game, and one which has led to the IPCC being reported to the Swiss fraud authorities. It also contradicts Dr Hodgart’s assertion (albeit a nonsensical one) that the full dataset is better for determining trends than a partial dataset.
What Dr Hodgart and many others who have clumsily tried to challenge my surely simple monthly graphs fail to appreciate is that, with CO2 concentration increasing rapidly, the likelihood of very long pauses such as the near 19 years shown by the RSS dataset is supposed to be vanishingly small: yet even the IPCC admits the existence of the Pause and, in consequence, has greatly reduced its predictions of near-term global warming – a fact that the “impartial” Dr Hodgart somehow failed to mention.
The reason why the past couple of decades are important is that during those two decades the rate of increase in CO2 concentration rose. Models tell us that there should be an instantaneous and quite strong warming response. But that response is not happening. And that raises questions – in the minds of genuinely impartial observers, but not, perhaps, in the mind of Dr Hodgart – about whether the models are all they are cracked up to be, and whether the “science” is truly “settled”.
I suspect that Dr Hodgart is part of a concerted effort – noticeable in recent months – to try to do away with the Pause. There was the ludicrous Tom Karl paper tampering with the ARGO bathythermograph dataset because it inconveniently showed no warming of the surface strata of the ocean, and what little warming it does show (equivalent to a terrifying 1 degree every 430 years) is coming from below, and not from above.
Then the ERSST temperature data were tampered with in a manner that conforms to Karl’s paper. Then came the 20-lies video from the usual suspects. Now Dr Hodgart comes along, self-evidently not having read or understood my monthly analyses, and does his level best to cast doubt upon them and nasturtiums at me, presenting himself as though he were as skeptical as my noble friend Matt Ridley.
Well, it won’t wash.
If Dr Hodgart wants to do away with the Pause, all he has to do is wait. The El Niño persists in region 3.4, and that is likely to keep the temperature rising for some months, and perhaps to extinguish the Pause for a time, and perhaps for good – after all, one would expect some warming as a result of our enriching the atmosphere with greenhouse gases. Even if rising temperatures do not eradicate the Pause, the unspeakable Dr sMears of RSS, who participated all too enthusiastically in the “20 lies” video attacking the rival satellite dataset, looks as though he is gearing up to rewrite the RSS data to ensure that the Pause does not return in any event.
However, notwithstanding the vast amounts of data tampering in which the keepers of the terrestrial temperature datasets have already indulged, the rate of global warming over just about all timescales and on all datasets has proven to be, and continues to be, very considerably less than what was predicted. That is the central fact that emerges from my temperature analyses. It is a fact that Dr Hodgart barely addresses.
Finally, one of the many points on which Dr Hodgart is simply wrong is his assertion that, in principle, an oscillation does not have a trend. It is, however, perfectly possible for an oscillation to occur either side of a rising trend – indeed, that is what seems to have happened to global temperatures in the 20th century, as Syun-Ichi Akasofu has pointed out.
I hope that this balancing comment will go some way to restore the impartiality claimed by Dr Hodgart in the title but lamentably absent from his posting.
Thanks, Dude (Lord Dude?) I really enjoy reading your stuff!
C.M. stated, “I state quite clearly every month that the length of the Pause in my RSS graph is calculated as the longest period of months, ending in the most recent month for which data are available, during which no global warming has occurred.”
As I read through the Dr. Hodgart article, I wondered why such an obvious point had seemingly been overlooked. At the time I wanted to post something to that effect, but I’m now glad that I held off, as you’ve (re)explained it much better than my feeble attempts could ever have accomplished.
Thank you again for the points and clarification. Keep up the excellent work.
Both Monckton’s and Hodgart’s methods are flawed. Monckton’s method, in which the pause is “calculated as the longest period of months, ending in the most recent month for which data are available, during which no global warming has occurred”, is the very definition of cherry picking. He’s picking the month for which the OLS slope is zero, because that is the result he wants to find. When calculating the slope he ignores all the data before each start date – which means the intercept for his fitted line is incorrect – it assumes temperatures magically jumped from the pre-pause levels to the level at which they “paused”, all within a single month. It makes no sense. I can use the exact same method to ‘show’ that there has been a surge in the rate of warming since Feb 2007 (using the same RSS data). That is the longest period, ending in the most recent month, during which the rate of warming has exceeded the highest rate observed prior to May 1997 (the current start of Monckton’s pause). So using Monckton’s method of ‘calculating’ the start date of climate periods, there has been a pause in warming since May 1997 but a surge of very rapid warming since Feb 2007 – i.e. the last half of “the pause” has been a “surge”. This is the sort of logical absurdity that arises from flawed methods that ignore prior data (the fact that you cherry-pick from “all available” data does not change the fact that you ignore earlier data when calculating trends for any specific period).
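The quoted trailing-trend rule can be sketched in a few lines — a hedged illustration on synthetic data, not the actual RSS record:

```python
# Sketch of the trailing-trend rule quoted above: the "pause" is the
# longest window ending at the most recent sample whose OLS slope is
# non-positive.  Data here are synthetic, not the RSS record.
import numpy as np

def ols_slope(y):
    """OLS slope of y against the sample index 0..n-1."""
    return np.polyfit(np.arange(len(y)), y, 1)[0]

def pause_length(y):
    """Longest trailing window (in samples) with non-positive OLS slope."""
    longest = 0
    for n in range(2, len(y) + 1):
        if ols_slope(y[-n:]) <= 0:
            longest = n
    return longest

# A monotone rise gives no pause; a monotone fall is "all pause".
print(pause_length(np.arange(20.0)),      # 0
      pause_length(-np.arange(20.0)))     # 20
```

Note that, as the comment observes, each trailing fit stands alone: nothing constrains its intercept to join up with the data before the window.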
Hodgart’s piecewise regression at least uses all the data and fits a continuous line – avoiding the magical jumps implicit in Monckton’s approach – but despite mentioning the “model selection” issue, he ignores it completely when choosing his break point(s). A “process of trial and error” is also known as “fishing”, and is very poor statistical practice. With 400+ months to choose from, you are almost guaranteed to find a piecewise model that fits the data better than a simple linear model. This must be accounted for in deciding whether the 2-line model is better supported by the data than the simple linear model. When you do change-point models properly – using methods that account for the model-selection uncertainty (i.e. methods that penalize the extra freedom associated with (1) additional parameters and (2) many possible change-points) – you find no evidence of a pause in global warming, in any dataset.
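The penalized comparison described here can be sketched as follows — a minimal illustration on synthetic data, using a continuous “broken-stick” model and a BIC penalty that charges for both the extra slope and the break location (the particular penalty is an assumption, not the commenter’s exact method):

```python
# Sketch of a penalized comparison between a single linear trend and a
# continuous two-segment ("broken-stick") fit.  The BIC penalty counts
# the break location as a free parameter; data are synthetic.
import numpy as np

def sse_of_fit(X, y):
    """Residual sum of squares from an OLS fit of y on the columns of X."""
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    return res[0] if res.size else float(((y - X @ beta) ** 2).sum())

def bic(sse, n, k):
    """Gaussian BIC: n*log(sse/n) + k*log(n), with k free parameters."""
    return n * np.log(sse / n) + k * np.log(n)

def compare_models(x, y):
    n = len(y)
    sse_lin = sse_of_fit(np.column_stack([np.ones(n), x]), y)
    sse_brk = np.inf
    for k in x[2:-2]:                          # candidate break points
        hinge = np.maximum(0.0, x - k)         # continuous "broken stick"
        sse = sse_of_fit(np.column_stack([np.ones(n), x, hinge]), y)
        sse_brk = min(sse_brk, sse)
    # k = 3 (intercept, slope, sigma) vs k = 5 (+ second slope + break)
    return bic(sse_lin, n, 3), bic(sse_brk, n, 5)

rng = np.random.default_rng(1)
x = np.arange(100, dtype=float)
y = np.where(x < 60, 0.02 * x, 1.2) + 0.05 * rng.standard_normal(100)
bic_lin, bic_brk = compare_models(x, y)
print(bic_brk < bic_lin)   # here the break is strong enough to beat the penalty
```

With a genuine break in the generating process the penalized broken-stick model wins; on trendless noise the penalty usually pushes the choice back to the simple line.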
Jim (and pretty much everyone else) – Wasn’t it Ernest Rutherford who said something like, “If your experiment relies heavily on statistics then you need a better experiment.”?
If you sort through all of the BS and get right down to it, the climate change debate revolves heavily around how you analyze the numbers. Everyone seems to think everyone else’s statistical methods are wrong. The argument seems to go in circles, and ultimately, to us outside observers, it just looks like people are still trying to figure out some pretty fundamental stuff here.
You can argue ad infinitum about how you crunch the numbers but if the numbers are questionable to begin with then it seems to me that you are just compounding garbage. I’ll stick with my position that you shouldn’t even be talking about any of this stuff until someone establishes the following two things…
1. That we can accurately measure the current temperature of the Earth. Of this I am skeptical.
2. That we can accurately establish the same value for 75 or 80 years ago. On this point I am virtually certain we cannot.
IMHO, until both of these things are established beyond a doubt, the rest of this is masturbation.
To Lord Monckton’s credit, he is using the exact methodology of the opposing view to deflate their own argument. This is smart. But mostly I think it is a mistake to let all of the rest of this stuff get debated when you can cut the legs out from under the whole thing by disagreeing with the underlying premise – that we can measure these things: “Prove to me that you can measure the temperature of the Earth to within a tenth of a degree C. Also, prove to me that you can do the same with data from 1920.”
For Jim (Jan 24 2016 3.30 pm)
At last some informed criticism and a chance to have a reasoned argument! ‘Jim’ very clearly identifies what is so wrong with Monckton’s procedure. He explains all this much better than I could. But he also objects to my piecewise regression which he regards as “fishing” and “very poor statistical practice”. He sees “no evidence of a pause in global warming”.
I have never been a professional statistician but I have always been interested in random variables, estimation theory and the delicate art of what constitutes valid statistical inference. I try to be aware of the multiple traps which lie in wait for the unwary. I really do not think that I was ‘fishing’.
If he reads my text carefully he will see that first I checked what the result of a curve-fitting exercise would be (the two blue curves, by a cubic loess [Mills] and polynomial regression). Visitors to WUWT do not want to see a lot of maths, but model selection by the principle of joint corroboration was my priority consideration in selecting these curves (no – I have not tried to publish this methodology). It then seemed to me, looking at these plotted curves, that “if we disregard the discernible oscillation, (there) is a depiction of a rising trend followed by a pause effectively starting around 2003 – and not 1997”. That is what I see – don’t you?
The two segmented linear regressions are now justified as piece-wise linear approximations to these curves – a perfectly respectable mathematical technique. Locating the one break point is most easily achieved by trial and error (also an entirely respectable technique and indeed the fundamental principle behind all science and engineering). One could of course try to fit more segments with more break points but I do not see any reason to do so. Occam’s Razor should apply just as much in statistics as in science. I am of course aware of the notion of degrees of freedom, and the need when estimating the mean square error (MSE – fitting error, goodness of fit) to lose another degree of freedom per break point after dividing into the sum of squares of the residuals. If I follow these rules and compare with the MSE computed for just one regression (respecting model 1) I find no improvement. So by that criterion he is quite right. However the MSE is not everything in my opinion. There is such a big difference in slope (by 0.12 deg C/decade) between Monckton’s linear fit on his subset of data from 1997 and the slope of the linear fit all the way from 1979 as to make the ‘null hypothesis’ of that model 1 highly unlikely. I proved this to my satisfaction both by simulation and direct analysis. I would be glad to let him see the details if he is interested. So there are two strands of evidence for a pause: one positive and the other negative.
[Mills (2009) and his “Modelling current temperature trends” – except that I use the standard tri-cube weighting kernel.]
MSH
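The kind of simulation check mentioned above — asking whether so large a slope difference between a sub-period and the full period is plausible under a single-trend null — might look roughly like this. The noise level, series lengths and white-noise assumption are all illustrative, not Hodgart’s actual values:

```python
# Monte Carlo sketch: under a null of one linear trend plus white noise,
# how large a gap between the sub-period slope and the full-period slope
# should we expect?  All parameters are illustrative assumptions.
import numpy as np

def slope(y):
    return np.polyfit(np.arange(len(y)), y, 1)[0]

def simulate_slope_gap(n_full, n_sub, sigma, trials, rng):
    """Null distribution of |sub-period slope - full-period slope|."""
    gaps = np.empty(trials)
    for i in range(trials):
        y = 0.001 * np.arange(n_full) + sigma * rng.standard_normal(n_full)
        gaps[i] = abs(slope(y[-n_sub:]) - slope(y))
    return gaps

rng = np.random.default_rng(42)
gaps = simulate_slope_gap(n_full=440, n_sub=225, sigma=0.1,
                          trials=2000, rng=rng)
observed_gap = 0.001           # hypothetical observed slope difference
p = np.mean(gaps >= observed_gap)
print(round(p, 3))
```

A small p would say the observed slope gap is hard to reconcile with the single-trend model; with serially correlated noise the null distribution would be wider than this white-noise sketch suggests.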
A 10-year pause is roughly a 2-sigma event (about 5%), 15 years roughly 3 sigma (about 0.3%), and 19+ years roughly 4 sigma (well under 0.01%). This is the probability, if the theory is correct, that the event would occur. So either the record of the last 20 years is a freak 1-in-10,000 occurrence, or the theory is wrong.
Umm, I think the theory is proven wrong.
Climate scientists seem to be the only scientists who go with the idea that as long as there is a 1 in 10,000 chance they are right, then they are right (not just could be right). Normally the scientists I have read apply the opposite criterion: unless there is less than a 1 in 10,000 chance that they are wrong, they do not claim to be right.
A real scientist, as the probability grew that their theory wasn’t going to hit the numbers, would withhold publication and would admit they were looking “bad” – in the sense that the theory was not confirmed, i.e. they would admit it wasn’t certain.
Of course climate science isn’t like physics or the hard sciences, I guess. They don’t – can’t – hold themselves to such a high standard. The data are imprecise, the models aren’t worked out entirely, and there is hardly any way to do experimentation, so this is as good as they can do. Right? Admitting that would mean that it isn’t settled and that they aren’t real scientists, but more like sociologists who interview people and get wildly differing results depending on lots of confounding factors.
Any real science would have admitted that the theory was in trouble 10 years ago, when the data started moving away and numerous “worrisome” things were not conforming: humidity, clouds, temperature in the lower troposphere, the unpredicted accumulation of heat in the ocean, the inability to model the PDO/AMO and the clear miss of the 60-year PDO/AMO cycle in the first place, the inability to nail down attribution for other factors – all of these bothersome things, and now, on top of all this, the variance between satellite and land temperatures, which invalidates the CO2 hypothesis.
You see, by the theory of CO2, the heat should be growing in the atmosphere over the land FASTER than on the land itself. Yet according to Hansen/Mann we have temperature growing faster on the land than in the lower troposphere. The clear conclusion is that the excess heat on the land is NOT caused by CO2. It can’t be. This leaves them with the thorny problem of explaining where the heat on the surface that they manufacture with their adjustments is actually coming from. It can’t be CO2, or humidity, or clouds, as those are not confirming the theory either.
On the other hand we have the assurance from Hansen and Mann that the theory is solid, the temperatures this last year were hotter than EVER and that we are the cause of it all for sure. So, now I am reassured. I will wait for the explanations in the next life I guess.
Please check out my articles:
https://logiclogiclogic.wordpress.com/2016/01/21/48-inconvenient-truth-nytimes-lies-2015-wasnt-the-hottest-year-on-record/
and
https://logiclogiclogic.wordpress.com/2015/12/21/failures-of-global-warming-models-and-climate-scientists/
I meant for most scientists 1 in 10,000 they are wrong then they can claim to be correct.
Dr Hodgart, in his replies here as in the head posting, continues to fail to see the elephant in the room, which is that according to IPCC predictions there should be, at the very least, continuing warming at about twice or thrice the rate the datasets show; but instead the rate of warming has been declining and is now at just about its least value since the satellites began watching in 1979.
The simplest way to illustrate this decline in the warming rate – a decline not predicted by the models – is to calculate each month how far back one can go without finding any global warming at all. No amount of desperate statistical prestidigitation on Dr Hodgart’s part can conceal that fact.
And he should not have repeatedly misrepresented what my monthly temperature reports actually say. His posting was not science but mere politics.
The point of highlighting the ‘Pause’ to the AGW camp is to show, as an outcome, that man-made CO2 is not the cause and driver of runaway temperature – one at which we should throw billions of dollars now. It’s really a question of opportunistic use of resources. E.g., use the money to clean up soot, replace wood or dung fires, and reduce poverty now, instead of jet-setting around the globe restricting fossil-fuel use, and the results will be better than what’s going on at present.
Oops. Did not check for typos. P.S. Any statistics or plotting makes no difference to temperatures. Know what I mean?
“The red line gives him his “Pause” (he uses a capital letter); but the brown line says that over the same time interval temperatures continued to rise. So which ? The trend can’t be doing both.”
As has been pointed out here at WUWT, you could make the same claim about a 40 year time series of my height. Yet I clearly stopped growing taller 20 years ago! With this example, one can see the fallacy of claiming the trend of my height can’t be doing both (pausing and increasing.)
If Monckton (by way of observations) shows a significant period contains no trend in temperature while the CO2 trend rises, then this is in fact a very significant observation. I’m not sure why that isn’t a crucial exercise in theory testing, regardless of where that period falls. I don’t understand the problem.
It may also be worth noting that the slope of the brown line has been decreasing for the entire length of the pause.
It’s also worth noting that Dave’s height will most likely decline if his spine experiences degenerative disc disease as he ages. (Sorry Dave!)
I am sorry, but being a ‘visiting reader’ of anything has never been a credential for, well, anything. You must know that around here…
What?
Huh?
I’m always amused by comments that seek to question credentials over addressing the actual content. It strikes me as either being too lazy, or incapable of finding fault with it, or both.
That is not to say the content can’t be faulted: I’m still ruminating over the “can’t be both” remark, since it’s a comparison of trends over different time periods (in which case, of course they can be different, i.e. “doing both”)
“I’m always amused by comments that seek to question credentials”
Likewise I am always amused by articles where the authors seek to display credentials in an effort to buttress their content. (argumentum ad verecundiam) anyone….anyone
As an interested amateur-thanks!
@Matt – what is your point? Please re-write your sentence(s).
I prefer this graph to view the “drastic” global temperature change since 1880:
The source is the GISS data.
https://suyts.wordpress.com/2013/02/22/how-the-earths-temperature-looks-on-a-mercury-thermometer/
“Reader” is UK-ese for “professor.” The fact that “visiting reader” would be bizarre if it meant reader of this website should have tipped you off that you should look it up.
In the UK, if “reader” meant “janitor” or “village idiot” or “prince”, the post still doesn’t make any sense…
Isn’t this analysis similar to break point analysis, like tamino does?
Piecewise regression, perhaps … with such auto-correlated time series and no independent observations possible, I’m not clear what this means for ‘trends’, aside from offering intriguing ways of exploring the data.
“Any statistics or plotting makes no difference to temperatures.” Unless the temperatures have been adjusted prior to plotting to show the desired outcome and trend.
And if I look at the temperature trend for the entire HOLOCENE I see a definite trend. Trends are all over the place. So pick your timeframe and then ask: IS IT SIGNIFICANT? If you cannot see a trend change in global temperature graphs starting around 1998-2000 then you are blind, or a warmist, or a liar. The CO2 trend, however, looks much more consistent, with no concurrent decline like the temperature trend. So, is it significant or not? If not, then neither are your “other” 18-year trends, wherever they may be. Be anything you like, but damn, be consistent please.
Lovely
I am not at all good at math, but the various paleoclimate and historical temperature climate reports (other than Michael Mann’s) all looked like something producing an oscillation with a great deal of noise. Presumably any curve can be described mathematically, and this looks, in my ignorance, to be a decent effort.
Equally notably, the curve does not fit my understanding of the IPCC models expected results.
The comments above my last seem to be angrily agreeing with the author (who says it’s probably not a straight line)
“The problem is that he has chosen to disregard all the prior months of available measurements going back to January 1977…”
Data is not information, information is not knowledge, knowledge is not understanding, understanding is not wisdom – Clifford Stoll
You produce a cubic fit using a 168-month half window. So shouldn’t your resulting curve begin at 168 months after the first data point and end 168 months before the last data point? Just wondering.
[168 months at either end – not both. Could be 84 months on both ends – but that’s a different smoothing process. .mod]
Andrew, you have hit a considerable nail on the head: the problem of computing a smoothed curve all the way from start to stop of a finite time series when using data both before and after each computed point. You are right. This cannot be done when getting close to either end if you are trying to use the whole of an averaging window. This is sometimes known as the end-point problem (you get exactly the same problem with a simple running average).
There are 440(+1) data months, so that in principle, using the full-width smoothing window, one can only generate a smoothed curve with a very limited duration of 440 − 2×168 = 104 months. To extend the curve ‘my’ solution is to allow the window to ‘run off the ends’. The shape and width of the window do not change but it has to work from less data. In the limit, the very first smoothed month uses only the following 168 data months while the very last smoothed month uses only the preceding 168 data months. You may complain that this gives less effective smoothing and greater random error in the estimating curve towards either end, and you would be right, but it can’t be helped. Exactly the same problem exists when fitting high-degree polynomials, and (I believe) with any process designed to smooth a finite-length time series.
Admittedly there are other ways, but I believe this to be the ‘natural’ solution. I follow the methodology implied by Terry Mills in one of his clever papers [Mills 2009] and what he calls a ‘non-parametric local trend fit’. One is running a weighted least-squares fit within whatever data are available (I use the standard tri-cube weighting rather than his choice of ‘Gaussian kernel’, but I don’t think it makes much difference).
[Mills 2009] Mills, T.C. (2009), “Modelling current temperature trends”, Journal of Data Science, 7, pp. 89–97.
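A minimal sketch of this “run off the ends” approach — assuming a tri-cube-weighted local *linear* fit rather than Mills’s exact formulation, on synthetic data:

```python
# Sketch of a tri-cube-weighted local linear fit in which the fixed-width
# window is simply allowed to "run off" the ends of the series: the first
# and last points are fitted from one-sided data only, as described above.
import numpy as np

def tricube(u):
    """Tri-cube kernel: (1 - |u|^3)^3 on [-1, 1], zero outside."""
    u = np.abs(u)
    w = (1 - u**3)**3
    w[u >= 1] = 0.0
    return w

def local_trend(y, half_width):
    x = np.arange(len(y), dtype=float)
    smoothed = np.empty_like(x)
    for i in range(len(y)):
        w = tricube((x - x[i]) / half_width)   # near the ends, much of the
        W = np.diag(w)                         # window has zero weight
        X = np.column_stack([np.ones_like(x), x - x[i]])
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        smoothed[i] = beta[0]                  # fitted value at the centre
    return smoothed

# Smoothing a noisy ramp; expect larger random error near either end,
# exactly the end-point behaviour discussed in the comment.
rng = np.random.default_rng(3)
y = np.linspace(0, 1, 120) + 0.05 * rng.standard_normal(120)
s = local_trend(y, half_width=30)
```

Replacing the two-column design matrix with higher powers of (x − x[i]) would give the local cubic variant mentioned earlier in the thread.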
I have to ask: what is the purpose of fitting a high-order polynomial to /any/ data that arise from a time series? It can surely only be with the intention of projecting the fitted curve beyond the range of the actual data. Fitting such a model implies a degree of belief in its existence; otherwise why use it?
The ubiquitous and naive linear model beloved by climatologists is certainly used in the hope of guessing something fairly reasonable regarding a potential extrapolation. Despite its inappropriateness it is usually the safest one to extrapolate. Its confidence intervals are simply two hyperbolae on either side of the least squares fit.
If you go to the trouble of computing the equivalent intervals for a second order fit – the simplest polynomial – you will find that any extrapolation produces rapidly diverging curves if you compute them outside the actual data range. With a cubic model these divergences become spectacular and for quartic and quintic models are ridiculous.
It always irritates me that people who display least squares fits to time series data seem never to bother with this vital piece of inferential statistics. The software I use (my own) carries out these operations on request. It was always very sobering for the scientists who presented their data to me, and so I developed the habit of demonstrating these properties of polynomials to my clients before they began their experimental work.
Please be careful with polynomials and /never/ extrapolate them.
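The warning is easy to demonstrate on synthetic data: polynomials of increasing degree fit comparably well in-range, but their extrapolations diverge rapidly just beyond it. A hedged illustration (data and degrees chosen arbitrarily):

```python
# Illustration of the warning above: polynomials of increasing degree fit
# the same noisy linear data comparably well in-range, but diverge once
# evaluated beyond the data.  Synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0, 10, 50)
y = 0.1 * x + 0.2 * rng.standard_normal(50)    # noisy linear "truth"

fits = {deg: np.polyfit(x, y, deg) for deg in (1, 2, 3, 5)}

x_out = 15.0                                   # 50% beyond the data range
truth = 0.1 * x_out
errors = {deg: abs(np.polyval(c, x_out) - truth)
          for deg, c in fits.items()}
for deg in sorted(errors):
    print(deg, round(errors[deg], 2))
```

The quintic’s extrapolation error typically dwarfs the linear fit’s, which is the practical content of “never extrapolate polynomials”.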
(To RobinEdwards36). Polynomials etc… I disagree with your first proposition following your leading rhetorical question. You assert that the only point of fitting polynomials is to enable projection “..beyond the range of the actual data”. I do not think so. My purpose was to estimate the historical unknown smooth curve posited to exist IN the RSS data, assuming the signal + noise model (2). Nowhere did I imply or state that I had any intention of extrapolating the 5th-degree fitting polynomial outside the range. I entirely agree that one gets ridiculous results using it outside. So I don’t.
But I do agree that the ‘ubiquitous and naive linear model… is usually the safest one to extrapolate’. That is exactly what I was suggesting (reluctantly) might be done with split regression on the HadCRUT4 data in fig. 3 (second red line from 1941 onwards), according to the famous principle of insufficient reason (do it if you do not know any better). I am in fact highly dubious about any projections into the future. My final words were “It is of course unwise to make a projection into the future but if we trust neither the elaborate computer climate models favoured by the Met Office nor the projection of Mills-type all-stochastic models this is all we have got….”
I also disagree (totally) with your second proposition. I did not employ a 5th order polynomial in my fig. 2 on the RSS data because of any belief “in its existence”. It would be a travesty of the immensely complex physics to suppose that I did. Polynomials have desirable mathematical properties in approximating an unknown presumed smooth curve (Weierstrass theorem etc). Polynomial regression is one of many deterministic ‘parametric’ methods.
I used TWO methods if you noticed – the other being the popular non-parametric method known as loess. This assumes very little as to ‘what exists’ in the data. It is striking that with care the two smoothing techniques can be made to agree very closely – see also fig 2.
It seems to me that I have been as careful with using a polynomial regression as you would like. So why the lecture?
MSH
The reason to point out the Pause is to show that CO2 has continued to rise but the global temperature has not, so CO2 does not cause temperature rise !!
+100
At least the author has discussed the issue of increasing CO2 during a time of apparent temperature pause. The other issue he brings up is the two demonstrated periods of temperature decline while CO2 is increasing.
These are the issues that most skeptics argue. The length of time of zero slope in temperature may or may not be significant; what IS significant is that it is NOT following the trend in CO2.
Roger
Why is picking all the available satellite data better for determining a pause than picking the data that shows a pause? Temperature has definitely been around longer than our satellites, so picking “all” the satellite data still seems arbitrary to me. Pick the data that makes the point. If the point is that the trend appears to be near zero for a length of time, then pick that data… Jeez. If the point is not a valid one, picking data is a pointless exercise. The point WAS NOT to show the trend since satellites became available.
Let’s go back to the original claims: CO2 drives most of the observed warming, CO2 is being released in ever greater amounts, therefore the RATE of warming has to increase. Since CO2 is the main driver and accounts for most warming, variability can slow it but not stop it (or the claim that CO2 accounts for most observed warming is FALSE). These are the claims of AGW.
If the temperature warming rate slows to almost zero over a long enough period of time, SOMETHING is wrong in the claims – PERIOD. Either CO2 does not drive most of the warming, or CO2 has not been released in greater amounts. Pick one. I choose to pick CO2 does not account for most of the observed warming since there is no evidence that burning fuels that produce CO2 has declined.
Lord Monckton is 100% correct for the limited point he is trying to make.
NOTE: I did not now or ever say CO2 does not cause some amount of warming. I believe it does. But it is not overwhelming the natural variability, does not account for most warming, and in general does more good than harm. Statistics THAT! LOL
You get two gold stars !! ….. Well said !!
Precisely, it is all about whether the temperatures are behaving as AGW science demands.
According to Santer et al. 2011, we should be able to determine “human effects” within periods of 17 years or less. This gives us the ability to experimentally test AGW science. Isn’t that what scientists are supposed to do? That is the reason to look at subsets of the satellite data. As it turns out, the 17-year criterion was first met in the summer of 2013. Fact is, no more work is necessary to state that “human effects” must not be nearly as strong as AGW science requires. But the fact that we continue to meet this criterion month after month after month just strengthens the case. That is what the pause is telling us. AGW (as defined by current climate models) has been scientifically falsified.
No one is trying to extrapolate the pause into the future. Lord Monckton makes this very clear.
What do the varying trends discussed above tell us about measuring methods/processes and changes in weather station inclusion/exclusion and adjustments for whatever reasons?
I care little for Lord Monckton’s clarity of thought or grasp of logic in general, and even less for his character, but on this issue he was quite clear about what he was doing. He merely offered the length of time over which one can measure a non-positive least-squares trend, for whatever value readers may see in it as an indication of how important natural variation is in comparison with CO2 forcing.
It is self-evident that whether one sees a pause depends on the period over which one is measuring. I did not read Lord Monckton to suggest otherwise. In contrast, it is not clear what Dr. Hodgart is doing. It looks as though he thinks that at any given time there is only one true measure of trend, or only one true criterion for whether there’s a pause, and that if one were only to hit upon the right technique he would be able to answer the Ultimate Question. I recognize that plenty of people talk that way, but Dr. Hodgart gives the impression that he really believes it and is looking for a way to compute the unique true quantity.
Perhaps I am misunderstanding him. If so, some clarification of what exactly he thinks he’s looking for would be helpful.
Well said. +10
No matter the method, statistics has to deal with the fact that temperature series are autocorrelated. This, at a minimum, reduces effective degrees of freedom and increases uncertainty in whatever result. The best treatment IMO is McKitrick’s 2014? paper analyzing three of the main 6 series. He found no statistically significantbtrens for 16, 19, and 24 years respectively. That is sufficient to falsify climate models.
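The effect of autocorrelation on trend uncertainty can be sketched in Python: with AR(1) noise, the lag-1 correlation of the residuals shrinks the effective sample size (the Quenouille n(1−r)/(1+r) adjustment used in Santer-style analyses) and widens the trend’s error bars. This uses synthetic data and is purely illustrative, not a reproduction of McKitrick’s method:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic monthly anomalies: a weak trend plus AR(1) noise (illustrative only,
# not real satellite or surface data).
n = 240                                   # 20 years of monthly values
t = np.arange(n)
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0.0, 0.1)
y = 0.0005 * t + noise                    # roughly 0.06 C/decade buried in noise

# Ordinary least-squares trend and its naive standard error
X = np.column_stack([np.ones(n), t])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (n - 2)
se_naive = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))

# Lag-1 autocorrelation of residuals -> effective sample size (Quenouille),
# which inflates the standard error of the fitted trend
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
n_eff = n * (1 - r1) / (1 + r1)
se_adj = se_naive * np.sqrt((n - 2) / (n_eff - 2))

print(f"trend = {beta[1] * 120:.3f} C/decade")
print(f"naive SE = {se_naive * 120:.4f}, AR(1)-adjusted SE = {se_adj * 120:.4f}")
```

With positively autocorrelated noise the adjusted standard error is always wider than the naive one, which is exactly why a trend can look “significant” under a white-noise assumption and lose significance once autocorrelation is handled.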
“That is sufficient to falsify climate models.”
A lack of statistical significance can’t falsify anything. It just means that your test did not have sufficient power to resolve the matter. It failed. The observed trend is still there.
And of course, as with McKitrick, you can if you wish design a test that almost guarantees a fail. That says something about the test, not the data.
Nonsense, when the science **requires** significant warming trends within a specified time frame and time frame is reached, you have falsification.
“when the science **requires** significant warming trends within a specified time frame and time frame is reached”
So what are those time frames? Who specified? Where do you get this stuff?
Neither you nor Rud understand how statistical tests work. Say you have data and think your hypothesis Y is supported. But first you should check if there is an “obvious” alternative explanation, proposition N (your null hypothesis, often a variant of ‘it happened by chance’). Here Y is positive trend, and N is zero trend with random variation.
A stat test tests N, not Y. If N can be rejected as making the result too improbable, then Y is strengthened. But if N is not rejected, Y is not weakened; it wasn’t tested. Either N or Y remains possible. The test yields nothing.
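That asymmetry can be demonstrated by building the null distribution of trends under N by simulation and noting that a large p-value leaves both N and Y on the table. A minimal sketch with entirely synthetic numbers:

```python
import numpy as np

rng = np.random.default_rng(1)

def ols_trend(y):
    """Least-squares slope per time step."""
    return np.polyfit(np.arange(len(y)), y, 1)[0]

# "Observed" series (synthetic): a weak warming signal plus white noise
n = 180
obs = 0.0003 * np.arange(n) + rng.normal(0.0, 0.15, n)
obs_trend = ols_trend(obs)

# Null hypothesis N: zero trend, pure chance. Build the null distribution of
# fitted trends by simulation, then ask how extreme the observed trend is.
null_trends = np.array([ols_trend(rng.normal(0.0, 0.15, n)) for _ in range(5000)])
p = float(np.mean(null_trends >= obs_trend))     # one-sided p-value against N

print(f"observed trend {obs_trend:.6f} per month, p = {p:.3f}")
# If p >= 0.05 we fail to reject N. That does NOT establish N: the test may
# simply lack power, and the positive hypothesis Y was never tested directly.
```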
Nick Stokes:
You ask Richard M
You don’t know? I am astonished. I will enlighten you.
In 2008 the US Government’s National Oceanic and Atmospheric Administration (NOAA) reported
Ref. NOAA, ‘The State of the Climate’, 2008
http://www1.ncdc.noaa.gov/pub/data/cmb/bams-sotc/climate-assessment-2008-lo-rez.pdf
Please don’t try the usual warmunist evasions of
(a) pretending the finding only concerns times of absence of ENSO
or
(b) pretending the 95% means pauses of “15 yr or more” happen 1 in 20 times.
ENSO is always present so accepting claim (a) would be an acceptance that all the predictions of warming are wrong.
Anyway, the present lack of warming exists whether it is assessed as having started in 1997 before the Great el Niño of 1998 or started in 2000 after the 1998 el Niño ended.
The 95% refers to the confidence in the finding that “The simulations rule out zero trends for intervals of 15 yr or more” which is why the finding suggests “an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”
However, in 2012 when warming had ceased for seemingly 15 years, Phil Jones of the Climate Research Unit (CRU) in the UK insisted that
This was a flagrant falsehood because in 2009 (when the ‘pause’ was already becoming apparent and being discussed by scientists) he had written an email (leaked as part of ‘Climategate’) in which he said of model projections,
Clearly, as recently as 2008 both NOAA in the US and the CRU in the UK agreed that “observed absence of warming” for 15 or more years would “create a discrepancy with the expected present-day warming rate” indicated by climate models. And this was a decade into the ‘pause’ which has now existed for probably more than 18 years.
Richard
Nick
You are correct Y has never been tested.
Richard Courteney,
“I will enlighten you.”
No enlightenment there. Your response is totally irrelevant. Richard M claims that ‘science **requires** significant warming trends’. You respond with a statement about observed warming trends.
But there is no pretence about the NOAA statement being about ENSO-corrected data. It’s a simple fact, and your quote, like most others, is designed to erase this essential requirement. The box containing the quote you provide begins
“Observations indicate that global temperature rise has slowed in the last decade (Fig. 2.8a). The least squares trend for January 1999 to December 2008 calculated from the HadCRUT3 dataset (Brohan et al. 2006) is +0.07±0.07°C decade⁻¹ – much less than the 0.18°C decade⁻¹ recorded between 1979 and 2005 and the 0.2°C decade⁻¹ expected in the next decade (IPCC; Solomon et al. 2007).”
They then say
“The trend after removing ENSO (the “ENSO-adjusted” trend) is 0.00±0.05°C decade⁻¹, implying much greater disagreement with anticipated global temperature rise.”
In fact their concern then was that the ENSO-adjusted trend was zero, implying much greater disagreement. The reason why the adjustment is important is that ENSO spikes can indeed induce a long following period of negative trend, as Lord M has been repetitively emphasising. But ENSO spikes cannot be easily fitted in to a random noise framework, and in any case do not tell anything about climate trend. As WUWT readers are about to find out, the resulting “pause” just vanishes when the next big ENSO spike comes along. Removing the ENSO effect makes occurrence of a long period of zero trend during warming a much less likely and more significant observation. That is why they set out to discover the duration of that sequence that might be significant, and they suggested fifteen years (for surface data). But only after removing ENSO.
Nick, Santer et al 2011 … “Our results show that temperature records of at least 17 years in length are required for identifying human effects on global‐mean tropospheric temperature.”
They obtained that value by testing climate models, and 95% of the time they found statistically significant warming in 17 years or less. This is the 95% criterion normally used for scientific falsification.
“This is the 95% criterion normally used for scientific falsification.”
No, you’re turning the logic upside down. He’s saying that you need at least 17 years for attribution. He’s not saying that just 17 years of zero trend will suffice for falsification. He isn’t talking about falsification at all.
Mr Stokes’ standard response to the NOAA quote about 15 years of Pause indicating a discrepancy between prediction and reality is to bleat that they are talking about ENSO-corrected data – in other words, data tortured by yet another subjective, Humpty-Dumptyish tampering.
The truth is that over periods of 15 years or more the only correction needed to take account of the synoptic – i.e. self-cancelling – southern oscillation is to ensure that the period of study includes only complete El Niño+La Niña events.
The Pause starts before the event of 1997-2000 and at present remains a Pause notwithstanding the current El Niño spike with no countervailing La Niña yet. The discrepancy, therefore, is real. The models’ predictions have been falsified according to NOAA’s criterion – on the satellite datasets, at any rate. Best to admit that rather than trying to pretend otherwise.
Nick S.:
I give you full marks for comedy in your reply to me!
You write
I did not mention “observed warming trends”: I mentioned the observed LACK of warming trend for about 18 years.
The ‘science **requires** significant warming trends’ but there has been no warming trend, so THE ‘SCIENCE’ IS WRONG.
And my quote was NOT “designed to erase this essential requirement”. On the contrary, I raised the issue of ENSO when I quoted the NOAA assessment then wrote
You have not addressed those points in your reply and, therefore, your reply claims that “all the predictions of warming are wrong”!
Richard
Lord M,
“Mr Stokes’ standard response to the NOAA quote about 15 years of Pause indicating a discrepancy between prediction and reality is to bleat that they are talking about ENSO-corrected data – in other words, data tortured by yet another subjective, Humpty-Dumptyish tampering.”
This is a pattern at WUWT. Someone says, say, “NOAA says 15 years is enough. Told you so”. I point out that NOAA actually said something different. Response – why would you quote “data tortured by yet another subjective, Humpty-Dumptyish tampering”? The NOAA is the authority that you quoted for the fifteen year claim.
“Bleat?”. I’m just wearily pointing out that conditionals matter. “Passengers with parachutes may safely exit the plane” is not the same as “Passengers may safely exit the plane”. Here in that box from which the 15 year quote is cherry-picked it says:
“El Niño–Southern Oscillation is a strong driver of interannual global mean temperature variations. ENSO and non-ENSO contributions can be separated by the method of Thompson et al. (2008)(Fig. 2.8a).”
They are talking about surface data, but ENSO is an especially strong driver in the troposphere. 1998 is still the warmest year in RSS, and trend calculations for following periods will be negative for quite some time. The NOAA is taking care to remove this effect, to see what trend remains. That is where their 15 years comes from.
Nick S:
I am offended by your reply to Viscount Monckton.
You say
NO! How dare you misrepresent what I wrote in such a manner!
I quoted the NOAA statement to you here because you claimed not to know of it. I referenced it and I linked to it.
Importantly, very importantly, as part of my explanation of what you had claimed to not know, I wrote
and
You replied by trying to pretend that the ENSO issue negates what I had reported but your reply made no mention of my explanations of why ENSO is an evasion and claimed
My quote together with the explanations I provided were “designed” to inform you of the truth.
Viscount Monckton responded to your nonsense by informing you of why your inferences that ENSO provided the ‘Pause’ are wrong. His explanation is an expansion of – and addition to – my having told you “the present lack of warming exists whether it is assessed as having started in 1997 before the Great el Niño of 1998 or started in 2000 after the 1998 el Niño ended”.
And I responded by pointing out that you had ignored my having said “ENSO is always present so (pretending the finding only concerns times of absence of ENSO) would be an acceptance that all the predictions of warming are wrong”. And your ignoring that point is tacit acceptance of it, so you have accepted that all the predictions of warming are wrong!
And – in an attempt to pretend your behaviour is not reprehensible – you have only answered Viscount Monckton and not me, when I was the one who tried to help you overcome the ignorance you had claimed.
Richard
Nick S.:
This post is a deliberate addendum because it is intended to avoid its point being confused with discussion of your obfuscations about ENSO which you provide to evade the significance of ‘the NOAA 15-year limit’ having been broken by the ~18 year ‘Pause’.
I also told you
‘They’ must now be very “worried” that the model predictions have been shown to be wrong by providing the ‘Pause’ for more than 18 years.
And I infer that your response to this information implies you are among the “worried” ‘they’.
Richard
Richard,
“How dare you misrepresent what I wrote in such a manner!”
I was entirely responding to what Lord M wrote, as I said. I thought I had replied adequately to you earlier. I did not refer to what you wrote. However, to address:
““ENSO is always present so (pretending the finding only concerns times of absence of ENSO) would be an acceptance that all the predictions of warming are wrong””
It’s just nonsense. NOAA are addressing the probability of getting a 15 year zero trend, with a general warming perturbed by various sorts of noise. The probability rises with the amount of noise. NOAA are concerned with a specific perturbation, ENSO, which because of its large and fairly frequent peaks, greatly increases the probability of a “pause”. They want a stabler picture so they eliminate the ENSO perturbation. That greatly decreases the probability, or more aptly here, decreases the length at which a pause becomes significant. You want to use that ENSO-decreased length criterion, but keep the noise of ENSO in testing.
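What an “ENSO adjustment” does to trend scatter can be sketched with a toy regression. This is a crude linear removal against a made-up quasi-periodic index, not the actual Thompson et al. (2008) method, and all numbers are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy setup: a linear trend, a response to a made-up quasi-periodic "ENSO index",
# and white noise. Entirely synthetic; not the Thompson et al. (2008) method.
n = 300
t = np.arange(n)
enso = np.sin(2 * np.pi * t / 45) + 0.3 * rng.normal(size=n)
temp = 0.0008 * t + 0.25 * enso + rng.normal(0.0, 0.05, n)

def trend(y):
    return np.polyfit(np.arange(len(y)), y, 1)[0]

# Regress temperature on the index and subtract the fitted ENSO component
coef = np.polyfit(enso, temp, 1)[0]
adjusted = temp - coef * enso

print(f"raw trend {trend(temp):.6f}, adjusted trend {trend(adjusted):.6f} per month")
# With the quasi-periodic component removed, scatter about the trend line shrinks,
# so a long stretch of zero trend becomes a rarer (hence more telling) event.
```

The point the adjustment illustrates: less scatter means a long zero-trend stretch is harder to produce by chance, which is why the significance threshold shortens once ENSO is removed.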
And OK, going back to
“I did not mention “observed warming trends”:”
I didn’t respond to that because the falsity is obvious. The core of your response was the quote from NOAA, and the core of that was:
“The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”
Why that makes your original response irrelevant is that both Richard M and ristvan were talking about observed nonzero trends of a duration that lacked statistical significance (relative to 0). And my response to them explained why that was just a failed test from which nothing could be deduced. The NOAA quote concerned the likelihood of an observed zero trend.
Nick S.:
In response to my repeatedly telling you
you now say to me
Rubbish!
Firstly, the finding was that
That clearly was a finding (n.b. NOT a tested parameter) because the sentence continues saying
This “suggestion” could not exist if – as you assert – “NOAA are addressing the probability of getting a 15 year zero trend”.
In other words, your assertion is plain wrong.
Secondly, your claim of what they did is merely an expansion of your erroneous assertion that “NOAA are addressing the probability of getting a 15 year zero trend”. They did NOT – as you assert – investigate ENSO and, therefore, NOAA were NOT “concerned with a specific perturbation, ENSO, … They want a stabler picture so they eliminate the ENSO perturbation.” They observed that “The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more” but they report that ENSO could alter this finding because THE MODELS CANNOT EMULATE ENSO.
What cannot be modeled cannot be in a model, so it cannot be eliminated from a model for any specific test.
Thirdly, if – as you assert – the “noise” of ENSO means the “general warming” becomes zero for more than 15 years then – as I said – ENSO is always present so (pretending the finding only concerns times of absence of ENSO) would be an acceptance that all the predictions of warming are wrong.
Finally, as both I and Lord Monckton have explained to you, the recent 18-year-Pause is observed to NOT be an effect of ENSO (which you call “noise”).
Richard
Richard, some of this is just mystifying. You quote
“The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more,”
and say “That clearly was a finding”
Well, yes. So? And you seem to think that that contradicts “NOAA are addressing the probability of getting a 15 year zero trend”
But they set out exactly how they got that probability:
“The 10 model simulations (a total of 700 years of simulation) possess 17 nonoverlapping decades with trends in ENSO-adjusted global mean temperature within the uncertainty range of the observed 1999–2008 trend (−0.05° to 0.05°C decade⁻¹).”
It’s a frequency count from a Monte Carlo (ENSO-adjusted). 17/700=2.5%. You say
“they report that ENSO could alter this finding because THE MODELS CANNOT EMULATE ENSO”
I can’t find that in the text. You say they can’t – they say they did. And GCMs could, even then. Just a few days ago, I linked the video (made in 2008).
“would be an acceptance that all the predictions of warming are wrong”
Sorry, I just don’t understand that at all.
“the recent 18-year-Pause is observed to NOT be an effect of ENSO”
Well, you can’t observe that – you have to do some analysis, which you rarely do. It is a simple matter of arithmetic that a large peak at the starting point of the trend period will tip the trend down.
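The arithmetic behind “a large peak at the starting point will tip the trend down” is easy to check with toy numbers:

```python
import numpy as np

# A flat series with one large warm spell at the very start (an El Nino-like peak).
n = 180                                # 15 years of months
y = np.zeros(n)
y[:12] = 1.0                           # one warm year at the beginning

t = np.arange(n)
slope = np.polyfit(t, y, 1)[0]
print(f"OLS slope = {slope:.5f} per month")
# The early peak sits above the mean while all later values sit below it,
# so the least-squares line is tipped downward: a negative "trend" with no cooling.
```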
“17/700=2.5%.”
Actually, there aren’t 700 possible non-overlapping decades, so the chance of a decade “pause” will be a lot higher. But they would have done a similar count for 15 year pauses.
Nick S.:
You say in response to my last post
Your mystification is because you have filled your mind with nonsense and left no room for sense.
I write to clear some of the nonsense so you can see sense.
Firstly, ENSO is an emergent property of climate behaviour. It is NOT in the climate models because it cannot be: a climate model would exhibit ENSO behaviour if it was sufficiently good but none is.
However, you are claiming
NO! They cannot “eliminate” from a model what is not in the model and, they cannot “eliminate” from a model behaviour it does not exhibit. I repeat, ENSO being an emergent property of climate behaviour is NOT in the climate models because it cannot be, and a climate model would exhibit ENSO behaviour if it was sufficiently good but none is.
As I said
I remind you of the rhyme
I met a man who wasn’t there
by
Hughes Mearns
Last night I saw upon the stair
A little man who wasn’t there
He wasn’t there again today
Oh, how I wish he’d go away…
You are claiming the modellers ‘wished away’ an emulation of ENSO that was not in their models.
And all your arguments are based on your believing that impossible idea.
NOAA did not do the impossible that you claim. And NOAA did not claim to do the impossible that you claim.
Secondly, you are plain wrong when you assert
What NOAA actually did is what they said they did; viz.
i.e. NOAA said they were reporting the behaviour of their model(s).
NOAA were NOT doing as you suggest and addressing the probability of getting a 15 year zero trend.
1.
If they had done that then they would not have found “Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability.” They would not have been examining “for intervals of a decade or less in the simulations”.
And 2.
They would not have reported “The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more”. A probability does not “rule out” anything.
And 3.
They would not have made the suggestion that “an observed absence of warming of {15 years} duration is needed to create a discrepancy with the expected present-day warming rate”: They would have reported the probability of “an observed absence of warming of {15 years}”.
I anticipated your confusion of (a) the simulations having 95% confidence with (b) the 95% confidence in their finding. I wrote
I hope that removes some of your (deliberate?) “mystification”.
Richard
Richard,
“I repeat, ENSO being an emergent property of climate behaviour is NOT in the climate models because it cannot be, and a climate model would exhibit ENSO behaviour if it was sufficiently good but none is.“
You claim, absurdly, that models can’t model ENSO. The NOAA doc says that they can identify and remove the ENSO effect. I referred to one video I posted showing modelled ENSO behaviour; here is another:
As to all your nonsense about how NOAA is not addressing the probability of getting a fifteen year trend etc – if that is true, then why is the report being quoted? Because, it says, that after suitable filtering (mention omitted) the occurrence of a fifteen year stretch of zero trend would be sufficiently improbable (5%) that it would create a contradiction with the model.
Nick S:
I again repeat,
ENSO being an emergent property of climate behaviour is NOT in the climate models because it cannot be, and a climate model would exhibit ENSO behaviour if it was sufficiently good but none is.
But you say to me
Firstly, the NOAA doc does NOT say “that they can identify and remove the ENSO effect” FROM THE MODEL SIMULATIONS. You wish it did but it does not, and that is why you don’t quote it saying that.
The NOAA doc says
The dubious method of Thompson et al. (2008)(Fig. 2.8a) supposedly separates ENSO effects from climate observations of global temperature to enable the global temperature trend to be compared to model indications of global temperature because the models do NOT emulate ENSO. It is because the models do not emulate ENSO that use of the method of Thompson et al. (2008) is suggested in the NOAA doc.
The flows of water that comprise ENSO events are known so, yes, computer video games can generate pretty pictures of ENSO events. But that is NOT the same as a climate model emulating ENSO. Perhaps you do not know what is meant by emergent behaviour?
Past ENSO events are historical so the data used to generate the computer video games of ENSO could be included in a climate model making hindcasts of climate. But nobody knows the timing and magnitudes of future ENSO events and the climate models do not generate that behaviour so ENSO is NOT in – and cannot be in – the forecast models under discussion.
In hope that you will – at last – understand, I again repeat;
ENSO being an emergent property of climate behaviour is NOT in the climate models because it cannot be, and a climate model would exhibit ENSO behaviour if it was sufficiently good but none is.
Having got that out of the way, please understand that your major problem is your refusal to read what the NOAA criterion is.
The NOAA criterion is an indication of their model indications. It says
It provides a reasonable caveat that ENSO effects could alter that finding because their models do not generate ENSO effects. And it points out that when ENSO effects are removed from climate data the ‘Pause’ becomes a greater discrepancy with anticipated (i.e. model predicted) temperature rise when it says
But you pretend the NOAA criterion is not what NOAA says it is and you claim
That is patent nonsense! A “probability” does not “rule out” anything.
And a single “probability” does not apply to “15 yr or more”: e.g. different probabilities would exist for 15 years and for 150 years.
Richard
“A “probability” does not “rule out” anything.”
This comes back to the original quote from NOAA that you and Lord M have been brandishing
“The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more”
So what is 95% if not a probability?
Nick S.:
At last you have started to ask questions instead of asserting ignorant errors. You ask me
I answered that repeatedly saying
Please don’t try the usual warmunist evasions of
…
(b) pretending the 95% means pauses of “15 yr or more” happen 1 in 20 times.
Nobody has been “brandishing” anything. You asked
And I answered by citing and quoting the NOAA 2008 criterion and Phil Jones, then I concluded
Your response was to blatantly misrepresent the 2008 NOAA criterion and to ignore the Phil Jones comment despite being reminded of it.
Both Viscount Monckton and I corrected your misrepresentations and you have been attempting to justify your misrepresentations of the NOAA 2008 criterion by posting loads of bollocks which I have been refuting. Neither I nor Viscount Monckton has “brandished” anything.
Part of my refutation has been my repeatedly supporting what NOAA said they did (i.e. NOAA reported indications of climate models) and refuting your nonsensical assertion that they did something else. You claim
I have repeatedly pointed out
(a) NOAA did NOT say they addressed “the probability of getting a 15 year zero trend”,
(b) NOAA did NOT report “the probability of getting a 15 year zero trend”
And
(c) What NOAA did report is NOT a “probability of getting a 15 year zero trend”.
And I have repeatedly pointed out to you that your pretending NOAA addressed the probability of getting a 15 year zero trend is daft because if NOAA had done that then
1.
NOAA would not have found “Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability.” They would not have been examining “for intervals of a decade or less in the simulations”.
And 2.
NOAA would not have reported “The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more”. A probability does not “rule out” anything (as I said, “the 95% level” is the confidence they have in their simulations and not the indications of their simulations).
And 3.
They would not have made the suggestion that “an observed absence of warming of {15 years} duration is needed to create a discrepancy with the expected present-day warming rate”: They would have reported the probability of “an observed absence of warming of {15 years}”.
And 4.
A single “probability” does not apply to “an observed absence of warming of” “15 yr or more”: e.g. different probabilities would exist for 15 years and for 150 years.
The NOAA criterion says
Ref. NOAA, ‘The State of the Climate’, 2008
http://www1.ncdc.noaa.gov/pub/data/cmb/bams-sotc/climate-assessment-2008-lo-rez.pdf
An “absence of warming” has existed for more than 15 years.
Richard
Nick Stokes,
Well, in a proverbial nutshell, there’s a 95% chance those models are not modeling the planet Earth. What has happened was “ruled out” at 95% certainty by the simulations’ behavior, so they don’t behave like the real climate, in a rather pronounced way. (The attribution aspect is another layer of hokum pokum down still ; )
Richard,
You keep quoting your little excerpt from the report, with context excised. Here’s the context:
“ENSO-adjusted warming in the three surface temperature datasets over the last 2–25 yr continually lies within the 90% range of all similar-length ENSO-adjusted temperature changes in these simulations (Fig. 2.8b). Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate. “
You keep trying to hide ENSO-adjusted. It’s essential. And you claim 95% is a confidence, as if that is something other than probability. The context shows how they are calculating it. They count. “lies within the 90% range”. Frequency.
Here is the start of the box in which your quote appears:
http://www.moyhu.org.s3.amazonaws.com/2016/1/box.png
In the top plot they show the temperature series with and without ENSO, and the component removed. They show how after removal it has zero trend. In the second plot, they show how the ENSO-filtered trends run negative for up to 10 years for HADCRUT (but not for others). And they show the simulations, with the 70%, 90% and 95% levels clearly described as “range of trends” in the simulations. Frequencies.
Nick Stokes,
This;
“Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more…”
Is not the equivalent of this;
*Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations show zero trends for intervals of 15 yr or more, five percent of the time.*
Do you know how often (if at all) the simulation runs showed “pauses” of 15 yrs or longer, or are you just assuming that happened 5% of the time? That’s not how people generally speak of something that happens 5% of the time: “ruled out at the 95% level”.
It looks to me like a sneaky way of implying that, perhaps, but not a clear indication it actually happened exactly five percent of the time.
JohnKnight
Yes, I should have said “less than 5% of the time”. They are basically doing a Monte Carlo test. You do lots of runs and see how often some event happens.
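The Monte Carlo counting procedure being described can be sketched as follows. The series here are synthetic white noise on a fixed warming rate, not the models’ actual internal variability, and the sketch also shows the earlier point that the probability of a “pause” rises with the amount of noise:

```python
import numpy as np

rng = np.random.default_rng(3)

def trend(y):
    return np.polyfit(np.arange(len(y)), y, 1)[0]

months = 15 * 12                  # a 15-year window of monthly data
rate = 0.2 / 120                  # underlying warming of 0.2 C/decade, per month
runs = 2000

fracs = {}
for sigma in (0.1, 0.5):          # two white-noise amplitudes
    flat = sum(
        trend(rate * np.arange(months) + rng.normal(0.0, sigma, months)) <= 0
        for _ in range(runs)
    )
    fracs[sigma] = flat / runs
    print(f"noise sd {sigma}: fraction of runs with a 15-yr 'pause' = {fracs[sigma]:.3f}")
```

You do lots of runs, count how often the event happens, and the frequency is the probability estimate; removing a large noise source (as NOAA did with ENSO) moves that frequency down.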
NickS:
I had thought this sub-thread was over but I have just now discovered you are still clinging to your straw. I write to remove your straw in hope you will notice you are sunk.
You say
Please state why you think
(a) NOAA did NOT say they addressed “the probability of getting a 15 year zero trend”,
(b) NOAA did NOT report “the probability of getting a 15 year zero trend”
And
(c) What NOAA did report is NOT a “probability of getting a 15 year zero trend”.
When – according to you – NOAA were not reporting the behaviour of their model(s) but had conducted a Monte Carlo Test to determine the probability of “an observed absence of warming of” “15 yr or more”.
To help your reply, I remind that I had written
Richard
” significantbtrens ” ?? CC’s gonna jump on you for that !! LOL
Lord Monckton is not saying there is a trend. He is simply calculating how many months you can go back in time and still show a zero temperature increase when compared to today. It is not a trend, it is a simple mathematical computation which is not open to interpretation – in other words, it is a fact.
.. A very inconvenient fact that alarmists hate !!
“Lord Monckton is not saying there is a trend.”
From the words of the Lord here:
“The hiatus period of 18 years 8 months is the farthest back one can go in the RSS satellite temperature record and still show a sub-zero trend.”
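That calculation, as described, can be reproduced mechanically: scan over candidate start months and take the farthest-back start for which the least-squares trend to the end of the record is non-positive. A sketch on a synthetic series (not the real RSS data):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic monthly anomalies: a warming phase followed by a flat, noisy phase.
n = 444
y = np.concatenate([np.linspace(0.0, 0.4, 220), np.full(n - 220, 0.4)])
y += rng.normal(0.0, 0.08, n)

def trend_from(start):
    seg = y[start:]
    return np.polyfit(np.arange(len(seg)), seg, 1)[0]

# Farthest-back start month whose trend to the present is non-positive
# (require at least two years of data in each segment).
pause = max((n - s for s in range(n - 23) if trend_from(s) <= 0), default=0)
print(f"'pause' length: {pause} months ({pause / 12:.1f} years)")
```

The result is a deterministic function of the data and the definition: no interpretation enters once “non-positive least-squares trend ending now” is agreed as the criterion.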
What is up with the beginning of the blue dashed RSS curve (Figure 2) in that 5th-degree polynomial?
And it seems to me that despite some interesting data ‘digging’, calculations of error deviations from the fitted functions would be revealing…
“With this latest version of HadCRUT4 (now issue 4.4) we now get a low warming rate (of about 0.01 deg C/decade) from 2005 (compare flat response with the RSS data). I have not included the year 2015 which was not completed when running all these calculations.”
You should include 2015. It makes a very substantial difference. Using monthly data, the trend from Jan 2005 to Dec 2014 is 0.0169 C/dec. But to Dec 2015, it is 0.1291 C/decade (Hadcrut 4). This shows the fragility of relying on such short length trends. But it also casts doubt on whether 2005 is then a break year at all. And I think it shows the overall weakness of your model here.
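The sensitivity being described, where one warm year at the end of a short window swings the fitted trend, is easy to reproduce with toy numbers (synthetic values, not the actual HadCRUT4 series):

```python
import numpy as np

rng = np.random.default_rng(9)

def trend_c_per_decade(y):
    # least-squares slope per month, times 120 months per decade
    return np.polyfit(np.arange(len(y)), y, 1)[0] * 120

# Ten flat, noisy years followed by one distinctly warm year (synthetic stand-in)
flat_decade = rng.normal(0.0, 0.08, 120)
warm_year = rng.normal(0.35, 0.08, 12)

t1 = trend_c_per_decade(flat_decade)
t2 = trend_c_per_decade(np.concatenate([flat_decade, warm_year]))
print(f"10-yr trend: {t1:+.3f} C/decade; with the warm year appended: {t2:+.3f}")
```

Because the extra year sits at the far end of the window, it has maximum leverage on the least-squares slope, which is why short trends are so fragile.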
No, you should never use half of an ENSO pair. You won’t know anything until both halves are complete.
Despite all your pontificating about avoiding ENSO when doing trend analysis in your comments above, you then go on and try to use a fragment of an ENSO event to do trend analysis in accordance with your bias (the worst possible act). Doesn’t this irony come across as just a tad hypocritical to you?
“you then go on and try to use a fragment of an ENSO event”
I’m not trying to use a fragment. I’m not advocating calculating 10 year trends at all. And I do think that for this analysis it would be better to remove ENSO effects. I’m simply pointing out that the article is not using all the data, and if you do, it makes a big difference.
What difference does it make whether the Earth’s temperature (whatever that means) is going up or down or staying constant? If it goes up in parallel with CO2, nothing is proven by that. That doesn’t prove causation. The proof that the AGW theory is false comes from the complete absence of a physically sound mechanism. The precedence of temperature changes prior to CO2 changes invalidates the IPCC paradigm. Too much is being made of temperature trends, even by realists.
M.S.Hodgart:
I am surprised you have so clearly misunderstood the trend analysis of Viscount Monckton that you write
Not so, he has not “chosen” anything and he does not “disregard” anything.
He addresses the question of “How long has no linear trend been discernible up to now in the RSS time series of global average temperature?”
He considers the linear trends which exist for months prior to now. And the longest linear trend prior to now which has no discernible positive slope is his determination of the length of the ‘Pause’.
It turns out that his most recent determination is 18 years 8 months prior to the end of December 2015 and this is one month less than his determination for the length of the ‘Pause’ prior to the end of November 2015.
This is important because (as davidmhoffer lists above) all the model predictions of global temperature indicate that a ‘Pause’ of this length is not possible. Hence, there is empirical evidence of the models’ failing to provide useful indications of future global temperatures.
You say
Well, Viscount Monckton cannot be a “statistician” according to that definition because he makes no assumptions about his line (he assesses the linear rise predicted by modellers) and his analysis is not conducted to reach a foregone conclusion.
Importantly, linear regression is the required analysis to discern if there has been a consistent rise of any form. Also, and relatively trivially, assumption of any form of trend other than linear requires a justified model of the form but no such model exists.
Richard
I find it hard to believe that M.S.Hodgart cannot understand such a simple concept… I think he’s really a warmist at heart!
I agree, Marcus. He is to be congratulated for admitting above to being a “lukewarmist”, but this self-awareness did not correct his bias. His belief in the lukewarm conjecture has clearly biased his reporting toward the warmist view above, for lukewarmism is still a BELIEF, a FEELING about CO2 – usually based on assuming that the properties of CO2 in highly controlled laboratory conditions justify the belief that human CO2 can drive the climate of the earth.
One evidence of his bias is (as Gary Pearse does a fine job of elaborating upon in this thread) his non-fact based moral equivalency falsity here:
The documented history of the science realists’ writing and speaking does NOT reveal a tendency on their part to cite selectively to any degree near the level that would make the realists come even CLOSE to the blatant bias displayed year after year by the AGWers.
Agreed, Marcus. Janice, I think you’ve hit the nail on the head.
Marcus, it is sadly all too easy to believe that you cannot understand more complex concepts. Look at Fig. 2.
And BTW did you see the blue “Lukewarmist” word?
Richard, have a look at fig 2. Does it show a plausible negative trend within half of Monckton’s ‘pause’ period? If so, surely it ADDS to the evidence that the effect of anthropogenic CO2 emissions is minimal?
TonyN:
You ask me and say
Yes, but that was not my point. I was explaining to M.S.Hodgart his misunderstanding of what Viscount Monckton has done.
Also, I would not want to discuss “evidence that the effect of anthropogenic CO2 emissions are minimal”. It is for those who claim anthropogenic CO2 emissions have an effect on global climate to provide evidence for their claim: to date they have provided no such evidence.
Discussion of whether there is evidence for minimal effect of anthropogenic CO2 emissions distracts from the fact that there is NO evidence of anthropogenic CO2 emissions having any effect on global climate, none, zilch, nada.
Richard
To richardscourtney
Clearly I did not manage to make my point. I have no difficulty in understanding Monckton’s procedure but I disagree with his interpretation. The calculated slope of a linear regression between two selected dates is assumed – by him, you and many others – to define the trend in temperatures over that period. You just take this for granted. That is the “unwarranted assumption”. Accordingly, when that slope is found to be zero (or near zero) over a selected period, there has been a pause (or Pause) for all that time. But has there? There are real difficulties with this assumption.
Classically, linear regression would be applied if there were a well-justified belief that Nature has created a straight line, defined by an offset and slope, buried in the data beneath additive noise v[k] – strictly a discrete zero-average weakly stationary stochastic sequence (model 1). The trend is then the slope of that line.
There is every reason to suppose that such a model is unrealistic here. You can easily see this if you run another regression over the years from Jan 1979 [ ] to Feb 1997, when Monckton starts his. You get a peculiar-looking graph where the ‘trend’ temperature jumps more than 0.2 deg C in just one month. Although he did not like my approach any better, jim (January 24, 2016 at 3:30 pm) says exactly this about Monckton: “When calculating the slope he ignores all the data before each start date – which means the intercept for his fitted line is incorrect – it assumes temperatures magically jumped from the pre-pause levels to the level at which they “paused”, all within a single month. It makes no sense”.
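jim’s point is easy to reproduce on synthetic data. The sketch below (illustrative numbers only – not the RSS series) fits independent straight lines to two adjacent segments of one continuous, smoothly curving series; the two fitted lines disagree where they meet, so the implied ‘trend’ jumps discontinuously at the join:

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(444)                                   # months, Jan 1979 .. Dec 2015

# A smooth underlying curve (a sigmoid, chosen only for illustration) plus noise:
s = 0.4 / (1.0 + np.exp(-(t - 200) / 40.0))
y = s + rng.normal(0.0, 0.02, t.size)

split = 218                                          # roughly Feb 1997 in this indexing
b1, a1 = np.polyfit(t[:split], y[:split], 1)         # slope, intercept: first segment
b2, a2 = np.polyfit(t[split:], y[split:], 1)         # slope, intercept: second segment

# Value each fitted line takes at the boundary month:
left_val = a1 + b1 * t[split - 1]
right_val = a2 + b2 * t[split]
jump = right_val - left_val                          # non-zero: the fitted 'trend'
                                                     # is discontinuous at the join
```

Because the left half of this curve is mostly convex and the right half concave, the two least-squares lines sit on opposite sides of the true curve at the boundary, so `jump` comes out clearly non-zero even though the underlying series is continuous.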
Quite. There is however no problem with linear regression on these data under a different interpretation and if applied to the appropriate months. To start making better sense of things and still use linear regression, I suggested that we have to envisage that Nature has created a sampled smooth curve s[k] running through all the data, to which additive noise v[k] is again added (model 2). When running a linear regression over any period on this model, the calculated slope now has to be given a different interpretation – as an average trend, i.e. an average slope, not the actual slope.
To estimate this s[k] one can resort to the most convenient of a battery of well-known techniques. The most powerful approach, seemingly, is to use two that are very different but whose results are closest to each other: polynomial regression and the popular loess method. See the blue curves in fig 2. I then argued to Jim that “the two segmented linear regressions are now justified as piece-wise linear approximations to these curves ……”. Monckton still gets his pause, but it starts later, in September 2003. I elaborated my justification on these lines in my response to Jim – if you want to read on.
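As a rough illustration of the two techniques named above, here is a sketch on synthetic data (the curve, noise level and smoothing parameters are all assumptions for the example, and the loess here is a naive hand-rolled version rather than a library implementation):

```python
import numpy as np

def poly_smooth(t, y, degree=4):
    """Global polynomial regression: one polynomial fitted to the whole series."""
    return np.polyval(np.polyfit(t, y, degree), t)

def loess_smooth(t, y, frac=0.3):
    """Naive loess: a tricube-weighted local linear fit at each sample point."""
    n = len(t)
    k = max(int(frac * n), 2)
    out = np.empty(n)
    for i in range(n):
        d = np.abs(t - t[i])
        idx = np.argsort(d)[:k]                        # k nearest neighbours
        w = (1.0 - (d[idx] / d[idx].max()) ** 3) ** 3  # tricube weights
        sw = np.sqrt(w)
        A = np.column_stack([np.ones(k), t[idx]]) * sw[:, None]
        beta, *_ = np.linalg.lstsq(A, y[idx] * sw, rcond=None)
        out[i] = beta[0] + beta[1] * t[i]
    return out

# Synthetic stand-in for the monthly series (model 2: smooth curve plus noise):
rng = np.random.default_rng(0)
t = np.arange(444.0)                                   # months, Jan 1979 .. Dec 2015
s = 0.3 * np.sin(2 * np.pi * t / 600.0) + 0.0005 * t   # an assumed smooth s[k]
y = s + rng.normal(0.0, 0.1, t.size)                   # additive noise v[k]

s_poly = poly_smooth(t, y)
s_loess = loess_smooth(t, y)
```

If the two very different smoothers land close to each other, that is some reassurance that the recovered curve reflects the data rather than the choice of method.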
Is that a bit clearer? I hope so. [I apologise for my occasional typo in sometimes wrongly putting the start of the RSS data at 1977.]
MSH
M.S.Hodgart:
Thank you for your reply. Please don’t worry about typos: I make them all the time.
You say
Sorry, but your clarification says you do misunderstand Monckton’s procedure when it refers to “two selected dates”.
As I tried to tell you, Viscount Monckton does NOT select two dates.
He adopts ‘now’ as being the most recent monthly datum for global average temperature anomaly (GATA) and uses that as a starting point. He then considers each and every monthly datum of GATA before that starting point. Each datum provides an end point of a time series of GATA. In each case he determines the linear regression slope for the time series between the start and end points. The longest obtained time series that provides no positive slope (i.e. no indicated warming) is his result.
In other words Viscount Monckton does NOT select “two dates”: the start point of ‘now’ is “selected” by effluxion of time (n.b. not Viscount Monckton) and the end point is his result which is “selected” by the data (n.b. not Viscount Monckton).
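The procedure described above can be sketched in a few lines (hypothetical code, reconstructed from richardscourtney’s description – not Monckton’s own calculation, and with invented synthetic data):

```python
import numpy as np

def pause_length_months(anom):
    """Length in months of the longest window ending at the latest datum
    whose ordinary least-squares slope is zero or negative."""
    n = len(anom)
    t = np.arange(n)
    longest = 0
    for start in range(n - 2, -1, -1):                 # push the start month back
        slope = np.polyfit(t[start:], anom[start:], 1)[0]
        if slope <= 0:
            longest = n - start                        # window still shows no warming
    return longest

# Illustrative synthetic series: ten years of warming, then a long flat stretch.
rng = np.random.default_rng(1)
trend = np.concatenate([0.002 * np.arange(120), np.full(216, 0.002 * 119)])
anom = trend + rng.normal(0.0, 0.05, trend.size)
pause = pause_length_months(anom)                      # roughly the flat 216 months
```

On a strictly rising series the function returns 0; on a strictly falling one it returns the whole record length. With noise, the result hovers near the true length of the flat stretch.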
Secondly, it is not an “assumption” that use of linear regression to determine a trend is appropriate. Use of linear regression to determine a trend is a standard practice of so-called ‘climate science’: Viscount Monckton’s method is to assess data provided by ‘climate science’ and, therefore, he has adopted the accepted practice of ‘climate science’. Use of any other practice would not be appropriate.
You continue saying
Yes, but so what? The intention is not to model the time series of GATA.
Viscount Monckton’s method is intended to determine if there is discernible warming indicated by the time series of recent GATA according to practices of ‘climate science’.
Also, as I said
Richard
@richardscourtney – January 30, 2016 at 4:18 am
You say; “Viscount Monckton does NOT select two dates.”
YES HE DOES!
Date 1 is ‘now’. Date 2 is found by backtracking through the record: when the line between the two dates has a zero slope, he stops!
As he himself indicates, if future years produce higher or even lower numbers, his method will not produce a flat line AT ALL! … and his ‘pause’ will disappear.
You then say ” Use of linear regression to determine a trend is a standard practice of so-called ‘climate science’ …… he has adopted the accepted practice of ‘climate science’. Use of any other practice would not be appropriate.”
RIGHT if you are engaging in Politics (aka a zero-sum win/lose game) but WRONG if you are engaged in Science (aka a win/win game which seeks to increase the sum of human knowledge)
In essence, Monckton is engaged in a political argument and is using less than robust metaphysics to refute the AGW case. He really should look at Hodgart’s method, which apart from providing us all with more understanding of ‘what nature is up to’ ….. gives him a much better metaphysical weapon for his essentially political ’tilting’.
There’s an error in the legend to Figure 1, namely, the stated “origin” years for the red and blue lines. Both are way too 1970s! 🙂
Why does anyone expect a straight-line trend, like a light switching on or off? With multiple inputs and factors that impact global temperature – whether it be rising CO2, the AMO, the PDO, El Niño, ocean currents and layer mixing, or variations in the sun’s output – it must be a series of complex curves. Or am I just dumb?
No-one expects a straight line trend, and it isn’t observed. What you see is a pattern that is the sum of various random and quasi-periodic processes, and possibly a steady rise (or fall). Regression in effect passes this through a smoothing filter, which attenuates the cancelling high-frequency effects and leaves a linear trend unchanged (and a non-linear trend averaged).
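A quick numerical illustration of that filtering point (all numbers invented for the demo): regress a series built from a known linear trend, a whole number of cycles of a sinusoid, and noise; the fitted slope comes back close to the known trend because the periodic part largely cancels.

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(432)                                 # 36 years of monthly samples
true_slope = 0.0015                                # deg C / month, invented

y = (true_slope * t
     + 0.2 * np.sin(2 * np.pi * t / 108.0)        # quasi-periodic part: 4 whole cycles
     + rng.normal(0.0, 0.1, t.size))              # month-to-month scatter

fit_slope = np.polyfit(t, y, 1)[0]                 # close to true_slope: the sinusoid
                                                   # and noise largely average out
```

The cancellation is not perfect over a finite record (a fraction of a cycle, or an El Niño near one end, leaks into the estimated slope), which is exactly why the choice of start and end dates matters so much.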
“Or am I just dumb?”
NO!
Those who refuse to see the cycles and singular events in the NATURAL environment, are likely to have much egg on their faces over the next few years.
AMO, PDO, Solar are all heading into a cooler phase.
Will be fun to watch. 🙂