Another impartial look at global warming…

Guest essay by M.S.Hodgart (Visiting Reader Surrey Space Centre University of Surrey)

 

A feature of the politicised debate – if such it may be called – over AGW (anthropogenic global warming) and so-called ‘climate change’ is the tendency on both sides to cite only the evidence supporting their views and to ignore what does not. Scientists, of course, are supposed to be above this sort of thing and to take all relevant evidence into account.

One finds a lot of partiality when it comes to interpretation of the trend in climate data – particularly the available time series of average temperature measurements on the surface of this planet. Is it going up or down or has it paused? What is happening?

Sceptical commentators were the first to draw attention to a recent pause or hiatus in global temperatures and are naturally tempted to see it as persisting for as long as possible. The ‘warmist’ climate scientists – those who compiled the IPCC reports, including those who work for, or presumably get their research funding from, the UK Meteorological Office – have tended the other way. For a long time they were in a state of denial about any pause – not even conceding any reduction in the warming rate – presumably because anything that detracted from the sacred dogma that an uncontested increase in atmospheric CO2 must entail a rise in temperature was very unwelcome.

But where both sides of the debate are often referring to the same data one must ask why it is not possible to come to a more objective conclusion.

I focus first on the time series of remotely sensed TLT satellite measurements released by Remote Sensing Systems. I also look again at the HadCRUT4 data, which were the object of my analysis in WUWT in September 2013. It should be emphasised that the physical accuracy of these data is not under review here; that is a separate issue.

Plotted either as monthly or annual updates, the time series of globally averaged temperature measurements shows a substantial random-looking scatter from one month (or year) to the next. This scatter, and a general lack of knowledge as to what exactly drives the temperatures, makes it difficult to determine the trend. Yet so many people debate, write and comment as if the trend in these data were entirely obvious. They think they know – ignoring the fact that the scatter in the data poses a significant problem, not least in establishing what a trend means. The distinguished econometrician Phillips has memorably written (see his introduction):

No one understands trends. Everyone sees them in data.

also (and not altogether ironically)

A statistician is a fellow that draws a line through a set of points based on unwarranted assumptions with a foregone conclusion.

In other words, be careful if you run a linear regression on data like these. In the spirit of impartiality, and with all respect for his warning, I try here to draw reliable conclusions about the trend from these particular cited data. I must however put on record that, like our ‘climate lord’ Matt Ridley, I am a ‘luke-warmist’. My sympathies are with the ‘sceptics’ because there seems to have arisen an officially sponsored global warming industry and a general scare-mongering by, and of, the scientifically ignorant. It has for example become a political ‘fact’ – contrary to all biology and chemistry – that CO2 in the atmosphere, at present or worst-case future concentrations, is or will be a pollutant, i.e. a poison. It is not; its presence is essential to plant growth and therefore to our survival. The material bulk of all trees and crops derives from, and is converted out of, CO2 in the air. Trees and crops grow out of the air, not the ground! See the brilliant “Fun to imagine” TV series by Feynman. It is difficult to take seriously an unremitting propaganda that is prepared to distort the science as badly as this.

Lord Monckton and the RSS data

Viscount Monckton of Brenchley is a prominent climate sceptic. In a recent release to WUWT he emphasises what seems to him an obvious fact: that global surface temperatures have paused for almost two decades. He is not alone in this view, but let us see how he comes to this conclusion. He appeals first to the TLT satellite measurements released by Remote Sensing Systems (RSS). By the simple procedure of linear regression on their monthly data he finds an effectively zero slope (his last cited month was September 2015) going back to February 1997. I replicate his result in my fig 1 (the red line). In consequence it seems obvious to him – and to so many others – that global warming has indeed stopped for all this time. But has it?

His problem

The problem is that he has chosen to disregard all the prior months of available measurements going back to January 1979. A linear regression over all these months yields a line (brown) with a slope of 0.12 deg C/decade. Although he acknowledges this effect he does not seem to realise that this longer regression makes his conclusion untenable, whatever assumptions are made as to what the linear regression achieves.

He probably assumes that the slope resulting from linear regression defines the trend in global temperature. In other words: “whatever I choose to calculate, and the way I do it, defines the observed effect”. If so, he runs into a flat contradiction. The red line gives him his “Pause” (he uses a capital letter); but the brown line says that over the same time interval temperatures continued to rise. So which? The trend cannot be doing both. The RSS web-site plots only the longer-span regression. For them there is no pause.

If however he were to make the more orthodox assumption that linear regression estimates a linear trend, there are still difficulties. It could be that the data back to 1997 conform to a classical signal + noise model: a straight line of some slope and offset (the signal) which one cannot see because of an obscuring random variation (the noise). The standard model is

z[k] = (a + b·k) + v[k]        (1)
          (i)       (ii)

where z[k] is the time series, the variable k is a count in months or years (it is easiest to start at zero), and the signal, i.e. the trend, in (i) is defined by the offset a and rate b. The noise terms v[k] in (ii) are introduced to account for the random-looking fluctuation we can see in the time series. Ideally they answer to a description of ‘white’ noise, but the terms here exhibit some limited correlation – approximating what electrical engineers call ‘low-pass noise’. Linear regression estimates an offset â and slope b̂ which are in error from the true a and b because of that scatter. There are then two problems – the minor one being that his zero slope is at best a likely estimate; it is not definite.
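As a sketch of how model (1) behaves, one can simulate a straight-line signal plus noise and watch how much the fitted slope scatters about the true rate. This is only an illustration: the numbers below are invented, and white noise is assumed, whereas the real residuals are somewhat correlated, which widens the uncertainty further.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters, for illustration only: a rate of
# 0.12 deg C/decade is 0.001 deg C/month.
a, b = 0.0, 0.001        # offset (deg C) and rate (deg C/month)
n = 224                  # months from Feb 1997 to Sep 2015
k = np.arange(n)

# White-noise stand-in for v[k]; correlated "low-pass" residuals
# would spread the estimates even more.
sigma = 0.1
slopes = []
for _ in range(1000):
    z = a + b * k + sigma * rng.standard_normal(n)
    b_hat, a_hat = np.polyfit(k, z, 1)   # returns (slope, offset)
    slopes.append(b_hat)

slopes = np.array(slopes)
print(slopes.mean(), slopes.std())   # b_hat scatters around the true b
```

The point of the exercise is the second printed number: even with a perfectly linear hidden signal, the computed slope varies from realisation to realisation, which is why a single regression slope "is at best a likely estimate".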

More importantly, it is unclear over just what span of years this model (1) could be valid. We could postulate that model (1) applies over a limited span. But it is asking a lot of Nature to oblige Monckton with even an approximation to a linear model which just happens to start in February 1997. If the model applies over all the years, then the two regressions are estimating the same trend and the flat red regression is a ‘freak’ due to a chance combination of noise terms. Again one would conclude that only the longer regression had any validity.


Fig 1 RSS monthly data and linear regressions. Red line: regression from Feb 1997 to September 2015 (Monckton’s regression). Blue line: regression from mid-1993. Brown line: regression through all data.

But there is hope for Lord Monckton still. It can be shown that the assumption of a single linear trend running over the whole span is unlikely to be true. The difference in slope between the two regressions – 0.12 deg C/decade – is too large to be attributable to ‘chance’, as one can readily determine. The two regressions, together with a third regression (blue line) calculated from mid-1993 with an intermediate slope, strongly suggest that beneath the noise the trend is not following a straight line.
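That "readily determined" check can be sketched as a simple simulation: generate a single straight-line trend plus noise over the whole record, fit both the full-span and the 1997-onward regressions, and count how often the two slopes differ by as much as 0.12 deg C/decade. The month counts and noise level below are assumed round figures, and the noise is taken as white, so this somewhat understates the real uncertainty.

```python
import numpy as np

rng = np.random.default_rng(6)

n_full = 440       # rough month count, Jan 1979 to Sep 2015 (assumed)
start97 = 216      # rough index of Feb 1997 within that span (assumed)
sigma = 0.1        # assumed noise level, deg C
diff_obs = 0.001   # 0.12 deg C/decade expressed in deg C/month

k = np.arange(n_full)
trials = 2000
count = 0
for _ in range(trials):
    # One single straight-line trend plus white noise over the whole span.
    z = 0.001 * k + sigma * rng.standard_normal(n_full)
    b_full = np.polyfit(k, z, 1)[0]
    b_late = np.polyfit(k[start97:], z[start97:], 1)[0]
    if abs(b_full - b_late) >= diff_obs:
        count += 1

# Fraction of trials in which chance alone produced so large a difference.
print(count / trials)
```

A vanishingly small fraction supports the conclusion in the text: the two observed slopes cannot plausibly be estimates of one and the same straight line.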

All three lines can be reconciled if we allow that there is a non-linear trend – as indeed the IPCC scientists readily concede in ‘Box 2.2’ of their latest report AR5. There has to be something more complicated than a straight line beneath the noise. A generalisation of (1) is the classic

z[k] = s[k] + v[k]        (2)

where z[k] are again the data points, and the signal = trend s[k] follows an assumed but unknown curve. The v[k] are again noise terms. The curve hidden in the data can be assumed to cover the whole span of years. Model (1) is at best an approximation over a limited span.

A linear regression is not invalidated by this model but the computed slope has to be interpreted differently. It will have to be seen as an average of a trend with some actual variation within the span of years.

Accordingly the overall regression (brown line) computes an average trend of something which is non-linear between the years 1979 and 2015. But Monckton’s regression is in principle also no more than an average trend. So yes: there is a ‘Pause’, but its strict interpretation is that “an estimate of the average trend from Feb 1997 to Sept 2015 happens to have a zero slope”. But no: he has not demonstrated the most likely actual trend over this time.

As I show below it is much more likely that temperatures were still rising past 1997 and that Monckton only gets his Pause from a later date. As many others have pointed out it is easy to get fooled in statistical analysis by an apparent pattern suggested by what turns out to be the influence of a random component in the data.

Monckton’s construction does have one useful consequence: he has shown that none of these linear regressions (including his own) is likely to be estimating a straight line.

Alternative stochastic model?

In this deterministic trend model (2) there is assumed to be some unknown but well-defined curve or line concealed by low-pass noise, i.e. strictly a weak-sense stationary stochastic process. We need to be aware of a substantial literature which views the entire time series as a generalised non-stationary stochastic process. It is ‘all noise’. This approach is the preferred choice of econometricians who have taken a look at climate data. In his extensive publications Professor Terence Mills has looked at both approaches but favours the all-stochastic. If identification of ARIMA processes is your meat then there is plenty to work on. I wish you luck! In my opinion the stochastic approach leads to paradox and terminological confusion. The data series has to be regarded as the output of a feed-forward and feed-back machine whose input is white noise. If this were true then every possible time series is ‘random’. So where is your anthropogenic global warming? I will follow the climate scientists and stay with deterministic trend estimation in general, and (2) in particular.

Estimating a non-linear trend

If we have to fall back on the generalisation which is (2) then we shall have to estimate s[k] while only having access to the data z[k]. This is an exercise in curve fitting – for which there is a plethora of methods.

The difficulty with all methods of curve fitting is that there are essentially two kinds of error to contend with: the random error or variance due to the ever-present noise v[k]; and a systematic error or bias due to the poor fit of a proposed fitting function to the unknown hidden signal s[k]. Whatever method is adopted, the unavoidable problem is to decide whether the computed curve is over-fitting (too much random error) or under-fitting (too much bias error). There is a model selection problem.

In my earlier release to WUWT back in September 2013, analysing the HadCRUT4 data, I proposed using a cubic loess – which Mills shows is superior to quadratic or linear loess – and also a polynomial regression. In the case of loess the problem is to decide on the effective window width, and with a polynomial to decide on the degree.

For loess, if the window width is too narrow the random error dominates over the systematic, and if too wide vice versa. For a polynomial regression, if the degree is too high the random error dominates over the systematic, and if too low vice versa. There are many model identification methods designed to guide the choice – starting perhaps with the Akaike Information Criterion, modifications such as that by Hurvich and Tsai, and many more. There are also various forms of cross-validation technique. But they seem to me (having tried some of them) to be uncertain and unreliable. Statistical experts may disagree.
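For readers who want to try such criteria, here is a minimal sketch of the AIC applied to choosing a polynomial degree. The series is synthetic (a hypothetical smooth curve plus white noise, standing in for model (2)); with realistic correlated noise the criterion becomes less reliable, which is the difficulty noted above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in series: hidden curve s[k] plus noise (numbers invented).
n = 440
t = np.linspace(0.0, 1.0, n)                       # scaled time
signal = 0.4 * t + 0.15 * np.sin(2 * np.pi * t)    # hypothetical s[k]
z = signal + 0.12 * rng.standard_normal(n)

def aic(degree):
    """Akaike Information Criterion for a polynomial fit of this degree."""
    coeffs = np.polyfit(t, z, degree)
    resid = z - np.polyval(coeffs, t)
    rss = np.sum(resid ** 2)
    p = degree + 1                   # number of fitted parameters
    return n * np.log(rss / n) + 2 * p

scores = {d: aic(d) for d in range(1, 10)}
best = min(scores, key=scores.get)
print(best)   # degree with the lowest AIC
```

On a series like this the criterion correctly rejects a straight line, but on real, correlated data the chosen degree can wobble from run to run – the unreliability complained of in the text.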

Corroborating curve fitting

Whatever the procedure, the would-be statistician is left with a degree of freedom in allocating a crucial parameter. Some years ago however I stumbled on the fact that a combination of cubic polynomial loess and a standard polynomial regression offers a unique choice of window width for the former and degree for the latter which gives the least disparity between the two generated curves. The one selects the other. The combination is self-selecting. This idea seemed to work well on the HadCRUT4 data. This serendipitous result is now found to apply to the RSS data. In fig 2 a (half) window width of 168 months for a cubic polynomial loess and a polynomial degree of 5 give the closest agreement to each other (shown in blue dashed lines with no attempt to distinguish between them).
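The self-selecting idea can be sketched as follows, on a synthetic series (all numbers invented). A crude local cubic fit stands in for a proper tricube-weighted loess, and a grid of half-widths and degrees is searched for the pair of curves with the smallest r.m.s. disparity.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for the monthly series: a gentle rise plus a slow
# oscillation, hidden by noise (all numbers illustrative).
n = 440
x = np.linspace(0.0, 1.0, n)      # scaled time, for numerical conditioning
signal = 0.2 * x + 0.1 * np.sin(2 * np.pi * 1.7 * x)
z = signal + 0.1 * rng.standard_normal(n)

def local_cubic(z, x, half_width):
    """Crude loess stand-in: at each point, fit a cubic to the data in a
    +/- half_width index window and evaluate it at that point."""
    out = np.empty(len(z))
    for i in range(len(z)):
        lo, hi = max(0, i - half_width), min(len(z), i + half_width + 1)
        coeffs = np.polyfit(x[lo:hi], z[lo:hi], 3)
        out[i] = np.polyval(coeffs, x[i])
    return out

# Search (half-width, degree) pairs for the combination whose two fitted
# curves agree most closely -- the "self-selecting" idea in the text.
best = None
for hw in (84, 126, 168, 210):
    smooth = local_cubic(z, x, hw)
    for deg in range(3, 8):
        poly = np.polyval(np.polyfit(x, z, deg), x)
        disparity = np.sqrt(np.mean((smooth - poly) ** 2))
        if best is None or disparity < best[0]:
            best = (disparity, hw, deg)

print(best)   # (rms disparity, chosen half-width, chosen degree)
```

The winning pair depends on the data, not on the analyst, which is the attraction of the method; the local fit here is only a stand-in and a real loess implementation would use distance weights.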

These very similar curves are perhaps the most likely deterministic estimates of the trend, but they cannot be the exact truth. The uncertainty is again due to the noise present in the data. Assuming however that they are ‘close enough’, what they have in common – if we disregard the discernible oscillation – is a depiction of a rising trend followed by a pause effectively starting around 2003, and not 1997.

Alternative segmented linear regression

The shape of these curves also provides the motivation for a different idea: to apply a split or segmented regression. The idea is to run two regressions over all the data years, with a break point chosen to give the least discontinuity between the two segments.

The break point is found after a trial-and-error search to be September 2003. Monckton still gets his pause but it is now reduced to the last 12 years. The first segment of the proposed regression in fig 2, from 1979 to 2003, finds a computable rate of 0.16 deg C/decade. There is a pause after that over which the trend is indeed flat. The trend does not literally switch in slope in the month of September 2003; the purpose is to provide a meaningful computable rate.
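A sketch of the least-discontinuity break-point search, run on an invented rising-then-flat series, might look like this. The `best_breakpoint` function is a hypothetical helper written for illustration, not code from the original analysis.

```python
import numpy as np

def best_breakpoint(z, min_seg=24):
    """Fit separate straight lines to z[:m] and z[m:] for every candidate
    break month m, and return the m whose two lines have the smallest jump
    where they meet -- the least-discontinuity criterion in the text."""
    n = len(z)
    k = np.arange(n, dtype=float)
    best_m, best_gap = None, np.inf
    for m in range(min_seg, n - min_seg):
        b1, a1 = np.polyfit(k[:m], z[:m], 1)
        b2, a2 = np.polyfit(k[m:], z[m:], 1)
        gap = abs((a1 + b1 * m) - (a2 + b2 * m))   # jump at the break
        if gap < best_gap:
            best_m, best_gap = m, gap
    return best_m, best_gap

# Invented series: rising then flat, plus noise (numbers made up).
rng = np.random.default_rng(4)
n, true_break = 440, 290
k = np.arange(n)
signal = np.where(k < true_break, 0.003 * k, 0.003 * true_break)
z = signal + 0.03 * rng.standard_normal(n)

m, gap = best_breakpoint(z)
print(m, gap)
```

When the hidden trend really does bend, the two fitted lines only join up neatly near the true break, so the search tends to recover it; on noisier data the recovered month carries some uncertainty, which is why the text warns against reading the break date too literally.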


Fig 2 RSS monthly data Jan 1979 to September 2015. Dashed blue curves: cubic polynomial loess with 168 month half window width; polynomial regression with degree 5. Continuous red lines: segmented linear regression with break point September 2003.

However each regression is seen, by comparison with the loess and polynomial curves, to be an acceptable approximation. The two segments are plausible averages over respectively separate ranges of data. The apparently contradictory or competing regressions in fig 1 are now explained by more than just positing average slopes of a non-linear trend. Some information has been gleaned as to what that trend consists of.

Application to HadCRUT4 data

The RSS data tell us nothing about global trends before 1979 and one has to turn to the publicly available land- and sea-based surface measurements. The UK compilation HadCRUT4 goes back to 1850 but the two US series go back only to 1880. It is not my intention to try to assess the accuracy and reliability of any of these compilations. It is clearly a difficult exercise, relying on measurements which were never intended for a systematic global experiment. Particular difficulties must be associated with sea temperature measurements, which historically were very crude indeed. The series is of course under continual review from both its compilers and sceptical critics – which can only be a good thing. Setting aside the very important issue of measurement error, what can be inferred about the trend in global temperature if we should decide to trust HadCRUT4? To repeat: in my previous submission to WUWT in September 2013 I used the self-checking combination of a high-degree polynomial fit and a cubic loess. But now let us try something simpler – a succession of split linear regressions. We will need more than one break year. The same criterion will be adopted: that there needs to be the least discontinuity between successive regressions. All the break years meeting this requirement have to be searched and discovered by trial and error.

The result of this exercise is shown in fig.3 on the annually updated time series.


Fig 3 HadCRUT 4.4 annual boxed connected points to 2014. Discrete heavy spots are Met Office approved discrete decadal averages. Brown lines are sequential regression segments. Arbitrary start from 1870; break point years 1910, 1942, 1975, 2005. Estimated r.m.s. noise σ̂ = 0.098 deg C. Red lines estimate average trend; discovered break point year 1941; post-war average trend 0.087 ± 0.011 (2 s.d.) deg C/decade from 1941 to 2014.

I start in the same year, 1870, as in my previous report to WUWT. We need four break years – splitting the trend estimate into five segments (see brown lines). It should be noted that these break years are discovered – not arbitrary choices. The heavy points also depicted are discrete decadal averages of temperature located in the middle of each decade – a simple statistic which the UK Met Office has long favoured and which was adopted for the first time by the IPCC in their AR5 report (see part 2.4.3 of AR5).

As can be seen, the proposed line regressions are in excellent agreement with these averages. This agreement surely promotes confidence in both procedures. Comparison with my earlier presentation also shows good agreement with the optimally chosen cubic loess and polynomial regression. One can see a broad similarity with the RSS time series from the 80s onwards. The temperatures started rising from 1975 and no pause is found until a break year of 2005 (two years later than for the RSS data). With the latest version of HadCRUT4 (now issue 4.4) we get a low warming rate (of about 0.01 deg C/decade) from 2005 (compare the flat response with the RSS data). I have not included the year 2015, which was not complete when running all these calculations.

One should emphasise that (i) these computed lines are probable estimates, not certainties; (ii) they are not meant to be taken literally but to be seen as approximations to some postulated smooth curve which is hidden from view, and for which the loess and polynomial regressions may be better estimates.

The split regression segments graphically convey the impression that there were two long periods when temperatures were actually falling. Temperatures fell from at least 1870 to 1910, rose from 1910 to 1942, and then fell again from 1942 to 1975. From 1975 to 2005 warming resumed at a probable rate of 0.20 deg C/decade. But the warming did not persist at this rate. It seems to me probable that a third such half-period has begun, in which there is now a pause (though with the revised HadCRUT4.4 it is now a very slow warming).

This recent pause looks to be the continuation of an oscillation of global temperatures, with a period of slightly more than 60 years, going right back through the record and imposed on a generally rising mean trend. I am of course not the first ‘sceptic’ to point this out.

I come to much the same conclusion as in my 2013 report. It seems that the much simpler sequential regressions are as convincing a way of specifying the trend in the data as my previous effort using polynomial regression and cubic loess.

What is the matter with the UK Met Office and the IPCC scientists?

In the summer of 2013 the UK Met Office, and the academics whom they support, called a press conference in London to concede (reluctantly) a pause or ‘hiatus’ in global temperatures and also to confess they hadn’t a clue as to why it was happening. The rather critical BBC journalist David Shukman, who was present, noted that

….the scientists say .. pauses in warming were always to be expected. This is new – at least to me…I asked why this had not come up in earlier presentations. No one really had an answer, except to say that this “message” about pauses had not been communicated widely..

Indeed! The press conference coincided with reports by the Met Office (report 1, report 2, report 3) on the same theme. What the Met Office scientists did not discuss, or even concede, in that 3-part report is the presence of a substantial oscillation over the historical record. This oscillation surely cannot be attributed to the increasing concentration of atmospheric CO2, and it accounts for half the faster rate of warming in the 80s and 90s.

I find it troubling that presumably intelligent scientists (and they have competent statisticians too) cannot bring themselves to acknowledge – let alone explain or even properly discuss – the statistical fact that two extended cooling periods have featured in the past while CO2 levels were presumably always rising.

The reader will find the same statistical obfuscation in the two most recent reports (AR4 and AR5) released by the IPCC. A pause (or hiatus or standstill) is most unwelcome. Yet there is surely something to explain here for those who believe in the dominant anthropogenic effect on global warming. Since at least 1958 with the Keeling measurements (Mauna Loa etc) – and no doubt long before that – atmospheric CO2 levels have been rising monotonically (after seasonal averaging). It is hard to avoid the impression that there has been political pressure not to acknowledge the obvious: that an ever-rising concentration of atmospheric CO2 cannot be the only effect determining global surface temperature.

Trend v. average trend

In principle an oscillation does not have a trend. There is a need therefore to identify a mean trend which discounts that obvious oscillation. As suggested before, one can make the decomposition

trend in the data = mean trend in the data + quasi-periodic oscillation

How then to estimate this mean trend? My previous effort was perhaps too elaborate. The following may be more convincing. One can construct a split regression with just two segments (the two red lines in fig. 3). To my mind these lines steer a convincing middle course through the oscillating trend conveyed by the multiple split regressions. They may be about right. The break year of 1941 is again not an arbitrary choice: it has to be searched for, in order to ensure the least discontinuity between the two regressions with this construction. This notional mean trend is being estimated by two average trends computed by linear regression between favourable years. The post-war average trend is found to be 0.087 ± 0.011 (2 s.d.) deg C/decade, i.e. less than 0.1 deg/decade, which is half the rate of the actual trend which peaked (temporarily) in the 80s and 90s. The error limits are computed after first estimating the standard deviation of the noise as σ̂ = 0.098 deg C.
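The error limit quoted here is the standard one for an OLS slope. As a check, the following sketch reproduces the arithmetic on an invented series built with the quoted post-1941 rate and noise level; a 2 s.d. bound of about ±0.011 deg C/decade falls out of the residual scatter and the span length. White residuals are assumed, which is optimistic for real climate data.

```python
import numpy as np

def slope_with_2sd(z):
    """OLS slope of z against its index, with a 2 s.d. error bar computed
    from the residual scatter (assumes roughly white residuals)."""
    k = np.arange(len(z), dtype=float)
    b, a = np.polyfit(k, z, 1)
    resid = z - (a + b * k)
    sigma = np.sqrt(np.sum(resid ** 2) / (len(z) - 2))   # noise estimate
    se_b = sigma / np.sqrt(np.sum((k - k.mean()) ** 2))  # s.e. of slope
    return b, 2.0 * se_b, sigma

# Invented annual series: 74 "years" rising at 0.0087 deg C/yr
# (0.087 deg C/decade) with 0.098 deg C noise -- the figures quoted
# for the post-1941 segment.
rng = np.random.default_rng(5)
years = np.arange(1941, 2015)
z = 0.0087 * (years - 1941) + 0.098 * rng.standard_normal(len(years))

b, err2sd, sigma_hat = slope_with_2sd(z)
print(b * 10, err2sd * 10, sigma_hat)   # per-decade rate, 2 s.d., noise
```

Correlated residuals would widen the true error bars beyond this formula, so the ±0.011 figure should be read as a lower bound on the uncertainty.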

It is extraordinary that in their various releases neither the UK Met Office nor the IPCC seem to want to confront these statistical facts in their own data. It is of course unwise to make a projection into the future, but if we trust neither the elaborate computer climate models favoured by the Met Office nor the projections of Mills-type all-stochastic models, this is all we have got. One can only note that in the 85 years from now to 2100 the projected increase could be around 0.0087 × 85 ≈ 0.74 degrees. Could this be realistic and, if so, is that a cause for alarm? I only ask.

206 Comments
Mike
January 22, 2016 5:35 am

Trend v. average trend
In principle an oscillation does not have a trend.
Oh yes it does! If you pick your dates properly. 😉
http://climategrog.files.wordpress.com/2013/04/warming-cosine.png
There is a need therefore to identify a mean trend which discounts that obvious oscillation.
No there is a need to realise that neither the trend nor the mean trend have any meaning.
The random element is in the “forcings” ie in dT/dt. The temperature record is simply the integral of this randomness. That makes it a “random walk” which will have periods of time with trends. It’s that simple.
If much of the variation is assumed to be random or “stochastic” all the various “trends” are simply fitting to segments of a random walk , they mean NOTHING.
When I started reading I was hoping that the author was going to address this. Instead he seems to be buying into the trends game.
One can only note that in the 85 years from now to 2100 the projected increase could be around 0.0087 ´ 85 = 0.74 degrees. Could this be realistic and if so is that a cause for alarm? I only ask.
No, this is no more realistic than anyone else’s cherry-picked arbitrary trend fitting and totally spurious projection way outside the data fitting period.
Sorry, it’s garbage. I’d say that if a warmist did something similar, and I’ll say it if M.S. Hodgart (Visiting Reader, Surrey Space Centre, University of Surrey) does it.

TonyN
Reply to  Mike
January 22, 2016 6:55 am

Mike, from your post I’d say there is a need to re-read the OP. Especially the last para.

January 22, 2016 6:42 am

bobfj
January 21, 2016 at 2:47 pm
In Fig 3 (HadCRUT 4.4)… one could just as validly say that there are ‘plateaus’ centred around 1945 and developing around 1910 by a simple process known as “eyeball”… The earliest study that I know of to imply this is by two Russians, Lyubushin & Klyastorin (2003):
http://www.biokurs.de/treibhaus/180CO2/Fuel_Consumption_and_Global_dT-1.pdf
They predicted cooling starting in 5-10 years (from 2003). Russian scientists seem to follow the data and science where it takes them – very refreshing. One bit of data that I’m surprised has not been brought forward over the years by skeptics – I seem to recall seeing something on WUWT once – is that of the Russian astronomer Abdussamatov. I love this news report from no less than National Geographic!!
” Habibullo Abdussamatov, head of space research at St. Petersburg’s Pulkovo Astronomical Observatory in Russia, says the Mars data (the data is from NASA!!!) is evidence that the current global warming on Earth is being caused by changes in the sun.
“The long-term increase in solar irradiance is heating both Earth and Mars,” he said.
NASA noticed in 2005 that the southern polar ice cap on Mars had been shrinking for 3 years (likely shrinking before but not so noticeable or ignored?)
I especially like the last sentence on page 1 leading to an outcry on page two that is worth reading:
“Abdussamatov’s work, however, has not been well received by other climate scientists….”
I urge everyone to read this.

steveta_uk
January 22, 2016 8:56 am

Oh dear, Mr M.S.Hodgart, you’ve really misunderstood Monckton’s point, haven’t you?
Analogy: picture that you are walking along a plateau at the top of a hill. Your companion notices that you have not been climbing the hill for over 300 yards. He proves this by using his handy theodolite that shows how far back the slope has been flat.
And you respond by saying that you cannot measure just back to where the hill flattens out – you must measure right back to the base of the hill, and so you are clearly still rising, though at a reducing rate.
Can you not see just how silly that sounds?

TonyN
Reply to  steveta_uk
January 22, 2016 9:47 am

Clearly you know a lot. So, perhaps you would be good enough to do a series of five linear regressions on the past 5, 10, 15, 20, 25 years respectively, and show us what each of your regressions tells us about ‘The Pause’?
And then perhaps you could also tell us why some Climatologists have been claiming a recent series of ‘hottest ever’ years, during ‘The Pause’.

TonyN
Reply to  TonyN
January 22, 2016 10:34 am

Oops! The above post was meant for Steveta_UK

JohnKnight
Reply to  TonyN
January 22, 2016 4:24 pm

Tony,
Do you have a point?
“And then perhaps you could also tell us why some Climatologists have been claiming a recent series of ‘hottest ever’ years, during ‘The Pause’.”
Maybe they are lying, or are taken in by lies. That’s what it looks like to me . .

steveta_uk
Reply to  TonyN
January 23, 2016 6:27 am

TonyN, while walking along the plateau at the top of the hill, would you really expect your climatologist friends to repeatedly point out that for the last 10 yards you have never been higher, over and over again?

TonyN
Reply to  TonyN
January 23, 2016 10:45 am

steveta_uk
You do realise that your ‘plateau’ is riven with pinnacles and crevasses, and according to Monckton is shrinking, and may well disappear altogether ?

Janice Moore
Reply to  TonyN
January 23, 2016 10:56 am

TonyN, you misunderstood Monckton. He got into his Chevy Suburban and drove [from] the wall (a wall only God can see beyond) at the end of the 18-mile-long plateau, disregarding minor fluctuations (to make an issue of which, per Dr. Richard Lindzen, is, per se, dishonest), checked the odometer and saw that his earlier reading of the length of the plateau was off by a few feet.
In case you misunderstood the basics:
1. The Earth has been, in ups and downs, generally cooling for about 6,000 years.
2. The earth has been, apparently, warming slightly since the end of the LIA.
3. The earth is, now, not warming. That’s all we know. To call this stop in warming a “hiatus” (in warming) is presumption.
[if the “wall” represents a known elevation up the flank of a mountain whose final slope and height is unknown, Monckton in your example is driving back away from that “wall” into the past. .mod]

Janice Moore
Reply to  TonyN
January 23, 2016 10:56 am

“…drove to the wall.”

JohnKnight
Reply to  steveta_uk
January 22, 2016 6:09 pm

Do you mean those who have obviously asserted that we are headed for a climate meltdown? Or do you mean to imply that others must somehow prove a negative in that regard?

TonyN
Reply to  steveta_uk
January 23, 2016 1:30 pm

Janice,
I quote from Monckton’s first posting upthread;
“…… The el Nino persists in region 3.4, and that is likely to keep the temperature rising for some months, and perhaps to extinguish the Pause for a time, and perhaps for good”
Monckton’s plateau or Pause is, according to him, likely to be impermanent. It has already shrunk by a year!
If you want a case to show that Anthropogenic CO2 emissions are not a significant cause of warming when compared with other natural causes, you will need more robust evidence that does not melt away with temperature rises from natural causes.
Hodgart gives you that tool, which points to warming and cooling within the recent data-series, and these indelible facts will not melt away, firstly with more warming, and secondly because it will not be possible for the Warmists to doctor these recent records.
Look at his Fig 2. And if you have the time, read his OP again.

JohnKnight
Reply to  TonyN
January 23, 2016 3:07 pm

Tony,
“If you want a case to show that Anthropogenic CO2 emissions are not a significant cause of warming when compared with other natural causes, you will need more robust evidence that does not melt away with temperature rises from natural causes.”
It’s the same record, nothing will melt away if temps start rising (or falling) in a sustained fashion, except the present tense in referring to this “pause” in temp change.
Temps have not risen in eighteen years
Temps didn’t rise significantly for eighteen years
See how that works? You just change the wording a bit, nothing melts away.
Do you understand that much?

brians356
Reply to  TonyN
January 25, 2016 9:41 am

Plot temperature for the past ~18 years against CO2 concentration. Gaze upon it. Think. Is CO2 really what’s forcing temperature? None of the plethora of IPCC models can account for the disconnect between temperature and CO2 for such an extended period. CO2 concentration climbing like a homesick angel; temperature essentially flat (the “plateau” discussed elsewhere).
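[Editor’s note: the comparison suggested above can be sketched numerically. This is a minimal, purely synthetic illustration — the CO2 growth rate, the noise level, and the flat anomaly are assumed stand-in values, not any real record — showing that a monotonically rising series can have essentially zero correlation with a flat, noisy one.]

```python
# Hypothetical illustration: a steadily rising CO2 series versus a flat,
# noisy temperature-anomaly series over ~18 years of monthly values.
# All numbers are synthetic stand-ins, not measurements.
import random

random.seed(42)
months = 18 * 12
co2 = [370.0 + 0.16 * m for m in range(months)]               # ~2 ppm/yr rise (assumed)
temp = [random.gauss(0.0, 0.1) for _ in range(months)]        # flat anomaly + noise

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

r = pearson(co2, temp)
print(round(r, 3))  # near zero: CO2 rises monotonically while temperature stays flat
```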

TonyN
Reply to  steveta_uk
January 24, 2016 12:07 am

John Knight, re your recent post.
Even Monckton acknowledges that the Pause may disappear.
” “…… The el Nino persists in region 3.4, and that is likely to keep the temperature rising for some months, and perhaps to extinguish the Pause for a time, and perhaps for good”
You are right that it will remain in the record as a period within which you can get a flat line. As this may well be the case for other periods, it could then be argued that it is a ‘cherry-pick’. To guard against that criticism, look again at Hodgart’s paper.

JohnKnight
Reply to  TonyN
January 24, 2016 11:56 am

Is there something in Mr Hodgart’s paper that you feel is beyond dispute or criticism? Something that will leave speechless the same CAGW pushers who (ridiculously) speak of treating the recent past as particularly significant in trying to determine what is currently happening — “cherry picking” — if Mr. Monckton mentions it?
Yet you don’t even mention what that might be as you tell people to read the paper again? Perhaps you mean he ought to just tell those CAGW clansmen to read Mr. Hodgart’s paper, and that alone will shut them up? ; )
Seriously, say it, please, or I can’t hear it . .

Michael C
January 22, 2016 10:46 am

Thank you Mr Hodgart for this lucid summary
The obvious ‘trend’ in the first graph that strikes me is no trend, but two stable regimes intersected by a pulse increase in ’98: the so-called ’98 El Niño. El Niños, according to the current simplistic model, simply pump water around with wind. They cannot increase the global warmth status; they only redistribute heat. ’98 is the only obvious anomaly. Get to work on that, chaps, and you make some inroads: was it an increase in incoming energy or a decrease in outgoing? Or (my arm-wave) an injection of tectonic heat?
Aside from this:
“ It should be emphasised that the physical accuracy of any of these data is not under review here and is a separate issue”.
The data are still within a field of 1 °C/century. Knowing how data were collected in the first half of the century, I would hate to be floundering around in the dark on a mountain with these odds.
Personally I feel we are ignoring the very high probability of negative feedback. This has to be the most important influence on the noise. It ain’t noise: it is the mechanism that has preserved our environment for so long.

J Martin
January 22, 2016 1:44 pm

I would have liked to have seen a comparison between the result obtained and the same result with any warming effects of El Niños removed. Likely the rate of warming would have been lower still.

brians356
January 22, 2016 2:21 pm

I think I heard His Ludship banging away with gust on a Smith Corona …

brians356
Reply to  brians356
January 22, 2016 2:22 pm

gusto
“Edit” button …please!

Proud Skeptic
January 22, 2016 2:40 pm

I still fail to understand why people even accept the underlying premise that we can accurately measure the average temperature of the Earth. Further, I reject the idea that we have anything of sufficient accuracy to compare it to in order to make a claim like…”the Earth has warmed 0.8 C over the last 100 years.”
Everything I have read on this and other sites leads me to the conclusion that we just don’t know this stuff.

Michael C
Reply to  Proud Skeptic
January 22, 2016 11:48 pm

You are right. No one knows this stuff, yet. I hope I live long enough to see some real understanding emerge.

ImranCan
January 22, 2016 7:23 pm

I don’t think the writer has understood what Monckton has done. He also suffers from the illusion that everything started in 1979 (which he incorrectly states as 1977), just because the dataset starts there. Sure, there is a rising trend since then, just as there is a flat trend over the last 18+ years. And if you go back 10,000 years the trend will be declining; go back 50,000 years and the trend is increasing. The point is that whatever trend you identify is a function of the length of the time series. It’s not complicated.

TonyN
Reply to  ImranCan
January 23, 2016 2:21 am

ImranCan; Are you sure you understand what Hodgart has done? Look at Fig 2.

Ryan Stephenson
January 23, 2016 9:20 am

This seems to somewhat misrepresent the situation.
Monckton does not need to “prove” a particular trend, nor to make some new prediction. Thus the exact trend that could be derived does not need to fit any particular trend line. It is up to the proponents of AGW theory to demonstrate that THEY can fit their data to a particular trend.
Up to now, proponents of AGW theory have claimed that exponentially rising increases in CO2 concentration in the atmosphere have led to increasing global temperatures. They have pointed to temperatures from 1945 to 1997 from their own (dubious) data to demonstrate a trend. Fair enough, but Monckton has demonstrated that this trend does not hold for the period 1997 to 2015. Thus the AGW theory remains unproven. This is all he needed to do.
Now the reason for the divergence of the initial trend from the trend over the last 18 years could be one of many. Perhaps there was AGW but we reached saturation point? Perhaps the data massaged before 1997 were massaged in the wrong way, creating a false trend? Perhaps using thermometers in Stevenson screens isn’t an accurate way to measure climatic temperatures?
I’m going for the last one. Thermometers in Stevenson screens situated in the UK do not measure climate temperatures. They measure the impacts of [1] the season, [2] the level of cloud cover, and [3] the wind speed and direction. These can cause 20-degree differences in temperature over the course of 24 hours — much bigger than the signal you are looking for.
Of course you can apply a simple low-pass filter to this “noise” signal to try and find a totally different measurement underneath, but you are far more likely simply to remove all the high-frequency random noise, leaving a low-frequency noise signal that gives the impression you are looking at a trend over a given period of interest, when actually you are looking only at a low-frequency random signal.
What climate scientists should have been concentrating on over the last 35 years is finding a way to measure climate temperatures with less of the confounding noise from cloud cover and wind. They need this not only to show their theories might be correct, but also to inform and validate their computer models. Without accurate measurements of climate temperatures, how can they possibly test their own climate models? It’s a bit of a disgrace that the climate science community hasn’t spotted this flaw in its science and corrected it long before now.
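[Editor’s note: the filtering point in the comment above can be sketched numerically. This is a synthetic illustration, not an analysis of any real temperature record: low-pass filtering pure white noise induces strong serial correlation, so the smoothed series wanders slowly and can look like a trend. The noise parameters and window length are arbitrary assumptions.]

```python
# Sketch: a boxcar low-pass filter applied to white noise manufactures
# persistence (high lag-1 autocorrelation), even though the input has none.
import random

random.seed(1)
n = 500
noise = [random.gauss(0.0, 1.0) for _ in range(n)]  # uncorrelated white noise

def moving_average(x, window):
    """Trailing moving average: a simple boxcar low-pass filter."""
    return [sum(x[i - window + 1:i + 1]) / window for i in range(window - 1, len(x))]

def lag1_autocorr(x):
    """Lag-1 autocorrelation: near 0 for white noise, near 1 for slow wander."""
    m = sum(x) / len(x)
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(len(x) - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

smooth = moving_average(noise, 24)
print(round(lag1_autocorr(noise), 2))   # near 0: raw noise is uncorrelated
print(round(lag1_autocorr(smooth), 2))  # near 1: smoothing induces persistence
```

Runs of such persistent low-frequency noise are exactly what can masquerade as a “trend” over any window of interest.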

January 23, 2016 3:42 pm

Lord Monckton proves the GREAT PAUSE, as he calls it, with his linear trend over 17+ years.
This “GREAT PAUSE”, as a “TEMP PLATEAU”, was first identified back in 2003, at a time at which the pause was barely (or not at all) visible. Good sense and knowledge were needed to counter-argue those hyped-up MILLENNIAL climate-warmist predictions of AR3, which were followed only months afterwards by a PLATEAU study.
This plateau study, according to some blog replies further up, was:
“The earliest study that I know of to imply this is by two Russians, Lyubushin & Klyashtorin (2003):
http://www.biokurs.de/treibhaus/180CO2/Fuel_Consumption_and_Global_dT-1.pdf”
Lord Monckton proves this study right, showing now, 13 years later, that the CO2 did not increase global temps, and trashing AR3 as nonsense science.
Therefore the Lyubushin & Klyashtorin PLATEAU is the ONE AND ONLY correct study.
According to L&K, the PLATEAU will continue until 2040, and Lord Monckton will (wishing him a long life) continue to show each month the continuing temp plateau…
and I am sorry that the author Hodgart left this PLATEAU background of Lord Monckton’s GREAT PAUSE unconsidered. JS.

January 26, 2016 10:32 am

So I got curious and decided to evaluate the data another way. I picked a random data point: a weather station. I picked a month, January, and looked at each day — January 1, for instance — from 1990 to 2015. I did this for 12 weather stations. They track as a pause for a lot of years. Now, if each weather station were biased, it would be biased the same way. So if they all track, then even if they are off, they are off by the same amount. So by real logic, we must assume that the actual temperature variation at the stations is consistent with the pause. I wonder how they would argue against that?
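[Editor’s note: the procedure described above can be sketched with synthetic data. The twelve “stations”, their biases, and the regional signal here are all hypothetical, invented purely to illustrate the commenter’s logical point: a constant per-station bias shifts the level of the readings but leaves the fitted trend unchanged, so biased stations that all track together still agree on the trend.]

```python
# Twelve hypothetical stations sharing one flat regional January signal,
# each with its own fixed bias; a constant bias cannot alter the OLS slope.
import random

random.seed(7)
years = list(range(1990, 2016))
regional = [random.gauss(0.0, 0.5) for _ in years]  # flat "pause" signal, no drift

def ols_slope(x, y):
    """Ordinary least-squares slope of y against x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

slopes = []
for station in range(12):
    bias = random.uniform(-2.0, 2.0)        # fixed per-station offset (assumed)
    readings = [t + bias for t in regional]  # biased but tracking the same signal
    slopes.append(ols_slope(years, readings))

# Constant bias shifts the level, not the slope: all stations agree on the trend.
print(max(slopes) - min(slopes))  # zero up to floating-point error
```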

January 27, 2016 2:19 pm

This microscopic view of short-term temperature trends ignores the rather significant macro trend: our little upswing since the Little Ice Age, now stalled, is but a minor upswing like prior upswings in temperature which, when smoothed, indicate we are still on a descent to the next ice age. Our current peak is lower than the prior ones, and those peak tops all trace an overall descending temperature curve. Cruz’s home country and much of the UK would be gone. Now there is climate change to be concerned about. There are real research issues to be dug into, as there are still no clear explanations for all the natural variability that surrounds this current slight increase in temperature.
That being said, it’s still fun to see simulation models clearly disproved in this shorter time frame, and everyone arguing against this obvious proof with their cherry-picking arguments. Faith is a religious attribute, and it’s pretty clear that the religion of the ‘name of the moment’ has sucked in a number of adherents.
The sad state of affairs is that much of its priesthood is clearly aware of paleoclimate history and so is knowingly deceiving the masses.
If they are not aware of paleoclimate history, then they are hardly worthy of the term climate scientist.

johann wundersamer
January 31, 2016 9:19 pm

That essay by M.S. Hodgart in no respect adequately answers the following:
1. In the real world, up to the present time, global temperatures are stagnant.
2. Looking back from the present date, the pause in global warming has lasted more than 18 years.
3. During that pause of more than 18 years, the (however minuscule) share of CO2 in the mass of the atmosphere has grown continually.
4. Which says: for more than 18 years, Mother Nature has falsified the theory that CO2 drives global temperatures.
Thankfully Monckton of Brenchley has the tools and continues the excellent work of giving a realistic, plainly true view of an aspect critical to climate science.
Best regards – Hans