Temperature analysis of 5 datasets shows the 'Great Pause' has endured for 13 years, 4 months

Time to sweep away the flawed, failed IPCC

By Christopher Monckton of Brenchley

HadCRUT4, always the tardiest of the five global-temperature datasets, has at last coughed up its monthly global mean surface temperature anomaly value for June. So here is a six-monthly update on changes in global temperature since 1950, the year when the IPCC says we might first have begun to affect the climate by increases in atmospheric CO2 concentration.

The three established terrestrial temperature datasets that publish global monthly anomalies are GISS, HadCRUT4, and NCDC. Graphs for each are below.

[Figures: monthly global temperature anomalies and least-squares trends since 1950 for GISS, HadCRUT4, and NCDC]

GISS, as usual, shows more global warming than the others – but not by much. At worst, then, global warming since 1950 has occurred at a rate equivalent to 1.25 [1.1, 1.4] Cº/century. The interval occurs because the combined measurement, coverage and bias uncertainties in the data are around 0.15 Cº.

The IPCC says it is near certain that we caused at least half of that warming – say, 0.65 [0.5, 0.8] Cº/century equivalent. If the IPCC and the much-tampered temperature records are right, and if there has been no significant downward pressure on global temperatures from natural forcings, we have been causing global warming at an unremarkable central rate of less than two-thirds of a Celsius degree per century.

Roughly speaking, the business-as-usual warming from all greenhouse gases in a century is the same as the warming to be expected from a doubling of CO2 concentration. Yet at present the entire interval of warming rates that might have been caused by us falls well below the least value in the predicted climate-sensitivity interval [1.5, 4.5] Cº.

The literature, however, does not provide much in the way of explicit backing for the IPCC’s near-certainty that we caused at least half of the global warming since 1950. Legates et al. (2013) showed that only 0.5% of 11,944 abstracts of papers on climate science and related matters published in the 21 years 1991-2011 had explicitly stated that global warming in recent decades was mostly manmade. Not 97%: just 0.5%.

As I found when I conducted a straw poll of 650 of the most skeptical skeptics on Earth, at the recent Heartland climate conference in Las Vegas, the consensus that Man may have caused some global warming since 1950 is in the region of 100%.

The publication of that result provoked an extraordinary outbreak of fury among climate extremists (as well as one or two grouchy skeptics). For years the true-believers had gotten away with pretending that “climate deniers” – their hate-speech term for anyone who applies the scientific method to the climate question – do not accept the basic science behind the greenhouse theory.

Now that that pretense is shown to have been false, they are gradually being compelled to accept that, as Alec Rawls has demonstrated in his distinguished series of articles on Keating’s fatuous $30,000 challenge to skeptics to “disprove” the official hypothesis, the true divide between skeptics and extremists is not, repeat not, on the question whether human emissions may cause some warming. It is on the question how much warming we may cause.

On that question, there is little consensus in the reviewed literature. But opinion among the tiny handful of authors who research the “how-much-warming” question is moving rapidly in the direction of little more than 1 Cº warming per CO2 doubling. From the point of view of the profiteers of doom (profiteers indeed: half a dozen enviro-freako lobby groups collected $150 million from the EU alone in eight years), the problem is that 1 Cº is no problem.

Just 1 Cº per doubling of CO2 concentration is simply not enough to require any “climate policy” or “climate action” at all. It requires neither mitigation nor even adaptation: for the eventual global temperature change in response to a quadrupling of CO2 concentration compared with today, after which fossil fuels would run out, would be little more than 2 Cº – well within the natural variability of the climate.
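For readers who want to check that arithmetic, here is a minimal sketch assuming the conventional logarithmic response of temperature to CO2 concentration; the sensitivity figures below are the round numbers discussed in the text, not measured values.

```python
import math

def equilibrium_warming(sensitivity_per_doubling, c_ratio):
    """Eventual warming (C) for a logarithmic response to a CO2 concentration ratio."""
    return sensitivity_per_doubling * math.log2(c_ratio)

# A quadrupling of CO2 is two doublings, so at 1 C per doubling the eventual
# warming is about 2 C, as the text states.
print(equilibrium_warming(1.0, 4.0))   # 2.0
# For comparison, the bounds of the IPCC's [1.5, 4.5] C sensitivity interval:
print(equilibrium_warming(1.5, 4.0))   # 3.0
print(equilibrium_warming(4.5, 4.0))   # 9.0
```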

It is also worth comparing the three terrestrial and two satellite datasets from January 1979 to June 2014, the longest period for which all five provide data.

[Figures: monthly global temperature anomalies and least-squares trends, January 1979 to June 2014, for each of the five datasets]

We can now rank the results since 1950 (left) and since 1979 (right):

[Table: least-squares warming rates for each dataset, ranked, since 1950 and since 1979]

Next, let us look at the Great Pause – the astonishing absence of any global warming at all for the past decade or two notwithstanding ever-more-rapid rises in atmospheric CO2 concentration. Taken as the mean of all five datasets, the Great Pause has endured for 160 months – i.e., 13 years 4 months:

[Figure: mean of the five datasets, showing no warming trend over the past 160 months]
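As a rough illustration of how such a figure can be computed, here is a minimal sketch assuming the pause is defined as the longest period ending with the latest month over which the least-squares trend is not positive; the `anomalies` argument and the synthetic example series are placeholders, not the actual five-dataset mean.

```python
import numpy as np

def pause_length_months(anomalies):
    """Longest run of months, ending at the latest month, whose least-squares
    trend is zero or negative. `anomalies` is a 1-D array of monthly values."""
    n = len(anomalies)
    for start in range(n - 2):                  # need at least 3 points for a trend
        segment = anomalies[start:]
        x = np.arange(len(segment))
        slope = np.polyfit(x, segment, 1)[0]    # degree-1 fit; slope in C/month
        if slope <= 0:
            return len(segment)                 # earliest qualifying start month
    return 0

# Synthetic example: 50 years of warming followed by a flat decade, plus noise.
rng = np.random.default_rng(0)
series = np.concatenate([np.linspace(0.0, 0.6, 600), np.full(120, 0.6)])
series += rng.normal(0.0, 0.05, series.size)
print(pause_length_months(series), "months")
```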

The knockout blow to the models is delivered by a comparison between the rates of near-term global warming predicted by the IPCC and those that have been observed since.

The IPCC’s most recent Assessment Report, published in 2013, backcast its near-term predictions to 2005 so that they continued from the predictions of the previous Assessment Report published in 2007. One-sixth of a Celsius degree of warming should have happened since 2005, but, on the mean of all five datasets, none has actually occurred:

[Figure: IPCC near-term warming predictions from 2005 compared with the observed mean of the five datasets]

The divergence between fanciful prediction and measured reality is still more startling if one goes back to the predictions made by the IPCC in its First Assessment Report of 1990:

[Figure: observed global temperature trend since 1990 (bright blue) against the IPCC's 1990 prediction interval (orange region)]

In 1990 the IPCC said with “substantial confidence” that its medium-term prediction (the orange region on the graph) was correct. It was wrong.

The rate of global warming since 1990, taken as the mean of the three terrestrial datasets, is half what the IPCC had then projected. The trend line of real-world temperature, in bright blue, falls well below the entire orange region representing the interval of near-term global warming predicted by the IPCC in 1990.

The IPCC’s “substantial confidence” had no justification. Events have confirmed that it was misplaced.

These errors in prediction are by no means trivial. The central purpose for which the IPCC was founded was to tell the world how much global warming we might expect. The predictions have repeatedly turned out to have been grievous exaggerations.

It is baffling that each successive IPCC report states with ever-greater “statistical” certainty that most of the global warming since 1950 was attributable to us when only 0.5% of papers in the reviewed literature explicitly attribute most of that warming to us, and when all IPCC temperature predictions have overshot reality by so wide – and so widening – a margin.

Not one of the models relied upon by the IPCC predicted as its central estimate in 1990 that by today there would be half the warming the IPCC had then predicted. Not one predicted as its central estimate a “pause” in global warming that has now endured for approaching a decade and a half on the average of all five major datasets.

There are now at least two dozen mutually incompatible explanations for these grave and growing discrepancies between prediction and observation. The most likely explanation, however, is very seldom put forward in the reviewed literature, and never in the mainstream news media, most of which have been very careful never to tell their audiences how poorly the models have been performing.

By Occam’s razor, the simplest of all the explanations is the most likely to be true: namely, that the models are programmed to run far hotter than they should. They have been trained to yield a result profitable to those who operate them.

There is a simple cure for that. Pay the modelers only by results. If global temperature failed to fall anywhere within the projected 5%-95% uncertainty interval, the model in question would cease to be funded.

Likewise, the bastardization of science by the IPCC process, where open frauds are encouraged so long as they further the cause of more funding, and where governments anxious to raise more tax decide the final form of reports that advocate measures to do just that, must be brought at once to an end.

The IPCC never had a useful or legitimate scientific purpose. It was founded for purely political and not scientific reasons. It was flawed. It has failed. Time to sweep it away. It does not even deserve a place in the history books, except as a warning against the globalization of groupthink, and of government.

190 Comments
Arno Arrak
July 30, 2014 9:22 am

davidmhoffer July 29, 2014 at 8:31 pm says:
“What the current pause suggests is two things:
1. Sensitivity is far lower than IPCC estimates
2. The climate models are invalid based on the metrics of the modelling community themselves”
You are on the right track but do not go far enough. First, sensitivity is actually zero because carbon dioxide is not the cause of warming. Second, the models are so bad that the entire modeling enterprise should be closed down.

It started with Hansen in 1988 who tried to predict the climate out to 2019. His “business as usual” curve was so far off reality that it was ridiculous to see it as we lived through his predicted years. They have now had 26 years to improve their product, have switched from an IBM mainframe to supercomputers costing millions of dollars, are using million-line code in their software, and their results are no better than Hansen’s and actually worse for the twenty-first century. If you just look at a CMIP5 ensemble you will see that the dozens of whips in it, each one from a different supercomputer, slope up and indicate warming while we actually experience a temperature standstill that has lasted since the beginning of the century.

It is easy enough to see where this stupidity comes from. Despite the fact that global temperature is at a standstill, atmospheric carbon dioxide at the same time is increasing. They have it coded into their software that increasing carbon dioxide means increasing global temperature and that is what gives them the nerve to come out with predictions of warming that do not exist.

Fact is that the alleged greenhouse effect from this carbon dioxide simply does not exist. There is no experimental proof of it and it all depends on accepting the greenhouse theory that goes back to Arrhenius. He observed that carbon dioxide in his laboratory absorbed IR radiation and got warm. From that he deduced that doubling the amount of atmospheric carbon dioxide would raise global temperature by four or five degrees. More current calculations put this value at 1.1 degrees Celsius. But carbon dioxide is not the only or even the most important GHG in the atmosphere, water vapor is. And Arrhenius cannot handle several greenhouse gases simultaneously absorbing in the IR.

The only theory that can do this is the Miskolczi greenhouse theory (MGT). What it predicts is what we see: addition of carbon dioxide to the atmosphere does not warm it. According to MGT carbon dioxide and water vapor jointly establish an IR absorption window in the atmosphere with an optical thickness of 1.87. If you now add carbon dioxide to the atmosphere it will start to absorb, just as the Arrhenius theory says. But this will increase the optical thickness. And as soon as this happens water vapor will start to diminish, rain out, and the original optical thickness is restored. The carbon dioxide that was introduced will continue to absorb of course but it will not be able to generate heat because the reduction of water vapor compensates for the potential warming this could create. That is the explanation of why there is no warming today despite a rising atmospheric carbon dioxide. It follows that this suppression of greenhouse warming by water vapor is universal wherever water vapor is present. Hence, there is no such thing as anthropogenic global warming. AGW is just a pseudo-scientific fantasy, promulgated by true believers who babble about water vapor tripling their imaginary greenhouse warming.

Jim Cripwell
July 30, 2014 10:04 am

Richard Courtney, you write “Yes, but the truth is what it is, and it is not affected by any group refusing to acknowledge it.”
Yes, but if the truth is not agreed by the warmists, then NOTHING is going to happen. Politicians are going to believe in CAGW for ever.

peakweather
July 30, 2014 10:21 am

I enjoyed reading RB Alley’s The Two-Mile Time Machine and have re-read it quite a few times. For someone with more than a passing interest in climate change, I was always intrigued by the graph in his book showing the temperature in Greenland over the past 800,000 years. The graph shows that the current warm period is well past its bedtime and that, for some reason, the Earth has had a very stable warm temperature for some 2,000 years now. The temperature has been on a slight cooling trend throughout those 2,000 years. Note this is based on just one graph in a book, but an interesting one. So was the Little Ice Age an attempt to start the major cooling trend again? Are we seeing another attempt now? Are the planetary forces overcoming the bit of warming we have seen over the 150 years prior to 14 years ago? Do 150 extra particles of anything per million make that much difference? Why does the current warm period of the ice age we are in differ from the previous four going back 800,000 years, which peaked very sharply then cooled very rapidly, unlike the teetering of the current warm 2,000 years? Thanks for some great debate and information on this site.

July 30, 2014 10:39 am

Arno Arrak said:
“If you now add carbon dioxide to the atmosphere it will start to absorb, just as the Arrhenius theory says. But this will increase the optical thickness. And as soon as this happens water vapor will start to diminish, rain out, and the original optical thickness is restored.”
What seems to happen is that if CO2 increases radiative capability from within an atmosphere then the radiation direct to space reduces the amount of energy that can be returned to the surface in adiabatic descent. Less energy is then returned adiabatically than is taken upward adiabatically which is a net cooling effect at the surface.
That weakens convective overturning which leads to less wind and so less evaporation and less water vapour as per Miskolczi’s observations.
The mistake of radiative theory lies in thinking that CO2 leads to more convective overturning rather than less.
They don’t realise that the surface cannot be warmed by more DWIR because at the same time an equivalent amount of radiation is leaking out of the adiabatic exchange to space so that the reduction in energy getting back to the surface from weaker adiabatic descent exactly offsets the effect of the DWIR.

July 30, 2014 10:44 am

Roger… Fall I was told was around Aug 1 in the Northern Hemisphere. That’s when the leaves first begin to fall. There won’t be very many, just a few (on their own, not storm or insect damage). It is no longer spring or high summer. It is a small but definite change. The leaves that fall will be yellow and supple. You will find none of them in May or June.
The reason that 17 yrs and 10 months is better than 13 years is that the longer the pause continues, the greater the divergence between temperatures and CO2. At some point, it will become obvious to all that there is no relationship, or only a marginal one at best. It’s not the sky falling and we don’t need to tax ourselves into an icy cold death, starvation, or years of economic downturn. … As an aside, no tomatoes this year; it has been too cold.
We don’t need to bring the West down to a poverty level like the rest of the world. We need to bring the rest of the world up to a decent standard of living. Petty despots and throwback communists have no agenda for actually improving people’s lives. The Middle East is alive with such people who subscribe to “do as I say or die”. A command economy has never brought people out of poverty, and never will. A transfer of wealth like the UN Agenda 21 will not improve people’s lives. It will enrich the few that are already rich or in control in those countries.

July 30, 2014 11:14 am

well
actually
according to my own global data set
[which unlike the other global data sets has been properly balanced]
we are busy globally cooling
did you all not notice this?
http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/

ripshin
Editor
July 30, 2014 11:20 am

AlecM says:
July 30, 2014 at 1:04 am
So, post it here. It’s better than letting it sit on your shelf collecting dust. Should be an interesting read.
rip

Frank
July 30, 2014 11:53 am

Lord Monckton: Most, if not all, of your peers expect the current pause to end sooner or later – perhaps later this year if a strong El Nino develops. It would make more sense to focus people’s attention on a phenomenon certain to last longer – the unambiguous over-projection of warming since AOGCMs were first developed in the 1980s. This discrepancy will survive the next big El Nino; the pause may not.

richardscourtney
July 30, 2014 12:05 pm

Jim Cripwell:
Genuine thanks for your post at July 30, 2014 at 10:04 am which says in total

Richard Courtney, you write “Yes, but the truth is what it is, and it is not affected by any group refusing to acknowledge it.”
Yes, but if the truth is not agreed by the warmists, then NOTHING is going to happen. Politicians are going to believe in CAGW for ever.

You address the most important issue pertaining to the anthropogenic (i.e. man-made) global warming scare (i.e. the AGW-scare). I tend to avoid it because whenever it is raised then certain ultra-right-wing trolls hijack threads with their bile.
As I see it, the issue is as follows.
The AGW-scare was killed at the failed 2009 UN climate conference in Copenhagen. I said then that the scare would continue to move as though alive, in a similar manner to a beheaded chicken running around a farmyard. It continues to provide the movements of life but it is already dead. And its deathly movements provide an especial problem.
Nobody will declare the AGW-scare dead: it will slowly fade away because politicians never proclaim that they were wrong. This is similar to the ‘acid rain’ scare of the 1980s. Few remember that scare unless reminded of it, but its effects linger; e.g. the Large Combustion Plant Directive (LCPD) remains in force. Importantly, the bureaucracy which the EU established to operate the LCPD still exists. And those bureaucrats justify their jobs by imposing ever more stringent, always more pointless, and extremely expensive emission limits which are causing enforced closure of UK power stations (Didcot is being demolished now and its cooling towers were felled this week).
Bureaucracies are difficult to eradicate and impossible to nullify.
As the AGW-scare fades away those in ‘prime positions’ will attempt to establish rules and bureaucracies to impose those rules which provide immortality to their objectives. Guarding against those attempts now needs to be a serious activity. Warmunist activists will be working to promote and install the rules and bureaucracies.
Publicising issues such as the ‘pause’ informs the public of a need to oppose imposition of the rules and bureaucracies. Politicians will never publicly admit they were wrong.
Richard

July 30, 2014 12:06 pm

James Abbott
…which happens to give a favoured result for those looking for the least warming.
RSS is selected because it is by far the best data set. All the terrestrial sets are heavily adjusted in a warming-biased direction. Over and over this blog has shown how various stations are adjusted in the wrong direction, and have overly influenced other nearby stations due to the gridding technique.

July 30, 2014 1:17 pm

Eric Worrall says:
July 29, 2014 at 4:51 pm

Hot d@mn, my tomato plants on the subtropical Fraser Coast are stunted weeds, because the weather has gotten too cold for them to grow. I was hoping for at least a *little* global warming this year… 🙂

No need to thank me.

July 30, 2014 2:42 pm

One thing His Lordship fails to mention is that if you calculate 95% confidence intervals on the slopes he discusses, it will turn out that the slope for the shorter period (13 yrs, 4 mo, or whatever) is not only statistically indistinguishable from zero, but it’s also statistically indistinguishable from the longer-term slope of around 0.16 °C per decade. So if he wants to be really up-front and honest about all this, he should mention that the short-term slope is too uncertain (due to the strength of the interannual noise) to say whether it is essentially flat, or whether it is essentially the same as the long-term slope over the last several decades.
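As an aside for readers who want to test this kind of claim themselves, here is a minimal sketch of an ordinary least-squares slope with a naive 95% confidence interval; it assumes `y` is a monthly anomaly series and ignores the autocorrelation adjustment that the trend calculators cited later in this thread apply.

```python
import numpy as np
from scipy import stats

def trend_with_ci(y):
    """OLS slope of a monthly series, in C/decade, with a naive 95% half-width."""
    x = np.arange(len(y))
    fit = stats.linregress(x, y)                  # slope and its standard error, per month
    t = stats.t.ppf(0.975, len(y) - 2)            # two-sided 95% critical value
    return fit.slope * 120.0, t * fit.stderr * 120.0   # 120 months per decade

# slope, half_width = trend_with_ci(monthly_anomalies)   # monthly_anomalies is a placeholder
# The trend is indistinguishable from zero when abs(slope) < half_width, and
# indistinguishable from, say, 0.16 C/decade when abs(slope - 0.16) < half_width.
```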
Another important fact he seems to have forgotten to mention is that back in 1990, the models the IPCC used didn’t include such things as ocean circulation. And yet, they still got the direction right, and the magnitude of the trend so far within a factor of 2. For Monckton, this means the model is falsified, but anyone who actually does modeling of natural systems knows that all models are wrong by definition, but some are useful. Getting such a simple model of the global climate system to work so well is a rather stunning achievement, and quite useful.
He also fails to mention that nobody ever claimed that current AOGCMs should be good at getting the timing of quasi-periodic oscillations like ENSO right, although some of them are reasonably good at mimicking the periodicity (not the exact timing). So the idea that the models should have predicted the La Niña-dominant conditions of the last decade or more is sheer nonsense. It’s a nice debate tactic to raise the bar so high that no real model could possibly be expected to pass the test, and then dismiss them all on that basis, but that’s not going to impress any scientists who have to model messy natural systems for a living.
There are quite a lot of uncertainties involved in climate modeling. But given that paleoclimate data (which has its own problems, but different ones) gives a central estimate of around 3 °C/2xCO2 for climate sensitivity, we have reason to believe that the standard AOGCMs are probably in the ballpark. It certainly isn’t impossible that the sensitivity is as low as His Lordship seems to want it to be – it’s just not very probable, given the present state of the data.
And so the actual climate scientists keep trying to refine their models, because they know there are always bound to be things that could be improved. Are the estimates of aerosol forcing right? Does the spatial distribution of aerosols matter? (Yes.) Can we get better measurements of deep ocean temperatures to test the ocean circulation in the models? They’re looking at all that, just like they should be.

richardscourtney
July 30, 2014 3:25 pm

Barry Bickmore:
Your post at July 30, 2014 at 2:42 pm makes two points of significance, and each is very misleading.
You say

One thing His Lordship fails to mention is that if you calculate 95% confidence intervals on the slopes he discusses, it will turn out that the slope for the shorter period (13 yrs, 4 mo, or whatever) is not only statistically indistinguishable from zero, but it’s also statistically indistinguishable from the longer-term slope of around 0.16 °C per decade.

True, but if you consider the previous 13 years the lower bound of the linear trend IS positive at 95% confidence: (see this). In other words, discernible global warming stopped at least 13 years ago according to the data sets.
And you say

And so the actual climate scientists keep trying to refine their models, because they know there are always bound to be things that could be improved. Are the estimates of aerosol forcing right? Does the spatial distribution of aerosols matter (yes). Can we get better measurements of deep ocean temperatures to test the ocean circulation in the models? They’re looking at all that, just like they should be.

Refining worthless clunkers is pointless. The existing climate models should be scrapped and the “estimates of aerosol forcing” are a good illustration of why.
None of the models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al.
http://www.pnas.org/content/early/2011/07/25/1018526108.full.pdf?with-ds=yes )
would make every climate model provide a mismatch of the global warming it hindcasts and the observed global warming for the twentieth century.
This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on
1. the assumed degree of forcings resulting from human activity that produce warming, and
2. the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.
More than a decade ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.
The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which was greater than was observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.
And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
(ref. Courtney RS An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).
More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
(ref. Kiehl JT, Twentieth century climate model response and climate sensitivity. GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.
He says in his paper:

One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 deg C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.
The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy.
Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available at http://www.nature.com/reports/climatechange, 2007) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.

And, importantly, Kiehl’s paper says:

These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.

And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.
Kiehl’s Figure 2 can be seen at
http://img36.imageshack.us/img36/8167/kiehl2007figure2.png
Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:

Figure 2. Total anthropogenic forcing (Wm2) versus aerosol forcing (Wm2) from nine fully coupled climate models and two energy balance models used to simulate the 20th century.

It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m^2 to 2.02 W/m^2
but
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^2 to -0.60 W/m^2.
In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.
So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.
Richard
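To make Kiehl’s compensation point concrete, here is a minimal sketch using the crude equilibrium relation ΔT ≈ (S / F2x) × Ftotal; every number below is an illustrative placeholder, not a value taken from any particular model.

```python
F_2X = 3.7           # W/m^2 per CO2 doubling (conventional round figure)
OBSERVED_DT = 0.8    # C of 20th-century warming to be matched (illustrative)
F_GHG = 2.6          # assumed greenhouse-gas forcing, W/m^2 (illustrative)

def aerosol_forcing_needed(sensitivity):
    """Aerosol forcing (W/m^2) a model with the given equilibrium sensitivity
    (C per doubling) must assume in order to reproduce OBSERVED_DT."""
    total_forcing_needed = OBSERVED_DT * F_2X / sensitivity
    return total_forcing_needed - F_GHG

for s in (1.5, 3.0, 4.5):
    print(f"sensitivity {s} C/doubling -> aerosol forcing {aerosol_forcing_needed(s):+.2f} W/m^2")
# A high-sensitivity model needs a strongly negative aerosol term and a
# low-sensitivity model a weak one, yet both reproduce the same past warming.
```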

July 30, 2014 3:37 pm

In answer to apologists for the models, the IPCC in 1990 expressed “substantial confidence” that by 2025 there would be 1.0 [0.7, 1.5] K global warming – i.e., around 0.68 K by now. Instead, the rate of warming has been half the IPCC’s then central estimate and considerably below even its then least estimate. The long-term warming trend of less than 0.12 K/decade since 1950 is indeed statistically indistinguishable (over a sufficiently short period) from a zero trend: but that fact, of course, reinforces the absurdity of any pretense that we are – thus far, at any rate – facing any kind of “climate crisis” driven by global warming.
Nor is it credible to praise the models for “having gotten the direction right”. One could have gotten the direction right with the toss of a coin. The plain fact is that the models in 1990 predicted double the global warming that has occurred, and that the IPCC, far from seeking to pretend any longer that its original near-term predictions were appropriate, has all but halved them – and they are still too high.
Paleoclimate data, often prayed in aid by apologists for high climate sensitivity, can be tortured to give any desired sensitivity. But the least unreliable of the relevant paleoclimate records – the ice-core temperature reconstructions from Vostok station, Antarctica (Jouzel et al., 2007) – show (after due allowance for polar amplification) that global temperatures have probably fluctuated by little more than 1%, or 3 K, either side of the 810,000-year mean. That powerfully indicates a thermostatic rather than a feedback-dominated climate object, and indicates that the models – unduly obsessed with the radiative transports – are undervaluing the non-radiative transports. To take just one example, evaporation has been measured to increase as a result of warmer weather at a rate thrice the models’ central estimate.
Besides, the climate object is demonstrably chaotic in the variables that govern climate sensitivity. For this reason, models are unable to represent key features of the climate correctly – notably the ocean oscillations and the Nino/Nina cycles, whose causes are still poorly understood (there is a growing body of evidence in the literature that el Ninos are triggered by quasi-periodic subsea volcanism).
The bottom line is that the rate of warming since 1950 – at less than 0.12 K/decade – is far too little to be definitively distinguishable from natural internal variability in the climate. It is also far too little to justify the crippling taxes and charges that have flung more than a quarter of Scotland’s population, and almost a tenth of the UK population, unnecessarily into fuel poverty. While the profiteers of doom get rich at the expense of the poor, the poor are dying in large numbers because they cannot afford to heat their homes (7000 excess deaths, over and above the usual 24,000 excess winter deaths, in the UK alone in the cold 2012/13 winter). No small fraction of the doubling of fuel and power prices in recent years is attributable to cross-subsidies to useless wind farms and solar panels that make no measurable difference to (non-existent) planetary warming. Time to accept that the models have failed, desubsidize “green” energy, cut fuel and power bills in half, and spend the worldwide savings of $1 billion a day on something less murderous, less destructive, and more likely to do some good in the world.

July 30, 2014 4:00 pm

richardscourtney (July 30, 2014 at 3:25 pm) says:
“True, but if you consider the previous 13 years the lower bound of the linear trend IS positive at 95% confidence: (see this). In other words, discernible global warming stopped at least 13 years ago according to the data sets.”
This is nonsense. The systematic rise in temperature is slow enough, compared to the magnitude of interannual noise, that there will ALWAYS be some time period over which the slope is not statistically distinct from zero. Sometimes it will be a shorter time period, and sometimes longer, but there will always be one. But it is a mistake to assume that the null hypothesis always has to be that the slope is zero. Why not have the null hypothesis be that the slope is the same as it has been for last several decades? There are reasons for doing either one, but if you can’t rule out either null hypothesis, then you can’t rule out either null hypothesis. It’s as simple as that.

July 30, 2014 4:45 pm

Monckton of Brenchley (July 30, 2014 at 3:37 pm) says:
“In answer to apologists for the models, the IPCC in 1990 expressed “substantial confidence” that by 2025 there would be 1.0 [0.7, 1.5] K global warming – i.e., around 0.68 K by now. Instead, the rate of warming has been half the IPCC’s then central estimate and considerably below even its then least estimate.”
And like I said, the major reasons for this are obvious. Ocean circulation is important, for instance. Is this surprising to anyone?
“The long-term warming trend of less than 0.12 K/decade since 1950 is indeed statistically indistinguishable (over a sufficiently short period) from a zero trend….”
No it isn’t. You are confused, here. The long-term trend since 1950 would have very small error bars, so it would be quite readily distinguishable from zero. Saying that it is indistinguishable “over a sufficiently short period” is the same as saying that you aren’t really talking about the trend since 1950. This longer-term trend is NOT statistically distinguishable from the short-term trends you calculate, however, because the error bars on THOSE are quite large.
Do you know what I’m talking about? That is, do you know how to calculate a confidence interval (error bar) for a slope? (They generally don’t teach this in introductory statistics courses.)
“Nor is it credible to praise the models for ‘having gotten the direction right’. One could have gotten the direction right with the toss of a coin.”
Yes, but when the known natural forcings (insolation, volcanoes) have been pushing toward slight cooling, it’s considerably more impressive that they got the direction right. Like I said before, getting the slope correct within a factor of 2 with such obviously oversimplified models isn’t bad at all.
“Paleoclimate data, often prayed in aid by apologists for high climate sensitivity, can be tortured to give any desired sensitivity. But the least unreliable of the relevant paleoclimate records – the ice-core temperature reconstructions from Vostok station, Antarctica (Jouzel et al., 2007) – show (after due allowance for polar amplification) that global temperatures have probably fluctuated by little more than 1%, or 3 K, either side of the 810,000-year mean. That powerfully indicates a thermostatic rather than a feedback-dominated climate object, and indicates that the models – unduly obsessed with the radiative transports – are undervaluing the non-radiative transports.”
I fail to see your point. Supposing you are correct, that means the difference between a glacial period and an interglacial is up to 6 °C. That actually seems roughly reasonable to me, because the best estimates for the difference between now and the last glacial maximum are in the range of 4-5 °C. Using the change in temperature, glacial extent (to estimate albedo change), and GHG concentrations since the last glacial max is one way paleoclimatologists estimate climate sensitivity. It’s about the same as the models.
Your faith in the Earth’s “thermostat” fails to be comforting, given that a 4-5 °C difference in global mean temperature gives you the difference between now, and a time when ice sheets thousands of feet thick covered much of the Northern Hemisphere. And there’s the little fact that the temperature oscillations correlate well with Milankovitch forcing, and that isn’t large enough to explain the changes without some hefty positive feedback.
“Besides, the climate object is demonstrably chaotic in the variables that govern climate sensitivity. For this reason, models are unable to represent key features of the climate correctly – notably the ocean oscillations and the Nino/Nina cycles, whose causes are still poorly understood (there is a growing body of evidence in the literature that el Ninos are triggered by quasi-periodic subsea volcanism).”
A system that is “chaotic” exhibits unpredictable behavior in the short term, but long-term averages can still be quite predictable. (Look up “strange attractors”, for instance, and determine why they are called “attractors”.)
In other words, you have produced quite a bit of hand-waving and bluster, but not any cogent arguments.

JFD
July 30, 2014 6:28 pm

Your Lordship, my understanding is that it is not correct to draw a linear trend line through clear break points as was done in your graphs. The proper way is to do a linear regression through data points without a break point, then add the trends together to get the overall trend.
For example, in the first GISS chart the period 1950 to 1979 would be one regression, 1980 to 1996 would be a second regression, and 1997 to 2014 would be a third regression. The three trends would then be added together to get the overall trend. This gives a more correct trend which is considerably lower. You can easily see that the trend line drawn through 1950 to 1979 is totally incorrect. The actual trend is essentially zero, yet doing it your way it is shown to be 0.34 C per 20 years.
Physically, the break points are climate shifts caused by known natural ocean cycles. It is much easier to see break points using actual temperatures instead of anomalies. I suspect that you (and others) have fallen into the Believers spin trap. Anomalies hide a multitude of spins (pun intended).
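A minimal sketch of the segmented approach described above, assuming `years` and `temps` are arrays covering 1950 to 2014 and using the break years the commenter proposes; how the segment trends should then be combined into one figure is the commenter's suggestion rather than standard practice.

```python
import numpy as np

def segment_slopes(years, temps, breaks=(1950, 1980, 1997, 2015)):
    """Fit an ordinary least-squares line to each segment between break years
    and return each segment's slope in C per decade."""
    slopes = []
    for lo, hi in zip(breaks[:-1], breaks[1:]):
        mask = (years >= lo) & (years < hi)
        slope_per_year = np.polyfit(years[mask], temps[mask], 1)[0]
        slopes.append(slope_per_year * 10.0)
    return slopes

# slopes = segment_slopes(years, temps)   # years and temps are placeholders
# e.g. [trend 1950-1979, trend 1980-1996, trend 1997-2014] in C/decade
```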

Curious George
July 30, 2014 7:46 pm

Barry: “The long-term trend since 1950 would have very small error bars.” Would? Did you actually do it? Did you use raw data or adjusted data?

Werner Brozek
July 30, 2014 9:30 pm

Barry Bickmore says:
July 30, 2014 at 2:42 pm
One thing His Lordship fails to mention is that if you calculate 95% confidence intervals on the slopes he discusses, it will turn out that the slope for the shorter period (13 yrs, 4 mo, or whatever) is not only statistically indistinguishable from zero, but it’s also statistically indistinguishable from the longer-term slope of around 0.16 °C per decade. So if he wants to be really up-front and honest about all this, he should mention that the short-term slope is too uncertain (due to the strength of the interannual noise) to say whether it is essentially flat, or whether it is essentially the same as the long-term slope over the last several decades.
I will apologize in advance if I am misinterpreting your statement above. But by only mentioning the presumably positive “0.16 °C per decade” without mentioning the negative “0.16 °C per decade”, it seems to me that you are confusing two different issues.
One issue is for how long the slope is actually zero with error bars of equal size above and below the zero.
The other is the length of the pause that could include zero. I do not know the numbers for the combination of the five data sets; however, HadCRUT4 alone is pretty close. It is zero for 13 years and 5 months going to the latest update of May 2014 at Nick Stokes’ site.
However it is 17 years and 7 months for a time that could include zero at the 95% level.
The information below is taken from here:
http://moyhu.blogspot.com.au/p/temperature-trend-viewer.html
Temperature Anomaly trend
Jan 2001 to May 2014 
Rate: -0.037°C/Century;
CI from -0.696 to 0.622;
Temperature Anomaly trend
Nov 1996 to May 2014 
Rate: 0.572°C/Century;
CI from -0.023 to 1.167;
Lord Monckton’s title was:
“Temperature analysis of 5 datasets shows the ‘Great Pause’ has endured for 13 years, 4 months”
It could just as easily have been something like:
“Temperature analysis of 5 datasets shows the ‘Great Pause’ has endured for 17 years and 7 months at a rate that is not distinguishable from zero at the 95% level”.

July 30, 2014 10:09 pm

Apologists for the computer models are perhaps too indulgent of their manifest failures. Recall that the IPCC in 1990 expressed “substantial confidence” that by 2025 there would be 1.0 [0.7, 1.5] K global warming – i.e., around 0.68 K by now. Instead, the rate of warming since 1990 has been half the IPCC’s then central estimate and considerably below even its then least estimate. If, as one apologist here suggests, ocean circulation is so important that it was obvious that the models in use in 1990 would fail, the modelers, even if they had not yet incorporated ocean circulation into the models, would surely have been aware of this obvious point. In that event it must be inferred that the IPCC’s expression of “substantial confidence” in its interval of projections was intended to mislead. It would have been less dishonest if, right from the start, the IPCC had said that the models in their then state were inadequate and that, therefore, it was not possible to express “substantial confidence” in their output. But any such honesty might have been fatal to the IPCC’s own profitable continuance.
As to the significance of various trends, I trust that it is now agreed that the trend since 1950 is not +0.16 K/decade but less than +0.12 K/decade. The trend on the Central England Temperature Record (the world’s oldest regional record) from 1694-1733 was +0.39 K/decade, and that was before the industrial revolution began. Since the Central England record is a not unreasonable proxy for global temperature change, one may reasonably infer that warming of 0.12 K/decade is comfortably within the interval of natural internal variability.
Over any period long enough to show a trend in excess of +/- 0.15 K, the trend at least becomes sufficient to overcome the combined measurement, coverage, and bias uncertainties in the data, as published alongside the data themselves in the HadCRUT4 series. We may, therefore, infer that there has been some warming of the climate since 1950. However, it is possible to go back 18 years 6 months, to the beginning of 1996, before one finds a trend on the mean of the five principal global-temperature datasets that is in excess of +0.15 K and hence distinguishable from the published uncertainties.
It is trivially true that in any sufficiently stochastic time-series that exhibits an overall trend (though not, of course, in any time series) there will be periods during which the trend will be zero. However, the global temperature trend has been indistinguishable from zero for well over 18 years during which record CO2 emissions have been recorded. That that circumstance is startling may be deduced from the fact that not one of the CMIP3 or CMIP5 models predicted that outcome as its central estimate, and – as best I can determine – very few predicted that outcome even within the 95%-confidence interval. No surprise then, that the IPCC – under the advice of expert reviewers such as me – has accepted that it can no longer get away with its absurdly overblown medium-term predictions. Its current interval of predictions is so much below its 1990 interval that the two barely overlap at any point. If the apologists for the models think the IPCC ought not to have taken account of the discrepancy between models’ past predictions and the far less exciting observed trend in global temperatures over the past quarter of a century, they should address their concerns not to me but to the IPCC.
It is interesting to note that the apologists for the models are reduced to asserting that in 1990 the models did well to make the coin-toss prediction whether temperatures would rise or fall. However, by 1990 temperatures had been rising appreciably for a decade and a half since the sudden climate shift of 1976, so – particularly since theory would lead us to expect that adding greenhouse gases to the atmosphere would be likely, all other things being equal, to cause some warming – one did not really need models at all to predict that some warming would continue. Indeed, though the solar physicists are telling us that they expect global temperature to begin falling in the next few years, I suspect that over a sufficiently long period – i.e. a full 60-year cycle of the ocean oscillations – temperature may well continue to rise, though probably not by much.
Apologists for the models should not underestimate the remarkable circumstance that absolute temperature has varied by little more than 1%, or 3K, either side of the 810,000-year mean, notwithstanding the substantial forcings to which the climate has been subjected in that time. Since we are already close to the upper bound on the interval of inferred temperature change over recent millennia, another degree or two of warming is the most that might be expected to occur before all recoverable fossil fuels were exhausted: for the climate object is manifestly better characterized as near-perfectly thermostatic than as driven by large positive feedbacks. And a degree or two of warming, given that nearly all of the ice on Earth has melted over the past 11,400 years, would be likely to do more good than harm.
Finally, the apologists for the models have long exhibited very little understanding of the relevant characteristics of an object – such as the climate – that behaves as a chaotic object. It is often falsely assumed by those inexperienced in the modeling of chaotic objects that across a sufficient interval the behavior of the object becomes respectably predictable. It is necessary only to read the paper that founded chaos theory (albeit without mentioning the word “chaos”), Lorenz (1963), to understand why any such notion is unscientific. For it is a key property of chaotic objects that a small perturbation in one of the initial conditions may cause a bifurcation which, while deterministic, is not determinable unless the initial conditions are known to a precision that is and will aye be unavailable in the climate. The IPCC itself understands this, and said so in paragraph 14.2.2.2 of its 2001 Third Assessment Report. So, if the apologists for the models consider that the IPCC has insufficient appreciation of the significance of Lorenz attractors, they should address their concerns not to me but to the Secretariat.
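The sensitivity to initial conditions described by Lorenz (1963) is easy to demonstrate numerically; here is a minimal sketch that integrates the Lorenz system twice from starting points differing by one part in a million (a crude forward-Euler integration, for illustration only).

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) system."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])      # tiny perturbation of one initial condition
for _ in range(5000):                    # integrate for 50 model time units
    a, b = lorenz_step(a), lorenz_step(b)
print(np.abs(a - b))                     # the two trajectories have diverged completely
```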
The IPCC attempts to overcome the inherent unpredictability of the climate over the very long term (i.e., over more than a few weeks) by the use of probability-density functions. However, it is a property of such functions that they require more understanding of the initial conditions and evolutionary processes of the object under examination than would be necessary to attempt a simple central estimate flanked by error-bars. Probability-density functions, therefore, are peculiarly unsuitable as a device to attempt the reliable, long-term prediction of the future states of chaotic objects such as the climate. It is time for those who prefer models to mere reality to accept that every model is an analogy, that by definition every analogy breaks down at some point, and that for well-understood mathematical reasons the climate models were broken from the outset. Models were not, are not and will never be capable of determining climate sensitivity, especially while it remains profitable to their operators to arrange for them to predict implausibly high rates of global warming.

richardscourtney
July 31, 2014 1:32 am

Barry Bickmore:
Thankyou for your post addressed to me at July 30, 2014 at 4:00 pm in reply to my post at July 30, 2014 at 3:25 pm.
Werner Brozek gave an excellent reply to your assertions about temperature trends in his post at July 30, 2014 at 9:30 pm. I see no reason for me to attempt to compete with that so I refer you to it.
I write to correct your misunderstanding of the Null Hypothesis, which is apparent when you write:

“True, but if you consider the previous 13 years the lower bound of the linear trend IS positive at 95% confidence: (see this). In other words, discernible global warming stopped at least 13 years ago according to the data sets.”

This is nonsense. The systematic rise in temperature is slow enough, compared to the magnitude of interannual noise, that there will ALWAYS be some time period over which the slope is not statistically distinct from zero. Sometimes it will be a shorter time period, and sometimes longer, but there will always be one. But it is a mistake to assume that the null hypothesis always has to be that the slope is zero. Why not have the null hypothesis be that the slope is the same as it has been for last several decades?

The “nonsense” is your attempts
(a) to assert I claimed a Null hypothesis which I did not
and
(b) to replace the scientific method with a Null Hypothesis of your choosing.
I never cease to be amazed at how often I am called upon to explain the Null Hypothesis as it applies to anthropogenic (i.e. man-made) global warming (AGW). This is the second time this morning.
The Null Hypothesis says it must be assumed a system has not experienced a change unless there is evidence of a change.
The Null Hypothesis is a fundamental scientific principle and forms the basis of all scientific understanding, investigation and interpretation. Indeed, it is the basic principle of experimental procedure where an input to a system is altered to discern a change: if the system is not observed to respond to the alteration then it has to be assumed the system did not respond to the alteration.
In the case of climate science there is a hypothesis that increased greenhouse gases (GHGs, notably CO2) in the air will increase global temperature. There are good reasons to suppose this hypothesis may be true, but the Null Hypothesis says it must be assumed the GHG changes have no effect unless and until increased GHGs are observed to increase global temperature. That is what the scientific method decrees. It does not matter how certain some people may be that the hypothesis is right because observation of reality (i.e. empiricism) trumps all opinions.
Please note that the Null Hypothesis is a hypothesis which exists to be refuted by empirical observation. It is a rejection of the scientific method to assert that one can “choose” any subjective Null Hypothesis one likes. There is only one Null Hypothesis: i.e. it has to be assumed a system has not changed unless it is observed that the system has changed.
However, deciding a method which would discern a change may require a detailed statistical specification.
In the case of global climate no unprecedented climate behaviours are observed so the Null Hypothesis decrees that the climate system has not changed.
Importantly, an effect may be real but not overcome the Null Hypothesis because it is too trivial for the effect to be observable. Human activities have some effect on global temperature for several reasons. An example of an anthropogenic effect on global temperature is the urban heat island (UHI). Cities are warmer than the land around them, so cities cause some warming. But the temperature rise from cities is too small to be detected when averaged over the entire surface of the planet, although this global warming from cities can be estimated by measuring the warming of all cities and their areas.
Clearly, the Null Hypothesis decrees that UHI is not affecting global temperature although there are good reasons to think UHI has some effect. Similarly, it is very probable that AGW from GHG emissions are too trivial to have observable effects.
The feedbacks in the climate system are negative and, therefore, any effect of increased CO2 will probably be too small to discern because natural climate variability is much, much larger. This concurs with the empirically determined values of low climate sensitivity.
Empirical – n.b. not model-derived – determinations indicate climate sensitivity is less than 1.0°C for a doubling of atmospheric CO2 equivalent. This is indicated by the studies of
Idso from surface measurements
http://www.warwickhughes.com/papers/Idso_CR_1998.pdf
and Lindzen & Choi from ERBE satellite data
http://www.drroyspencer.com/Lindzen-and-Choi-GRL-2009.pdf
and Gregory from balloon radiosonde data
http://www.friendsofscience.org/assets/documents/OLR&NGF_June2011.pdf
Indeed, because climate sensitivity is less than 1.0°C for a doubling of CO2 equivalent, it is physically impossible for the man-made global warming to be large enough to be detected (just as the global warming from UHI is too small to be detected). If something exists but is too small to be detected then it only has an abstract existence; it does not have a discernible existence that has effects (observation of the effects would be its detection).
To date there are no discernible effects of AGW. Hence, the Null Hypothesis decrees that AGW does not affect global climate to a discernible degree. That is the ONLY scientific conclusion possible at present.
Richard

July 31, 2014 1:39 am

Monckton: Provide specific references to your sources if you want anyone but sheep to believe you. The IPCC did not state with ‘substantial confidence’ that there would be 0.68 K of global surface warming by now. They might have said, with ‘substantial confidence’ that given CO2 emissions in a small projected range that we would have 0.68K of global surface warming by now. Just because you and your friends have reading comprehension problems doesn’t mean the rest of us do.

richardscourtney
July 31, 2014 2:08 am

cesium62:
Your arrogant, ignorant and stupid post at July 31, 2014 at 1:39 am rudely says in total

Monckton: Provide specific references to your sources if you want anyone but sheep to believe you. The IPCC did not state with ‘substantial confidence’ that there would be 0.68 K of global surface warming by now. They might have said, with ‘substantial confidence’ that given CO2 emissions in a small projected range that we would have 0.68K of global surface warming by now. Just because you and your friends have reading comprehension problems doesn’t mean the rest of us do.

I can do better than that. I cite the IPCC prediction (n.b. PREDICTION not projection) of “committed warming” from CO2 emissions made in the past.
The explanation for this is in IPCC AR4 (2007) Chapter 10.7 which can be read at
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch10s10-7.html
It says there

The multi-model average warming for all radiative forcing agents held constant at year 2000 (reported earlier for several of the models by Meehl et al., 2005c), is about 0.6°C for the period 2090 to 2099 relative to the 1980 to 1999 reference period. This is roughly the magnitude of warming simulated in the 20th century. Applying the same uncertainty assessment as for the SRES scenarios in Fig. 10.29 (–40 to +60%), the likely uncertainty range is 0.3°C to 0.9°C. Hansen et al. (2005a) calculate the current energy imbalance of the Earth to be 0.85 W m–2, implying that the unrealised global warming is about 0.6°C without any further increase in radiative forcing. The committed warming trend values show a rate of warming averaged over the first two decades of the 21st century of about 0.1°C per decade, due mainly to the slow response of the oceans. About twice as much warming (0.2°C per decade) would be expected if emissions are within the range of the SRES scenarios.

In other words, it was expected that global temperature would rise at an average rate of “0.2°C per decade” over the first two decades of this century with half of this rise being due to atmospheric GHG emissions which were already in the system.
This assertion of “committed warming” should have had large uncertainty because the Report was published in 2007 and there was then no indication of any global temperature rise over the previous 7 years. There has still not been any rise and we are now way past the half-way mark of the “first two decades of the 21st century”.
So, if this “committed warming” is to occur such as to provide a rise of 0.2°C per decade by 2020 then global temperature would need to rise over the next 7 years by about 0.4°C. And this assumes the “average” rise over the two decades is the difference between the temperatures at 2000 and 2020. If the average rise of each of the two decades is assumed to be the “average” (i.e. linear trend) over those two decades then global temperature now needs to rise before 2020 by more than it rose over the entire twentieth century. It only rose ~0.8°C over the entire twentieth century.
The linear global temperature rise prior to now from year 2000 from “committed warming” should have been more than the 0.68K reported by the Third Viscount Monckton of Brenchley.
Simply, the “committed warming” has disappeared (perhaps it has eloped with Trenberth’s ‘missing heat’?).
This disappearance of the “committed warming” is – of itself – sufficient to falsify the AGW hypothesis as emulated by climate models. If we reach 2020 without any detection of the “committed warming” then it will be 100% certain that all projections of global warming are complete bunkum.
Richard
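A minimal sketch of the arithmetic in the comment above, assuming (as the comment does) no net warming between 2000 and mid-2014; the figures are the round numbers quoted from AR4, and the endpoint interpretation of the "average" rate is used.

```python
rate_per_decade = 0.2                             # C/decade expected under the SRES scenarios (AR4)
rise_needed_by_2020 = rate_per_decade * 2.0       # about 0.4 C over 2000-2020
rise_observed_so_far = 0.0                        # assumption made in the comment
years_remaining = 7                               # roughly mid-2014 to 2020, per the comment
required_rate = (rise_needed_by_2020 - rise_observed_so_far) / years_remaining
print(f"about {required_rate:.2f} C/yr, i.e. {rise_needed_by_2020:.1f} C in {years_remaining} years")
```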