No significant warming for 17 years 4 months

By Christopher Monckton of Brenchley

As Anthony and others have pointed out, even the New York Times has at last been constrained to admit what Dr. Pachauri of the IPCC was constrained to admit some months ago. There has been no global warming statistically distinguishable from zero for getting on for two decades.

The NYT says the absence of warming arises because skeptics cherry-pick 1998, the year of the Great el Niño, as their starting point. However, as Anthony explained yesterday, the stasis goes back farther than that. He says we shall soon be approaching Dr. Ben Santer’s 17-year test: if there is no warming for 17 years, the models are wrong.

Usefully, the latest version of the Hadley Centre/Climatic Research Unit monthly global mean surface temperature anomaly series provides not only the anomalies themselves but also the 2 σ uncertainties.

Superimposing the temperature curve and its least-squares linear-regression trend on the statistical insignificance region bounded by the means of the trends on these published uncertainties since January 1996 demonstrates that there has been no statistically-significant warming in 17 years 4 months:

[Figure: HadCRUT4 monthly global mean surface temperature anomalies, January 1996 to April 2013, with the least-squares linear trend superimposed on the 2 σ statistical-insignificance region.]
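For readers who want to see roughly how such a test works in practice, here is a minimal Python sketch. It is not the code behind the graph: the data file name and layout, and the use of a flat ±0.15 Cº uncertainty, are simplifying assumptions for illustration only.

```python
import numpy as np

# Monthly HadCRUT4-style anomalies, January 1996 .. April 2013 (208 values).
# File name and layout are assumptions, not the Met Office's actual format.
anoms = np.loadtxt("hadcrut4_monthly_anomalies.txt")
months = np.arange(anoms.size)

# Ordinary least-squares trend (degrees C per month).
slope, intercept = np.polyfit(months, anoms, 1)

total_change = slope * (anoms.size - 1)   # C over the 17 yr 4 mo period
rate_per_century = slope * 12 * 100       # C/century equivalent

two_sigma = 0.15  # approximate published uncertainty either side of the estimate (C)

print(f"trend: {rate_per_century:+.2f} C/century; total change: {total_change:+.2f} C")
print("distinguishable from zero:", abs(total_change) > two_sigma)
```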

On Dr. Santer’s 17-year test, then, the models may have failed. A rethink is needed.

The fact that an apparent warming rate equivalent to almost 0.9 Cº/century is statistically insignificant may seem surprising at first sight, but there are two reasons for it. First, the published uncertainties are substantial: approximately 0.15 Cº either side of the central estimate.

Secondly, one weakness of linear regression is that it is unduly influenced by outliers. Visibly, the Great el Niño of 1998 is one such outlier.

If 1998 were the only outlier, and particularly if it were the largest, going back to 1996 would be much the same as cherry-picking 1998 itself as the start date.

However, the magnitude of the 1998 positive outlier is countervailed by that of the 1996/7 la Niña. Also, there is a still more substantial positive outlier in the shape of the 2007 el Niño, against which the la Niña of 2008 countervails.
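As an illustration of that sensitivity to outliers, a short sketch along the following lines compares the ordinary least-squares slope with and without the twelve months of 1998. The data file is again an assumption, and the Theil–Sen estimator is included only as a robust point of comparison, not as anything used in the graph above.

```python
import numpy as np
from scipy.stats import theilslopes  # robust median-of-pairwise-slopes estimator

anoms = np.loadtxt("hadcrut4_monthly_anomalies.txt")  # hypothetical input, as above
months = np.arange(anoms.size)

# OLS slope using every month.
slope_all = np.polyfit(months, anoms, 1)[0]

# OLS slope with the twelve months of 1998 (months 24..35 from January 1996) removed.
keep = (months < 24) | (months > 35)
slope_no98 = np.polyfit(months[keep], anoms[keep], 1)[0]

# Theil-Sen slope, far less sensitive to isolated spikes.
slope_ts = theilslopes(anoms, months)[0]

for label, s in [("OLS, all months", slope_all),
                 ("OLS, 1998 removed", slope_no98),
                 ("Theil-Sen", slope_ts)]:
    print(f"{label}: {s * 12 * 100:+.2f} C/century")
```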

In passing, note that the cooling from January 2007 to January 2008 is the fastest January-to-January cooling in the HadCRUT4 record going back to 1850.

Bearing these considerations in mind, going back to January 1996 is a fair test for statistical significance. And, as the graph shows, there has been no warming that we can statistically distinguish from zero throughout that period, for even the rightmost endpoint of the regression trend-line falls (albeit barely) within the region of statistical insignificance.

Be that as it may, one should beware of focusing the debate solely on how many years and months have passed without significant global warming. Another strong el Niño could – at least temporarily – bring the long period without warming to an end. If so, the cry-babies will screech that catastrophic global warming has resumed, the models were right all along, etc., etc.

It is better to focus on the ever-widening discrepancy between predicted and observed warming rates. The IPCC’s forthcoming Fifth Assessment Report backcasts the interval of 34 models’ global warming projections to 2005, since when the world should have been warming at a rate equivalent to 2.33 Cº/century. Instead, it has been cooling at a rate equivalent to a statistically-insignificant 0.87 Cº/century:

[Figure: the 34 CMIP5 models' projected warming, backcast to January 2005 (equivalent to 2.33 Cº/century), compared with the observed HadCRUT4 trend to April 2013 (equivalent to –0.87 Cº/century).]

The variance between prediction and observation over the 100 months from January 2005 to April 2013 is thus equivalent to 3.2 Cº/century.
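The arithmetic behind that figure is simple enough to check in a few lines; the two rates are taken straight from the text and nothing else is assumed.

```python
projected = 2.33   # C/century: models' post-2005 warming rate cited above
observed = -0.87   # C/century: observed (statistically insignificant) trend cited above

gap = projected - observed
print(f"discrepancy: {gap:.2f} C/century")   # ~3.20 C/century

# Over the 100 months from January 2005 to April 2013 that corresponds to roughly
print(f"{gap * 100 / 1200:.2f} C of accumulated divergence")
```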

The correlation coefficient is low, the period of record is short, and I have not yet obtained the monthly projected-anomaly data from the modelers to allow a proper p-value comparison.

Yet it is becoming difficult to suggest with a straight face that the models’ projections are healthily on track.

From now on, I propose to publish a monthly index of the variance between the IPCC’s predicted global warming and the thermometers’ measurements. That variance may well inexorably widen over time.

In any event, the index will limit the scope for false claims that the world continues to warm at an unprecedented and dangerous rate.

UPDATE: Lucia’s Blackboard has a detailed essay analyzing the recent trend, written by SteveF, using an improved index accounting for ENSO, volcanic aerosols, and solar cycles. He concludes the best estimate of the rate of warming from 1997 to 2012 is less than 1/3 the rate of warming from 1979 to 1996. Also, the original version of this story incorrectly referred to the Washington Post, when it was actually the New York Times article by Justin Gillis. That reference has been corrected. – Anthony

John Tillman
June 13, 2013 8:18 pm
barry
June 13, 2013 8:22 pm

He says we shall soon be approaching Dr. Ben Santer’s 17-year test: if there is no warming for 17 years, the models are wrong.

There is a propensity to quote one sentence from the Santer paper (in the abstract) as if it is the defining point therein, and wield it as a benchmark for statistical significance in the surface data, or for model verification, or to claim that the anthropogenic signal is lost. This is a profound misunderstanding of the paper, which concludes:

In summary, because of the effects of natural internal climate variability, we do not expect each year to be inexorably warmer than the preceding year, or each decade to be warmer than the last decade, even in the presence of strong anthropogenic forcing of the climate system. The clear message from our signal-to-noise analysis is that multi-decadal records are required for identifying human effects on tropospheric temperature.

This is not at odds with the abstract, which maintains that you need *at least* 17 years of data from the MSU records, and that even that may not always be sufficient.

When trends are computed over 20-year periods, there is a reduction in the amplitude of both the control run noise and the noise superimposed on the externally forced TLT signal in the 20CEN/A1B runs. Because of this noise reduction, the signal component of TLT trends becomes clearer, and the distributions of unforced and forced trends begin to separate (Figure 4B). Separation is virtually complete for 30-year trends
…On timescales longer than 17 years, the average trends in RSS and UAH near-global TLT data consistently exceed 95% of the unforced trends in the CMIP-3 control runs (Figure 6D), clearly indicating that the observed multi-decadal warming of the lower troposphere is too large to be explained by model estimates of natural internal variability….
For timescales ranging from roughly 19 to 30 years, the LAD estimator yields systematically higher values of pf – i.e., model forced trends are in closer agreement with observations….

The 17-year quote is a minimum under one of their testing scenarios. They do not recommend a ‘benchmark’ at all, but point out that the noise declines the more data you have.
It is not enough to cite a quote out of context. Data, too, must be analysed carefully, and not simply stamped with pass/fail based on a quote. Other attempts at finding a benchmark (a sound principle) reach conclusions similar to Santer’s: that you need multi-decadal records (20, 30, 40 years) to get a good grasp of the signal.

ferdberple
June 13, 2013 8:26 pm

ditto on the comments of praise for @rgbatduke postings above. home run after home run. I felt what I was reading was truly inspired. I would like to echo the other comments that these postings be elevated to a blog article. Perhaps just collected “as is” into a posting.
The logic to me is inescapable. Ask 10 people the answer to a question. If you get 10 different answers then one can be pretty sure that at least 9 of them are wrong, and 1 of them might be right. You cannot improve the (at most) 1 possibly right answer by averaging it with the other (at least) 9 wrong answers.
So why, when we have 30 models that all give different answers, do we average them together? Doesn’t this mean that the climate scientists themselves don’t know which one is right? So how can they be so sure that any of them are right?
If you asked 30 people the answer to a question and they all gave the wrong answer, what are the odds that you can average all the wrong answers and get a right answer? Very likely one of the wrong answers is closer to the right answer than is the average.

Steve from Rockwood
June 13, 2013 8:27 pm

10 years minimum, but 15 years practically, 17 years for confirmation, 20 years with padded proof, 30 years would eliminate any natural effects, 60 years would clarify the long term natural trends and 90 years would definitely answer some important questions…but if we had 120 years of worldwide satellite coverage I couldn’t really predict what we would know…surely we should collect such data and then reconvene.

SAMURAI
June 13, 2013 8:29 pm

Thank you, Lord Monckton of Brenchley, for a job well done.
I especially enjoyed seeing the R2 value of the 17 year 4 month trend……0.11…
0.11?… 0.11!? Are you frigging kidding me?
And we still take these grant whor….umm.. bed-wetters seriously?
It is to laugh.
If it weren’t for the $TRILLIONS being wasted on this hoax, it would almost be funny…Almost…
The eventual cost to the scientific community’s credibility and the actual economic and social destruction this silly hoax has inflicted on the world’s economy so far has not been so humorous; tragic comes to mind.

RockyRoad
June 13, 2013 9:07 pm

Samurai, I also nearly dropped my uppers when I saw the R2 value is 0.11.
It’s almost ZERO! Close enough to almost call it zero. At least it isn’t negative, but then, it could start to be without much of a change.

JimF
June 13, 2013 9:10 pm

Anthony/moderators: You come down hard on others, like the dragons or whatever, and some others. Why not give Nick Stokes his one little chance at puerile nastiness, then cut off all his even more juvenile following posts?
Mosher: In re “falsification” as used by rgb@duke. I don’t think he used it in the sense that you think he did. I think he used it in the sense of something a tort lawyer would love to sink his claws into; i.e., “climate scientists” lying through their teeth and misappropriating public funds either through sheer venality or total lack of skill. You may want to clarify that with Mr. RGB – who has clearly posted some of the best thinking we’ve seen on this matter of GCMs.
REPLY: Well, as much as I think Nick has his head up some orifice at times due to his career in government making him wear wonk blinders, he did do one thing that sets him apart from many others who argue against us here. When our beloved friend and moderator Robert Phelan died, Nick Stokes was the only person on the other side of the argument here (that I am aware of) who made a donation to help his family in the drive I set up to help pay for his funeral.
For that, he deserves some slack, but I will ask him to just keep it cool. – Anthony

John Archer
June 13, 2013 9:26 pm

“Finally, there is no such thing as falsification. There is confirmation and disconfirmation.” — Steven Mosher, June 13, 2013 at 11:54 am
I agree with that. Just as verification, in the sense of proving a truth, can’t be had in any science, neither can falsification which is the same thing — proving a truth, the truth that something is false. Haha! Doh!
Even Popper realized this in the end as did Feynman.
Feynman, of course, is no surprise. Besides, I understand he didn’t have a lot of time for philosophy. You can see why. 🙂
But Popper, on the other hand, is a surprise to me. I’ve read some of his stuff and similar but not all of it by a long shot, and on the whole I am very sympathetic to it, except for your point above where I thought he had a big blind spot. He kept banging on about corroboration, for instance, when confirming (putative) truths but seemed a little more adamant when it came to falsification. Dogmatic I’d say.
In fact, the last I heard on this—and that was at least about a couple of decades ago maybe—was that he used to throw a hissy fit if someone brought the symmetry up. Ooh, touchy! 🙂
I didn’t know he recanted though. That’s news to me. Good for him.
The upshot is that he took us all round the houses and back to where we started in the first place — stuck with induction. Haha. Fun if you have nothing better to do.

AJB
June 13, 2013 9:49 pm

The video that piqued my scepticism about the whole hullabaloo:
http://edge.org/conversation/the-physics-that-we-know
“Is there anything you can say at all?” … about 7:45 mins in.

June 13, 2013 9:49 pm

pottereaton says: June 13, 2013 at 7:02 pm

Nick Stokes is here to quibble again. rgbatduke has written some very compelling posts today so Nick must punish him by quibbling over trivial points which customarily arise from Nick’s deliberately obtuse misreading

And if there is anything at which Nick Stokes has proven himself to be the numero uno expert, it is in the art and artifice of “deliberately obtuse misreading” (although, much to my disappointment, there have been times – of which this thread is one – that Steve Mosher has been running neck and neck with Stokes)
But that aside … having just subjected myself (albeit somewhat fortified by a glass of Shiraz) to watching the performances (courtesy of Bishop Hill) across the pond of so-called experts providing testimony at a hearing of the U.K. House of Commons Environmental Audit Committee, I’ve come to the conclusion that ‘t would have been a far, far better thing had they requested the appearance and testimony of rgbatduke than they have ever done before!

JimF
June 13, 2013 9:54 pm

Anthony: fair enough. I will just “ignore” him. ‘Nuff said.

June 13, 2013 10:08 pm

“But Popper, on the other hand, is a surprise to me. I’ve read some of his stuff and similar but not all of it by a long shot, and on the whole I am very sympathetic to it, except for your point above where I thought he had a big blind spot. He kept banging on about corroboration, for instance, when confirming (putative) truths but seemed a little more adamant when it came to falsification. Dogmatic I’d say.”
In the end of course he had to admit to the fact that real scientists don’t actually falsify theories. They adapt them. I’m referring to his little fudge around the issue of auxiliary hypotheses.
“As regards auxiliary hypotheses we propose to lay down the rule that only those are acceptable whose introduction does not diminish the degree of falsifiability or testability of the system in question, but, on the contrary, increases it.”
That in my mind is an admission that scientists in fact have options when data contradict a theory: namely the introduction of auxiliary hypotheses. Popper tried to patch this with a “rule” about auxiliary hypotheses, but the rule in fact was disproved. Yup, his philosophical rule was shown to be wrong… pragmatically.
In Popper’s formulation we are only allowed to introduce auxiliary hypotheses if they are testable and if they don’t “diminish” falsifiability (however you measure that is a mystery). This approach to science was luckily ignored by working scientists. The upshot of Popper’s approach is that one could reject theories that were actually true.
In the 1920s physicists noted that in beta decay (a neutron decaying into a proton and an electron) the combined energy of the proton and the electron was less than the energy of the neutron.
This led some physicists to claim that conservation of energy was falsified.
Pauli suggested that there was also an invisible particle emitted. Fermi named it the neutrino.
However, at the time there was no way of detecting this. By adding this auxiliary hypothesis conservation of energy was saved, BUT the auxiliary hypothesis was not testable. Popper’s rule would have said “thou shalt not save the theory.”
Of course in 1956 the neutrino was detected and conservation of energy was preserved, but by Popper’s “rulz” the theory would have been tossed. The point is that theories don’t get tossed. They get changed. Improved. And there are no set rules for how this happens. It’s a pragmatic endeavor. So scientists will keep a theory around, even one with particles that can’t be detected, as long as that theory is better than any other. Skepticism is a tool of science; it’s not science itself.
If you want an even funnier example see what Feynman said about renormalization.
“The shell game that we play … is technically called ‘renormalization’. But no matter how clever the word, it is still what I would call a dippy process! Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent. It’s surprising that the theory still hasn’t been proved self-consistent one way or the other by now; I suspect that renormalization is not mathematically legitimate.”
So there you go. In order to keep a theory in play, a theory that worked, Feynman used a process that he thought was mathematically suspect. Haha, changing math to fit the theory.

June 13, 2013 10:23 pm

Hilary
“And if there is anything at which Nick Stokes has proven himself to be the numero uno expert, it is in the art and artifice of “deliberately obtuse misreading” (although, much to my disappointment, there have been times – of which this thread is one – that Steve Mosher has been running neck and neck with Stokes)”
I find your intolerance of Nick’s contrary opinions and other contrary opinions to be out of line with the praise for this site which the good Lord bestowed just the other day.
Let’s be clear on a couple of things. Feynman is no authority on how science works. Read his opinion on renormalization and you will understand that he did not practice what he preached.
Popper was likewise wrong about science. This isn’t a matter of philosophical debate; it’s a matter of historical fact.
Here is a hint. You can be a sceptic and not rely on either of these guys’ flawed ideas about how science in fact operates. Theories rarely get “falsified”; they get changed, improved, or forgotten when some better theory comes along. Absent a better theory, folks work with the best they have.

John Tillman
June 13, 2013 10:25 pm

Steven Mosher says:
June 13, 2013 at 11:54 am
Finally, there is no such thing as falsification. There is confirmation and disconfirmation.
even Popper realized this in the end as did Feynman.
———————————————–
Please confirm with actual statements by Popper & Feynman that they “realized” this. Absent your providing evidence to this effect, I think that you have misunderstood the mature thought of both men.
The physicists and philosophers of science Alan Sokal and Jean Bricmont, among others, could not have disagreed with you more. In their 1997 (French; English 1998) book “Fashionable Nonsense” they wrote, “When a theory successfully withstands an attempt at falsification, a scientist will, quite naturally, consider the theory to be partially confirmed and will accord it a greater likelihood or a higher subjective probability… But Popper will have none of this: throughout his life he was a stubborn opponent of any idea of ‘confirmation’ of a theory, or even of its ‘probability’…(however) the history of science teaches us that scientific theories come to be accepted above all because of their successes”.
The history of science is rife with instances of falsification, which neither Popper nor Feynman would I’m sure deny (again, please provide evidence against this view, given their well known support of the theory of falsifiability). There very much indeed is such a thing. Nor would either deny that to be scientific an hypothesis must make falsifiable predictions. If either man did deny this tenet, please show me where.
For instance, Galileo’s observation of the phases of Venus conclusively falsified the Ptolemaic system, without confirming Copernicus’ versus Tycho’s.
As you’re probably aware, Popper initially considered the theory of natural selection to be unfalsifiable, but later changed his mind. I have never read anywhere in his work that he changed his mind about falsifiability. The kind of ad hoc backpedaling in which CACCA engages is precisely what Popper criticized as unscientific to the end. If I’m wrong, please show me where & how.
And that goes double for Feynman.

pat
June 13, 2013 10:36 pm

Samurai says –
“If it weren’t for the $TRILLIONS being wasted on this hoax, it would almost be funny…Almost…
The eventual cost to the scientific community’s credibility and the actual economic and social destruction this silly hoax has inflicted on the world’s economy so far has not been so humorous; tragic comes to mind.”
INVESTORS are really, really concerned about CAGW and the environment!!! nil chance they’ll ever admit it’s a hoax:
13 June: Reuters: Laura Zuckerman: Native Americans decry eagle deaths tied to wind farms
A Native American tribe in Oklahoma on Thursday registered its opposition to a U.S. government plan that would allow a wind farm to kill as many as three bald eagles a year despite special federal protections afforded the birds…
They spoke during an Internet forum arranged by conservationists seeking to draw attention to deaths of protected bald and golden eagles caused when they collide with turbines and other structures at wind farms.
The project proposed by Wind Capital Group of St. Louis would erect 94 wind turbines on 8,400 acres (3,400 hectares) that the Osage Nation says contains key eagle-nesting habitat and migratory routes.
The permit application acknowledges that up to three bald eagles a year could be killed by the development over the 40-year life of the project…
The fight in Oklahoma points to the deepening divide between some conservationists and the Obama administration over its push to clear the way for renewable energy development despite hazards to eagles and other protected species.
The U.S. Fish and Wildlife Service, the Interior Department agency tasked with protecting eagles and other wildlife to ensure their survival, is not sure how many eagles have been killed each year by wind farms amid rapid expansion of the facilities under the Obama administration.
UNDERESTIMATED EAGLE DEATHS
***Reporting is voluntary by wind companies whose facilities kill eagles, said Alicia King, spokeswoman for the agency’s migratory bird program.
She estimated wind farms have caused 85 deaths of bald and golden eagles nationwide since 1997, with most occurring in the last three years as wind farms gained ground through federal and state grants and other government incentives…
***Some eagle experts say federal officials are drastically underestimating wind farm-related eagle mortality. For example, a single wind turbine array in northern California, the Altamont Pass Wind Resource Area, is known to kill from 50 to 70 golden eagles a year, according to Doug Bell, wildlife program manager with the East Bay Regional Park District.
Golden eagle numbers in the vicinity are plummeting, with a death rate so high that the local breeding population can no longer replace itself, Bell said.
The U.S. government has predicted that a 1,000-turbine project planned for south-central Wyoming could kill as many as 64 eagles a year.
***It is illegal to kill bald and golden eagles, either deliberately or inadvertently, under protections afforded them by two federal laws, the Migratory Bird Treaty Act and the Bald and Golden Eagle Protection Act…
In the past, federal permits allowing a limited number of eagle deaths were restricted to narrow activities such as scientific research…
***Now the U.S. Fish and Wildlife Service is seeking to lengthen the duration of those permits from five to 30 years to satisfy an emerging industry dependent on investors seeking stable returns…
http://in.reuters.com/article/2013/06/13/usa-eagles-wind-idINL2N0EP1ZS20130613
——————————————————————————–

June 13, 2013 10:54 pm

rgbatduke at 1:17 pm – Oh Yes, follow the money. Corporate America, which of course includes Big Oil, has consistently been the main supplier of money to the Green Movement for decades.

Thomas
June 13, 2013 11:50 pm

ferdberple 7:30 pm. Impressive cherrypicking of a partial sentence there to make it sound as if I’m wrong.

June 14, 2013 12:26 am

In the past I have defended Nick Stokes for making pertinent points despite their being unpopular here.
However he has really made a fool of himself here.
The question isn’t who made this particular average of model outputs it is whether anyone should make an average of model outputs at all. Clearly, Monckton has made this average of model outputs to criticise the average of model outputs in the forthcoming AR5 (read the post).
Yet, the posts of rgbatduke persuasively argue that making an average of model outputs is a meaningless exercise anyway.
But criticising Monckton for taking the methodology of AR5 seriously is daft.
Criticising AR5 for not being serious is the appropriate response.
I look forward to Nick Stokes strongly condemning any averaging of models in AR5. But I fear I may be disappointed.

David Cage
June 14, 2013 12:32 am

Why do all these predictions get based on a linear projection? Try putting a cyclic waveform on the noisy one and compare the correlations then. They beat the hell out of any linear ones.

Nick Stokes
June 14, 2013 1:05 am

M Courtney says: June 14, 2013 at 12:26 am
“However he has really made a fool of himself here.
The question isn’t who made this particular average of model outputs it is whether anyone should make an average of model outputs at all.”

Model averaging is only a small part of the argument here. Let me just give a few quotes from the original RGB post:
“Note the implicit swindle in this graph — by forming a mean and standard deviation over model projections and then using the mean as a “most likely” projection and the variance as representative of the range of the error, one is treating the differences between the models as if they are uncorrelated random variates causing deviation around a true mean!”
“What I’m trying to say is that the variance and mean of the “ensemble” of models is completely meaningless, statistically because the inputs do not possess the most basic properties required for a meaningful interpretation. “
“Why even pay lip service to the notion that R² or p for a linear fit, or for a Kolmogorov-Smirnov comparison of the real temperature record and the extrapolated model prediction, has some meaning?”
“This is why it is actually wrong-headed to acquiesce in the notion that any sort of p-value or Rsquared derived from an AR5 mean has any meaning.”
My simple point is that these are features of Lord Monckton’s graphs, duly signed, in this post. It is statistical analysis that he added. There is no evidence that the IPCC is in any way responsible. Clear?
As to averaging models, no, I don’t condemn it. It has been the practice since the beginning, and for good reason. As I said above, models generate weather, from which we try to discern climate. In reality, we just have to wait for long-term averages and patterns to emerge. In model world, we can rerun simulations to try to get a common signal. It’s true that models form an imperfect population, and fancy population statistics may be hard to justify. But I repeat, the fancy statistics here seem to be Monckton’s. If there is a common signal, averaging across model runs is the way to get it.
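In code, the averaging described above amounts to something like the following minimal sketch. The array shape and file name are assumptions, and whether the resulting spread is a meaningful statistical error bar is exactly the point rgbatduke disputes.

```python
import numpy as np

# runs: projected anomalies with shape (n_model_runs, n_months).
# The file name and shape are assumptions for illustration only.
runs = np.load("cmip_model_runs.npy")

ensemble_mean = runs.mean(axis=0)        # the "common signal"
ensemble_sd = runs.std(axis=0, ddof=1)   # inter-model spread at each month

# The envelope commonly drawn around a multi-model mean.
upper = ensemble_mean + 2 * ensemble_sd
lower = ensemble_mean - 2 * ensemble_sd
```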

RichardLH
June 14, 2013 1:57 am

David Cage says:
June 14, 2013 at 12:32 am
Why do all these predictions get based on a linear projection? Try putting a cyclic waveform on the noisy one and compare the correlations then. They beat the hell out of any linear ones.
Indeed this analysis (which shows short term cyclic forms in the UAH data) http://s1291.photobucket.com/user/RichardLH/story/70051 supports the non-linear argument.
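A rough way to make the comparison David Cage suggests is to fit both a straight line and a simple sinusoid to the same monthly series and compare how much variance each explains. The sketch below assumes a UAH-style anomaly file and an arbitrary five-year starting period; neither is taken from the linked analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

anoms = np.loadtxt("uah_monthly_anomalies.txt")  # hypothetical input
t = np.arange(anoms.size)

# Straight-line fit.
linear_fit = np.polyval(np.polyfit(t, anoms, 1), t)

# Sinusoid with an assumed multi-year period (initial guess: 60 months).
def wave(t, amp, period_months, phase, offset):
    return amp * np.sin(2 * np.pi * t / period_months + phase) + offset

popt, _ = curve_fit(wave, t, anoms, p0=[0.2, 60.0, 0.0, 0.0])
cyclic_fit = wave(t, *popt)

def r_squared(y, fit):
    ss_res = np.sum((y - fit) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

print("linear R^2:", round(r_squared(anoms, linear_fit), 3))
print("cyclic R^2:", round(r_squared(anoms, cyclic_fit), 3))
```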

June 14, 2013 2:14 am

It’s true that models form an imperfect population, and fancy population statistics may be hard to justify.

Which is the point.
Monckton can justify it by referring to AR5, which he is commenting on. Whatever fancy statistics he uses are not relevant to the question of whether including different models – that have no proven common physics – is appropriate at all. He is commenting on AR5.
The point of the original RGB post, as you quote, is the latter idea: The question of whether including different models that have no common physics is appropriate at all.
So what Monckton did is irrelevant to the original RGB post. Monckton was addressing AR5.
AR5 is the problem here (assuming the blending of disparate models still occurs in the published version).

William Astley
June 14, 2013 3:58 am

The following is a summary of the comments concerning the observed and unexplained end of global warming. The comments are interesting as they show a gradual change in attitudes/beliefs concerning what is the end of global warming.
Comment:
If the reasoning in my above comment is correct the planet will now cool which would be an end to global warming as opposed to a pause in global warming.
Source: “No Tricks Zone”
http://notrickszone.com/2013/06/04/list-of-warmist-scientists-say-global-warming-has-stopped-ed-davey-is-clueless-about-whats-going-on/
5 July, 2005
“The scientific community would come down on me in no uncertain terms if I said the world had cooled from 1998. OK it has but it is only 7 years of data and it isn’t statistically significant…,” Dr. Phil Jones – CRU emails.
7 May, 2009
“No upward trend…has to continue for a total of 15 years before we get worried,” Dr. Phil Jones – CRU emails.
15 Aug 2009
“…This lack of overall warming is analogous to the period from 2002 to 2008 when decreasing solar irradiance also countered much of the anthropogenic warming…,” Dr. Judith L. Lean – Geophysical Research Letters.
19 November 2009
“At present, however, the warming is taking a break.[…] There can be no argument about that,” Dr. Mojib Latif – Spiegel.
19 November 2009
“It cannot be denied that this is one of the hottest issues in the scientific community. [….] We don’t really know why this stagnation is taking place at this point,” Dr. Jochem Marotzke – Spiegel.
13 February 2010
Phil Jones: “I’m a scientist trying to measure temperature. If I registered that the climate has been cooling I’d say so. But it hasn’t until recently – and then barely at all.”
BBC: “Do you agree that from 1995 to the present there has been no statistically-significant global warming?”
Phil Jones: “Yes, but only just.”
2010
“…The decade of 1999-2008 is still the warmest of the last 30 years, though the global temperature increment is near zero…,” Prof. Shaowu Wang et al – Advances in Climate Change Research.
2 June 2011
“…it has been unclear why global surface temperatures did not rise between 1998 and 2008…,” Dr Robert K. Kaufmann – PNAS.
18 September 2011
“There have been decades, such as 2000–2009, when the observed globally averaged surface-temperature time series shows little increase or even a slightly negative trend1 (a hiatus period)…,” Dr. Gerald A. Meehl – Nature Climate Change.
14 October 2012
“We agree with Mr Rose that there has been only a very small amount of warming in the 21st Century. As stated in our response, this is 0.05 degrees Celsius since 1997 equivalent to 0.03 degrees Celsius per decade.” Source: metofficenews.wordpress.com/, Met Office Blog – Dave Britton (10:48:21) –
30 March 2013
“…the five-year mean global temperature has been flat for a decade,” Dr. James Hansen – The Economist.
7 April 2013
“…Despite a sustained production of anthropogenic greenhouse gases, the Earth’s mean near-surface temperature paused its rise during the 2000–2010 period…,” Dr. Virginie Guemas – Nature Climate Change.
22 February 2013
“People have to question these things and science only thrives on the basis of questioning,” Dr. Rajendra Pachauri – The Australian.
27 May 2013
“I note this last decade or so has been fairly flat,” Lord Stern (economist) – Telegraph.

Patrick
June 14, 2013 4:16 am

“13 February 2010
Phil Jones: “I’m a scientist trying to measure temperature. ….”
I can read a thermometer AND can use Microsoft Excel. Now where is my grant money?

Paul Mackey
June 14, 2013 4:41 am

Actually, I am very surprised to hear the Guardian still has ANY readers left.
