Met Office Confirms 2014 Continues Global Warming ‘Pause’
Guest essay by Dr David Whitehouse, via The GWPF
With the release of the 2014 HadCRUT4 data by the UK Met Office, and the previous release of global temperature data by Berkeley Earth, Nasa and Noaa, the main conclusion to be drawn from the data is that 2014 was a warm year, but not statistically distinguishable from most of the years of the past decade or so, meaning that the “pause” in global annual average surface temperatures continues.
The Met Office said:
“The HadCRUT4 dataset (compiled by the Met Office and the University of East Anglia’s Climatic Research Unit) shows last year was 0.56C (±0.1C) above the long-term (1961-1990) average. Nominally this ranks 2014 as the joint warmest year in the record, tied with 2010, but the uncertainty ranges mean it’s not possible to definitively say which of several recent years was the warmest.”
![new-hadcrut4[1]](https://wattsupwiththat.files.wordpress.com/2015/01/new-hadcrut41.jpg?resize=720%2C86&quality=83)
Looking in detail at why 2014 was a warm year shows that it was down to unusually warm temperatures for a few months in the northeast Pacific. It is also obvious that had December not been such a warm month, 2014 would have been much cooler. The Met Office says in its press release:
“Phil Jones, of the University of East Anglia, said: 2014 was an exceptionally warm year which saw warm tropical pacific temperatures, despite not being officially regarded as an El Niño.”
Unusually warm Pacific temperatures in the region where they were observed indicate that what made 2014 interesting was not down to any predicted manifestation of “global warming.”
However, the Met Office considers that the temperature attained in 2014, and therefore in all of the years of the past decade or so, would not have been achieved without human influence. In a press release put out in December (when HadCRUT4 data was available to October), when it was still possible that 2014 would set a “record” and could have been treated as a separate event, they said that new research techniques developed by them allow for rapid assessment of how human influence might have affected the chances of breaking temperature records. They said:
“This technique, known as an attribution study, uses climate models and observations to see how likely an event would be in the real world and in a world without human greenhouse gas emissions – enabling assessment of how human influence has altered the chances of an event.”
Peter Stott, Head of Climate Attribution at the Met Office, said: “Our research shows current global average temperatures are highly unlikely in a world without human influence on the climate.” Such attribution research is highly speculative and should have been flagged as such in a press release whose aim was to get the media to print a story suggesting that 2014 would be a ‘record’ year, and to give them an explanation for it. As it turned out, November’s and December’s HadCRUT4 data whittled away the chances of 2014 being a “record.”
In general the Met Office and, before them, the Berkeley Earth project were reasonable about the data in pointing out that a new record was not established unequivocally because of the large error bars that encompass 2014 and many other recent years. This is in contrast to the stance taken by NASA, which proclaimed without doubt, and without even quoting the temperature or any error information, that 2014 was the warmest year ever.
2014 fits in perfectly with the suggestion that for the past 18 years HadCRUT4 is best represented by a constant temperature.
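To make “best represented by a constant temperature” concrete, here is a minimal sketch (not the author’s or the Met Office’s calculation) of the usual check: fit an ordinary least-squares trend to the annual anomalies and ask whether its roughly 95% interval straddles zero. The data values are placeholders to be filled with the published HadCRUT4 annual means, and the simple residual-based error bar ignores autocorrelation, so it understates the true uncertainty.

```python
# Sketch only: OLS trend and a rough 95% interval for an annual-anomaly series.
# Assumes independent residuals (optimistic for temperature data) and uses
# placeholder inputs; substitute the published HadCRUT4 annual means.
import numpy as np

def trend_with_uncertainty(years, anomalies):
    """Return (trend, half_width) in deg C per decade; half_width ~ 1.96 * SE."""
    years = np.asarray(years, dtype=float)
    y = np.asarray(anomalies, dtype=float)
    x = years - years.mean()
    slope = (x * (y - y.mean())).sum() / (x ** 2).sum()
    residuals = y - (y.mean() + slope * x)
    se = np.sqrt((residuals ** 2).sum() / (len(y) - 2) / (x ** 2).sum())
    return 10.0 * slope, 10.0 * 1.96 * se

# Example call (hypothetical variable name, illustration only):
# trend, half = trend_with_uncertainty(range(1997, 2015), hadcrut4_annual_means)
# A "pause" in this sense means the interval (trend - half, trend + half)
# includes zero, i.e. the fitted trend is not distinguishable from a constant.
```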
Feedback: david.whitehouse@thegwpf.com
– See more at: http://www.thegwpf.com/met-office-confirms-2014-continues-global-pause/

Words like ‘Face’ & ‘Egg’ spring to mind . . . .
The media in the EU and US continue to ignore ALL of this interesting information. It is in a very black box. Locked by the UN. Totally air tight. Few outside of us angry people will be told this information; they would have to dig it out of the Boston snow banks today.
The Met Office said it is one of the ten warmest years on record.
http://www.metoffice.gov.uk/news/release/archive/2015/2014-global-temperature
That seems to me to be using the right amount of circumspection bearing in mind the tiny margins between the recent warm years and the margins of error.
BEST were also circumspect with their claims.
Tonyb
I am trying to enjoy the Inter-Glacial, guys, give me a break!
Tar and Feathers more likely.
(A wasted posting effort by a banned sockpuppet. Comment DELETED. -mod)
They had to read it on here first, though!
Josh, you’ve outdone yourself. So many layers of satire.
That last bit: “a 1 in 27E6 chance that a Climate Alarmist will tell the truth.” is wickedly funny.
NASA will need to work on rebuilding its credibility if this keeps up.
NASA should just focus on space exploration.
It’s programs like NASA GISS that make American citizens shake their heads when Congress seemingly can’t find ways to reduce the deficit. Cutting GISS would save over $1 billion each year.
Some of them actually do. I had lunch today in the NASA Ames Research cafeteria on Moffett Field in Mountain View / Sunnyvale, CA.
And we talked about his modeling and hypersonic wind tunnel validation of CFD models of hypersonic flight problems.
And yes, he told me that nothing flies anywhere if the wind tunnel doesn’t agree with the floppy disc. [Their] CFD model does predict what they can check out in their giant vacuum cleaner. I can hear that thing at my desk when they are “adjusting their data.”
Well they don’t do that like Hansen does. He’s also an expert on the insulating heat shield technology, so we gabbed a bit about the good old days of ablative heat shield cooling systems.
No I dunno who he is, but you never know who’s going to sit beside you when you eat at the (CA State sales tax free) NASA Ames Research Cafeteria. It’s fed property so Nyet on CA State taxes on base.
Oh he did tell me who is running the HS wind tunnel, and it is nice to know that someone is paying attention to the store.
G
I really do not think they are capable of such activity, may God truly help the next men on the Moon if they try!
Hope you don’t mind but I’ve forwarded this to the Daily Telegraph (UK).
Strictly going by the numbers, without regard to error bars, 2014 was first on Hadcrut4.
But if we assume that any anomaly that is 0.03 above or below the 2014 anomaly is in a statistical tie with 2014, then there is a statistical 4 way tie from ranks 1 to 4.
However, if we assume that any anomaly that is 0.1 above or below the 2014 anomaly is in a statistical tie with 2014, then there is a statistical 11 way tie from ranks 1 to 11.
Almost the same thing can be said for GISS. The only differences are that the ±0.03 tie is 3 ways and the ±0.1 tie is 10 ways.
But satellites are way different!
If we assume that any anomaly that is 0.1 above or below the 2014 anomaly is in a statistical tie with 2014, then both UAH version 5.5 and RSS show a statistical 12 way tie from ranks 3 to 14.
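For what it’s worth, the tie-counting above reduces to a few lines of code. A minimal sketch, assuming you have a year-to-anomaly table for whichever dataset you are ranking; the values in the usage comment are placeholders, not the real HadCRUT4/GISS/UAH/RSS numbers.

```python
# Sketch of the tie-counting used above: any year whose anomaly lies within a
# chosen window of the 2014 value is treated as a "statistical tie" with 2014.
def statistical_ties(anomalies, target_year=2014, window=0.1):
    """anomalies: {year: annual anomaly}. Returns years tied with target_year."""
    target = anomalies[target_year]
    return sorted(y for y, v in anomalies.items() if abs(v - target) <= window)

# Usage with placeholder values (illustration only):
# anoms = {2014: 0.56, 2010: 0.56, 2005: 0.54, 1998: 0.52}
# print(len(statistical_ties(anoms, window=0.03)))  # tighter window, fewer ties
# print(len(statistical_ties(anoms, window=0.10)))  # wider window, more ties
```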
No matter how “unusual” an event is, or how low the probability of it occurring, so long as it is NOT zero, and not in violation of any of the known laws of physics, it can occur again tomorrow, and then again on the next day.
But, after thousands or millions of opportunities, or occasions for such an event to occur, you can expect the frequency of such occurrences to coalesce around the rates calculated from the statistics.
But you can NEVER rule it out on any one occasion, no matter how low its non-zero probability is.
Statistics is all about results you have already observed, and has no place in contemplation of events not yet observed. Well, it might suggest how surprised you are going to be when you get the next data, but it doesn’t affect what that data will be.
G
Well said! The next 10 years may be hotter than each of the preceding ones, but if the next 10 are all cooler than the preceding ones, then overall warming will be zero +/-.
Lady Luck has no memory!
My guess is that the uncertainty of the extrapolated ground-based measurement estimate of global temperature anomaly for recent years could easily be as much as 0.5C or more for each year. There are many sources of uncertainty, but the greatest are probably the simple lack of spatial coverage and uncertainties associated with representativeness of the measurements. The uncertainties are likely to be even larger for older estimates that are based on even more sparse data subjected to “homogenization” that may actually introduce further uncertainty. The satellite derived estimates of global lower atmospheric temperature anomalies have better spatial coverage but suffer from poor temporal coverage and may have uncertainties as large as for our current ground-based measurement estimates.
then things are said. perfect.
Don’t trust the Met Office anyway; they have trotted out too much garbage in the past, like this…
http://youtu.be/WyDmdcPw7Uw
Did this ridiculous woman lose her job? No.
Wait for it. Phil-dot or Nick Stokes will be along any minute to “prove” she wasn’t “very far off”.
For those who may be interested, this is how Hadcrut3 would have done if it were still around. Assuming that Hadcrut3 would have gone up as much from 2013 to 2014 as Hadcrut4 did, then Hadcrut3 would have had a 2014 anomaly of 0.529. This would have placed it in 2nd place. Prior to this year, 1998 was at 0.548 and 2005 was at 0.482.
The HadCRUT3v anomaly in 2004 was 0.444. Had it gone up by 0.3 degrees, it would be 0.744. The last full year recorded under this set was May 2013 to May 2014. This means the anomaly was 0.472. Phil-dot or Nick Stokes cannot say this ridiculous woman was anywhere even remotely close. It was an absurd prediction from an absurd woman. The Met Office should be ashamed of their conduct. As a climate prediction centre, it is completely useless.
The anomaly cannot be measured to three decimal places and thus cannot be calculated to three decimal places.
Don’t forget the perennial Betts, Wilson et al.
“Quoting the temperature to one hundredth of a degree and the error on that measurement to a tenth of a degree is not normal scientific practice.”
There was an asterisk on the original which has gone missing. And it pointed to a footnote:
“*0.1° C is the 95% uncertainty range.”
So first, the normal practice referred to would use the standard deviation, which is less. But more importantly, it isn’t the measurement error. It is the CIs from an ensemble of simulations, which principally express the variation you might expect if you had been able to measure in other places.
Doesn’t “*0.1° C is the 95% uncertainty range.” sound so much more certain than
“*0.1° C is the 5% certainty range.” ??
Nick, “the normal practice referred to would use the standard deviation” has nothing to do with measurement error. Measurement error is the sum of all of those errors that go into a particular measurement. If some of those are statistically derived, then that component of the error may be estimated and have an associated standard deviation. Not all measurement error is estimated. The OP is correct. If you quote a value with a higher precision than the precision of the measuring device, you will get into trouble, particularly if what you say influences monetary decisions. Of course, there are a lot of things that climate scientists get away with that securities regulators would jail others for.
“Not all measurement error is estimated.”
Then where does it come from?
“If you quote a value with a higher precision than the precision of the measuring device”
Again, the 0.1°C is not related to the precision of the measuring device.
Nick: You said:
“Not all measurement error is estimated.”
Then where does it come from?
You got me there. Should have left that bit out as it detracts from my original point. Regarding unestimated error, I am referring to standard measurement accuracy of instruments or ‘test equipment variation’, one of at least 5 sources of measurement error. You are correct. It is estimated and has an associated standard deviation. Lab folk, like myself, take this error as an absolute, but it is indeed a standard deviation. That being said, it is highly dangerous to quote results with a precision greater than the measurement error. Securities regulators refer to it as “fraudulent”.
Second (and last) point, Nick. You said “Again, the 0.1°C is not related to the precision of the measuring device.” First of all, given that we are picking nits, we are talking about accuracy, not precision. Moderate-accuracy RTDs are usually, at best, ±0.15°C (yes, yes, the platinum thingies are 0.003°C, but they are not used outside of very expensive labs). Pretty sure most weather station devices don’t have that accuracy. Possibly better precision, but we are talking accuracy, not precision. ±0.1°C is pretty close to the accuracy of most weather monitoring devices. I would be all ears if you could point to a weather monitoring device that has an accuracy better than 0.1°C and is in wide use. Argos don’t count. They are not weather monitoring devices.
John,
The uncertainty of 0.1°C is based on the paper of Brohan et al, 2006. It includes instrument error, but that is a very small component. An annual average is formed from about a million daily readings. Conventionally, the error of the mean of instrument error is down by about sqrt(N), or 1000, on the individual. Even if you allow for lack of independence, it is still pretty small. The big factor is the uncertainty of temperatures elsewhere from where measured.
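For readers who want to see the sqrt(N) arithmetic Nick is invoking, here is a minimal simulation under the assumption of independent, zero-mean reading errors (the assumption the reply below disputes); the per-reading error of 0.5 °C is made up purely for illustration.

```python
# Illustration only: if individual reading errors were independent, the error
# of their mean would shrink as sigma / sqrt(N) -- roughly a factor of 1000
# for a million readings. Independence is the contested assumption here.
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5  # assumed per-reading error in deg C (made-up value)

for n in (1, 100, 10_000):
    # 500 simulated "annual averages", each the mean of n independent errors
    means = rng.normal(0.0, sigma, size=(500, n)).mean(axis=1)
    print(f"N={n:>6}: simulated SE of mean = {means.std():.4f}, "
          f"theory sigma/sqrt(N) = {sigma / np.sqrt(n):.4f}")
```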
@Nick Stokes
The pass on inaccuracy you get from large numbers applies only to measurements of the same sample. Since the temperatures being measured vary in both time and space, generally you have a sample size, “N”, of exactly “1”. Just like you never step into the same river twice, neither do you measure the same temperature twice. Or a million times, for that matter.
Normal practice is to use 2SD (95%CI) when making a judgement about two values being equal or significantly different.
Are you implying an error when you quote the article? I’m sure that it’s correct to round your results so that the error is only one significant figure (or two if the first is 1) and the value is to the same decimal place as the error.
The measurements in question are not measurements; they are calculated values, so the precision can go out to 3, 10, or 15 places, or however far one wants to go. The question is whether it makes sense to calculate values out to hundredths when measuring devices are precise to tenths, and when collecting, collating, re-factoring, homogenizing, and in-filling introduce many opportunities for error as well as increased muddiness in the data. In my opinion, numbers published out to hundredths of a degree based on those factors defy common sense.
The fact that this data manipulation consistently makes the past cooler and the future warmer may be an indicator of a warming trend, but is just as likely to be an indicator of bias; an assertion is made and then the assertion becomes the proof driving assumptions and adjustments.
You cannot calculate a value to a greater precision than that of the original measurement. The error figure represents accuracy in measurement rather than its precision. If the accuracy is +/- 0.1, then the precision cannot exceed one decimal place, regardless of how the accuracy is determined.
Health warning. Nick Stokes defending the indefensible ….. again. Is he for real?
Sadly yes, although he does add some merriment to the blog every now and then.
Nick comes here and speaks his mind. He disagrees with many of the things here. When he posts, it is usually a clear statement that can be argued. He is civil and intelligent. I disagree with much of what he says, particularly his defense of homogenization techniques, but I always read his posts. As Willis says, if you disagree with something he says, quote it and prove him wrong. Otherwise you are just a drive-by sniper.
And what’s your problem with snipers? Are you even American?
/sarc
What would it have been if they were still using the semi-adjusted HADCRUT3 version instead of the fully adjusted HADCRUT4?
| 12-month average anomaly | HADCRUT3 | HADCRUT4 |
| --- | --- | --- |
| Dec 1998 | 0.55 | 0.52 |
| Dec 2011 | 0.34 | 0.40 |
| Increase/decrease | -0.21 | -0.12 |
The new version increases warming (or rather decreases cooling) since 1998 by 0.09C, a significant amount for a 13 year time span. Whilst the changes should not affect the trend in future years, they will affect the debate as to whether temperatures have increased in the last decade or so.
https://notalotofpeopleknowthat.wordpress.com/2012/10/10/hadcrut4-v-hadcrut3/
“This technique, known as an attribution study, uses climate models and observations to see how likely an event would be in the real world and in a world without human greenhouse gas emissions – enabling assessment of how human influence has altered the chances of an event.”
IF they could predict global temperatures to exquisite precision with a “baseline” climate model, I might buy the idea that they could then compare new temperatures against that baseline and try to attribute any differences as “other than baseline”.
BUT they can’t predict global temperatures to exquisite precision, they can’t even accurately HINDCAST global temperatures with their model. And they certainly can’t say that any perceived difference in current temps is caused specifically by CO2 or even by human activity in general, only that something has changed from the baseline conditions of their model. They don’t even know what number to set the Climate Sensitivity knob to on their CO2 model.
The whole premise is laughable and fundamentally unscientific.
I would presume that a warmist might say that even though 2014 temps might be tied with 2005 & 2010, it was still close to being a record. And that simply joining those 3 extreme temperatures with a line is evidence of nothing…
R
My rule of thumb is that if you double (at least) the statistical error you may, just, get somewhere close to the overall survey error.
I guess the missing heat wasn’t hiding in 2014.
Back to The Time Tunnel!
The year 2014 was supposed to have been the hottest year on record globally according to NOAA/NASA. Yet a review of the numbers shows otherwise. It was not a record for half the globe, namely Southern Hemisphere land and oceans, nor a record for total global land areas, nor a record for North and South America. Even the global temperatures measured by satellites showed it was not a record.
The 2014 record global annual temperature was mostly a NORTH PACIFIC record SST event.
If one tracks the global ocean temperatures, it was the Northern Hemisphere ocean temperature, and more particularly the North Pacific Ocean from October to December, that pushed 2014 into the record temperature range. In September the Northern Hemisphere ocean temperature anomaly was still tied with 1998, but by October, 2014 had become the record highest and stayed there to the year end. So the prime reason for the warm 2014 record was the extra warming of the North Pacific during the last 3 months of the year. It was not a global yearlong event at all.
Quoting the temperature to one hundredth of a degree and the error on that measurement to a tenth of a degree is not normal scientific practice.
It’s also BS as they have no way of actually having that degree of accuracy, except via ‘models’.
But I will happily give you 50-1 that this year will also be claimed as the ‘warmest ever’ no matter what actually happens. Life is just so much easier when it’s you controlling the ‘data’ used to make a judgement on whether you’re right or wrong.
I’m sure the BBC will be reporting the MetO update to the masses any moment now just like they always do being honest journalists. Any moment now. Any moment. Any moment, right about…now. Now?
How about now?
How ridiculous to start with 1961, then use the words “hottest year EVER”… The rest of the conversation is a waste of time. 🙂
So Herkimer, excuse me if this question has been answered here. I cannot possibly read every thread or comment.
Where did the extra warming of the North Pacific originate? And why was this crucial information not included in media press releases?
Is there more than one theory on the origin of the warming in the North Pacific?
The vast majority of heat in the oceans comes from the sun. An almost trivial amount comes from geothermal activity, and a (really) minuscule amount from anthropogenic activity. Virtually none comes from LWIR emissions from carbon dioxide.
Mick,
If I’ve understood Bob’s explanations, warmed surface water is piled up by steady winds, exposing cooler water that is heated and driven. Relaxed winds firstly allow that mounded water to smear back across the surface and then be warmed but not blown away. The result is a larger surface area for the same volume of warm water.
They may have learned from the GISS fiasco since UKMet circumspection has not previously been evident. Hopeful sign?
Mick
You can read some past postings of Bob Tisdale
https://bobtisdale.wordpress.com/2015/01/17/on-the-biases-caused-by-omissions-in-the-2014-noaa-state-of-the-climate-report/
Thanks, Dr. Whitehouse, for bringing some scientific common sense to the discussion.
Since the very strong El Niño in 1998, global temperatures stopped rising while CO2 kept on increasing. Atmospheric CO2 can not be the controlling factor for global temperatures.
2014 was a weak El Niño year (MEI around 0.8).
Good to hear. I essentially said the same thing in a post the day after the NYT story came out. https://luysii.wordpress.com/2015/01/18/the-new-york-times-and-noaa-flunk-chem-101/
The New York Times and NOAA flunk Chem 101
As soon as budding freshman chemists get into their first lab, they are taught about significant figures. Thus 3/7 = .4 (not .428571, which is true numerically but not experimentally). Data should never be numerically reported with more significant figures than given by the actual measurement.
This brings us to yesterday’s front page story (with the map colored in red) “2014 Breaks Heat Record, Challenging Global Warming Skeptics”. Well it did, if you believe that a .02 degree centigrade difference in global mean temperature is significant. The inconvenient fact that the change was this small was not mentioned until the 10th paragraph. It was also noted there that .02 C is within experimental error. Do you have a thermometer that measures temperatures that exactly? Most don’t, and I doubt that NOAA does either. Amusingly, the eastern USA was the one area which didn’t show the rise. Do you think that measurements here are less accurate than in Africa, South America, or Eurasia? Could it be the other way around?
It is far more correct to say that global warming has essentially stopped for the past 14 years, as mean global temperature has been basically the same during that time. This is not to say that we aren’t in a warm spell. Global warming skeptics (myself included) are not saying that CO2 isn’t a greenhouse gas, and they are not denying that it has been warm. However, I am extremely skeptical of models predicting a steady rise in temperature that have failed to predict the current decade-and-a-half stasis in global mean temperature. Why should such models be trusted to predict the future when they haven’t successfully predicted the present?
It reminds me of the central dogma of molecular biology years ago “DNA makes RNA makes Protein”, and the statements that man and chimpanzee would be regarded as the same species given the similarity of their proteins. We were far from knowing all the players in the cell and the organism back then, and we may be equally far from knowing all the climate players and how they interact now.
I think you are wrong. Division can increase the number of significant digits. So 3/7 = 0.43.
Undoing the division shows why, for your example: 0.43 x 7 = 3.0 (<1% error); 0.4 x 7 = 2.8 (10% error). Hidden behind both the 3 and the 7 is an extra digit of uncertainty (+/- 0.5) that does not disappear. If your rule were correct, 3 x 7 would become 20, whereas it’s actually 21, which has 2 digits of accuracy just as 20 does. Does that make sense?
Assuming the numerator and denominator are measured, 3/7 has a range of 0.34 to 0.52. 3 x 7 is 16 to 25. So 0.4 (which has a possible true value of 0.35 to 0.44) and 20 (possible true value 15 to 24) are the correct answers for round-off and everything has 1 significant digit. In short, division cannot increase the number of sigfigs.
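A small sketch of that interval reasoning, for anyone who wants to check it: propagate the +/- 0.5 rounding half-width through the division and the multiplication and see how many digits survive. The exact endpoints depend on how the half-interval is taken, so they differ slightly from the numbers quoted above.

```python
# Sketch: ranges of a/b and a*b when each measured integer is known only to
# +/- half a unit. Checking the corner values is enough because both
# operations are monotone in each argument for positive inputs.
def divide_range(a, b, h=0.5):
    corners = [(a + da) / (b + db) for da in (-h, h) for db in (-h, h)]
    return min(corners), max(corners)

def multiply_range(a, b, h=0.5):
    corners = [(a + da) * (b + db) for da in (-h, h) for db in (-h, h)]
    return min(corners), max(corners)

print(divide_range(3, 7))    # (0.333..., 0.538...) -> quote as 0.4, one sig fig
print(multiply_range(3, 7))  # (16.25, 26.25)       -> quote as about 20
```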
MICK
this may be of help as well
https://bobtisdale.wordpress.com/2014/08/16/on-the-recent-record-high-global-sea-surface-temperatures-the-wheres-and-whys/
I still think the alarmist crowd had this “2014, warmest year on record” scenario planned many months ago. Remember, the alarmist crowd was hoping for a strong El Niño signal.