From the GWPF
By Dr David Whitehouse
This new paper does not affect the fact that the temperature databases, with their own allowances for data-free regions, show no warming for 16 years, or at the very least no warming for about 95% of the globe for 16 years.
The ‘pause’ seen in the land and ocean global surface temperature during the last 16 years is one of the major talking points of climate science. It has been said by some politicians and journalists that ‘sceptics’ have used the ‘pause’ to undermine climate science. In fact, a great many scientists and others are working hard to understand the ‘pause.’ The ‘pause’ IS climate science.
Many factors have been put forward as explanations, such as warming going into the oceans, soot in the atmosphere, natural decadal variability, El Niño/La Niña variations, solar effects, and fluctuations in stratospheric water vapour, to name just a few.
The ‘pause’ is seen across databases. It is a remarkable property of the HadCrut4, NasaGiss and NOAA surface temperature datasets and the UAH and RSS satellite lower atmosphere observations.
As we have said before in these pages, it is very curious that the global surface temperature for the last 16 years is flat given the increasing pressure of greenhouse forcing from the ever-rising concentrations of greenhouse gases. We have also pointed out that the 16-year duration of the pause is not cherry-picked but comes purely from the properties of the data. Contrary to the belief of many, the super El Niño year of 1998 makes no statistical difference to the length of the pause, because of the two cool La Niña years that followed it.
Even though there are currently more explanations for the ‘pause’ than can possibly all be correct (or they combine curiously to produce a straight line for 16 years), further explanations are to be welcomed and scrutinised. Hence the interesting new paper by Cowtan and Way in the Quarterly Journal of the Royal Meteorological Society.
Its central premise is not new. Global datasets might not be properly accounting for the recent warming Arctic due to poor sampling. Arctic temperatures are increasing faster than the global average. This would make such datasets cooler than they should be by a factor that depends upon the temperature rise and the area concerned. Cowtan and Way consider HadCrut4 which has gaps in its polar coverage. It should be noted that NasaGiss does carry out some extrapolation to infill missing Arctic data, and HadCrut4 takes into account the missing data in its uncertainty estimates.
In addition to the uncertainty estimates due to polar gaps, the shift in 2012 from the HadCrut3 temperature database to HadCrut4 (which included more than 400 extra weather stations in Arctic regions to improve polar coverage) added an extra 0.04 deg C of warming to the global figure between 1998 and 2010. That extra warming has since been reduced because subsequent years have been cooler than 2010. HadCrut4 turned out to be a little warmer than HadCrut3 post-2005, though statistically it was actually flatter post-1997 than its predecessor.
To illustrate the coverage problem, Cowtan and Way take the global surface temperature datasets, reduce them in area to the coverage of HadCrut4, and compare before and after (their Fig 2).
This gives an estimate of the potential bias, which is of the order of 0.02 deg C. Three datasets are shown, though most researchers in this field use only Giss and UAH. I do not agree with the researchers’ comment: “All the global series show a rapidly increasing cool bias over the past few decades and a sharp decline starting around 1998.” To me the minor deviation appears to begin around 2005.
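The before/after comparison amounts to computing an area-weighted global mean with and without the poorly sampled cells. A minimal sketch of that arithmetic follows; the latitudes, anomaly values and coverage mask are made up for illustration, and this is not Cowtan and Way’s code:

```python
import numpy as np

def masked_global_mean(anomaly, lat_deg, mask):
    """Cosine-latitude weighted mean, restricted to covered cells."""
    w = np.cos(np.radians(lat_deg)) * mask
    return np.sum(anomaly * w) / np.sum(w)

# Toy zonal-mean anomalies with a rapidly warming band at 80N.
lats    = np.array([80.0, 40.0, 0.0, -40.0, -80.0])
anomaly = np.array([2.0,  0.4,  0.3,  0.4,   0.2])

full    = masked_global_mean(anomaly, lats, np.ones(5))
no_pole = masked_global_mean(anomaly, lats, np.array([0.0, 1, 1, 1, 1]))

# Dropping the warm Arctic band biases the global mean cool,
# though the cosine weighting keeps the effect small.
print(full - no_pole)
```

The cosine weighting is why a warm but small polar region moves the global figure only slightly, which is the crux of the dispute over how much the gaps matter.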
Cowtan and Way wanted to find a way to infill the absent Arctic data. It’s not an easy thing to do, as there are spatial and temporal variations in all the datasets. The researchers used two methods: kriging, a standard geostatistical technique for estimating missing values from nearby observations, and a method based on satellite data. For the latter they determined a relationship between satellite and ground measurements and used it to estimate the ground temperature in regions where there is satellite but no ground coverage. Both techniques have to be applied very carefully.
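For readers unfamiliar with kriging: ordinary kriging reduces to solving one small linear system per target point, with weights constrained to sum to one. The sketch below shows the mechanics in pure NumPy, using a hypothetical exponential covariance with a made-up 800 km range and treating coordinates as planar kilometres for simplicity; it is not the covariance model the paper fits:

```python
import numpy as np

def ordinary_krige(xy_obs, z_obs, xy_new, range_km=800.0, sill=1.0):
    """Estimate values at xy_new from scattered observations z_obs."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return sill * np.exp(-d / range_km)   # assumed exponential model

    n = len(z_obs)
    # Kriging system: observation covariances, bordered by the
    # Lagrange row/column that forces the weights to sum to 1.
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(xy_obs, xy_obs)
    K[n, n] = 0.0

    k = np.ones((n + 1, len(xy_new)))
    k[:n, :] = cov(xy_obs, xy_new)

    w = np.linalg.solve(K, k)        # weights (plus Lagrange multiplier)
    return w[:n, :].T @ z_obs        # weighted sums of the observations

# Kriging is an exact interpolator: at an observation site it
# reproduces the observed value.
pts  = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0]])
vals = np.array([1.0, -0.5, 0.3])
print(ordinary_krige(pts, vals, pts[:1]))  # ≈ [1.0]
```

The caution in the text comes from the choices buried in this sketch: the covariance model and its range parameter control how far the infilled estimates are allowed to reach, and a poor choice will happily extrapolate warmth (or cold) hundreds of kilometres into data-free regions.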
The researchers created what they call a hybrid global temperature dataset from the satellite and ground data. Where ground data were available they used them; where they were not, they adjusted the satellite data over that region to produce an estimate of the ground temperature. They created global temperature datasets based on each of their two approaches. They also deleted known data and checked how well their methods reproduced it.
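The hybrid idea can be illustrated in a few lines. This is only a sketch under a strong simplifying assumption — a single global linear fit between satellite and ground anomalies — whereas the paper calibrates the satellite/ground relationship locally:

```python
import numpy as np

def hybrid_field(ground, sat):
    """Use ground anomalies where available; elsewhere use satellite
    anomalies mapped through a fitted ground-vs-satellite relationship.
    Illustrative only: one global linear fit, not the paper's local fit."""
    have_ground = ~np.isnan(ground)
    a, b = np.polyfit(sat[have_ground], ground[have_ground], 1)
    return np.where(have_ground, ground, a * sat + b)

# Toy check in the spirit of the paper's validation: hide two cells
# of a known field and see whether the method recovers them.
sat    = np.linspace(-1.0, 1.0, 10)
truth  = 0.8 * sat - 0.1            # pretend ground anomalies
ground = truth.copy()
ground[[3, 7]] = np.nan             # "missing" cells
filled = hybrid_field(ground, sat)
```

In this toy case the hidden values come back exactly because the relationship really is linear; the paper’s Fig 3 shows how much worse things get with real, noisy data.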
Their Fig 3 shows the differences between estimates and observations.
The differences, typically a degree or more, suggest to my mind that there is too much uncertainty to draw any detailed conclusions.
No infilling technique was consistently the best performer. The hybrid method was best where there were no nearby data; kriging was generally better for the rest of the world. Looking more closely, however, the hybrid method was generally best over land, while neither showed any predictive skill over Antarctica.
It is slightly worrying that the researchers then picked the best reconstruction method for various parts of the Earth to create a mosaic of methods to represent global reality. They call this “blended” data. For a paper that set out to infill missing data in the polar regions and, to a lesser extent, Africa, this selection of methods to represent other regions of the world adds a new layer of complexity, if not a selection bias.
Ultimately does this reconstruction make any difference?
Looking at their Fig 6, the result is that the 1997–2005 period remains unchanged and flat.
The 1997–2007 period could gain an extra 0.02 deg C of warming, and 1997–2011 (the last year they consider) perhaps 0.03 deg C. Looking at HadCrut4 over this period puts those changes into perspective: they are about 5% of the interannual variations.
The claim has been made that when the adjustments are taken into account the post-1997 trend for HadCrut4 is two-and-a-half times higher than before, increasing to 0.12 deg C over the period as opposed to 0.05 deg C – still not statistically significant, with one-sigma errors of about 0.08 deg C. That is still considerably less than a degree per century, though closer to the IPCC’s canonical 0.2 deg C per decade.
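The arithmetic behind figures like “0.12 with a one-sigma error of about 0.08” is ordinary least squares: a slope and its standard error from the residual scatter. A generic sketch follows; the perfectly linear synthetic series below is made up purely to exercise the function and is not HadCrut4 data:

```python
import numpy as np

def trend_and_sigma(t, y):
    """OLS slope and its one-sigma standard error."""
    X = np.column_stack([np.ones_like(t), t])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    s2 = resid @ resid / (len(t) - 2)              # residual variance
    se = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))  # slope std. error
    return coef[1], se

# Synthetic, noise-free series: 0.005 deg C per year from a 0.3 baseline.
years = np.arange(1997.0, 2012.0)
temps = 0.005 * (years - 1997.0) + 0.3
slope, sigma = trend_and_sigma(years, temps)
```

A trend is only statistically secure when the slope comfortably exceeds roughly twice its sigma; a 0.12 estimate with a 0.08 sigma, as quoted above, does not clear that bar.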
Given that Antarctica shows no overall warming and that the missing Arctic region is a very small section of the globe, about 6 per cent, is it not curious, perhaps even a fluke, that such a small region of the Earth should come to the rescue of climate science from the undermining ‘pause’?
This new work doesn’t affect the fact that the temperature databases, with their own allowances for data-free regions, show no warming for 16 years, or at the very least no warming for about 95% of the globe for 16 years. That in itself is inconsistent with the climate models.
This research is interesting but doesn’t live up to the headline that it explains the ‘pause.’ It also does not warrant such an extensive press release, complete with explanatory videos. It is clear that it has been used as a political tool to deride ‘sceptics’ who rightly see the ‘pause’ as significant. By aiming at ‘sceptics’ such an approach also derides many working scientists who are trying to explain the ‘pause.’ This is regrettable.
Feedback: david.whitehouse@thegwpf.org
If the missing extra heat is in the deep ocean, or perhaps in the Arctic, then the warming is hardly global. The paper by Cowtan and Way makes the case for non-global warming. CAGWers should be glad that the parts of the Earth actually inhabited by people are doing just fine, if their data is right.
SR
If their data is off, the parts of the Earth inhabited by people are still doing just fine.
SR
It’s unbearable!
Stop it! You’re killin’ me. I’m dyin’ here! I’m dyin’ here!
Put the humor on pause. Please!
Laurie Ayres: that is cojones (balls) rather than cajones (drawers, as in storage you keep things in).
Instead of just estimating the temperature in the Arctic by extrapolation from sites hundreds of miles away, why can’t someone be sent occasionally to check the temperature on the ground with actual thermometer readings, to see if their guesswork is correct? There seem to be plenty of expeditions going to the Arctic to ‘prove’ global warming, so budgetary constraints shouldn’t be too great. (Although would such results be biased?) Maybe the scientists involved don’t like to get out of their warm labs and experience real weather.
Infilled temperature estimates are not data. Temperature records containing infilled numbers are not data sets. Bad data cannot be “adjusted” to become good data. Missing data are forever missing.
If the temperature at a site is important, install an instrument to measure it.
“This is climate science”.
Like any scientific test of a theory or model, this is as much a test of the data as the model.
It is now very clear that none of the models predicted the data showing a pause. The argument has been put forward that if we look at other data, then this somehow shows the models are not wrong.
This is false.
Even if that were true (it isn’t), it would mean that the data on which the models were created were wrong. They cannot validate their models by changing the data, as it was this data that was used to construct the models.
If the models are based on invalid data, then by inference the models are wrong whatever they predict.
I can phrase that better:
Like any scientific test of a theory or model, this is as much a test of the data as the model.
If they do not match, there are three scenarios: the model is wrong; the data on which the model was built are wrong; or both data and model are wrong.
If these surface data are now found to be the wrong data, then the model built on them is wrong, and so it is an invalid model. So, if they want to use a new dataset, they also have to use a new model built on that dataset, and not on the one that was supposedly faulty.
But most importantly, if this is science it must be disprovable. As such they have to specify the dataset they intend to use to validate their model. And if they don’t have a suitable dataset then they cannot have a suitable model based on that dataset.
“Submitted for your consideration:”
The assertion from AR5’s Summary for Policymakers, that it is “extremely likely” (95%+ confidence level) that human activity has caused “more than half” of the global warming since 1950
New study shows half of the global warming in the USA is artificial
Posted on July 29, 2012 by Anthony Watts
PRESS RELEASE – U.S. Temperature trends show a spurious doubling due to NOAA station siting problems and post measurement adjustments.
I think Anthony might have identified the “human activity” which caused the “more than half” of the global warming since 1950.
There is one very plausible and likely explanation for the pause. There is no correlation between CO2 and temperature; climate (subject to natural variability) is random, and even if it weren’t, there is nothing we can do about it. The sooner the global powers that be realise this the better. Global economies would be way better off. Living standards would rise. Global poverty would be reduced. The landscape wouldn’t be ruined, and scientists, commentators, politicians and businessmen would find better things to do.
However, most importantly I would have an extra hour or two a day to do more productive activities than read web sites like this ( although I do enjoy it).
The sooner the climate denialists (these are people who deny that the climate is due to natural variations, i.e. warmists) admit defeat, the sooner we can all get on with life.
I have never understood why people who think that climate is due to natural variation are called denialists (the obvious explanation), while people who concoct some convoluted theory are rationalists. Surely it’s the other way around.
David – The lack of correlation between CO2 and temperature is demonstrated at http://averageglobaltemperature.blogspot.com
Re: “Global datasets might not be properly accounting for the recent warming Arctic due to poor sampling. Arctic temperatures are increasing faster than the global average. This would make such datasets cooler than they should be by a factor that depends upon the temperature rise and the area concerned.”
If temperature measurements for the Arctic are as sparse as they are generally supposed to be, then nobody knows by how much temperatures in the Arctic are increasing more rapidly than the global average.
Perhaps I missed it. Does anyone else find it odd that Cowtan and Way can take HadCrut data, that shows no warming for the past 17 years, and combine it with UAH data, which shows no warming for the past 17 years, to show that there has been warming over the past 17 years? We’re all accustomed to bogus statistical manipulation by the AGW zealots, but this seems particularly blatant.
WonkotheSane says:
November 20, 2013 at 1:53 pm
Perhaps I missed it. Does anyone else find it odd that Cowtan and Way can take HadCrut data, that shows no warming for the past 17 years, and combine it with UAH data, which shows no warming for the past 17 years, to show that there has been warming over the past 17 years?
Actually it is only RSS that has a 0 slope over 17 years. For Hadcrut4 it is 12 years and 10 months, and for UAH it is only 5 years and 5 months. So it is not surprising that they get a positive slope over 17 years. What I would like to know is since when their hybrid has had a slope of 0.
One post mentioned the huge melting of the Arctic ice in the summer of 2012. Not so: the ice sheet was quite healthy until a large storm came over from Russia between, I think, the 4th and 9th of August and broke up the ice.
It soon refroze.
Global datasets might not be properly accounting for the recent warming Arctic due to poor sampling.
Surely in this case climate scientists are either frauds or incompetent, as they have claimed for decades that the science was beyond question. There is no other possible explanation, and no amount of weasel words from their apologists will change this. When science is beyond question, the predictions it makes are correct and on time.