From the GWPF
By Dr David Whitehouse
This new paper does not affect the fact that the temperature databases, with their own allowances for data-free regions, show no warming for 16 years, or at the very least no warming for about 95% of the globe for 16 years.
The ‘pause’ seen in the land and ocean global surface temperature during the last 16 years is one of the major talking points of climate science. Some politicians and journalists have said that ‘sceptics’ have used the ‘pause’ to undermine climate science. In fact a great many scientists and others are working hard to understand the ‘pause.’ The ‘Pause’ IS climate science.
Many factors have been put forward as explanations, such as heat being taken up by the oceans, soot in the atmosphere, natural decadal variability, El Niño/La Niña variations, solar effects, and fluctuations in stratospheric water vapour, to give just a few.
The ‘pause’ is seen across databases. It is a remarkable property of the HadCRUT4, NASA GISS and NOAA surface temperature datasets and of the UAH and RSS satellite lower-atmosphere observations.
As we have said before in these pages, it is very curious that the global surface temperature for the last 16 years is flat given the increasing greenhouse forcing from the ever-rising concentrations of greenhouse gases. We have also pointed out that the 16-year duration of the pause is not cherry-picked but comes purely from the properties of the data, and, contrary to the belief of many, the super El Niño year of 1998 makes no statistical difference to the length of the pause because of the two cool La Niña years that followed.
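The point that the pause length falls out of the data can be made concrete: compute the trend and its one-sigma uncertainty over a candidate period and ask whether the trend is distinguishable from zero. The sketch below is a toy on a synthetic series (a warming ramp followed by a flat segment, with made-up numbers), not any real dataset, and a simple ordinary-least-squares error estimate:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic annual anomalies: warming until 1997, then flat apart from noise
years = np.arange(1970, 2013)
signal = np.where(years < 1997, 0.015 * (years - 1970), 0.015 * (1997 - 1970))
anom = signal + rng.normal(0.0, 0.05, size=years.size)

def trend_sigma(x, y):
    """OLS slope and its one-sigma uncertainty (autocorrelation ignored)."""
    (slope, _), cov = np.polyfit(x, y, 1, cov=True)
    return slope, np.sqrt(cov[0, 0])

# Post-1997 trend is statistically indistinguishable from zero...
mask = years >= 1997
s_pause, sig_pause = trend_sigma(years[mask], anom[mask])

# ...while the full-period trend is clearly positive
s_full, sig_full = trend_sigma(years, anom)
```

On data like these, the post-1997 slope sits well within its one-sigma error while the full-period slope does not, which is the sense in which a pause length is a property of the data rather than a chosen start date.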
Even though there are currently more explanations for the ‘pause’ than can all possibly be correct (or they combine curiously to produce a straight line for 16 years), further explanations are to be welcomed and scrutinised. Hence the interesting new paper by Cowtan and Way in the Quarterly Journal of the Royal Meteorological Society.
Its central premise is not new: global datasets might not be properly accounting for recent Arctic warming because of poor sampling. Arctic temperatures are increasing faster than the global average, so such datasets would be cooler than they should be by an amount that depends upon the temperature rise and the area concerned. Cowtan and Way consider HadCRUT4, which has gaps in its polar coverage. It should be noted that NASA GISS does carry out some extrapolation to infill missing Arctic data, and that HadCRUT4 takes the missing data into account in its uncertainty estimates.
In addition to the uncertainty estimates due to polar gaps, the shift in 2012 from the HadCRUT3 temperature database to HadCRUT4 (which included more than 400 extra weather stations in Arctic regions to improve polar coverage) added an extra 0.04 deg C of warming to the global figure between 1998 and 2010. That extra warming has since been reduced because subsequent years have been cooler than 2010. HadCRUT4 turned out to be a little warmer than HadCRUT3 post-2005, though statistically it was actually flatter post-1997 than its predecessor.
To illustrate the coverage problem, Cowtan and Way take the global surface temperature datasets, reduce them in area to the coverage of HadCRUT4, and compare before and after (their Fig 2).
This gives an estimate of the potential bias, which is of the order of 0.02 deg C. Three datasets are shown, though most researchers in this field use only GISS and UAH. I do not agree with the researchers’ comment: “All the global series show a rapidly increasing cool bias over the past few decades and a sharp decline starting around 1998.” To me the deviation seems minor and to begin at about 2005.
Cowtan and Way wanted a way to infill the absent Arctic data. It is not an easy thing to do, as there are spatial and temporal variations in all the datasets. The researchers used two methods: a geostatistical interpolation technique for estimating missing data called kriging, and a method based on satellite data, in which they determined a relationship between satellite and ground measurements and used it to estimate the ground temperature in regions where there is satellite but no ground data. Both techniques have to be applied very carefully.
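Kriging, the first of the two infilling methods, can be illustrated with a minimal ordinary-kriging sketch. This is only a toy: the Gaussian variogram model, its parameters and the station values below are illustrative assumptions, not the paper’s choices.

```python
import numpy as np

def gamma(h, sill=1.0, rang=3.0):
    """Gaussian semivariogram model (illustrative parameters)."""
    return sill * (1.0 - np.exp(-(h / rang) ** 2))

def ordinary_krige(xy, z, target):
    """Estimate the value at `target` from observations z at points xy."""
    n = len(z)
    # Pairwise distances between observation points
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    # Kriging system: [Gamma 1; 1^T 0] [w; mu] = [gamma0; 1]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - target, axis=1))
    w = np.linalg.solve(A, b)[:n]  # weights constrained to sum to 1
    return float(w @ z)

# Three stations with known anomalies; infill a gap at (0.5, 0.5)
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0])
est = ordinary_krige(pts, vals, np.array([0.5, 0.5]))
```

Because the weights are constrained to sum to one, the estimate at an existing station reproduces its value exactly, which is one basic sanity check on any infilling scheme.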
The researchers created what they call a hybrid global temperature dataset from the satellite and ground data. Where ground data are available they used them; where they are not, they adjusted the satellite data over that region to produce an estimate of the ground data. They created global temperature databases based on their two approaches. They also removed some data and checked how well their methods reproduced the deleted values.
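The hybrid approach can be sketched, very loosely, as a calibration problem: fit a relationship between satellite and ground anomalies where both exist, then apply it where only satellite data exist. The sketch below uses a single global linear fit on made-up numbers; the paper’s actual method works locally and far more carefully.

```python
import numpy as np

rng = np.random.default_rng(42)

# Satellite lower-troposphere anomalies everywhere (made-up data)
sat = rng.normal(0.0, 0.5, size=200)
# Ground anomalies, related to the satellite data but noisier
ground = 0.9 * sat + 0.1 + rng.normal(0.0, 0.1, size=200)
has_ground = np.ones(200, dtype=bool)
has_ground[150:] = False  # pretend the last 50 cells lack stations

# Calibrate ground against satellite where both exist (least squares)
a, b = np.polyfit(sat[has_ground], ground[has_ground], 1)

# Hybrid series: real ground data where available, calibrated satellite elsewhere
hybrid = np.where(has_ground, ground, a * sat + b)
```

Withholding some ground data and comparing the calibrated satellite estimate against it, as the last line allows, is the same kind of check the researchers applied to their reconstructions.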
Their Fig 3 shows the differences between estimates and observations.
The differences, typically a degree or more, suggest to my mind that there is too much uncertainty to draw any detailed conclusions.
No infilling technique was consistently the best performer. The hybrid method was the best where there was no ground data; in general kriging was better for the rest of the world. Looking more closely, however, the hybrid method was generally best over land, whilst neither of them showed any predictive skill over Antarctica.
It is slightly worrying that the researchers then picked the best reconstruction method for various parts of the Earth to create a mosaic of methods to represent global reality. They call this “blended” data. For a paper that set out to infill missing data in the polar regions, and to a lesser extent Africa, this selection of methods to represent other regions of the world as well adds a new layer of complexity, if not a selection bias.
Ultimately does this reconstruction make any difference?
Looking at their Fig 6, the result is that the temperature over the period 1997-2005 remains unchanged and flat.
That of 1997-2007 could have an extra 0.02 deg C of warming, and 1997-2011 (the last year they consider) perhaps an increase of 0.03 deg C. Looking at HadCRUT4 over this period puts those changes into perspective: they are about 5% of the interannual variations.
The claim has been made that when the adjustments are taken into account the post-1997 trend for HadCRUT4 is two-and-a-half times higher than it was, increasing to about 0.12 deg C per decade as opposed to 0.05 deg C per decade – still not statistically secure, with one-sigma errors of about 0.08 deg C per decade. That is still only a little over a degree per century, though closer to the IPCC’s canonical 0.2 deg C per decade.
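For perspective on those figures, a decadal trend and its one-sigma uncertainty can be estimated by ordinary least squares on monthly anomalies. The sketch below uses synthetic data with a built-in trend of 0.12 deg C per decade (an illustrative figure, not the actual series); note that plain OLS ignores the autocorrelation of real monthly data, so genuine one-sigma errors are larger than it reports.

```python
import numpy as np

rng = np.random.default_rng(0)

years = np.arange(1997, 2012, 1 / 12)  # monthly time axis, 1997-2011
true_trend = 0.012                     # deg C per year = 0.12 per decade
anom = true_trend * (years - years[0]) + rng.normal(0.0, 0.1, size=years.size)

# OLS slope and its standard error from the fit covariance matrix
(slope, intercept), cov = np.polyfit(years, anom, 1, cov=True)
sigma = np.sqrt(cov[0, 0])

trend_per_decade = 10 * slope
sigma_per_decade = 10 * sigma
# A trend is statistically secure at one sigma only if |trend| exceeds sigma
```

Comparing `trend_per_decade` with `sigma_per_decade` is the comparison behind the statement that a 0.12 deg C per decade trend with 0.08 deg C per decade errors is not secure.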
Given that Antarctica shows no overall warming and that the missing Arctic region is a very small fraction of the globe, about 6 per cent, it is curious, perhaps even a fluke, that such a small region of the Earth has come to the rescue of climate science from the undermining ‘pause.’
This new work doesn’t affect the fact that the temperature databases, with their own allowances for data-free regions, show no warming for 16 years, or at the very least no warming for about 95% of the globe for 16 years. That in itself is inconsistent with the climate models.
This research is interesting but doesn’t live up to the headline that it explains the ‘pause.’ It also does not warrant such an extensive press release, complete with explanatory videos. It is clear that it has been used as a political tool to deride ‘sceptics’ who rightly see the ‘pause’ as significant. By aiming at ‘sceptics’ such an approach also derides many working scientists who are trying to explain the ‘pause.’ This is regrettable.