Over at JunkScience.com Steve Milloy writes:
Skeptic Setback? ‘New’ CRU data says world has warmed since 1998 – but not in a statistically significant way.
Gerard Wynn writes at Reuters:
Britain’s Climatic Research Unit (CRU), which for years maintained that 1998 was the hottest year, has published new data showing warmer years since, further undermining a sceptic view of stalled global warming.
The findings could helpfully move the focus from whether the world is warming due to human activities – it almost certainly is – to more pressing research areas, especially about the scale and urgency of human impacts.
After adding new data, the CRU team working alongside Britain’s Met Office Hadley Centre said on Monday that the hottest two years in a 150-year data record were 2005 and 2010 – previously they had said the record was 1998.
None of these findings are statistically significant given the temperature differences between the three years were and remain far smaller than the uncertainties in temperature readings…
And Louise Gray writes in the Telegraph: Met Office: World warmed even more in last ten years than previously thought when Arctic data added
Some of the change had to do with adding Arctic stations, but much of it has to do with adjustment. Observe the decline of temperatures of the past in the new CRU dataset:
===============================================================
UPDATE: 3/21/2012 10AM PST – Joe D’Aleo provides updated graphs to replace the “quick first look” one used in the original post, and expands them to show comparisons with previous data sets on short and long time scales. In the first graph, by cooling the early part of the 20th century, the temperature trend is artificially increased. In the second graph, you can see the offset of CRUTem4 being lower prior to 2005, artificially increasing the trend. I also updated my accidental conflation of HadCRUT and CRUTem abbreviations.
===============================================================
Data plotted by Joe D’Aleo. The new CRUTem4 is in blue, the old CRUTem3 in red; note how the past is cooler (in blue, the new dataset, compared to red, the old dataset), increasing the trend. Of course, this is just “business as usual” for the Phil Jones team.
Here’s the older CRUTem data set from 2001, compared to 2008 and 2010. The past got cooler then too.
On the other side of the pond, here’s the NASA GISS 1980 data set compared with the 2010 version. More cooling of the past.
And of course there’s this famous animation where the middle 20th century got cooler as if by magic. Watch how 1934 and 1998 change places as the warmest year of the last century. This is after GISS applied adjustments to a new data set (2004) compared with the one in 1999.
Hansen, before he became an advocate for protest movements and got himself arrested, said:
The U.S. has warmed during the past century, but the warming hardly exceeds year-to-year variability. Indeed, in the U.S. the warmest decade was the 1930s and the warmest year was 1934.
Source: Whither U.S. Climate?, By James Hansen, Reto Ruedy, Jay Glascoe and Makiko Sato — August 1999 http://www.giss.nasa.gov/research/briefs/hansen_07/
In the private sector, doing what we see above would cost you your job, or at worst (if it were stock data monitored by the SEC) land you in jail for securities fraud. But hey, this is climate science. No worries.
And then there’s the cumulative adjustments to the US Historical Climatological Network (USHCN)
Source: http://cdiac.ornl.gov/epubs/ndp/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif
All up, these adjustments increase the trend over the last century. We have yet to witness a new dataset release in which a cooling adjustment has been applied. The likelihood that every adjustment to the data needs to be positive is nil. This is partly why they argue so fervently against a UHI effect and other land-use effects, which would require a cooling adjustment.
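To put a rough number on that claim, here is a deliberately naive sketch: if each dataset revision were equally likely to raise or lower the century-scale trend, the chance that every one of a run of revisions raises it shrinks geometrically. The revision counts below are arbitrary examples for illustration, not a tally of actual releases.

```python
# Naive null model: each revision independently raises or lowers the trend
# with equal probability. The revision counts are arbitrary examples.
def prob_all_trend_increasing(n_revisions: int, p_increase: float = 0.5) -> float:
    """Probability that all n independent revisions increase the trend."""
    return p_increase ** n_revisions

for n in (3, 5, 8, 12):
    print(f"{n:2d} revisions, all trend-increasing by chance: {prob_all_trend_increasing(n):.4f}")
```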
As for the Arctic stations, we’ve demonstrated recently how those individual stations have been adjusted as well: Another GISS miss: warming in the Arctic – the adjustments are key
The two graphs from GISS, overlaid with a hue shift to delineate the “after adjustment” graph. By cooling the past, the century scale trend of warming is increased – making it “worse than we thought” – GISS graphs annotated and combined by Anthony Watts
And here is a summary of all Arctic stations where they cooled the past. The values are for 1940 and show how climate history was rewritten:
CRU uses the same base data as GISS, all rooted in the GHCN, from NCDC managed by Dr. Thomas Peterson, who I have come to call “patient zero” when it comes to adjustments. His revisions of USHCN and GHCN make it into every global data set.
Watching this happen again and again, it seems like we have a case of:
Those who cool the past are condemned to repeat it.
And they wonder why we don’t trust them or their data.
Andrew.
You seem to share a common view that the satellite data is some sort of accurate reference that the surface data can be compared against. Firstly, the satellite data isn’t reporting surface temperatures at all. It is reporting temperatures several thousand metres up. Next, the satellite data has a lot of its own issues and isn’t necessarily any more accurate. The two principal groups who have been working on it – UAH & RSS – have produced results that have converged. However, other groups who have looked at the data – Vinnikov & Grody, Fu & Johansen, Zou et al – have all come up with higher values than those two. What can be said is that UAH & RSS use fairly similar methods, so it’s not surprising that they get similar results. That doesn’t mean they are getting the correct result.
Zou et al, for example (http://www.star.nesdis.noaa.gov/smcd/emb/mscat/mscatmain.htm), are calculating a substantially higher trend than UAH or RSS for the mid-troposphere channel – TMT.
The lower-troposphere channel reported by UAH & RSS – TLT – which is the one most often cited when comparing to the surface records, isn’t actually a physical channel. Rather, they use additional processing to extract a lower-troposphere signal from the mid-troposphere channel.
Zou et al aren’t producing a TLT product yet, although they have plans to. However, if their result for TMT is giving 0.126 C/decade whereas UAH/RSS are getting 0.08+, then there is a fairly good chance that Zou’s TLT product will show a significantly higher trend than UAH/RSS.
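For readers wondering how a “non-physical” channel like TLT can exist at all, here is a toy sketch of the general idea only – a weighted combination of view-angle brightness temperatures whose coefficients sum to one – with every number invented for illustration. It is not the UAH or RSS processing.

```python
import numpy as np

# Toy illustration only (not the UAH or RSS retrieval): a "lower troposphere"
# estimate formed as a fixed linear combination of mid-troposphere (TMT-like)
# brightness temperatures taken at different view angles. The angles, values
# and weights below are invented; real weights come from the instrument's
# weighting functions.
tmt_views = np.array([252.1, 251.6, 250.8])   # K: near-nadir to off-nadir views (made up)
weights = np.array([2.0, -0.5, -0.5])          # sum to 1; emphasise the near-nadir view

tlt_like = float(weights @ tmt_views)
print(f"synthetic lower-troposphere brightness temperature: {tlt_like:.2f} K")
print(f"weights sum to: {weights.sum():.1f}")
```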
Have you actually followed the link I put up to USHCN? The largest source of change was time-of-observation effects. Why would that occur? Because the US Weather Service changed the time of day at which the readings were taken.
Donald L Klipstein says:
March 19, 2012 at 10:35 pm
I like to look at what happened from the ~1944 peak to the ~2005 peak.
What if you took the temperature change from 1883 to 1944 versus the change from 1944 to 2005 and assumed the difference was due to CO2? Either way, it is not catastrophic.
http://www.woodfortrees.org/plot/hadcrut3gl/from:1880/plot/hadcrut3gl/from:1883/to:1944/trend/plot/hadcrut3gl/from:1944/to:2005/trend
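For anyone who wants to reproduce that two-period comparison outside woodfortrees, a minimal sketch of the calculation is below. The anomaly series here is synthetic placeholder data, so only the method – two ordinary least-squares trends over 1883–1944 and 1944–2005 – is being illustrated, not the actual HadCRUT numbers.

```python
import numpy as np

def trend_per_decade(years, values):
    """Ordinary least-squares slope of values vs. years, in degrees per decade."""
    slope, _intercept = np.polyfit(years, values, 1)
    return slope * 10.0

# Sketch only: in practice 'years' and 'anomaly' would be read from the annual
# HadCRUT3 global file behind the woodfortrees plot above; a synthetic
# placeholder series is generated here so the example runs on its own.
rng = np.random.default_rng(0)
years = np.arange(1880, 2011)
anomaly = 0.005 * (years - 1880) + rng.normal(0.0, 0.1, years.size)

for start, end in ((1883, 1944), (1944, 2005)):
    mask = (years >= start) & (years <= end)
    print(f"{start}-{end}: {trend_per_decade(years[mask], anomaly[mask]):+.3f} C/decade")
```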
But Werner, we have been able to safely say that CAGW has not been happening since the radiosonde data showing the absence of the tropospheric “hotspot” were finally released several years ago. The existence of a hotspot was an absolute predicate for all the GCMs, and its absence during the last 25-30 yr warming period (to the late 1990s) therefore completely invalidated all the GC models and thus their predictions, because its absence invalidated the amplification factors they used to sell the CAGW hysteria. And those same GC models continue to remain invalidated. Nothing has changed. Not even the flagrant manipulations and creation of new data (out of thin air) for the land-based temp datasets. No hotspot, no CAGW. Case closed.
cui bono says:
March 19, 2012 at 11:46 am
Is this right?
(1) All of the major non-satellite datasets, including GISS, HadCrut and Best, rely on adjustments made to individual stations made by NCDC.
############
Err, no. NCDC have both adjusted and unadjusted data. People need to actually catch up on things. If you like, you can deselect GHCN monthly data from the Berkeley Earth data.
Answer comes out… the same.
Glenn Tamblyn says:
March 19, 2012 at 11:26 pm
Andrew.
You seem to share a common view that the satellite data is some sort of accurate reference that the surface data can be compared against. Firstly, the satellite data isn’t reporting surface temperatures at all. It is reporting temperatures several thousand metres up. Next, the satellite data has a lot of its own issues and isn’t necessarily any more accurate. The two principal groups who have been working on it – UAH & RSS – have produced results that have converged. However, other groups who have looked at the data – Vinnikov & Grody, Fu & Johansen, Zou et al – have all come up with higher values than those two. What can be said is that UAH & RSS use fairly similar methods, so it’s not surprising that they get similar results. That doesn’t mean they are getting the correct result.
##########################
Yes, the modelling involved to get the “temperature” from the brightness at the sensor is not without assumptions, and assumptions bring uncertainty with them.
When is WUWT going to undertake a project that documents how these historical temperatures are changed, and for what ostensible reason?
We need a white paper.
Glenn Tamblyn says:
Andrew
You seem to share a common view that the satellite data is some sort of accurate reference that the surface data can be compared against.
Yes. But “assessed against” would be a better way to say it. The satellite data are in actual fact far more reliable than the surface data records, with fewer sources of error (particularly sources of human error), and thus, yes, it is valid, in the absence of a better alternative, to refer to them as a standard against which to appraise the surface dataset…
Firstly, the satellite data isn’t reporting Surface temperatures at all. It is reporting temperatures several thousand metres up.
Yes, but not relevant. We are concerned with temperature trends and changes in trend from one time period to another. The key questions, after all, concern temperature trends, and not whether the surface is warming at a different rate to the atmosphere several kilometres up – because of course we know this will be the case. But general patterns of warming observed in the atmosphere and at the surface would be expected to be of a similar form (e.g. have the same sign in the years since 1998)…
Next, the Satellite data has a lot of its own issues and isn’t necessarily any more accurate.
Do you really stand by that statement? The rest of your answer, though of interest, is really just hand-waving. Again, we are concerned with temperature trends and changes in those trends through time…
Have you actually followed the link I put up to USHCN? The largest source of change was time-of-observation effects. Why would that occur? Because the US Weather Service changed the time of day at which the readings were taken.
I wasn’t questioning whether the bias needed to be corrected; I was simply making the point that it is one of many biases in the surface data – most of which are of human origin – that have to be “fixed”, and that the satellite data have far fewer biases that need to be “fixed” in comparison.
American Patriot says:
March 19, 2012 at 12:54 pm
Hansen is a Marxist. He lamented to Clinton years ago about the injustices of global wealth distribution. He’s been outed many times but these Marxists are like zombies. You have to whack them more than once.
=======================
Gambino said, you should only ever need to “whack” them once. First time was a botched job … needs to be done properly.
Andrew
“The satellite data are in actual fact far more reliable than the surface data records with fewer sources of error (particularly sources of human error)” You need to read up a LOT about satellite data and its issues – orbital decay, diurnal drift, differences between satellites and how you ‘stitch them together’, changing instrument calibrations, time-dependent variations, etc.
Then the source of human error in the surface record. There is certainly human error as part of recording the initial data. But because this is a large number of separate humans all around the world, human error can be expected to balance out at the recording stage. And there is much less human error in the homogenisation/adjustment process that is done now, because it is done by programs that apply algorithms to the data, looking for patterns that are then used to try to adjust the data to get closer to the accurate result. So there isn’t scope for human error on a station-by-station basis. There could be biases introduced in the algorithms, but that isn’t human error.
Unless of course you think there are people who spend their days poring over a single station before they decide to ‘adjust’ that station.
“But general patterns of warming observed in the atmosphere and at the surface would be expected to be of a similar form (eg. have the same sign in the years since 1998)…”
And they are! Not the same magnitude of trend, but definitely the same form. So what is your point here?
“Do you really stand by that statement? ”
Absolutely. Extracting temperature data from surface stations is a doddle compared with trying to put together a temperature record from the satellites. The UAH/RSS teams have converged on one answer based on their underlying methodology. Zou et al are producing a different result from the same raw data.
“surface data – most of which are of a human origin – that have to be “fixed” and that the satellite data have far fewer biases that need to get “fixed” in comparison.”
Actually Andrew, when you have a data source that has a large number of random inaccuracies, from a huge number of measurement instruments, the average of that data source will tend to home in on the true value because the random errors/inaccuracies tend to cancel each other out.
In contrast, a data source that uses very few instruments will be less prone to random inaccuracies. But the underlying biases of those few instruments then become the dominant issue, because those few instruments are used to measure everything. So instrument-induced bias is a much bigger issue when you have very few instruments – satellites.
Imagine that the world’s surface temperature record was obtained from two dozen thermometers that were moved rapidly from site to site to measure everything. Understanding the biases of those few thermometers would become a much bigger issue then, wouldn’t it?
It isn’t a question of what the trends are. It’s a question of how accurately we are measuring the trends.
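The contrast being drawn here – many independently noisy thermometers versus a couple of precise instruments sharing one calibration bias – can be shown with a few lines of toy simulation. All numbers are invented; the sketch illustrates the averaging argument, not either real measurement system.

```python
import numpy as np

# Toy comparison of the two error regimes described above; all numbers invented.
rng = np.random.default_rng(42)
true_temp = 15.0                                   # deg C, the quantity being measured

# Many independently noisy thermometers: large random error, no shared bias.
many_noisy = true_temp + rng.normal(0.0, 1.0, size=5000)
print(f"mean of 5000 noisy stations : {many_noisy.mean():7.3f} C  (random error shrinks ~1/sqrt(N))")

# A couple of precise instruments sharing an unknown calibration bias.
shared_bias = 0.3                                  # deg C, unknown to the analyst
few_biased = true_temp + shared_bias + rng.normal(0.0, 0.05, size=2)
print(f"mean of 2 biased instruments: {few_biased.mean():7.3f} C  (shared bias does not average out)")
```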
Steven Mosher says:
March 19, 2012 at 12:43 pm
It’s not surprising that when you add more northern-latitude data, the present warms.
This has been shown before. It’s pretty well known.
As you add SH data you will also cool the past. This is especially true in the 1930-40 period as well as before.
———————————
Isn’t this a bit too simplistic?
In my view, a very good measure of warming is a comparison between the temperatures of the last cyclical high in the 1940s and the recent cyclical high. After adjustments, the difference has now increased from 0.2 to 0.4 degrees over 70 years, or roughly 0.6 degrees per century. This may be partly due to greenhouse gases, but also longer-term solar effects or other things.
Now, the Arctic temperatures do not appear to have been higher in the last couple of years than in the 1940s (perhaps somebody could compile this data separately for verification). If so, there is no additional warming coming from there.
Then, the adjustments to the past go far beyond adding data. The main issue is the change in sea surface measurement methods over time. This has been described by McIntyre:
http://climateaudit.org/2011/07/12/hadsst3/
Among the issues with these adjustments, one stands out in my view, and it is responsible for 0.1 degrees since the 1940s, or 25% of the warming.
In this adjustment, 30% of the data is OVERWRITTEN, and bucket observations were reassigned as ERI observations. That is a huge manipulation and alteration of documented data, and the justification is extremely poor (see McIntyre). The authors write boldly that their manipulation is to “correct the uncertainty”. I would think such an alteration of documents is not part of the scientific method; uncertainty about the measurement method should have been addressed with increased error ranges, never with alteration of documented data.
Werner Brozek says:
March 19, 2012 at 10:48 pm
“Donald L Klipstein says:
March 19, 2012 at 9:29 pm
There are 2 versions of the annual figures of HadCRUT3.
OK. This version has 1998 at 0.529 and 2010 at 0.470.
http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3vgl.txt
This version has 1998 at 0.548 and 2010 at 0.478.
http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3gl.txt
This version has 1998 at 0.52 and 2010 at 0.50.
http://www.metoffice.gov.uk/news/releases/archive/2012/hadcrut-updates
That is three different versions. Which one of these three, if any, is being changed? If none of these, what are the numbers for the real one being changed?”
Have to look at the satellite data to see reality, rather than using the massaged and homogenised data produced by the CRU, which is subject to confirmation bias from their mistaken belief that CO2 plays a major role in climate…
http://www.drroyspencer.com/wp-content/uploads/UAH_LT_1979_thru_February_2012.png
(The 3rd order polynomial fit to the data (courtesy of Excel) is for entertainment purposes only, and should not be construed as having any predictive value whatsoever.)
As an aside, I find it ironic that Roy’s simple 3rd-order poly fit is doing a better job than the IPCC GCM ensemble!
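For what it’s worth, the “entertainment only” fit is trivial to reproduce. The sketch below does it on synthetic stand-in data rather than the actual UAH series, so only the mechanics of a 3rd-order polynomial fit are shown – and, as per Roy’s caveat, it has no predictive value whatsoever.

```python
import numpy as np

# Sketch of the kind of curve-fit described above, on synthetic stand-in data
# (not the UAH series): a 3rd-order polynomial fitted to monthly anomalies.
rng = np.random.default_rng(1)
t = np.arange(1979, 2012, 1 / 12)                               # monthly time axis
anomaly = 0.014 * (t - 1979) - 0.2 + rng.normal(0.0, 0.15, t.size)

coeffs = np.polyfit(t, anomaly, deg=3)                           # highest power first
fitted = np.polyval(coeffs, t)
print("3rd-order coefficients:", np.round(coeffs, 6))
print("fitted value at end of series:", round(float(fitted[-1]), 3), "C")
```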
Seems to me that this is more moving-the-goalposts-in-order-to-win-the-game. I don’t trust these so-called “adjustments” to temperature instrument readings enough to say what the trend is. It all is suspicious, given the predilections of warmists. Does that make me anti-science? No. It just makes me even more skeptical of the idea that scientists should be giving advice on political matters. The bias we should be worried about is not “Time of Observation Bias.” It is another, more nefarious bias.
RE: Ted G says:
March 19, 2012 at 10:23 am
“….. Their manipulations are so obvious that even amateurs like myself can see them.”
To read a thermometer is not rocket science. It was when Hansen said so many people-of-the-past had read thermometers incorrectly that I first caught the whiff of fraud’s reek.
It was due to McIntyre and Climate Audit that my unease snapped into focus back in 2007:
http://climateaudit.org/2007/08/08/a-new-leaderboard-at-the-us-open/
It was amazing to me the back-lash I ran into, back in 2007, when I simply stated the obvious: “To read a thermometer is not rocket science.”
Glenn
That’s an awful lot of hand-waving going on there. But in fact, the kind of human error I had in mind was more of the – let’s site the thermometer next to an a/c vent, surrounded by 10,000 square feet of tarmac and shielded from the wind by a nice brick wall… who’s gonna know?… kind of human error.
Or the: let’s selectively cull the population of thermometers, with an emphasis on removing those sited at higher and lower latitudes and at higher altitudes (i.e. cooler sites) through time, and pretend the datasets are historically comparable…
Or the: let’s simply create data out of thin air and treat them as if they were actual measurements from locations which never had thermometers, add that in, and pretend the product is a bona fide dataset that says something reliable about changes that occurred in the real physical world (i.e. external to our warmist fantasies)…
Or human errors of the form eloquently documented here:
http://kenskingdom.wordpress.com/2012/03/13/near-enough-for-a-sheep-station/
But it’s true: I’m not an expert on satellites or the algorithms used to convert input light signals to output temperature readings. But from my understanding, the early issues concerning calibration and the correct algorithms to use were dealt with to the satisfaction of almost everyone – though perhaps not yours? Yes, decaying orbital trajectories, diurnal drift and the like… but these are trivial matters, corrected for in the modern age of mathematics, understanding of relativity and computational power. Or perhaps I’m being too flippant. What specific concerns do you have regarding how the satellite data are treated?
Again, the general point I make is that satellite-generated temperature data are considered to be far more reliable than surface-generated (thermometer) data, with far fewer sources of error, and errors that are more easily quantifiable (and thus easier to correct for). The surface record carries biases in the siting of the thermometers (urban heat islands, altitudinal, latitudinal); variations in surface topography and terrain; human errors in reading and handling instruments; the accuracy of the instruments; rounding/recording errors, etc., etc.
And that’s before GISS and the CRU get their hands on the data and beat it to death…
But I don’t believe you believe that the surface temp data are a more reliable record of temperature anomaly trends than the satellite temp records – do you?
“Those who cool the past are condemned to repeat it.”
Brilliant.
The saddest thing about this farce is the irreparable damage that these people are doing to the name of ‘science’.
I could almost cry.
Which would be meaningful if you had that information.
Andrew, it’s simple. They (Glenn etc.) cannot accept any data that does not follow the AGW agenda; therefore you are wrong. Sarcasm, hmmm… The manipulation by HadCRUT and GISS is legion – see real-science.com, there are literally thousands of examples. LOL
@Piers Corbyn
March 19, 2012 at 7:00 pm
……….
Well said, absolutely agree.
@Steven Mosher
Hi there friend, I hope you survived the onslaught up-thread. Good pasting!
I need not add to it, however tempted.
Oh dear. All the temperature records seem to turn to blancmange when looked at closely.
I knew that satellite records had to be adjusted for orbital decay, diurnal drift, etc. (Dr. Spencer is working on this at the moment, is he not?), but now Steven Mosher points to other people who interpret the data in a different way and reach different conclusions about the trends.
Meanwhile, surface data is ‘adjusted’ all over the shop.
Perhaps no-one really has a clue. Compared to all these uncertainties, the HadCrut4 change of 0.04C from HadCrut3 is picayune.
One question. Given that approx. one-third of stations show a net cooling trend over the last several decades, and these are often interspersed with those showing a warming trend, how can we be sure that running an algorithm to reduce inconsistencies between stations is not obliterating genuine differences due to local climate factors? Why assume that the people who read the thermometers all had tessellated eyeballs?
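One way to see the force of that question: the toy sketch below applies a deliberately naive “nudge toward the neighbours” adjustment – not any agency’s actual homogenisation algorithm, and with invented trends – to a station that genuinely cools while its five neighbours warm. The local signal disappears into the regional one.

```python
import numpy as np

# Caricature of the concern in the question above, not any agency's actual
# homogenisation algorithm: a naive adjustment that nudges a station toward
# its neighbours' mean erases a genuine local cooling trend along with any
# real inhomogeneity. All trends below are invented.
years = np.arange(1980, 2011)
neighbours = np.vstack([0.02 * (years - 1980) for _ in range(5)])  # five warming neighbours
local = -0.01 * (years - 1980)                                      # genuinely cooling site

neighbour_mean = neighbours.mean(axis=0)
adjusted = local + 0.8 * (neighbour_mean - local)   # pull the site 80% of the way to its neighbours

def trend_per_decade(series):
    return np.polyfit(years, series, 1)[0] * 10.0

print(f"raw local trend     : {trend_per_decade(local):+.3f} C/decade")
print(f"adjusted local trend: {trend_per_decade(adjusted):+.3f} C/decade")
```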
If a corporate accountant were caught constantly ‘adjusting’ and making stuff up, he could go to jail. Perhaps the wrong people were arrested after Enron. These temperature adjustments are far, far costlier to the Earth than anything Enron could have done. That’s because they are helping to drive policy, green taxes and climate science funding – globally. Damn these climate bandits!
Some people say there is a strong correlation. 😉 Imagine what would happen if we got positive cooling over the next 15 years. What are they going to do? We are watching. The satellites are watching. ;-(
The world is getting warmer BUT it’s not statistically significant. The first part is meant for the media. The second part is for those who bother to read the details. In other words there is no evidence that the world has not got warmer.
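To put numbers on “not statistically significant”: a minimal sketch, using assumed anomaly values and an assumed ±0.1 C annual uncertainty of roughly the right order rather than the official HadCRUT figures, shows the year-to-year gap drowning in the combined uncertainty.

```python
import math

# Illustrative numbers only (of the same order as the HadCRUT3 values quoted
# up-thread), not official figures: two annual anomalies in deg C with assumed
# 95% uncertainty half-widths of about 0.1 C each.
anomaly_1998, anomaly_2010 = 0.52, 0.50
u95_1998, u95_2010 = 0.10, 0.10

diff = anomaly_1998 - anomaly_2010
# For independent uncertainties, combine them in quadrature.
u95_diff = math.sqrt(u95_1998**2 + u95_2010**2)

print(f"difference = {diff:+.2f} C, 95% uncertainty of the difference = +/-{u95_diff:.2f} C")
print("difference exceeds its uncertainty?", abs(diff) > u95_diff)
```

With a 0.02 C gap and roughly ±0.14 C combined uncertainty in this toy setup, re-ranking individual years is well inside the noise.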
Frank K. says:
March 19, 2012 at 1:31 pm
Steven Mosher says:
March 19, 2012 at 12:43 pm
Questions for Steve:
(1) Where is this new data coming from? Are people today suddenly discovering lost climate data under their beds or in their closets?
(2) Do you have links to this new data?
(3) Can you conclusively demonstrate that the past will always cool and the present will always warm?
Thanks.
—
Well, I came back to see if Steve had answered my basic questions. Apparently not. Can anyone else show that new data will always cool the past and warm the present as Steve asserts? Thanks.
—
Meanwhile – regarding the time-of-observation bias (TOB). As someone above observed, this is the biggest single adjustment in the climate data. Does anyone have a link to the specific algorithm (and computer code) which calculates the TOB? I have seen some generic descriptions in the past, but no specific algorithm or code that is being used on current data. Thanks in advance.
Here’s what NCDC says:
Time of Observation Bias Adjustments
Next, monthly temperature values were adjusted for the time-of-observation bias (Karl et al. 1986; Vose et al., 2003). The Time of Observation Bias (TOB) arises when the 24-hour daily summary period at a station begins and ends at an hour other than local midnight. When the summary period ends at an hour other than midnight, monthly mean temperatures exhibit a systematic bias relative to the local midnight standard (Baker, 1975). In the U.S. Cooperative Observer Network, the ending hour of the 24-hour climatological day typically varies from station to station and can change at a given station during its period of record. The TOB-adjustment software uses an empirical model to estimate and adjust the monthly temperature values so that they more closely resemble values based on the local midnight summary period. The metadata archive is used to determine the time of observation for any given period in a station’s observational history.
Anyone have this “TOB-adjustment software”?
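Whether or not that software is available, the effect the NCDC text describes is easy to simulate. The sketch below is emphatically not the NCDC TOB-adjustment software – just invented hourly data showing why the observation hour matters: a 5 pm observer’s (Tmax+Tmin)/2 record runs warm relative to a midnight observer, because hot afternoons can be counted in two successive climatological days. With these invented parameters the offset comes out at a few tenths of a degree.

```python
import numpy as np

# Toy model of the time-of-observation bias described in the NCDC text above.
# This is NOT the NCDC TOB-adjustment software. Hourly temperatures are
# simulated as a diurnal cycle plus day-to-day weather noise, then summarised
# as (Tmax + Tmin)/2 over 24-hour "climatological days" ending at the chosen
# observation hour. All parameters are invented.
rng = np.random.default_rng(0)
n_days = 3000
hours = np.arange(n_days * 24)
diurnal = 5.0 * np.sin(2.0 * np.pi * (hours % 24 - 9) / 24.0)   # warmest mid-afternoon
weather = np.repeat(rng.normal(0.0, 3.0, n_days), 24)            # day-to-day swings
temps = 15.0 + diurnal + weather

def mean_of_daily_means(obs_hour: int) -> float:
    """Mean of (Tmax + Tmin)/2 when each 24-h 'day' ends at obs_hour."""
    blocks = temps[obs_hour: obs_hour + (n_days - 1) * 24].reshape(-1, 24)
    return float(((blocks.max(axis=1) + blocks.min(axis=1)) / 2.0).mean())

midnight_obs = mean_of_daily_means(0)     # calendar-day summary (local midnight)
afternoon_obs = mean_of_daily_means(17)   # reading taken at 5 pm
print(f"afternoon minus midnight observer: {afternoon_obs - midnight_obs:+.2f} C")
```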
How much have all these adjustments increased the trend?
We don’t actually know.
The only comparable value we have is that the NCDC has increased the USHCNv2 temperature trend by +0.425C (from 1920 to 2010). It has probably been increased since that time.
Doubleplusgood. But don’t forget to adjust the ice extent records for the past, upwards, to make them coincide with the “new” cooler historical temperatures.
Oh, and we have always been at war with Eastasia.