CRU's new CRUTem4, hiding the decline yet again

Over at JunkScience.com Steve Milloy writes:

Skeptic Setback? ‘New’ CRU data says world has warmed since 1998. But not in a statistically significant way.

Gerard Wynn writes at Reuters:

Britain’s Climatic Research Unit (CRU), which for years maintained that 1998 was the hottest year, has published new data showing warmer years since, further undermining a sceptic view of stalled global warming.

The findings could helpfully move the focus from whether the world is warming due to human activities – it almost certainly is – to more pressing research areas, especially about the scale and urgency of human impacts.

After adding new data, the CRU team working alongside Britain’s Met Office Hadley Centre said on Monday that the hottest two years in a 150-year data record were 2005 and 2010 – previously they had said the record was 1998.

None of these findings are statistically significant given the temperature differences between the three years were and remain far smaller than the uncertainties in temperature readings…

And Louise Gray writes in the Telegraph: Met Office: World warmed even more in last ten years than previously thought when Arctic data added

Some of the change has to do with adding Arctic stations, but much of it has to do with adjustments. Observe the decline of temperatures of the past in the new CRU dataset:

===============================================================

UPDATE: 3/21/2012 10AM PST – Joe D’Aleo provides updated graphs to replace the “quick first look” one used in the original post, and expands it to show comparisons with previous data sets on short and long time scales. In the first graph, by cooling the early part of the 20th century, the temperature trend is artificially increased. In the second graph, you can see the offset of CRUTem4 being lower prior to 2005, artificially increasing the trend. I also updated my accidental conflation of HadCRUT and CRUTem abbreviations.

===============================================================

Data plotted by Joe D’Aleo. The new CRUTem4 is in blue, the old CRUTem3 in red; note how the past is cooler (in blue, the new dataset, compared to red, the old dataset), increasing the trend. Of course, this is just “business as usual” for the Phil Jones team.

Here’s the older CRUTem data set from 2001, compared to 2008 and 2010. The past got cooler then too.


On the other side of the pond, here’s the NASA GISS 1980 data set compared with the 2010 version. More cooling of the past.


And of course there’s this famous animation where the middle 20th century got cooler as if by magic. Watch how 1934 and 1998 change places as the warmest year of the last century. This is after GISS applied adjustments to a new data set (2004) compared with the one from 1999.

Hansen, before he became an advocate for protest movements and got himself arrested, said:

The U.S. has warmed during the past century, but the warming hardly exceeds year-to-year variability. Indeed, in the U.S. the warmest decade was the 1930s and the warmest year was 1934.

Source: Whither U.S. Climate?, By James Hansen, Reto Ruedy, Jay Glascoe and Makiko Sato — August 1999 http://www.giss.nasa.gov/research/briefs/hansen_07/

In the private sector, doing what we see above would cost you your job, or at worst (if it were stock data monitored by the SEC) land you in jail for securities fraud. But hey, this is climate science. No worries.

And then there are the cumulative adjustments to the US Historical Climatology Network (USHCN):

Source: http://cdiac.ornl.gov/epubs/ndp/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif

All told, these adjustments increase the trend in the last century. We have yet to witness a new dataset release where a cooling adjustment has been applied. The likelihood that all adjustments to the data need to be positive is nil. This is partly why they argue so fervently against a UHI effect and other land-use effects, which would require a cooling adjustment.

As for the Arctic stations, we’ve demonstrated recently how those individual stations have been adjusted as well: Another GISS miss: warming in the Arctic – the adjustments are key

The two graphs from GISS, overlaid with a hue shift to delineate the “after adjustment” graph. By cooling the past, the century scale trend of warming is increased – making it “worse than we thought” – GISS graphs annotated and combined by Anthony Watts

And here is a summary of all Arctic stations where they cooled the past. The values are for 1940 and show how climate history was rewritten:

CRU uses the same base data as GISS, all rooted in the GHCN from NCDC, managed by Dr. Thomas Peterson, whom I have come to call “patient zero” when it comes to adjustments. His revisions of USHCN and GHCN make it into every global data set.

Watching this happen again and again, it seems like we have a case of:

Those who cool the past are condemned to repeat it.

And they wonder why we don’t trust them or their data.

270 Comments
3x2
March 20, 2012 1:35 pm

FOIA2009 1254108338.txt /1254147614.txt (Jones, Wigley and Santer)
Here are some speculations on correcting SSTs to partly explain the 1940s warming blip.[….] So, if we could reduce the ocean blip by, say, 0.15 degC, then this would be significant for the global mean [significant for claims of warmer years in the 1930’s/40’s that is]
Shocked I tell ye that the 30s/40s have now been found in a pit with a bullet hole in the back of the head right next to the corpses of the MWP and LIA. Just in time for AR5 and the 2012 Rio ‘one world government’ bong fest (the cause). Remember boys and girls … cigarettes are chock full of vitamins and minerals and history is whatever you pay your historians to write for you.

Lou S
March 20, 2012 2:13 pm

The surface record over land ≠ global mean temperature.
It also suffers from serious and well-known errors that follow from irregular monitoring and urbanization. Even without these limitations, however, the claimed difference between 2010 and 1998 is an artifact of the choice of averaging. Reference to the monthly record from MSU (http://www.climate4you.com/GlobalTemperatures.htm#UAH%20MSU%20TempDiagram), the only true measurement of global temperature, shows what has been widely recognized. Except for temporary bounces, global temperature has been flat over the last decade, arguably since the El Nino in 1997. Averaging over one set of months will result in a temperature that differs from that which results from averaging over another set of months. The actual monthly data, however, are unambiguous. Global temperature during the most recent El Nino in 2010 was clearly COOLER than it was during the earlier El Nino in 1998. That’s consistent with the general behavior since the turn of the century: FLAT.
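To make the averaging point concrete, here is a minimal sketch with purely illustrative numbers (not real MSU/UAH or surface data): when year-to-year differences are a few hundredths of a degree, the choice of 12-month window can change which span comes out “warmer”.

```python
# Illustrative only: synthetic monthly anomalies, not real satellite or surface data.
import numpy as np

rng = np.random.default_rng(0)
monthly = 0.40 + 0.10 * rng.standard_normal(36)   # three years of monthly anomalies (deg C)

calendar_year = monthly[12:24].mean()             # Jan-Dec of the middle year
shifted_window = monthly[18:30].mean()            # Jul of the middle year to Jun of the next

print("calendar-year mean:     %.3f" % calendar_year)
print("shifted 12-month mean:  %.3f" % shifted_window)
# With differences of a few hundredths of a degree, which window you average over
# can decide which "year" looks warmest.
```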

Hot under the collar
March 20, 2012 2:21 pm

Maybe they were going to release it on April 1st and it was supposed to be called HADyou4.

Hot under the collar
March 20, 2012 2:36 pm

3×2,
Good detective work, I think you’ve hit the audit trail!

wayne Job
March 20, 2012 3:01 pm

The adjustments have to keep pace with falling temperatures plus a little bit, or they would all have been made redundant a decade ago. Fiddling the data is Piltdown man stuff.

jeff
March 20, 2012 3:04 pm

Please climate-change denier morons…
Apologies, was in a rush – this version corrects my typos (incidentally, I don’t expect this to be published as you’ll most likely censor it, won’t you?)
Please climate-change deniers,

That was one heck of a “typo.” By random chance your fingers typed the word “morons”?

Gail Combs
March 20, 2012 3:26 pm

RobW says:
March 19, 2012 at 8:18 pm
OK so 2010 was the warmest year on record eh? Time for a straw poll. Who agrees? Please global answers are best.
_______________________________________
I did this two years ago in the fall, before Masters at Wunderground scrubbed the data. I compared 2010 (solar minimum) to 2004 (a year after the 2nd solar max for cycle 23). I only looked at the number of days over 90F for April through July.
In Sanford, North Carolina, in the middle of the state, I count 43 days over 90F by July tenth for 2004 vs 26 days for 2010, and four days of 98F in 2010 vs nine days of 98F in 2004.
Central North Carolina (Sanford) – monthly temps over 90F for 2004 & 2010
April 2010 (1)………………April 2004 (6)
1 day – 91F…………………2 days – 91F
………………………………….4 days – 93F
(In 2011 the April highs ranged from 55F to 86F; we did not see temps over 90F (91F) until May 23rd!!!)
May 2010 (4)……………….May 2004 (17)
4 days – 91F………………..6 days – 91F
………………………………….6 days – 93F
………………………………….2 days – 95F
………………………………….1 day – 96F
………………………………….2 days – 98F
June 2010 (18)…………….June 2004 (11)
5 days – 91F………………..1 day – 91F
5 days – 93F………………..7 days – 93F
2 days – 95F………………..none
2 days – 96F………………..2 days – 96F
4 days – 98F………………..1 day – 98F
July 2010 (3)……………….July 2004 (9) (both through July 10)
1 day – 91F…………………2 days – 91F
1 day – 93F…………………1 day – 93F
1 day – 96F…………………none
none…………………………..6 days – 98F
For the whole month of July 2004 (24):
……………4 days – 91F
……………11 days – 93F
……………1 day – 95F
……………1 day – 96F
……………2 days – 98F
In Sanford I count 43 days over 90F for 2004 by July tenth vs 26 days for 2010, and four days of 98F in 2010 vs nine days of 98F in 2004.
You are not going to convince me that 2010 was a “Very Hot year” based on my experience of the summer in North Carolina.
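For reference, a count like the one above can be reproduced from any daily maximum-temperature file; this is only a sketch, and the file name and column names (sanford_daily.csv, date, tmax_f) are hypothetical, not the actual Wunderground export.

```python
# Sketch only: assumes a hypothetical CSV of daily max temperatures for Sanford, NC.
import pandas as pd

df = pd.read_csv("sanford_daily.csv", parse_dates=["date"])  # columns: date, tmax_f

for year in (2004, 2010):
    subset = df[(df["date"] >= f"{year}-04-01") & (df["date"] <= f"{year}-07-10")]
    hot = subset[subset["tmax_f"] >= 90]
    print(year, "days at/above 90F through July 10:", len(hot))
    print(hot.groupby(hot["date"].dt.month)["tmax_f"].count())  # per-month counts
```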

Tilo Reber
March 20, 2012 3:36 pm

Zeke: “The Berkeley folks went out of their way to avoid using any pre-adjusted data as they wanted to develop their own homogenization process unbiased by past efforts.”
Berkeley uses GHCN raw, and GHCN tells you directly that they don’t account for adjustments made by their sources. So why would Berkeley do more to assure the rawness of their other sources of data than they do for GHCN?

Gail Combs
March 20, 2012 3:51 pm

My biggest problem is how on God’s little green earth anyone can extract two decimal places from data that has NO decimal places in the first place! http://joannenova.com.au/2012/03/australian-temperature-records-shoddy-inaccurate-unreliable-surprise/
~ From data that is spliced and extrapolated and “Adjusted”
~ From data whose environment has changed over time.
~ In every one of these the accuracy and precision are lost.
I spent over thirty years in laboratories using thermometers and there is no way you will convince me thermometer readings are even good to the nearest degree when talking about thousands of thermometers spread over time and space and read by volunteers. BTDT and had a screaming fight with the other lab managers in my company about our inability to get duplicate readings on lab grade thermometers.
AJ Strata has a good analysis on the error of temperature readings. http://strata-sphere.com/blog/index.php/archives/11420
As far as I am concerned, when you go past a whole degree you are in “how many angels can dance on the head of a pin” territory. If the accuracy and precision are not in the original reading, you are not going to magically put them back in with statistical tricks.

johanna
March 20, 2012 3:52 pm

Seb says:
March 20, 2012 at 10:07 am
Please climate-change denier morons,
Look at this which address Mr Watts above post:
http://nailsandcoffins.blogspot.co.uk/2012/03/anthony-watts-misleading-his-readers.html
Mr Watts yourself – are you truly convinced by your own arguments? Or have you got a SERIOUS vested interest in trying to mislead people with the utter drivel you write?
—————————————————————
Thanks for letting this one through, mods. It is salutary to be reminded occasionally of the low standards of Anthony’s opponents. Notice that this one is still running the ‘vested interest’ meme. I bet Anthony wishes he would identify the Caribbean island that Big Oil bought him so he can spend some quality time there with his family.

Glenn Tamblyn
March 20, 2012 4:06 pm

Andrew
Oh my. Some people just can’t let a good conspiracy theory go, can they, even when every aspect of the theory has been totally debunked.
Station quality issues – yes, there are bad quality stations as well as good ones. But the conclusion of a wide range of different investigators is that those station quality issues haven’t had any impact. By comparing results using just good stations with those using all stations, any difference has repeatedly been found to be negligible. They do have an impact on the daily temperature range but virtually none on the average. And this isn’t just officialdom saying this; a range of independent online people have looked at this and come up with the same conclusion. Perhaps the authority you personally might give greatest credence to is one Anthony Watts. He was co-author of a study that looked at all this and found exactly that. [REPLY: Uh, no, that’s just your “Skeptical Science” coauthor’s interpretation of it. Readers can read my paper here – http://pielkeclimatesci.files.wordpress.com/2011/07/r-367.pdf – Anthony]
You could also look at the comments by Zeke Hausfather above. He is one of the people who has done these sorts of comparisons. Read his comments, follow some of his links.
Then there was the recent BEST project. Looking at the station data afresh, looking at many more stations, using quite different methods. Result? Essentially the same.
Then the whole ‘march of the thermometers’ meme. You make reference to a decline in the number of ‘cooler’ sites, implying that this will introduce a systematic bias. Say what! To introduce a bias you would need to drop ‘cooling’ sites, not cooler ones. Remember, the temperature records are calculated based on temperature anomalies. We are looking for how much temperatures are changing, not their absolute value. Your point suggests that you think that global temperatures are averaged together to then see how much change there has been. And you’re right: if that were the case, dropping cooler stations would add a warm bias.
Which is exactly why it isn’t calculated that way. The records all work by comparing each station against its own long-term average to produce an anomaly for that station. Only then are these anomalies averaged together. So removing a cooler station will only introduce a warming bias to the record if that station is also cooling. So how does removing high-latitude stations in places like northern Canada, where there is high warming, introduce a warming bias? If anything it will add a cooling bias.
Then you talk about using data where there wasn’t any. Since you are vague about what you mean here, I will assume that you are referring to the 1200 km averaging radius used by GISTemp. The reason why this is valid is that temperature anomalies are quite closely coupled over large distances and altitudes. For example, if a cold weather system passes over Adelaide, Australia today, the same weather system will likely pass over Melbourne tomorrow. If a warm front passes over Santiago in Chile, down near sea level, it will also pass over the high Andes right at its doorstep. So their weather will tend to change in sync. So it is valid to average over significant distances. And that distance was determined by observation, looking at the degree of correlation between large numbers of random pairs of stations at varying distances apart. The correlation is strongest over land and stronger in the high north.
This same factor of ‘teleconnection’ then determines how many stations you need to adequately sample a region. So the total station count isn’t the point; it is the percentage station coverage that matters.
So why are fewer stations used now? Because they don’t need to use that many stations to obtain sufficient coverage. More isn’t better.
But it’s true: I’m not an expert on satellites or the algorithms used to convert input light signals to output temperature readings. That’s true. But from my understanding the early issues concerning calibration and the correct algorithms to use were dealt with to the satisfaction of almost everyone – perhaps though not yours? Yes, decaying orbital trajectories, diurnal drift and the like … but these are trivial matters corrected for in the modern age of mathematics, understanding of relativity and computational power. Or perhaps I’m being too flippant. What specific concerns do you have regarding how the satellite data are treated?
“Again, the general point I make is that satellite-generated temperature data are considered to be far more reliable, with far fewer sources of more easily quantifiable errors (and thus easier to correct for) than surface-generated (thermometer) data. The biases in siting of the thermometers (urban heat islands, altitudinal, latitudinal); variations in surface topography and terrain; human errors concerned with reading and handling instruments; the accuracy of the instruments; rounding/recording errors etc. etc.”
The satellite record has one possible source of structural bias that is not easily quantifiable – that the basic algorithms may introduce an unknown structural bias. That is why the Zou et al work is so interesting. Using a very different method of ‘stitching together’ the data from multiple satellites over time they have come up with a quite different result. That says to me that the jury is still out on what the real satellite trends are.
And as I explained earlier, most of the issues in your last paragraph aren’t important because of the way the record is calculated. This is why Averaging the Anomalies, rather than taking the Anomaly of the Averages, substantially bullet-proofs the calculation against the very issues you are raising. What are left are true random errors and biases, and these tend to cancel out.
If you are interested in reading about this in more detail, including reasons why the station issues you mention may not be as significant as you think, I wrote a 4-part series some time ago that covers all these issues here: http://www.skepticalscience.com/OfAveragesAndAnomalies_pt_1A.html
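To illustrate the Averaging the Anomalies point with a toy example (all station values below are invented for illustration; the real products use per-station baseline periods such as 1961–1990): dropping a cold, high-latitude station inflates a naive average of absolute temperatures, while in the anomaly method it can only bias the result warm if that station was itself cooling.

```python
# Toy example with invented numbers: one cold high-latitude station that is warming,
# one warm station with no trend.
import numpy as np

years = np.arange(1990, 2011)
cold_station = -20.0 + 0.05 * (years - 1990)   # cold site, warming at 0.05 C/yr
warm_station = 25.0 + 0.0 * (years - 1990)     # warm site, flat

def anomaly(series, baseline=slice(0, 10)):
    """Departure from the station's own 1990-1999 mean (per-station baseline)."""
    return series - series[baseline].mean()

# Average of anomalies (how the CRU/GISS-style records are built):
anom_both = (anomaly(cold_station) + anomaly(warm_station)) / 2
anom_dropped = anomaly(warm_station)            # cold station removed

# Naive "anomaly of the averages" on absolute temperatures:
naive_both = (cold_station + warm_station) / 2
naive_dropped = warm_station

print("anomaly-method trend, both stations:    %.3f C/yr" % np.polyfit(years, anom_both, 1)[0])
print("anomaly-method trend, cold one dropped: %.3f C/yr" % np.polyfit(years, anom_dropped, 1)[0])
print("naive absolute average jumps by %.1f C when the cold station is dropped"
      % (naive_dropped.mean() - naive_both.mean()))
```

Dropping the cold-but-warming station lowers the anomaly trend (a cooling bias), while the naive absolute average jumps by over 20 C – which is exactly the spurious warming the anomaly method is designed to avoid.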

Werner Brozek
March 20, 2012 4:54 pm

Seb says:
March 20, 2012 at 10:09 am
As for you, Mr Watts – are you truly convinced by your own arguments? Or have you got a SERIOUS vested interest in trying to mislead people with the utter drivel you write?

Your link says this:
…deeply disturbing points in its fabrication I’d like to raise.
Watts: Data plotted by Joe D’Aleo. The new HadCRUT4 is in blue, old HadCRUT3 in red, note how the past is cooler, increasing the trend. Of course, this is just “business as usual” for the Phil Jones team.
What Watts means by the “past is cooler” is that over the period ~1975-2000 the blue line (HadCRUT4) in the graph is lower than the red line (HadCRUT3). But here’s a proper comparison of HadCRUT4 and HadCRUT3 by the Hadley Centre. Notice that it’s the period post-2000 that is warmer in HadCRUT4. The period 1975-2000 is about the same.

Above this particular graph in the article, the following is stated:
Observe the decline of temperatures of the past in the new CRU dataset:
I could be wrong here, but it seems to me that Anthony Watts should have said CRU below the graph like he did above the graph. It seems to me to be an innocent slip-up. But are you arguing against the guts of Watts’s assertion that the past was made cooler, albeit on CRU and not HadCRUT?

Frank K.
March 20, 2012 5:03 pm

Zeke Hausfather says:
March 20, 2012 at 9:18 am
Frank K.,
For a global land reconstruction, I generally just use all raw data (with no tob adjustments). For U.S.-specific analysis, I usually use tob adjusted data as a starting point.
That said, there has been some interesting work lately looking at how automated methods (Menne’s PHA or Rhode’s scalpel) can automatically correct for most tob issues. Williams et al talks about it a bit in this paper: http://www.agu.org/pubs/crossref/2012/2011JD016761.shtml

Thanks Zeke – I’ll have a look at that.

March 20, 2012 5:44 pm

Werner Brozek says March 19, 2012 at 10:48 pm:
> Donald L Klipstein says March 19, 2012 at 9:29 pm:
>>There are 2 versions of the annual figures of HadCRUT3.
>OK. This version has 1998 at 0.529 and 2010 at 0.470.
>http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3vgl.txt
This is UEA version of global HadCRUT3V, which is “variance adjusted”.
(“my words”, possibly quoting inaccurately.)
> This version has 1998 at 0.548 and 2010 at 0.478.
> http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3gl.txt
This is UEA version of HadCRUT3, before “variance adjustment”.
(“my words”, possibly quoting inaccurately.)
> This version has 1998 at 0.52 and 2010 at 0.50.
> http://www.metoffice.gov.uk/news/releases/archive/2012/hadcrut-updates
I checked into that, and it appears to me that this is HadCRUT4.
Also, the Hadley Centre of the UK Met Office and UEA used different methods of
averaging 12 monthly figures to come up with annual figures for HadCRUT3.
UEA uses “ordinary averaging”, while Hadley Centre uses what they call
“optimized averaging”. The Hadley Centre version appears to me to show
slightly more warming and slightly less ~62-64 year component in the past
~40-50 years than the UEA one does.
The Hadley Centre text file for their annual figures for HadCRUT3 is at:
http://www.metoffice.gov.uk/hadobs/hadcrut3/diagnostics/global/nh+sh/annual
1998 reported as .517, 2010 reported as .499.
> That is three different versions. Which one of these three, if any, is being
> changed? If none of these, what are the numbers for the real one being
> changed?
I consider “most original” of these 4 to be UEA version of HadCRUT3.
http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3gl.txt
There is the matter that the 2001 version of HadCRUT could be “less
adjusted still” and possibly be more accurate. Perchance, could someone
supply a link to this?

barry
March 20, 2012 5:46 pm

Much of the new data adjustment was done by the Met Offices around the world responsible for them, according to the paper describing the changes in CRUTEM4.
So if Steve Milloy is right that the adjustments are improper fudging, then the conspiracy would seem to extend to all the Met Offices around the world.

Tilo Reber
March 20, 2012 5:51 pm

Glenn: “Station quality issues”
The issue isn’t station quality, it’s station adjustments. Different subject entirely.

Tilo Reber
March 20, 2012 5:56 pm

Glenn: “The satellite record has one possible source of structural bias that is not easily quantifiable – that the basic algorithms may introduce an unknown striuctural bias.”
Why is it not easily quantifiable? They can make direct comparisons of the output of the algorithms with real thermometer readings from radiosondes.

March 20, 2012 6:08 pm

Mods my post appears to be missing.

REPLY:
Dear Phil at Princeton. Not missing, it was rude and insulting to Joe D’Aleo, so I bit bucketed it – be as upset as you wish. – Anthony

March 20, 2012 6:11 pm

Werner Brozek says March 19, 2012 at 11:28 pm:
(I edit slightly for line count and space)
> Donald L Klipstein says March 19, 2012 at 10:35 pm
>>I like to look at what happened from the ~1944 peak to the ~2005 peak.
>What if you took the difference between 1883-1944 versus 1944-2005
>and assumed the difference was due to CO2? Either way, it is not
>catastrophic.
>http://www.woodfortrees.org/plot/hadcrut3gl/from:1880/plot/hadcrut3gl/from:1883/to:1944/trend/plot/hadcrut3gl/from
I think more comparable is from one peak to another. As in for the earlier
time period, starting with 1877. OK, that will underreport warming trend
because of starting with a “century class” El Nino peak.
Maybe starting with 1878.25 or 1878.5 knocks that peak down to
“comparable size”. Holy poop – that’s still about .041 degree/decade.
There is also the matter of warming before and possibly during the mid-1920s
coming from recovery from a “triple whammy” of solar minimums, including
the Dalton and Maunder ones.
Not that I am arguing for global climate sensitivity to change of CO2 being
more than 1.5 degree C per 2x CO2 change in recent or future decades. I
have seen some indication, as I mentioned before, that this figure could be
as low as .67 degree C per 2x change of CO2. (On log scale.) Compare
this to ~3 degrees C (sometimes more) per 2x change of CO2 favored by
most advocates of existence of anthropogenic global warming.

barry
March 20, 2012 7:12 pm

the only true measurement of global temperature, shows what has been widely recognized. Except for temporary bounces, global temperature has been flat over the last decade, arguably since the El Nino in 1997.

It should be widely recognized, but somehow isn’t, that the linear trend for the time period since 1998 is not yet statistically significant, and therefore can tell you little about what trend there actually is. The soonest we can get a trend that passes statistical significance testing, for satellite data, will be about 2014, but probably later (satellite data is noisier than surface records).
The period in question could itself be one of these ‘temporary bounces’. We won’t know without more data. As of yet, there is no statistically significant data that says the long-term temperature trend (which IS statistically significant) is now flat.
What data there is shows a slight warming trend (same data source as given). That, too, is virtually meaningless, because the trend is statistically insignificant.
20 years is a good minimum to ensure a statistically significant trend WRT satellite global temperature data – but significance tests should still be observed.
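For anyone who wants to check the significance claim themselves, here is a rough sketch (not the exact method any particular group uses): fit an OLS trend to the monthly anomalies since 1998 and widen the uncertainty for lag-1 autocorrelation using an effective sample size. The array name below is a placeholder for whatever UAH or RSS series you load.

```python
# Sketch: OLS trend and an autocorrelation-adjusted confidence interval for a short series.
import numpy as np

def trend_with_ci(anoms):
    """anoms: monthly temperature anomalies (deg C). Returns trend per decade and 95% half-width."""
    n = len(anoms)
    t = np.arange(n) / 120.0                          # time in decades
    slope, intercept = np.polyfit(t, anoms, 1)
    resid = anoms - (slope * t + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]     # lag-1 autocorrelation of residuals
    n_eff = n * (1 - r1) / (1 + r1)                   # effective sample size
    se = resid.std(ddof=2) / (t.std() * np.sqrt(max(n_eff, 3)))
    return slope, 1.96 * se

# Usage (with real data loaded from the UAH or RSS text files):
# slope, half_width = trend_with_ci(uah_monthly_since_1998)
# "Not statistically significant" means the interval slope +/- half_width straddles zero.
```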

barry
March 20, 2012 7:18 pm

Donald K,
a bit confused by two seemingly contradictory comments in your post.
1. “Not that I am arguing for global climate sensitivity to change of CO2 being more than 1.5 degree C per 2x CO2 change in recent or future decades. I have seen some indication, as I mentioned before, that this figure could be as low as .67 degree C per 2x change of CO2”
More CO2 causes some warming.
2. “Compare this to ~3 degrees C (sometimes more) per 2x change of CO2 favored by
most advocates of existence of anthropogenic global warming.”
Here you call into doubt the ‘existence’ of AGW.
The only way I can reconcile these two comments is that you think there is some doubt that human industry is responsible for increases of CO2 in the atmosphere.

Glenn Tamblyn
March 20, 2012 8:46 pm

Tilo Reber
“it’s station adjustments. Different subject entirely.”
I was replying to Andrew who had raised other issues.
“Why is it not easily quantifiable? They can make direct comparisons of the output of the algorithms with real thermometer readings from radiosondes.”
Two reasons. The radiosondes don’t have anything like enough geographical or temporal coverage to give anything but a very, very rough confirmation of the trends and to allow problems with the algorithms to be evaluated. Secondly, there are well known issues with the radiosonde data related to heating and cooling of the instrument body at high altitude and changes in the instrumentation packages over the years. For example, the raw radiosonde data shows little warming in the upper troposphere, which is unphysical. That’s not a signature of AGW; it’s a signature of any warming from any source. Garth Paltridge has expressed doubts about the radiosonde record, and Richard Lindzen has said the upper tropospheric warming must be happening, that it is physically impossible for it not to be, and that if the data doesn’t show that then the data is suspect.
So the differences between UAH/RSS and Zou et al are well within the quite broad range of readings that the radiosondes give us.
The biggest issue with building the satellite record has been reliably stitching together data from multiple satellites. To do that UAH/RSS need each pair of satellites to be in service at the same time for long enough to allow statistical analysis of the difference between what each one is reporting, and this needs a year or more of data. For example there appears to have been a problem that caused a divergence between UAH & RSS because the overlap time between NOAA-9 and NOAA-10 was only a few months. It took years for this difference to slowly work its way out of their results and contributed to their results drawing closer together in recent years.
In contrast the Zou analysis uses a very different method to calculate satellite overlap. They use Synchronous Nadir Overpasses – periods in the satellite orbits when two satellites are passing over the same point on Earth at the same time and thus are looking at the same location below. This allows them to focus just on intersatellite differences rather than trying to extract them from the general range of biases the satellites are experiencing. This method strikes me as more robust than the UAH/RSS method, and given that it also agrees better with the higher figures obtained by Vinnikov & Grody and Fu & Johansen, it says to me that there are at least reasonable grounds to think that both UAH & RSS may have an unrecognised cool bias in their processing algorithms.
It will be interesting to see what Zou’s TLT product looks like when they finally produce it.
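As a crude sketch of what the overlap calibration amounts to (whether from long co-flying periods or from simultaneous nadir overpasses), the core step is estimating and removing a relative bias from matched measurements; the array names below are hypothetical, not anything from the actual UAH/RSS/Zou processing code.

```python
# Sketch: estimate the relative bias between two satellites from matched measurements
# (same scene, same time), then remove it before splicing the two records together.
import numpy as np

def intersatellite_offset(matched_a, matched_b):
    """matched_a, matched_b: co-located, co-timed brightness temperatures (hypothetical arrays)."""
    diffs = matched_a - matched_b
    return diffs.mean(), diffs.std(ddof=1) / np.sqrt(len(diffs))   # offset and its standard error

# With only a few months of overlap (e.g. NOAA-9 to NOAA-10) there are few matched pairs,
# so the offset is poorly constrained and its error propagates through the whole spliced record.
# SNO-style matching yields cleaner pairs because both instruments view the same spot at once.
# offset, err = intersatellite_offset(noaa9_matched, noaa10_matched)
# noaa10_adjusted = noaa10_record + offset
```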

Werner Brozek
March 20, 2012 9:18 pm

Donald L Klipstein says:
March 20, 2012 at 5:44 pm
This version has 1998 at 0.52 and 2010 at 0.50.
> http://www.metoffice.gov.uk/news/releases/archive/2012/hadcrut-updates
I checked into that, and it appears to me that this is HadCRUT4.
The Hadley Centre text file for their annual figures for HadCRUT3 is at:
http://www.metoffice.gov.uk/hadobs/hadcrut3/diagnostics/global/nh+sh/annual
1998 reported as .517, 2010 reported as .499.

Thank you. That explains it all. The values quoted above: 0.52 and 0.50 are the numbers 0.517 and 0.499 rounded to two digits. But that is NOT Hadcrut4. Hadcrut4 changes these to 0.52 for 1998 (so no change here), but 2010 becomes 0.53. As a result, 2010 is 0.01 C higher than 1998.

Werner Brozek
March 20, 2012 9:52 pm

barry says:
March 20, 2012 at 7:12 pm
The soonest we can get a trend that passes statistical significance testing, for satellite data, will be about 2014

Unless we get a strong El Nino soon, I believe we will reach it this year yet. Santer talked about 17 years being needed. And at least as far as RSS is concerned, we are at 15 years and 3 months now. So that leaves another 21 months. And if every month in the future also pushes things back a month, we should be very close by the end of this year. See:
http://www.woodfortrees.org/plot/rss/from:1995/plot/rss/from:1996.9/trend/plot/rss/from:1995.56/trend

March 20, 2012 11:21 pm

Tilo Reber says:
March 20, 2012 at 5:56 pm
Glenn: “The satellite record has one possible source of structural bias that is not easily quantifiable – that the basic algorithms may introduce an unknown striuctural bias.”
Why is it not easily quantifiable? They can make direct comparisons of the output of the algorithms with real thermometer readings from radiosonde’s.
#######################################
dear god, you again.
http://www.ssmi.com/msu/msu_data_validation.html