CRU’s new CRUTem4, hiding the decline yet again

Over at JunkScience.com Steve Milloy writes:

Skeptic Setback? ‘New’ CRU data says world has warmed since 1998 – but not in a statistically significant way.

Gerard Wynn writes at Reuters:

Britain’s Climatic Research Unit (CRU), which for years maintained that 1998 was the hottest year, has published new data showing warmer years since, further undermining a sceptic view of stalled global warming.

The findings could helpfully move the focus from whether the world is warming due to human activities – it almost certainly is – to more pressing research areas, especially about the scale and urgency of human impacts.

After adding new data, the CRU team working alongside Britain’s Met Office Hadley Centre said on Monday that the hottest two years in a 150-year data record were 2005 and 2010 – previously they had said the record was 1998.

None of these findings are statistically significant given the temperature differences between the three years were and remain far smaller than the uncertainties in temperature readings…

And Louise Gray writes in the Telegraph: Met Office: World warmed even more in last ten years than previously thought when Arctic data added

Some of the change had to do with adding Arctic stations, but much of it has to do with adjustment. Observe the decline of temperatures of the past in the new CRU dataset:

===============================================================

UPDATE: 3/21/2012 10AM PST – Joe D’Aleo provides updated graphs to replace the “quick first look” one used in the original post, and expands it to show comparisons with previous data sets on short and long time scales. In the first graph, by cooling the early part of the 20th century, the temperature trend is artificially increased. In the second graph, you can see the offset of CRUTem4 being lower prior to 2005, artificially increasing the trend. I also updated my accidental conflation of the HadCRUT and CRUTem abbreviations.

===============================================================

Data plotted by Joe D’Aleo. The new CRUTem4 is in blue, the old CRUTem3 in red; note how the past is cooler in the new dataset (blue) compared to the old one (red), increasing the trend. Of course, this is just “business as usual” for the Phil Jones team.

Here’s the older CRUTem data set from 2001, compared to 2008 and 2010. The past got cooler then too.

image

On the other side of the pond, here’s the NASA GISS 1980 data set compared with the 2010 version. More cooling of the past.

image

And of course there’s this famous animation where the middle 20th century got cooler as if by magic. Watch how 1934 and 1998 change places as the warmest year of the last century. This is after GISS applied adjustments in a new data set (2004) compared with the one from 1999.

Hansen, before he became an advocate for protest movements and getting himself arrested, said:

The U.S. has warmed during the past century, but the warming hardly exceeds year-to-year variability. Indeed, in the U.S. the warmest decade was the 1930s and the warmest year was 1934.

Source: Whither U.S. Climate?, By James Hansen, Reto Ruedy, Jay Glascoe and Makiko Sato — August 1999 http://www.giss.nasa.gov/research/briefs/hansen_07/

In the private sector, doing what we see above would cost you your job, or at worst (if it were stock data monitored by the SEC) land you in jail for securities fraud. But hey, this is climate science. No worries.

And then there’s the cumulative adjustments to the US Historical Climatology Network (USHCN):

Source: http://cdiac.ornl.gov/epubs/ndp/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif

All told, these adjustments increase the trend over the last century. We have yet to witness a new dataset release where a net cooling adjustment has been applied. The likelihood that every adjustment to the data needs to be positive is nil. This is partly why they argue so fervently against a UHI effect and other land-use effects, which would require a cooling adjustment.

As for the Arctic stations, we’ve demonstrated recently how those individual stations have been adjusted as well: Another GISS miss: warming in the Arctic – the adjustments are key

The two graphs from GISS, overlaid with a hue shift to delineate the “after adjustment” graph. By cooling the past, the century scale trend of warming is increased – making it “worse than we thought” – GISS graphs annotated and combined by Anthony Watts

And here is a summary of all the Arctic stations where they cooled the past. The values are for 1940 and show how climate history was rewritten:

CRU uses the same base data as GISS, all rooted in the GHCN, from NCDC managed by Dr. Thomas Peterson, who I have come to call “patient zero” when it comes to adjustments. His revisions of USHCN and GHCN make it into every global data set.

Watching this happen again and again, it seems like we have a case of:

Those who cool the past are condemned to repeat it.

And they wonder why we don’t trust them or their data.


270 thoughts on “CRU’s new CRUTem4, hiding the decline yet again”

  1. Also, I’m commenting under a new (genuine) email address because I’m told to log in via a method that I’m not sure exists.

  2. There is of course nothing wrong with amending data in the light of increasing knowledge. But that implies a process which can be explained and justified, which is transparent and is published so that all can understand and comment.

    To date, I’m not aware that we’ve seen anything other than approaches which are obscure, unexplained, opaque and hidden.

  3. “‘New’ CRU data says world has warmed since 1998 But not in a statistically significant way.”

    When are scientists and writers going to get that these statements are oxymoronic?

  4. The arctic adjustments just in the last few weeks by GISS have been truly amazing. It is really bold, blatant in-your-face stuff, which as far as I know has not been explained. Steve Goddard explains how the Iceland bureau of meteorology is not impressed, but it looks like Real-Science.com is broken at the moment or I would provide the link. It looks like Envisat might be infected too, the sea level just increased 4mm a couple of days ago, on a system that showed a downtrend.

    Check your spelling above… look for artic.

  5. Do you know if the base period changed from version to version (of either HadCrut or GISS)?

    I am just curious, because if the baseline changed from 1951 – 1980 to 1961 – 1990, I could see how this would “cool” the past, as the baseline period was warmer than the previous baseline.

    Anyway – just curious.

  6. Data fudging has become a pay packet and a way of life for them, and it is addictive. An addict will at least admit he is addicted, but guys like Hansen are incapable of telling the truth; their world is a nefarious underworld of lies, damn lies and more lies. Their manipulations are so obvious that even amateurs like myself can see them.

  7. So, really, it’s not that the planet’s getting warmer, it’s just that history keeps getting colder.

  8. The answer is a question: Do you get more money from an increasing average global temperature or from a global temperature that stays the same?

    In our increasingly bureaucratic world the answer is always that which benefits the bureaucracy most.

  9. These guys are obviously frauds but what gets me is that they aren’t even particularly good frauds. So why does anyone believe their crap?

  10. Statistically, shouldn’t adjustments themselves have zero trend? It looks like the adjustments themselves account for 0.5 deg C per century of warming.
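The zero-trend expectation above can be stated as a mechanical check: regress the adjustment series (adjusted minus raw) against time; if the corrections are unbiased, the slope should be indistinguishable from zero. A minimal sketch, using made-up adjustment values (not the actual USHCN numbers) chosen to illustrate a ~0.5 °C/century drift:

```python
# Hypothetical decade-by-decade adjustments (adjusted minus raw, deg C).
# Illustrative values only -- not taken from any real dataset.
years = list(range(1900, 2000, 10))
adjustments = [0.00, 0.05, 0.08, 0.12, 0.18, 0.22, 0.28, 0.33, 0.38, 0.45]

def ols_slope(x, y):
    """Ordinary least-squares slope of y against x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Slope is per year; multiply by 100 to express it per century.
trend_per_century = ols_slope(years, adjustments) * 100
print(round(trend_per_century, 2))  # ~0.49 C/century from adjustments alone
```

A slope that large, from adjustments alone, is the kind of result the commenter is pointing at; a real test would also need the uncertainty on the slope.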

  11. But if the past keeps getting colder and colder, shouldn’t we be worried about the massive increases in glaciation that will result from that colder past?

  12. So if I’m reading this correctly, 60% (0.6°/1.0°) of all of the change in temperature anomaly, aka global warming, from any cause, is due to adjustments… Got it.

  13. Climate science is to science as:
    social justice is to actual justice
    a strait jacket is to a dinner jacket
    a people’s republic is to an actual republic

  14. “Worse than we thought” is looking better all the time!

    The problem the warmists are going to run into by cooling the past is that the average Joe is going to look around and realize that all this alleged warming hasn’t really caused any significant problems. Meanwhile, cap and trade, or other regulations, are hitting the average Joe pretty hard, with the promise of a lot more pain in the near future.

    They may be able to tweak some emotions with their virtual temperature shenanigans, but they have little control over reality, which is consistently telling the masses that these folks are crying wolf. There just isn’t a problem with this ‘worse than we thought’ warming. People are asking themselves, “If this warming is so bad, why aren’t we suffering at least a little bit from it now?”

    It’s a good question that the warmists can only answer with hand waving. The average Joe is just not as dumb as the warmists think. And since the ‘intelligentsia’ no longer have complete control over the dissemination of information, the average Joe is getting a little smarter.

  15. There are some of us ‘hard core’ skeptics that question if the Earth has warmed at all over the last century. Looking at the magnitude of the one-way ‘corrections’, can you blame us?

  16. 0.04C in 12 years – this is significant (even without error bars)? Poor LuLu…(poor DT).

  17. Obviously, using state-of-the-art trend analysis, the solution to present-day warming is to wait for the future. Today gets cooler and cooler after 25 years, and then cooler yet every 7 years further into the future. “Further undermining the alarmist view of a warming world.”

  18. So, the future isn’t becoming warmer, it’s the past that’s getting colder! I therefore predict a 20 deg C anomaly increase by 2100. Please forward my Nobel prize to: B Mount, c/o CRU Promotions, UK.

  19. Interesting to find that Lerwick is in the Arctic.
    And just a tiny nitpick – “Shetland”, not “Shetland Isles”.

  20. The Arctic adjustments also carry more geographical weight, as the GISS 1,200 km radius is not moderated by nearby stations – there are no “nearby” stations. Adjust the right stations and you pretty well cover the entire Arctic all the way to the pole.

  21. Of course HadCRUT and GISS will now move even further from the satellite measurements. Any minute now we can expect Steve Mosher to swing by to tell us that is all a-ok, when clearly it is not.

  22. CRU appears to have taken a page straight out of Peter Gleick’s strategy book. If the raw data does not fit your beliefs then manufacture the data you expected and then publish it.

    MacArthur Geniuses all of ‘em!

  23. Maurizio Morabito (omnologos) says:
    March 19, 2012 at 10:13 am
    What’s wrong with Ust Cilma in Russia, where they could not for the life of them cool the past?
    ————————–
    They don’t like that kind of stuff.

  24. Jim Clarke says:
    March 19, 2012 at 10:40 am
    “…It’s a good question that the warmists can only answer with hand waving…”

    and for our UK brethren out there in the blogosphere, a Viz Profanisaurus entry would be testiculating – verb – to wave one’s arms around and talk bollocks. That’s closer to the mark for our warmista friends.

  25. Iceland Met Office monthly temps for Reykjavik, 1940:
    1.6 1.7 -0.2 3.0 7.6 ….
    CRUTEM4 (as used in HADCRUT4) from Met Office crutem4 download page:
    1.2 1.3 -0.6 2.6 7.2 ….
    So CRU are also cooling the past in Iceland like GHCN/GISS, but not so badly, ‘only’ 0.4 degrees
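The month-by-month offset claimed above can be verified directly from the five pairs of values quoted in this comment; a quick check using only those numbers:

```python
# The five monthly means quoted above for Reykjavik, Jan-May 1940 (deg C).
imo_1940     = [1.6, 1.7, -0.2, 3.0, 7.6]   # Iceland Met Office record
crutem4_1940 = [1.2, 1.3, -0.6, 2.6, 7.2]   # CRUTEM4 values for the same months

# Difference between the national record and CRUTEM4 for each month.
offsets = [round(a - b, 2) for a, b in zip(imo_1940, crutem4_1940)]
print(offsets)  # a uniform 0.4 C cooling of the past
```

Every month shows the same 0.4 °C offset, which is what makes it look like a systematic shift rather than a per-month correction.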

  26. I personally consider this to be counter productive to “the cause”.

    With all these adjustments, one can have no confidence in the record. GISS has been adjusted perhaps a dozen times. Why have all these adjustments been necessary? Why were the past adjustments wrong? At the very least, it suggests incompetence, or at any rate that the person making the adjustment does not know what they are doing. One cannot easily justify why it is necessary to have, say, twelve attempts to get something right. I think that the lay person readily understands that (i) it suggests incompetence, and/or (ii) it is indicative of some agenda.

    Whilst I consider that there probably has been some warming this past century, I cannot with any measure of confidence conclude that it is warmer today than it was in the 1930s or the 1880s and I am fairly confident that as far as the USA is concerned, it was warmer in the USA in the 1930s than today.

    The more that they adjust temperatures upwards, the more it suggests that there is no significant harm in rising temperatures. There has been no statistically significant increase in hurricanes, typhoons, flooding etc., so what is the problem?

  27. They’ve been getting away with it since 1987, so why would they stop now?

    The satellite record is the only reliable one since we can’t even be sure that the Raw NCDC climate database is still using the old records as they received them.

  28. By Louise Gray

    Now a new analysis of land and sea temperatures, that includes new data from weather stations in the Arctic, has found the world is warming even more than previously thought.

    Between 1998 and 2010, temperatures rose by 0.11C, 0.04C more than previously estimated.

    Professor Phil Jones, director of CRU, who was at the heart of the Climategate scandal, said the temperature series is slightly warmer because it includes the new data from the Arctic, where the world is warming faster.

    With regards to the top comment, how can “the world (be) warming even more” if it has not been warming for 15 years by their own admission? For proof of the lack of warming for about 15 years, see:

    http://www.woodfortrees.org/plot/hadcrut3gl/from:1995/plot/hadcrut3gl/from:1997.25/trend

    As for the second statement, I think it was phrased poorly. On the HadCRUT3, 1998 was 0.07 C hotter than 2010. But apparently now 2010 is 0.04 C hotter than 1998, so the net relative change is 0.11 C. And presumably, this is mainly due to the Arctic as the third statement implies. The RSS data only go to 82.5 degrees north. I do not know about the original HadCRUT3 data, but if we assume the same, and if we assume just the northern arctic is affected since that is all we mainly hear about, that represents 1/230 of the total area of the earth. So how much warmer does this area have to be to make a net difference of 0.11 C? That would be 0.11 C x 230 = 25.3 C! See:

    http://ocean.dmi.dk/arctic/meant80n.uk.php

    I do not see a huge difference between 1998 and 2010. Do you? Are we really expected to believe that in all cases where there was missing data, the 1998 values were cooler by a huge margin and 2010 was warmer by a huge margin?
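The commenter's 1/230 figure can be reproduced from spherical geometry: the fraction of a sphere's surface poleward of latitude φ is (1 − sin φ)/2. A quick sketch, taking the 82.5°N RSS coverage limit mentioned above (the 0.11 °C value is the net relative change quoted in the comment):

```python
import math

# Fraction of a sphere's surface poleward of latitude phi: (1 - sin(phi)) / 2.
lat = math.radians(82.5)              # RSS coverage limit cited above
frac = (1.0 - math.sin(lat)) / 2.0    # ~0.0043 of the globe

delta_global = 0.11                   # net relative change quoted (deg C)
delta_arctic = delta_global / frac    # warming the polar cap alone would need

print(round(1 / frac), round(delta_arctic, 1))  # ~234 and ~25.7
```

The exact geometry gives ~1/234 rather than 1/230, so the implied ~25 °C of polar-cap warming in the comment's arithmetic holds up either way.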

  29. Yup. Global warming is whatever HADCRU says it is on any given Monday.

    First they got rid of the MWP. Now they’ve gotten rid of flatlined temperatures over the past 15 years.

  30. Nothing wrong with that. They’ve got a time machine. They just go back and change the actual temperatures. Nothing to see here.

  31. The only problem with the “GHCN is bad” story is that I don’t use it and I get the same answer.
    go figure.

  32. As George Orwell wrote in his most famous novel, 1984:

    “Who controls the past controls the future: who controls the present controls the past.”

    Of course, as far as the CRU is concerned Orwell forgot to insert the words “temperature data” after “past”, “present” and “future.”

  33. The ‘dancing data’ animation alone demands one hell of a good explanation. Someone should make a badge of it.

  34. Good Grief Mann,
    It was that cold when I was born I must have been a polar bear!
    OK I admit it, that was me in the photo on the melting ice, I was trying to get somewhere warmer, it was blooming cold mum but I didn’t know I was a polar bear……..my life makes sense now…

  35. I want to see people go to jail for this – this continuing deception and the barefaced efforts to perpetuate it deserve no less.

  36. And now we can all see the human influence on our temps……. apparently, this is the “A” in CAGW.

    A fluid and dynamic history. Orwell couldn’t have written it any better.

  37. Is this right?

    (1) All of the major non-satellite datasets, including GISS, HadCrut and Best, rely on adjustments made to individual stations made by NCDC.
    (2) NCDC alter historical data without telling anyone who altered the data, or why.
    (3) NCDC is headed by Dr Tom Peterson of “It’s a knife fight” fame, whose views on climate and Climategate were shown by Anthony in:

    http://wattsupwiththat.com/2011/01/16/ncdcs-dr-thomas-peterson-its-a-knife-fight/

    (4) There is no oversight or accountability of the process.

    Yeesh! Put me in charge of economic data for the Havana Greater Development Region, yet…

  38. Hmm. Feel sorry for the Team now, as they have to “re-calibrate” their models to match less warming in the years leading to 1960 and more warming after. Of course we have to flip the effect of CO2 saturation to make all of this work, but hey, it’s just Team physics!

    Also think of the carbon footprint these redo calculations are going to leave! By the way, do they have an error-bar adjustment term built into their models (ideologically driven to take an exponential form and, from a self-interest perspective, to be a function of funding cycle and time to the next IPCC Assessment Report)?

  39. “Between 1998 and 2010, temperatures rose by 0.11C, 0.04C more than previously estimated.”

    Fantastic – you couldn’t make this up. They did. :-)

  40. Odd how it always goes ‘their’ way!
    My, they must be laughing into their tenured beers tonight.
    But pride comes before a fall, as they say.

  41. Am I going senile, or did I read last year that they were in the process of removing some Arctic stations to raise the average temperature?

  42. The fact is, their data machining does not have any significant impact on how far temperatures diverge from the multi-model mean. What about taking that mean as a baseline and displaying the data as anomalies from it? I guess that would be an interesting view despite all the adjustments…

  43. This has certainly undermined my view of stalled global warming while bolstering my view of steadily increasing past global cooling.

    Phil Jones – nudge, nudge, wink, wink

  44. It’s not surprising that when you add more northern-latitude data, the present warms.

    This has been shown before. It’s pretty well known.

    As you add SH data you will also cool the past. This is especially true in the 1930-40 period as well as before.

    More data. Folks used to clamor for more data. Here is a clue. If you look at the distribution of places that were not measured in the past ( and in the present) and if you understand polar amplification, it should be pretty clear that as you add data you can expect the past to cool.
    And as you add more current data from the extreme high latitude you can expect the present to warm. The changes won’t be huge, but just looking at the distribution of “unsampled” places and the fact of polar amplification, would clue most people in.

    Note: there is more data out there that has yet to be digitized. prior to 1950 data. wanna bet what it will show?
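The sampling argument above can be illustrated with a toy gridded average: weight each cell by the cosine of its latitude, then add one previously unsampled high-latitude cell carrying an amplified anomaly. All station values here are invented for illustration:

```python
import math

def global_mean(cells):
    """Area-weighted mean of (latitude_deg, anomaly_C) cells."""
    weights = [math.cos(math.radians(lat)) for lat, _ in cells]
    total = sum(w * a for w, (_, a) in zip(weights, cells))
    return total / sum(weights)

sampled = [(0.0, 0.3), (45.0, 0.4), (-45.0, 0.2)]   # hypothetical existing cells
with_arctic = sampled + [(85.0, 1.5)]                # amplified Arctic anomaly added

before = global_mean(sampled)      # 0.3
after = global_mean(with_arctic)   # ~0.342: a nudge, not a leap
print(round(before, 3), round(after, 3))
```

Note how the small cosine weight at 85° keeps the effect modest – which is also the crux of the pushback from other commenters about how much the Arctic alone can move the global mean.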

  45. Steven Mosher says:

    The only problem with the “GHCN is bad” story is that I don’t use it and I get the same answer.

    That is not a problem with the “GHCN is bad” story, unless you are holding up yourself and the data you use as infallible.

    Yours is the same logic that says “Mann’s work is right because Wahl’s work agrees, which is right because it agrees with Briffa’s, which is right because it agrees with Mann’s”. Sorry, but no.

  46. http://news.bbc.co.uk/1/hi/7139797.stm

    Here’s a prediction from some of these wonderful climate scientists, courtesy of the modern-day Pravda, stating that the ice in the Arctic will be gone by summer 2013 – it won’t be long before this is another failed computer model.

    I really hope we can get some traction exposing these failed predictions.

  47. If these guys can just make stuff up and change the past; why don’t we ask them to change the present to what we want? Could you please make Central Florida a little less hot next August? (I need to save on the A/C bill)

  48. How can we believe anything that the CRU says? After Climategate 1 and 2 they are a joke. Even if they are right and their facts are correct, no one is going to believe them anyway.

  49. It’s worse than we thought,
    by the time we get to HadCRUT10 the Earth will be a second sun.

  50. Hansen is a Marxist. He lamented to Clinton years ago about the injustices of global wealth distribution. He’s been outed many times but these Marxists are like zombies. You have to whack them more than once.

  51. I just watched the video of Peter Stott from the UK Hadley Centre. He claims the two changes for HADCRUT4 are
    1) after the second world war the British ships threw buckets over their bow to measure sea surface temperatures. They have now “corrected” these measurements for cooling by evaporation of water in the buckets during the few minutes before the temperature was actually measured. Hence the drop in temperatures during the 40’s is reduced !
    2) They have “discovered” some new station data in the Russian Arctic region covering recent decades. This region has been warming more than elsewhere and adding these new data the global temperatures have consequently “increased” a bit.

    My questions.
    – What about poor coverage in the southern hemisphere which has shown little if any warming ? Have new stations been discovered there ?
    – Surely any new Arctic stations within a single 5×5 degree cell will only affect the average temperature in that one particular cell, reducing its error – certainly not the global average.

    I hope they release the new station data soon so that these claims can be independently verified !

  52. This is blatant fraud and has nothing to do with science. Political activists have hijacked climate science. These people have a political agenda – the destruction of capitalism, freedom of speech and thought and the imposition of a green dictatorship. They will do anything to impose their foolish goals . . . Anything ! Morality means nothing to these sick-in-the-head people.

  53. So those that insist on integrity in data sets are anti-science Luddites? Those that commit fraud are the Defenders of the Earth (TM).
    It’s no good trying to reason with these people. It’s like talking to farm animals.

  54. Another way of looking at it, perhaps?
    1) First attempt – they got it wrong
    2) they adjusted, saying it was now right but still it was wrong
    3) they adjusted yet again and still not right
    4) more adjustments, so all of the previous adjustments and their reasons were incorrect or badly judged. And yet they NOW say it’s RIGHT!
    Hmm, this really is crying wolf a bit too often……..

  55. Peter Ward says:
    March 19, 2012 at 10:14 am
    There is of course nothing wrong with amending data in the light of increasing knowledge. But that implies a process which can be explained and justified, which is transparent and is published so that all can understand and comment

    EVERYTHING is WRONG about adjusting past data. Past data is sacrosanct. You may manipulate data in the present, but you MUST save the old and raw data for post-analysis. These clowns manipulate the data and then discard the original.

  56. Obviously the way to fix global warming is just to wait. Apparently the temperature in the past gets lower all by itself. By 2022 the 2012 temperature will have dropped all the way to normal – whatever that is.

  57. A pity that they’ve no doubt learned to be more reticent with their email and we won’t get to see them discussing this. No doubt someone would have argued for the need to counter the “no recent warming” meme. Anything for the cause you know.

  58. Just out of interest, have any of these ‘Climate Centers’ passed a quality audit – the ISO 9000 series, for example? Are governments making decisions based on data that does not meet the quality-assurance mandates governments impose on their corporate suppliers?

  59. “Steven Mosher says:
    March 19, 2012 at 12:43 pm …”

    So what was (according to Moshtemp) the average temperature in Reykjavik in 1940: 5°C or 3°C?

  60. Steven Mosher, yes, it’s just ‘lucky chance’ that all adjustments happen to be in one direction and help to support ‘the cause’ which is keeping them in gravy. Now how about they provide good scientific reasons for the need for these ‘adjustments’ and make the process transparent – for if their science is OK, they really can’t have any reason not to do so.

  61. so Steve M… if I have a stall selling apples, and I want to compare this year’s sales with last year’s, can I add the new stall I opened this year, then compare the two stalls and decide that last year I had a terrible year selling apples, as I sold many more this year?
    Next year, when I open my third stall, I will realise that my original stall hardly sold any apples at all. And I thought I was doing quite well, that first stall allowing me to open new stalls and all. Gosh, how little I knew.
    Blimey… good job I don’t sell oranges or pears as well, or I would not know what to think.

    Ok… sarc off. I think we are all grown up enough to know exactly why extra data is added, and why it’s not added sometimes. We have all sat around enough boardrooms and decided how to make the data fit what we want to project. But let’s not actually believe the output of our own data manipulations, eh?
    I think the posters pouring scorn and satire have the right attitude. It is, after all, laughable.

  62. Steven Mosher says:
    March 19, 2012 at 12:43 pm

    Questions for Steve: Where is this “new” data coming from? Are people today suddenly discovering “lost” climate data under their beds or in their closets? Do you have links to this “new” data? Can you conclusively demonstrate that the past will always cool and the present will always warm? If the new data is located physically close to existing stations, are the “new” old data consistent with the “old” old data?

    Thanks.

  63. Steve Mosher,
    Please explain where greenhouse theory allows the surface to warm at a faster rate than the troposphere. As I recall, that is not what the “theory” predicts. You have avoided this contradiction at every turn.

  64. In 1865, meteorologists in Sion, Switzerland, measured an average temperature of 10.5°C; that is about 1.3°C above the 1961–1990 mean of 9.2°C.
    According to the “homogenized” data of MeteoSwiss, it was actually 7.9°C, that is 1.3°C below the average:

    http://www.meteoschweiz.admin.ch/web/de/klima/klima_heute/homogene_reihen.Par.0054.DownloadFile.tmp/vergleichoriginalhomogen.pdf

    So Hansen is, in comparison, actually quite cautious, almost conservative.

  65. Mosher,

    That can’t explain why they are slicing between 0.3 and 2°C off individual stations. Such a strategy implies they view their data adjustments (model) as more reliable than the actual data.

  66. Steven Mosher says:
    March 19, 2012 at 12:43 pm

    Its not surprising that when you add more Northern Latitude data that the present warms.

    This has been shown before. It’s pretty well known.

    As you add SH data you will also cool the past. This is especially true in the 1930-40 period as well as before.

    Are you trying to say that adding stations in the Southern Hemisphere makes (eg) Reykjavik’s past colder and present warmer? I would love to see the physical mechanism for that.

  67. An ultimate way to hide the decline is the Metoffice website.

    http://www.metoffice.gov.uk/climatechange/science/monitoring/hadcrut3.html

    Declining temperatures are obscured by a side ad.

    @Steven Mosher, more Arctic data does not mean the global average would become warmer. The average is not calculated as a simple average of the available stations; more Arctic stations just mean the individual Arctic grid values would be more precise.
    As somebody said above, those new Arctic data must be warm as hell if the relatively tiny Arctic now skews the WHOLE GLOBAL RECORD upwards instead of declining. Gotta wait for the McIntyre analysis.

  68. If you add in the Arctic stations, the 1940s should get warmer (not colder).

    This rationale is completely bogus. The records previously showed polar amplification made the 1940s even warmer in the Arctic than the general global temp anomaly.

    (But it might explain the recent flurry of Arctic records changing 1940s cooling – they were making room for Hadcrut4 to increase the warming trend).

    On the other hand, the Gore Effect will now strike. The Arctic is going to get very cold now. Then they will have to come up with some other adjustment process etc. etc. (why don’t they put 5.35*ln(CO2now/CO2orig)*0.81C into the global temperature record right now – they know they want to – just do it).
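The back-of-envelope expression in the parenthesis above can be evaluated directly. The CO2 concentrations below are illustrative assumptions, not given in the comment (the customary pre-industrial ~280 ppm and a then-current ~390 ppm):

```python
import math

# The commenter's expression: 5.35 * ln(CO2now / CO2orig) * 0.81.
# 5.35 W/m^2 is the standard simplified CO2 forcing coefficient;
# 0.81 C per W/m^2 is the sensitivity factor the commenter uses.
c_orig, c_now = 280.0, 390.0                # ppm -- assumed values
forcing = 5.35 * math.log(c_now / c_orig)   # ~1.77 W/m^2
delta_t = 0.81 * forcing                    # ~1.44 C

print(round(forcing, 2), round(delta_t, 2))
```

With those inputs the formula yields roughly 1.4 °C, which is the size of adjustment the commenter is sarcastically inviting them to apply in one go.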

  69. Kevin says:
    March 19, 2012 at 10:33 am

    Statistically, shouldn’t adjustments themselves have zero trend? It looks like the adjustments themselves account for 0.5 deg C per century of warming.
    _______________________________________
    Yes. It is one of the methods of figuring out if you have fraud going on in a laboratory. I have fired a few rears after catching some idiot tech massaging data (ain’t statistics great). Too bad the taxpayer cannot do the same with these jokers.

  70. I think we need a dose of the way-back machine again. I was more or less ignoring “global warming” until 2008, when I attended a conference and in one session the speaker said, “You don’t need to take a poll or google it or ask your friends about global warming, everything you need to know about global warming is right here.” (I’m paraphrasing, of course, that was 5+ years ago and I didn’t record anything) and gave the realclimate web address. He also said, “The debate is over…” and that’s when I was sure I needed to research this.

    So when I got back home to my desk, I first checked out the realclimate website. This conference was the annual GovEnergy conference, held in Phoenix 4-6 August 2008, although I wasn’t back at my desk ’til Monday the 11th, so I visited realclimate some time after that, I can’t remember if it was days, weeks or months. The very 1st thing, top of the bar, I think, but could have been posted before I went to the conference, was an article declaring, “…we know global warming is happening, but we can’t find it in the record. So the record must be wrong… and we’re setting about to fix that.” I can’t remember who wrote and/or posted this article, but I thought it was the Webmaster, Mr. Gavin Schmidt himself. But I could also be wrong about that. Nonetheless, I took that as my clue to research fast, before everything disappeared. Furthermore, this altering of the records comes as no surprise, we were warned!

    Since then, however, I have tried to search the realclimate archives and I can’t find that article. This is why I guess we need the way-back machine; I think it was “disappeared”. Is there anybody who can help?

  71. First, reading about this some weeks ago when it was announced, I got annoyed about the HOT SPOT CHASING and then the REDRAWING of PAST TEMPS… Now I have calmed down, because once we have the Warmists out one day, we will restore the temp statistics to the previous HadCRUT2 and 3 values… and put the forgers where they belong… Result: a forgery is NOT permanent but only temporary, for 1–2 decades, until the Warmists throw in the towel… But all their forging will not be capable of HIDING THE temp DECLINE OF THE FUTURE… global temps have reached their top plateau and cannot rise any further; they will even go down 0.1°C per decade. Considering ongoing forgery towards HadCRUT5, 6, 7, 8 aimed at hiding this 0.1°C decline, the plateau will continue for the next decades and stay constant instead of decreasing. So we will have to live with it until the tide turns…
    JS

  72. Andrew30 says:
    March 19, 2012 at 11:10 am

    This is why the Arctic Sea Ice levels are so important.
    ____________________________
    Who says those cannot be “Adjusted” too? We are not the keepers of the data; “They” are. Thank goodness for Dr. Spencer.

  73. What’s the problem of most of you people here with climate change? I give three options:

    (1) Nothing changes and the ‘scientists’ got it all wrong.
    (2) There is some minor, negligible change.
    (3) The earth is warming and changes the world as we know it.

    If I think about which option to choose, I go for number (3). Why? First of all, regarding this ‘adjusted data’: look at the US economic data that is changed, i.e. ‘adjusted’, all the time. There is preliminary data that is corrected after all the information has been gathered. One simply can’t know all the facts at the time. The weather station on my terrace will tell me that it was something like 45 °C last summer. In my area, the air temperature never was 45 °C. My weather station simply is not a reliable source. Its reading needs to be corrected for direct sun irradiation, adjacent walls that heat up, and so on.

    And anyway: we can’t destroy Earth. Life is bigger than human existence. We could bomb the whole planet and life would not be gone. There would be some kind of climate and some kinds of animals and plants. So why care about the current climate? Because I really like the climate the way it is right now. I mean: drylands could become fertile with climate change, and wetlands could become farmland. What is positive and what is negative? The world will go on and adapt to every change we make, but will you still want to be where you are right now? Doing what you are doing right now?

    What these ‘scientists’ are trying to preserve is the world as we know it.

  74. Mosh, could you elaborate or point me in the right direction? Why would adding more data cool the past and warm the present in reference to average global temp anomaly? I am assuming the data is previously unaccounted for temp anomalies. I don’t understand how you can assume a long term temperature anomaly, one way or the other, from geographic location.

    Thanks,
    JM

  75. How can anyone make any decisions based on historical trends when the history keeps changing?

    Why, tomorrow 1951 may become as hot as Venus or as cold as Pluto! And the next day it could reverse, depending on political and budgetary needs.

  76. “David A says:
    March 19, 2012 at 11:08 am
    Of course Hadcrut and GISS will now move even further from the satellite measurements. Any minute now we can expect Steve Mosher to swing by to tell us that is all a-ok, when clearly it is not.”

    You were wrong; it actually took 1 hour and 35 minutes.

  77. Anybody from Canada know why the UK was deprived of your data or was it by Met Office choice?

    The following was issued by the Met office on 2nd Dec 2010:-

    “Global-average annual temperature forecast”

    http://www.metoffice.gov.uk/research/climate/seasonal-to-decadal/long-range/glob-aver-annual-temp-fc

    Scan down to the bottom of the page:-

    Figure 3: “The difference in coverage of land surface temperature data between 1990-1999 and 2005-2010. Blue squares are common coverage. Orange squares are areas where we had data in the 90s but don’t have now and the few pale green areas are those where we have data now, but didn’t in the 90s. The largest difference is over Canada.”

    Why does the Met Office no longer have the Canadian land surface temperature data? I am not aware of any of the stations being closed. If they have been, could somebody please point me in the right direction?

  78. No doubt they will argue that these adjustments were required in the interests of accuracy. And no doubt also they have all their arguments marshalled ready and waiting should someone wish to dispute what they have done. However there is no need for us to actually dispute the reasons for these adjustments. It is enough to note that there is a clear pattern of temperature readings declining with age of reading for whatever reason. We really don’t care about why that is so. It doesn’t matter. The mere fact that temperature readings fall with age of reading is indisputably something that should be taken into account in computing temperature trends.

    The way to do this would be to graph temperature reading decline against age of reading and model it (as is traditional) with a linear model. This “rate of decline with age” should clearly be deducted when computing temperature trends.

    Let me stress that this argument is indisputably correct regardless of whether the adjustments were made for proper reasons. That is because we may expect that current measurements of temperature will also be adjusted downwards as they age. The reasons for that adjustment are irrelevant. To refute this argument it would be necessary to explain why the clear pattern of temperature measurements declining as they age should not be expected to continue into the future.
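    The “rate of decline with age of reading” correction proposed above amounts to a one-line linear fit. A minimal sketch in Python, where the revision sizes, reading ages and the 0.8 C raw trend are all made-up illustrative numbers, not actual CRU figures:

```python
import numpy as np

# Hypothetical observations: how much (in deg C) past temperatures were
# revised downward, versus how old the reading was at revision time.
age_years  = np.array([10, 20, 40, 60, 80, 100], dtype=float)
revision_c = np.array([-0.02, -0.05, -0.09, -0.15, -0.20, -0.26])

# The traditional linear model: revision = slope * age + intercept.
slope, intercept = np.polyfit(age_years, revision_c, 1)
print(f"decline rate: {slope * 100:.2f} C per century of reading age")

# Per the argument above, deduct that decline-with-age rate from any
# computed warming trend (the raw trend here is also hypothetical).
raw_trend_c_per_century = 0.8
corrected = raw_trend_c_per_century + slope * 100
print(f"corrected trend: {corrected:.2f} C per century")
```

    With these invented numbers the fitted decline is about 0.26 C per century of age, so the hypothetical raw trend would be reduced accordingly.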

  79. New data or new interpretation?
    It is a bit confusing that, both in the post and in the comments, no clear distinction is made between the data as they were collected in the past on the one hand, and the interpretation of them as presented in a graph on the other.
    This distinction is paramount. To be able to understand the reasoning behind the interpretation, and the conclusions drawn on the basis of the data as they are presented, you cannot do without it.

    Citation: “Britain’s Climatic Research Unit (CRU), which for years maintained that 1998 was the hottest year, has published new data (…)”

    As I read on I get the impression it is not about the original data but about ways they are interpreted, corrected, and presented in graphs with new data added.

    When you coin a phrase like “changing the data” it gives the impression someone has messed up the original collected “primary” or “raw” data. No surprise you see words like “fraud” or “should go to jail” in the comments. But has this really happened?

    In my opinion it is up to anyone any time to interpret primary data as they like or combine with new data, as long as they don’t change the original “raw” datasets. And of course, as a scientist, you better explain your interpretation and be open to scrutiny.

  80. Steven Mosher says:
    March 19, 2012 at 12:43 pm

    So you’re saying that Hansen’s method of filling in the gaps doesn’t work because adding more data changes the result?

  81. Surely fraud charges have to be in the works by now. A new twist on “hide the decline” or same ol same ol for “climate science” which btw is NOT science.

  83. In real science you try to get the models to fit the data.

    In ‘climate science’, the data has to be changed to fit the models.

    Most ‘climate science’ models are pre-set to demonstrate catastrophe is imminent. The real data shows this is nonsense, so the data has to be manipulated/tortured/adjusted to confirm that catastrophe is imminent.

    The inconvenient fact is that real data is not allowed in climate science unless it has been properly manipulated to reflect the findings of the models; this process of manipulation is now starting to accelerate.

  84. All these comments, but no real answers as to what can be done. Why do we accept the manipulations of data that emanate from HADCRU et al? What’s the point of posters here agreeing with one another that this sort of behaviour by so-called “scientists” is totally unacceptable? It is, but what can be done about it? I dunno, but surely someone must. Aren’t you all totally frustrated and aghast at the bastardisation of science? I am, but what is to be done? Nothing, it appears, as this type of data manipulation is increasing. Governments don’t want to know. Climate scientists are protecting their salaries, so we’re stuffed!! Good bloody Oh

  85. I’ll bet they got a shipment of the Arctic temperature data that Hansen makes up from nothing, extrapolating Arctic warming over huge regions with no data, using data from Arctic Rim sites that are suffering UHI, as all of these sites are in or near settlements or airports.

    I’ll bet good ol’ Hansen is such a talented “scientist” that he can take a datum point from a single temperature site and come up with an entire temperature map of the world and a long-term trend. It must be awesome to be so talented!

  86. “This has been shown before. It’s pretty well known.”

    I see that Steven Mosher can still make vague, unscientific references.

    Andrew

  87. This makes me so f&$*&%g outraged…

    How are they getting the Arctic data? There are very few temp sensors there. Do these temps agree with what we can get from the satellite data? What does the satellite data show?

  88. Billy Liar says:
    March 19, 2012 at 2:44 pm

    So you’re saying that Hansen’s method of filling in the gaps doesn’t work because adding more data changes the result?

    No, he’s saying that Phil Jones was told to ‘get with the program’ and added in Hansen’s faked-up Arctic data where formerly there was no data.

  89. This is just plain wrong. We have OBJECTIVE SATELLITE MEASUREMENTS SINCE 1979 that have an accuracy of 99.99%, so I see no reason to change anything, but organizations like NOAA, NASA, and CRU think it’s OK to make up data and use it to prove their anti-capitalist, global warming alarmist agenda. This should be a good sign for the skeptics, as we know that things aren’t turning out as ‘planned’ for them, and the world isn’t really warming as they thought it would. I would expect more data tampering and misrepresentation in the future from organizations like these. Regardless, the AMSU temperatures since 2008 (the flip to the cold PDO) have cooled about half a degree centigrade, and mid-tropospheric temperatures are near ALL-TIME LOWS, so clearly the IPCC’s hot spot from CO2 isn’t happening. This fall doesn’t look any better, as the Japanese climate model, which accurately predicted last winter’s and fall’s temperatures, predicts that the world will enter the “Icebox”. With the El Nino coming on next winter, the alarmists won’t have any reason to claim “catastrophic global warming” here in the US like they did this year.

  90. Morten Sperger says:
    March 19, 2012 at 2:20 pm

    What’s the problem of most of you people here with climate change? I give three options:….

    ….Because I really like the climate the way it is right now. I mean: drylands could become fertile with climate change, wetlands could become farmland. What is positive and what is negative? The world will go on and adapt to every change we make, but will you still want to be where you are right now? Doing what you are doing right now?

    What these ‘scientists’ are trying to preserve is the world as we know it.
    ________________________________________
    Cow Manure!

    The Climatologists are in it to move the world into a “Socialist” one-world totalitarian government. CAGW is just the lever they are using to do it, and lying, dishonest activities are perfectly acceptable if used in furtherance of the “CAUSE”. We have had ample evidence that they lie and cheat, Gleick being just the latest.

    I suggest you look at: Finally somebody comes right out and says it: climate + world governance is a match made in green heaven here at WUWT and Climate Coup — The Politics and Climate Coup — The Science at Jonova’s site.

    Activist admits lying is good politics: http://atomicinsights.com/2012/03/conversation-with-an-anti-society-antinuclear-activist.html

    Most of us here have been looking at the science AND THE POLITICS for years. Only actual hard evidence not fudged data sets will convince us.

    Here is an example of the fudging: http://notalotofpeopleknowthat.wordpress.com/2012/03/15/an-adjustment-like-alice/

    and another:

    http://wattsupwiththat.com/2009/12/08/the-smoking-gun-at-darwin-zero/

    http://wattsupwiththat.com/2009/12/20/darwin-zero-before-and-after/

  91. Steven Mosher says:
    March 19, 2012 at 12:43 pm
    And as you add more current data from the extreme high latitude you can expect the present to warm.

    Well that sounds plausible enough. But my huge problem is with the title of Louise Gray’s article: “Met Office: World warmed even more in last ten years than previously thought when Arctic data added”

    I thought global warming started in 1750 and really should have taken off around 1945 when CO2 greatly increased. So exactly what supposedly happened in the last ten years that was different from what happened in 1998? It seems to me that “global warming” is the wrong term. The correct term should be “recent alleged north polar warming”.

    There is also the matter of the enthalpy of the air. It seems as if most of the increased warming is due to extremely cold and relatively dry Arctic air warming up. It takes a lot less energy to warm dry air from -40 C to -30 C than to warm moist air from +30 C to +40 C. However, I do not believe this is factored in.
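    The enthalpy point can be illustrated with standard textbook psychrometrics (the Magnus formula for saturation vapour pressure; the 80% relative humidity and the two temperature ranges are my assumptions, chosen to match the comment):

```python
import math

def sat_vapour_pressure_kpa(t_c):
    """Magnus approximation for saturation vapour pressure over water, kPa."""
    return 0.6112 * math.exp(17.62 * t_c / (243.12 + t_c))

def enthalpy_kj_per_kg(t_c, rh, p_kpa=101.325):
    """Moist-air enthalpy per kg of dry air (kJ/kg), textbook constants."""
    e = rh * sat_vapour_pressure_kpa(t_c)
    w = 0.622 * e / (p_kpa - e)       # humidity ratio, kg water per kg dry air
    return 1.005 * t_c + w * (2501.0 + 1.86 * t_c)

# Energy to warm cold, nearly dry Arctic air from -40 C to -30 C ...
dh_cold = enthalpy_kj_per_kg(-30, 0.8) - enthalpy_kj_per_kg(-40, 0.8)
# ... versus humid tropical air from +30 C to +40 C at the same 80% RH.
dh_warm = enthalpy_kj_per_kg(40, 0.8) - enthalpy_kj_per_kg(30, 0.8)
print(f"Arctic 10 C step:   {dh_cold:.1f} kJ per kg dry air")
print(f"Tropical 10 C step: {dh_warm:.1f} kJ per kg dry air")
```

    With these assumptions the tropical step takes roughly five times the energy of the Arctic one, which is the commenter’s point about averaging bare temperatures across very different air masses.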

  92. Presumably there is a full audit trail of the changes made by NCDC, showing the changes made, the date of change and the reason?

  93. James Allison says:
    March 19, 2012 at 2:42 pm

    How does HadCRUT4 data compare with satellite data?

    We will not know for sure until it comes out and we can plot it. But see the graphs below. Presumably HadCRUT4 will be more like GISS.

    In the graphs below, there are 4 slopes from December 1978 when the satellites started. Without looking, do you care to guess which one is GISS?

    http://www.woodfortrees.org/plot/uah/from:1978.9/trend/offset:0.31/plot/rss/from:1978.9/trend/offset:0.22/plot/gistemp/from:1978.9/trend/plot/hadcrut3gl/from:1978.9/trend/offset:0.08
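    For anyone wanting to reproduce such slope comparisons without woodfortrees: the “trend” lines there are ordinary least-squares fits over the monthly anomalies. A minimal sketch with synthetic data (the 0.15 C/decade trend and noise level are invented; real UAH/RSS/GISS/HadCRUT files each have their own column layout):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly anomalies from December 1978: an injected 0.15 C/decade
# trend plus weather noise (both numbers invented for illustration).
months = np.arange(12 * 33)                  # 33 years of monthly values
years = 1978.9 + months / 12.0
anoms = 0.015 * (years - 1978.9) + rng.normal(0.0, 0.1, months.size)

# A woodfortrees-style "trend" line is an ordinary least-squares fit.
slope_per_year, intercept = np.polyfit(years, anoms, 1)
print(f"trend: {slope_per_year * 10:.3f} C/decade")
```

    Running the same fit over each dataset for an identical period is what makes the four slopes in the linked graph directly comparable.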

  94. Green Sand says:
    March 19, 2012 at 2:38 pm

    ….Figure 3: “The difference in coverage of land surface temperature data between 1990-1999 and 2005-2010. Blue squares are common coverage. Orange squares are areas where we had data in the 90s but don’t have now and the few pale green areas are those where we have data now, but didn’t in the 90s. The largest difference is over Canada.”

    Why did the Met Office no longer have the Canadian land surface temperature data? I am not aware of any of the stations being closed? If they have been could somebody please point me in the right direction?
    _______________________________________
    That makes absolutely no sense. I knew the guy in Toronto who was doing the data keeping for Canada. As far as I know he was still alive and kicking in 2007.

  95. Werner Brozek says, March 19, 2012 at 11:29 am:

    “With regards to the top comment, how can “the world (be) warming even more” if it has not been warming for 15 years by their own admission? For proof of the lack of warming for about 15 years, see:”

    http://www.woodfortrees.org/plot/hadcrut3gl/from:1995/plot/hadcrut3gl/from:1997.25/trend

    The 15 year period starting with 1997.25 starts with a century class El Nino and ends with a double-dip La Nina.

    A much more fair period would be one selected for lack of upward or downward trend in ENSO or AMO. For example, the 13 year period from the beginning of 1999 to the beginning of 2012. There, HadCRUT3gl has an upward linear trend of 0.044 degree/decade. I suspect this may be close to the actual rate of warming from the anthropogenic increase of CO2.

    Since CO2 has increased at a rate of around 0.066 doublings (log scale) per decade from 1980 to 2010, climate sensitivity to CO2 change *may be* 0.67 degree C per doubling or halving of CO2.
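    The arithmetic in that comment is just a division, and it can be checked in a few lines. The CO2 values below are approximate Mauna Loa annual means that I have supplied for illustration; with them the answer lands near the commenter’s figure:

```python
import math

# Approximate Mauna Loa annual-mean CO2 (ppm); my values, for illustration.
co2_1980, co2_2010 = 338.7, 389.9
decades = 3.0

doublings_per_decade = math.log2(co2_2010 / co2_1980) / decades
trend_c_per_decade = 0.044     # the HadCRUT3gl 1999-2012 trend cited above

sensitivity = trend_c_per_decade / doublings_per_decade
print(f"{doublings_per_decade:.3f} doublings per decade")
print(f"{sensitivity:.2f} C per doubling")
```

    With these CO2 values the rate comes out near 0.068 doublings per decade and the implied sensitivity near 0.65 C per doubling, consistent with the commenter’s 0.066 and 0.67.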

  96. On the one hand, as scientists we are ethically bound to the scientific method. On the other hand, we are not just scientists but human beings as well. To do that we need to get some broad based support, to capture the public’s imagination. That, of course, means getting loads of media coverage. So we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have. Each of us has to decide what the right balance is between being effective and being honest.
    ~Steven Schneider, Climate Alarmist

    Hadcrut4 and GISS have made their choice: they have decided to be effective.

    And this thread wouldn’t be complete without Edenhofer’s candid quote:

    “One must say clearly that we redistribute de facto the world’s wealth by climate policy. One has to free oneself from the illusion that international climate policy is environmental policy. This has almost nothing to do with environmental policy anymore.”
    ~Ottmar Edenhofer, Co-Chair UN/IPCC WG-3

  97. Mosher: “Its not surprising that when you add more Northern Latitude data that the present warms.”

    Really? What do you get when you amplify zero, Mosher? If the Northern Latitudes show warming amplification relative to what the rest of the globe experiences, then the amplification of zero warming is still zero warming. So adding northern stations should not affect the trend for the past 14 years.

    “This has been shown before. It’s pretty well known.”

    Yeah, now all you have to do is think about what it means.

    “As you add SH data you will also cool the past.”

    Why?

    More data. Folks used to clamor for more data.

    This is stupid Mosher. You want to claim that the areas of the earth that were not measured had different temperature trends than the ones that were. But convection would assure that no areas would maintain their own trends for more than a couple of decades. You can’t suddenly show up with some new stations and claim a complete change in trend. It’s simply impossible. Areas of the earth have their own weather, not their own climate. Of course if you overrepresent stations with a shore ice effect you can diddle the numbers. If you drop the reading of arctic SSTs when those are available and replace those readings with shore stations subject to shore ice effect, you can diddle the numbers.

    You know that the satellites are much closer to reality and that the surface stations are nothing more than a political shell game. So why do you go on with this dumb charade, Mosher? I can only imagine the rationalizations that you let your left hemisphere invent for you.

  98. ddd says:
    March 19, 2012 at 2:47 pm

    1998 is cooler by 0.01°C in the new hadcrut4 not more than 0.1° like in the false joe d’aleo graph….also the anomaly values are all wrong.

    Thank you for that. But now I have several questions and comments. First of all, there is a different HadCRUT3 data set at:

    http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3gl.txt

    This one does show the 0.07 C gap between 1998 and 2010 that Louise Gray alluded to with:

    “Between 1998 and 2010, temperatures rose by 0.11C, 0.04C more than previously estimated.”

    Things get confusing with two different HadCRUT3 sets! Another point is that with the one you are referencing, 1998 does NOT change by even 0.01 C. However, all other dates in the 2000s change by 0.03 to 0.06 C. It looks suspicious to me.

  99. cui bono says:
    March 19, 2012 at 4:03 pm

    Presumably there is a full audit trail of the changes made by NCDC, showing the changes made, the date of change and the reason?
    _______________________________
    Only if they were REAL scientists. If you try to get that info you will probably get the usual “My Polar Bear Ate My Data” (another cartoon idea, Josh).

    This time they are saying they added in the Arctic temps except there is not much data. And the data has problems see: http://wattsupwiththat.com/2010/09/22/arctic-isolated-versus-urban-stations-show-differing-trends/
    …All the GISS temperature anomaly maps show the Arctic warming faster than the rest of the globe, especially northern Alaska and Siberia, but the satellite data shows a different pattern…

  100. Ian H says:
    March 19, 2012 at 2:38 pm

    No doubt they will argue that these adjustments were required in the interests of accuracy. And no doubt also they have all their arguments marshalled ready and waiting should someone wish to dispute what they have done. However there is no need for us to actually dispute the reasons for these adjustments. It is enough to note that there is a clear pattern of temperature readings declining with age of reading for whatever reason. We really don’t care about why that is so. It doesn’t matter. The mere fact that temperature readings fall with age of reading is indisputably something that should be taken into account in computing temperature trends.

    The way to do this would be to graph temperature reading decline against age of reading and model it (as is traditional) with a linear model. This “rate of decline with age” should clearly be deducted when computing temperature trends.

    Let me stress that this argument is indisputably correct regardless of whether the adjustments were made for proper reasons. That is because we may expect that current measurements of temperature will also be adjusted downwards as they age. The reasons for that adjustment are irrelevant. To refute this argument it would be necessary to explain why the clear pattern of temperature measurements declining as they age should not be expected to continue into the future.
    —————————————————————
    Ian, I like the way you think!

    Of course, following your argument to its logical conclusion, it means that either they will have to truncate the decline in past temperatures at some point, or admit that their global temperature estimates are chronically overstated. And if they are, it will play hell with their forcing models. Alternatively, if they truncate the downward revisions, they can expect some pretty searching questions about splicing issues.

    Although, not a lot of remorse about data splicing has been exhibited in the past.

  101. Donald L Klipstein says:
    March 19, 2012 at 4:11 pm
    The 15 year period starting with 1997.25 starts with a century class El Nino
    and ends with a double-dip La Nina.

    I agree with your sentiment. If it makes any difference, there are two reasons that I could not start the graph in January of 1997. The January 2012 anomaly of 0.218 has been deleted and the February anomaly is not out yet. Of the February anomalies that are out, RSS and UAH show a slight decrease and GISS shows a slight increase. So I would expect that if February were out, I could go back to January 1997 showing basically a flat line. For HadCRUT3, anything below 0.4 would push the months back where the slope is 0. However RSS does go back to the middle of the La Nina since there I can go to December 1996 and get a flat line. And with the January and February values, the claim that this is the warmest La Nina on record is no longer as convincing as it was before. See: http://www.woodfortrees.org/plot/rss/from:1995/plot/rss/from:1996.9/trend

  102. And so it was The Little River Band who wrote a forward looking ode to the climate change scientists of today….

    Time for a cool change
    I know that it’s time for a cool change
    And I know that my life is so pre-arranged
    I know that it’s time for a cool change

  103. Gail Combs says (March 19, 2012 at 4:31 pm)
    —-
    Nice one about the polar bears! And I guess if the northern land ice melts, polar bears are going to have to s**t in the Medieval Warm Period woods revealed beneath. :-)

  104. These global warming alarmists insist that loss of sea ice would be BAD for animals like the polar bear, but then you have to wonder why the scientific name of the polar bear means “sea bear”, hmm……. wonder why that is? :)

  105. Gail Combs says:
    March 19, 2012 at 4:12 pm
    That is why there has been such a fight over getting the raw data…
    – – – – – – – –
    You would think any publication is only to be considered of scientific merit if at least the primary data are available to the scientific community. There are more criteria of course, but this one looks pretty basic to me. If it is not met, something is very wrong.

    How many publications in “climate science” will pass this simple test? It should be a “sine qua non”. In any science. Not open to debate you would think…

  106. I just love how this year, when the rest of the world has been in an icebox, the warmists scream about the US warmth as proof of AGW, when Europe experienced some of the worst cold in over 40 years and Alaska almost broke the all-time record low temperature for North America. Then, to top it off, areas like Anchorage are within 2 inches of breaking their all-time record for most snow, the Bering Sea is still over halfway covered in ice, and it’s spring!! Then Siberia and most of northern Asia experienced yet another cold and brutal winter, and only now are they warming up. Meanwhile, Arctic sea ice is approaching the satellite-era average, the Antarctic is above average, and global tropical cyclone numbers are near all-time lows. Don’t hear a peep out of them. If they find it hard now to find areas of warmth, just wait until next fall and winter come around; nature is going to give them quite the cold shoulder!!

  107. To paraphrase Comrade Stalin: “Controlling who counts the votes is more important than controlling who votes.”

  108. Donald: “The 15 year period starting with 1997.25 starts with a century class El Nino and ends with a double-dip La Nina.”

    This is nonsense. That 98 El Nino was immediately followed by two years of La Nina. The effects of the two on the trend cancelled out. That is why ENSO-corrected data has almost exactly the same slope as uncorrected data – namely, none.

    “A much more fair period would be one selected for lack of upward or downward trend in ENSO or AMO. For example, the 13 year period from the beginning of 1999 to the beginning of 2012.”

    Wrong again; you have chosen to start at the beginning of a long La Nina. The best option is to use both the 98 El Nino and the following two years of La Nina.
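    The disagreement above over start and end points can be made concrete. One common technique in published ENSO-adjustment analyses is to regress the anomaly series on time and an ENSO index together, so the trend is estimated with ENSO influence held fixed. A toy sketch with entirely invented numbers (the spike dates, index values and coefficients are not real data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_months = 180                               # a 15-year window, monthly
t = np.arange(n_months) / 12.0               # time in years

# Invented ENSO-like index: a strong "El Nino" spike near the start of the
# window and a "double-dip La Nina" at the end (all values are made up).
enso = np.zeros(n_months)
enso[6:18] = 2.0
enso[156:180] = -1.5

# Synthetic anomalies: a true +0.10 C/decade trend, plus an ENSO effect,
# plus weather noise.
anoms = 0.010 * t + 0.08 * enso + rng.normal(0.0, 0.05, n_months)

# Naive fit: the warm start and cool end drag the raw trend toward zero.
raw_trend = np.polyfit(t, anoms, 1)[0] * 10.0        # C/decade

# Multiple regression on time AND the ENSO index recovers the true trend.
X = np.column_stack([t, enso, np.ones(n_months)])
beta, *_ = np.linalg.lstsq(X, anoms, rcond=None)
adj_trend = beta[0] * 10.0                           # C/decade

print(f"raw trend:           {raw_trend:+.2f} C/decade")
print(f"ENSO-adjusted trend: {adj_trend:+.2f} C/decade")
```

    Once the ENSO term is fitted explicitly, the question of whether 1997.25 or 1999 is the “fair” start date matters far less.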

  109. Werner Brozek says:
    March 19, 2012 at 4:06 pm

    Yeah, thanks – I thought that would be the case. I can see why “The Team” are now consistently ridiculed by the majority of climate followers and only supported by a few fringe far-left eco-greenies with uninformed, religious-like fervour. And really, we hardly see any of them trolling WUWT these days. Who’s left? Physee, Lazy, Gatetsy spring to mind…. my apologies to those I didn’t mention. It’s almost kinda sad, except for the legacy of increased costs, destroyed tropical forests, the opportunity cost of tax money spent, and world governments lapping up increased taxes etc etc. Anyway, I hear the next new religion is gonna be environmental sustainability. Something I’m sure they’ll be able to latch onto with equal fervour, resulting in similar socio-economic costs and consequences for the societies we live in. Amen.

  110. “Who controls the past controls the future. Who controls the present controls the past.” ~ George Orwell

    I guess now that past and present ground-based temperatures are “fixed” (he, he), the satellite temperatures will need “recalibration” too, to match the Brave New Reality…

    And so it goes……until it doesn’t…..

  111. Maurizio – Omnologos asks why the Russians cannot cool the past too. My understanding is that it was already substantially cooled in Stalin’s time, especially in winter, because the state handouts to the towns depended on how cold it was, and no meteorologist worth his life would report temperatures less cold than required to get enough to eat. What we should ask (if I am not mistaken) is what efforts have there been, or can there be, to put these Stalinised temperatures right – making the past warmer! (oh-oh, forbidden zone)
    Today we have a similar problem: none of the GW fraudsters will do anything to negate the message of their gravy train.
    I think we can safely conclude, without fear of contradiction by any rational, objective, sentient being:
    1. The ‘corrected’ figures are fraud.
    2. The temperature variations, real or fraudulent, are of no consequence to man, plant or beast; the temperature changes themselves, if real – about 0.5 °C in a century – are not something that humans can even feel in a day.
    The obsession with temperatures by all sides in this non-debate is insane. What changes weather and climate is jet stream shifts and other circulation pattern changes, and these control temperature rather than the other way around. Of course – wait for it – solar activity (magnetic and particle), and lunar modulation thereof, is the main controller of weather and climate change. This approach is now able to reliably forecast certain key major changes on the sun – e.g. Earth-facing coronal holes – and key extreme features of USA weather weeks and months ahead. For example, see our WeatherAction USA confirmed predictions of the present very warm centre and E/SE USA alongside a cold West, and the preceding severe thunder, tornadoes and giant hail in the lower mid-west around mid-month – from (in this case) 3 weeks ahead, with timing to a day or so:

    http://www.weatheraction.com/docs/WANews12No17.pdf

    Anybody or any organisation which is serious about the weather needs these forecasts. However, I am finding that those in authority forego what they need to fulfil their supposed ‘duties’ to protect the public and save lives, in order to preserve the fraudulent ideology of man-made climate change.
    Thanks,
    Piers Corbyn

  112. I expect that, if these past-cooling developments continue, by 2020 NASA scientists will find out that the Younger Dryas ended in 1970.
    And young scientists will write papers trying to find out how Medieval cathedrals were constructed when, according to their data, Europe must have been covered by miles of ice…

  113. Great piece Anthony.

    “Those who cool the past are condemned to repeat it.”

    Hahaha!! Brilliant.

    In a similar, though I admit much less witty (Conradesque), vein:

    The shamelessness! The shamelessness!!

    Does the truth have no bounds for the climate hysterics and shills?

  114. Now the temperature increase since the last cyclical high in the 1940s has gone from 0.2 to 0.4 degrees, a 100% leap, and only generated by adjustments.

    All these adjustments have taken place after 2003, the year when climate science became settled, and so-called scientists and affiliated climate and other agenda warriors in universities, media and politics started to go after individuals, editors and journals with dissenting views.

  115. American Patriot said on March 19, 2012 at 12:54 pm:

    Hansen is a Marxist. He lamented to Clinton years ago about the injustices of global wealth distribution. He’s been outed many times but these Marxists are like zombies. You have to whack them more than once.

    High-quality peer-reviewed published research has conclusively shown that in the case of a zombie plague, humanity’s best option is extremely intense violence. Quarantine attempts and cures that return them to normal are losing strategies; you must eradicate the zombies whenever and however possible, as many as possible. Download the pdf, it’s all very scientific, with world-class modeling and theory.

    It’s already known how to stop zombies, you have to destroy the brain. If you wish to treat Hansen the Marxist as a zombie, then you know what to do. Going by the available info on Hansen as activist, the required action is known and has been contemplated by many people, yet still no one has been willing to deliver a strong steel-toed kick to Hansen’s butt.

  116. The future is known. It’s the past that keeps changing.

    – old Communist Soviet Union saying.

  117. OK so 2010 was the warmest year on record eh? Time for a straw poll. Who agrees? Global answers are best, please.

  118. Anthony

    You said “All up these adjustments increase the trend in the last century. We have yet to witness a new dataset release where a cooling adjustment has been applied. The likelihood that all adjustments to data need to be positive is nil. This is partly why they argue so fervently against a UHI effect and other land use effects which would require a cooling adjustment.”

    Don’t you bother checking your sources before publishing something? If you read the USHCN description of the adjustment process here http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html you will see the description of the major adjustments. Step 6 says:

    “The final adjustment is for an urban warming bias which uses the regression approach outlined in Karl, et al. (1988). The result of this adjustment is the “final” version of the data. Details on the urban warming adjustment are available in “Urbanization: Its Detection and Effect in the United States Climate Record” by Karl, T.R., et al., 1988, Journal of Climate 1:1099-1123.”

    And if you look here

    you see the effects of each of the different adjustments. Including a cooling adjustment for UHI.

    ‘they’ don’t ‘argue so fervently against a UHI effect’. They recognise it and have already included an adjustment! Check your facts a bit better Anthony.

    The largest single adjustment over time is for Time Of Observation Bias. When the time of day that readings are taken has changed, that would introduce bias into the record unless an adjustment is made for it. Other adjustments account for instrumentation changes, missing readings, and checks for bad readings.

  119. RobW says:
    March 19, 2012 at 8:18 pm

    OK so 2010 was the warmest year on record eh? Time for a straw poll. Who agrees?

    Not according to the two satellite records.
    RSS
    1 {1998, 0.55},
    2 {2010, 0.476},
    3 {2005, 0.334},

    UAH (http://motls.blogspot.ca/2012/01/uah-amsu-2011-was-4th-coldest-in-this.html)
    1 {1998, 0.428},
    2 {2010, 0.414},
    3 {2005, 0.253},

    But according to records with UHI issues, surface station issues and adjustments…..well who are you going to believe?

  120. The only reason for trying to splice together this kind of global temperature record from land-based stations is to allow comparison with the historical record. There isn’t a historical record over the Arctic, so I don’t see the point in trying to extend such measures to those regions by extrapolation.

    For temperature trends in modern times, look at either SST or the satellite record, both of which are far more reliable.

  121. Werner Brozek says in part, March 19, 2012 at 4:29 pm:

    “First of all, there is a different HadCRUT3 data set at:

    http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3gl.txt

    There are 2 versions of the annual figures of HadCRUT3. The UEA version
    uses ordinary averaging of the 12 monthly figures. The version by the Hadley
    Centre of the UK Met Office uses “optimized averaging” of the 12 monthly
    figures.

    The Hadley Centre version appears to me to show slightly more warming
    trend from 1998 onward, and slightly less ~62 year periodic component, in
    annual figures than the UEA version.
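
    [For readers wondering what the two annual figures amount to: the UEA version is just the plain mean of the twelve monthly anomalies, as in the minimal Python sketch below. The monthly numbers are invented for illustration, not real HadCRUT3 values; the Hadley Centre’s “optimized averaging” additionally weights months by data coverage and is not reproduced here.]

    ```python
    # The UEA version's annual figure: a plain (ordinary) average of the
    # 12 monthly anomalies. The monthly values are made up for illustration.
    monthly = [0.52, 0.75, 0.56, 0.64, 0.65, 0.57,
               0.67, 0.66, 0.49, 0.43, 0.36, 0.53]

    annual_simple = sum(monthly) / len(monthly)
    print(round(annual_simple, 3))  # -> 0.569
    ```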

  122. In response to Glenn Tamblyn’s message (above):

    Glenn, you argue that the adjustments to the land temp datasets are necessary and that all is above board, and you refer to the largest adjustment being for time-of-day issues. But you seem to forget – we have a record of unbiased satellite data that comprises far higher numbers of readings taken across the globe (excluding the poles) 24/7, 365 days a year and going back in time to 1979. So these are available as a true reference dataset – to compare against the land temperature readings and adjustments…

    And they tell us that from 1979, when the reference data set began, the (adjusted) land-based trends have borne increasingly less resemblance to the historical reality as more and more adjustments have been made… how do you keep a straight face mate?

  123. Something is wrong with that claimed 1980 GISS graph. The new colored graph is indeed of anomalies relative to 1951-1980. You can see the average looks like zero, as it should be.

    But the black graph seems to be quite positive through the period, except for a very small and brief dip. In fact, I very much doubt that GISS was using the 1951-1980 base in 1980.

  124. Werner Brozek says March 19, 2012 at 5:14 pm:
    >>Donald L Klipstein says: March 19, 2012 at 4:11 pm
    >>The 15 year period starting with 1997.25 starts with a century class El Nino
    >>and ends with a double-dip La Nina.

    > However RSS does go back to the middle of the La Nina since there I can go
    > to December 1996 and get a flat line. And with the January and February
    >values, the claim that this is the warmest La Nina on record is no longer as
    >convincing as it was before. See:
    >http://www.woodfortrees.org/plot/rss/from:1995/plot/rss/from:1996.9/trend

    1996.9 to February 2012 has a century class El Nino peaking about 1.4 years
    after the start time, and ends with a double-dip La Nina (for 2 NH winters) but
    does not begin with one. So, I see linear trend here being contaminated by
    downward linear trend of ENSO.

    1996.1 to February 2012 is “more fair”, but has stronger La Nina activity
    towards its end than its beginning, and a stronger El Nino towards its
    beginning than its end. I think that outweighs AMO peaking slightly after
    the center of that period. To me, this means that the linear trend here of
    about .027 degree C per decade underreports the actual effect of growth
    of CO2, but possibly not by much.
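
    [The trends quoted in this sub-thread (e.g. ~.027 degree C per decade) are ordinary least-squares slopes over monthly anomalies, converted to per-decade units – the same calculation woodfortrees.org performs. A self-contained sketch, using a synthetic series rather than real RSS data:]

    ```python
    # Ordinary least-squares trend of an anomaly series, reported in
    # degC per decade (the units quoted in this thread). The series
    # below is synthetic, not real RSS/UAH data.
    def trend_per_decade(times_years, anomalies):
        n = len(times_years)
        mean_t = sum(times_years) / n
        mean_a = sum(anomalies) / n
        cov = sum((t - mean_t) * (a - mean_a)
                  for t, a in zip(times_years, anomalies))
        var = sum((t - mean_t) ** 2 for t in times_years)
        return (cov / var) * 10  # slope in degC/yr -> degC/decade

    t = [1997 + i / 12 for i in range(180)]       # 15 years of monthly times
    flat = [0.2] * 180                            # a perfectly flat series
    print(abs(trend_per_decade(t, flat)) < 1e-9)  # flat series has ~zero trend: True
    ```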

  125. The figures in that graph showing NASA temperatures as of 1980 match Fig 3 of this Hansen paper, though it isn’t the same graph.

    There’s no mention of an anomaly period. It looks very much as though their standard was the average over the period plotted, which was 1880-1980. So of course they look higher than current figures on a 1951-1980 base. The reference base is lower.

  126. Tilo Reber says, March 19, 2012 at 6:29 pm:
    >Donald:
    >>The 15 year period starting with 1997.25 starts with a century class El Nino
    >>and ends with a double-dip La Nina.”

    > This is nonsense. That 98 El Nino was immediately followed by two years of
    > La Nina. The effect of the two on the trend cancelled out. That is why ENSO
    >corrected data has almost exactly the same slope as uncorrected data –
    > mainly, none.

    >>“A much more fair period would be one selected for lack of upward or
    >>downward trend in ENSO or AMO. For example, the 13 year period from the
    >>beginning of 1999 to the beginning of 2012.”

    > Wrong again, you have chosen to start at the beginning of a long La Nina.
    > The best option is to use both the 98 El Nino and the following two years of
    > La Nina.

    I disagree. I see better to use a period beginning and ending with two
    double-dip La Ninas, and lack of linear trend in ENSO indices. Starting at
    a time that barely includes the 1998 El Nino and ends with 2nd double-dip
    La Nina in 5 NH winters has downward linear trend in ENSO indices.

  127. After Climategate, a number of enquiries cleared Hansen and Jones et al of any wrong-doing. I believe this may have emboldened them and they can now do virtually what they please as no-one in authority will be able to take them to task – you know, the ‘oh dear, those poor scientists who are just doing their job and being bullied by those nasty deniers’ type of defence.

  128. How about commenting on the 2001 version of HadCRUT global temperature?

    I like to look at what happened from the ~1944 peak to the ~2005 peak.

    It appears to me that 2001 version of HadCRUT warmed by .215 degree C in
    61 years, roughly one cycle of AMO and of the periodic component visible in
    HadCRUT of 2001-2010 versions.

    To make the ~1944 peak appear to resemble the ~2005 peak, I would adjust
    the ~1944 peak downward, by up to .04 degree C.

    This means global temperature trend from ~1944 to ~2005 has increase of
    .215-.255 degree C in ~61 years. This means .035-.042 degree/decade.

    I have “somewhat figured” CO2 increase in that period on log scale to be
    averaging about 75% of the 1980-2010 rate of ~.066 log-scale-doublings per
    decade. This means climate sensitivity of .71-.85 degree C per doubling of
    CO2, before adjusting slightly downward for that period having a significant
    temporary growth of anthropogenic GHGs other than CO2.
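
    [The arithmetic above can be checked directly: divide the temperature trend (degC per decade) by the CO2 growth rate (doublings per decade). Using the commenter’s own figures:]

    ```python
    # Commenter's figures: trend of .035-.042 degC/decade from ~1944 to
    # ~2005, and CO2 growing at ~75% of .066 log-scale doublings/decade.
    co2_rate = 0.75 * 0.066                      # ~.0495 doublings per decade
    sens = [round(tr / co2_rate, 2) for tr in (0.035, 0.042)]
    print(sens)  # -> [0.71, 0.85] degC per doubling of CO2
    ```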

  129. Donald L Klipstein says:
    March 19, 2012 at 9:29 pm
    There are 2 versions of the annual figures of HadCRUT3.

    OK. This version has 1998 at 0.529 and 2010 at 0.470.

    http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3vgl.txt

    This version has 1998 at 0.548 and 2010 at 0.478.

    http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3gl.txt

    This version has 1998 at 0.52 and 2010 at 0.50.

    http://www.metoffice.gov.uk/news/releases/archive/2012/hadcrut-updates

    That is three different versions. Which one of these three, if any, is being changed? If none of these, what are the numbers for the real one being changed?

  130. There is far too much adjustment of raw temperature data for far too many reasons. Beneath many adjustments there is a fundamental problem with the ‘anomaly method’ of presentation.
    Here is a graph from a location I was working up last evening. Cape Leeuwin is on the S-W tip of Australia and the site has not appreciably moved in over 100 years. It is remote from UHI sources.

    The anomaly method uses a calibration period, very often the years 1961-1990, from which an average is taken. This average is subtracted from temperatures outside that period to give the residual anomaly so often presented in papers.
    Many countries in the British Commonwealth started their temperature recording in degrees Fahrenheit. Australia changed to degrees Celsius in many places around September 1972. There are different problems in reading and rounding F thermometers than in reading and rounding C thermometers. http://joannenova.com.au/2012/03/australian-temperature-records-shoddy-inaccurate-unreliable-surprise/

    The bottom block of the C. Leeuwin graph has a couple of pertinent unanswered questions. Additionally, I often wonder how often the calibration period is adjusted when adjustments of temperatures either side of it are applied for specified reasons. Are we in a loop?

    Conclusion: THE TEMPERATURES OF THE WORLD NEED A PROPER AUDIT, COUNTRY BY COUNTRY, continuing the work in the style of the JoNova blog. Much work so far has not dug deep enough into the raw.
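
    [For reference, the ‘anomaly method’ the comment describes reduces to: average a baseline window, then subtract that average from every reading. A toy Python illustration with invented temperatures:]

    ```python
    # Anomaly method: average a calibration window, subtract it everywhere.
    # The temperatures are invented; the baseline is the common 1961-1990 window.
    temps = {1958: 14.1, 1975: 14.3, 1988: 14.2, 2000: 14.6}

    base_years = [y for y in temps if 1961 <= y <= 1990]
    baseline = sum(temps[y] for y in base_years) / len(base_years)
    anomalies = {y: round(t - baseline, 2) for y, t in temps.items()}
    print(anomalies)  # -> {1958: -0.15, 1975: 0.05, 1988: -0.05, 2000: 0.35}
    ```

    [One can see the commenter’s worry directly: any adjustment applied inside the calibration window changes the baseline, which then shifts every anomaly outside it.]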

  131. I recently said in part, March 19, 2012 at 10:35 pm:

    (Referring to ~1944 to ~2005)
    “This means climate sensitivity of .71-.85 degree C per doubling of
    CO2, before adjusting slightly downward for that period having a significant
    temporary growth of anthropogenic GHGs other than CO2.”

    It appears to me that anthropogenic increase of GHGs other than CO2
    accounts for about 20% of the increase of “GHG effect” from the early 1970’s
    to about 1995, and much less at other times in the 1944-2005 time period.
    I would like to estimate that in the 1944-2005 time period, increase of GHG
    effect via GHGs other than CO2 not continuing after 2005 accounts for about
    10% of radiation forcing in that time period. At this rate, I suspect global
    climate sensitivity to CO2 change *may be* roughly around .65-.77 degree C
    per 2x change of CO2.

  132. I recently said at March 19, 2012 at 10:58 pm:

    “At this rate, I suspect global climate sensitivity to CO2 change *may be*
    roughly around .65-.77 degree C per 2x change of CO2.”

    However, if the WWII peak of AMO was stronger than the recent one,
    this figure gets larger. But, I doubt climate sensitivity to CO2 change
    exceeds 1.5 degrees C per 2x CO2.

  133. Nick Stokes.

    Looking at the 2 GISS curves – 1980 & 2010 – there is apparently another problem. The 2010 curve looks like it is the Land & Ocean index rather than the Land Only index. But the 1980 curve has to be land only, since GISS hadn’t started incorporating SSTs back then. So the two curves aren’t comparing apples with apples.

  134. Donald L Klipstein says:
    March 19, 2012 at 9:57 pm

    1996.1 to February 2012 is “more fair”

    We may reach that point in a few months, but it would be almost impossible to stay there, since 1995 was warmer than 2011; so if we reached 1996.1, we would almost automatically get into 1995 for the longest straight line. Fairness is one issue. And it is one thing to discuss fairness with a 6 year period, but if it gets to 16 years, then you can safely say CAGW is not happening.

  135. Andrew.

    You seem to share a common view that the satellite data is some sort of accurate reference that the surface data can be compared against. Firstly, the satellite data isn’t reporting surface temperatures at all. It is reporting temperatures several thousand metres up. Next, the satellite data has a lot of its own issues and isn’t necessarily any more accurate. The two principal groups who have been working on it – UAH & RSS – have produced results that have converged. However other groups who have looked at the data – Vinnikov & Grody, Fu & Johansen, Zou et al – have all come up with higher values than the other two. What can be said is that UAH & RSS use fairly similar methods, so it’s not surprising that they get similar results. That doesn’t mean they are getting the correct result.

    Zou et al for example here http://www.star.nesdis.noaa.gov/smcd/emb/mscat/mscatmain.htm are calculating a substantially higher trend than UAH or RSS for the mid Troposphere channel – TMT.

    The lower Troposphere channel reported by UAH & RSS – TLT – which is the one most often cited when comparing to the surface records isn’t actually a physical channel. Rather they are using additional processing to extract a lower Troposphere signal from the mid-Troposphere channel.
    Zou et al aren’t producing a TLT product yet, although they have plans to. However, if their result for TMT is giving 0.126 C/decade whereas UAH/RSS are getting 0.08+, then there is a fairly good chance that Zou’s TLT product will show a significantly higher trend than UAH/RSS.

    Have you actually followed the link I put up to USHCN. The largest source of change was Time of Observation effects. Why would that occur? Because the US Weather Service changed the time of day that the readings were being taken.

  136. But Werner, we have been able to safely say that CAGW has not been happening since the radiosonde data showing the absence of the tropospheric “hotspot” were finally released several years ago. The existence of a hotspot was an absolute predicate for all the GCMs, and its absence during the last 25-30 yr warming period (to the late 1990s) therefore completely invalidated all the GC models and thus their predictions, because its absence invalidated the amplification factors they used to sell the CAGW hysteria. And those same GC models continue to remain invalidated. Nothing has changed. Not even the flagrant manipulations and creation of new data (out of thin air) for the land-based temp datasets. No hotspot, no CAGW. Case closed.

  137. cui bono says:
    March 19, 2012 at 11:46 am (Edit)
    Is this right?

    (1) All of the major non-satellite datasets, including GISS, HadCrut and Best, rely on adjustments made to individual stations made by NCDC.

    ############

    err no. NCDC have both adjusted and unadjusted data. People need to actually catch up on things. If you like you can deselect GHCN monthly data from the Berkeley Earth data.
    Answer comes out…. the same.

  138. Glenn Tamblyn says:
    March 19, 2012 at 11:26 pm (Edit)
    Andrew.

    You seem to share a common view that the satellite data is some sort of accurate reference that the surface data can be compared against. Firstly, the satellite data isn’t reporting surface temperatures at all. It is reporting temperatures several thousand metres up. Next, the satellite data has a lot of its own issues and isn’t necessarily any more accurate. The two principal groups who have been working on it – UAH & RSS – have produced results that have converged. However other groups who have looked at the data – Vinnikov & Grody, Fu & Johansen, Zou et al – have all come up with higher values than the other two. What can be said is that UAH & RSS use fairly similar methods, so it’s not surprising that they get similar results. That doesn’t mean they are getting the correct result.

    ##########################
    Yes, the modelling involved to get the “temperature” from the brightness at the sensor is not without assumptions. and assumptions bring with them uncertainty.

  139. When is WUWT going to undertake a project that documents how these historical temperatures are changed, and for what ostensible reason?

    We need a white paper.

  140. Glenn Tamblyn says:
    Andrew

    You seem to share a common view that the satellite data is some sort of accurate reference that the surface data can be compared against.

    Yes. But ‘assessed against’ would be a better way to say it. The satellite data are in actual fact far more reliable than the surface data records, with fewer sources of error (particularly sources of human error), and thus yes, it is valid, in the absence of a better alternative, to refer to them as a standard with which to appraise the surface dataset…

    Firstly, the satellite data isn’t reporting Surface temperatures at all. It is reporting temperatures several thousand metres up.

    Yes, but not relevant. We are concerned with temperature trends and changes in trend from one time period to another. The key questions, after all, concern temperature trends, and not whether the surface is warming at a different rate to the atmosphere several kilometres up – because of course we know this will be the case. But general patterns of warming observed in the atmosphere and at the surface would be expected to be of a similar form (eg. have the same sign in the years since 1998)…

    Next, the Satellite data has a lot of its own issues and isn’t necessarily any more accurate.

    Do you really stand by that statement? The rest of your answer, though of interest, is really just hand-waving. Again, we are concerned with temperature trends and changes in those trends through time…

    Have you actually followed the link I put up to USHCN. The largest source of change was Time of Observation effects. Why would that occur? Because the US Weather Service changed the time of day that the readings were being taken.

    I wasn’t questioning whether the bias needed to be corrected. I was simply making the point that it was one of many biases in the surface data – most of which are of a human origin – that have to be “fixed”, and that the satellite data have far fewer biases that need to get “fixed” in comparison.

  141. American Patriot says:
    March 19, 2012 at 12:54 pm
    Hansen is a Marxist. He lamented to Clinton years ago about the injustices of global wealth distribution. He’s been outed many times but these Marxists are like zombies. You have to whack them more than once.
    =======================

    Gambino said, you should only ever need to “whack” them once. First time was a botched job … needs to be done properly.

  142. Andrew

    “The satellite data are in actual fact far more reliable than the surface data records with fewer sources of error (particularly sources of human error)”. You need to read up a LOT about satellite data and its issues – orbital decay, diurnal drift, differences between satellites and how you ‘stitch them together’, changing instrument calibrations, time-dependent variations, etc.

    Then the source of human error in the surface record. There is certainly human error as a part of recording the initial data. But because this is a large number of separate humans all around the world, human error can be expected to balance out in the recording stage. But there is much less human error in the homogenisation/adjustment process as it is done now, because it is done by programs that apply algorithms to the data, looking for patterns to be used to adjust the data closer to the accurate result. So there isn’t scope for human error on a station-by-station basis. There could be biases introduced in the algorithms, but that isn’t human error.

    Unless of course you think there are people who spend their days poring over a single station before they decide to ‘adjust’ that station.

    “But general patterns of warming observed in the atmosphere and at the surface would be expected to be of a similar form (eg. have the same sign in the years since 1998)…”
    And they are! Not the same magnitude of trend but definitely the same form. So what is your point here?

    “Do you really stand by that statement? ”
    Absolutely. Extracting temperature data from surface stations is a doddle compared with trying to put together a temperature record from the satellites. The UAH/RSS teams have converged on one answer based on their underlying methodology. Zou et al are producing a different result from the same raw data.

    “surface data – most of which are of a human origin – that have to be “fixed” and that the satellite data have far fewer biases that need to get “fixed” in comparison.”
    Actually Andrew, when you have a data source that has a large number of random inaccuracies, from a huge number of measurement instruments, the average of that data source will tend to home in on the true value, because the random errors/inaccuracies tend to cancel each other out.

    In contrast a data source that uses very few instruments will be less prone to random inaccuracies. But the underlying biases of those few instruments then become the dominant issue because those few instruments are used to measure everything. So instrument induced bias is a much bigger issue when you have very few instruments – satellites.

    Imagine that the world’s surface temperature record was obtained from 2 dozen thermometers that were moved from site to site rapidly to measure everything. Understanding the biases of those few thermometers would become a much bigger issue then, wouldn’t it?

    It isn’t a question of what the trends are. It’s a question of how accurately we are measuring the trends.
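
    [The statistical point being argued here – independent random reading errors shrink when many stations are averaged, while a calibration bias shared by a couple of instruments survives averaging untouched – can be illustrated with a quick simulation. The numbers are invented, illustration only:]

    ```python
    # Many instruments with independent random errors vs a few sharing a bias.
    import random
    random.seed(0)

    true_temp = 15.0

    # 5000 stations, each reading off by an independent random +/-0.5 degC
    many = [true_temp + random.uniform(-0.5, 0.5) for _ in range(5000)]
    mean_many = sum(many) / len(many)
    print(abs(mean_many - true_temp) < 0.05)  # random errors average out: True

    # 2 instruments sharing a +0.3 degC calibration bias
    few = [true_temp + 0.3, true_temp + 0.3]
    mean_few = sum(few) / len(few)
    print(abs(mean_few - true_temp))          # the shared bias survives: ~0.3
    ```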

  143. Steven Mosher says:
    March 19, 2012 at 12:43 pm
    Its not surprising that when you add more Northern Latitude data that the present warms.
    This has been shown before. It’s pretty well known.
    As you add SH data you will also cool the past. This is especially true in the 1930-40 period as well as before.
    ———————————

    Isn’t this a bit too simplistic?

    In my view, a very good measure of warming is a comparison between the temperatures of the last cyclical high in the 1940s and the recent cyclical high. The difference has now increased after adjustments from 0.2 to 0.4 degrees in 70 years, or about 0.6 degrees per century. This may be partly due to greenhouse gases, but also longer-term solar effects or other things.

    Now, the arctic temperatures do not appear to have been higher in the last couple of years than in the 1940s. (Perhaps somebody can combine this data separately for verification.) If so, there is no additional warming coming from here.

    Then, the adjustments in the past go far beyond adding data. The main issue is the change of sea surface measurement methods over the past. This has been described by McIntyre:

    http://climateaudit.org/2011/07/12/hadsst3/

    Among the issues with these adjustments, one stands out in my view, and this is responsible for 0.1 degrees since the 1940s, or 25% of the warming.

    In this adjustment, 30% of data is OVERWRITTEN, and bucket observations were reassigned as ERI observations. That is a huge manipulation and alteration of documented data, and the justification is extremely poor (see McIntyre). The authors write boldly about their manipulation that it is to “correct the uncertainty”. I would think such an alteration of documents is not part of the scientific method; uncertainty of the measurement method should have been addressed with increased error ranges, but never with alteration of documented data.

  144. Werner Brozek says:
    March 19, 2012 at 10:48 pm
    “Donald L Klipstein says:
    March 19, 2012 at 9:29 pm
    There are 2 versions of the annual figures of HadCRUT3.

    OK. This version has 1998 at 0.529 and 2010 at 0.470.

    http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3vgl.txt

    This version has 1998 at 0.548 and 2010 at 0.478.

    http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3gl.txt

    This version has 1998 at 0.52 and 2010 at 0.50.

    http://www.metoffice.gov.uk/news/releases/archive/2012/hadcrut-updates

    That is three different versions. Which one of these three, if any, is being changed? If none of these, what are the numbers for the real one being changed?”

    Have to look at the satellite data to see reality, rather than using the massaged and homogenised data produced by the CRU, which is subject to confirmation bias of their mistaken belief that CO2 plays a major role in climate…

    (The 3rd order polynomial fit to the data (courtesy of Excel) is for entertainment purposes only, and should not be construed as having any predictive value whatsoever.)

    As an aside, I find it ironic that Roy’s simple 3rd order poly fit is doing a better job than the IPCC GCM ensemble!
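
    [For anyone wanting to reproduce that kind of “entertainment purposes only” fit outside Excel, numpy.polyfit does the same job. The series below is invented, not Dr. Spencer’s data:]

    ```python
    # Fitting a cubic (3rd order polynomial) to a series, as the Excel
    # trendline above does. The data are made up for illustration.
    import numpy as np

    t = np.arange(0.0, 10.0, 0.5)               # hypothetical time axis
    y = 0.01 * t**3 - 0.1 * t**2 + 0.2 * t      # made-up "anomaly" series
    coeffs = np.polyfit(t, y, 3)                # highest power first
    print(np.allclose(coeffs, [0.01, -0.1, 0.2, 0.0], atol=1e-6))  # True
    ```

    [As the commenter notes, such a fit is descriptive only and has no predictive value.]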

  145. Seems to me that this is more moving-the-goalposts-in-order-to-win-the-game. I don’t trust these so-called “adjustments” to temperature instrument readings enough to say what the trend is. It all is suspicious, given the predilections of warmists. Does that make me anti-science? No. It just makes me even more skeptical of the idea that scientists should be giving advice on political matters. The bias we should be worried about is not “Time of Observation Bias.” It is another, more nefarious bias.

  146. RE: Ted G says:
    March 19, 2012 at 10:23 am
    “….. Their manipulations are so obvious that even amateurs like myself can see them.”

    To read a thermometer is not rocket science. It was when Hansen said so many people-of-the-past had read thermometers incorrectly that I first caught the whiff of fraud’s reek.

    It was due to McIntyre and Climate Audit that my unease snapped into focus back in 2007:

    http://climateaudit.org/2007/08/08/a-new-leaderboard-at-the-us-open/

    It was amazing to me the back-lash I ran into, back in 2007, when I simply stated the obvious: “To read a thermometer is not rocket science.”

  147. Glenn

    That’s an awful lot of hand-waving going on there. But in fact, the kind of human error I had in mind was more of the – let’s site the thermometer next to an a/c vent, surrounded by 10,000 square feet of tarmac and shielded from the wind by a nice brick wall… who’s gonna know?… kind of human error.

    Or the: let’s selectively cull the population of thermometers with an emphasis on removing those sited at the higher and lower latitudes and at higher altitudes (ie. cooler sites) through time, and pretend the datasets are historically comparable…

    Or the: let’s simply create data out of thin air and treat them as if they were actual measurements from locations which never had thermometers, and add that in and pretend the product is a bona fide dataset that says something reliable about changes that occurred in the real physical world (ie, external to our warmist fantasies)…

    Or human errors of the form eloquently documented here:

    http://kenskingdom.wordpress.com/2012/03/13/near-enough-for-a-sheep-station/

    But it’s true: I’m not an expert on satellites or the algorithms used to convert input light signals to output temperature readings. But from my understanding, the early issues concerning calibration and the correct algorithms to use were dealt with to the satisfaction of almost everyone – perhaps, though, not yours? Yes, decaying orbital trajectories, diurnal drift and the like… but these are trivial matters, corrected for in the modern age of mathematics, understanding of relativity and computational power. Or perhaps I’m being too flippant. What specific concerns do you have regarding how the satellite data are treated?

    Again, the general point I make is that satellite-generated temperature data are considered to be far more reliable, with far fewer and more easily quantifiable sources of error (and thus easier to correct for) than surface-generated (thermometer) data: the biases in siting of the thermometers (urban heat islands, altitudinal, latitudinal); variations in surface topography and terrain; human errors in reading and handling instruments; the accuracy of the instruments; rounding/recording errors etc.

    And that’s before GISS and the CRU get their hands on the data and beat it to death…

    But I don’t believe you believe that the surface temp data are a more reliable record of temperature anomaly trends than the satellite temp records – do you?

  148. The saddest thing about this farce is the irreparable damage that these people are doing to the name of ‘science’.

    I could almost cry.

  149. The largest single adjustment over time is for Time Of Observation Bias.

    Which would be meaningful if you had that information.

  150. Andrew, it’s simple. They (Glenn etc.) cannot accept any data that does not follow the AGW agenda; therefore you are wrong. Sarcasm, hmmm… The manipulation by HadCRUT and GISS is legion – see real-science.com, there are literally thousands of examples. LOL

  151. @ Piers Corbyn
    March 19, 2012 at 7:00 pm
    ……….
    Well said, absolutely agree.

    @ Steven Mosher
    Hi there friend, I hope you survived the onslaught up-thread. Good pasting!
    I need not add to it, however tempted.

  152. Oh dear. All the temperature records seem to turn to blancmange when looked at closely.

    I knew that satellite records had to be adjusted for orbital decay, diurnal drift, etc. (Dr. Spencer is working on this at the moment, is he not?), but now Steven Mosher points to other people who interpret the data in a different way and reach different conclusions about the trends.

    Meanwhile, surface data is ‘adjusted’ all over the shop.

    Perhaps no-one really has a clue. Compared to all these uncertainties, the HadCrut4 change of 0.04C from HadCrut3 is picayune.

    One question. Given that approx. one-third of stations show a net cooling trend over the last several decades, and these are often interspersed with those showing a warming trend, how can we be sure that running an algorithm to reduce inconsistencies between stations is not obliterating genuine differences due to local climate factors? Why assume that the people who read the thermometers all had tessellated eyeballs?

  153. If a corporate accountant was caught constantly ‘adjusting’ and making stuff up he could go to jail. Perhaps the wrong people were arrested after Enron. These temperature adjustments are far, far costlier to the Earth than anything Enron could have done. That’s because it is helping to drive policy, green taxes and climate science funding – globally. Damned these climate bandits!

  154. I beg to differ says:
    March 19, 2012 at 10:28 am

    The answer is a question: Do you get more money from an increasing average global temperature or from a global temperature that stays the same?

    Some people say there is a strong correlation. ;-) Imagine what would happen if we got positive cooling over the next 15 years. What are they going to do? We are watching. The satellites are watching. ;-(

    The world is getting warmer BUT it’s not statistically significant. The first part is meant for the media. The second part is for those who bother to read the details. In other words there is no evidence that the world has not got warmer.

  155. Frank K. says:
    March 19, 2012 at 1:31 pm

    Steven Mosher says:
    March 19, 2012 at 12:43 pm

    Questions for Steve:
    (1) Where is this new data coming from? Are people today suddenly discovering lost climate data under their beds or in their closets?
    (2) Do you have links to this new data?
    (3) Can you conclusively demonstrate that the past will always cool and the present will always warm?
    Thanks.

    Well, I came back to see if Steve had answered my basic questions. Apparently not. Can anyone else show that new data will always cool the past and warm the present as Steve asserts? Thanks.

    Meanwhile – regarding the Time of Observation bias (TOB). As someone above observed, this is the biggest single adjustment in the climate data. Does anyone have a link to the specific algorithm (and computer code) which calculates the TOB? I have seen some generic descriptions in the past, but no specific algorithm or code that is being used on current data. Thanks in advance.

    Here’s what NCDC says:

    Time of Observation Bias Adjustments

    Next, monthly temperature values were adjusted for the time-of-observation bias (Karl et al. 1986; Vose et al., 2003). The Time of Observation Bias (TOB) arises when the 24-hour daily summary period at a station begins and ends at an hour other than local midnight. When the summary period ends at an hour other than midnight, monthly mean temperatures exhibit a systematic bias relative to the local midnight standard (Baker, 1975). In the U.S. Cooperative Observer Network, the ending hour of the 24-hour climatological day typically varies from station to station and can change at a given station during its period of record. The TOB-adjustment software uses an empirical model to estimate and adjust the monthly temperature values so that they more closely resemble values based on the local midnight summary period. The metadata archive is used to determine the time of observation for any given period in a station’s observational history.
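    The mechanism NCDC describes can be illustrated with a toy simulation (this is not NCDC’s actual TOB-adjustment software, which uses an empirical model plus station metadata; every number here is invented): generate hourly temperatures with a diurnal cycle and day-to-day weather noise, then compare the monthly mean of (Tmax + Tmin)/2 when the 24-hour “day” ends at 5 PM versus at local midnight.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hourly temperatures for 30 days: a diurnal cycle peaking mid-afternoon,
# plus day-to-day weather noise that persists through each day
hours = np.arange(30 * 24)
daily_noise = np.repeat(rng.normal(0, 3, 30), 24)
temps = 10 + 8 * np.cos(2 * np.pi * ((hours % 24) - 15) / 24) + daily_noise

def monthly_mean(obs_hour):
    """Mean of (Tmax + Tmin)/2 when each 24-h summary period ends at obs_hour."""
    # Shift the series so each 24-h block begins/ends at obs_hour
    shifted = np.roll(temps, -obs_hour)[:29 * 24].reshape(29, 24)
    return ((shifted.max(axis=1) + shifted.min(axis=1)) / 2).mean()

# A 5 PM observer tends to double-count warm afternoons relative to midnight
bias = monthly_mean(17) - monthly_mean(0)
print(f"toy TOB for a 5 PM observer: {bias:+.2f} degrees")
```

    The adjustment software’s job, per the NCDC description, is to estimate and remove exactly this kind of offset so all stations resemble a local-midnight summary period.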

    Anyone have this “TOB-adjustment software”?

  156. How much have all these adjustments increased the trend?

    We don’t actually know.

    The only comparable value we have is that the NCDC has increased the USHCNv2 temperature trend by +0.425C (from 1920 to 2010). It has probably been increased since that time.

  157. Doubleplusgood. But don’t forget to adjust the ice extent records for the past, upwards, to make them coincide with the “new” cooler historical temperatures.

    Oh, and we have always been at war with Eastasia.

  158. They have found another way to “Hide the decline”. Has anyone actually calculated how many Arctic stations were required, and how much lower their readings would need to be, in order to reduce the 1998 peak anomaly by more than 0.0125 as shown in their graph?
    If the relatively few Russian stations in the Arctic can have this effect on the whole global database, then it only goes to show how shaky HadCRUT’s sampling is.
    But, as usual, Piers is right. “1. The ‘corrected’ figures are fraud.
    2. The temperature variations real or fraudulent are of no consequence to man, plant or beast, the temperature changes themselves, if real, about 0.5C in a century are not something that humans can even feel in a day.”

  159. Strange. The ones defending the upward “adjustments” by adding data now as being perfectly warranted and scientifically sound are the same ones who argued that removing thousands of records (the great march of the thermometers) had no effect at all.

  160. Mosher,
    Why are you making stuff up as you go along? Hansen and others (IPCC) predicted the Antarctic would be as warm as or warmer than the Arctic. As you like to say, GIYF.
    …. Your claim that adding SH “data” (I use that term loosely) will cool the past is pulled out of your butt.

  161. So, I’ve been playing around with the new versions available of both CRUTEM4 and HadSST3.

    One thing we will have to watch for is the 1961-1990 base period.

    The adjustments that seem to be occurring have different relative values between the datasets in these periods.
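    The base-period issue raised here can be sketched with toy numbers (everything below is invented for illustration): anomalies computed against different base periods differ only by a constant offset, so trends are unchanged, but series on different baselines cannot be compared year-by-year without re-baselining.

```python
import numpy as np

years = np.arange(1950, 2011)
# Toy absolute temperatures: a small trend plus some wiggle
temps = 14.0 + 0.01 * (years - 1950) + 0.1 * np.sin(years / 3.0)

def anomalies(series, yrs, base_start, base_end):
    """Anomalies relative to the mean over [base_start, base_end]."""
    base = series[(yrs >= base_start) & (yrs <= base_end)].mean()
    return series - base

a_6190 = anomalies(temps, years, 1961, 1990)   # HadCRUT-style base period
a_7100 = anomalies(temps, years, 1971, 2000)   # a different base period

# Different base periods shift the whole series by one constant...
offset = (a_7100 - a_6190).mean()
assert np.allclose(a_7100 - a_6190, offset)
# ...so the trend is identical, but the absolute anomaly values are not
print(f"constant offset between baselines: {offset:.3f} C")
```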

  162. Morten Sperger says:
    March 19, 2012 at 2:20 pm
    First of all, looking at this ‘adjusted data’, look at the US economic data that is changed, i.e. ‘adjusted’ all the time. There is preliminary data that is corrected after all the information has been gathered.

    Apples and kumquats. US economic reports, such as those issued by the Bureau of Labor Statistics, are based on estimates from models (sound familiar?). They are only corrected after the actual data comes in.

    Then they’re further “adjusted” (i.e., manipulated) if the results don’t make The Boss’ economic policies look good — which is just as much a lie as “adjusting” hard scientific data because it doesn’t make your agenda look good.

  163. This is all rather silly. Don’t like GHCN? Well, don’t use it! http://curryja.files.wordpress.com/2012/02/berkeley-fig-3.png

    Don’t like adjusted data? Use raw data!

    Don’t like common anomaly method? Try multiple different methods!

    http://rankexploits.com/musings/2011/comparing-land-temperature-reconstructions-revisited/

    You may have some marginal changes (e.g. things like whether or not you use a land mask can make a reasonable difference), but the overall picture doesn’t change much.

  164. Zeke Hausfather says:
    March 20, 2012 at 8:17 am

    Hi Zeke – Do you know where to find the TOB-adjustment software that NCDC uses?

    Thanks.

  165. In my area, the air temperature never was 45 °C. My weather station just is not a reliable source. This reading needs to be corrected for direct sun irradiation, adjacent walls that heat up, and so on and so on.

    What a good scientist does is stop recording the data, move the thermometer to a better site, and recommence taking the data. Save both sets for later comparison. DON’T FIDDLE !!

  166. Zeke Hausfather says:
    March 20, 2012 at 8:17 am

    Cock-up as usual. Go back and review where BEST got their data. Some of it was pre-adjusted because the raw had been lost. Invalid data!!

  167. Thanks Zeke. The link doesn’t work for me (may be a firewall issue).

    Another question for you. When you process the raw temperature data (without TOB), do you apply your own TOB algorithm? Or do you accept the data as processed by NCDC with TOB and other homogeneity-related adjustments?

    Thanks.

  168. Moose says:
    March 20, 2012 at 7:49 am (Edit)
    Some questions for the proprietor and his votaries:

    http://nailsandcoffins.blogspot.co.uk/2012/03/anthony-watts-misleading-his-readers.html

    ————————————————————————————————————————-

    Thanks for the link. I went across and read the article (and a few more). Interesting, but more so for what he leaves out than what he actually covers in his critique.

    I also left an invite for him and his readers to come over here and give the old girl a whirl round the dance floor and see if they like it. The blog owner seems very shy though so perhaps he won’t enjoy the rough and tumble here.

  169. Frank K.,

    For a global land reconstruction, I generally just use all raw data (with no tob adjustments). For U.S.-specific analysis, I usually use tob adjusted data as a starting point.

    That said, there has been some interesting work lately looking at how automated methods (Menne’s PHA or Rohde’s scalpel) can automatically correct for most TOB issues. Williams et al. talk about it a bit in this paper: http://www.agu.org/pubs/crossref/2012/2011JD016761.shtml

  170. DWR54 says:
    March 20, 2012 at 9:01 am
    Your comparison of the trends between the surface and satellite data sets uses the wrong off-setting values.

    Is there even such a thing as a correct offset? Either way, they show that GISS has the highest slope.

    P.S. Andrew says:
    March 19, 2012 at 11:46 pm

    I agree, but others are not convinced. So for them, perhaps temperatures will do the job.

    P.S. Tenuc says:
    March 20, 2012 at 1:57 am

    Thank you. However this does not tell me which version of HadCRUT3 was used to make HadCRUT4.

  171. Mosher: “Yes, the modelling involved to get the “temperature” from the brightness at the sensor is not without assumptions. and assumptions bring with them uncertainty.”

    It’s about more than just assumptions. They have been calibrated to Radiosondes that used real thermometers.

  172. Glenn: “It is reporting temperatures several thousand metres up.”

    Yes, and according to CO2 models that means that the satellites should actually be showing higher temperature anomalies than the ground stations. The fact that they are showing lower anomalies means that the ground stations are even more overcooked than we observe just by looking at the raw differences between the satellites and the ground stations.

  173. Apologies, was in a rush – this version corrects my typos (incidentally, I don’t expect this to be published as you’ll most likely censor it won’t you?)

    Please climate-change deniers,

    Look at this link which addresses Mr Watts’ above post:

    http://nailsandcoffins.blogspot.co.uk/2012/03/anthony-watts-misleading-his-readers.html

    As for you, Mr Watts – are you truly convinced by your own arguments? Or have you got a SERIOUS vested interest in trying to mislead people with the utter drivel you write?

  174. Zeke: “This is all rather silly. Don’t like GHCN? Well, don’t use it! ”

    No, Zeke, your assertion is all rather silly. First of all, BEST uses GHCN. And if you use GHCN data, then you can use what they call their “raw” data. But as they will tell you themselves, their raw data comes to them adjusted from their other sources. If you use other sources like BEST and exclude the GHCN data, then you are getting data that is too unstable and fragmented for GHCN to use. Plus, you are still getting adjustments that were made by the sources of those data. And in most cases you cannot find the metadata that explains why adjustments were made. In other words, you can never go from the real raw data to the end product and reproduce what was done. And without reproducibility, you don’t have science.

    I spent a couple of hours looking at individual stations in the GHCN record last night; and looking at the adjusted and unadjusted comparisons. While I went through a lot of upward adjusted stations, I didn’t run into a single downward adjusted station. And that is just plain politically oriented junk science.

  175. Tilo Reber,

    The graph I linked compared GHCN-M stations to all the non-GHCN-M station data Berkeley has collected. While I cannot guarantee that all of it is 100% raw (e.g. some 1920s records may have had tobs adjustments), I can say that in nearly all cases it is a compilation of the rawest data that exists today. The Berkeley folks went out of their way to avoid using any pre-adjusted data as they wanted to develop their own homogenization process unbiased by past efforts.

  176. I have a question for the group, and it’s slightly off topic, but it relates to the ‘missing original temperature data’, plus the latest raw data CRU refuses to give up. As a meteorologist I know the basic unadjusted data has to be available if some detective work took place. Pick any country first, because it’s easier, and obtain the AWS data from the met office responsible. I’m sure Miss Marple could find it. Yes, it’s a wee bit of work, but rather than lament, I don’t understand why some gung-ho youngster doesn’t run with it.

  177. My balcony really is not a good place to record temperatures. But what if some weather guys put a station in a nice place outside the city in – let’s say – 1940, which by now is part of that same city? Of course one can put up a new one, but in order to have a complete record (and reliable time series are everything): what was the temperature at that new spot before? And then it starts! Keep the old station, but correct data for local influence? Or try to estimate the past temperatures for the new location?
    The US and Europe have had weather stations for a century or more. But what about the rest? How do we include these other data in order to get a complete picture of the pre-satellite era?
    And furthermore: raw data is not hard fact. As I said: I can put a thermometer with any given accuracy in the sun, and it will not be able to indicate the correct air temperature. Still, it’s not wrong. It will indicate ITS measured temperature.

    http://www.guardian.co.uk/environment/2012/feb/21/heartland-institute-leak-climate-attack

  178. FOIA2009 1254108338.txt /1254147614.txt (Jones, Wigley and Santer)

    Here are some speculations on correcting SSTs to partly explain the 1940s warming blip.[....] So, if we could reduce the ocean blip by, say, 0.15 degC, then this would be significant for the global mean [significant for claims of warmer years in the 1930's/40's that is]

    Shocked I tell ye that the 30’s/40’s have now been found in a pit with a bullet hole in the back of the head, right next to the corpses of the MWP and LIA. Just in time for AR5 and the 2012 Rio ‘one world government’ bong fest (the cause). Remember boys and girls … cigarettes are chock full of vitamins and minerals, and history is whatever you pay your historians to write for you.

  179. The surface record over land ≠ global mean temperature.
    It also suffers from serious and well known errors
    that follow from irregular monitoring and urbanization.

    Even without these limitations, however, the claimed difference
    between 2010 and 1998 is an artifact of the choice of averaging.
    Reference to the monthly record from MSU

    http://www.climate4you.com/GlobalTemperatures.htm#UAH%20MSU%20TempDiagram

    the only true measurement of global temperature, shows what has been
    widely recognized. Except for temporary bounces, global temperature
    has been flat over the last decade, arguably since the El Nino in 1997.

    Averaging over one set of months will result in temperature that differs
    from that which results from averaging over another set of months.
    The actual monthly data, however are unambiguous.
    Global temperature during the most recent El Nino in 2010
    was clearly COOLER than it was during the earlier El Nino in 1998.
    That’s consistent with the general behavior since the turn of the century: FLAT.
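    The point about month windows can be made concrete with toy anomalies (all numbers invented): an El Niño peak straddling a year boundary contributes fully to a July–June average but only partly to the calendar-year average, so the two “annual” values for the same event differ.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy monthly anomalies for 1997-2000, with a hypothetical El Nino peak
# spanning Oct 1997 - Mar 1998 (months 9..14, counting from Jan 1997)
anoms = rng.normal(0.2, 0.1, 48)
anoms[9:15] += 0.5

calendar_1998 = anoms[12:24].mean()   # Jan-Dec 1998: catches 3 peak months
jul_jun = anoms[6:18].mean()          # Jul 1997 - Jun 1998: catches all 6

print(f"Jan-Dec 1998: {calendar_1998:.2f} C, Jul 97-Jun 98: {jul_jun:.2f} C")
```

    The monthly series itself is unambiguous; only the choice of averaging window moves the “annual” number around.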

  180. Maybe they were going to release it on April 1st and it was supposed to be called HADyou4.

  181. The adjustments have to keep pace with falling temperatures plus a little bit, or they would all have been made redundant a decade ago. Fiddling the data is Piltdown man stuff.

  182. Please climate-change denier morons…

    Apologies, was in a rush – this version corrects my typos (incidentally, I don’t expect this to be published as you’ll most likely censor it won’t you?)

    Please climate-change deniers,

    That was one heck of a “typo.” By random chance your fingers typed the word “morons”?

  183. RobW says:
    March 19, 2012 at 8:18 pm

    OK so 2010 was the warmest year on record eh? Time for a straw poll. Who agrees? Please global answers are best.
    _______________________________________

    I did this two years ago in the fall before Masters at Wunderground scrubbed the data. I compared 2010 (Solar minimum) to 2004 (a year after 2nd Solar Max for cycle 23) I only looked at number of days over 90F for April through July.

    In Sanford North Carolina, the middle of the State, I count by July tenth 43 days over ninety F for 2004 vs 26 days for 2010, and four days of 98F in 2010 vs nine days of 98F in 2004

    Central North Carolina (Sanford) – monthly temps over 90F for 2004 & 2010
    April 2010 (1)………..April 2004 (6)
    1day – 91F……………..2 days – 91F
    …………………………….4 days – 93F

    In 2011 the April highs ranged from 55F to 86F; we did not see temps over 90F (91F) until May 23rd!!!

    May 2010 (4)………………May 2004 (17)
    4day – 91F……………..6 days – 91F
    …………………………….6 days – 93F
    …………………………… 2 days – 95F
    …………………………….1 days – 96F
    …………………………….2 days – 98F

    June 2010 (18)……June 2004 (11)
    5 day – 91F……………1 days – 91F
    5 days – 93F………….7 days – 93F
    2 days – 95F……………none
    2 days – 96F……………2 days – 96
    4 days – 98F…………..1 days – 98F

    July 2010 (3)…………..July 2004 (9)
    1 days – 91F………………2 day – 91F
    1 days – 93F…………….1 days – 93F
    1 days – 96F……………none
    none………………………6 days – 98F

    For the whole month of July 2004 (24)
    ……………4 day – 91F
    ……………11 days – 93F
    ……………1 days – 95F
    …………1 days – 96F
    …………2 days – 98F

    In Sanford I count 43 days over ninety F for 2004 by July tenth vs 26 days for 2010, and four days of 98F in 2010 vs nine days of 98F in 2004.

    You are not going to convince me that 2010 was a “Very Hot year” based on my experience of the summer in North Carolina.

  184. Zeke: “The Berkeley folks went out of their way to avoid using any pre-adjusted data as they wanted to develop their own homogenization process unbiased by past efforts.”

    Berkeley uses GHCN raw, and GHCN tells you directly that they don’t account for adjustments made by their sources. So why would Berkeley do more to assure the rawness of their other sources of data than they do for GHCN?

  185. My biggest problem is how on God’s little green earth anyone can extract two decimal places from data that has NO decimal places in the first place! http://joannenova.com.au/2012/03/australian-temperature-records-shoddy-inaccurate-unreliable-surprise/

    ~ From data that is spliced and extrapolated and “Adjusted”

    ~ From data whose environment has changed over time.

    ~In every one of these the accuracy and precision is lost.

    I spent over thirty years in laboratories using thermometers and there is no way you will convince me thermometer readings are even good to the nearest degree when talking about thousands of thermometers spread over time and space and read by volunteers. BTDT and had a screaming fight with the other lab managers in my company about our inability to get duplicate readings on lab grade thermometers.

    AJ Strata has a good analysis on the error of temperature readings. http://strata-sphere.com/blog/index.php/archives/11420

    As far as I am concerned when you go past a whole degree you are arguing how many angels can dance on the head of a pin territory. If the accuracy and precision is not in the original reading you are not going to magically put it back in with statistical tricks.
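    Whether whole-degree readings can ever yield extra decimal places depends on the structure of the errors, which is the crux of this dispute. A toy simulation (all numbers invented) shows both sides of it: independent rounding errors do shrink under averaging, but a shared systematic bias (bad siting, miscalibration) survives averaging completely intact, exactly as the comment argues.

```python
import numpy as np

rng = np.random.default_rng(3)
true_temp = 21.37
n = 10000

# Case 1: independent errors plus whole-degree rounding.
# Averaging many such readings recovers the true value quite closely.
readings = np.round(true_temp + rng.normal(0, 2, n))
err_independent = abs(readings.mean() - true_temp)

# Case 2: the same readings with a shared 0.8 C bias.
# No amount of averaging removes a bias common to all readings.
biased = np.round(true_temp + 0.8 + rng.normal(0, 2, n))
err_biased = abs(biased.mean() - true_temp)

print(f"independent errors: {err_independent:.3f} C off")
print(f"shared 0.8 C bias:  {err_biased:.3f} C off")
```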

  186. Seb says:
    March 20, 2012 at 10:07 am

    Please climate-change denier morons,

    Look at this which address Mr Watts above post:

    http://nailsandcoffins.blogspot.co.uk/2012/03/anthony-watts-misleading-his-readers.html

    Mr Watts yourself – are you truly convinced by your own arguments? Or have you got a SERIOUS vested interest in trying to mislead people with the utter drivel you write?
    —————————————————————
    Thanks for letting this one through, mods. It is salutary to be reminded occasionally of the low standards of Anthony’s opponents. Notice that this one is still running the ‘vested interest’ meme. I bet Anthony wishes he would identify the Caribbean island that Big Oil bought him so he can spend some quality time there with his family.

  187. Andrew

    Oh my. Some people just can’t let a good conspiracy theory go can they, even when every aspect of the theory has been totally debunked.

    Station quality issues – yes, there are bad-quality stations as well as good ones. But the analysis done by a wide range of different investigators is that those station quality issues haven’t had any impact. By comparing results using just good stations with those using all stations, any difference has repeatedly been found to be negligible. They do have an impact on the daily temperature range but virtually none on the average. And this isn’t just officialdom saying this; a range of independent online people have looked at this and come up with the same conclusion. Perhaps the authority you personally might give greatest credence to is one Anthony Watts. He was co-author of a study that looked at all this and found exactly that. [REPLY: Uh, no, that's just your coauthor of "Skeptical Science" version interpretation of it. Readers can read my paper here - http://pielkeclimatesci.files.wordpress.com/2011/07/r-367.pdf - Anthony]

    You could also look at the comments above by Zeke Hausfather above. He is one of the people who has done these sorts of comparisons. Read his comments, follow some of his links.

    Then there was the recent BEST project. Looking at the station data afresh, looking at many more stations, using quite different methods. Result? Essentially the same.

    Then there is the whole ‘march of the thermometers’ meme. You make reference to a decline in the number of ‘cooler’ sites, implying that this will introduce a systematic bias. Say what! To introduce a bias you would need to drop ‘cooling’ sites, not cooler ones. Remember, the temperature records are calculated based on temperature anomalies. We are looking for how much temperatures are changing, not their absolute value. Your point suggests that you think that global temperatures are averaged together to then see how much change there has been. And you’re right: if that were the case, dropping cooler stations would add a warm bias.

    Which is exactly why it isn’t calculated that way. The records all work by comparing each station against its own long-term average to produce an anomaly for that station. Only then are these anomalies averaged together. So removing a cooler station will only introduce a warming bias to the record if that station is also cooling. So how does removing high-latitude stations in places like northern Canada, where there is high warming, introduce a warming bias? If anything, it will add a cooling bias.
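    The distinction being drawn here can be checked with two toy stations (all values invented): a warm tropical station and a cold Arctic station warming at the same rate. Averaging the anomalies is unaffected by dropping the cooler station, while averaging raw temperatures first jumps by the full temperature difference between the stations.

```python
import numpy as np

years = np.arange(1980, 2011)
warm_station = 25.0 + 0.02 * (years - 1980)   # tropical, warming 0.2 C/decade
cold_station = -5.0 + 0.02 * (years - 1980)   # arctic, same warming rate

def anomaly(series):
    """Anomaly relative to the station's own 1981-1990 mean (toy base period)."""
    return series - series[1:11].mean()

# Averaging the anomalies: dropping the cooler (but equally warming)
# station leaves the combined series unchanged
both = (anomaly(warm_station) + anomaly(cold_station)) / 2
warm_only = anomaly(warm_station)
assert np.allclose(both, warm_only)

# Anomaly of the averages: dropping the cold station shifts the raw
# mean by half the 30 C difference between the stations
raw_both = (warm_station + cold_station) / 2
jump = warm_station.mean() - raw_both.mean()
print(f"raw-average jump when the cold station is dropped: {jump:.1f} C")
```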

    Then you talk about using data where there wasn’t any. Since you are vague about what you mean here, I will assume that you are referring to the 1200 km averaging radius used by GISTemp. The reason why this is valid is that temperature anomalies are quite closely coupled over large distances and altitudes. For example, if a cold weather system passes over Adelaide, Australia today, the same weather system will likely pass over Melbourne tomorrow. If a warm front passes over Santiago in Chile, down near sea level, it will also pass over the high Andes right at its doorstep. So their weather will tend to change in sync. So it is valid to average out over significant distances. And that distance was determined by observation, looking at the degree of correlation between large numbers of random pairs of stations at varying distances apart. The correlation is strongest over land and stronger in the high north.

    This same factor of ‘teleconnection’ then determines how many stations you need to adequately sample a region. So the total station count isn’t the point; it is the percentage station coverage that matters.

    So why are fewer stations used now? Because they don’t need to use that many stations to obtain sufficient coverage. More isn’t better.
    “But it’s true: I’m not an expert on satellites or the algorithms used to convert input light signals to output temperature readings. That’s true. But from my understanding the early issues concerning calibration and the correct algorithms to use were dealt with to the satisfaction of almost everyone – though perhaps not yours? Yes, decaying orbital trajectories, diurnal drift and the like … but these are trivial matters corrected for in the modern age of mathematics, understanding of relativity and computational power. Or perhaps I’m being too flippant. What specific concerns do you have regarding how the satellite data are treated?”

    “Again, the general point I make is that satellite-generated temperature data are considered to be far more reliable, with far fewer – and more easily quantifiable, and thus easier to correct for – sources of error than surface-generated (thermometer) data. The biases in siting of the thermometers (urban heat islands, altitudinal, latitudinal); variations in surface topography and terrain; human errors in reading and handling instruments; the accuracy of the instruments; rounding/recording errors, etc. etc.”

    The satellite record has one possible source of structural bias that is not easily quantifiable – the basic algorithms may introduce an unknown structural bias. That is why the Zou et al. work is so interesting. Using a very different method of ‘stitching together’ the data from multiple satellites over time, they have come up with a quite different result. That says to me that the jury is still out on what the real satellite trends are.

    And as I explained earlier, most of the issues in your last paragraph aren’t important because of the way the record is calculated. This is why Averaging the Anomalies, rather than taking the Anomaly of the Averages, substantially bullet-proofs the calculation against the very issues you are raising. What are left are true random errors and biases, and these tend to cancel out.

    If you are interested in reading about this in more detail, including reasons why the station issues you mention may not be as significant as you think, I wrote a 4-part series some time ago that covers all these issues here: http://www.skepticalscience.com/OfAveragesAndAnomalies_pt_1A.html

  188. Seb says:
    March 20, 2012 at 10:09 am

    As for you, Mr Watts – are you truly convinced by your own arguments? Or have you got a SERIOUS vested interest in trying to mislead people with the utter drivel you write?

    Your link says this:

    …deeply disturbing points in its fabrication I’d like to raise.

    Watts: Data plotted by Joe D’Aleo. The new HadCRUT4 is in blue, old HadCRUT3 in red, note how the past is cooler, increasing the trend. Of course, this is just “business as usual” for the Phil Jones team.

    What Watts means by the “past is cooler” is that over the period ~1975-2000 the blue line (HadCRUT4) in the graph is lower than the red line (HadCRUT3). But here’s a proper comparison of HadCRUT4 and HadCRUT3 by the Hadley Centre. Notice that it’s the period post-2000 that is warmer in HadCRUT4. The period 1975-2000 is about the same.

    Above this particular graph in the article, the following is stated:

    Observe the decline of temperatures of the past in the new CRU dataset:

    I could be wrong here, but it seems to me that Anthony Watts should have said CRU below the graph like he did above the graph. It seems to me to be an innocent slip-up. But are you arguing against the guts of Watts’s assertion that the past was made cooler, albeit on CRU and not HadCRUT?

  189. Zeke Hausfather says:
    March 20, 2012 at 9:18 am

    Frank K.,

    For a global land reconstruction, I generally just use all raw data (with no tob adjustments). For U.S.-specific analysis, I usually use tob adjusted data as a starting point.

    That said, there has been some interesting work lately looking at how automated methods (Menne’s PHA or Rohde’s scalpel) can automatically correct for most TOB issues. Williams et al. talk about it a bit in this paper: http://www.agu.org/pubs/crossref/2012/2011JD016761.shtml

    Thanks Zeke – I’ll have a look at that.

  190. Werner Brozek says March 19, 2012 at 10:48 pm:
    > Donald L Klipstein says March 19, 2012 at 9:29 pm:

    >>There are 2 versions of the annual figures of HadCRUT3.

    >OK. This version has 1998 at 0.529 and 2010 at 0.470.
    >http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3vgl.txt

    This is UEA version of global HadCRUT3V, which is “variance adjusted”.
    (“my words”, possibly quoting inaccurately.)

    > This version has 1998 at 0.548 and 2010 at 0.478.
    > http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3gl.txt

    This is UEA version of HadCRUT3, before “variance adjustment”.
    (“my words”, possibly quoting inaccurately.)

    > This version has 1998 at 0.52 and 2010 at 0.50.
    > http://www.metoffice.gov.uk/news/releases/archive/2012/hadcrut-updates

    I checked into that, and it appears to me that this is HadCRUT4.

    Also, the Hadley Centre of the UK Met Office and UEA used different methods of averaging 12 monthly figures to come up with annual figures for HadCRUT3.
    UEA uses “ordinary averaging”, while Hadley Centre uses what they call
    “optimized averaging”. The Hadley Centre version appears to me to show
    slightly more warming and slightly less ~62-64 year component in the past
    ~40-50 years than the UEA one does.

    The Hadley Centre text file for their annual figures for HadCRUT3 is at:

    http://www.metoffice.gov.uk/hadobs/hadcrut3/diagnostics/global/nh+sh/annual

    1998 reported as .517, 2010 reported as .499.

    > That is three different versions. Which one of these three, if any, is being
    > changed? If none of these, what are the numbers for the real one being
    > changed?

    I consider “most original” of these 4 to be UEA version of HadCRUT3.

    http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3gl.txt

    There is the matter that the 2001 version of HadCRUT could be “less
    adjusted still” and possibly be more accurate. Perchance, could someone
    supply a link to this?

  191. Much of the new data adjustment was done by the Met Offices around the world responsible for them, according to the paper describing the changes in CRUTEM4.

    So if Steve Milloy is right that the adjustments are improper fudging, then the conspiracy would seem to extend to all the Met Offices around the world.

  192. Glenn: “Station quality issues”

    The issue isn’t station quality, it’s station adjustments. Different subject entirely.

  193. Glenn: “The satellite record has one possible source of structural bias that is not easily quantifiable – that the basic algorithms may introduce an unknown striuctural bias.”

    Why is it not easily quantifiable? They can make direct comparisons of the output of the algorithms with real thermometer readings from radiosondes.

  194. Mods my post appears to be missing.

    REPLY:
    Dear Phil at Princeton. Not missing, it was rude and insulting to Joe D’Aleo, so I bit bucketed it – be as upset as you wish. – Anthony

  195. Werner Brozek says March 19, 2012 at 11:28 pm:
    (I edit slightly for line count and space)
    > Donald L Klipstein says March 19, 2012 at 10:35 pm
    >>I like to look at what happened from the ~1944 peak to the ~2005 peak.

    >What if you took the difference between 1883-1944 versus 1944-2005
    >and assumed the difference was due to CO2? Either way, it is not
    >catastrophic.
    >http://www.woodfortrees.org/plot/hadcrut3gl/from:1880/plot/hadcrut3gl/from:1883/to:1944/trend/plot/hadcrut3gl/from

    I think it is more comparable to go from one peak to another – as in, for the earlier time period, starting with 1877. OK, that will underreport the warming trend because of starting with a “century class” El Nino peak. Maybe starting with 1878.25 or 1878.5 knocks that peak down to “comparable size”. Holy poop – that’s still about .041 degree/decade.

    There is also the matter of warming before and possibly during the mid-1920s coming from recovery from a “triple whammy” of solar minimums, including the Dalton and Maunder ones.

    Not that I am arguing for global climate sensitivity to change of CO2 being
    more than 1.5 degree C per 2x CO2 change in recent or future decades. I
    have seen some indication, as I mentioned before, that this figure could be
    as low as .67 degree C per 2x change of CO2. (On log scale.) Compare
    this to ~3 degrees C (sometimes more) per 2x change of CO2 favored by
    most advocates of existence of anthropogenic global warming.

  196. the only true measurement of global temperature, shows what has been widely recognized. Except for temporary bounces, global temperature has been flat over the last decade, arguably since the El Nino in 1997.

    It should be widely recognized, but somehow isn’t, that the linear trend for the time period since 1998 is not yet statistically significant, and therefore can tell you little about what trend there actually is. The soonest we can get a trend that passes statistical significance testing, for satellite data, will be about 2014, but probably later (satellite data is noisier than surface records).

    The period in question could itself be one of these ‘temporary bounces’. We won’t know without more data. As of yet, there is no statistically significant data that says the long-term temperature trend (which IS statistically significant) is now flat.

    What data there is shows a slight warming trend (same data source as given). That, too, is virtually meaningless, because the trend is statistically insignificant.

    20 years is a good minimum to ensure a statistically significant trend WRT satellite global temperature data – but significance tests should still be observed.
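
    The significance point above can be sketched numerically. Below is a minimal, illustrative ordinary-least-squares trend test on made-up anomaly data (not any real dataset); a proper test on temperature series must also correct for autocorrelation, which is part of why the record needed is longer than this naive test would suggest:

```python
import math

def ols_trend(y):
    """Slope of y against its index, plus the slope's t-statistic.
    |t| below ~2 means the trend is not significant at roughly the
    95% level.  (Real temperature series are autocorrelated, which
    inflates the uncertainty beyond what this naive test reports.)"""
    n = len(y)
    mx, my = (n - 1) / 2.0, sum(y) / n
    sxx = sum((x - mx) ** 2 for x in range(n))
    slope = sum((x - mx) * (yi - my) for x, yi in enumerate(y)) / sxx
    sse = sum((yi - my - slope * (x - mx)) ** 2 for x, yi in enumerate(y))
    se = math.sqrt(sse / ((n - 2) * sxx))
    return slope, (slope / se if se > 0 else float("inf"))

# Ten noisy "annual anomalies" (invented): the fitted slope is positive,
# but the t-statistic is well under 2, so no significant trend.
short = [0.4, 0.1, 0.5, 0.2, 0.6, 0.3, 0.7, 0.2, 0.5, 0.6]
slope, t = ols_trend(short)
```

    On this invented series the slope is positive but |t| is only around 1.2, so the “trend” tells you essentially nothing – which is the point being made about the post-1998 satellite record.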

  197. Donald K,

    a bit confused by two seemingly contradictory comments in your post.

    1. “Not that I am arguing for global climate sensitivity to change of CO2 being more than 1.5 degree C per 2x CO2 change in recent or future decades. I have seen some indication, as I mentioned before, that this figure could be as low as .67 degree C per 2x change of CO2″

    More CO2 causes some warming.

    2. “Compare this to ~3 degrees C (sometimes more) per 2x change of CO2 favored by
    most advocates of existence of anthropogenic global warming.”

    Here you call into doubt the ‘existence’ of AGW.

    The only way I can reconcile these two comments is that you think there is some doubt that human industry is responsible for increases of CO2 in the atmosphere.

  198. Tilo Reber
    “it’s station adjustments. Different subject entirely.”
    I was replying to Andrew who had raised other issues.

    “Why is it not easily quantifiable? They can make direct comparisons of the output of the algorithms with real thermometer readings from radiosonde’s.”

    2 reasons. First, the radiosondes don’t have anything like enough geographical or temporal coverage to give anything but a very, very rough confirmation of the trends and to allow problems with the algorithms to be evaluated. Secondly, there are well known issues with the radiosonde data related to heating and cooling of the instrument body at high altitude and to changes in the instrumentation packages over the years. For example, the raw radiosonde data shows little warming in the upper troposphere, which is unphysical. Upper-tropospheric warming is not a signature of AGW specifically; it is a signature of warming from any source. Garth Paltridge has expressed doubts about the radiosonde record, and Richard Lindzen has said the upper tropospheric warming must be happening, that it is physically impossible for it not to be, and that if the data doesn’t show it then the data is suspect.

    So the differences between UAH/RSS and Zou et al are well within the quite broad range of readings that the radiosondes give us.

    The biggest issue with building the satellite record has been reliably stitching together data from multiple satellites. To do that UAH/RSS need each pair of satellites to be in service at the same time for long enough to allow statistical analysis of the difference between what each one is reporting, and this needs a year or more of data. For example, there appears to have been a problem that caused a divergence between UAH & RSS because the overlap time between NOAA-9 and NOAA-10 was only a few months. It took years for this difference to slowly work its way out of their results, and it contributed to their results drawing closer together in recent years.

    In contrast, the Zou analysis uses a very different method to calculate satellite overlap. They use Synchronous Nadir Overpasses – periods in the satellite orbits when 2 satellites are passing over the same point on Earth at the same time and thus are looking at the same location below. This allows them to focus just on intersatellite differences rather than trying to extract them from the general range of biases the satellites are experiencing. This method strikes me as more robust than the UAH/RSS method, and given that it also agrees better with the higher figures obtained by Vinnikov & Grody and Fu & Johansen, it says to me that there are at least reasonable grounds to think that both UAH & RSS may have an unrecognised cool bias in their processing algorithms.

    It will be interesting to see what Zou’s TLT product looks like when they finally produce it.
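
    The Synchronous Nadir Overpass idea described above can be sketched in a few lines. The paired readings are invented, purely to show the arithmetic of estimating an intersatellite offset from matched overpasses:

```python
# When two satellites view the same spot at (nearly) the same time, the
# difference of the paired brightness temperatures isolates the
# intersatellite bias, because the scene itself cancels out.
pairs = [  # (satellite A reading, satellite B reading) at matched overpasses, K
    (251.3, 251.7), (248.9, 249.2), (255.0, 255.5), (250.1, 250.4),
]
offsets = [b - a for a, b in pairs]
mean_offset = sum(offsets) / len(offsets)
# mean_offset estimates B's calibration bias relative to A, which can
# then be removed before stitching the two records together.
```

    The contrast with the UAH/RSS approach is that there the bias must be extracted statistically from long stretches of non-simultaneous overlap, which is why short overlaps like NOAA-9/NOAA-10 caused trouble.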

  199. Donald L Klipstein says:
    March 20, 2012 at 5:44 pm
    This version has 1998 at 0.52 and 2010 at 0.50.
    > http://www.metoffice.gov.uk/news/releases/archive/2012/hadcrut-updates

    I checked into that, and it appears to me that this is HadCRUT4.
    The Hadley Centre text file for their annual figures for HadCRUT3 is at:

    http://www.metoffice.gov.uk/hadobs/hadcrut3/diagnostics/global/nh+sh/annual

    1998 reported as .517, 2010 reported as .499.

    Thank you. That explains it all. The values quoted above, 0.52 and 0.50, are the numbers 0.517 and 0.499 rounded to two digits. But that is NOT HadCRUT4. HadCRUT4 keeps 1998 at 0.52 (so no change there), but 2010 becomes 0.53. As a result, 2010 is 0.01 C higher than 1998.

  200. barry says:
    March 20, 2012 at 7:12 pm
    The soonest we can get a trend that passes statistical significance testing, for satellite data, will be about 2014

    Unless we get a strong El Nino soon, I believe we will reach it this year yet. Santer talked about 17 years being needed. And at least as far as RSS is concerned, we are at 15 years and 3 months now. So that leaves another 21 months. And if every month in the future also pushes things back a month, we should be very close by the end of this year. See:

    http://www.woodfortrees.org/plot/rss/from:1995/plot/rss/from:1996.9/trend/plot/rss/from:1995.56/trend

  201. Tilo Reber says:
    March 20, 2012 at 5:56 pm
    Glenn: “The satellite record has one possible source of structural bias that is not easily quantifiable – that the basic algorithms may introduce an unknown striuctural bias.”

    Why is it not easily quantifiable? They can make direct comparisons of the output of the algorithms with real thermometer readings from radiosonde’s.

    #######################################

    dear god, you again.

    http://www.ssmi.com/msu/msu_data_validation.html

  202. Tilo Reber says:
    March 20, 2012 at 10:19 am (Edit)
    Zeke: “This is all rather silly. Don’t like GHCN? Well, don’t use it! ”

    No, Zeke, your assertion is all rather silly. First of all, BEST uses GHCN. And if you use GHCN data, then you can use what they call their “raw” data. But as they will tell you themselves, their raw data comes to them adjusted from their other sources. If you use other sources like BEST and exclude the GHCN data then you are getting data that is too unstable and fragmented for GHCN to use.

    ####################

    more nonsense. GHCN Monthly was assembled long ago and the stations were selected long ago from what was available. Other sources, such as GHCN Daily, now contain more stations, updated daily. This data is not too “unstable” to include in GHCN Monthly; it’s simply not part of the inventory. Over time more and more stations are being added to GHCN Daily as agreements come on line and people deliver data. The colonial record won’t be added to GHCN Monthly; the CRN network won’t be added to GHCN Monthly either, and its records are super stable: triple-redundant sensors, readings every 5 minutes.

    Tilo, you don’t know what you are talking about.

  203. Tilo Reber says:
    March 20, 2012 at 9:59 am (Edit)
    Mosher: “Yes, the modelling involved to get the “temperature” from the brightness at the sensor is not without assumptions. and assumptions bring with them uncertainty.”

    It’s about more than just assumptions. They have been calibrated to Radiosondes that used real thermometers.

    #########################################

    you obviously haven’t read the calibration documents. And you don’t understand the assumptions that go into the radiative physics that are used to MODEL the temperature.

    You want to understand the structural uncertainty in UAH or RSS? LOOK AT THE HISTORY OF CORRECTIONS! That should be your first clue.

  204. Manfred says:
    March 20, 2012 at 1:43 am (Edit)
    Steven Mosher says:
    March 19, 2012 at 12:43 pm
    It’s not surprising that when you add more northern-latitude data the present warms.
    This has been shown before. It’s pretty well known.
    As you add SH data you will also cool the past. This is especially true in the 1930-40 period as well as before.
    ———————————

    Isn’t this a bit too simplistic?

    In my view, a very good measure of warming is a comparison between the temperatures of last cyclical high in the 1940s and the recent cyclical high.

    ###############

    bad choice Manfred. Look at the spatial distribution of measurements in the 30s-40s against the spatial distribution of measurements now. What you will find is that the 30s-40s oversampled the NH relative to the Southern Hemisphere. In other words, in the current period the NH and SH are both well sampled. In the 30s-40s, the SH was NOT sampled as well, which can lead to an overestimation of the warmth in that period.

    There is more data to recover from old archives. Based on what we know about polar amplification and the unsampled regions, if you want to lay a bet, here is the bet you will lay:

    As more data comes into the system the past will generally cool. It will not stay the same. It can only be higher or lower. Given the existing sampling distribution, bet on lower. Just sayin.
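
    The sampling argument above can be sketched with invented anomaly values: if the NH runs warmer anomalies than the SH and the station network oversamples the NH (as in the 1930s-40s), a naive station average overstates the true, area-weighted global figure.

```python
nh_anom, sh_anom = 0.30, 0.10           # "true" hemispheric anomalies (invented)
true_global = (nh_anom + sh_anom) / 2   # hemispheres have equal area

stations = [nh_anom] * 9 + [sh_anom] * 1   # 9:1 NH-heavy network
naive_mean = sum(stations) / len(stations)

print(round(true_global, 2), round(naive_mean, 2))  # 0.2 0.28
```

    Recovering more SH data shrinks the 9:1 imbalance toward 1:1, pulling the estimate for that era down – which is the direction of the bet being laid here.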

  205. The other reason why people should not be shocked by CRU series increasing is this.

    1. We knew CRU was low because of the way they handled the arctic. I’ll dig up the
    climategate mail on this. Also, this was discussed at RC in the past I believe.

    2. Skeptical estimates using BETTER methods than CRU indicated it was low

    http://noconsensus.wordpress.com/2010/03/25/thermal-hammer-part-deux/

    3. Every estimate I have done ( adjustment free lads ) was higher than CRU.

    REPLY: Note the updated graphs, such cooling of the past preceded the recent Arctic update. – Anthony

  206. Glenn Tamblyn says:
    March 20, 2012 at 4:06 pm
    “Then the whole ‘march of the thermometers’ meme. You make reference to a decline in the number of ‘cooler’ sites, implying that this will introduce a systematic bias. Say What! To introduce a bias you would need to drop ‘cooling’ sites, not cooler ones. Remember, the temperature records are calculated based on temperature anomalies. We are looking for how much temperatures are changing, not their absolute value. Your point suggests that you think that global temperatures are averaged together to then see how much change there has been. And your right, if that were the case, dropping cooler stations would add a warm bias.

    Which is exactly why it isn’t calculated that way. The records all work by comparing each station against its own long term average to produce an anomaly for that station. Only then are these anomalies averaged together. So removing a cooler station will only introduce a warming bias to the record if that station is also cooling. So how does removing high latitude stations in places like northern Canada, where there is high warming introduce a warming bias? If anything it will add a cooling bias.

    Then you talk about using data where there wasn’t any. Since you are vague about what you mean here, I will assume that you are referring to the 1200 km averaging radius used by GISTemp. ”

    Glenn, I don’t know who you are, so I don’t know who the “we” is you speak for; but what you’re saying to discredit the “death of the thermometers” meme makes no sense.

    First you say that “we” is only interested in looking at how a station compares to itself; later you mention why GISTemps 1200 km radius smoothing is supposed to work.

    Think for a moment. How can the globally extrapolated and gridded anomaly NOT be affected by the geographically systematic Death Of The Thermometers?

    You should not make the mistake of mentioning that first meme (that “we” is only interested in comparing like with like) in the same comment as the second meme (the GISTemps 1200 km smoothing makes no difference).

    It becomes too obvious when you do.
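
    For reference, the anomaly-first procedure quoted above (each station compared against its own long-term average before averaging across stations) looks like this in sketch form, with invented numbers:

```python
def station_anomalies(temps, base_len):
    """Anomaly series for one station, relative to the mean of its own
    first base_len values (a stand-in for the 30-year baseline)."""
    baseline = sum(temps[:base_len]) / base_len
    return [t - baseline for t in temps]

# Two stations with very different absolute temperatures but identical
# warming (invented numbers):
warm = [15.0, 15.1, 15.2, 15.3]
cold = [-5.0, -4.9, -4.8, -4.7]
a_warm = station_anomalies(warm, 2)
a_cold = station_anomalies(cold, 2)
mean_anomaly = [(w + c) / 2 for w, c in zip(a_warm, a_cold)]
# The two anomaly series are (near-)identical, so averaging them, or
# dropping the cold station entirely, leaves the trend unchanged.
```

    The dispute in this thread is over whether the gridding and extrapolation built around this step preserve that property when the station network changes systematically.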

  207. @Vukcevic:

    Gee… until recently, wasn’t the EU shipping boat loads of money to the ex-USSR for “carbon credits”? Think that might be an inducement to continue showing the world was warming and carbon credits were needed?…

  208. Glenn Tamblyn says: March 20, 2012 at 4:06 pm

    [...] totally debunked.

    As soon as I see that, I flag it as a probable Troll or at a minimum “True Believer Steeped In The Propaganda Talking Points”. Since when is science about “debunking” instead of presenting evidence? “debunk” is a propaganda term, IMHO, as used these days.

    Station quality issues – yeas there are bad quality stations as well as good ones. But the analysis done by a wide range of different investigators is that those station quality issues haven’t had any impact.

    The necessary conclusion from that line of reasoning is that quality is irrelevant. Any old crappy station in a high UHI area, with grass fields replaced by airport tarmac, is just fine…

    The more correct conclusion would be that the “looking at” was not done very well.

    Then the whole ‘march of the thermometers’ meme. You make reference to a decline in the number of ‘cooler’ sites, implying that this will introduce a systematic bias. Say What! To introduce a bias you would need to drop ‘cooling’ sites, not cooler ones. Remember, the temperature records are calculated based on temperature anomalies. We are looking for how much temperatures are changing, not their absolute value. Your point suggests that you think that global temperatures are averaged together to then see how much change there has been.

    If only that were true… Look, I’ve wandered through the rat’s nest of code that is GISTemp. It does NOT start with anomalies. It starts with temperatures.

    It then interpolates them, homogenizes them, and eventually, near the end, makes Grid Box Anomalies out of them. But the anomaly process comes much nearer the end than the beginning, LONG after a load of infilling, merging and averaging is done. Heck, even the “data” it starts from, the GHCN and USHCN, are temperatures created by a bunch of averaging, adjusting and homogenizing steps – including, BTW, something called Quality Control that amounts to saying that if a temperature is too far away from the expected (an ill-conceived notion…) it will be replaced with an AVERAGE of nearby ASOS stations. Yes, the Procrustean bed to which all data must be cut is an average of airports…

    BTW, I did make a process that started with the very first step being “make an anomaly”. It showed that different months were doing different things. Some warming, some cooling. Sometimes adjacent stations were going in different directions. My conclusion was that the adjustments are the source of any aggregate ‘trend’. Oh, and the way different instruments are spliced together in what is laughably called “homogenizing”. It’s largely a splice artifact dressed up in fancy clothes.

    So don’t go pulling the “Anomaly Dodge”, because the temperatures stay temperatures to very near the end. Long after boat loads of math have been done on them.

    And you’re right, if that were the case, dropping cooler stations would add a warm bias.

    It’s actually much more subtle than that. COLD stations are kept in during the baseline cold period (the 50s to 70s were cold). That forms a ‘grid box’ fictional temperature. (As there are only about 1200 active GHCN stations in the present, and either 8000 or 16000 ‘grid boxes’ depending on what era of code is used, most boxes are by definition a fabricated value). Later in the code the present temperatures are used to fabricate more grid box values. It is those grid box values that are used to create an “anomaly”… of a thing that does not exist…

    By having cold stations in the early data, the baseline is kept cold. By having them gone, later, the grid boxes are filled in from other stations. Now this is the fun bit. The remaining stations are all in lower volatility areas, so can never become as cold as the original stations (that were in places like mountains with greater temperature ranges). The “Reference Station Method” claims that it can do this without error. It can’t.

    There isn’t space to go into it here, but the “method” is used recursively (3 times by my count). No paper justifies recursive use. It is used on “grid boxes”. The paper justifying it was based on real thermometers, not fabricated fictions (that themselves may be filled in from 1200 km away). The baseline is calculated with the thermometers from the cold period, then applied in other PDO regimes. (So do you REALLY think thermometers have the same relationship when the jet stream is, on average, flat, vs when it is very “loopy”?) Nothing justifies holding that a relationship set in one PDO / AMO phase will be identical in the other phase. And so much more…


    Which is exactly why it isn’t calculated that way. The records all work by comparing each station against its own long term average to produce an anomaly for that station. Only then are these anomalies averaged together.

    This is, how to put it politely… no, “lie” would imply you know it’s balderdash… Flat Out Wrong. The code does no such thing. Read it. (The code, that is. I have.) Temperatures are kept AS TEMPERATURES through a load of homogenizing, infilling, averaging, and Reference Station Method steps (including a very badly done attempt at UHI “correction” that often gets it backwards). AFTER converting to “grid boxes”, anomalies are created, but those grid boxes are predominantly FICTIONAL (as 1200 thermometers don’t fill more than 1200 boxes – often fewer – and there are many thousands of them to fill…)

    So removing a cooler station will only introduce a warming bias to the record if that station is also cooling.

    OR if it is a cold station in the baseline for that grid box and is replaced with a warmer station in the present for that grid box via the Reference Station Method infilling process. THEN the anomaly is calculated.
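
    The mechanism being alleged here can be shown in toy form. This is an illustration of the claimed failure mode only, with entirely invented numbers – it is not GISTemp’s actual code, and whether GISTemp really behaves this way is exactly what is in dispute in this thread:

```python
# A grid box's baseline is set while a cold mountain station reports;
# later years are infilled from a warmer valley neighbor.  Neither
# station warms at all, yet the spliced series shows a jump.
mountain = [-2.0, -2.0, -2.0]   # reports only during the baseline years
valley = [1.0, 1.0, 1.0]        # reports only during the recent years

baseline = sum(mountain) / len(mountain)      # -2.0
recent = sum(valley) / len(valley)            # +1.0
spurious_anomaly = recent - baseline
print(spurious_anomaly)  # 3.0 of "warming" with zero real change
```

    The sketch only shows why an unlike-for-unlike splice can manufacture a trend; a correctly applied reference-station adjustment is supposed to remove exactly this offset before the splice.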

    Then you talk about using data where there wasn’t any. Since you are vague about what you mean here, I will assume that you are referring to the 1200 km averaging radius used by GISTemp. The reason why this is valid is that temperature anomalies are quite closely coupled over large distances and altitudes.

    As noted above, the RSM is applied 3 times in a row. There is no paper to justify that. It is applied using one set of stations in the baseline, a different set in the present. There is no paper to justify that. An ever decreasing and shrinking set of stations, not a well matched set and consistent over the test period as was used in the paper.

    The comparison in the RSM paper was over a very short period of time (so one phase of the PDO / AMO state). There is no justification for using it over a 40 or 50 year period when relationships change.

    Furthermore, most stations in the baseline were in places with a variety of environments, but often grassy, with trees, or otherwise cool. NOW almost all GHCN stations are over or near the tarmac at airports. To say an airport tarmac can fill in for grass and trees is just dumb. (And oh, BTW, many of those airports were very small and sometimes grassy in 1950… now some are major International Airports. See the ones in Hawaii, for example.)

    To say the relationship of a grass field to a nearby mountain is unchanged over 50 years as one becomes an airport runway and the other may now be on the other side of a Rossby Wave (as flat vs loopy jet stream changes with PDO / AMO) is just ONE example of the silly assertion made by implication.

    I’ll skip your nice sounding but silly examples of similarity. I’ve done comparisons of stations and found that the relationship often will invert. SFO vs Reno for example (or vs Sacto). When inland temperatures change, fog can be pulled over SFO. Sometimes not. As longer term cloud levels change, the degree of non-correlation rises.

    So why are fewer stations used now? Because they don’t need to use that many stations to obtain sufficient coverage. More isn’t better.

    Bull. We’re already below Nyquist limits as it is (and by quite a margin). We have too low a sample size to say anything meaningful.

    I’m not going to bother with the rest of your comment. The pointer to a rather mindless “read dozens of pages of tripe” link is a standard “Troll Fodder Flag”.

    When I first came at the AGW issue it was with a “Gee it must be bad, I need to learn more” and got sucked into that dodge way too many times. Dredging through dozens (hundreds?) of links to ever more mind numbing mumbojumbo that never quite managed to get to the meat of things. Lots of loose ends that never were quite tied off. Lots of smooth sounding ‘talking points’ that never quite sealed the deal.

    No Thanks.

    As of now, I’ve found much clearer and much more complete sources (most are here in various links scattered over a few years worth of postings, but decent search terms will pick them out).

    I also sunk a couple of years of my life into GISTemp and GHCN. “Digging In” to it myself.

    What I found was false assertions (such as that “it is all done with anomalies” when it clearly isn’t) and papers supporting one thing stretched out of all proportion in the code. (So the RSM is justified for a few selected stations in ONE climate regime UP TO 1200 km MAX; then it is applied RECURSIVELY three times in a row (so data might be smeared up to 3600 km), applied to FICTIONAL ‘grid boxes’ not real geographies, and applied across very large variations in climate regimes. All unjustified by any scientific investigation.)

    So, no, I’m not buying your song.

    Particular “issues” directly related to GISTemp:

    http://chiefio.wordpress.com/category/agw-and-gistemp-issues/agw-gistemp-specific/

    Problems in the GHCN:

    http://chiefio.wordpress.com/category/ncdc-ghcn-issues/

    The source code and technical issues from the version of GISTemp I ported (a bit dated now, but as some of their code is clearly from the 1970s, it doesn’t change fast…)

    http://chiefio.wordpress.com/category/gisstemp-technical-and-source-code/

    The results of my “anomaly first” tests:

    http://chiefio.wordpress.com/category/dtdt/

    And a whole lot more scattered around in my “notebook” site…

    Simply put, the GHCN is relatively buggered, the GISTemp code is crap and worse, and CRU has lost their raw data, their code is crappy (see “Harry README”) and they can’t recreate anything.

    On THAT, I’m not willing to bet the fate of the global economy.

  209. Steven Mosher says:
    March 20, 2012 at 11:45 pm
    [...]
    As more data comes into the system the past will generally cool. It will not stay the same. It can only be higher or lower. Given the existing sampling distribution, bet on lower. Just sayin.

    I rather prefer that my history not change… I really hate it when the past keeps getting re-written. It makes me think about the USSR and airbrushing inconvenient people out of photographs… Just sayin…

  210. Unless we get a strong El Nino soon, I believe we will reach it this year yet. Santer talked about 17 years being needed. And at least as far as RSS is concerned, we are at 15 years and 3 months now. So that leaves another 21 months

    Yep, but we won’t know for sure without testing for significance, and there is some pretty wild variation at either end of the record – particularly with RSS TLT.

  211. I rather prefer that my history not change

    Science must yield to better information, otherwise it is just dogma.

    If you can’t deal with revisions you shouldn’t do science.

  212. Alexej Buergin says: March 19, 2012 at 1:12 pm

    Re Mosher
    quote
    So what was (according to Moshtemp) the average temperature in Reykjavik in 1940: 5°C or 3°C?
    unquote

    Not just Reykjavik: there is a whole suite of islands which were recording temperatures during the WWII blip and the subsequent fall in temps. I’ve just come back from Madeira and overflew the western edge of Spain. From 30+kft you can see Gib, Morocco, Spain, Portugal and France, all westerly facing, all with records which can be compared with the new adjusted temperatures. Add Iceland, the west coast of Ireland, the Faroes, etc etc

    Either the original record was sloppily done — no ground truthing — or the new record is sloppily done. Or, I suppose, both. But perhaps I am maligning the paper and it covers this point exhaustively.

    Smoothing the blip has one other problem: the contemporary windspeed changes match the temperature blip, so the handwave needs to find some explanation for that as well. Insulated anemometers anyone?

    JF

  213. DirkH says:

    “Glenn, I don’t know who you are, so I don’t know who the “we” is you speak for; but what you’re saying to discredit the “death of the thermometers” meme makes no sense.

    First you say that “we” is only interested in looking at how a station compares to itself; later you mention why GISTemps 1200 km radius smoothing is supposed to work.

    Think for a moment. How can the globally extrapolated and gridded anomaly NOT be affected by the geographically systematic Death Of The Thermometers?

    You should not make the mistake of mentioning that first meme (that “we” is only interested in comparing like with like) in the same comment as the second meme (the GISTemps 1200 km smoothing makes no difference).

    It becomes too obvious when you do.”

    DirkH,

    I think you misunderstand and have not followed the independent assessments of the so-called “death of thermometers”.

    1. The sampling of thermometers will only affect the trend IF the thermometers that drop out DIFFER in trend from those retained. With the death of thermometers, those dropped tended to be higher latitude (higher trend) stations. If the drop had any effect it would be a COOLING effect.

    2. We calculated and presented results on this site that show the drop had no effect

    3. We’ve added stations effectively removing the drop and show no difference.

    4. I’ve done reconstructions that only use rural stations, recons that only use long stations series
    ( 500 stations), recons that only use 100 stations, no difference.

    Why? Because as long as you have a reasonable sampling of the earth north to south you will get the same answer even with very few stations. Been there, done that, proved that.
    Now, go back to 2007 when I first started looking at this and I was as skeptical, if not more skeptical, than many here: skeptical about “rounding”, skeptical about the number of stations, skeptical about adjustments, skeptical about siting, about UHI, you name it.

    None of these concerns amounted to mousenuts. Get the data (I had to fight for it); it’s now freely available. Get the code (I had to fight for that too); you can use the code I helped free, use the code I make freely available, or write your own. And do some work.
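
    The subsampling checks described in points 2-4 above can be reproduced on synthetic data. This sketch is illustrative only – the stations and their noise are invented – but it shows why a network sharing a common signal gives nearly the same mean from 50 stations as from 500:

```python
import random

random.seed(42)
years = range(50)
signal = [0.01 * y for y in years]  # common 0.1 C/decade signal (invented)
# 500 "stations": the shared signal plus independent station noise
stations = [[s + random.gauss(0, 0.2) for s in signal] for _ in range(500)]

def network_mean(subset):
    """Yearly mean anomaly across a subset of stations."""
    return [sum(st[y] for st in subset) / len(subset) for y in years]

full = network_mean(stations)
small = network_mean(random.sample(stations, 50))
max_gap = max(abs(a - b) for a, b in zip(full, small))
# max_gap stays small relative to the 0.5 C signal range: the subset
# recovers essentially the same series and trend as the full network.
```

    Dropping stations only changes the answer when the dropped stations carry a different signal, not merely different noise or a different absolute temperature – which is the point of the rural-only and 100-station reconstructions mentioned above.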

  214. Yesterday I downloaded all the new station data for CRUTEM4 (HadCRUT4) and then compared the results with the old CRUTEM3 station data. There are 5549 stations in the set compared to 5097 in CRUTEM3. 738 new stations have been added while 286 stations have been discarded. Those added are mainly in northern Russia. Quite a lot of stations from North America have been discarded. None have been added, and some lost, in the southern hemisphere, despite even sparser coverage there than in the Arctic. The changes to the global anomalies are small and statistically insignificant. However, they do psychologically change the impression of “warming” over the last 15 years – moving 2010 and 2005 up a bit and 1998 down a bit. Also, the 19th century data has got just a tiny bit cooler. Statistically there has been no warming for about 15 years – but now they can say that 2010 was “warmer” than 1998 by 0.01 ± 0.05 degrees! You can read more about this and also see where the new stations are at my blog.
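
    The inventory arithmetic above (5097 - 286 + 738 = 5549) is easy to reproduce with set operations. The station IDs below are made up, standing in for the real CRUTEM3/CRUTEM4 station lists:

```python
# Invented station IDs; with the actual station files the same set
# arithmetic reproduces the added/discarded counts reported above.
crutem3 = {"ST%04d" % i for i in range(5097)}
dropped = {"ST%04d" % i for i in range(286)}       # 286 discarded stations
new_arctic = {"RU%04d" % i for i in range(738)}    # 738 added stations
crutem4 = (crutem3 - dropped) | new_arctic

print(len(crutem4), len(crutem4 - crutem3), len(crutem3 - crutem4))  # 5549 738 286
```

    With the real lists, mapping the set differences onto station coordinates is what shows the additions clustering in northern Russia.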

  215. Anthony Watts says:
    March 21, 2012 at 10:13 am

    The new CRUTem4 is in blue, old CRUTem3 in red, note how the past is cooler (in blue, the new dataset, compared to red, the new dataset)

    I believe red and blue was mixed up here. I think it should read: (changes in bold)
    The new CRUTem4 is in RED, old CRUTem3 in BLUE, note how the past is cooler (in RED, the new dataset, compared to BLUE, the OLD dataset)

  216. clivebest says:
    March 21, 2012 at 9:59 am
    I downloaded yesterday all the new station data for CRUTEM4(Hadcrut4) and then compared the results with the old CRUTEM3 station data. There are 5549 stations in the set compared to 5097 in CRUTEM3. 738 new stations have been added while 286 stations have been discarded. Those added are all mainly in northern Russia. Quite a lot of stations from North America have been discarded. There are none added but some lost in the southern hemisphere despite even sparser coverage than the arctic. The changes to the global anomalies are small and statistically insignificant. However they do psychologically change the impression of “warming” over the last 15 years – moving 2010 and 2005 up a bit and 1998 down a bit. Also the 19th century data has got just a tiny bit cooler.

    Interestingly, your close-up of CRUTEM3 and CRUTEM4 since 1990 shows differences from D’Aleo’s graph: D’Aleo’s graph shows CRUTEM4 lower than CRUTEM3, whereas yours shows them virtually identical. Any thoughts?

    • @ Phil
      My two curves, CRUTEM4 and CRUTEM3, are calculated directly from the two full sets of station data (~5000 in each) using exactly the same algorithm as provided by the UK Met Office. So this should show directly any systematic differences between the two datasets.

      I just checked on CRU’s website and for some reason they don’t have the global average of CRUTEM4 data available for download – just SH and NH. But GL is simply (SH+NH)/2.

      It could be that he is using CRUTEM3v and CRUTEM4v. The v stands for “variance correction”. They write the following: “the method of variance adjustment (used for CRUTEM3v and HadCRUT3v) works on the anomalous temperatures relative to the underlying trend on an approximate 30-year timescale.” In other words this looks like a smoothing algorithm to suppress “outliers”, and it assumes an “underlying trend”. I prefer to work with the raw temperature data from the stations themselves.
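      The quoted description suggests damping departures from a slow (~30-year) underlying trend. A minimal sketch of that idea, assuming a simple running-mean trend and an illustrative damping factor (CRU’s actual procedure is more involved):

```python
# Hedged sketch of the "variance adjustment" idea quoted above: damp
# departures from a slow (~30-year) underlying trend. The running-mean
# trend and the 0.5 damping factor are illustrative assumptions, not
# CRU's actual method.
def running_mean(x, window=30):
    half = window // 2
    return [sum(x[max(0, i - half):i + half + 1]) /
            len(x[max(0, i - half):i + half + 1]) for i in range(len(x))]

def variance_adjust(anoms, window=30, damp=0.5):
    trend = running_mean(anoms, window)
    # keep the trend, shrink the year-to-year departures from it
    return [t + damp * (a - t) for a, t in zip(anoms, trend)]
```

      A flat series passes through unchanged, while an isolated spike is pulled toward the trend – which is exactly why one would prefer the raw station data if the goal is to see the outliers.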

      For example: before 1900 the method of normalisation for anomalies (subtracting monthly variations) introduces large systematic errors. See this graph, where the blue points use normalisation within a single grid cell, compared to CRU’s per-station normalisation.

  217. E M Smith (Chiefio)

    I spent some time on your site a fair while back trying to see if you had anything worthwhile to say. I walked away very unimpressed. You drown your audience in endless masses of tables until most of your readers wouldn’t know up from down. So I went to look at the core of the issue – the algorithm from Hansen & Lebedeff 1987 and the source file where it was implemented – Step3/to.SBBXgrid.f, a Fortran file.

    I then read your ‘analysis’ of this file. You went through describing the shell script file and what it does. Then you described the header of the Fortran file, the text description of the program. But when it came to describing what the actual code does, by reading and understanding that code, your description degenerates into a lot of hand-waving with no actual information. I have been back to your site several times since, and you have never updated that description.

    Personal opinion. I don’t think you actually understand what the RSM method does. You have never demonstrated an understanding of it in anything I have seen you write.

    If you sit down and go through the RSM carefully, from H&L87, following what it does through each iteration of the calculation, you see the following:

    1. The average over the baseline period is calculated for each station being analysed.
    2. One station is selected as the reference station.
    3. Then for station 2,
    3.1 The difference between the average of Station 1 and the average of station 2 is calculated.
    3.2 The weighting for station 2 is calculated, depending on how far from the cell centre it is located: 1 at the centre, falling to 0 at 1200 km.
    3.3 The data for station 2 has the difference between stations 1 & 2 calculated in step 3.1 subtracted from it. The effect of this is to produce a value that is now relative to station 1’s baseline rather than station 2’s. THIS IS THE CRITICAL STEP. The data from station 1 and station 2 now have a COMMON AVERAGE VALUE. They could be regarded as being one station.
    3.4 Then the values from Station 1 & Station 2 are combined together with area weighting applied to the values from station 2 as they are added.
    3.5 Finally a new common average for the baseline period is calculated for the combined data from stations 1 & 2.

    Then steps 3.1-3.5 are repeated for stations 3-n.

    Finally the average for station 1, which after the adjustment in step 3.3 is now the average for all stations’ data, is subtracted from the calculated weighted average to produce an anomaly value. However, what needs to be understood about this process is that because data from different stations have been adjusted in step 3.3 before being averaged, all that data is now relative to a common reference value and is mathematically equivalent to an anomaly based on that reference value. If step 3.3 didn’t occur where it does, this process would be producing an Anomaly of Weighted Averaged Temperatures. But because of step 3.3, what it produces is the Weighted Average of Temperature Anomalies.
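    Steps 1–3.5 can be sketched in a few lines (a simplified illustration of the logic as described above, with hypothetical station series and distances – not GISTemp’s actual Fortran):

```python
# Hedged sketch of the Reference Station Method as described in the
# steps above (simplified from Hansen & Lebedeff 1987). "baseline"
# selects the years used as the common reference period; station
# series and distances are hypothetical.
def weight(dist_km, radius_km=1200.0):
    # step 3.2: linear taper, 1 at the cell centre, 0 at the radius
    return max(0.0, 1.0 - dist_km / radius_km)

def baseline_mean(series, baseline):
    return sum(series[baseline]) / len(series[baseline])

def combine_rsm(stations, baseline=slice(0, 5)):
    """stations: list of (temps, dist_km); all series the same length."""
    temps0, dist0 = stations[0]
    wsum = weight(dist0)                      # step 2: station 1 is the reference
    combined = [t * wsum for t in temps0]     # running weighted sum
    for temps, dist in stations[1:]:
        w = weight(dist)
        current = [c / wsum for c in combined]
        diff = baseline_mean(current, baseline) - baseline_mean(temps, baseline)  # 3.1
        shifted = [t + diff for t in temps]   # 3.3: move onto the common baseline
        combined = [c + w * s for c, s in zip(combined, shifted)]                 # 3.4
        wsum += w                             # 3.5: new common mean implied below
    mean = [c / wsum for c in combined]
    ref = baseline_mean(mean, baseline)       # subtract the common average ...
    return [m - ref for m in mean]            # ... to produce the anomaly series
```

    With two parallel series offset by 10 degrees, the shift in step 3.3 removes the offset before averaging, so the result is the common anomaly series – the Weighted Average of Temperature Anomalies described above.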

    Now I freely admit the algorithm is not that clear, and having the different aspects – anomalies, weighting and averaging – all mixed together does make it hard to understand. And it’s not the prettiest Fortran code I have ever seen. But beneath the messiness of how they have done it, GISTemp IS calculating an Average of Anomalies.

    This is also borne out by the fact that others, such as the Clear Climate Code project, have been able to take the GISTemp code, rewrite it and clean it up in Python (finding a few minor bugs in the process) and produce the same result. And many independent analyses of the temperature record, culminating in BEST, have produced essentially the same result.

    A couple of other points, Chiefio. You have stated that the 1200 km weighting radius is hard-coded. THIS IS NOT TRUE. It is the default, but it is overridden by a command-line parameter in the shell script file do_comb_step3.sh:

    label=’GHCN.CL.PA’ ; rad=1200
    if [[ $# -gt 0 ]] ; then rad=$1 ; fi
    …… Several lines setting up input files ….
    ${fortran_compile} to.SBBXgrid.f -o to.exe
    to.exe 1880 $rad > to.SBBXgrid.1880.$label.$rad.log

    Also you have claimed that the 1200 km weighting, when used on islands, sets the temperature for the surrounding ocean out to large distances. THIS IS SIMPLY NOT TRUE. Again, the shell script for step 5, where the land and ocean data are merged, sets a default of 100 km; beyond this distance from land the ocean data is used instead. See here, from do_comb_step5.sh:

    RLand=100 ; # up to 100km from a station, land surface data have priority over ocean data
    if [[ $# -gt 0 ]] ; then RLand=$1 ; fi
    …… Several lines setting up input files ….
    $FC SBBXotoBX.f -o to.exe
    to.exe $RLand 0 > SBBXotoBX.log
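    The priority rule those lines set up can be sketched as follows (hypothetical cell values and distances; `rland_km` mirrors the script’s RLand default, and `merged_anomaly` is an illustrative helper, not a GISTemp function):

```python
# Hedged sketch of the step-5 merge rule: land data have priority only
# within RLand km of a station; beyond that the ocean (SST) data is used.
def merged_anomaly(dist_to_station_km, land_anom, sst_anom, rland_km=100.0):
    if land_anom is not None and dist_to_station_km <= rland_km:
        return land_anom   # within RLand: land surface data win
    return sst_anom        # otherwise: SST data

print(merged_anomaly(50.0, 1.2, 0.3))   # 1.2 (island cell keeps the land value)
print(merged_anomaly(500.0, 1.2, 0.3))  # 0.3 (far from land: SST is used)
```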

    And if you actually look at the results from GISTemp (calculate data using land only, ocean only and combined, then look at the gridded data output) you see clearly that data from islands DOES NOT extend out over large distances of the ocean, because the SST data is used instead.
    So you are quite simply wrong on both of those points, out of what looks like simple carelessness – not actually reading the code CAREFULLY.
    So you may have built some notoriety for yourself, but it looks like it may have been founded on a fairly flimsy basis.

    With so many other studies contradicting you, why should anyone take you seriously?

Comments are closed.