CRU’s shifting sands of global surface temperature

Excerpt from The Inconvenient Skeptic by John Kehr


The longer I am involved in the global warming debate, the more frustrated I get with the CRU temperature data. This is one of the most commonly cited sources of global temperature data, but the numbers just don't stay put. Each and every month the past monthly temperatures are revised. Since I enter the data into a spreadsheet each month I am constantly seeing the shift in the data. If it were the third significant digit it wouldn't bother me (very much), but it is much more than that.

For example, I have seen two very different values for January 2010 since September 2010. Here are the values for January based on the date I gathered them.

Sep 10th, 2010:  January 2010 anomaly was  0.707 °C

Jan 30th, 2011:  January 2010 anomaly is now 0.675 °C

That is a 5% shift in the value for last January that has taken place in the past 4 months.  All of the initial months of the year show a fairly significant shift in temperature.

Monthly Temperature values for global temperature change on a regular basis.

Read the entire post here

=============================================================

Some of this may be related to late reporting of GHCN stations, a problem we’ve also seen with GISS. Both GISS and CRU use GHCN station data which are received via CLIMAT reports. Some countries report better than others, some update quickly, some take weeks or months to report in to NOAA/NCDC who manage GHCN.

The data trickle-in problem can have effects on determining the global temperature and making pronouncements about it. What might be a record month at the time of the announcement may not be a few months later when all of the surface data is in. It might be valuable to go back and look at such claims later to see how much monthly temperature values quoted in news reports of the past have shifted in the present.

More on CLIMAT reports here in the WMO technical manual.

UPDATE: For more on the continually shifting data issue, see this WUWT post from John Goetz on what he sees in the GISS surface temperature record:

https://wattsupwiththat.com/2008/04/08/rewriting-history-time-and-time-again/


123 thoughts on "CRU's shifting sands of global surface temperature"

  1. Am I being overly cynical in noting that:

    Sep 10th, 2010: January 2010 anomaly was 0.707 °C

    helps them come up with new “it’s been a record month” regularly

    Jan 30th, 2011: January 2010 anomaly is now 0.675 °C

    lowers the bar for the next set of figures to be “it’s been a record month”?

    Maybe I am, BUT how many times do these type of things happen in reverse? In a random universe the adjustments should be 50:50 up vs down (I would guess as a layman).

    I have a suspicion, perhaps it's just me, that far more errors and corrections occur in the direction that aids the 'panicists' (if I am to be a denier, they get to be 'panicists').

    Every time there is a change made in the numbers, there ought to be a just as easy to access justification of the change. The climatologists long ago forfeited the right to automatic trust in their competence or honesty.

    IMHO

  2. Surely this should be done electronically now? The reading is taken and sent to a satellite. All the attendant would have to do is ensure that the recording station has a fresh battery in it every now and then – or even run it off solar. With instant reporting we'd have a better idea of weather/climate.

  3. Sigh.

    I keep harping on about the poor quality of the data, the fact it keeps changing doesn’t exactly help their cause.

  4. The problem with the GISS and CRU series is that, as a result of these adjustments to trailing data (whether it's late station reporting or manual manipulation, it matters not), we are effectively perpetually on the crest of "unprecedented high temperatures", which always facilitates alarming reports.

    That these alarming reports are nullified, month-on-month, would not be an issue except that those reports are only superseded by the next month’s “unprecedented high temperature” reports.

    It is this publicity trick of press releases and retrospective adjustments that allows even a flat temperature trend to be perceived, by all but the most diligent analysts, as an ever-upward trend.

  5. I had to visit John's blog to figure out the graph (coffee hadn't kicked in yet). The caption there is helpful – "Monthly Temperature values for global temperature change on a regular basis."

    Each line in the graph links the monthly anomaly as of some reporting date. Hence the September 2010 line has temps from January to July, the late January 2011 line is the only one to include December.

    All in all, it’s another reason to rely on the satellite temps.

  6. Is the general trend reported high to start with and later revised down, so that when comparing year on year you will always see a rise in temperature? If the figures were continually adjusted upward, there would come a time when this was obviously wrong, so earlier values then have to be adjusted down to keep the rise going.

  7. Surely, it is ridiculous to produce temperature figures with an implied accuracy to three decimal places? Does any one reporting these temperatures read the instrument to that level of precision?

  8. Don says:
    January 31, 2011 at 5:00 am
    God forbid if the data for the efficacy and safety of a new drug was collected in this manner.

    Good analogy. Where facts and data are vital, such as medicine and drugs, such latitude isn’t permitted, neither is propaganda. With climate *science*, such abrogation is not unsafe, since there’s nothing at stake either way, being mere voodoo as a consensus.

    Like in science fiction, or even fictional novels and films, it is fine to play with the *truth* since it is done for effect. In a court case however, all facts have to be as objectively presented, otherwise cross examinations will arrive at them.

    Similarly with the CRU. The objection, however, is that they present it as fact and not as fiction.

  9. I can understand the data trickle point but if this is truly in play, CRU should wait until all data is in (and analysed) before making any pronouncements as to record warming.

    Of more concern is the fact the temperature data for the 1930s has been adjusted half a dozen or so times. There can be no good excuse behind all these adjustments.

    All in all, it does not lead one to have confidence in the various data sets.

    I think that there are strong arguments to scrap the land data set (it is too uncertain and corrupted by unreasoned and probably incorrect adjustments to have much value) and look solely at sea temperature data (the oceans contain about 99% of the total heat energy of the system and these temperatures are most telling about whether there is or is not some form of global warming taking place) and, if one must, also at satellite data.

  10. Do CRU still use surface measured data? If so then this shows how antiquated the system is. Since the surface data system is not a complete coverage, with that 2/3 loss of stations in 1990, then these people should shift to satellite data sets. These show a slight global cooling which is not what these people need to show.

  11. On October 4 last year I recorded the Dec 2008 and all 2009 monthly temps for the Western Australia location of Kalgoorlie-Boulder from the GISS database, and checked the figures again the following day …

    Recorded Oct 4 2010:
    Dec – 24.3
    Jan – 27.7
    Feb – 25.4
    Mar – 22.7
    Apr – 18.9
    May – 14.3
    Jun – 12
    Jul – 10.7
    Aug – 13.8
    Sep – 999.9
    Oct – 20.8
    Nov – 23.5
    Dec – 25.8

    Recorded Oct 5 2010:
    Dec – 24.4
    Jan – 27.8
    Feb – 25.5
    Mar – 22.8
    Apr – 19
    May – 14.4
    Jun – 12.1
    Jul – 10.8
    Aug – 13.9
    Sep – 999.9
    Oct – 20.9
    Nov – 23.6
    Dec – 25.9

    Does it really take as long as 23 months to adjust GHCN (and thus GISS) temps, and how come every month needed adjusting by .1 C? The good news is I’ve just checked again and there’s been no change since October 5!

    As for that 999.9 missing data error in the GISS records for September 2009 … well, that’s another story: http://www.waclimate.net/giss-adjustments.html

    But if you really want to see some fancy temperature adjustments that affected Australian data the previous month, August 2009, check http://www.waclimate.net/bom-bug-temperatures.html

  12. There is a really simple solution to all of this – simply DON’T report the monthly values until all of the data is in! If it takes six months, so be it.

    Unfortunately, we live in an era where climate scientists wish to become rock stars and publicity hounds, and never miss a chance to be first with the manic CAGW press release…

  13. All they are saying is that the temperature record today, is not accurate.
    Tomorrow, a week from now, a month from now…..
    ….it will be accurate

  14. Gendeau says:
    January 31, 2011 at 4:36 am
    I have a suspicion, perhaps it's just me, that far more errors and corrections occur in the direction that aids the 'panicists' (if I am to be a denier, they get to be 'panicists').
    ===========================
    I think ‘panic mongers’ has a better ring to it!

    Geoff A

  15. Tony Hansen says:
    January 31, 2011 at 5:02 am

    John,
    > I too worry about the ‘adjustments’, but I also worry about your ’5%’.
    5% of what?

    Of the anomaly.

    $ python
    >>> .707 * .95
    0.67164999999999997

    Approximately – the 5% is rounded up from:

    >>> '%.2f' % (100. * (1. - (.675/.707)))
    '4.53'

  16. Tony Hansen says:
    January 31, 2011 at 5:02 am

    John,
    I too worry about the ‘adjustments’, but I also worry about your ’5%’.
    5% of what?

    5% shift in the value.

    (0.707-0.675)/0.675 = 4.7% (roughly 5%)

  17. Those with kids can visualise this; you look away for a moment, when you look back the child is sitting innocently next to the wreckage, with eyes like saucers, claiming they didn’t do it.

  18. Murray Grainger at 5:14 am:

    “three decimal places? Does any one reporting these temperatures read the instrument to that level of precision?”

    That is the first, most obvious, and ongoing proof of the overall preposterous CAGW fraud.

    A quick look at Anthony’s surfacestations.org is proof enough that three decimals is just ludicrous, and you can extrapolate that factual evidence to the third-world hellholes and formerly communist backwaters, that even zero decimal accuracy is quite a ridiculous assertion when it comes to the historical surface temperature “record.”

    One decimal of accuracy is a joke. Three decimals is a knee-slapping laugh riot.

  19. Sometimes up and sometimes down as new data comes in late and while the underlying trend that the anomaly is based on is adjusted. Clearly the overall effect is minimal. It is explained here http://www.cru.uea.ac.uk/cru/data/temperature/#faq and is again making a mountain out of a molehill to sow seeds of doubt on a non-issue.

    The point is to get the data out as soon as possible but make sure adjustments are made to keep it accurate as new information comes in. There's nothing sneaky or hidden going on.

  20. In cases like this, what do they need data for except to give the impression of honesty and the cover of credibility? In other words, what good is a claim without full disclosure of how the claim is made?

  21. The 6th month on your graph seems to be lower for the Sept measurement than for the Jan 2011 measurement, while it is the other way round for the 3rd month. This would suggest that it swings both ways and is not just for the propaganda effect that seems to be claimed by a number of comments.

  22. MattN says:
    January 31, 2011 at 6:24 am

    Why can't the reporting be automated? This is 2011 for the love of God….
    ———-

    A couple of years back I looked at the report forms for a station in Berkeley, CA; the most recent form had several days missing, the original ones, none. Thus, the quality of the data was degraded at that station compared to its beginning, even with all the technology available.

  23. Not at all unlike what the gubmint is constantly doing with economic numbers. Initially put out great numbers, which grab the headlines, then later revise downward.

  24. Since I enter the data into a spreadsheet each month I am constantly seeing the shift in the data. If it was the third significant digit it wouldn’t bother me (very much), but it is much more than that.

    I use Linux bash. This is just an example of a script I use to pipe out the first column of CRU monthly data.

    #!/bin/bash

    SED='/bin/sed'
    AWK='/usr/bin/awk'
    CAT='/bin/cat'

    $CAT monthly |
    $AWK '{ print $1","$2",0" }' |
    $SED 's/\t//' | $SED 's/ //' > hadlong.csv

    If I want to output any month in the years then (NOV):-

    #!/bin/bash
    SED='/bin/sed'

    $SED '/\/11/!d' hadlong.csv > Nov.csv

    It takes time to learn and each data set has its own foibles but once
    written the scripts save me from firing up a spreadsheet on my old clunker.

  25. Ric Werme says:
    January 31, 2011 at 4:54 am

    “I had to visit John’s blog to figure out the graph (coffee hadn’t kicked in yet). the caption there is helpful – “Monthly Temperature values for global temperature change on a regular basis.”

    Each line in the graph links the monthly anomaly as of some reporting date. Hence the September 2010 line has temps from January to July, the late January 2011 line is the only one to include December.

    All in all, it’s another reason to rely on the satellite temps.”

    Getting the global temperature right is not a trivial matter. Satellite temperatures are not a solution to the problem. A few years back, the UAH was shown to have some major errors, and Spencer and Christy were forced to admit some bad mistakes and correct the error.
    Tamino has pointed out that the 3 station based and two satellite temperature records all look alike if you look at the big picture. The satellite records appear to be more affected by El Nino.

    These differences do not appear to be a real issue in the discussion of Global Warming.

  26. Ric Werme

    Unfortunately, it doesn’t make statistical sense to express changes of interval scaled variables as percentages. If a different baseline is chosen for the anomalies, the value of the percentage will change also.

    Shift up the baseline by .6 (to anomalies of .075 and .107) and you now get a whopping 43% or 30% (depending on which of the two you choose as the percentage comparison base).
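
    A few lines of Python make the point; the numbers are from the post, and the 0.6 shift is just an arbitrary illustrative re-baselining:

    # Illustration only: the same 0.032 degC revision expressed as a percentage
    # of the anomaly, under the published baseline and an arbitrary shifted one.
    old, new = 0.707, 0.675              # CRU Jan 2010 anomaly, before and after
    for shift in (0.0, 0.6):             # 0.6 degC re-baselining is hypothetical
        a, b = old - shift, new - shift
        print(f"baseline shift {shift}: {100 * (a - b) / a:.1f}% change")
    # roughly 4.5% against the published baseline, about 30% after the shift,
    # so the "percentage change" of an anomaly depends on the baseline chosen.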

  27. MattN: I don't really know what difference it makes. Nobody thought of using weather data to track the global mean temperature until a few decades ago. The system was set up to measure weather. We could have automatic measurement now but it wouldn't fix the problem that temperatures from the earlier years weren't gathered in the same way. If we started collecting accurate data now, we would still need at least 30 years to see what the climate is truly doing. By that time it will be clear to everyone whether or not global warming is happening and what the cause is.

  28. Interesting comment in the original article:

    “Oddly enough the yearly average seems to stay the same, but the monthly values that create the average are in constant flux.”

    That would seem to be extremely unlikely: late data changes individual months significantly (~5%) but the yearly average stays the same.

  29. It isn’t just the constant diddling and fiddling with the numbers – though, God wot, that is objectionable enough in itself, particularly when retroactively applied to figures which have sat there undisturbed for decades. If, as one or two people have already suggested, it’s a matter of not all the results being in, then to put it bluntly there is no figure which can be honestly reported until all the results are in. If that takes several weeks or months despite all our cool technology, then so be it. Stop issuing scary figures, and wait until you can issue real ones. (I’m addressing that comment mainly to the guilty parties, of course, not WUWT posters who have to work with the ‘official pronouncements’ as they come in.)

    As someone old enough to have been taught real science though, I find much “climate talk” a bit weird on purely linguistic grounds; it seems to perform a sort of neuro-linguistic programming in itself, even on this excellent blog. When I was at school, one might have mentioned, say, the human influence on climate. Now, all we hear is to do with forcings: forcing, according to my Shorter OED, being the action of force, its only near-scientific use being in phrases like “forcing frame” (a frame where the growth of a plant is artificially hastened). No. Most, if not all, of the influences on climate are just that, influences, and rarely artificial, without all the connotations of force. ‘Anomaly’ is another such: in normal language, something is anomalous only if it is abnormal, exceptional, irregular; in climate discussions the word seems to be used merely to ‘scarify’ the variation from some value which itself is pretty much arbitrary. What is ‘anomalous’ about some minor variability in parameters which we all know vary all the time? Nothing. It’s just a variation.

    I can’t help feeling that the use of words with such pejorative connotations is all part of the con. Any speaker of normal English, hearing all this talk of ‘forcings’ and ‘anomalies’ etc., will at some subconscious level associate the words with their normal meanings and infer that there is a problem. And as George Gershwin memorably put it, “It Ain’t Necessarily So”. Indeed, as far as I can see, it ain’t so at all. Worth a thought.

    Oh, and if my modest attempts at formatting have worked out OK … thanks for the advice! If not, do us a favour and either correct them or scratch them.

  30. Off the current subject:

    Any one have a way to do an estimate of the CO2 required to enable this meeting?

    http://www.politico.com/news/stories/0111/48471.html

    All 260 Ambassadors from all embassies, consulates and other posts from 180 countries. All called to D.C.

    Note: GoToMeeting, is that not still online?

    Has to be one big number.

  31. If they are reporting the data and trends, then they should not release anything until all the data is in and let the information speak for itself.

    However, as many of us know, they are not only reporting the data and trends, they are also reporting propaganda.

    Unfortunately, a lot of influential people believe the propaganda.

  32. MattN says:
    January 31, 2011 at 6:24 am

    “Why can’t the reporting be automated?”

    Because then NOAA/NASA/UEA/MET/… won't be able to selectively apply their "quality control" methods to the data.

  33. As a long term observer of the CRU dataset (until post-climategate when I couldn’t see the point) I too have noticed this constant changing of the data.

    Gendeau says: January 31, 2011 at 4:36 am

    Am I being overly cynical in noting that:

    Yes, probably. If there was something obvious to be cynical about in the data, I would have spotted it. In all the time I observed, I never spotted a consistent trend … except that results would be reported later when it was cooling, and rushed out early when it was warming!

    However the significant changes that regularly occurred highlighted the ease with which an observer bias could be introduced into the results. Even a group without an overtly spoken bias may easily have adjusted the figures by viewing some (cooling) results as bad, and not viewing other (warming) results as similarly bad.

    However, when you had a group actively trying to “hide the decline” their data was almost totally useless for drawing any conclusions.

  34. The problem stems from the fact that what is being measured is essentially noise. 0.6C, 0.7C, 0.4C… it does not matter. 2C, 4C, 10C… that is significant.

    When you have a signal, as opposed to noise, these issues do not exist.

  35. This quote makes the whole process seem fishy to me.

    “Oddly enough the yearly average seems to stay the same, but the monthly values that create the average are in constant flux.”

    Perhaps they intentionally hold the annual reading fixed so as not to draw attention to the changes; on the other hand, the month-to-month data changes make them look like they have no idea what they're doing. How can you say we have new evidence that previous monthly temperatures should be adjusted, yet the annual temperature remains the same?

    I’ve read this blog for many years now and for most of that time I believed these numbers held some significance. Over time I’ve come to the belief that all of these ground based temperature records are entirely meaningless. There is way too much room for error and even manipulation to come into the picture — and that’s great compared to the problems with the historical, proxy based ‘temperature record’.

    The satellite record is the best tool for this, unfortunately, the record is quite short.

    I've been a skeptic for some time, but I, like most here, believe the CO2 we're adding to the atmosphere will have some effect on temperature. But how can we know? We're mostly traveling blind.

    Still, the evidence seems to indicate that the impact can’t be large.

    MikeEE

  36. Sorry, but if 5% of the data trickles in late it should not move the global monthly temp 5%. With 95% of the data in it should not perturb the global value at all!

    And remember, this is a September value for January – 7 months after the period in question. How late is this trickle?

    I agree with the author, this indicates some fundamental math errors.
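
    A back-of-the-envelope check, a sketch that simply assumes the late-reporting 5% of stations carry 5% of the weight in the global mean, shows how cold the stragglers would have to be to produce that revision:

    # Hypothetical: 95% of the data is in and averages 0.707 degC; what must the
    # late 5% average for the final global figure to come out at 0.675 degC?
    reported_mean, final_mean, late_fraction = 0.707, 0.675, 0.05
    late_mean = (final_mean - (1 - late_fraction) * reported_mean) / late_fraction
    print(f"late stations would need to average {late_mean:.3f} degC")
    # about 0.067 degC, i.e. roughly 0.64 degC below the stations already in.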

  37. Tony Hansen says:
    January 31, 2011 at 5:02 am

    John,
    I too worry about the ‘adjustments’, but I also worry about your ’5%’.
    5% of what?

    I presume he means 5% of the 0.675C anomaly but this is meaningless in relation to the uncertainty of the figure itself which is probably nearer +/-100% !

    There seems to be very little attention paid to the basic scientific principle that any result is meaningless unless it is accompanied by the experimental uncertainty.

    There have been some attempts to estimate it, but because of the quality and mixture of the data sources this seems to be a pretty speculative venture. Thus the temperature record is little more than speculative. A simple fact of the data quality.

    Anyone worrying about 5% of 0.675C has not understood what they are looking at.

    It’s also helpful not to get too obsessive about the figures for each month as they come hot off the press. Whatever they say is irrelevant to the climate record over 30 years.

    What is worth archiving and keeping a close eye on is how the likes of GISS and CRU are constantly adjusting the long term record and how each time it just happens to get a little closer to their AGW theory than last time they added up the same temperature data.

  38. How will you ever be able to trust data from organisations (CRU, GISS, etc) run by unscrupulous climate politicians attempting to achieve an economic policy outcome on behalf of the UN – regulation of human CO2 output – regardless of its impact on climate?

  39. Michael says:
    January 31, 2011 at 6:30 am

    Sometimes up and sometimes down as new data comes in late and while the underlying trend that the anomaly is based on is adjusted. Clearly the overall effect is minimal. It is explained here http://www.cru.uea.ac.uk/cru/data/temperature/#faq and is again making a mountain out of a molehill to sow seeds of doubt on a non-issue.

    The point is to get the data out as soon as possible but make sure adjustments are made to keep it accurate as new information comes in. There's nothing sneaky or hidden going on.

    Are they noting these changes as they occur? The lack of transparency, reason for changes, etc, are the problems.

  40. >> I too worry about the ‘adjustments’, but I also worry about your ’5%’.
    >> 5% of what?

    > Of the anomaly.

    I don't think you can measure it that way. Suppose the anomaly is +0.001 C
    and that this value is then revised to +0.002 C. This would be a very slight
    change, yet it would be a change of 100%….!

  41. >Brian H says:
    >January 31, 2011 at 5:14 am
    >“The future is certain; only the past is subject to revision.”
    >Polish Soviet adage.

    That’s got a nice ring to it. Maybe CRU or the IPCC could pick that up as their motto?

  42. richard verney says:
    January 31, 2011 at 5:33 am
    “Where facts and data are vital, such as medicine and drugs, such latitude isn’t permitted, neither is propaganda. With climate *science*, such abrogation is not unsafe, since there’s nothing at stake either way, being mere voodoo as a consensus.”
    … except when people needlessly die on icy roads or in floods.

  43. This has been going on for decades. I've monitored the published NASA/NOAA surface station records, and yes, old data changes each month, often significantly. The temperature record for Manhattan, KS used to show a negative trend, but old data (from the late 1800s and early 1900s) has continuously been "corrected" downward, in some cases by multiple degrees, and it now shows a warming trend here.

    Of course, scientific-sounding assumptions are given in sciency literature, but these assumptions are arbitrary, untested, and statistically unchallenged. It's unbelievable to me that so-called academics are allowing this to happen.

  44. They should really report as they do on election night – and say “with 800 out of 1000 stations reporting, the global temperature anomaly is modelled (or calculated) to be 0.707. The stations not yet reporting have a history of pulling this value down, so we’ll wait for more reports before we can call this temperature a record.”

  45. Don says:
    January 31, 2011 at 5:00 am

    >>
    God forbid if the data for the efficacy and safety of a new drug was collected in this manner.
    >>

    Oh come on! Do you really imagine that the “scientists” employed by pharma labs are totally honest and objective? They’re whoring their PhDs on a full time basis.

    The questions raised by Climategate are not limited to climate science. They have broken the illusion that the public had about how objective and innocent the whole of the scientific community is.

    A damn good thing this too. Blind faith is rarely a good thing.

  46. “Does it really take as long as 23 months to adjust GHCN?”

    In one respect it never stops “adjusting”. I believe the way it works is something like:

    If a value is missing and a fill value must be calculated, it is calculated from an average of the surrounding stations. This average includes the current month’s value. Next month that “average” might change again and the next month it might change yet again. So this way the fill values calculated for missing data are constantly changing.

    An anomalously cold month can cause adjustments in many years though 5% seems a bit much.
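
    I don't know the exact GHCN procedure, but a minimal sketch of the idea I'm describing, filling a missing station month with the mean of whatever neighbour values have arrived so far, shows why a filled value keeps drifting:

    # Illustration only, not the actual GHCN algorithm: a missing value is
    # filled with the mean of the neighbour anomalies received to date.
    def fill_value(neighbour_reports):
        return sum(neighbour_reports) / len(neighbour_reports)

    received = [0.8, 0.6]                  # first neighbours to report (made up)
    print(round(fill_value(received), 3))  # 0.7
    received.append(0.2)                   # a late, colder neighbour arrives
    print(round(fill_value(received), 3))  # 0.533 -- the "filled" past changes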

  47. eadler says:
    January 31, 2011 at 7:01 am

    These differences [in GMT series] do not appear to be a real issue in the discussion of Global Warming.

    Okay, eadler, since it’s your religion, what are you doing in your personal life to decrease your own contribution to fossil fuel CO2 production, in order to allegedly help save yourself or the World from CO2CAGW? Or isn’t it really that much of an issue?

  48. NASA using tax-payer money to indoctrinate and propagandize kids on global climate change.

    From NASA we have the greenhouse “effect.”

    Why is it that, in discussing the greenhouse "effect," heat transfer by convection is never mentioned?

    Control the language; control the debate.
    —————————————————————————————
    http://climate.nasa.gov/kids/index.cfm

    What is a greenhouse?
    A greenhouse is a house made of glass. It has glass walls and a glass roof. People grow tomatoes and flowers and other plants in them. A greenhouse stays warm inside, even during winter. Sunlight shines in and warms the plants and air inside. But the heat is trapped by the glass and can’t escape. So during the daylight hours, it gets warmer and warmer inside a greenhouse, and stays pretty warm at night too.

    How is Earth like a greenhouse?

    Earth’s atmosphere does the same thing as the greenhouse. Gases in the atmosphere such as carbon dioxide do what the roof of a greenhouse does. During the day, the Sun shines through the atmosphere. Earth’s surface warms up in the sunlight. At night, Earth’s surface cools, releasing the heat back into the air. But some of the heat is trapped by the greenhouse gases in the atmosphere. That’s what keeps our Earth a warm and cozy 59 degrees Fahrenheit, on average.

  49. @Geoff Alder, January 31, 2011 at 6:02 am
    “I think ‘panic mongers’ has a better ring to it!”

    Nah….I still prefer: Warm Mongers.

  50. Anthony:
    You worry me that some of the CLIMAT data are not in yet. I remember your article last year how some CLIMAT stations (most of which are at “hot” airports, another problem) were reporting minus temperatures as plus temperatures; for example, some Arctic stations at -30°C were being reported as PLUS 30°C, and this led to huge overall errors in the averages. Does this still happen sometimes? Did they put in a quality control step to prevent this after your excellent report?

    One of the problems with automation—no “spell checking” going on to remediate reporting errors.
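
    A crude quality-control step of the sort being asked about might be nothing more than a climatology check; this is just a sketch, and the threshold and climatology values are made up:

    # Hypothetical sanity check: flag a CLIMAT value that sits implausibly far
    # from the station's long-term mean for that month (e.g. -30 keyed as +30).
    def looks_suspect(value_c, climatology_c, tolerance_c=15.0):
        return abs(value_c - climatology_c) > tolerance_c

    print(looks_suspect(+30.0, climatology_c=-28.0))  # True  -> likely sign flip
    print(looks_suspect(-30.0, climatology_c=-28.0))  # False -> plausible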

  51. I also remember the “adjustment method” that came out in the Climategate emails, where seemingly arbitrary numbers were just added to the algorithms, adjusting upward past years. Is my remembrance correct?

  52. Are the temperature figures presented the mean temperatures ?
    or the mean temperatures per sq km ? The former is virtually
    meaningless.
    And why do they never make a stab at an actual global average,
    instead of talking in ‘anomalies’ all the time? Is it because an actual
    average would be disappointingly low, and not ‘hot’ at all ?

  53. @Michael says:
    January 31, 2011 at 6:30 am
    “The point is to get the data out as soon as possible…”

    Why?

  54. climatebeagle says: January 31, 2011 at 7:15 am

    Interesting comment in the original article:

    “Oddly enough the yearly average seems to stay the same, but the monthly values that create the average are in constant flux.”

    That reminds me of a period which must have been around 2007, when for something like six months the HADCRUT yearly average was within 0.01C of a single value.

    To say I was suspicious is a complete understatement, someone’s mucky hand was on the data – I suspected to keep a final year figure “in line” with some directive from above.

    Of course … as soon as I started posting, the discrepancy disappeared and it is now impossible for me to prove my assertion.

    WHICH IS EVEN MORE WORRYING! because there is just a statistical chance that data might be within a gnat's dick of some value each and every month … but it would stay that way … it wouldn't suddenly disappear!

  55. Gendeau says:
    January 31, 2011 at 4:36 am
    Am I being overly cynical in noting that:

    Sep 10th, 2010: January 2010 anomaly was 0.707 °C

    helps them come up with new “it’s been a record month” regularly

    Jan 30th, 2011: January 2010 anomaly is now 0.675 °C

    lowers the bar for the next set of figures to be “it’s been a record month”?

    Gendeau – kind of like the unemployment claim numbers. Every week, they are lower than the week before, but never seem to get below 400k. They revise them upwards after the initial report so they can make the claim. Just like the temperature numbers.

  56. “Some of this may be related to late reporting of GHCN stations, a problem we’ve also seen with GISS. Both GISS and CRU use GHCN station data which are received via CLIMAT reports. Some countries report better than others, some update quickly, some take weeks or months to report in to NOAA/NCDC who manage GHCN.

    The data trickle-in problem can have effects on determining the global temperature and making pronouncements about it. What might be a record month at the time of the announcement may not be a few months later when all of the surface data is in. It might be valuable to go back and look at such claims later to see how much monthly temperature values quoted in news reports of the past have shifted in the present.”

    No, it must be something else. I make local backups of GHCN V2 from NOAA. I have a copy of v2.mean (raw monthly temperatures by station) from 2010-06-28 22:48:37 UTC. Now I have downloaded it again at 2011-01-31 15:33:26 UTC.

    There are only three differences between the two copies for January 2010, they are as follows:

    13167341000 LOURENCO MARQUES/COUNTINHO (MOZAMBIQUE) -25.90 32.60 (27.0°C)
    30489056000 CENTRO MET.AN (CHILE) -62.42 -58.88 (0.2°C)
    42572597000 MEDFORD/MEDFO (U.S.A.) 42.37 -122.87 (7.9°C)

    In the June 2010 version there’s no January 2010 data for these stations while in the current version they have one (in parentheses at end of line).

    John Kehr says between 10 September 2010 and 30 January 2011 CRU global anomaly for January 2010 has changed by almost 5%. The June 2010 version contains 1447 valid temperatures for January 2010 from all over the globe; the addition of just 3 data points can not possibly cause such a shift.

    Do they keep changing the adjustment algorithm? Is it documented anywhere?
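
    For anyone wanting to repeat the comparison, here is a rough sketch; it assumes the usual fixed-width v2.mean layout (a 12-character station/duplicate id, a 4-digit year, then twelve 5-character monthly values in tenths of a degree, with -9999 for missing), so check the field positions against your own copy:

    # Sketch only: list January differences between two local copies of
    # GHCN v2.mean for a given year. Field widths are assumed, not verified.
    def january_values(path, year="2010"):
        vals = {}
        with open(path) as f:
            for line in f:
                if line[12:16] == year:          # year field (assumed position)
                    jan = int(line[16:21])       # first monthly field, tenths degC
                    if jan != -9999:             # -9999 marks missing data
                        vals[line[:12]] = jan / 10.0
        return vals

    old = january_values("v2.mean.2010-06-28")   # hypothetical local file names
    new = january_values("v2.mean.2011-01-31")
    for station in sorted(set(old) | set(new)):
        if old.get(station) != new.get(station):
            print(station, old.get(station), "->", new.get(station))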

  57. ” P. Solar says:
    January 31, 2011 at 8:34 am

    Oh come on! Do you really imagine that the “scientists” employed by pharma labs are totally honest and objective? They’re whoring their PhDs on a full time basis. ”

    Any good observer will notice there are a lot more drugs withdrawn from approval by big pharma due to problems in the clinical trials than claims of AGW withdrawn by grant-dependent PhDs.

  58. John
    Did you try contacting the responsible group?
    For all climate questions, please contact the National Climatic Data Center's Climate Services and Monitoring Division:
    Climate Services and Monitoring Division
    NOAA/National Climatic Data center
    151 Patton Avenue
    Asheville, NC 28801-5001
    fax: +1-828-271-4876
    phone: +1-828-271-4800
    email: ncdc.info@noaa.gov

    They might even be able to tell you when their reporting is complete.

  59. P. Solar says:
    January 31, 2011 at 8:34 am

    At the scientist level, that does not happen in large Pharmas. I am retired from several large and small Pharmas over 40 years’ employment, and corrupting the data is almost impossible.

    For pharmaceuticals that are under FDA investigation, there are protocols that cannot be corrupted. Each measurement has a witnessing protocol, where another scientist has to legally testify, by literally looking over the shoulder of the originator, and signing and witnessing the entry in a bound notebook, that is electronically copied and placed in electronic storage, eventually in Iron Mountain, with the original notebooks.

    Next, the data are checked and verified by QC (Quality Control). Before reporting the data in electronic format to the FDA, the QA (Quality Assurance) officer is called in to verify that the equipment has been serviced and validated (SOP verification) for the data. Next, if there is a transposition step (human entry), the original data is checked line-by-line versus the reported data by the QA officer, who does not report to QC, but to the CEO.

    There is, in this chain, no vested interest in falsifying the data. Only big trouble, meaning jail time, is the outcome of any monkey business, and a chain of people would have to be in the conspiracy.

    Because of Sarbanes/Oxley reporting rules by Congress, the CEO and CFO then would be also ultimately culpable, even if there is inadvertent error, and he would face jail time also.

    These rules are the strictest in any business. On top of this, there are ISO rules (international protocols) that have to be followed for the manufacturing in large scale.

    Oh, that the climate scientists would have to follow similar rules! If so, a lot of this chicanery would never have happened.

  60. P. Solar says:
    January 31, 2011 at 8:34 am

    Oh come on! Do you really imagine that the “scientists” employed by pharma labs are totally honest and objective?

    Clinical trial data are subjected to several layers of 3rd party independent audit….

    ….without FOIA requests!!!

  61. don says:
    January 31, 2011 at 9:09 am

    Clinical trials are designed to do just that! The four stages of trials, Phase I, II, III and IV, are designed to incrementally expose an increasing number of patients to gather the statistics to detect toxicity. Phase I is typically done in 30 patients. Only a small percentage of drug candidates survive this step, because they don't show a cure, or side effects emerge that the rat studies did not show. Phase II, usually in 100-200 patients, is usually done many times (Phase IIA, IIB, and so on), accumulating statistics on efficacy and safety. Phase III, often called a pivotal study, is in say a thousand people, and is usually not done in the clinical hospital, but is a take-home scenario, duplicating actual usage. These are usually repeated if adequate statistics are not forthcoming. Phase IV is a post-marketing step, designed to detect vanishingly small anomalies, or side effects, that only huge numbers, in the millions of patients, would detect. Then, any side effects must be reported; each company has a "hot line" that doctors and patients can access long past the commercialization stage. Called "adverse event reporting", each event, even if a person dies of a hangnail while being treated for earlobe infection, must be reported immediately to the FDA.

    That is why few drugs are approved each year, because companies have to run this gauntlet in which few drugs survive. I was in this field for 40+ years, have a couple of drugs on the market that I invented, and I do trust this system. I am retired, and have no vested interest. By the way, the Pharma industry is additionally protected by whistle-blowing statutes. Believe me, I would have blown a large whistle if I had ever seen anything crooked happening, and I was in position to see it all.

    If global warming hypotheses, like declining polar bears, had to go through these rigorous steps before allowing publication, how many would survive?

  62. As far as I am concerned, the lack of respect accorded to data that is being used to justify economic shifts in the trillions around the world is one of the biggest problems with climate data today. Regardless of concerns regarding accuracy of the initial data there is somehow a belief that it is perfectly ok to modify historical data at any time without any justification or audit trail. This would not wash in practically any other field where any historical adjustments need to be carefully recorded and justified to ensure that data is not being updated to match the theory rather than theories being updated to match the data.

  63. P Wilson says:
    January 31, 2011 at 5:30 am

    Don says:
    January 31, 2011 at 5:00 am
    God forbid if the data for the efficacy and safety of a new drug was collected in this manner.

    Good analogy. Where facts and data are vital, such as medicine and drugs, such latitude isn’t permitted, neither is propaganda. With climate *science*, such abrogation is not unsafe, since there’s nothing at stake either way, being mere voodoo as a consensus.

    ==================

    And I see another poster above has weighed in with an account of how well regulated medical research is compared with climate science….

    Yet there is a LONG list of drugs, procedures and devices that have been withdrawn after passing the four stages of clinical trials and regulatory approval.
    This despite the method of reporting adverse reactions after release being woefully patchy and inadequate.
    There are also several notorious cases where big Pharma DID manipulate the data, mainly by omission of negative results, to gain approval.

    Recently individual clinicians have been threatened with being sued for libel when they raised the issue of problems with medical products at scientific meetings.

    Climate science is a model of ethical probity compared with some of the shenanigans that have occurred in medical research. That is one of the reasons FOR the extensive and close regulation that is imposed.

    Anyone who thinks that medical research is fraud free while climate science is riddled has it backwards and would be rapidly disillusioned by a modicum of investigation.

  64. I wonder if there is any value in applying some of our Government’s favorite performance measurement/earned value techniques to the “accuracy” of the various institutions measurements and models?

  65. “The future is certain and the past is always near” from the song “Roadhouse Blues” by Jim Morrison and the Doors via John Lee Hooker.

    For the Team it seems that the past is always here.

  66. John:

    Last year I attended a “Global Warming” lecture by a “retired climatologist” from the University of Wisconsin, Madison. During the lecture, we were free to ask questions. The audience was a combination of the Minnesota Futurists and the South Metro (Minneapolis) Critical Thinking Club. About 60 people in attendance.

    I asked 6 questions during the talk. None of them "agenda" oriented, all of them technical. The first question I asked was, "Dr. X, I'm going to ask you to do a thought experiment with me…" He agreed, if it didn't involve too much thought. I then outlined the idea of taking 10, 16 oz styrofoam cups and putting various amounts of hot, warm, and cold water randomly into each cup. I said, "Now let's measure the temperature of the water in each cup, then put all the contents into ONE insulated container. If I 'average' the ABSOLUTE temperature measured in each cup, will it match the 'temperature' of the aggregate?"

    Fortunately, the retired professor was sharp enough and honest enough to say “No..”

    There were two or three intervening questions, then I asked this: "An 86 F day in Minnesota at 60% RH has a total enthalpy in a cubic foot of air of 38 BTU. A 105 degree day in PHX at 15% RH has 33 BTU per cubic foot. Which volume has more ENERGY in it?"

    Again, fortunately, he answered, “The Minnesota air.”

    Then I asked which environment was "hotter". "Obviously the AZ air!" he answered.
    This, of course, led to my FINAL QUESTION…, "Since the 'Global Warming' claim is predicated upon the atmospheric energy balance, and since 'average temperature' does not include the effect of relative humidity, isn't 'average temperature' a completely fictitious number, and something without merit on a technical basis?"

    AFTER the presentation, a group of 3 engineers (myself included) and two programming types were outside the banquet hall, on a balcony, having some "free pop" from the event. One of the other engineers asked me, "Did you get any bad dents from this?" I said, "What do you mean by dents?" He said, "When you RAN OVER Dr. X, after setting him up with the atmospheric energy questions… and he gave you the 'deer in the headlights' look when you asked your final question."

    We all laughed! Totally true. However, the good Dr. did try to salvage something, "Well, actually, RH humidity IS included in the calculation of average temperature." Now I must give a hat tip to one of the programming types. He took my card, and a few days later emailed me a host of information that indicated that "average temperature" is, indeed, just that… numerically averaging temperature values over spatial distribution and time. BUT there is NO consideration of "atmospheric energy".

    SO dear fellow Engineer (Chemical too, I might add!) John K., can we get down to brass tacks (if not brass knuckles!) and start to exposit that the “King has NO NEW CLOTHES” and that “average temperature” is a meaningless contrivance?

    I know you are heading this way as it is, but I thought I’d give you a nudge in that direction.

    Max

  67. Juraj V. says:
    January 31, 2011 at 5:59 am

    “Those who read George Orwell’s “1984″ will understand.”

    Indeed. The January 2010 anomaly has always been 0.707 °C.

    This data is as reliable and credible as government economic statistics. So, while it may have been the hottest year at least there was no inflation, even with ‘green shoots’ popping up everywhere as the economic boom got going. And U.S. unemployment isn’t all that bad if you don’t count everybody.

    Off topic, sort of, here’s a great condensed summary of why it was so generous of the ‘little people’ to have bailed out the poor Banksters:

    http://www.zerohedge.com/article/step-aside-bernank-here-comes-timothy-jeethner-bears-explain-banker-bailouts-and-screwing-am#comments

  68. Sarah says:
    January 31, 2011 at 10:44 am

    Thank you Sarah, for picking up on this; I was afraid to have gotten off-topic.

    But yes, almost all other industries have accountability. What I was getting around to was proposing changes, like Sarbanes-Oxley rules, so that accountability would be brought to bear on these climate change artists. We could start with criminal penalties, as now exist for Pharmas, Wall Street, etc., if data does not have a validation trail. We could adopt ISO international protocols for data gathering, storage, and transmittal. I am loath to suggest another bureaucracy, but something is needed to provide quality assurance.

    For FDA or ISO standards, for example, the manufacturers have to provide levels of validation: from ensuring that equipment is accurate and serviced, and that software is validated so that what is measured is what is stored, to providing protocols for rounding figures or for significant-figure standards in calculations, how averages are calculated (do we average three stations and then average the three results, or average all nine at once? It makes a difference; see the small example at the end of this comment), how data stations are extrapolated and interpolated, how error remediation is performed, how Stevenson Screens are brought into conformity, and how or whether the data are used in the meantime.

    It seems to me that we also need more monitoring stations, not fewer, in remote places, under strict SOP protocols. And this would cost orders of magnitude less than what we are spending now on frivolous polar bear studies and the like. Not that land stations are the ultimate solution to measuring the globe's energy balance; Anthony and others have shown the perils in interpreting these results, no matter how accurate the method that gathers the data. At least it would eliminate some variables.

    I can’t touch on all the weaknesses, like fixing peer review, activism vs. science, and the like, but I think we should all become cAGW activists! (citizens Against Government Waste, that is)
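
    On the averaging point above, a two-minute example (the anomalies are invented, and one group is short a station because of a missing report) shows why the order of averaging matters:

    # Invented numbers: nine station anomalies in three regional groups, with
    # one report missing, averaged all at once versus group-by-group.
    groups = [[0.9, 0.8, 0.7], [0.5, 0.4, 0.3], [0.1, 0.2]]
    pooled = sum(sum(g) for g in groups) / sum(len(g) for g in groups)
    grouped = sum(sum(g) / len(g) for g in groups) / len(groups)
    print(f"all at once: {pooled:.3f}, group then average: {grouped:.3f}")
    # roughly 0.49 versus 0.45: with unequal group sizes the answers differ.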

  69. Site validation and approval

    Right now, all Pharmas have to also validate their R&D, clinical, and manufacturing sites. They undergo constant inspection, and if they fail, they are shut down, with disastrous consequences.

    Same with manufacturing sites, in general. In order to pass ISO standards, they have to be open to outside auditing and inspection by disinterested third parties, both in the US and abroad. This way, each country operates under the same rules, and quality is assured across countries.

    REPLY: And I’ll point out that you won’t find a single climate center in the world with an ISO-9000 or ISO-8000 certification – Anthony

  70. There is a saying that covers it. “Blowing smoke up your ass”.
    Because that’s what they’re doing.
    Creating chaos in data land keeps us busy, and in the meantime…..
    Watch the news.

  71. richard verney says:
    January 31, 2011 at 5:33 am

    If you think that land-station records are ALL so bad that they should be scrapped in favor of oceanic records, then you must be unaware that there are NO century-long oceanic records made at any FIXED location. The oceanic time series that everyone uses are SYNTHESIZED from scattered observations made UNDERWAY by ships of opportunity. It’s a nightmare from the standpoint of data integrity and reliability. At least with land stations we can select records that are little affected by UHI and land-use changes and reject all those that have been arbitrarily “adjusted” or “homogenized.”

  72. Sarah, it’s a lot easier to adjust historical temps down, than present day temps up, to show artificial warming.
    That way, no matter what the actual temp is today, it will always be warmer.

  73. I have downloaded a number of GISS temperature anomaly data sets from "Wood For Trees" over the past 7 years and I can tell you that very little remains constant. Comparing the graphs, most temperatures as far back as 1880 are now adjusted lower, whereas temperatures beyond 1998 are adjusted upwards. The later graphs show much more of an incline than the older data sets. While this is all happening, temperature data is also being adjusted at the local weather recording station here in OZ before it gets up to Dr. Jim.

  74. Max, Max, Max…

    AFTER the presentation, a group of 3 engineers (myself included) and two programming types, were outside the banquet hall, on a balcony, having some “free pop” from the event.

    I’m not sure I buy the free pop statement. :)

    Mark

  75. If anyone is really bothered about the data handling used to produce the CRU and GISTEMP surface records and wants to verify them I recommend producing your own surface temperature record maintained by someone you trust.

    Take the station input and demonstrate how scientists should be handling it – use whatever backup system, adjustments (or lack of), change logs, maintenance, you think is best. Whatever you think the CRU and GISTEMP scientists are doing wrong in terms of data handling, do it yourselves properly. And by doing so demonstrate to everyone that this works better.

    I mean you might discover that the subject of this article – past data changing – is an unavoidable feature of maintaining a surface temperature record from month to month. Or you might not.

    But until you try you are just leaving issues like this up in the air for speculation.

    REPLY: Just curious since you are in the UK, do you work for CRU or the government there? – Anthony

  76. ISO standards – good, bad or both? but always a costly nightmare to implement.
    A thread on this matter would be interesting.

  77. “”””” Ric Werme says:
    January 31, 2011 at 6:06 am
    Tony Hansen says:
    January 31, 2011 at 5:02 am

    John,
    > I too worry about the ‘adjustments’, but I also worry about your ’5%’.
    5% of what?

    Of the anomaly.

    $ python
    >>> .707 * .95
    0.67164999999999997 “””””

    Actually, my stick in the sand calculator gives 0.67165 exactly for 95% of 0.707.

    You need a new supercomputer; or maybe just a new stick.

  78. Sarah says:
    January 31, 2011 at 10:44 am

    “Regardless of concerns regarding accuracy of the initial data there is somehow a belief that it is perfectly ok to modify historical data at any time without any justification or audit trail.”

    You don't need an audit trail. Take the initial data yourself and check that the result follows from it. Having detailed source code, change logs and audit trails is useful for tracking down a problem, but you don't need any of those to determine whether or not there is one, which is the first port of call.

    In this case take the initial GHCN data and see whether 2010 really is the 2nd warmest year in that record, or find out whatever rank it does fall as. If you find a very different result (eg 2010 is the 10th warmest year) then there’s an issue to explore. But if you find it’s the 2nd, 1st or 3rd then there is no indication the result is wrong.

    In my mind the different temperature products all check each other in this way. Different source code on the same initial data producing results that can be compared.

  79. Berényi Péter says:
    January 31, 2011 at 9:05 am

    “No, it must be something else. I make local backups of GHCN V2 from NOAA. I have a copy of v2.mean (raw monthly temperatures by station) from 2010-06-28 22:48:37 UTC. Now I have downloaded it again at 2011-01-31 15:33:26 UTC.

    There are only three differences between the two copies for January 2010, they are as follows:

    13167341000 LOURENCO MARQUES/COUNTINHO (MOZAMBIQUE) -25.90 32.60 (27.0°C)
    30489056000 CENTRO MET.AN (CHILE) -62.42 -58.88 (0.2°C)
    42572597000 MEDFORD/MEDFO (U.S.A.) 42.37 -122.87 (7.9°C)

    In the June 2010 version there’s no January 2010 data for these stations while in the current version they have one (in parentheses at end of line).

    John Kehr says between 10 September 2010 and 30 January 2011 CRU global anomaly for January 2010 has changed by almost 5%. The June 2010 version contains 1447 valid temperatures for January 2010 from all over the globe; the addition of just 3 data points can not possibly cause such a shift.

    Do they keep changing the adjustment algorithm? Is it documented anywhere?”

    Don’t tell me everyone here has missed this:
    http://hadobs.metoffice.com/crutem3/jan_2010_update.html

  80. Re Dallas Tisdale (No relation to Bob, er that Bob) says:
    January 31, 2011 at 1:01 pm

    “It is a shame they can’t keep a log of adjustment and corrections like UAH does.”

    Yes, it would be nice if there was more information published whenever anything changed each month. In this case a quick summary would do. It sounds like something has changed with the processing in the last few months (which the Jan 2010 update I linked to obviously doesn't cover).

    I know they were planning to change the algorithm at some point (I thought it was new year). If they have done that then there’s nothing on the site indicating it.

  81. @-Bob(Sceptical Redcoat) says:
    January 31, 2011 at 11:03 am
    “Where do they get atmospheric temperature readings to 3 decimal places of accuracy – in the laboratory or computer, perhaps?”

    The number of decimal places is a reflection of the sample size rather than the accuracy of the readings.

    Try thinking of it like this…
    If a large number of thermometers can only be read to the nearest whole degree and are measuring a temperature of about 0.5 deg then the data you get will be a long list of zeros and ones.
    Like a coin toss, where you give the heads/tails a 1-0 values, if you average just a few data points the answer will be somewhere between 0 and 1 but with great uncertainty.
    But if you average thousands of coin tosses/thermometer readings the average will narrow down to 0.500 and the number of decimal places that you can give will be related to the number of data points.

    If it doesn’t narrow down to 0.5, but to some other value, then you can conclude with certainty that the coin is weighted, or that the temperature is not 0.5 deg but closer to what your average turns out to be.
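
    A quick simulation makes the same point; it's only a sketch, and the 0.5 degC true value, the noise width, and the sample sizes are arbitrary choices:

    # Illustration: thermometers readable only to the nearest whole degree,
    # a true temperature of 0.5 degC, and the behaviour of the averaged value.
    import random
    random.seed(1)
    true_temp = 0.5
    for n in (10, 100, 10000):
        # each reading rounds (true value + instrument noise) to a whole degree
        readings = [round(true_temp + random.gauss(0, 0.5)) for _ in range(n)]
        print(n, round(sum(readings) / n, 3))
    # the average closes in on 0.5 roughly as 1/sqrt(n), even though no single
    # reading resolves better than one whole degree.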

  82. Grainger:
    ‘Surely, it is ridiculous to produce temperature figures with an implied accuracy to three decimal places? Does any one reporting these temperatures read the instrument to that level of precision?’

    No! It is not ridiculous at all. Equipment can easily measure the temperature anywhere to +/-0.001 degree (satellite data) or +/-0.01 for ground and sea data at any instant of time though of course a few seconds later that temperature will have changed.

    Thus you’ll collect a large cluster of raw temperature measurements together with their error bars. Average them, do the statistics and see if the average value differs significantly from that of another data set. I think we can assume that in the figures quoted the statistics justify the thousandth place data.

    As someone who has spent time measuring water density to 1 part per million, I can say that water sitting anywhere other than in a proportionally-controlled water bath designed to keep temperatures to +/- 0.001 degree varies in temperature by several hundredths of a degree almost by the second, which is, I guess, why the quoted averages are less precise than those for the satellite data. Unnecessary and expensive acquisition of data!

  83. Onion2,

    Interesting that they acknowledge that they did not get things right, such as not doing the recent ‘normals’, but now that they have modified it, the major effect seems to be a lowering of the temperature in the early part of the record.

    Wow, who would have thought that changes to recent temp records could have such an effect!

    I thoroughly dislike the anonymity of “global” temperatures. There are so many c**p issues involved, station loss (urban vs rural esp Arctic), UHI, instrument/station sensitivity changes (several issues), station location (lots of issues), for starters, then all the “necessary” adjustments. And when all the c**p has done its work, then I fear the satellites themselves might risk being adjusted to phony calibrations that do NOT allow properly for orbit decay etc.

    The answer is very simple. Under our noses in fact. IMHO.

    Why try to get a totally sci-fi “global average” at all? Why not be content with some local stories; just use enough rural stations of reasonable trustworthiness to provide a statistically significant signal. Like John Daly did. Like I suspect my article does if it could be statistically quantified – certainly it compels visually.

    As Anthony pointed out, none of the world’s climate centres are credited with any of the ISO standards, yet our governments are keen to make policy based on this junk data. Goes to show what you will do when there is a lot of money at stake.

    Monthly raw is not raw. The data is collected daily or more often, e.g. max/min. The “monthly” value already has a missing-days treatment. Then the missing months are estimated. But the “missing months” were in general not empty, just fewer days than some limit. The more normal approach to missing daily records is to go back to the daily record and fit from the measurements you have. If you know there is no trend, you can use later measurements in the fitting mix. However, warmers can NOT assume no trend. If you want to find any trends in the data you have to privilege the data from around the same period. If you have 25 days in May 2007, 5 days in June, and 20 days in July 2007, the normal method to estimate June would be to use the days you have, and not throw away the 5 days in June. Common methods in other fields would be to infill the gaps with the mean of the two end dates, or better, fit a polynomial through the known points.
    You also need to investigate why you have missing data, as far as possible, to estimate systematic errors – e.g. you may have less missing data in February as it is a shorter month. Plus a lot of the “missing data” only appears to be missing if you do not look for it.
    You should be working with the daily data even if you want to graph monthly results; but people are not doing this as the raw daily data just is not publicly available. I wonder if it is even available to the magic circle.
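
    A minimal sketch (entirely made-up station values, not any archive’s actual method) of the two infilling approaches mentioned above, using the handful of observed June days rather than discarding them:

```python
import numpy as np

days = np.arange(1, 31)                          # a hypothetical 30-day June
have = np.array([3, 9, 14, 21, 28])              # the 5 days actually observed
obs = np.array([15.2, 16.0, 17.1, 16.4, 15.8])   # their daily mean temps, deg C

# 1) linear interpolation between the bracketing observations
filled_linear = np.interp(days, have, obs)

# 2) a low-order polynomial fitted through the observed days
coeffs = np.polyfit(have, obs, deg=2)
filled_poly = np.polyval(coeffs, days)

print("June mean, observed days only:", round(obs.mean(), 2))
print("June mean, linear infill     :", round(filled_linear.mean(), 2))
print("June mean, polynomial infill :", round(filled_poly.mean(), 2))
```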

  87. “”””” alleagra says:
    January 31, 2011 at 1:51 pm
    Grainger:
    ‘Surely, it is ridiculous to produce temperature figures with an implied accuracy to three decimal places? Does any one reporting these temperatures read the instrument to that level of precision?’

    No! It is not ridiculous at all. Equipment can easily measure the temperature anywhere to +/-0.001 degree (satellite data) or +/-0.01 for ground and sea data at any instant of time though of course a few seconds later that temperature will have changed. “””””

    Now, that satellite Temperature measurement to 1/1000th of a degree: just where exactly in, on, or around the planet was that measurement made? Would not the Temperature change by that much between two points, say 1 cm apart, on the ground somewhere in a typical shopping mall (outside the buildings)? Can you really separate points that close from a satellite?

  88. “”””” izen says:
    January 31, 2011 at 1:43 pm
    @-Bob(Sceptical Redcoat) says:
    January 31, 2011 at 11:03 am
    “Where do they get atmospheric temperature readings to 3 decimal places of accuracy – in the laboratory or computer, perhaps?”

    The number of decimal places is a reflection of the sample size rather than the accuracy of the readings. “””””

    Well you see, that IS the whole point. The “sample size” is precisely one reading, of which they record two a day: the maximum of all of the readings, and the minimum of all of the readings.

    That is insufficient sampling density to even correctly determine the daily average Temperature (which is NOT a sinusoidally varying (single-frequency) function of time).
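
    A minimal sketch with a synthetic, deliberately skewed diurnal curve (not real station data) of how the max/min convention can miss the true daily mean:

```python
import numpy as np

hours = np.linspace(0.0, 24.0, 24 * 60, endpoint=False)  # one-minute resolution
# hypothetical skewed day: flat cool night with a sharp warm spike mid-afternoon
temps = 10.0 + 8.0 * np.exp(-((hours - 15.0) ** 2) / 8.0)

true_mean = temps.mean()                          # average of the full curve
minmax_mean = (temps.max() + temps.min()) / 2.0   # the max/min convention

print(f"true daily mean      : {true_mean:.2f} deg C")
print(f"(Tmax + Tmin)/2 value: {minmax_mean:.2f} deg C")
print(f"bias                 : {minmax_mean - true_mean:+.2f} deg C")
```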

    The data set should report the number of stations reporting for each particular number presented… that way, if the number of stations changes, you’d know why! Also, the entire history and all calculations leading up to each and every number MUST specify its source and full audit trail of computations for any of this “climate” “data” to be trusted.

  90. onion2 says:
    January 31, 2011 at 1:00 pm

    If anyone is really bothered about the data handling used to produce the CRU and GISTEMP surface records and wants to verify them I recommend producing your own surface temperature record maintained by someone you trust.

    That has always missed the point. So why not instead figure out why what you recommend is not relevant? Hint: the necessary scepticism of the scientific method, as applied to the “materials and methods” of the CRU and GISS reconstructions, does not mean one also has to determine a “correct” GMT from surface stations. It doesn’t even involve having to accept that the GMT means anything at all, apart from the way it is constructed.

    Is quoting Orwell clichéd and hackneyed?
    “All history was a palimpsest, scraped clean and reinscribed exactly as often as was necessary. Even the written instructions which Winston received…….never stated or implied that an act of forgery was to be committed; always the reference was to slips, errors, misprints, or misquotations which it was necessary to put right in the interests of accuracy……there did not exist, and never again could exist, any standard against which it could be tested.”

  92. izen says:
    January 31, 2011 at 11:01 am

    Your comparison proves the point. Drugs are withdrawn. Litigation commences.

    In climate science, fraud on every scale is promoted and the aftermath is that when questioned, this *ethical probity* is whitewashed away and put under the carpet (or in the broom cupboard) and they continue on their complacent and ideological path.

    I don’t see the probity in this procedure.

  93. Murray Grainger says:
    January 31, 2011 at 5:14 am
    Surely, it is ridiculous to produce temperature figures with an implied accuracy to three decimal places?
    ——–
    No it’s not, because the numbers are given explicit error bars.

    It’s also useful to keep the decimal places to help pick up variations due to processing errors, and to act as guard digits if the values are propagated into other calculations (see the sketch at the end of this comment).
    ——–
    Does any one reporting these temperatures read the instrument to that level of precision?
    ——–
    No they don’t. The values are not raw.
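
    A minimal sketch of that guard-digit point, with entirely made-up anomaly values: a small processing change shows up clearly at three decimal places but vanishes if the values are rounded to one decimal place first.

```python
old_run = [0.412, 0.388, 0.401, 0.395]  # hypothetical anomalies, processing version A
new_run = [0.409, 0.391, 0.398, 0.399]  # the same months after a processing change

for old, new in zip(old_run, new_run):
    full = new - old
    coarse = round(new, 1) - round(old, 1)
    print(f"3 dp difference: {full:+.3f}   1 dp difference: {coarse:+.1f}")
```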

  94. John Marshall says:
    January 31, 2011 at 5:35 am
    Do CRU still use surface measured data? If so then this shows how antiquated the system is. Since the surface data system is not a complete coverage, with that 2/3 loss of stations in 1990, then these people should shift to satellite data sets. These show a slight global cooling which is not what these people need to show.
    ——–
    Err, no.
    Having multiple data sets with different measurement techniques and data processing methods allows cross-comparisons and sanity checks to be performed, not to mention a bit of competition to encourage people to do their best.

    The satellites were put up to solve known problems with the surface data sets. But it is my understanding that the satellite data had calibration problems for a number of years. Ask Roy Spencer about that.

  95. Max Hugoson says
    ——–
    “Now let’s measure the temperature of the water in each cup, then put all the contents into ONE insulated container. If I “average” the ABSOLUTE temperature measured in each cup, will it match the “temperature” of the aggregate?”
    ———
    Let me guess. You have not actually done the experiment!

  96. Max Hugoson says
    ——–
    “Since the ‘Global Warming’ claim, is predicated upon the atmospheric energy balance,
    ——
    No, it’s not.

  97. Ed Scott asks
    ——-
    Why is it, that in discussing the greenhouse “effect,” heat transfer by convection is never mentioned.
    ——-
    Because there is no air in outer space.

  98. REPLY: And I’ll point out that you won’t find a single climate center in the world with an ISO-9000 or ISO-8000 certification – Anthony

    That is an excellent point…

  99. Jimmy Haigh says:
    February 1, 2011 at 12:43 am

    REPLY: And I’ll point out that you won’t find a single climate center in the world with an ISO-9000 or ISO-8000 certification – Anthony

    Actually the UK Met Office DOES have ISO 9001-2000 certification.

    http://www.metoffice.gov.uk/corporate/contracts/

    “The Met Office holds ISO 9001-2000 accreditation and, wherever practical, requires that its suppliers also hold such accreditation.
    The Met Office also holds ISO 14001-2004 and will increasingly look for environmentally aware suppliers.”

  100. “”””” LazyTeenager says:
    January 31, 2011 at 11:11 pm
    Ed Scott asks
    ——-
    Why is it, that in discussing the greenhouse “effect,” heat transfer by convection is never mentioned.
    ——-
    Because there is no air in outer space. “””””

    Very simple, Ed: if you look at Dr Kevin Trenberth’s cartoon diagram purporting to show the energy budget of the earth, you will see all manner of energy-transporting and especially heat-transporting mechanisms, too many of them for me to list here, and “heat transfer by convection” is one of them he mentions.

    Of that long list of mechanisms, ONLY ONE, the 390 W/m^2 surface emitted Long Wave Infra-red Radiation, is affected in ANY way by GREENHOUSE GASES, and in particular by CARBON DIOXIDE.

    NO OTHER thermal process on earth is in any way impacted by the amount of CO2 in the atmosphere. There has never ever been enough CO2 in the earth’s atmosphere to affect atmospheric convection one iota. Thermals, Evapo-transpiration, and Surface radiation, which together amount to 492 W/m^2 as depicted by Trenberth, are not functions of atmospheric CO2 abundance, or any other non-condensing greenhouse gas.

    So the only effect of NCGHGs is partial absorption of the 390 W/m^2 Surface Radiation.

    Arguments about decimal places overlook the impact of adjustments made to the station records that totally alter the century-long trend. For example, USHCN Ver2 adds 0.97 K to the trend at San Antonio TX relative to the Ver1 record for the period 1895-2005, changing its POLARITY! That’s how the AGW gorilla forages in the database.
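
    A minimal sketch (an entirely synthetic series, not the USHCN algorithm or the San Antonio record) of how a step adjustment applied to part of a long record can flip the sign of the fitted century-long trend:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1895, 2006)
# hypothetical raw record: a mild cooling trend plus weather noise
raw = 20.0 - 0.004 * (years - 1895) + rng.normal(0.0, 0.3, years.size)

adjusted = raw.copy()
adjusted[years < 1950] -= 0.6  # hypothetical step adjustment to the early record

raw_trend = np.polyfit(years, raw, 1)[0] * 100       # deg C per century
adj_trend = np.polyfit(years, adjusted, 1)[0] * 100

print(f"raw trend     : {raw_trend:+.2f} deg C / century")
print(f"adjusted trend: {adj_trend:+.2f} deg C / century")
```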

  102. @-George E. Smith says:
    “NO OTHER thermal process on earth is in any way impacted by the amount of CO2 in the atmosphere. There has never ever been enough CO2 in the earth’s atmosphere to affect atmospheric convection one iota. ….
    So the only effect of NCGHGs is partial absorption of the 390 W/m^2 Surface Radiation.”

    You might want to reconsider this assertion. The partial absorption of IR surface radiation is followed rather rapidly by conversion to thermal energy of the bulk atmosphere. This means the whole of the atmosphere is warmed, and that shared energy is then re-radiated by the GHGs. This establishes a temperature gradient from the surface upwards, because most IR is absorbed in the first few metres, which drives the subsequent convection.

    The warming of the bottom of the atmosphere by thermalisation of absorbed surface IR is an important component of the process that provides energy to drive convection.

    If you removed all GHGs, including water vapour, from the atmosphere, the only way the surface could warm the atmosphere would be through direct conduction at the surface, so very little of the atmosphere would be warmed and convection would be MUCH less.
    Put back the water vapour and its absorption of IR from the surface will increase convection. The phase change as it condenses into clouds also moves thermal energy higher, though not as fast as the IR would have travelled, since c is the maximum velocity. However, when the temperature drops below freezing the effect from water vapour is effectively lost, and it is unclear whether water vapour alone could sustain temperatures above freezing rather than leading to a snowball Earth.

    The role of NCGHGs in providing some of the thermal energy that drives convection should be obvious. All these factors interact; the idea that changing one component of an energy flow will not change other aspects is to underestimate the interrelated factors governing the thermodynamics.

Comments are closed.