Weather Station Data: raw or adjusted?

In my post on the Mohonk Weather Station, the question came up about “raw” temperature data. Tom in Texas complained that he’d looked at data from the observer B91 forms and that it didn’t match what was posted in published data sets.

Neither NOAA nor NASA serves weather station data “raw”.

We’ve all seen examples posted here of how GISS adjusts data. But it is not only NASA GISS that engages in this practice; NOAA also adjusts temperature data, by its own admission. For example, here are NOAA-provided graphs showing the trend over time of all the adjustments they apply to the entire USHCN dataset.

Click for a larger image

Click for a larger image

Click for a larger image

As illustrated in the graphs above, in simplest terms NOAA adds a positive bias to the raw data reported by weather station observers with their own “adjustment” methodology.

It is important to note that the graph on the bottom shows a positive adjustment of 0.5°F spanning 1940 to 1999. The agreed-upon “global warming signal” is said to be 1.3°F (0.74°C) over the last century.

The NOAA source for these graphs is: http://cdiac.ornl.gov/epubs/ndp/ushcn/ndp019.html
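To put the two figures side by side, here is a quick back-of-the-envelope sketch (illustrative only; the 0.5°F adjustment and 1.3°F signal are the values quoted above):

```python
# Compare NOAA's cumulative USHCN adjustment (+0.5 F, 1940-1999, per the
# graphs above) with the commonly cited century warming signal of
# 1.3 F (0.74 C).

def f_to_c(delta_f):
    """Convert a temperature *difference* from Fahrenheit to Celsius."""
    return delta_f * 5.0 / 9.0

adjustment_f = 0.5   # net positive adjustment, deg F
signal_f = 1.3       # century warming signal, deg F

print(f"adjustment = {f_to_c(adjustment_f):.2f} C")
print(f"share of the century signal = {adjustment_f / signal_f:.0%}")
```

On these numbers the adjustment alone accounts for roughly 0.28°C, a bit under two-fifths of the quoted century signal.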


141 thoughts on “Weather Station Data: raw or adjusted?”

  1. I would like somebody to point me to an example of any field in the hard sciences where a researcher can get away with adjusting raw data for unmeasurable biases. Even if you can find such an example, I’d be surprised if the adjustment was of the same order of magnitude as the measurement.

    The second graph is incredibly damning. Don’t like the fact that your raw data shows only 0.2 degC/century in warming? Adjust in 3 times the warming. This is not science. A scientist would find ways to determine if data is contaminated and discard it.

  2. I note that “older” GISS and NOAA measurements (for the 30’s and 40’s for example) always seem to get “adjusted” down, while later temperatures (80’s and 90’s and 00’s) always get “adjusted” up.

    Funny how that trend works.

    (Alex, I’ll take “Pre-Adjusted Temperatures” for three trillion, please.)

  3. Now that we know that they adjust the raw data by adding … to adjust for urban heating? … do we also know how they justify these adjustments and how they determine the adjustment value?

  4. I prefer my data raw and the massaging of data documented, not obscured as seems to be so typical of AGW “research”.

  5. I don’t wish to be pedantic, but I assume you mean 0.5°F or 0.28°C from 1940 to 1999. It is still a substantial chunk of the 0.7.

    REPLY: Not at all, fixed, thanks. I wrote this too quickly and I’ve been working mostly in metric; I forget that NOAA publishes in F and NASA in C. This is why we crash satellites into Mars instead of making orbit. – Anthony

  6. So, on top of siting problems we’ve got temps being ‘adjusted’. Add to that the recent spate of loss of rural sites. No wonder there’s all this warming happening — it’s all imaginary. Is there any credibility left?

    What’s the real temperature where you live? Buy your own weather station from Anthony if you really want to know.

  7. Why would the government even publish this? I guess they haven’t done it since 2000, right?
    How can anyone take the temperature record seriously after seeing that graph? What do statisticians, like Steve Mac and Lucia, think of this? How can this be regarded as anything but a huge joke?

  8. Wow … Am I reading that right? Without the adjustments it looks like the 100-year trend would be almost flat! If I had to guess I would have thought that the graph would have been horizontally mirrored due to the UHI: since there is more development, they would have to adjust the temps down to compensate.

    Maybe I’m reading it wrong because I just can’t believe that they would have added that much to the recent temps.

    G

  9. How in the world can they justify a POSITIVE bias!?!? Did any warming actually EVER exist!? Between UHI and adjusting the wrong way, it seems unlikely there was ever any warming at all. A complete fabrication.

  10. The POSITIVE adjustments are the “Smoking Gun” that the AGW zealots harp on about.

    I weep for science.

  11. Based on the NOAA adjustment information

    What sort of bias do you think would cause the need for such a massive adjustment in recent years, but not earlier in the century?

    It would have to be one of the following (from the NOAA link above) :
    1. quality control adjustments (mis-measures? This seems like it would get better over time not worse)
    2. time-of-observation adjustments (maybe this one?)
    3. equipment swap adjustments (maybe this one?)
    4. homogeneity adjustment (comparing with other stations?)
    5. missing data adjustments (I can’t see how it would be this one)
    6. UHI adjustments ( I can’t see how it could be this one)

    Thanks
    G

  12. It has been a while since I read 1984, but didn’t the gov’t adjust data as they saw fit in that book?

    Also, didn’t the companies in the Soviet Union adjust their production reports to match what the government wanted, even when actual production was nowhere close? The people catch on after awhile.

    Well, I am sitting here freezing my a$$ off in gray and windy Southern California, with the furnace running all day. (Ok, it’s not that cold, but it is cold for us, about 10 degrees F colder than normal for this date). I would like to see NOAA adjust today’s Los Angeles temperatures and tell everybody how warm it was today! [/sarc]

    As to adjusting recent temperatures by adding a bit, shouldn’t they be subtracting a bit to account for urban heat island effects? I believe I read that the acreage of urban areas is minuscule compared to the rural area, so that makes much more sense to me. Can anyone explain why the recent adjustments are positive?

  13. Most of the adjustment is for Time of Observation Bias. ToB is real and well documented. However, NOAA relies on a questionable ToB estimation method rather than using the actual ToB, which is recorded by the observer (and from which the ToB bias can be determined statistically with reasonable accuracy), apparently in order to save a paltry few thousand dollars.

    In summary, there is definitely a need for a ToB adjustment and the sign of the adjustment made by NOAA is probably correct. However, we have no way of knowing if the size of the adjustment is correct. The originator of the ToB adjustment method apparently thinks it is about 75% correct. (source – a paper by Warwick Hughes)

  14. Snip me if I am out of line, but is there any coincidence between the date when the positive bias starts to manifest itself and the beginning of the career of a famous person at NASA? Am I paranoid?

    REPLY: Probably, most of this is related to work by Easterling, Karl, and Petersen and NCDC. – Anthony

  15. The REAL scandal isn’t the fudging, it’s the “pass” the fudgers have been given by society’s panjandrums (science’s “gatekeepers,” science journalists, etc.). None of that herd of independent minds bothered or dared to ask for more transparency, although the climate contrarians had been loudly objecting to this secretive cooking-of-the-books for ages. When judgment day arrives, those bigshots who turned a blind eye to this unprofessional behavior should be required to explain why.

    (It’s possible, of course, that these adjustments are justified. But “keeping their cards close to their chest” is unprofessional anyway. Presumably they justify doing so because, if they were to be more open, we evil contrarians would confuse the public with misleading insinuations about their legerdemainery.)

  16. Ok, somebody help me here. Which one of the trend lines is the raw unadjusted data?

    And how does the final adjusted data compare to GISS’s adjusted data? Seems to me that the sudden rise in temp is similar to GISS’s rise in temp.

  17. The hypothesis of AGW is nothing to do with Climate. It isn’t climate science. It’s Climate Politics.

    When you look at their methodology, it drives that point home.

  18. Say you have a temperature record that is flat for 100 years. Then you want to take out a UHI effect that tilts the last 50 years up at the right side by 0.5°F. That is, there is an inflection point mid-way. What should you do? If you bend the line back down at the mid-point, the recent temperatures will not appear to “be correct” currently, and you will have to continue to adjust the new entries month by month. If you raise the first 50 years uniformly by half a degree, that would be an error also. In neither case will the numbers reflect reality over the 100-year span. What to do?
    Go through the data making four or five adjustments in ways that no one can follow, claim accuracy that you don’t have, and call it the best climate data set in the world. Base policies on these data that will massively disrupt the entire country. Claim you have saved the world for your grandchildren and polar bears. What’s to complain about?
    Clearly, these data are useless for teasing out the small and subtle temperature signals involved. I’m thinking of the site-specific station-siting issues also.

  19. The Emperor Has No Clothes!

    Thanks Anthony, for being able to speak the language of math and science and be one honest interpreter/listener as the Blind Men and the Elephant story of Climate Politics unfolds.

    Those of us with a gift for the narrative alone really appreciate those who can do the math, honestly, and keep the narrative as simple as an Aesop’s fable. You are called. Thank You. Your website is a refuge for puzzlers.

  20. Slightly OT, but when you take the temperature anomaly data from NOAA’s site and plot it onto a graph, it doesn’t have the same trend line as observed temperature. Anyone know how this is possible?

    I realize these are two different things, but if temps are dropping, shouldn’t the anomalies follow? There is a huge spike at the end that I found a little confusing.

  21. Buy your own weather station from Anthony if you really want to know.

    And for heaven’s sake make sure it is at least a CRN2!

  22. Bear in mind that these are USHCN1 adjustment methods.

    But NOAA has wised up: on their USHCN2 adjustment page, they have lots of very wise-sounding verbiage. NO GRAPHS and NO BOTTOM LINE. They learned their lesson when they injudiciously ’fessed up on their USHCN1 page. Now they know better.

    USHCN2 seems to be WORSE: A poster calculated some time ago that it is a +0.42C/century adjustment (as opposed to 0.29C/century for USHCN1). But he had to calculate it; it was NOT anywhere on their page where I could find it.

    (That’s when the 0.6C 20th Century global trend “turned out” to be 0.72C.)

  23. However, NOAA relies on a questionable ToB estimation method, rather than use the actual ToB, which is recorded by the observer (and from which the ToB bias can be determined statistically with reasonable accuracy). It appears in order to save a paltry few thousand dollars.

    RATHER THAN USE THE ACTUAL TOB?!!!

    AAAARGH!

    And I’ve been going around defending TOBS adjustments.

    Then just recently I’ve been sort of wondering about whether the NOAA TOBS adjustment method could be the ONLY thing the NOAA got right . . .

    AND THEY’RE NOT USING THE ACTUAL TOB?!!!

    AAAARGH!

    ~snip~ me before I kill again!

  24. Looking at this Redoubt webicorder it looks like an eruption of some sort might have started at around 2045UTC. It’s dark there now, so we won’t know until morning.

    REPLY: Sure looks like it. – Anthony

  25. Adjusting the raw data … ugh. I looked at the ornl page on the subject and all I can say is wow. What a complicated can of worms. Of course the charts are meaningless unless you can dig into the individual cases to see the justification for each adjustment. I wonder if they can see the warming signal in completely unmodified sites; ones that were good years ago and are still good because they are located in rural locations. Call me crazy, but I wonder if these issues will show up in the courts down the road if they haven’t already.

  26. Greetings to All,

    I looked into this issue (temperature adjustments) in a fair amount of detail in late 2007 after seeing a graph that seemed to suggest that daily minimum temperatures were going to surpass daily maximum temperatures within a few years. This was in a document which I obtained from a government source (in a jurisdiction that will remain anonymous).

    My suspicions really peaked when I discovered substantial snowfalls recorded as “0” for my locality. This caused me to dig harder and I turned up a number of physically impossible events on the official record. (Have you ever seen bare ground accumulate several feet of snow overnight without a snowfall?)

    It took some digging and a series of e-mails, but I got my hands on some papers that explain how adjustments are made (something which varies from jurisdiction to jurisdiction). I discovered troubling methodology that would undoubtedly impair some (but not all) lines of research.

    I understand very well that there are a number of legitimate issues with data quality management that result in all sorts of headaches with no easy fix – and I think the reasonable thing to do is provide both the raw & adjusted datasets, along with detailed, truthful annotation (about both) to empower researchers to apply balanced judgement.

    Sincerely,
    Paul Vaughan.

  27. There has to be a bunch of raw data out there. I and Orwell contend we need to archive whatever raw data is left. Simple, uncorrupted (uncorrected) data.

    I fear we may have no uncorrupted data after Dr. Oz, er, Hansen adjusts them.

    Is there an uncorrupted data set?

  28. If the NOAA is adjusting raw data, they have to have raw data in order to adjust it. And it has to be in soft copy. Yet that data does not seem to have been made available.

    So we start to consider transcribing B91 data (assuming it is sufficiently legible).

    Well, don’t be amazed if B91s suddenly cease to be made available to non-subscribing “guests”. Or are greenwalled. You don’t have to ban a thing to make it impractical.

    If I am wrong and SOFT COPY unadjusted data is available, please put up a link.

  29. All this data adjustment might keep the politicos happy, but who will get the blame when TSHTF?

  30. I’ve compared raw and Bureau of Met-adjusted temperature data in 32 cities/towns/outposts across Western Australia from 1876 to February this year, including different comparisons for high population locations (UHIs), and coastal vs inland.

    A rough comparison of adjusted data from 1910 to 2008 suggests a 0.7 degree increase in minima and a 0.5 degree increase in maxima. But the raw data from 1876 to 1899, compared to adjusted data from 1979 to 2008, suggests a 0.4 degree minima increase and a 0.25 degree maxima decrease over the ~130 years.

    If you’re interested in Western Australia temperatures, check http://www.waclimate.net

  31. Can someone help me here. Is there any relationship between the GISS data and the NOAA USHCN data, or are they independent. Also are they used as confirmation of each others data if they are regarded as “independent”.

  32. Would we not expect FEWER adjustments as time goes on? Surely our equipment and knowledge have improved, right? This makes no sense to me. Are our methods and equipment now that much worse than they were in 1900?

  33. This haphazard and grudging sharing (or not sharing) of research materials, data and results is quite common behaviour among public servants in the climate science community:

    “…This is the same Phil Jones who said:

    We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it. There is IPR to consider…”

    http://www.climateaudit.org/?p=3119

    “…It is important to note the isolation of the paleoclimate community; even though they rely heavily on statistical methods they do not seem to be interacting with the statistical community. Additionally, we judge that the sharing of research materials, data and results was haphazardly and grudgingly done. ..”

    http://www.climateaudit.org/?p=2322

  34. I’d be fascinated to follow how error bounds are first calculated and then adjusted through the adjustments.

    Surely the error bounds should enclose all of the data on all of the graphs, since it was adjusted because of “errors”.

  35. Catlin Arctic team on the move..
    “Ice Team cover an impressive 20km in 36 hours…only another 902km to go…”

    Is it a lie? – No

    Does it tell us how statistics are used among AGW people? – Yes

  36. I think NOAA published their (honest) adjustment graph for the US around 2005. I’m not sure we know what adjustments were made 2005-09.

    This is indeed grotesque, especially since all the adjustments are made to the temperature graph data 1940-2000. The thing is, nature itself provided a quick warming up until 1940.

    So the late-20th-century warming of 0.3 K is 0.25 K of adjustment, it seems.

  37. John (23:35:01) :
    “Can someone help me here. Is there any relationship between the GISS data and the NOAA USHCN data, or are they independent. Also are they used as confirmation of each others data if they are regarded as “independent”.”

    AFAIK, both GISS and NOAA outputs are based on the same data, but each organization performs its own calculation procedures/adjustments.

  38. Jurav

    If you use RSS rather than UAH, and use all the data rather than stopping in 2002 (Why?) you get this …

    http://www.woodfortrees.org/plot/hadcrut3vgl/from:1979/plot/rss/from:1979/plot/hadcrut3vgl/from:1979/trend/offset:-0.1/plot/rss/from:1979/trend

    In fact, if you compare the trends in UAH, GISS, RSS and HADCRUT over the lifetime of the satellite measurements you find that UAH is the outlier, trending lower than the rest, indicating that whatever adjustments are being applied to the surface record are legitimate…
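The kind of trend comparison the woodfortrees link performs is an ordinary least-squares fit over each anomaly series. A minimal sketch, using a synthetic series (a made-up 0.015°C/yr trend plus noise, not actual RSS or UAH data):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2009, dtype=float)  # 30 yearly points

# synthetic anomaly series: assumed 0.015 C/yr trend plus gaussian noise
anoms = 0.015 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

# ordinary least-squares linear fit, as the "trend" overlay does
slope, intercept = np.polyfit(years, anoms, 1)
print(f"fitted trend: {slope * 10:.3f} C/decade")
```

Running the same fit over two datasets and comparing the slopes is all the “UAH is the outlier” claim amounts to; the disagreement is in the inputs, not the arithmetic.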

  39. Anthony, do you know if other countries are using the same or similar adjustment methodologies? If they are (and I think it is probable they are, due to some international agreement), then much of the world’s ground weather data is also contaminated. What is NOAA’s response to the queries about data manipulation and unwillingness to provide raw data? Do the public have any legal recourse?

  40. I have seen limited raw data on-line. Long-time-series raw data from siting that seemed less susceptible to UHI seemed to indicate little or no AGW signal. Not a comprehensive or overly scientific analysis by me.

    Can the major repositories of the global raw data be “forced” to make the raw data available through freedom of information (or some other tactic)? Or, has the original raw data already been lost? — that is, the archived data all has “corrections”?

  41. The thing is, nature has itself provided a quick warming up until 1940.

    I wouldn’t put that down to nature. Most of the warming we see in the early 20th century (on graphs anyway) was due to the expansion of US cities and towns. From the end of the Civil War up until the 1940s those American cities experienced their most rapid growth. So that warming is a growth of the urban heat island effect.

  42. We know, with absolute certainty (more or less), that the global average temperature is 14.44 +/- 2(+) degrees C, based on the temperature series as reported and the information on the quality of the measuring sites at surfacestations.org.

    Isn’t that “close enough for government work”? :-)

  43. Apologies if the tags do not work correctly.

    In case you have not seen the paper “Economists and Climate Science: A Critique” by David Henderson, “former Treasury official; and much later, as Head of what was then the Economics and Statistics Department in the OECD Secretariat” it is available at

    In this he mentions:

  44. This is what is missing after “In this he mentions”

    Other aspects of the work for WGI have also been subject to expert challenge. One
    such aspect concerns the instrument-based series for global average surface
    temperature which the Working Group and other official sources have relied on: the estimated temperature anomalies appear as subject to doubt because of imperfections in coverage and reliability, questionable statistical procedures, and non-climatic influences for which full allowance may not have been made.

  45. joshv (19:01:26) :

    I would like somebody to point me to an example of any field in the hard sciences where a researcher can get away with adjusting raw data for unmeasurable biases. Even if you can find such an example, I’d be surprised if the adjustment was of the same order of magnitude as the measurement.

    The second graph is incredibly damning. Don’t like the fact that your raw data shows only 0.2 degC/century in warming? Adjust in 3 times the warming. This is not science. A scientist would find ways to determine if data is contaminated and discard it.

    In fact, on US defense contracts, people have gone to jail for this sort of thing. Falsifying test data to “demonstrate” something is considered fraud under Earned Value Management. The most famous example was the A10 development program. I believe the fine was ~$1.5B (yes, Billion), jail time for executives at the contractor, etc.

    Pity the government doesn’t have to follow its own rules.

  46. As the observed temperature goes up wouldn’t the absolute value of the adjustment amount increase? Isn’t that what these graphs are saying (if it is TOB)? I admit that it seems strange that summer and winter changes in TOB (given human nature) don’t come closer to canceling out.

    Why not come up with a standardized climate reporting package with remote automatic sensing?

  47. John Philip (01:25:12) :

    You commented that UAH data trended lower than the GISS, RSS, and HADCRUT data sets. I believe the UAH data is a lower-troposphere, atmospheric measurement, whereas the others are surface measurements. If one assumes that all adjustments are appropriate, this would seem consistent with more TSI radiation from the Sun passing through the atmosphere and reaching the surface of the Earth. This does not sound like a greenhouse effect, where the atmosphere warms first and then the Earth.

  48. That second graph is extraordinary. What could generate a consistent positive trend lasting for decades?
    .
    Of course, we know only too well about the poor quality of many stations. Some changes, such as moving to areas where there’s more concrete or more air conditioners, could produce a positive trend. But I would expect many changes could be either positive or negative and so would tend to cancel out on average.
    .
    Time of Observation Bias has been mentioned. Could this lead to a consistent positive trend over decades? If TOB changes tend to be random (e.g. a new observer prefers different, more convenient times) then again the effect should randomly cancel out. On the other hand, if observation times have been consistently changing over decades to a time that tends to give higher temperatures, then maybe.
    .
    Of course, the big elephant in the room is UHI. If this were properly adjusted for then it should produce a large negative offset, precisely the opposite of this graph. Michaels & McKitrick showed that, if UHI were properly accounted for, the total warming in recent decades would be about half of the generally accepted figure. If they’re right then the UHI adjustment should be about -0.3°C (-0.6°F).
    .
    To be honest, I simply don’t believe in the adjustments shown in this graph. I smell a huge rat. Clearly, the people who have done this work desperately want to prove AGW, particularly as opinion polls show steadily increasing scepticism. We’re asked to believe that these adjustments were done purely in the interests of science and the truth, and were not fabricated in order to frighten the politicians into throwing billions of dollars at climate scientists. Well, I’m not convinced. I don’t believe this is a direct conspiracy, but rather the result of vested interests and group think. Group think, by a process of rationalisation, can easily turn the worst motives, such as greed, into a noble cause, such as the wish to save the world. But, however noble it may sound, it’s still a scam. This graph could almost be described as the poster-child of this scam.
    I try not to be too angry, but sometimes it’s difficult….
    Chris

  49. OT: For whoever is interested in the Redoubt eruption:

    http://volcanism.wordpress.com/

    As the solar minimum continues and the planet is in a natural cooling cycle, volcanic eruptions dumping emissions at stratospheric levels could accelerate the cooling process.

    The plume has now reached a level of 15,500 meters.

    Currently we have 4 ongoing volcanic eruptions reaching stratospheric levels from time to time.
    None of the current eruptions however will pose a climate risk at this moment in time.

  50. In your visit to the NCDC last spring Anthony, they gave you a presentation on the newest set of adjustments to US temperatures.

    The adjustments are now stated in Celsius (rather than F) and they split the adjustments into the two different steps and then how each affects Max and Min temperatures separately.

    By my math, the TOBS adjustment added 0.2C to the raw data trend.

    And the Homogenization adjustment added about 0.225C to the raw data trend.

    Together, the two adjustments added 0.425C or 0.765F to the raw data trend (versus the 0.55F added up to 1999)

    http://wattsupwiththat.com/2008/05/13/ushcn-version-2-prelims-expectations-and-tests/

    Here is the PowerPoint although it doesn’t always load properly (you might have to retry it a few times).

    http://wattsupwiththat.files.wordpress.com/2008/05/watts-visit.ppt

    While the TOBs adjustment sounds reasonable enough, with the degree of urbanization which has happened in the US, the Homogenization adjustment should be negative to account for the UHI.
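The arithmetic in the comment above can be checked in a couple of lines (the 0.2°C and 0.225°C values are those stated by the commenter; the 9/5 factor applies to temperature differences):

```python
# adjustment components as stated in the comment, in degrees C
tobs_c = 0.2     # Time of Observation adjustment
homog_c = 0.225  # Homogenization adjustment

def c_to_f(delta_c):
    """Convert a temperature *difference* from Celsius to Fahrenheit."""
    return delta_c * 9.0 / 5.0

total_c = tobs_c + homog_c
total_f = c_to_f(total_c)
print(f"total adjustment: {total_c:.3f} C = {total_f:.3f} F")
```

This reproduces the commenter’s 0.425°C, or 0.765°F, added to the raw-data trend.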

  51. Allen63 (02:52:41) :
    . . . Can the major repositories of the global raw data be “forced” to make the raw data available through freedom of information (or some other tactic)? Or, has the original raw data already been lost? — that is, the archived data all has “corrections”?

    If there is a justifiable suspicion that the raw data is (a) being withheld and (b) manipulated to produce a politically-desired result, then it certainly would behoove people interested in the truth to file FOIA requests, and lawsuits if necessary, to obtain and make public the relevant data, on a continuing basis.

    Mr. Watts here, and many others, as legitimate researchers in the field, certainly would have standing for legal action.

    Two sites where you might get help with legal and FOIA issues:

    http://www.eff.org/issues/bloggers/legal/journalists/foia

    http://www.judicialwatch.org/open-records

    Also, if it could be shown that the data were being massaged, this would be news that even the AGW sycophants in the mass media could not ignore.

    Perhaps Roger Sowell, who is an attorney, could comment—and maybe help organize some formal action?

    /Mr Lynn

  52. The NOAA adjustment between 1955 – 1995 is approximately linear.

    So for this adjustment to be an artifact of time-of-observation change, the time-of-observation would need to have changed monotonically.

    This means a movement of the time-of-observation in each year of the 40-year period.

    For example: 1955, measure at all stations at noon; 1956, measure them at 11:59; 1957 at 11:58. And so on up to 1995 when they measure at 11:20.

    Is there any evidence of a monotonic movement in time-of-observation over this period?
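The hypothetical schedule described above, a drift of one minute per year from noon in 1955 down to 11:20 in 1995, can be tabulated directly (purely illustrative; this is not actual station metadata):

```python
from datetime import datetime, timedelta

# hypothetical monotonic drift: observation time moves one minute
# earlier for each elapsed year, starting from noon in 1955
start = datetime(1955, 1, 1, 12, 0)
for year in range(1955, 1996, 10):
    minutes_shift = year - 1955  # one minute per elapsed year
    obs_time = (start - timedelta(minutes=minutes_shift)).time()
    print(year, obs_time.strftime("%H:%M"))
```

Any linear adjustment attributed solely to time-of-observation change implies a steady schedule drift of this kind, which is the commenter’s point: such a drift should be visible in the station records if it happened.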

  53. Neo (05:28:59) : Exactly why do they need to “adjust” the raw data ?

    Depends upon what “raw” means. There are many reasons for adjustment.

    Sensor data is rarely clean, and thus needs filtering. Filtering is an adjustment. Then there’s the problem of calibration. If the temperature values take UHI effects into account, then that too is an adjustment.

    The real issue is modifying the data supporting a hypothesis (model) using the hypothesis itself as part of the adjustment criteria. That’s self-defeating.

  54. Can someone help me here. Is there any relationship between the GISS data and the NOAA USHCN data, or are they independent. Also are they used as confirmation of each others data if they are regarded as “independent”.

    It’s worse than you think. Like something out of some SF-channel genetic horror movie.

    GISS takes adjusted USHCN data. They apply an “unadjustment” algorithm. (No, really.) The resulting spawn is somewhat analogous to Lord Voldemort shortly before he returns to power.

    Then they readjust it using their own procedures.

    Metadata from the Black Lagoon.

    Is there an uncorrupted data set?

    “Mama, we all go to hell.”

  55. While the TOBs adjustment sounds reasonable enough

    Except when it turns out they are not using the actual TOBS as listed on the B-91 forms?

  56. In computing, we work with two types of data: “raw” and “cooked”. Given that we’re looking at temperatures adjusted to warmer values, I think the term “cooked” applies quite well. It is a better antonym than “adjusted”, which should be paired with “unadjusted” or “original”. I think the cooked term has the proper connotation as well.

  57. Exactly why do they need to “adjust” the raw data ?

    There are legitimate reasons.

    But those reasons provide the opportunity for much mischief.

  58. “”” Ed Reid (04:13:53) :

    We know, with absolute certainty (more or less), that the global average temperature is 14.44 +/- 2(+) degrees C, based on the temperature series as reported and the information on the quality of the measuring sites at surfacestations.org.

    Isn’t that “close enough for government work”? :-) “””

    Actually we don’t know any such thing. Being pedantic; most of the earth is at a temperature of at least 400 K, and some may be as hot as 10,000 K; well make that 5773 K-17320 K putting in the obligatory 3:1 climatology fudge factor.

    Well so maybe that 14.44 is really just a surface temperature, and not average for the whole earth. Remember that the earth’s surface goes from – a few hundredas of metres at the dead sea to well over 8000 metres in the high mountains; I guess you can hardly call that lower troposphere either.

    But even if you say it is for the earth’s surface, you still have a problem; in that 73% of the earth’s surface is oceans; and we don’t have reliable data for any temperature measurments over the oceans before around 1980, when buoys were set up to measure water and air temperatures simultaneously.

    What the found over 20 years was that they aren’t correlated; which means that ocean water temperatures are not a proxy and never have been for oceanic air temperatures.

    So the proxy data is only believable since about 1980; and before that it is simply wild guesswork for 73% of the earth’s surface.

    Even since then, we don’t have a suitable earth bound temperature sampling network to accurately measure the average earth surface temperature; even for a single instant; let a lone averaged (properly) over say a full year orbit of the sun.

    And GISS etc do not report earth temperatures; they report temperature anomalies; which relate to some fictional anomalie average over some 30 year or so interval; which also is unknown as an actual temperature.

    Besides; anyone who says it is 14.44 +/-2, is just blowing smoke.

    And as for adjusting data; if it’s “adjusted” it isn’t data; it’s “modelling”.

    Why should temperature anomaly modelling to correctly “adjust” the readings be any more believable than the 3:1 fudge factor that goes into temperature trend modelling or ocean level rise modelling?

    If the instruments don’t read right, get rid of them and replace them with instruments which read correctly; and if there are external influences that affect the accuracy of the measurements, get rid of them too.

    A good way to get a fatal processing plant explosion is to infer a relationship between that which you wish to control (or observe) and something else that you can control (or observe), which is “modelling”, and then control the inferred variable instead of directly measuring and then controlling the variable whose value you wish to contain.

    But perhaps this is what Congress had in mind when they earmarked $140 million of our tax dollars for “Climate Data Modelling”; not climate modelling but climate data modelling, which is fudging data in my book.

    Yes, when the Japanese science advisers to their government described the climate modelling and recommendations of climatology’s UN IPCC as “ancient astrology”, they were surely being unfair to ancient astrology.

  59. I don’t mind adjustments but when the adjustments are as large as the signal they need to be questioned.

    IMHO, these corrections look so unreasonable it’s hard to express. They are basically saying that all our instruments’ raw measurements are dropping at the same rate as global warming. It makes no sense whatsoever.

  60. I always read that the 4 global temperature metrics tracked fairly well. Didn’t I use to see graphs of this on WUWT? What’s getting lost in this discussion (except for John Philip at 01:25:12) is that regardless of the seemingly biased USHCN adjustments, the end result is a dataset that tracks fairly well with the other metrics. (Correct me if I’m wrong.) Perhaps the bias is nothing more than an attempt to make the dataset look reasonable compared to the others?

  61. I would like somebody to point me to an example of any field in the hard sciences where a researcher can get away with adjusting raw data for unmeasurable biases.

    Sociology. No, wait . . .

  62. What’s getting lost in this discussion (except for John Philip at 01:25:12) is that regardless of the seemingly biased USHCN adjustments, the end result is a dataset that tracks fairly well with the other metrics.

    HadCRUT adjusts (wish we knew how).

    UAH and RSS measure lower troposphere, not surface temperatures. They are supposed to increase 20% to 40% faster in a warming trend.

    Unless I am wrong (and I may be; please correct me if I am missing something here) . . .

  63. Yes, when the Japanese science advisers to their government described the climate modelling and recommendations of climatology’s UN IPCC as “ancient astrology”, they were surely being unfair to ancient astrology.

    True. At least astrologists know what signs are coming up and when. The IPCC has a zero percent track record of making correct predictions.

  64. A Climate Audit look at adjustments, 20th Century, US:
    (Better sit down before looking.)

    Raw NOAA

    Adjusted (USHCN1)

    USHCN2 is worse. (The graphs you see in the article are from USHCN1.)

    Come to think of it, this implies that the raw data must be available somewhere (or at least it was a year and a half ago).

  65. Roger Sowell’s comment above reminds me of an interesting point. Supposedly, in the USSR the weather station keepers would report lower temperatures than actual in order to get more provisions. Now what do you think happened when the Soviet Union collapsed? You guessed it: they reported the real temperatures again, or so the story goes. Warm bias, eh wot?

  66. The use of the ‘adjusted’ data assumes that the raw data actually means anything. How a single weather station data point can be extrapolated to represent the infinite geographic variations, anthropogenic influences, and biological variations in perhaps tens, hundreds or thousands of square miles of territory that surrounds it is questionable to begin with.

    It’s not surprising that the local weather forecast can be off by several orders of magnitude; the forecasters may be in a whole different environment than you are.

    So some think we should use the satellite temp data. Unfortunately, there weren’t any before 1979, and, from http://en.wikipedia.org/wiki/Satellite_temperature_measurements :

    The CCSP SAP 1.1 Executive Summary states:

    “Previously reported discrepancies between the amount of warming near the surface and higher in the atmosphere have been used to challenge the reliability of climate models and the reality of human-induced global warming. Specifically, surface data showed substantial global-average warming, while early versions of satellite and radiosonde data showed little or no warming above the surface. This significant discrepancy no longer exists because errors in the satellite and radiosonde data have been identified and corrected. New data sets have also been developed that do not show such discrepancies.”

    In other words, the satellite data has also been ‘adjusted’ to match the surface temperature measurements.

    Draw your own conclusion.

  67. How much of an effect does TOBS have? If Tmax usually occurs in the early afternoon, and Tmin a bit after midnight, and you don’t look at the thermometers anywhere close to those points, what difference does it make? After all, the thermometers remember the max and min until you reset them.

    And why in H*** do we need an UPWARD adjustment of the raw data?

    (mercury leaking out of thermometers, perhaps?)

  68. RJ Hendrickson (10:21:58) : “In other words, the satellite data has also been ‘adjusted’ to match the surface temperature measurements.”

    There must be other sources/satellites apart from those conveniently “adjusted”. I mean Russian, Japanese or whatever, not related or implicated in the global warming or climate change agenda. Where can we find them?

  69. FYI.

    “”ROCKVILLE, MD–(MARKET WIRE)–Mar 23, 2009 —

    MarketResearch.com has announced the addition of Unit Economics’ new report “The New Global Ice Age,” to their collection of Energy/Environment market reports. For more information, visit http://www.marketresearch.com/redirect.asp?progid=67618&productid=2069052.

    Abstract of Unit Economics’ Report: “New Global Ice Age”

    “At first glance, a research piece predicting significantly colder weather seems rather bold. In reality, we’re very confident about this report. That’s because we are not so much predicting colder weather, but are instead observing it. More important, we’re attempting to coax our readers to view recent weather data and trends with a neutral perspective — unbiased by the constant barrage of misinformation about global warming. We assure you, based on the accuracy of climatologists’ long-term (and short-term!) forecasts, you would not even hire them!

    “For example, in 1923 a Chicago Tribune headline proclaimed: ‘Scientist says arctic ice will wipe out Canada.’ By 1952, the New York Times declared ‘Melting glaciers are the trump card of global warming.’ In 1974, Time Magazine ran a feature article predicting ‘Another Ice Age,’ echoed in a Newsweek article the following year. Clearly, the recent history of climate prediction inspires little confidence — despite its shrillness. Why, then, accept the global warming thesis at face value? Merely because it is so pervasive?

    “Unfettered by the Gore-Tex straitjacket of global warming dogma, one might ask some obvious questions. Why, in 2008, did Toronto, the Midwest United States, India, China, the United Kingdom and several areas of Europe all break summer rainfall records? Why was South Africa converted into a ‘winter wonderland’ this past September? Why did Alaska record its coldest summer this year — cold enough for ice packs and glaciers to grow for the first time in measured history? Why has sea ice achieved..”

    http://www.freerepublic.com/focus/f-news/2212820/posts

  70. timetochooseagain

    Ok, I see my error. Satellite temps were matched to radiosonde measurements. Which brings up the question of the reliability of the radiosonde measurements. Reading the material at http://www.aero.jussieu.fr/~sparc/News12/Radiosondes.html headed “Sources of error in radiosonde temperature data” , it seems there are problems with the radiosonde data as well. Again, it needs ‘adjustment’. Adjustments on top of adjustments. It all looks like wild guessing, from my point of view.

  71. My best guess at the only valid reason for the upward adjustment is that it would account for wind chill. That does not, however, excuse the fact that the upward adjustments, plotted out like that, suspiciously match the rise in temp. What does the raw data indicate?

  72. …The real issue is modifying the data supporting an hypothesis (model) using the hypothesis itself as part of the adjustment criteria. That’s self defeating.

    No, it’s truth defeating.

  73. Smokey (09:45:06) :

    Mike Abbott,

    Is this the graph you’re looking for? click Or maybe this one: click

    =======

    Thanks! Those graphs (comparing the 4 temperature metrics) are very close to what I was looking for, especially the second one. However, it only goes back to 1997. I don’t think it proves or disproves the point I was making at 09:36:00, i.e., that the USHCN adjustments may be reasonable because the final dataset tracks well with the other metrics. I think a comparison to the full satellite record going back to 1978 is needed. In any case, I’ll leave further comments to others; I’m already getting in over my head…

  74. maz2 (12:01:09) :
    FYI.
    ”ROCKVILLE, MD–(MARKET WIRE)–Mar 23, 2009 –

    MarketResearch.com has announced the addition of Unit Economics’ new report “The New Global Ice Age,” to their collection of Energy/Environment market reports. For more information, visit http://www.marketresearch.com/redirect.asp?progid=67618&productid=2069052.

    Abstract of Unit Economics’ Report: “New Global Ice Age”

    Very interesting. This outfit is offering this report for $1,295, so they must think it sufficiently credible and important enough for businesses to buy it. Here’s the Table of Contents:

    Overview
    History of Temperatures
    Chart 3.1 – 800,000 years of temperature variations
    Recent Climate Trends
    The Maunder Minimum
    Chart 9.1 – Cosmic Rays and Cloud Cover
    Chart 10.1 – Sunspots and Sea Surface Temperatures
    Chart 10.2 – Sunspot Cycles
    Chart 11.1 – Mark Clivered Solar Cycles
    Chart 11.2 – Sierra Environmental Foundation Solar Cycles
    Chart 12.1 – University of Alabama Troposphere Temperatures
    Temperature Model
    Chart 13.1 – Unit Economics Temperature Model
    Chart 14.1 – NOAA Jan-Oct Temperature Deviations
    Chart 15.1 – Unit Economics Temperature Model Projections
    Investment Implications
    Global Warming
    Chart 17.1 – U.S. National Surface Temperatures
    Chart 18.1 – CO2 and Temperature Records
    Appendix A
    Weather Records and Extremes from the Past Six Months
    Appendix B
    Forty Two Market Themes Resulting from Global Cooling
    Bibliography
    Disclaimers

    Any idea who wrote it? Now if someone would leak it to Drudge. . .

    /Mr Lynn

  75. Isn’t the truly amazing thing that, with all the fudges, er, adjustments, it still comes out to within 0.1°C of what is needed to prolong the hoax? Simply amazing.

  76. “He who controls the past controls the future. He who controls the present controls the past.”

  77. Mike Abbott (12:50:19) :

    Here’s a graph that compares all the temp sets (shifted to a common baseline) back to 1979:

    You can make graphs like this yourself over at woodfortrees.org.
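    For anyone who wants to reproduce that kind of overlay without woodfortrees, the “common baseline” shift is just subtracting each series’ own mean over a shared reference period. A minimal sketch (the numbers and series names here are hypothetical, not real GISS/UAH data):

```python
def to_common_baseline(series, base_start, base_end):
    """Convert a series {year: temp} into anomalies relative to the
    series' own mean over [base_start, base_end], so datasets with
    different absolute offsets can be overlaid on one chart."""
    base = [t for y, t in series.items() if base_start <= y <= base_end]
    baseline = sum(base) / len(base)
    # Round away floating-point dust for clean comparison/printing.
    return {y: round(t - baseline, 6) for y, t in series.items()}

# Hypothetical series with different absolute levels but the same shape:
giss = {1979: 14.1, 1980: 14.2, 1981: 14.3}   # absolute temps, deg C
uah  = {1979: -0.1, 1980: 0.0, 1981: 0.1}     # anomalies on another baseline

# After shifting both to a common 1979-1981 baseline, the constant
# offsets vanish and only the shapes remain, which is what the overlay
# graphs actually compare.
print(to_common_baseline(giss, 1979, 1981))
print(to_common_baseline(uah, 1979, 1981))
```

    Note this is exactly why such overlays say nothing about absolute temperature differences between datasets, only about trend shapes.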

  78. Chris V. (15:57:52) :

    Here’s a graph that compares all the temp sets (shifted to a common baseline) back to 1979:

    ======

    Thanks, that’s the one. It shows GISTEMP generally in lockstep with the others, thereby confirming that whatever adjustments they make seem to generate reasonable results.

  79. evanmjones (09:42:11) :

    Ouch! That was cruel. I teach Sociology….. but you’ll note I don’t dispute the charge…. I’m just acknowledging the pain.

  80. @Mr Lynn (05:07:07) :

    “If there is a justifiable suspicion that the raw data is (a) being withheld and (b) manipulated to produce a politically-desired result, then it certainly would behoove people interested in the truth to file FOIA requests, and lawsuits if necessary, to obtain and make public the relevant data, on a continuing basis.

    Mr. Watts here, and many others, as legitimate researchers in the field, certainly would have standing for legal action.

    Two sites where you might get help with legal and FOIA issues: (see above)

    Also, if it could be shown that the data were being massaged, this would be news that even the AGW sycophants in the mass media could not ignore.

    Perhaps Roger Sowell, who is an attorney, could comment—and maybe help organize some formal action?”

    DISCLAIMER: Nothing written below is intended to be, nor is it to be construed as, legal advice. Anyone seeking legal advice on a specific matter should consult an attorney. Nor does anything written below constitute formation of an attorney-client relationship.

    Mr. Lynn, all I may do as an attorney, writing on a public forum such as WUWT, is offer very general comments, as ethical rules constrain what attorneys can write in their professional capacity. That is one reason I have not responded earlier on the many comments I have seen on this subject (FOIA) and lawsuits regarding fraud. But, since you have posed the question directly to me, I am allowed to answer as follows:

    The Freedom of Information Act is a means that is available to private parties to obtain some types of data or information from the government, barring certain statutory exceptions. An FOIA request can be made with or without an attorney, but for complex requests retaining an attorney is advisable. If the government denies a valid FOIA request, then a lawsuit can be filed to compel the government to produce the data, and having an attorney is certainly advisable at that point.

    One positive change from the Obama administration is the President’s directive to interpret FOIA requests liberally or more broadly, meaning fewer should be denied.

    Suing in fraud or misrepresentation has many complex aspects, and the likelihood of success depends on what deliberate or negligent actions occurred, and with what level of intent, the potential evidence that may be obtained, and who was harmed and to what extent.

    Fraud is generally difficult to prove. Where fraud fails, one may usually sue for intentional misrepresentation, and sometimes for negligent misrepresentation. There may exist many other legal issues, which the attorney will identify, depending on the specific facts of what did or did not occur.

    For any further information and my qualifications, anyone can reach me at my website by clicking on my name, and sending an email.

    Roger E. Sowell, Esq.
    Climate Change Attorney
    Marina del Rey, California

  81. Can anyone point me to a reference for long term temperature records based on rural areas only where there is no UHI effect and presumably no need for adjustment to the data?

  82. Dear Malcolm,

    Rural stations are generally right next to the BBQ so we can talk about the weather, have a beer, and keep an eye on the meat without having to rise out of our white plastic chairs. At least that is the general arrangement of my little piece of heaven. And then there is the silver bottom boat nearby. Gotta keep the important stuff handy.

  83. The adjustments are continuous and increasing over about 30 years. It’s hard to see how issues such as TOBS can explain that. The same goes for wind chill, as one commenter suggested. A continuous increase for 30 years?

  84. Time of Observation Bias has been mentioned. Could this lead to a consistent positive trend over decades?

    It can, and this appears to have been the case. I looked into ToB in detail and, if my memory serves me correctly, the NOAA ToB adjustment would require approx. 30% of observers to make a one-time shift from evening to early-morning observation, spaced regularly over 30 years. And then of course continue with morning observation.

    The paper linked below describes the ToB estimation method I believe (in fact I’m quite sure) is used. Although with the caveat that in the opaque world of climate science methods, it’s hard to be certain of anything.

    As I mentioned above, the main issue is why use an estimating method at all when the raw ToB data is available (although of unknown accuracy) and a more accurate determination could be made.

    Bottom line is the ToB adjustment is in the right ballpark, but contains an unknown error that is likely around 5% to perhaps 10% of the claimed 20th C warming. And of course that error then gets magnified by the climate models out into the future.

    http://adsabs.harvard.edu/abs/1986JApMe..25..145K

    Note ToB is only an issue with a single daily observation of min/max temps. It does not occur with hourly or continuous measurement.

  85. Can anyone point me to a reference for long term temperature records based on rural areas only where there is no UHI effect

    Rural areas are subject to effects from land use changes and irrigation. The only long term records I would trust to be without significant anthropogenic influences are from remote islands, undisturbed natural environments and special sites such as the Armagh Observatory.

  86. Roger Sowell (17:05:56) :
    Mr. Lynn, all I may do as an attorney, writing on a public forum such as WUWT, is offer very general comments, as ethical rules constrain what attorneys can write in their professional capacity. That is one reason I have not responded earlier on the many comments I have seen on this subject (FOIA) and lawsuits regarding fraud. But, since you have posed the question directly to me, I am allowed to answer as follows. . .

    Thanks for the response. I was not suggesting that you compromise ethical rules by offering legal advice on a public forum, but rather that if any climatologists on this or similar forums wanted to pursue a FOIA request for the raw data that may be withheld, interested attorneys might be willing to lend their services. The climatologists would be your clients, not the readership of WUWT.

    You can be sure that if NOAA or other agencies denied such a request, forcing petitioners to sue, the agency(ies) would have plenty of government legal help. That would make it imperative to have lawyers involved, to have any hope of succeeding.

    . . . Suing in fraud or misrepresentation has many complex aspects, and the likelihood of success depends on what deliberate or negligent actions occurred, and with what level of intent, the potential evidence that may be obtained, and who was harmed and to what extent. . .

    Unless the raw data could first be obtained, there would be no point in claiming fraud. If the evidence pointed in that direction, that would be quite a development, one the news media could scarcely ignore. But first we’d need the data, and even then, as you say, any claim of malfeasance would be a difficult row to hoe. The FOIA comes first.

    Of course, if it were denied, that would be prima facie evidence that the agency(ies) had something to hide. It wouldn’t necessarily justify a lawsuit, but it might titillate the media. . . ;-)

    /Mr Lynn

  87. Considering the four major estimates of GMT, it would not be surprising for GISS to move in roughly the same direction and by similar amounts as RSS and UAH. Since GISS now uses satellite data for ocean “surface” temperatures, and since oceans comprise 70% of earth’s surface, we would expect the ocean temperatures to largely swamp deviations on land. Meanwhile, RSS and UAH seem to have a collegial relationship, helping each other with analysis and oversights whenever divergence occurs. (One should not underestimate the difficulties and assumptions in measuring temperatures via satellite.) HadCru’s trend apparently is not similarly forced to be close to the other three, so its similarity seems to be noteworthy.
    I will not address the issue of anticipated tropospheric trends measured via satellite versus anticipated surface trends measured via stations and ships. However, I will note that my major concern with GISS is its adjustments of pre-satellite data, and one aspect of that concern is how its choice of hinge points has a significant (and convenient) impact on historical records.

  88. Adam Soereg (15:25:25) :

    “He who controls the past controls the future. He who controls the present controls the past.”

    ====

    Expanding on your quote – while being reminded all the time only “scientifically-chosen peer-revenued” papers can be accepted –

    “He who controls the past controls the future. He who controls the presents (cash) and presence (papers) controls the presence of the past.”

  89. My congressman has asked for my recommendation on how he should proceed with this information: What do I tell him to ask for from NOAA (and GISS ??) and to whom should he address his (formal) request for an explanation and (informal WTF) followup?

    If NOAA/GISS were required to provide ALL of the original information to the public for review, what format should it be in and where should it be listed?

  90. What does the raw data indicate?

    Look at the map links I posted earlier in this thread for US raw vs. adjusted.

    It will make you shriek incoherently, I promise.

  91. Here’s a graph that compares all the temp sets (shifted to a common baseline) back to 1979:

    The fact they match means they don’t match. UAH and RSS are Lower trop and therefore supposed to be warming c. 30% faster than surface stations. (And the surface comes in higher, anyway, esp very recently.)

    It shows GISTEMP generally in lockstep with the others, thereby confirming that whatever adjustments they make seem to generate reasonable results.

    Squeezing them narrower like in that graph masks the differences. Look at the earlier 4-way comparison.

    Also, see above. If GISS and HadCRUT “match” the satellite data, it means it doesn’t match. And it’s actually higher, anyway (instead of 30% lower trend, as it should be).

  92. I teach Sociology….. but you’ll note I don’t dispute the charge

    My field is history, so I’m even worse off than you are.

  93. Re FOIA and related:

    This is a swamp of issues that includes foreign sources of climate data claiming their data is proprietary. Someone without deep knowledge of the subject is unlikely to get very far.

    Climate Audit has been down this road and the general conclusion seems to be an impartial audit of both data and processing methods is the best way to expose the truth.

    I’d add that no one would spend the sums being spent on ‘climate action’ without thorough auditing of how the money is spent, and a similar level of audit is justified for the data being used to justify spending the money.

  94. Can anyone point me to a reference for long term temperature records based on rural areas only where there is no UHI effect and presumably no need for adjustment to the data?

    Microsite violations have eaten up the rural stations. It’s even possible they may have been more affected than the urban stations, whose site violations may have been partially masked by UHI.

  95. evanmjones (21:53:07)

    “The fact they match means they don’t match. UAH and RSS are Lower trop and therefore supposed to be warming c. 30% faster than surface stations. (And the surface comes in higher, anyway, esp very recently.)”.

    ——————————————————–

    as the ocean data should not differ, this means that the trend “measured” on land is roughly a factor of 2 too high.

    This agrees well with Ross McKitrick and Patrick J. Michaels (2007), “Quantifying the influence of anthropogenic surface processes and inhomogeneities on gridded global climate data.”

    http://www.uoguelph.ca/%7Ermckitri/research/jgr07/jgr07.html

  96. Mr Lynn,

    Let me save you some lawyer’s fees… the raw data is ‘hidden in plain view’ on the NOAA page linked to in the article above. It’s the Areal Edited file. This is the data after any obvious outliers have been trimmed out …

    No need for a FOI request then …

  97. “Can anyone point me to a reference for long term temperature records based on rural areas only where there is no UHI effect and presumably no need for adjustment to the data?”
    ————–
    Here is comparison between the Bratislava airport measurement on the city outskirts (550,000 inhabitants) vs Lomnicky Peak in the High Tatras (star observatory on the peak, definitely rural): http://www.letka13.sk/~jurinko/Slovakia_UHI.pdf. Distance between Bratislava and Lomnicky Peak is 244km.

  98. John Philip (23:07:54) :

    Let me save you some lawyer’s fees… the raw data is ‘hidden in plain view’ on the NOAA page linked to in the article above. It’s the Areal Edited file. This is the data after any obvious outliers have been trimmed out …

    No need for a FOI request then …

    Is that Areal Edited file adequate to satisfy demands in this thread for ‘the raw data’? E.g.,

    EJ (23:17:38) :
    There has to be a bunch of raw data out there. . . Is there a uncorrupted data set?

    evanmjones (23:26:47) :
    If the NOAA is adjusting raw data, they have to have raw data in order to adjust it. And it has to be in soft copy. Yet that data does not seem to have been made available. . . If I am wrong and SOFT COPY unadjusted data is available, please put up a link.

    Allen63 (02:52:41) :
    I have seen limited raw data on-line. . . Can the major repositories of the global raw data be “forced” to make the raw data available through freedom of information (or some other tactic)? Or, has the original raw data already been lost? — that is, the archived data all has “corrections”?

    dwf (07:20:41) :
    Anthony or others,
    As a recent reader of this forum, I do not know if this has been asked before but is it possible to request the raw temperature data using the Freedom of Information Act? I do not think that raw temperature data would qualify as a FOIA exemption. . .

    Here is how the paper cited in Anthony’s post defines the Areal Edited data set:

    Areal Edited (Raw)
    A quality control procedure is performed that uses trimmed means and standard deviations in comparison with surrounding stations to identify suspects (> 3.5 standard deviations away from the mean) and outliers (> 5.0 standard deviations). Until recently these suspects and outliers were hand-verified with the original records. However, with the development of more sophisticated QC procedures at NCDC, this has been found to be unnecessary.

    If the ‘Areal Edited (Raw)’ data set, minus ‘suspects’ and ‘outliers’ as defined above satisfies the needs of the experts here and elsewhere for ‘raw’ data, then John Philip is right, and the data can be obtained here:

    3. Obtaining USHCN Data Files
    The USHCN data files are available from CDIAC’s FTP site, and have been compressed using the UNIX compression utility compress. If this utility is not available, leave off the .Z extension and the files will uncompress on the fly through ftp. For non-internet data acquisitions (e.g., 8mm tape, CD-ROM, etc.), users should contact CDIAC directly.

    Address:
    Carbon Dioxide Information Analysis Center
    Oak Ridge National Laboratory
    P.O. Box 2008
    Oak Ridge, Tennessee 37831-6335, U.S.A.

    Telephone:
    (865) 574-3645 (Voice)
    (865) 574-2232 (Fax)

    email: cdiac@ornl.gov

    If not, then please explain to John Philip (and me) why not.

    /Mr Lynn
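    As a rough sketch of what the “trimmed means and standard deviations” check quoted above amounts to (my own toy version; the trimming fraction and neighbour selection here are assumptions, not NCDC’s actual values):

```python
import statistics

def flag_station_value(value, neighbor_values, trim_frac=0.1):
    """Flag one station reading against its neighbours, per the quoted rule:
    more than 3.5 standard deviations from the trimmed neighbour mean is a
    'suspect', more than 5.0 is an 'outlier'. trim_frac (how many extreme
    neighbours to drop from each end) is a guess; the excerpt doesn't say."""
    vals = sorted(neighbor_values)
    k = int(len(vals) * trim_frac)
    trimmed = vals[k:len(vals) - k] if k else vals
    mean = statistics.mean(trimmed)
    sd = statistics.stdev(trimmed)
    z = abs(value - mean) / sd
    if z > 5.0:
        return "outlier"   # these were formerly hand-verified against records
    if z > 3.5:
        return "suspect"
    return "ok"

# Ten hypothetical neighbouring-station values for the same month, deg C:
neighbors = [10.0, 10.5, 9.8, 10.2, 10.1, 9.9, 10.3, 10.0, 10.4, 9.7]
print(flag_station_value(10.2, neighbors))  # within the neighbour spread
print(flag_station_value(14.0, neighbors))  # far outside it
```

    The point being: “raw” here already means “after readings like that 14.0 have been trimmed out.”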

  99. Philip_B (17:32:29) :

    Time of Observation Bias has been mentioned. Could this lead to a consistent positive trend over decades?

    It can and this appears to have been the case.

    The reference you cite is to the development of an empirical model of the effects of TOB (as far as I can see – only the abstract is provided).

    That’s the easy part!

    The hard part is NOAA’s modeling the changes of measurement behavior worldwide over a 40 year period to derive the linear 0.5 degree inflation.

    You imply elsewhere in your comment that this work was done. Can you provide a reference?

  100. (sorry, missed out a tag in last post)

    Philip_B (17:32:29) :

    Time of Observation Bias has been mentioned. Could this lead to a consistent positive trend over decades?

    It can and this appears to have been the case.

    The reference you cite is to the development of an empirical model of the effects of TOB (as far as I can see – only the abstract is provided).

    That’s the easy part!

    The hard part is NOAA’s modeling the changes of measurement behavior worldwide over a 40 year period to derive the linear 0.5 degree inflation.

    You imply elsewhere in your comment that this work was done. Can you provide a reference?

  101. Thanks John Philip.

    I hadn’t noticed the one chart up to 1999 they present showing the Raw versus Adjusted data. [I wonder why they always stop in 1999 when the paper was produced in May 2008 - see below.]

    The solid block markers are the Raw data and one can see that 1934 and 1921 were the warmest years in the Raw data. 1998 was third.

    Now the chart only goes out to 1999. Since 1999, US temperatures have fallen by 1.75F.

    So the chart would now be very close to 0.0 anomaly.

  102. – I would be interested in knowing the origin of the idea that the lower troposphere should be warming at 30% above the surface rate. The models predict a higher rate, but over the tropics and at about 12 km up. This is not the lower troposphere. Santer et al. found that the actual and modelled trends in the tropical mid-troposphere are consistent.

    – Anyone looking for a non-urban trend need look no further than GISS

    in step 2, the urban and peri-urban (i.e., other than rural) stations are adjusted so that their long-term trend matches that of the mean of neighboring rural stations. Urban stations without nearby rural stations are dropped

    – The Hadley sea surface temperature data is derived from buoys and shipborne measurements; GISS uses satellite-derived data. The fact that trends in both agree well gives us confidence that they are measuring a real phenomenon.

    – The USHCN adjustments apply to the US surface stations. These cover about 2% of the surface area of the globe with a corresponding contribution to the global mean estimates. The Michaels and McKitrick paper (controversially) found that the actual climatic warming trend over land is about 50% below the estimates due to the effects of economic activity being understated. Even if they are correct it is not legitimate to extrapolate this to the whole globe, (as Viscount Monckton did in his APS article).

    Put simply, TObs adjusts for the fact that many observations are not taken over the standard reference period of midnight to midnight. Understandably, volunteers tend to prefer observations at a more convenient time; this introduces a bias which must be corrected for. Imagine I take the time once a day by looking at 100 clocks and taking the average. Further imagine that I know one clock stopped for an hour, then restarted, and is now reading one hour slow. I must correct for this in my averaging, and apply the same adjustment every day unless the clock is put right. Now imagine more clocks go wrong, most in the same direction; the size of the adjustment will increase over time. An imperfect analogy, but you get the idea.
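    That clock analogy can be put in numbers (a toy illustration of the averaging correction only, not NOAA’s actual ToB method):

```python
def corrected_mean(readings, offsets):
    """Average the clock readings after adding back each clock's known
    offset (hours slow), recovering the true time."""
    adjusted = [r + o for r, o in zip(readings, offsets)]
    return sum(adjusted) / len(adjusted)

TRUE_TIME = 12.0
N = 100

# Year 1: one clock of a hundred reads an hour slow.
offsets_y1 = [1.0] + [0.0] * (N - 1)
readings_y1 = [TRUE_TIME - o for o in offsets_y1]

# Later: ten clocks have gone slow, so the naive average is more biased
# and the aggregate correction has to grow over time, even though each
# individual clock's offset never changed.
offsets_y10 = [1.0] * 10 + [0.0] * (N - 10)
readings_y10 = [TRUE_TIME - o for o in offsets_y10]

print(sum(readings_y1) / N)                       # naive mean, slightly low
print(sum(readings_y10) / N)                      # naive mean, lower still
print(corrected_mean(readings_y10, offsets_y10))  # corrected back to 12.0
```

    This shows how a growing aggregate adjustment can be legitimate in principle; whether the real station transitions actually followed such a pattern is the open question raised above.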

    – One of these global mean temperature time series contains surface station adjustments, the other does not. Can you tell which is which ? ;-)

    REPLY: John you have no idea what you are talking about. GHCN also goes through TOBS and FILNET adjustments, just like USHCN. – Anthony

  103. Anthony – I never said or implied otherwise. The graphs of adjustments in the lead article show the difference between raw and adjusted data for the USHCN. Still, you invited comparison with the global warming signal. I have no idea whether the TObs adjustment magnitude is the same for the rest of the global surface station data – but my point was the danger of extrapolating from the adjustments for one country – with a high density of stations – to the rest of the globe.

  104. John Philip (01:25:12) :

    In fact, if you compare the trends in UAH, GISS, RSS and HADCRUT over the lifetime of the satellite measurements you find that UAH is the outlier, trending lower than the rest, indicating that whatever adjustments are being applied to the surface record are legitimate.

    Legitimate or not, you will find all the surface data trends higher than even RSS.

  105. “Can anyone point me to a reference for long term temperature records based on rural areas only where there is no UHI effect and presumably no need for for adjustment to the data?”

    One possibility to answer your question is to refer to David Archibald’s rural data set that has over 100 years of history. Although he maintains his data set is “representative of the US temperature profile away from the urban heat island effect,” he has only four stations in his data set. In his data set, the U.S. has not returned to the high temperatures of the 1930s. If you add Montevideo, MN (another rural station out of UHI range) to the data set, you continue to get the same answer. Of course, this data set is only the United States. If you add Antarctica, you still do not change the data set trend. If you want to add Arctic locations, you must be careful of micro siting issues, but you do not change the conclusion by adding Greenland sites. (Are we cherry picking?) Now if you want to add other continents, you have quality issues to address first. Longevity, freedom from micrositing issues, and consistently rural status can all be challenges.

  106. John Philip

    Imagine I take the time once a day by looking at 100 clocks and taking the average. Further imagine that I know one clock stopped for an hour then restarted and is now reading one hour slow. I must correct for this in my averaging, and apply the same adjustment every day unless the clock is put right. Now imagine more clocks go wrong, most in the same direction – the size of the adjustment will increase over time.

    Your analogy thus implies that the entire AGW signal for the US derives from a correction for a linearly increasing number of clocks going wrong, with more and more running slow each year over a 40-year period.

    If we alter your analogy to allow the clocks to keep correct time, replacing the bias with a change in the time of reading, we still have to show why, each year over a 40-year period, an increasing number of temperature readers decided to change their reading times.

    Neither scenario seems remotely probable in a professional observation environment.
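    The clock analogy above can be sketched in a few lines of Python. This is a toy illustration with invented numbers, not NOAA's actual procedure: each reading with a known offset gets that offset added back before averaging.

```python
# Toy sketch of the clock analogy (invented numbers, not NOAA's method):
# average 100 clocks, one of which is known to read an hour slow.

def corrected_mean(readings, known_offsets):
    """Average the readings after adding back each clock's known bias."""
    adjusted = [r + off for r, off in zip(readings, known_offsets)]
    return sum(adjusted) / len(adjusted)

true_time = 12.0                  # all clocks should read noon
readings = [true_time] * 100
offsets = [0.0] * 100

readings[0] = 11.0                # one clock stopped for an hour...
offsets[0] = 1.0                  # ...so its known correction is +1 hour

raw_mean = sum(readings) / len(readings)
print(raw_mean)                           # 11.99 -- raw average runs low
print(corrected_mean(readings, offsets))  # 12.0 -- known bias removed
```

    As more clocks stop (more stations change observation time), the total correction applied to the network average grows, which is the point of the analogy; the dispute above is over whether station practice really drifted that way.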

  107. Legitimate or not you will find all the surface data trends higher than even RSS.

    Yes, and the satellite reading should indicate a c. 30% greater trend than the surface data. That may not look like much on a graph where the “bumps” are in the same place, but it is highly significant.

    I don’t think anyone disputes a rise from 1979 – 1998. It is the degree of change that is at issue. I don’t doubt the shape of the graph or the spikes and troughs. I question the degree of “tilt”.

    For a heat sink to exaggerate a warming trend, there has to be a real warming trend in the first place for it to exaggerate. (It would also exaggerate a cooling trend, as the warm trend [sic] bias “unwinds” on the way down.) Note that the measured drop in temperatures from 2007-2008 was actually greatest for GISS.

    Over the last decade of (relatively) flat temperatures, I would expect the heat sink effect — assuming the siting situation has been stable over the last decade — to have been mostly a push. Of course, one might not assume the siting situation (or the “encroaching concrete” situation) to have been stable.

  108. Would somebody please explain “Time of Observation Bias” TOB.

    Now to me a “bias” is a definite lopsided offset. If it were a random discrepancy of either sign, then it would not be a bias.

    So if they are able to evaluate this TOB to where they could correct for it, why not just fix the measurement system or schedule to simply eliminate the TOB?

    Stories of how climatology “science” is conducted sound to me somewhat like Micro$oft Windows, often described as the world’s largest computer virus.

    With so many layers upon layers of band-aid fixes du jour, it seems thoroughly deserving of the “Format C:\” update.

    So it seems that climate data gathering consists of multiple layers of proxies,
    and fixes, and adjustments, and bias corrections. Does anybody even remember what the original AlGorythm was; or does it not matter after you add on sufficient layers of corrections and TOB factors?

  109. George E. Smith (13:02:26) :

    Would somebody please explain “Time of Observation Bias” TOB.

    At any weather station using a min/max thermometer set, the readings would occur at one particular time each day. Depending on what that time of day is, that station’s readings would usually be affected more by either previous day lows, or previous day highs (exactly by the lows and highs of the last 24 hours before the reading). This will often cause a Time of Observation Bias, or TOB.

    If readings are taken near the times of daily highs or daily lows (at 7 am, for example), those highs and lows often affect the readings of two days. In any case, when the daily mean temperature is calculated from the min/max readings, the annual average effect of TOB on recorded temperatures can be more than 0.5-0.6°C (about 1°F) at many locations, and can sometimes even reach a magnitude of 1.2°C (2°F).

    Another type of TOB is the effect of any change in the daily mean temperature calculation method. For example, the Hungarian Weather Service defines the daily mean as the average of 4 readings, at 01, 07, 13 and 19 UTC. However, this method was only introduced in 1965. Since the beginning of Hungarian meteorological observations (which started in Budapest in 1780), the daily mean temperature had been calculated from only 3 readings, as a weighted average:

    Tmean = ( T_07h + T_14h + 2*T_21h )/4

    If you try to calculate the average temperature for any day, you will get different values with each method. The long-term result is about a 0.2-0.3 deg. C positive bias, which can create an artificial/enhanced warming trend.

    You can read a very informative summary of TOB here. I would say it is a really good summary, maybe because it is ‘only’ a Summary, not a Summary for Policymakers
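    The gap between averaging methods is easy to demonstrate. Below is a small Python sketch using an invented asymmetric daily temperature curve (not real station data), comparing the true 24-hour mean against the min/max mean and the pre-1965 Hungarian weighted formula quoted above:

```python
import math

# Invented hourly temperatures for one day (deg C): a daily sinusoid plus
# a second harmonic to make the curve asymmetric, as real diurnal cycles are.
temps = {h: 10.0 + 8.0 * math.sin(math.pi * (h - 8) / 12)
               + 2.0 * math.sin(math.pi * h / 6)
         for h in range(24)}

true_mean = sum(temps.values()) / 24.0                       # hourly mean
minmax_mean = (min(temps.values()) + max(temps.values())) / 2.0
weighted_3 = (temps[7] + temps[14] + 2 * temps[21]) / 4.0    # pre-1965 Hungarian formula

print(round(true_mean, 2))    # 10.0
print(round(minmax_mean, 2))  # 11.4  -- min/max overstates the mean here
print(round(weighted_3, 2))   # 9.63  -- the weighted formula runs low here
```

    For this particular curve the min/max mean runs 1.4 degrees warm and the weighted formula 0.37 degrees cool; switching between such methods mid-record is exactly the kind of step change that has to be adjusted for.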

  110. To John Philip:
    Regarding “the origin of the idea that the lower troposphere should be warming at 30% above the surface rate,” I have heard Christy explain it, but my link to his presentation on the subject is no longer working. Although I am not confident of the 30% figure, the concept is quite straightforward: CO2 traps radiated energy in the troposphere which heats up the troposphere; in turn, the troposphere heats up the surface, so a signature of CO2 induced global warming is the troposphere heats faster than the surface while the stratosphere cools. I have not seen this concept rejected by Global Warming Pessimists.
    Regarding heating at “a higher rate . . . over the tropics,” there is quite a bit of backpedaling on this concept in the GWP community. Although the IPCC graphs certainly do seem to confirm this concept, many GWP speakers are claiming that these graphs are being misinterpreted and that the lack of tropical tropospheric warming does not undermine the models. I am intrigued by your reference to Santer. I read his paper and was not convinced. Perhaps my suspicions go on full alert when I again hear the phrase “we have adjusted the observed data and now there is no conflict with model outputs.” But more importantly, others have tried to replicate Santer’s methodology for more recent data, and the results do not hold.
    Regarding the GISS adjustments to deliver a non-urban trend, I find it remarkable that you can read the GISS procedure and come to the conclusion that it is reliably doing what it says it is doing. Hansen apparently comes to that conclusion, so maybe you can too. However, I assure you that I do not share that confidence. The problems with the Night Light methodology and quality control are so numerous that a post of the necessary length would be unwieldy.
    Regarding Hadley sea surface temperature data and GISS satellite-derived data, it is not surprising that from 1980 to 2006 these two measures for the oceans showed increases over the positive phases of the PDO and AMO. What is more interesting to me is that for much of 2008, temperatures were as low as (or even lower than) temperatures in 1980 (according to satellite data). It is hard for me to believe that CO2-induced AGW is such a pressing issue when “variability” can wipe out almost thirty years of temperature increases.

  111. John Philip (01:25:12) :

    In fact, if you compare the trends in UAH, GISS, RSS and HADCRUT over the lifetime of the satellite measurements you find that UAH is the outlier, trending lower than the rest, indicating that whatever adjustments are being applied to the surface record are legitimate.

    But wait a minute, doesn’t UAH rely on GISS data to “fill in” for cloud covered areas, and doesn’t GISS rely on RSS data to fill in for oceans, polar regions and holes in Russia and Africa?

    It seems to this casual observer that interlocking data sets would logically walk somewhat in lockstep.

  112. @ Jeremy Thomas (04:34:29) and Philip_B (17:32:29)
    Actually, the full text is available at that site. (thanks to Philip_B for the link). The main thrust of the paper is the development of a computer subroutine which calculates the TOB for a set of input parameters. There isn’t any direct information on how TOB could have a positive trend over decades. But there were several interesting comments. They stated that there had been a general change in the rules regarding times of observations some time around the 1930’s. It seems that many stations are operated by volunteers and it is conceivable that they switched over to the new regime over quite a few decades, giving rise to a long term positive trend. But it does seem a bit unlikely, nevertheless.
    The paper also stated that at many stations the observation times changed several times in a decade, maybe to fit in with the lives of the current observers. This effect would be random and probably would not contribute to any long term trend.
    Overall, I’m not convinced by the TOB argument. Have there been any good studies that were able to establish a long term trend due to TOB? That I would like to see.
    Chris

  113. Weather Station Data, Raw or Adjusted?

    Wrong question!

    Let’s start with correct measurements before we discuss the “raw or adjusted” question.

    I am still thinking of David Archibald’s temperature forecast for May 2009 he made some time ago! Will he get it right?

    The Canadian Prairie winter temperature anomalies
    have dropped a whopping 6.6 degrees in just three years, according to this publication:

    http://globalfreeze.wordpress.com/2009/03/25/canadian-prairie-winter-temperature-anomalies-drop-by-66-71-degrees-c-in-just-three-years/

    It is unbelievable that the AGW scare still has such a grip on our politicians.
    If any legislation is accepted, we should send them out of office for reasons of fraud and incompetence.

  114. IIRC, the raw satellite data has to be calibrated against something. I remember (prb’ly David Smith at Climateaudit) that radiosonde measurements were used for that since they get temps at mid-tropospheric heights. So in some respects even sat temps aren’t independent.

    It goes back to Pielke Sr’s recommendation — use ocean heat content for the “real” temps. Of course, measuring this isn’t easy.

  115. “”” Adam Soereg (14:54:08) :

    George E. Smith (13:02:26) :

    Would somebody please explain “Time of Observation Bias” TOB.

    At any weather station using a min/max thermometer set, the readings would occur at one particular time each day. Depending on what that time of day is, that station’s readings would usually be affected more by either previous day lows, or previous day highs (exactly by the lows and highs of the last 24 hours before the reading). This will often cause a Time of Observation Bias, or TOB.

    If readings are taken near the times of daily highs or daily lows (at 7 am, for example), those highs and lows often affect the readings of two days. In any case, when the daily mean temperature is calculated from the min/max readings, the annual average effect of TOB on recorded temperatures can be more than 0.5-0.6°C (about 1°F) at many locations, and can sometimes even reach a magnitude of 1.2°C (2°F).

    Another type of TOB is the effect of any change in the daily mean temperature calculation method. For example, the Hungarian Weather Service defines the daily mean as the average of 4 readings, at 01, 07, 13 and 19 UTC. However, this method was only introduced in 1965. Since the beginning of Hungarian meteorological observations (which started in Budapest in 1780), the daily mean temperature had been calculated from only 3 readings, as a weighted average:

    Tmean = ( T_07h + T_14h + 2*T_21h )/4

    If you try to calculate the average temperature for any day, you will get different values with each method. The long-term result is about a 0.2-0.3 deg. C positive bias, which can create an artificial/enhanced warming trend.

    You can read a very informative summary of TOB here. I would say it is a really good summary, maybe because it is ‘only’ a Summary, not a Summary for Policymakers… “””

    Adam, thanks for that explanation; I’m not surprised. Any daily reporting based on a min/max thermometer is going to be wrong as far as representing the average temperature for that day at that site; and making three or even four measurements doesn’t really solve the problem.

    Also, it is not correct to describe the effect as a bias, which implies that it always errs on one side or the other of being correct.

    The problem is not with the time of reading; it is simply that such data sampling protocols violate the Nyquist sampling theorem, and they do so by at least a factor of two or more. In that case, it is not possible to recover the true average temperature, because the aliasing noise due to gross undersampling corrupts the zero-frequency signal, which is the average being sought.

    But TOB and temporal aliasing noise are the least of our worries, because the spatial violation of the Nyquist sampling theorem is by orders of magnitude, not factors of 2-4; so the set of reporting stations in no way reflects a correct sampling of the entire data field spatially.

    Which is why GISStemp and its lookalikes in ground observations simply report GISStemp anomalies, and in no way reflect the real average earth surface or lower troposphere temperature.

    Climatologists need to burn their statistical mathematics textbooks and buy a good book on sampled-data system theory.

    I use “Digital and Sampled-Data Control Systems” by Julius T. Tou of Purdue University. It’s a McGraw-Hill publication, though maybe a bit dated (1959); there are more modern texts my colleagues use.

    Until the climate data sampling problem is corrected; these groups like GISS and Hadley, will continue to report absolute rubbish disguised as science.

    But Adam I do appreciate you taking the time to explain this TOB concept to us.

    George
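    George's point about aliasing corrupting the zero-frequency term can be demonstrated directly. In this toy sketch (an invented signal, not station data), a temperature component at exactly the sampling frequency of two readings per day folds into the apparent mean, even though its true mean is zero; hourly sampling resolves it.

```python
import math

def sample_mean(f, samples_per_day, days=30):
    """Mean of f(t) sampled at equal intervals over the given number of days."""
    n = samples_per_day * days
    return sum(f(i / samples_per_day) for i in range(n)) / n

def base(t):
    # simple diurnal cycle; true continuous mean is 15.0
    return 15.0 + 10.0 * math.sin(2 * math.pi * t)

def with_harmonic(t):
    # add a 2 cycle/day component; the true continuous mean is still 15.0
    return base(t) + 2.0 * math.cos(4 * math.pi * t)

print(round(sample_mean(base, 2), 3))           # 15.0
print(round(sample_mean(with_harmonic, 2), 3))  # 17.0 -- the 2/day term aliases into the mean
print(round(sample_mean(with_harmonic, 24), 3)) # 15.0 -- hourly sampling recovers it
```

    Twice-daily sampling cannot distinguish the harmonic from a constant offset, which is exactly what aliasing to zero frequency means for an average.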

  116. TO: beng (09:18:35) : “. . . the raw satellite data has to be calibrated against something.”

    It may be intuitive to think so; however, the explanation is that they compare atmospheric measurements against measurements from outer space, and that is how they determine atmospheric temperatures. This issue has been discussed a couple of times on this blog, and the scientific (albeit, in my view, confusing) explanation can be found on the UAH site.

  117. TOBs bias explained On One Side of a Postcard:

    If my observation time is, say, 2 AM (i.e., near Tmin), and it is 0°C at 2 AM on Day 1 and 10°C at 2 AM on Day 2 (24 hours later), my Tmin for BOTH days is going to be 0°C:

    Tmin will be at 2:00 AM for Day 1 (0°C) — and 2:01 (one minute later) for Day 2 (0°C)!

    Reverse effect if TOBS is near Tmax.
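    The postcard example can be written out in a few lines of Python (invented numbers): one cold moment at the 2 AM reset shows up as the minimum of both adjacent 24-hour windows.

```python
# Toy sketch of the TOBS double-count (invented numbers): hourly
# temperatures over two days, flat at 10 C except one 0 C spike at
# hour 26 (2 AM of day 2), right at the observation/reset time.
temps = {h: 10.0 for h in range(48)}
temps[26] = 0.0

# The reading taken at 2 AM of day 2 covers the previous 24 h, ending at
# hour 26; the thermometer is reset at that moment while the air is still
# at 0 C, so hour 26 also seeds the next window's minimum.
day1_window = [temps[h] for h in range(2, 27)]
day2_window = [temps[h] for h in range(26, 48)]

print(min(day1_window), min(day2_window))  # 0.0 0.0 -- one cold snap, two record lows
```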

  118. Juraj V. (00:58:02) :

    Here is a comparison between the Bratislava airport measurement on the city outskirts (550,000 inhabitants) and Lomnicky Peak in the High Tatras (astronomical observatory on the peak, definitely rural):
    —-

    Thank you! The city went up from 10.5°C in 1951 to nearly 11.6°C at the end of 2006, but the mountain observatory stayed the same.

    TOBS bias? So why is the BIAS continuing to change? Once “adjusted” – always and “conveniently” upwards, by the way – why is it increasingly upwards? Philip’s “answer” makes no sense mathematically.

    Doesn’t the raw data hold the time of the reading? A time bias may occur – but once it occurs at a station, the bias does not increase linearly for the rest of the century. If 1200 stations are changing their reading times over fifty years, then after about 15 years roughly half would have shifted each way, and the “bias” should stop increasing.
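    Whether the historical migration of reading times was one-directional is exactly the point in dispute, but the arithmetic of how a one-way migration yields a steadily growing network adjustment is easy to sketch in Python (invented bias magnitudes, not the published TOBS values):

```python
# Toy model (invented magnitudes): assume afternoon readings carry a
# +0.3 C warm bias and morning readings a -0.2 C cool bias, and that
# stations migrate afternoon -> morning steadily over 40 years.
N_STATIONS = 1200
AFTERNOON_BIAS, MORNING_BIAS = 0.3, -0.2

for year in range(0, 41, 10):
    switched = N_STATIONS * year // 40            # cumulative stations switched
    mean_bias = (switched * MORNING_BIAS +
                 (N_STATIONS - switched) * AFTERNOON_BIAS) / N_STATIONS
    adjustment = -mean_bias                       # correction added to raw data
    print(year, round(adjustment, 3))

# The applied adjustment climbs from -0.3 to +0.2 over the 40 years, a
# 0.5-degree upward trend, even though no individual station's bias ever
# grows. If stations instead shifted half each way, the trend would cancel.
```

    So the trend in the adjustment graphs requires a systematically one-directional shift in observation times; whether the station history actually shows that is the question being argued above.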

Comments are closed.