Independent Review Discovers that NCDC Fumbles Data Handling in GHCN Climate Data

Guest essay by Bob Koss

Being an old retired guy with time on my hands, this summer I decided to find out just how well GHCN-Monthly follows their own methodology in regard to data collection. What I discovered is that they don’t. My remarks below relate strictly to the GHCN monthly unadjusted dataset, on which their final adjusted dataset is based. At the end of this article are links to some verifications of what I discuss.

For those unfamiliar with the organizations involved, a few terms are defined.

The Global Historical Climatology Network (GHCN), a part of the National Climatic Data Center (NCDC), is the repository other global temperature data analysts turn to for many of their data sources. Monthly Climatic Data for the World (MCDW) is also part of NCDC and separately compiles a less extensive set of monthly data than GHCN. The US Historical Climatology Network (USHCN) is a network of stations entirely within the continental US and is also part of NCDC. The Met Office is a UK data source of stations, many of which overlap with other NCDC sources.

GHCN created a table of data sources, ranked by priority (quality). The highest-priority data is to be used whenever multiple sources are available for the same station. This rule might as well not exist, since they don’t follow it. Evidently it is only a rule for PR purposes and not really necessary to follow.

Here is their description of that rule from the methodology paper linked near the end of this post.

[56] The data integration phase begins by assembling and merging the various source-level data sets. Although a single datum may be provided by more than one source, only one value is added to version 3 for any particular month. The datum is selected based on availability and a hierarchical process involving priority levels based on the reliability and quality of the source. Data from sources considered to be of higher quality and reliability are used preferentially over other sources. Table 3 lists the sources, and their order of assemblage (highest priority listed first). For example, if a non-missing datum is present for the same date/location from data source M (MCDW) and data source P (CLIMAT bulletin), the datum from data source M will be placed in the data set. The source from which each datum originated is indicated in the version 3 data set by a source flag as shown in the table. Daily reconstruction of the data set using this method ensures that any changes made in the source data sets get incorporated into GHCN-M while also allowing for the reproduction of the version 3 data set by other institutions or entities.

Table 3, mentioned in the above quote:

Table 3. Source Data Sets From Which GHCN-M Version 3 Is Constructed and Maintained

Priority  Source Data Set                                Source Flag
1         Datzilla (Manual/Expert Assessment)            Z
2         USHCN-M Version 2                              U
3         World Weather Records                          W
4         KNMI Netherlands (De Bilt only)                N
5         Colonial Era Archive                           J
6         MCDW (DSI 3500)                                M
7         MCDW quality controlled but not yet published  C
8         UK Met Office CLIMAT                           K
9         CLIMAT bulletin                                P
10        GHCN-M Version 2                               G*

*For any station incorporated from GHCN-M version 2 that had multiple time series (“duplicates”) for mean temperature, the ‘G’ flag is replaced by a number from 0 to 9 that corresponds to the particular duplicate in version 2 from which it originated. This number is the 12th digit in the version 2 station identifier.
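The Table 3 rule is simple enough to sketch in a few lines. The snippet below is an illustration of the stated policy, not NCDC’s actual code: given several candidate values for the same station/month, it keeps the one whose source flag carries the highest priority (lowest number in Table 3).

```python
# Source flags from Table 3, mapped to priority (1 = highest).
SOURCE_PRIORITY = {
    "Z": 1, "U": 2, "W": 3, "N": 4, "J": 5,
    "M": 6, "C": 7, "K": 8, "P": 9, "G": 10,
}

def select_datum(candidates):
    """candidates: list of (source_flag, value) pairs for one station/month.
    Returns the pair from the highest-priority source."""
    return min(candidates, key=lambda c: SOURCE_PRIORITY[c[0]])

# The paper's own example: MCDW ("M") beats a CLIMAT bulletin ("P").
print(select_datum([("P", 21.4), ("M", 21.2)]))  # -> ('M', 21.2)
```

By this rule, a Met Office value (flag K, priority 8) should never displace an available MCDW value (flag M, priority 6), which is exactly what the incidents described below involve.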

Around June 6th, 2014, GHCN rolled back a higher-quality source to a lower one by changing 2013 data from MCDW to Met Office data (16,000+ months of data). This resulted in numerous value changes and an increase in the amount of missing data. Those changes remained for over a month, until I noticed them while comparing my June 3rd file with one from early July. I inquired about the changes. The next day, July 10th, the higher-quality source was re-inserted. I was told a couple of days later, by one of the lead GHCN team members, that it was “an unintentional processing problem that occurred with one of our ingest streams”. They did update their status.txt file, unsurprisingly in about as low-key a way as possible.

I find their reason unpersuasive. Why are they even touching 2013 data unless to overwrite it with a higher-quality source? I wouldn’t expect them to still be streaming 2013 data; I would expect them to have it always at hand and archived on site. They rebuild their dataset daily. What competent organization would not do a sanity check on a new build by running a simple comparison against the previous dataset?
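A check of that sort takes only a few lines. This is a sketch under stated assumptions, not NCDC code: the fixed-width layout (11-character station ID, 4-character year, 4-character element, then the monthly values) follows the GHCN-M v3 readme, and the sample records are fabricated.

```python
def parse(lines):
    """Index GHCN-M-style fixed-width records by (station, year, element)."""
    records = {}
    for line in lines:
        key = (line[:11], line[11:15], line[15:19])
        records[key] = line[19:].rstrip("\n")
    return records

def diff(old, new):
    """Compare two daily builds: (records changed, dropped, added)."""
    changed = sum(1 for k in old.keys() & new.keys() if old[k] != new[k])
    dropped = len(old.keys() - new.keys())
    added = len(new.keys() - old.keys())
    return changed, dropped, added

# Fabricated example: one station's 2013 record changes between builds.
yesterday = parse(["101603550002013TAVG  111M", "501009430002013TAVG  222M"])
today     = parse(["101603550002013TAVG  111M", "501009430002013TAVG -9999"])
print(diff(yesterday, today))  # -> (1, 0, 0)
```

Anything other than a small, explainable count of changes to years long since closed out would be a red flag worth investigating before publishing the build.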

My latest query, from about a week ago, has to do with their still using lower-quality data at least as far back as 2001. For Australia between 2003 and 2013, 98% of the data is sourced to the Met Office, but the higher-quality MCDW has much of that data available. I don’t understand why they aren’t using the higher-priority MCDW data. There are 2,000–3,000 pieces of Met Office data annually still in use since 2001; less than a third of it relates to Australia. Other countries in the database might also still be listed with inferior data simply because their data hasn’t been properly upgraded. A couple of emails were exchanged, but no reason was given and no changes were made. At this point I think it is questionable whether GHCN will thoroughly investigate and upgrade to higher-quality sources where appropriate. It will be a pleasant surprise if they do.

Below is a graphic example of how much difference the data source can make in a station’s monthly temperature record. I’m not saying all stations have differences of such magnitude, or that this shows the largest/smallest difference, or that all stations go in a similar direction. I haven’t checked, but I wouldn’t be surprised if the differences tilted quite a bit in one direction.

[Graph: the same station’s monthly temperature record as reported by two different data sources.]

Some digging in July led to finding that the entire continent of Australia is devoid of data for September, October, and November of 2011. They did have September and October data in v3.0 when it was superseded by v3.1 in early November 2011. v3.1 discarded October when it launched, leaving only September intact. At some point since then they also discarded September. I emailed them about this on July 31st and a couple of times since. The latest word is that they are trying to get the Met Office to re-transmit the data. MCDW has much of that data, and since GHCN considers it a higher-quality source than the Met Office, I don’t understand why they aren’t using that instead.

Final example for today. On October 2nd this year they deleted all the August data for the rest of the world (ROW), leaving only USHCN data in the database. They even deleted US station data not part of USHCN. Amazingly, they still managed to add ROW data for September during the deletion period. The August ROW data was missing until October 8th, when they re-inserted it. I still don’t know why they deleted it. I mentioned it in an email about a week ago; no reason has been provided. The data deletion did increase the mean value of the remaining August data by 0.9 °C. Was there some announcement concerning global temperatures for summer or August during that period?
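That 0.9 °C jump is just what happens to an average when a large, climatologically different subset of stations is removed: the remaining mix is no longer comparable to the old one. A toy illustration, with made-up numbers:

```python
# Hypothetical August station means, in degrees C.
row = [14.2, 9.8, 21.5, 3.1]    # rest-of-world stations
us  = [22.0, 20.4, 23.6]        # US-only stations

full_mean = sum(row + us) / len(row + us)
us_only_mean = sum(us) / len(us)

# Deleting the ROW stations raises the mean of what is left.
print(us_only_mean > full_mean)  # -> True
```

Nothing about the actual temperatures changed; only the station mix did.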

With such erratic data handling, the accuracy of their product is questionable.

This post is already long enough, so I’ll end here.


 

Reference links:

Free paper on GHCN v3 methodology; p. 11 explains source priority and processing. http://onlinelibrary.wiley.com/doi/10.1029/2011JD016187/pdf

Daily issued data files along with status.txt, a readme, and other stuff.

ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/

Published MCDW data by station (ends 2011).

ftp://ftp.ncdc.noaa.gov/pub/data/globaldatabank/monthly/stage2/mcdw/

Published MCDW data by month. Current to Aug 2014.

http://www1.ncdc.noaa.gov/pub/data/mcdw/

A compilation of annual data concerning the 2013 roll-back, the October 2014 deletion, and the missing Australian data in 2011.

http://goo.gl/UZ73YF


67 thoughts on “Independent Review Discovers that NCDC Fumbles Data Handling in GHCN Climate Data”

  1. The explanation that it was a “handling error” (code error?) is plausible; I’ve made similar mistakes.

    The solution is to make their code “open source”, allow massive continuous public review of their code.

    But if their code is anything like the infamous “Harry Read Me” shambles, I can understand why they wouldn’t want to do that.

    • The explanation that it was a “handling error” (code error?) is plausible; I’ve made similar mistakes.

      That was my reaction also, Eric. And I have a lot of sympathy for programmers and data analysts who probably have to deal with flaky data, where new and creative format/content errors can screw up what should be routine updates.

      BUT it does seem to me that professionalism dictates a public audit trail telling us that on 2014-10-02 the August ROW data was deleted, and why. If it was, for example, in preparation for an update containing more complete data, or correcting a data transposition error at some site(s), there should be a statement of what the update fixes and, in this case, that the update failed.

    • Really? Let’s look at examples.

      Hansen made his code open.
      2 skeptics I know of got it working. One made positive contributions by fixing problems.
      1 group of non-skeptics got it working and made substantial improvements, porting it to Python.
      NUMEROUS skeptics still maintain and promote false ideas about what the code actually does.

      NCDC makes their adjustment code available. No skeptic that I know of has looked at it or run it.
      Numerous skeptics continue to spread false stories about the code’s availability and what it does.

      We also make our code available. The only questions I ever got about the code were:

      A) Why don’t you supply SVN access? (We do; he just didn’t read the code.)
      B) What routines handle seasonality? Durr, read the code.
      C) Can you rewrite the code in R for me?
      D) I’m trying to read Matlab files with R and am getting errors from the R package; can you fix the R package that you are not an author of?

      So, you’ve never seen NCDC code. I have. I suggest you go get the code, break out your compiler, and do it better.

      The point of open source goes beyond merely showing others what you did. The point is making it possible for people who care to IMPROVE the code. Find mistakes and improve.

      • I’m sure someone will have a look at your code. My experience in the area is severely lacking, so I will leave it to others.

        What I can do is look at your results and some raw data. Here are the results for Mildura:
        http://berkeleyearth.lbl.gov/locations/34.56S-142.05E
        The data starts in 1855.

        The Post Office station was opened in 1889. White settlers only arrived in 1857 (most likely without a thermometer), and it was a derelict sheep station in 1886 when plans for Mildura started. The nearest weather station, in Wentworth (no data available at BOM), was opened in 1868. I don’t think that your confidence intervals are big enough.

        The raw data seems to be missing some months, so I’m having a little difficulty lining up the max and min temperatures to calculate the monthly average, but the max temperature did cool by 0.26 °C/decade between 1890 and 1950. The min temperatures warmed by 0.15 °C/decade, for an overall 0.9 °C/decade cooling.

        You have a warming trend of about one tenth of that.

        Many inland stations in Australia show this cooling in the raw data at the beginning of the 20th century, so how does homogenization work again?

  2. “Steve Goddard” gets a LOT of flak. But his basic premise, that NOAA and most of the government climate agencies adjust the raw data to cool the past, warm the present, and create trends where none exist, seems to be a sensible hypothesis.

    Which is finally being taken seriously in Australia and NZ, for starters.

    If he wasn’t so “right-wing-ish” he would be getting a lot more credit for his work. His results are publishable, imho.

  3. Having the climate datasets in the hands of activists is like leaving a liquor store with an alcoholic behind the till. The data will be abused.

    Some professional accountability and independence is called for. Someone like Steve McIntyre should oversee the needed transition.

    • Phlogiston, you’ve put your finger on the key word, “activist”. How could anyone deny that being an activist creates a conflict of interest for a scientist?

      Aristotle would point out that information workers are evaluated by some combination of three attributes: truth, beauty, and goodness. Think of workers in those categories as scientists/journalists, artists, or priests/activists/reformers/patriots.

      Information workers in all of those categories gain or lose status depending on their evaluations, though clearly the nature of those evaluations differs. “Is he objectively correct?” is a different question from “Does he write well?” or “Does he mean well?”

      Having a bunch of activist environmentalists, eager for approval from the New York Times and Sierra Club, in charge of the data record, is a virtual guarantee of mischief.

  4. “With such erratic data handling, the accuracy of their product is questionable.”

    With such erratic data handling, the accuracy of their product is ZILCH.

    Fixed it for ya.

  5. “the entire continent of Australia is devoid of data for September, October, November in 2011”
    Not sure I fully understand this. How can a national institution, acting as the primary repository and source for historical data, lose part of an historic series?
    Sure, I understand the processes by which any daily run can be corrupted or broken, but have they not heard of backups, such as grandfather-father-son files? Do they not keep offsite copies of the annual data files, protected by very restricted access? As a minimum, any public company would follow these basic procedures (and more) for its financial data. This isn’t magic.
    We live in an era when the world is, and people are, what the databases describe. So it’s probably high time that national data repositories had annual external audits. As a minimum, the audits would verify the robustness of their processes and the quality of their data protection procedures.

    • Just subject them to SARBOX rules and season with a bit of RICO penalties… and for good measure, insist on PCI level logging and compliance… (And yes, I presently work assuring ‘compliance’ in a company setting… so if WE can do it, they ought to do it…)

  6. The only way the IPCC and all the climate aggregators can ever recover their credibility is to admit they lied and manipulated the raw data, and why – Then show they have fired the perps and introduced controls to ensure future integrity can be audited.

    I won’t hold my breath….

  7. Never ascribe to malice that which is adequately explained by incompetence. But then one shouldn’t rule out fanaticism.

  8. How does this compare to the satellite data set, wrt:

    – are the data sets publicly available?
    – are adjustments being made erratically, or periodically as required?
    – are adjustment methods and rationale publicly available?
    – what kind of responses are received to queries about the data?

    • 1) Yes, both the unadjusted and the adjusted sets, which is how Mr. Koss was able to do his investigation in the first place.

      2) Depends on what sort of adjustments you’re talking about. The QA/homogenization process is performed on a regular basis as new data are added.

      3) Yes. You can even download, compile and execute the code NCDC uses to process the GHCN raw data into final published form: ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/software/52i/ NASA/GISS provides open access to their code and documentation as well.

      4) Dunno. Try emailing someone a specific question not already answered in the published FAQs and documentation and see what happens.

    • How does this compare to the satellite data set, wrt:

      – are the data sets publicly available?
      – are adjustments being made erratically, or periodically as required?
      – are adjustment methods and rationale publicly available?
      – what kind of responses are received to queries about the data?

      What do you mean by satellite data? Let’s take UAH.

      – are the data sets publicly available? The final datasets are available, and the sources are as well.

      – are adjustments being made erratically, or periodically as required?

      Adjustments to the SOURCE data and the FINAL data are not made periodically. They are made when people get the time or the funding, or when a new idea gets promoted.

      – are adjustment methods and rationale publicly available?
      Not clearly in every case.

      – what kind of responses are received to queries about the data?

      Mostly you get ignored.

      I will give you an example from UAH.

      UAH uses a 2.5-degree bin for consolidating data. However, the source data has a much higher resolution. I wanted to compare a 1-degree surface record with a 1-degree UAH record.
      The response I got was that moving to 1 degree would be too much work.

      Note: the code for UAH is not open. The code for the sources that UAH uses is not open.
      At the BOTTOM, the source code onboard the satellite is not open and is controlled by ITAR.

      So, I doubt all the work that UAH and RSS do. I doubt it because I cannot check it. I doubt it because they do not publish code, and at the bottom the CORE data acquisition code is a black box.

      But lots of people here trust it. They put principles aside because they like the answer.

      • I doubt things too, Steven M. I doubt that the data producing an obvious error like the one at this station, http://stevengoddard.wordpress.com/2014/09/14/occupy-iceland/, is giving a true reading of GAT. My request for you to explain just this one station’s adjustments (please, no semantic games; they do adjust the data) is not a game of gotcha. It casts legitimate doubt: no matter the code, no matter if all the math is correct, that does not make the methodology correct. The climatologist from Iceland in charge of the stations disputes the adjustments in the link you have ignored.

        Errors this obvious, and there are many other examples, cast doubt on the methodology. You appear swayed by complexity and by balance sheets balancing. The numbers may all add up, but when up to 30 percent of the data is made up by inferred algorithmic computation, and real data is ignored, the example of one station is symbolic of the FUBAR state of the rest.

        Individual stations clearly adjusted poorly; a large percentage of made-up data spread from up to 1,200 km away; a past that is constantly changing (long past any TOB adjustments); adjustments that consistently warm the present and cool the past; adjustments that contradict all-time hot readings, historic drought conditions, and the historic percentage of days over given high temperatures. In summary, the adjustments do not square with the past, and recent adjustments do not square with the satellite data sets, where the 1998 anomalies are double the current anomalies, nor with near-record NH snow cover, yet the surface stations claim an all-time record high. All of this leads to a surface record legitimately called FUBAR.

        So once again, Steven M, start with explaining one station.

      • David A: “…a true reading of GAT…”

        There is no GAT. It’s a fantasy. A made-up concept with no physical meaning.

  9. Bob, a little picky thing of mine, but it is important to me. Sorry in anticipation.

    Methodology is the study (from the Greek -ology) of a method or methods.

    • meth·od·ol·o·gy
      noun \ˌme-thə-ˈdä-lə-jē\
      plural meth·od·ol·o·gies

      Full definition of METHODOLOGY:
      1 : a body of methods, rules, and postulates employed by a discipline : a particular procedure or set of procedures
      2 : the analysis of the principles or procedures of inquiry in a particular field

      Example: He blamed the failure of their research on poor methodology.

      http://www.merriam-webster.com/dictionary/methodology

  10. Some digging in July led to finding the entire continent of Australia is devoid of data for September, October, November in 2011. They did have September, October data in v3.0 when it was superseded by v3.1 in early November 2011. v3.1 discarded October when it launched, leaving only September intact.

    That’s unfortunate.

    At some point in time since then they also discarded September.

    Oh dear.

    October 2nd this year they deleted all the August data for the rest of the world (ROW), leaving only USHCN data in the database. They even deleted US station data not part of USHCN. Amazingly, they still managed to add ROW data for September during the deletion period.

    Well at least they found something from somewhere in the rest of the world.

    The August ROW data was missing until October 8th when they re-inserted it.

    Well, where did that go, and how was it handled during the passage through the aether?

    This article makes a good case for questioning the quality of the data.
    Has anyone seen their QMS and audit strategy?

  11. ‘With such erratic data handling, the accuracy of their product is questionable.’

    So, normal practice for climate science then?
    Still, what does accuracy matter when you get the ‘right results’ from poor-quality data?
    When you have the result you need, effort spent making sure it is accurate is counterproductive: the third rule of climate ‘science’.

  12. Anthony,

    Thank you for your patience in guiding me through the requirements necessary to getting this published. I can be somewhat thick about things with which I haven’t previously had experience.

  13. Whether or not there is global warming going on, whether or not we need an IPCC to address matters of global warming, whether or not we need to maintain or increase NCDC staff and funding: all of it depends on the data. If the data shows warming, the IPCC and NCDC build their empires; if not, they may lose funding. Yet we allow the measurements that determine the level of necessity for the NCDC to be maintained and ADJUSTED by the NCDC itself, without any auditing agency checking what they are doing. It’s like letting salesmen adjust their own sales records shown to their bosses before salary reviews. It is absurd.

  14. I’m not a climate scientist, but I am a retired CFO, so I’m very familiar with data management and quality. The type of NCDC data errors you describe are certainly easy to detect ASSUMING THE NCDC DIDN’T INTENTIONALLY FUDGE THE DATA OR THAT THE NCDC ACTUALLY HAS THE PROFESSIONAL INTEGRITY TO CARE ABOUT THE DATA QUALITY.

    It’s that “professional integrity” I’m worried about.

  15. Bob Koss:
    It can be assumed that folks at the NCDC will have read this post. It would be interesting to see if they correct the errors that you have uncovered, or if they ignore them. A follow-up post would be interesting.

    • mpainter,

      They issue a new dataset almost daily, between 8:00 and 8:30 AM eastern time. I’ve only seen them skip a couple of days in the last four months. This morning there was no update. Maybe they are looking into things.

  16. Ingest issues are common. We face them all the time.

    That’s one reason why people should always supply the code as used and the data as used.
    Even then, the problem is not solved.

    Let me give you an example.

    We receive metadata from a supplier. It reports the station latitude as X and the longitude as -Y. As part of QA we determine that -Y is wrong and that Y is the correct value: they have switched the sign. So we send them a note to upstream the fix, and we locally change the value. The next month the fix has been made upstream, so we remove the local fix. Then the following month the source re-introduces the error and trips the QA code again, and the process starts all over again.
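That kind of check is easy to picture in code. The following is a minimal sketch of the QA idea described, with invented station IDs, not Berkeley Earth’s actual pipeline: compare each supplier longitude against a trusted local reference and flag values that are exactly the negation of the reference.

```python
def flag_sign_flips(supplier, reference, tol=0.01):
    """Both arguments map station_id -> longitude (degrees).
    Returns IDs whose supplier longitude is the sign-flipped reference."""
    flipped = []
    for sid, lon in supplier.items():
        ref = reference.get(sid)
        # A sign flip: supplier value cancels the reference almost exactly
        # (ignore stations sitting on the zero meridian).
        if ref is not None and abs(lon + ref) < tol and abs(ref) > tol:
            flipped.append(sid)
    return flipped

# Station "A" arrives with its longitude sign switched.
print(flag_sign_flips({"A": -142.05, "B": 151.2},
                      {"A": 142.05, "B": 151.2}))  # -> ['A']
```

Run monthly against the fresh feed, which is why a re-introduced upstream error keeps tripping the same test.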

    There are in excess of 14 different sources of temperature data. Before eliminating and reconciling duplicates, there are over 300,000 different station records.

    And it gets more complicated: those 14 different sources also draw on one another.

    Even if you are 99% correct in handling ingest, you end up with a good number of cases where the records are problematic.

    However: pick any 1,000 stations you want. Pick any single source you want. Compute the global average. The answer will be X. Now pick a different source and a different 1,000. The answer will still be X.

    What you come to realize is that regardless of the stations you pick, regardless of the source you use, regardless of your ingest errors, regardless of your quality ranking for sources, regardless of the method you use to average, the answer is always statistically indistinguishable from X.

    The global average is a global average. Local detail will shift around depending on your sources, methods, and mistakes. One thing doesn’t change: it’s getting warmer.
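The subsampling claim is easy to try on synthetic data. This sketch is not Berkeley Earth code; it fabricates 5,000 station records sharing one warming signal plus independent noise, then shows that two different random 1,000-station subsets recover nearly the same trend.

```python
import random

random.seed(0)
N_STATIONS, N_YEARS = 5000, 50
# Made-up shared signal: 0.2 C/decade, plus per-station noise.
signal = [0.02 * yr for yr in range(N_YEARS)]
stations = [[g + random.gauss(0, 1.0) for g in signal]
            for _ in range(N_STATIONS)]

def subset_trend(rows):
    """Average the chosen stations year by year, then fit a least-squares slope."""
    n = len(rows[0])
    means = [sum(r[y] for r in rows) / len(rows) for y in range(n)]
    xbar, ybar = (n - 1) / 2, sum(means) / n
    num = sum((x - xbar) * (m - ybar) for x, m in zip(range(n), means))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

a = subset_trend(random.sample(stations, 1000))
b = subset_trend(random.sample(stations, 1000))
print(abs(a - b) < 0.005)  # the two subsets agree closely -> True
```

This only demonstrates the statistics of subsampling a shared signal; it says nothing about whether the underlying records are correct, which is the question the post raises.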

    There is an ongoing project to remedy some of these situations: ISTI.

    Even there, some basic problems will remain. GHCN and ISTI (and Berkeley Earth, for that matter) are all aggregators of data. That aggregation is utterly dependent on sources we don’t control. In my daily job I face the same problems, in an industry that has rules and standards for how data should be supplied. And still there are data problems every day. Some problems cost lives, so it’s no joke.

    There is a different approach from the aggregation approach taken by GHCN, ISTI, and Berkeley. That is the CRU approach: they use only data that has been adjusted by the country suppliers.

    • “One thing doesn’t change: It’s getting warmer.”

      It got warmer for a while. But people born at the start of the “pause” will be voting this year.

    • Steve, the data audit and ingest complexity is a very, very old whine that thousands of firms and models have solved. It is a problem the CaCa do not want solved. It is the “dog ate my homework” cliché. Please explain again how it is warming.

    • Mr. Mosher, I always appreciate and value your input. Thank you.

      We all know that since the end of the Little Ice Age temperatures have increased, just as they have after every ice age. The point is that no science or math has shown that anthropogenic sources are the culprit here. Quite possibly mankind has influenced this increase to a minor extent, but most likely Mother Nature is running the show, and that I will believe until proven otherwise.

    • The deeper, the more complex, and the more arcane the data sets, the more opportunity for mischief.

      • Unfortunately true. The termites do their work unseen. When you finally become aware of the infestation, you can’t get at them.

    • “It’s getting warmer.”
      =======================
      Since when? How much? Why have both satellite records diverged so greatly (1998 compared to 2014) from what the surface record shows? Is UHI accounted for properly? (A very debatable subject.) BTW, saying the trend “is always statistically indistinguishable from X” no matter what you use does not make it so. For instance, just take the unadjusted trend from all continuously active USHCN stations. So no, sir, it is not always X.

      You will not focus on one station as I hoped. Let us concentrate on one nation: http://stevengoddard.wordpress.com/data-tampering-at-ushcngiss/

      X appears to have a lot of variables.

      • reposting for clarity an unedited post.

        Steve M says “It’s getting warmer.”
        =======================
        Since when? From the last three warm periods prior to the current, we are likely cooling.

        How much? For instance…
        Why have both satellite records diverged so greatly (1998 compared to 2014)
        from what the surface record shows? http://stevengoddard.wordpress.com/2014/08/08/noaa-fraud-of-the-day/

        Is UHI accounted for properly? (A very debatable subject.)

        BTW, saying the trend “is always statistically indistinguishable from X” no matter what you use does not make it so. For instance, just take the unadjusted trend from all continuously active USHCN stations.

        So no, sir, X is not always X.

        You will not focus on one station as I hoped. Let us concentrate on one nation: http://stevengoddard.wordpress.com/data-tampering-at-ushcngiss/

        X appears to have a lot of variables.

    • Steven Mosher @ November 3, 2014 at 9:34 am says:

      “One thing doesnt change: It’s getting warmer.”

      No, it isn’t. Even in your “adjusted” surface station “data”, “global warming” has stopped. During the past ~70 years of rising CO2, GASTA, if such a monstrosity can actually be measured, first trended down for about 30 years, then up for around 20, and now has been at best flat for going on 20. No correlation between GASTA and CO2 there.

      In the past 300 years, there have been intervals of higher and longer lasting multidecadal warming than during the late 20th century, without rising CO2.

      Your project is a hoax perpetrated by government-funded scammers, which has cost the world dearly in life and treasure.

    • Mosher: “The global average is a global average.”

      And meaningless, except in some abstract, mathematical sense.

      Mosher: “One thing doesnt [sic] change: It’s getting warmer.”

      Some places have gotten warmer compared to some arbitrary baseline. Others have cooled, others have remained relatively static. You can’t average them together and come up with anything physically meaningful. The rest of your post is moot.

  17. We keep on seeing reports about all the temperature data fiddling and we all complain about it but don’t we need to find ways to DO something about it? Could some of the many experts who follow this subject put together and organize all the evidence into a complete and understandable report (indictment)? This would serve as a basis for action — like providing evidence that would provoke our representatives to act, and/or possibly even permit pursuing legal means to correct the problem.

    • With any fraud, intent has to be established. Intent is the bugaboo of all fraud prosecutions. Unless you are the government and can wiretap or otherwise secretly collect communications, or get wired mics on cooperative witnesses, it can all be chalked up ex post facto to incompetence or honest mistakes.

      Barack Obama and the IRS targeting obfuscations are the perfect example of what, to an objective observer, would prima facie appear to be probable intent to conceal the true nature of what the IRS did. But without hard internal communications, incompetence is all we are left with on the IRS’s part, due to standards of doubt. And Lois Lerner knows this.

      So it is with NCDC, GISS, and NASA insiders and their temperature data sets and reporting. As long as they are the government and have a protective DoJ, they can get away with anything and claim unintentional data problems, coding errors, insufficient funding, or just general incompetence when gross errors get discovered. Without access to internal communications, any claims of fraud cannot be proven.

      As an aside, that is why the ClimateGate emails were so damning. Those communications showed clear intent to deceive by a set of perpetrators.

      To find intent to deceive on the part of GISS or NCDC would similarly require (1) an insider whistleblower coming forward WITH HARD DATA and evidence (emails, recordings from data handling meetings), or (2) the DoJ acting on a tip, say from an Inspector General complaint, to collect internal GISS and NCDC communications. Of course, the latter will never happen in an Obama administration, as whoever he has as AG will not investigate or allow an investigation to be done in any honest manner. The IRS scandal/cover-up and the Fast and Furious cover-up both proved that beyond a doubt.

  18. Mosher sez: “One thing doesn’t change: It’s getting warmer.”

    Where?
    Anyone can go to Wolfram Alpha and enter something like this, below:
    “average temperature past 80 years rio de janeiro”
    -Put in whatever town you want.

    -and it will take a few moments, then give you the avg temp history in a graph.
    You may have to revise your wording to shorten the time span, if data are not available.

    If the globe is warming, we should find hockey sticks all over the place.
    I don’t. I see flat lines. Every now and then I see a temp record trending up a bit. Or one trending down.

    And something is supposed to be wrong with me psychologically since I am not gob-smacked by this obvious run-away temp rise.

    • Thanks for that, TLD. Useful resource.
      Minor amendment to Mosher’s comment: “One thing doesn’t change: we will go on claiming it’s getting warmer.” Works with the impressionable, and with those who would not consider independently investigating the claim.

  19. Ok, now extend this weather / climate data collection and processing incompetence debacle to all the other government-bureaucracy-run compulsory data acquisition and collection agencies, particularly the surveillance agencies. You might then get an inkling of the probability of some very serious, totally false, and malicious accusations, along with consequent severe but illegal legal strictures (illegal because based on false or manipulated information and data) against both individuals and groups. These will arise from both the incompetence and the deliberate bias, abuse, and misuse, by the administrators, of the immense amounts of information being collected on every individual.

     The abuse and deliberate misuse of personal data already appears to be starting to occur with the collection of individuals’ data by Google and all the other self-appointed trans-national personal data collectors, who place themselves above the moral law and, apparently, well above the legal law.

  20. On second thought, just maybe the surveillance and personal data acquisition agencies have put in place quite strong checks and balances and strong oversight groups to keep a tight rein on the activities and accuracy of the actual personal data collection sections of their organisations.
     If so, that would highlight the contrast with climate science, where utter incompetence, error-riddled data, and a complete lack of credible data collection and processing standards have been allowed to become the norm in what has become just another branch of the hubris-laden, self-promoting, advocacy-driven climate alarmist science.

    And on the entire basis of this error-riddled science, the world has expended close to a trillion dollars over the last decade in a totally futile attempt to stop or prevent the chimera of a mankind-created catastrophic warming due to anthropogenic CO2, a CO2-induced warming for which no evidence or proof has been provided that it actually exists in the real-world climate.
    The one increasingly recognised fact is that the data behind all the claims of increasing global temperatures rely totally on incomplete, corrupted, constantly changing, often irrelevant, unchecked, unverified, and perhaps inadvertently or even deliberately corrupted processing of data from organisations run by global warming activist scientists.

  21. “the answer is always statistically indistinguishable from X.” Just what is X, an imaginary number? Do I need 1 station or 100 or 200 to make an X? If I take all cities, is that the same X as the X from all rural stations? When I have an X, is it the X for the Midwest, the Arctic, or the whole world? If I run the same formula on the data tomorrow, will last year’s X be the same as yesterday’s last-year’s X? If not, can we say X is not accurate but should be getting better all the time? How can we know? Good times…

    • The point about the X is perfectly legitimate. If there is an overall, predominant trend, you will see it in most any sample you grab.

      This is the same concept I used when I posted the Wolfram Alpha strategy for checking the long-term temperature trend at any location you might want. With various specific records going back to 40, 60, 80, 100 years, nearly all sites show flat temp trends.
      http://www.wolframalpha.com/
      Enter “average temperature Istanbul [or Constantinople] past 80 years.”
      Nary a Hockey Stick anywhere.
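      The reply’s point about X can be illustrated with synthetic data: if many stations really do share one underlying trend, the trend fitted from almost any random subsample of them lands close to it. A minimal sketch (the trend value, noise level, and station counts here are illustrative assumptions, not GHCN figures):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(100)          # 100 years of annual data
true_trend = 0.02               # deg C per year, assumed for illustration
n_stations = 200

# Each synthetic station shares the common trend plus independent noise.
stations = true_trend * years + rng.normal(0, 0.5, (n_stations, len(years)))

def sample_trend(k):
    """Least-squares trend of the mean of k randomly chosen stations."""
    idx = rng.choice(n_stations, size=k, replace=False)
    mean_series = stations[idx].mean(axis=0)
    slope, _intercept = np.polyfit(years, mean_series, 1)
    return slope

# Fifty random 20-station subsamples all recover nearly the same slope.
estimates = [sample_trend(20) for _ in range(50)]
print(round(float(np.mean(estimates)), 3))
```

      Conversely, if subsamples give wildly different slopes, that itself is evidence there is no single predominant trend to recover.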

  22. Reblogged this on Centinel2012 and commented:
    This is a simple case of the Fox guarding the hen house.

    The agendas of the politicians are supported by the agencies that they manage. Would anyone in business, or anywhere else you were employed, ever turn in a report that did not support the manager or owner of that business? I think not!

    So honest reporting from an agency of the government, showing that things are not what the president wants shown, is very, very unlikely!

  23. My grandparents said the same thing in the 30’s. They said it was getting hotter. And then it got colder. Only the history challenged take today’s weather and think humans are to blame for this current weather pattern variation. Ground stations were NEVER meant to be exacting. They are ballpark sensors. They can tell us to wear a snow suit, not a bikini. But they can’t tell us that the temperature is .3 degrees colder or warmer than last year. And people who think sensors can do that must not have enough important sh** to do during daylight hours.

    • Well said. Whenever someone comes up with claims of precision exceeding the accuracy of their instrumentation, I always think of the old movie The Music Man, and my first question is: what BS are you trying to sell me?
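      The instrumentation point can be made concrete: averaging many readings shrinks random error roughly as 1/sqrt(n), but a systematic bias shared by all the readings never averages out, so claims of fine resolution from coarse sensors hinge on the bias, not the sample size. A minimal sketch with assumed (illustrative) noise and bias values:

```python
import numpy as np

rng = np.random.default_rng(1)
true_temp = 15.0         # deg C, the quantity we wish we knew
read_noise = 0.5         # assumed random read error per observation, deg C
shared_bias = 0.3        # assumed systematic bias common to all readings

readings = true_temp + shared_bias + rng.normal(0, read_noise, 10_000)

# Random error of the mean shrinks with more readings...
random_error = readings.std() / np.sqrt(len(readings))
# ...but the error of the mean itself stays near the shared bias.
total_error = readings.mean() - true_temp

print(round(float(random_error), 3), round(float(total_error), 2))
```

      With 10,000 readings the random error of the mean is only about 0.005 °C, yet the mean is still off by roughly the full 0.3 °C bias.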
