Climategate: hide the decline – codified

WUWT blogging ally Ecotretas writes in to say that he has compiled a compendium of programming code segments whose comments suggest places where data may be corrected, modified, adjusted, or busted. Some of the HARRY_READ_ME comments are quite revealing. For those who don’t understand computer programming, don’t fret: the programmer’s comments tell the story quite well, even if the code itself makes no sense to you.

http://codyssey.files.wordpress.com/2009/02/software_bug.jpg

To say that the CRU code might be “buggy” would be…well I’ll just let CRU’s programmer tell you in his own words.

  • FOIA\documents\osborn-tree6\mann\oldprog\maps12.pro
    FOIA\documents\osborn-tree6\mann\oldprog\maps15.pro
    FOIA\documents\osborn-tree6\mann\oldprog\maps24.pro
    ; Plots 24 yearly maps of calibrated (PCR-infilled or not) MXD reconstructions

    ; of growing season temperatures. Uses "corrected" MXD - but shouldn't usually

    ; plot past 1960 because these will be artificially adjusted to look closer to

    ; the real temperatures.

  • FOIA\documents\harris-tree\recon_esper.pro
    ; Computes regressions on full, high and low pass Esper et al. (2002) series,

    ; anomalies against full NH temperatures and other series.

    ; CALIBRATES IT AGAINST THE LAND-ONLY TEMPERATURES NORTH OF 20 N

    ;

    ; Specify period over which to compute the regressions (stop in 1960 to avoid

    ; the decline

  • FOIA\documents\harris-tree\calibrate_nhrecon.pro
    ;

    ; Specify period over which to compute the regressions (stop in 1960 to avoid

    ; the decline that affects tree-ring density records)

    ;

  • FOIA\documents\harris-tree\recon1.pro

    FOIA\documents\harris-tree\recon2.pro
    FOIA\documents\harris-tree\recon_jones.pro
    ;

    ; Specify period over which to compute the regressions (stop in 1940 to avoid

    ; the decline

    ;

  • FOIA\documents\HARRY_READ_ME.txt
    17. Inserted debug statements into anomdtb.f90, discovered that

    a sum-of-squared variable is becoming very, very negative! Key

    output from the debug statements:

    (..)

    forrtl: error (75): floating point exception

    IOT trap (core dumped)

    ..so the data value is unbfeasibly large, but why does the

    sum-of-squares parameter OpTotSq go negative?!!

  • FOIA\documents\HARRY_READ_ME.txt
    22. Right, time to stop pussyfooting around the niceties of Tim's labyrinthine software

    suites - let's have a go at producing CRU TS 3.0! since failing to do that will be the

    definitive failure of the entire project..

  • FOIA\documents\HARRY_READ_ME.txt
    getting seriously fed up with the state of the Australian data. so many new stations have been

    introduced, so many false references.. so many changes that aren't documented. Every time a

    cloud forms I'm presented with a bewildering selection of similar-sounding sites, some with

    references, some with WMO codes, and some with both. And if I look up the station metadata with

    one of the local references, chances are the WMO code will be wrong (another station will have

    it) and the lat/lon will be wrong too.

  • FOIA\documents\HARRY_READ_ME.txt
    I am very sorry to report that the rest of the databases seem to be in nearly as poor a state as

    Australia was. There are hundreds if not thousands of pairs of dummy stations, one with no WMO

    and one with, usually overlapping and with the same station name and very similar coordinates. I

    know it could be old and new stations, but why such large overlaps if that's the case? Aarrggghhh!

    There truly is no end in sight.

  • FOIA\documents\HARRY_READ_ME.txt
    28. With huge reluctance, I have dived into 'anomdtb' - and already I have

    that familiar Twilight Zone sensation.

  • FOIA\documents\HARRY_READ_ME.txt
    Wrote 'makedtr.for' to tackle the thorny problem of the tmin and tmax databases not

    being kept in step. Sounds familiar, if worrying. am I the first person to attempt

    to get the CRU databases in working order?!!

  • FOIA\documents\HARRY_READ_ME.txt
    Well, dtr2cld is not the world's most complicated program. Wheras cloudreg is, and I

    immediately found a mistake! Scanning forward to 1951 was done with a loop that, for

    completely unfathomable reasons, didn't include months! So we read 50 grids instead

    of 600!!! That may have had something to do with it. I also noticed, as I was correcting

    THAT, that I reopened the DTR and CLD data files when I should have been opening the

    bloody station files!!

  • FOIA\documents\HARRY_READ_ME.txt
    Back to the gridding. I am seriously worried that our flagship gridded data product is produced by

    Delaunay triangulation - apparently linear as well. As far as I can see, this renders the station

    counts totally meaningless. It also means that we cannot say exactly how the gridded data is arrived

    at from a statistical perspective - since we're using an off-the-shelf product that isn't documented

    sufficiently to say that. Why this wasn't coded up in Fortran I don't know - time pressures perhaps?

    Was too much effort expended on homogenisation, that there wasn't enough time to write a gridding

    procedure? Of course, it's too late for me to fix it too. Meh.

  • FOIA\documents\HARRY_READ_ME.txt
    Here, the expected 1990-2003 period is MISSING - so the correlations aren't so hot! Yet

    the WMO codes and station names /locations are identical (or close). What the hell is

    supposed to happen here? Oh yeah - there is no 'supposed', I can make it up. So I have :-)

  • FOIA\documents\HARRY_READ_ME.txt
    Well, it's been a real day of revelations, never mind the week. This morning I

    discovered that proper angular weighted interpolation was coded into the IDL

    routine, but that its use was discouraged because it was slow! Aaarrrgghh.

    There is even an option to tri-grid at 0.1 degree resolution and then 'rebin'

    to 720x360 - also deprecated! And now, just before midnight (so it counts!),

    having gone back to the tmin/tmax work, I've found that most if not all of the

    Australian bulletin stations have been unceremoniously dumped into the files

    without the briefest check for existing stations.

  • FOIA\documents\HARRY_READ_ME.txt
    As we can see, even I'm cocking it up! Though recoverably. DTR, TMN and TMX need to be written as (i7.7).

  • FOIA\documents\HARRY_READ_ME.txt
    OH FUCK THIS. It's Sunday evening, I've worked all weekend, and just when I thought it was done I'm

    hitting yet another problem that's based on the hopeless state of our databases. There is no uniform

    data integrity, it's just a catalogue of issues that continues to grow as they're found.

  • FOIA\documents\osborn-tree6\mann\mxdgrid2ascii.pro
    printf,1,'Osborn et al. (2004) gridded reconstruction of warm-season'

    printf,1,'(April-September) temperature anomalies (from the 1961-1990 mean).'

    printf,1,'Reconstruction is based on tree-ring density records.'

    printf,1

    printf,1,'NOTE: recent decline in tree-ring density has been ARTIFICIALLY'

    printf,1,'REMOVED to facilitate calibration. THEREFORE, post-1960 values'

    printf,1,'will be much closer to observed temperatures then they should be,'

    printf,1,'which will incorrectly imply the reconstruction is more skilful'

    printf,1,'than it actually is. See Osborn et al. (2004).'

  • FOIA\documents\osborn-tree6\summer_modes\data4sweden.pro
    printf,1,'IMPORTANT NOTE:'

    printf,1,'The data after 1960 should not be used. The tree-ring density'

    printf,1,'records tend to show a decline after 1960 relative to the summer'

    printf,1,'temperature in many high-latitude locations. In this data set'

    printf,1,'this "decline" has been artificially removed in an ad-hoc way, and'

    printf,1,'this means that data after 1960 no longer represent tree-ring'

    printf,1,'density variations, but have been modified to look more like the'

    printf,1,'observed temperatures.'

  • FOIA\documents\osborn-tree6\combined_wavelet_col.pro
    ;

    ; Remove missing data from start & end (end in 1960 due to decline)

    ;

    kl=where((yrmxd ge 1402) and (yrmxd le 1960),n)

    sst=prednh(kl)

  • FOIA\documents\osborn-tree6\mann\mxd_pcr_localtemp.pro
    ; Tries to reconstruct Apr-Sep temperatures, on a box-by-box basis, from the

    ; EOFs of the MXD data set. This is PCR, although PCs are used as predictors

    ; but not as predictands. This PCR-infilling must be done for a number of

    ; periods, with different EOFs for each period (due to different spatial

    ; coverage). *BUT* don't do special PCR for the modern period (post-1976),

    ; since they won't be used due to the decline/correction problem.

    ; Certain boxes that appear to reconstruct well are "manually" removed because

    ; they are isolated and away from any trees.

  • FOIA\documents\osborn-tree6\briffa_sep98_d.pro
    ;mknormal,yyy,timey,refperiod=[1881,1940]

    ;

    ; Apply a VERY ARTIFICAL correction for decline!!

    ;

    yrloc=[1400,findgen(19)*5.+1904]

    valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$

    2.6,2.6,2.6]*0.75 ; fudge factor

    (...)

    ;

    ; APPLY ARTIFICIAL CORRECTION

    ;

    yearlyadj=interpol(valadj,yrloc,x)

    densall=densall+yearlyadj

  • FOIA\documents\osborn-tree6\summer_modes\pl_decline.pro
    ;

    ; Plots density 'decline' as a time series of the difference between

    ; temperature and density averaged over the region north of 50N,

    ; and an associated pattern in the difference field.

    ; The difference data set is computed using only boxes and years with

    ; both temperature and density in them - i.e., the grid changes in time.

    ; The pattern is computed by correlating and regressing the *filtered*

    ; time series against the unfiltered (or filtered) difference data set.

    ;

    ;*** MUST ALTER FUNCT_DECLINE.PRO TO MATCH THE COORDINATES OF THE

    ; START OF THE DECLINE *** ALTER THIS EVERY TIME YOU CHANGE ANYTHING ***

  • FOIA\documents\osborn-tree6\mann\oldprog\maps12.pro
    ;

    ; Plots 24 yearly maps of calibrated (PCR-infilled or not) MXD reconstructions

    ; of growing season temperatures. Uses "corrected" MXD - but shouldn't usually

    ; plot past 1960 because these will be artificially adjusted to look closer to

    ; the real temperatures.

    ;

  • FOIA\documents\osborn-tree6\mann\oldprog\calibrate_correctmxd.pro
    ; We have previously (calibrate_mxd.pro) calibrated the high-pass filtered

    ; MXD over 1911-1990, applied the calibration to unfiltered MXD data (which

    ; gives a zero mean over 1881-1960) after extending the calibration to boxes

    ; without temperature data (pl_calibmxd1.pro). We have identified and

    ; artificially removed (i.e. corrected) the decline in this calibrated

    ; data set. We now recalibrate this corrected calibrated dataset against

    ; the unfiltered 1911-1990 temperature data, and apply the same calibration

    ; to the corrected and uncorrected calibrated MXD data.

  • FOIA\documents\osborn-tree6\summer_modes\calibrate_correctmxd.pro
    ; No need to verify the correct and uncorrected versions, since these

    ; should be identical prior to 1920 or 1930 or whenever the decline

    ; was corrected onwards from.

  • FOIA\documents\osborn-tree5\densplus188119602netcdf.pro
    ; we know the file starts at yr 440, but we want nothing till 1400, so we

    ; can skill lines (1400-440)/10 + 1 header line

    ; we now want all lines (10 yr per line) from 1400 to 1980, which is

    ; (1980-1400)/10 + 1 lines

    (...)

    ; we know the file starts at yr 1070, but we want nothing till 1400, so we

    ; can skill lines (1400-1070)/10 + 1 header line

    ; we now want all lines (10 yr per line) from 1400 to 1991, which is

    ; (1990-1400)/10 + 1 lines (since 1991 is on line beginning 1990)
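
The briffa_sep98_d.pro excerpt above is concrete enough to reproduce. Below is a minimal Python sketch of what that fragment computes, with NumPy's `np.interp` standing in for IDL's `interpol` and pure noise as the input series, to make the point that the adjustment depends only on the year, not on the data. This is an illustration of the quoted lines, not a run of the actual CRU program.

```python
import numpy as np

# The adjustment table quoted from briffa_sep98_d.pro: 20 knots at
# year 1400 and then 1904-1994 in 5-year steps, scaled by 0.75.
yrloc = np.concatenate(([1400.0], 1904.0 + 5.0 * np.arange(19)))
valadj = np.array([0., 0., 0., 0., 0., -0.1, -0.25, -0.3, 0., -0.1,
                   0.3, 0.8, 1.2, 1.7, 2.5, 2.6, 2.6, 2.6, 2.6, 2.6]) * 0.75

years = np.arange(1400, 1995)
rng = np.random.default_rng(42)
noise = rng.normal(size=years.size)      # a "totally random" input series

# interpol(valadj, yrloc, x) is linear interpolation; np.interp is equivalent.
yearlyadj = np.interp(years, yrloc, valadj)
adjusted = noise + yearlyadj             # densall = densall + yearlyadj

# The late-20th-century uplift is built in, regardless of the input:
print(yearlyadj[years >= 1970].mean())
```

Whatever series goes in, every value from 1974 onward is shifted up by 2.6 × 0.75 = 1.95 units, purely as a function of the year.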
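
On the OpTotSq mystery in HARRY's item 17: a sum of squares cannot go negative in exact arithmetic, but it can once a rogue, unfeasibly large data value is squared in fixed-width arithmetic, which silently wraps around. A sketch of that generic failure mode in Python/NumPy follows; this is not the CRU Fortran, just an illustration of the mechanism.

```python
import numpy as np

# Squares accumulate correctly for sane values...
good = np.array([10, 12, 9, 11], dtype=np.int32)
ok_sum = int(np.sum(good * good))        # 100 + 144 + 81 + 121 = 446

# ...but one unfeasibly large value wraps 32-bit arithmetic negative:
rogue = np.array([10, 12, 60000, 11], dtype=np.int32)
bad_sum = int(np.sum(rogue * rogue))     # 60000**2 = 3.6e9 > 2**31 - 1

print(ok_sum, bad_sum)
```

The same symptom appears in Fortran when an INTEGER accumulator overflows, or when a garbage sentinel value (e.g. -9999 mis-scaled) slips into the data unchecked.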
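
The mxd_pcr_localtemp.pro header describes principal component regression (PCR): the predictand is regressed on the leading PCs of the predictor field rather than on the raw predictors. A generic textbook sketch of PCR is below; it is not the CRU routine, and all names and the toy data are illustrative.

```python
import numpy as np

def pcr_fit_predict(X, y, n_pc):
    """Regress y on the leading n_pc principal components of X,
    then map the coefficients back to predictor space."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_pc] * S[:n_pc]            # PC scores as predictors
    beta_pc, *_ = np.linalg.lstsq(scores, yc, rcond=None)
    beta = Vt[:n_pc].T @ beta_pc               # back to X-space
    return Xc @ beta + y.mean()

# Toy data with one dominant mode, standing in for a gridded density field:
rng = np.random.default_rng(0)
mode = rng.normal(size=200)                            # shared signal
X = np.outer(mode, rng.normal(size=10)) + 0.1 * rng.normal(size=(200, 10))
y = 3.0 * mode + 0.1 * rng.normal(size=200)            # "temperature"

yhat = pcr_fit_predict(X, y, n_pc=1)
```

Truncating to a few PCs regularises the regression, which is the usual rationale; the quoted comment's twist is that separate PCR fits are needed per period because the spatial coverage changes.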





445 Comments
George E. Smith
November 25, 2009 9:28 am

Totally amazing. I’m no programming expert; but I do routinely construct Optical system models for ray trace simulations; and even they require detailed comments on every element in the system, or else even months later I cannot keep track of why the hell I included some element. Now this is orders of magnitude more straightforward than actually writing the code; but I certainly understand the concept that the comments better explain in plain English (and evidently include also some Australian terminology) so anyone can tell what it is supposed to do.
Well I have always believed that the failure to observe Nyquist in sampling strategies was at the heart of this climate data inadequacy, and reading about all their ad hoc inclusions and exclusions of data sites clearly shows what total BS they are cooking up.
And if I were paying the bills for that stuff, I would really like my money back, and also see some jail (gaol) time included in the correction process for the varmints purveying this rubbish as science data gathering.

Editor
November 25, 2009 9:29 am

In-bloody-credible! I’ve had to rework crappy code on occasion and discovered that you “can’t get there from here”. I sympathize with Harry. I can’t sympathize with the people who have foisted this garbage on us as “science”.

November 25, 2009 9:29 am

And yet the lap-dog press will continue to embargo the controversy, Obama will “get one more step closer to climate change legislation”, and the scientists (sic) will offer lame but “scientific” excuses.
Maybe the way to nail the bastards will be the old fashioned way: mail fraud and IRS transgressions. I would imagine there have been snail-mail crimes of some sort, and I’m positive such large amounts of money led to some malfeasance on Mann’s and others’ part. This type of personality would never be clean in their personal lives while perpetrating such monumental fraud in academia.

November 25, 2009 9:30 am

I spent a decade or so as a professional programmer. Let’s just say that the excerpts revealed above would not have only gotten me fired, they would have gotten me blackballed and unable to work in the industry forever. And rightfully so.

November 25, 2009 9:31 am

Was the data that the code uses also released?
With the data, one could remove the fudge factors from the code and see what really comes out.
The tone of the comments do raise the question as to whether their author helped release the information.

chainpin
November 25, 2009 9:31 am

Data, I don’t need no stinking data!

Infowarrior
November 25, 2009 9:31 am

Having worked as a research assistant at Columbia University, the only thing that surprises me is the childish language and the tone of all the emails. Scholarship is corrupt to the core, and that’s why I decided never to get a PhD.
The number one corruption is that no thesis is approved for research unless the research passes a political litmus test. Translation: Unless you already have a conclusion in mind that does not seriously detract from that of your colleagues, you have no hope of being published.
Now, I am published (only one paper), but it is in the field of Philosophical History, it did not require any grant money, and it passed the political litmus test among Orientalists detracting from those whose background is Occidental. Translation: the paper is perceived as “progressive” so it is good to go.
Of course it’s a bad idea to base policy upon modern scholarship in highly controversial areas. Unless proposed research is designed to meet, substantiate, or reflect progressive ideals, you haven’t got a chance.
Mainstream scholarship in recent history has got some very obvious things dead wrong because of progressive politics. Just as we heard all the lies, from imminent ozone disaster to racial integration increasing property value (believe what you will about the social merit of integration, the economic is quite obvious), they are wrong again about yet another one.
However, their arrogance is astonishing in the email and programming notes. It should be no surprise they fudge evidence and conclusions–that’s 95% of scholarship, where position is more important than the truth. The surprise is that they all seem to be knowingly complicit in deception and not care. Everyone whom I worked with believed their crap!

Obvious explanation
November 25, 2009 9:31 am

Context:
“ Around 1996, I became aware of how corrupt and ideologically driven current climate research can be. A major researcher working in the area of climate change confided in me that the factual record needed to be altered so that people would become alarmed over global warming. He said, “We have to get rid of the Medieval Warm Period.”
Dr. David Deming (University of Oklahoma)

J.Hansford
November 25, 2009 9:33 am

Oh, good lord!…. and this mob was given a grant of 13 million pounds of British taxpayers’ money to do climate science?
…. I think the only thing “done” here was the British taxpayer. They were well and truly “done” over. :-(

November 25, 2009 9:33 am

Funny in that of all the text emails in the Zip file the night this got released, this is the file I read first. And kept reading. The comments looked to be pretty damning, but I wasn’t sure which “official” curve this program contributed to. If this was just a program for some obscure and not relied upon graph, it meant little. If this was something actively promoted by HadCRU then it’s a big deal.

SidViscous
November 25, 2009 9:33 am

“Somebody should Slashdot this, it is exactly the sort of thing the programming geeks there will understand and appreciate. They may also be able to provide valuable insight.”
I’ve been watching Slashdot for this story to come up, obviously this is just the sort of thing that SHOULD show up there.
It won’t. Slashdot is filled with pro-AGW types, and controlled by them. I wouldn’t be surprised if it never shows up there at all. Even the “science” section is glaringly silent on the issue.
Yes they are the type that would understand this, but for many it would mean exposing their preconceptions to themselves, and that won’t happen.
There is a reason why people will defend the crappy deal they got from a used car salesman.

jonk
November 25, 2009 9:34 am

Another paragraph from the Harry_Read_Me that inspires confidence:
“You can’t imagine what this has cost me – to actually allow the operator to assign false
WMO codes!! But what else is there in such situations? Especially when dealing with a ‘Master’
database of dubious provenance (which, er, they all are and always will be).
False codes will be obtained by multiplying the legitimate code (5 digits) by 100, then adding
1 at a time until a number is found with no matches in the database. THIS IS NOT PERFECT but as
there is no central repository for WMO codes – especially made-up ones – we’ll have to chance
duplicating one that’s present in one of the other databases. In any case, anyone comparing WMO
codes between databases – something I’ve studiously avoided doing except for tmin/tmax where I
had to – will be treating the false codes with suspicion anyway. Hopefully.”
This is like the train wreck that you don’t want to watch, but you can’t look away from.
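
The false-code scheme Harry describes is simple to state as an algorithm: multiply the legitimate 5-digit code by 100, then increment by 1 until the result collides with nothing in the database. A Python sketch of that scheme as quoted follows; the function name and sample codes are made up for illustration, and this is not CRU's actual implementation.

```python
def false_wmo_code(real_code, existing):
    """Generate a synthetic station code per the scheme quoted above:
    legitimate 5-digit WMO code * 100, then add 1 at a time until a
    number is found with no matches in `existing` (codes already used)."""
    candidate = real_code * 100
    while candidate in existing:
        candidate += 1
    return candidate

existing = {9461200, 9461201, 1234500}
print(false_wmo_code(94612, existing))  # -> 9461202, the first free slot
```

As Harry notes, nothing prevents a generated code from colliding with a real code in some other database, since there is no central registry of the made-up ones.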

Reed Coray
November 25, 2009 9:35 am

BOTO (09:17:11) :
Hi Anthony,
last dinner at Copenhagen!
I love it!
http://i47.tinypic.com/mrxszt.png

Great picture, but I have a question. Which one is Judas?

Robinson
November 25, 2009 9:35 am

What would it take to start over, and make a completely independent temperature record?

Judging by the state of the databases with a seemingly unmanaged history and no “method” in use to ensure integrity, it would be literally impossible to make sense of it all in a second pass. I now understand why they resist the release of code and data so strongly. It isn’t that they can’t, it’s that they’re just plain embarrassed by the hideous mess that would be exposed. I said “would”, I mean “has been”.

Michael
November 25, 2009 9:35 am

I know why Warmists deny global warming is not man-made. They are living in a Hollywood dream world and they can’t wake up.

Chris
November 25, 2009 9:36 am

My question is how the HARRY_READ_ME file fits into the greater scope of CRU’s data analysis. It’s not clear to me whether this was some side project for the programmer to analyze a problematic data subset or if this represents his frustrations with the main CRU data set. There is certainly a story here; I just don’t know if there are more chapters in this novel.

Hosco
November 25, 2009 9:36 am

If this isn’t enough to get MainStreamMedia to actually do their work – then I’ve lost any hope for humanity!

November 25, 2009 9:37 am

You know, I have asked at realclimate.org how they did software quality control. The post was censored, of course.
My guess is that they would not be able to verify their data/models/implementation, so the operational verification would be something along the lines of:
“This output looks funny; this can’t be right. Let’s look at the software and fix it.”
Even without malice, in complicated software this will lead to software that confirms what the scientist expects, because then it won’t look ‘funny’.
I wrote something along those lines at RC, and apparently it hit very close to the mark, because it was censored.

BOTO
November 25, 2009 9:37 am

Reed Coray (09:35:23) :
As far as I know, the one at the very right (your president…)

November 25, 2009 9:38 am

“22. Right, time to stop pussyfooting around the niceties of Tim’s labyrinthine software
suites – let’s have a go at producing CRU TS 3.0! since failing to do that will be the
definitive failure of the entire project…”
That will be Tim Osborne, at a guess, and Harry seems to think this manipulation of the data is vital for the ‘project’. The smoking gun is in the data, which is why they won’t give it up without a fight.

Mike
November 25, 2009 9:38 am

One should just run this code on some totally random data series and plot the output.
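One way to do what Mike suggests is to feed the routine under test a batch of autocorrelated random series (red noise) and see whether it still finds "signal". The generator below is concrete; `reconstruct` is a hypothetical placeholder for whatever calibration/reconstruction routine is being tested.

```python
import random

def red_noise(n: int, rho: float = 0.5, seed: int = 42) -> list[float]:
    """AR(1) red-noise series: x[t] = rho * x[t-1] + Gaussian white noise.
    A fixed seed makes the series reproducible."""
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n - 1):
        x.append(rho * x[-1] + rng.gauss(0.0, 1.0))
    return x

# Ten independent random "proxy" series, 100 points each.
series = [red_noise(100, seed=s) for s in range(10)]
# result = reconstruct(series)  # plot it: a method that recovers a trend here is suspect
```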

jamespapsdorf
November 25, 2009 9:40 am

Limbaugh tearing into Obama on the AGW Hoax !!!!
RUSH: Now we go over to the Universe of Lies. This afternoon, President Obama press conference with the Indian prime minister. Here’s a portion of Obama’s opening remarks.
OBAMA: We’ve made progress in confronting climate change. I commended the prime minister for India’s leadership in areas like green buildings and energy efficiency, and we agreed to a series of important new efforts: A clean energy initiative that will create jobs and improve people’s access to cleaner, more affordable energy; a green partnership to reduce poverty through sustainable and equitable development and an historic effort to phase out subsidies for fossil fuels. With just two weeks until the beginning of Copenhagen, it’s also essential that all countries do what is necessary to reach a strong operational agreement that will confront the threat of climate change while serving as a stepping-stone to a legally binding treaty.
RUSH: The president of the United States has just said in an internationally televised press conference that he is going to continue seeking a resolution to a problem that doesn’t exist. There is no man-made global warming. But it doesn’t matter because he exists in the Universe of Lies. And man-made global warming is only a means to an end to him. It is simply a way to once again chip away at the size of this country and the wealth of this country and to make sure that he is able to enact legislation that will allow him to raise everyone’s taxes so that he can begin even more redistribution of wealth. It’s just a mechanism like all liberal ideas are. They are dressed up in flowery, compassionate language to hide the deceit — the insidiousness — of their real intentions. This is mind-boggling, mind-boggling. It’s a hoax! It has been totally made up! It’s been known since Thursday. This is Tuesday.
The president of the United States, in an internationally televised press conference, says, “We gotta move further and we gotta get closer to Copenhagen with a working agreement. We gotta confront climate change.” There isn’t any! The whole concept of “climate change” is a fraud because climate “changes” constantly, and we’re not responsible for it. We don’t have that kind of power. We can’t do it! If somebody says, “Make it warmer tomorrow,” you can’t. If somebody says, “Make it colder tomorrow,” you can’t. If somebody says, “Make it rain tomorrow,” you can’t — and yet to listen to these people, we’re doing all of that and we’re doing it so fast that we are going to destroy our climate. It’s a hoax! It’s a fraud! There is no climate change. There is no global warming. There never has been any man-made global warming. How else can it be said?…”
http://www.rushlimbaugh.com/home/daily/site_112409/content/01125109.guest.html

M.A.DeLuca
November 25, 2009 9:40 am

If a physicist were to submit a paper without showing the math, that paper would (I assume) be rightly ridiculed and sent back with a “show your work” rebuke. It doesn’t seem right that one can hide one’s work in software, and then casually dismiss the absence of documented code upon submitting a paper as these yahoos have done. And yet, that seems exactly the way mainstream climatology works. Do any other sciences permit one to hide calculations in a program and then not publish said program with the paper?

Eric
November 25, 2009 9:40 am

This is just astonishing, and it’s evolving into a serious scandal.
I don’t think most of the public even realize that the computer code upon which these “models” are based isn’t revealed to other scientists for checks/verification/improvement. What’s UEA CRU’s basis for the validity of its predictions? “Trust us, we’re a really nice bunch of English climatologists”???

November 25, 2009 9:41 am

Is this real??? Hot stuff!!!