The Smoking Code, part 2

Climategate Code Analysis Part 2

There are four common issues that have been raised in response to my previous post that I would like to address concerning the CRU’s source code.

If you take only one thing from this post, please take this: I am only making a statement about the research methods of the CRU and trying to show proof that they had the means and intent to falsify data. And, until the CRU’s research results can be verified by a 3rd party, they cannot be trusted.

Here are the four most frequent concerns dealing with the CRU’s source code:

  1. The source code that actually printed the graph was commented out and, therefore, is not valid proof.
  2. Interpolation is a normal part of dealing with large data sets; this is no different.
  3. No proof exists that shows this code was used in publishing results.
  4. You need the raw climate data to prove that foul play occurred.

If anyone can think of something I missed, please let me know.

The source code that actually printed the graph was commented out and, therefore, is not valid proof.

Had I done a better job with my source analysis, I would have found that a later revision of the briffa_sep98_d.pro source file (linked in my previous post), contained in a different working tree, shows the fudge-factor array playing a direct role in the (uncommented) plotting of the data.

Snippet from: harris-tree/briffa_sep98_e.pro (see the end of the post for the full source listing)

;
; APPLY ARTIFICIAL CORRECTION
;
yearlyadj=interpol(valadj,yrloc,x)
densall=densall+yearlyadj
;
; Now plot them
;
filter_cru,20,tsin=densall,tslow=tslow,/nan
cpl_barts,x,densall,title='Age-banded MXD from all sites',$
  xrange=[1399.5,1994.5],xtitle='Year',/xstyle,$
  zeroline=tslow,yrange=[-7,3]
oplot,x,tslow,thick=3
oplot,!x.crange,[0.,0.],linestyle=1
;

Now, we can finally put this concern to rest.

Interpolation is a normal part of dealing with large data sets; this is no different.

This is partially true: the issue doesn’t lie in the fact that the CRU researchers used interpolation. The issue is the weight of the valadj array with respect to the raw data. valadj simply introduces too large an influence on the original data to do anything productive with it.

Here is the graph I plotted of the valadj array. When we’re talking about trying to interpret temperature data that changes on the scale of tenths of a degree over long periods of time, “fudging” a value by 2.5 is going to have a significant impact on the data set.
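
To make the scale concrete, here is a minimal IDL sketch (using only the standard findgen and interpol routines) that rebuilds the adjustment series exactly as the full listing below constructs it, and reports its peak value; the yearly x grid is my own choice for illustration:

[sourcecode language="text"]
; Rebuild the adjustment series from the CRU listing and print its peak.
yrloc=[1400,findgen(19)*5.+1904]   ; 1400, then 1904..1994 in 5-year steps
valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
  2.6,2.6,2.6]*0.75                ; the "fudge factor", scaled by 0.75
x=findgen(593)+1400.               ; one sample per year, 1400-1992
yearlyadj=interpol(valadj,yrloc,x) ; linearly interpolate onto the years
print,'Peak adjustment: ',max(yearlyadj) ; ~1.95 (raw entries peak at 2.6)
plot,x,yearlyadj,xtitle='Year',ytitle='Adjustment'
end
[/sourcecode]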

No proof exists that shows this code was used in publishing results.

Correct! That’s why I have taken (and always will take) the following stand: enough proof exists that the CRU had both the means and the intent to falsify data. This means that none of their research results can be trusted until they are verified. Period.

The fact that the “fudge-factor” source code exists in the first place is reason enough for alarm. Hopefully they didn’t use fudged numbers in their published results, but the truth is, we just don’t know.

You need the raw climate data to prove that foul play occurred.

This assumes the raw data are valid, which I maintain they probably are. Several people question the validity of the climate data-gathering methods used by the different climate research institutions, but I am not enough of a climate expert to have an opinion one way or the other. Furthermore, it simply doesn’t matter whether the raw climate data are correct; either way, the extreme bias the valadj array forces on the data can be demonstrated.

So the raw data could actually be temperature data or corporate sales figures; the result is the same: a severe manipulation of the data.
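
To demonstrate that point, here is a minimal sketch (synthetic noise, for illustration only) that applies the same yearlyadj series to data containing no temperature signal at all; the late-century upturn appears anyway:

[sourcecode language="text"]
; Apply the CRU adjustment to pure noise: the input has no signal,
; yet the adjusted series acquires the late-century upturn.
yrloc=[1400,findgen(19)*5.+1904]
valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
  2.6,2.6,2.6]*0.75
x=findgen(593)+1400.
seed=42L                           ; fixed seed for reproducibility
noise=randomn(seed,593)*0.5        ; stand-in "raw data": zero-mean noise
adjusted=noise+interpol(valadj,yrloc,x)
plot,x,adjusted,xtitle='Year',ytitle='Adjusted noise'
end
[/sourcecode]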

Full Source Listing

As promised, here is the entire source listing for: harris-tree/briffa_sep98_e.pro

[sourcecode language="text"]

;
; PLOTS 'ALL' REGION MXD timeseries from age banded and from hugershoff
; standardised datasets.
; Reads Harry's regional timeseries and outputs the 1600-1992 portion
; with missing values set appropriately. Uses mxd, and just the
; "all band" timeseries
;****** APPLIES A VERY ARTIFICIAL CORRECTION FOR DECLINE*********
;
yrloc=[1400,findgen(19)*5.+1904]
valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
  2.6,2.6,2.6]*0.75 ; fudge factor
if n_elements(yrloc) ne n_elements(valadj) then message,'Oooops!'
;
loadct,39
def_1color,20,color='red'
plot,[0,1]
multi_plot,nrow=4,layout='large'
if !d.name eq 'X' then begin
  window, ysize=800
  !p.font=-1
endif else begin
  !p.font=0
  device,/helvetica,/bold,font_size=18
endelse
;
; Get regional tree lists and rbar
;
restore,filename='reglists.idlsave'
harryfn=['nwcan','wnam','cecan','nweur','sweur','nsib','csib','tib',$
  'esib','allsites']
;
rawdat=fltarr(4,2000)
for i = nreg-1 , nreg-1 do begin
  fn='mxd.'+harryfn(i)+'.pa.mean.dat'
  print,fn
  openr,1,fn
  readf,1,rawdat
  close,1
  ;
  densadj=reform(rawdat(2:3,*))
  ml=where(densadj eq -99.999,nmiss)
  densadj(ml)=!values.f_nan
  ;
  x=reform(rawdat(0,*))
  kl=where((x ge 1400) and (x le 1992))
  x=x(kl)
  densall=densadj(1,kl) ; all bands
  densadj=densadj(0,kl) ; 2-6 bands
  ;
  ; Now normalise w.r.t. 1881-1960
  ;
  mknormal,densadj,x,refperiod=[1881,1960],refmean=refmean,refsd=refsd
  mknormal,densall,x,refperiod=[1881,1960],refmean=refmean,refsd=refsd
  ;
  ; APPLY ARTIFICIAL CORRECTION
  ;
  yearlyadj=interpol(valadj,yrloc,x)
  densall=densall+yearlyadj
  ;
  ; Now plot them
  ;
  filter_cru,20,tsin=densall,tslow=tslow,/nan
  cpl_barts,x,densall,title='Age-banded MXD from all sites',$
    xrange=[1399.5,1994.5],xtitle='Year',/xstyle,$
    zeroline=tslow,yrange=[-7,3]
  oplot,x,tslow,thick=3
  oplot,!x.crange,[0.,0.],linestyle=1
  ;
endfor
;
; Restore the Hugershoff NHD1 (see Nature paper 2)
;
xband=x
restore,filename='../tree5/densadj_MEAN.idlsave'
; gets: x,densadj,n,neff
;
; Extract the post 1600 part
;
kl=where(x ge 1400)
x=x(kl)
densadj=densadj(kl)
;
; APPLY ARTIFICIAL CORRECTION
;
yearlyadj=interpol(valadj,yrloc,x)
densadj=densadj+yearlyadj
;
; Now plot it too
;
filter_cru,20,tsin=densadj,tslow=tshug,/nan
cpl_barts,x,densadj,title='Hugershoff-standardised MXD from all sites',$
  xrange=[1399.5,1994.5],xtitle='Year',/xstyle,$
  zeroline=tshug,yrange=[-7,3],bar_color=20
oplot,x,tshug,thick=3,color=20
oplot,!x.crange,[0.,0.],linestyle=1
;
; Now overplot their bidecadal components
;
plot,xband,tslow,$
  xrange=[1399.5,1994.5],xtitle='Year',/xstyle,$
  yrange=[-6,2],thick=3,title='Low-pass (20-yr) filtered comparison'
oplot,x,tshug,thick=3,color=20
oplot,!x.crange,[0.,0.],linestyle=1
;
; Now overplot their 50-yr components
;
filter_cru,50,tsin=densadj,tslow=tshug,/nan
filter_cru,50,tsin=densall,tslow=tslow,/nan
plot,xband,tslow,$
  xrange=[1399.5,1994.5],xtitle='Year',/xstyle,$
  yrange=[-6,2],thick=3,title='Low-pass (50-yr) filtered comparison'
oplot,x,tshug,thick=3,color=20
oplot,!x.crange,[0.,0.],linestyle=1
;
; Now compute the full, high and low pass correlations between the two
; series
;
perst=1400.
peren=1992.
;
openw,1,'corr_age2hug.out'
thalf=[10.,30.,50.,100.]
ntry=n_elements(thalf)
printf,1,'Correlations between timeseries'
printf,1,'Age-banded vs. Hugershoff-standardised'
printf,1,' Region Full <10 >10 >30 >50 >100'
;
kla=where((xband ge perst) and (xband le peren))
klh=where((x ge perst) and (x le peren))
ts1=densadj(klh)
ts2=densall(kla)
;
r1=correlate(ts1,ts2)
rall=fltarr(ntry)
for i = 0 , ntry-1 do begin
  filter_cru,thalf(i),tsin=ts1,tslow=tslow1,tshigh=tshi1,/nan
  filter_cru,thalf(i),tsin=ts2,tslow=tslow2,tshigh=tshi2,/nan
  if i eq 0 then r2=correlate(tshi1,tshi2)
  rall(i)=correlate(tslow1,tslow2)
endfor
;
printf,1,'ALL SITES',r1,r2,rall,$
  format='(A11,2X,6F6.2)'
;
printf,1,' '
printf,1,'Correlations carried out over the period ',perst,peren
;
close,1
;
end

[/sourcecode]

Comments
Roger Knights
December 6, 2009 9:15 pm

vulgarmorality (17:48:19) :
What emerges from the source code, data losses, emails, etc., is a sense that these climatologists believed they were playing a larger game than science: they were good shepherds, bringing us out of the dark. The same, with few qualifications, can be said of the media – which explains its spotty and off-center reporting of Climategate – and, for that matter, of many politicians and governments.
See “Climategate: The good shepherds”:
http://vulgarmorality.wordpress.com/2009/12/06/climategate-the-good-shepherds/

================
Here’s the full article, my favorite article ever on this whole business, which deserves a thread of its own because of its importance in derailing the blame-the-left meme here that Pam and others like savethesharks have objected to.
The blame falls on “the anointed” — the self-consciously “aware and concerned,” highly educated cognitive elite who resonate with one another across all sectors of society and buy into one another’s rationales, tactics, credibility, and value-priorities.
One of their main concerns is to avoid being consigned by their fellows in this group into the ranks of the “benighted” (the crudely selfish and ignorant), which is why accusations of being tainted by Big Oil or right-wing think tanks or Fox News or IDers or flat-earthers make such powerful and commonly used weapons by the groups’ mind-guards in keeping the rank and file in line.
IOW, in large part their motivations are partly idealistic, but also partly social and psychological, in that they want to be part of the leading edge of a high-status in-group, and also want to nourish and bask in the feeling of self-approbation that this reflected self-worth, and this perception of acting idealistically, gives them.
Well, I could go on like this for pages, but enough.
=================
Climategate: The Good Shepherds
http://vulgarmorality.wordpress.com/2009/12/06/climategate-the-good-shepherds/
Belief in conspiracy theories, let me suggest, is more a matter of personality than of evidence. Temperamentally, I’m a conspiracy skeptic. I doubt there are many people on earth who can be devilishly clever.
So when it comes to Climategate – the scandal triggered by the unauthorized release of thousands of emails and documents from East Anglia University’s Climatic Research Unit – I reject explanations that involve Machiavellian behavior. I can’t see the CRU as the hub of a global campaign to impose the political triumph of green policies.
But the real explanation may turn out to be more serious and dangerous, because it casts a far wider net. It involves the climate bureaucrats at CRU and their American allies at NASA, NOAA, and elsewhere, many agencies in many governments and international organizations, and the mainstream media virtually everywhere. In my opinion, these people didn’t conspire together. They just think alike.
They subscribe to a particular story about themselves and human society which is prevalent among highly educated people, and may well be the greatest threat to liberal democracy today. My name for the story is “rationalism”; Thomas Sowell called it the “unconstrained vision.” Some, including many who embrace it, associate this cluster of dogmas with the political left – but I believe it transcends such archaic labels.
I want to be clear about this. I hold that many climatologists, politicians, and journalists share a number of operating assumptions, which in effect allows them to coordinate their actions without resorting to conspiracies. That these assumptions are self-serving is undeniable, but beside the point here. They support the story of the elites as the good shepherds, and this in turn endows the believer with the moral authority for practically any action.
Here are the logical pillars for the story of the good shepherds:
A few of us are wise and good, but the average person is foolish and easily misled.
The only moral imperative is human development, and the only path to human development is power in the hands of the wise and good.
Information must be used by the wise and good, but withheld from the public to avoid panic and confusion.
Society is a tissue of outworn traditions and superstitions, and must be rationalized according to scientific principles.
Opposition to the wise and good can only come from selfish, corrupt forces and their dupes.
Evidence of these principles in action abounds in the Climategate affair, and would fill more space than I have in this post – the CRU documents alone are 160 MB. What follows is by necessity selective and illustrative, which is to say, partial and incomplete.
First, the climate scientists. We should think of them as scientist-bureaucrats, combining the analytic inclination of the former and the primal hunger for funding and prestige of the latter. Becoming saviors of the earth by using their educated brains must have been, from both perspectives, impossible to resist. Presidents and prime ministers were now their audience. Further, the names in the CRU documents comprise a surprisingly small group – maybe 50 persons, the power elite of climatology.
Their emails depict a world misled by false prophets, in sore need of guidance: “I trust that history will give us all proper credit for what we’re doing here.” As good shepherds, they sought to keep control of the IPCC process, which – as ferocious turf warriors – they intuited to be of supreme strategic importance. If, to control the IPCC, journal editors must be purged, or the peer review process corrupted – well, the moral imperative trumped such quibbles. Critics were unscientific barbarians, whom one wishes to pummel and in whose death one rejoices. They must be denied data at all costs.
The CRU group perpetrated fraud and abuses in perfectly good faith, out of concern for their flock.
The IPCC represented the commanding heights of their work. It too made news, and provided cover to politicians who advocated costly good shepherd policies and needed a global authority for this purpose. The 2007 IPCC report obliged with a “Summary for Policymakers” brimming with authoritative dictums – “There is high agreement and much evidence” recurs like a mantra – and making the leap to policy recommendations. (By contrast, in the full report the words “uncertain” and “uncertainty” appear “1,300 times in 900 pages.”)
The IPCC chair, Rajendra Pachauri, is nothing of a scientist but very much of a Torquemada, who responded thusly to criticism by a skeptical Bjorn Lomborg: “What is the difference between Lomborg’s views and Hitler’s?” Not surprisingly, Pachauri’s response to Climategate has focused on the “unfortunate” “illegal act” of divulging the CRU documents.
For politicians, global warming is like manna from heaven. Unlike wars, recessions, or hurricanes, the crisis will come, if at all, in the far future, long after they have retired. Yet it allows them to make messianic speeches, demand increased powers, and hammer their opponents without mercy or restraint. They can point to the IPCC reports and play the good shepherds free of political risk.
The role demands the use of unbridled language, as Mark Steyn amusingly demonstrates. These are elites talking to their foolish publics. They presume simple-minded exaggerations are all such people will understand. Critics are dismissed as illiterates – “flat earth” types, according to the UK’s Gordon Brown – or villains. They, the good shepherds, are wiser and nobler: thus Brown, Nicolas Sarkozy, and – possibly – President Obama transcend mere politics and assume the robes of philosopher-kings.
Finally, the media. The story of the good shepherds is identical to the ideology of news, which assumes that, without journalists, the public will wallow in self-satisfied ignorance. Global warming was the sweetest kind of journalistic enterprise. It demanded that people be educated against their will. It inspired constant flattery and cajoling from the ultra-smart scientific set.
Some years back the vice president of the Royal Society appealed to “all parts of UK media” to avoid skepticism about global warming. Shadowy people “on the fringes, with financial support from the oil industry” might try to corrupt journalists; they must resist. (Interestingly, the released documents reveal strong “financial support from the oil industry” in CRU research.) NYT science correspondent Andrew Revkin appears as “Andy” in the CRU emails. He is asked by climatologist Michael Mann, who is heaping scorn on those debunking his findings: “Fortunately, the prestige press doesn’t fall for this sort of stuff, right?”
Climategate has been another blow to the skull of mainstream journalism. Coverage has been scant and bizarrely slanted. The best in my opinion has been the WaPo. Worst by far has been the BBC, which has become a sort of Pravda of global warming – calling it, in one particularly strange post-Climategate story, a “major cause of conflict in Africa.” But the typical MSM reaction has been muttering or silence. One need only recall the uproar from the Pentagon papers or the leaked Bush-era domestic surveillance materials, to realize how unnatural this behavior is.
Against its own business interests, the media is looking away from a scandal. The reason, I suggest, isn’t conspiratorial but ideological. Journalists, like climatologists and politicians, despise the public and wish to become society’s good shepherds.
The picture that emerges is that of elites in different domains supporting and reinforcing each other’s impermeability to public opinion. Climatologists demand funding and the silencing of reasonable criticism. Politicians promote huge government programs and relegate reasonable opposition to the Flat Earth Society. Journalists can deal in doomsday and be flattered by powerful and brilliant individuals. Nowhere, in all this, is there a place for the voter or the marketplace. Ordinary people are foolish and must be protected from themselves.
And that should be the great concern of all. The story of the good shepherds leaves no room for liberal democracy – for a multiplicity of choices by free citizens. It’s top-down. It’s nakedly authoritarian. That so many smart people, in so many influential places, have bought into it should give one pause.
I’d almost prefer an honest conspiracy.

TKl
December 7, 2009 12:43 am

‘Hide the decline’ with precorrected data files?
If you grep for ‘artifi’ in the documents folder and its subfolders, you get 32 files.
A hardcoded correction for the ‘decline’ (factors in the source) is found only in briffa_sep98_d.pro (dated 03/99) and briffa_sep98_e.pro (dated 09/98).
It seems that later on, CRU used another technique: the correction values are precalculated in the data files themselves.
In more recent sources like data4alps.pro (dated 08/2008) you find:
;
; Writes an ASCII file with data (gridded, yes/no extended, corrected,
; yes/no ABD-adjusted, calibrated) for input to the Arctic synthesis update.
;
doinfill=0 ; use PCR-infilled data or not?
doabd=1    ; use ABD-adjusted data or not?
docorr=1   ; use corrected version or not? (uncorrected only available
           ; for doinfill=doabd=0)
missval=-9.99
;
; Get the calibrated data
;
win
;
print,'Reading reconstructions'
if doabd eq 0 then begin
  if doinfill eq 0 then begin
    restore,'calibmxd5.idlsave'
    ; Gets: g,mxdyear,mxdnyr,fdcalibu,fdcalibc,mxdfd2,timey,fdseas
    if docorr eq 0 then fdcalibc=fdcalibu    ; <---- look here
  endif else begin
    restore,'calibmxd5_pcr.idlsave'
    ; Gets: g,mxdyear,mxdnyr,fdcalibc,timey,fdseas
  endelse
endif else begin
  if doinfill eq 0 then begin
    restore,'../mann/calibmxd5_abdlow.idlsave'
    ; Gets: g,mxdyear,mxdnyr,fdcalibu
    print,'PROBABLY WANT THIS ONE'
    if docorr eq 0 then fdcalibc=fdcalibu    ; <---- look here
  endif else begin
    restore,'../mann/calibmxd5_abdlow_pcr.idlsave'
    ; Gets: g,mxdyear,mxdnyr,fdcalibu
  endelse
endelse
The statement
if docorr eq 0 then fdcalibc=fdcalibu
seems to overwrite the precorrected values (fdcalibc) with the uncorrected ones (fdcalibu) only when docorr = 0.
Further processing, here and in many other source files, uses only fdcalibc.
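To see the effect of that flag in isolation, here is a minimal sketch with stand-in values (not CRU data); since downstream code reads only fdcalibc, the precorrected series is what survives unless docorr is explicitly set to 0:
docorr=1                 ; the default in data4alps.pro
fdcalibu=[1.0,1.1,0.7]   ; stand-in uncorrected values
fdcalibc=[1.0,1.1,1.4]   ; stand-in precorrected values
if docorr eq 0 then fdcalibc=fdcalibu
print,fdcalibc           ; prints the precorrected series: 1.0 1.1 1.4
end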
If you grep for ‘calibmxd5.idlsave’ (one data file), you get 34 matches. This data file is written in only one of those files; look at pl_calibmxd4.pro (dated 09/99):
;
; Now compute the calibrated values now that all boxes have coefficients
;
for iyr = 0 , mxdnyr-1 do begin
  fdcalibu(*,*,iyr)=fdalph(*,*)+fdbeta(*,*)*fdcalib(*,*,iyr)
  fdcalibc(*,*,iyr)=fdalph(*,*)+fdbeta(*,*)*fdcorrect(*,*,iyr)
endfor
;
; Now save the data for later analysis
;
save,filename='calibmxd5.idlsave',$
g,mxdyear,mxdnyr,fdcalibu,fdcalibc,mxdfd2,timey,fdseas
;
end
The variable fdcorrect is computed in only two source files, both named pl_decline.pro; the more recent one is dated 03/2004.
I would like to examine these source files more precisely, but my IDL knowledge is limited; it's a travesty…

TKl
December 7, 2009 12:46 am

Oops, I submitted too fast.
pl_decline.pro:
;
; Now apply a completely artificial adjustment for the decline
; (only where coefficient is positive!)
;
tfac=declinets-cval
fdcorrect=fdcalib
for iyr = 0 , mxdnyr-1 do begin
  fdcorrect(*,*,iyr)=fdcorrect(*,*,iyr)-tfac(iyr)*(zcoeff(*,*) > 0.)
endfor
;
; Now save the data for later analysis
;
save,filename='calibmxd3'+fnadd+'.idlsave',$
  g,mxdyear,mxdnyr,fdcalib,mxdfd2,fdcorrect
;
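Note the (zcoeff(*,*) > 0.) term: in IDL, “>” is the element-wise maximum operator, not a comparison, so negative coefficients are clamped to zero and the adjustment is subtracted only where zcoeff is positive. A tiny sketch with made-up values:
zcoeff=[[-0.4,0.2],[0.7,-0.1]]   ; made-up 2x2 coefficient grid
print,zcoeff > 0.                ; negative entries clamped to zero:
;   0.0  0.2
;   0.7  0.0
end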

TKl
December 7, 2009 1:47 am

Phil Jones’ famous e-mail (‘Mike’s Nature trick’, ‘to hide the decline’) is time-stamped “Date: Tue, 16 Nov 1999 13:31:15 +0000”.
The source pl_decline.pro from FOIA\documents\osborn-tree6\mann\oldprog is dated 05-01-2000 (dd-mm-yyyy).
There might be a relation between the e-mail and the source file.

December 7, 2009 2:21 am

Did you see the code comment at the start?
;****** APPLIES A VERY ARTIFICIAL CORRECTION FOR DECLINE*********
Now, there are two possible explanations here:
1. A comment in case at a later date they forgot they were committing fraud.
2. A comment to make clear the code… applied a very artificial correction for decline.
I guess which one you believe depends on your pre-existing view of the integrity of the scientists in question.

Invariant
December 7, 2009 2:43 am

J. Bob (08:31:36) : I find it unimaginable a supposedly 1st rate research facility would tolerate this code. […] The only time we would ever keep commented code in the source stream, is that it would be readily available to re-insert for whatever reason at a later date. Otherwise it was deleted. PERIOD.
Many scientists and engineers have no interest in the extra effort it takes to generate readable code; they are more interested in the results of the calculations than in keeping the source code elegant and beautiful. As long as they understand the code themselves they are happy; they simply ignore readability for the next guy. This behavior is nasty but true…
But that’s not the major point. The major point is that poor source code is completely irrelevant here, imagine New York Times headlines “Copenhagen cancelled due to unreadable source code”… Well, that’s not so likely is it? 🙂

TKl
December 7, 2009 3:13 am

A closer look at the sources reveals a sort of daisy chain of multiple programs creating the data files. First there is ‘calibmxd1’, ‘calibmxd2’ … up to ‘calibmxd5’.
The source “pl_decline.pro” (where the correction is done) reads ‘calibmxd2’ and writes ‘calibmxd3’. “pl_calibmxd4” reads ‘calibmxd3’ and writes ‘calibmxd5’, which contains both the corrected and uncorrected values.
Plotting of the various graphs is based on ‘calibmxd5’; mostly the variable fdcalibc is used for preparing the plot. fdcalibc contains the ‘adjusted’ values ‘to hide the decline’.

December 7, 2009 3:31 am

Richard & others,
I think the “thought experiment” was something along the lines of “what if the tree data didn’t suffer from the decline – how would that affect the correlation of the whole dataset to actual temperature?”. That doesn’t mean in this case that they ignored the decline, only that they wanted to know what its effect on their process was.
The “bootstrapping” process is described in that Osborn paper quotation I posted above, but here’s how I understand what they did (as through a glass, darkly):
1) Recognise (as is well known) that the post 1960 tree data doesn’t track real temperatures, and that if you include them in your analysis it’s going to screw things up. Assuming you just don’t want to give up and go home at this point, you have to do something about it.
2) Then there are two ways of working out a correlation and a calibration between the tree ring widths/densities and real temperatures even though you know the post-1960 results are dodgy:
a) Chop them off and only use the ones before 1960 (there is code that does this elsewhere)
b) Fudge the post-1960 temperatures so they track real temperatures better.
I’ll discuss the benefits and dangers of each of those later…
3) This gives you a calibrated mapping – that is to say, it allows you to work out a temperature for a given ring width or density.
4) Then you apply this mapping to the original *unmodified* data to give you your temperature reconstruction. You know the output of this is going to diverge post 1960, and needs to be replaced with real temperatures (or “hide the decline”, if you must), but at least it gives you something. Obviously you would document the process you did, as Osborn et al. did in their paper – which apparently was never published, so maybe the reviewers thought it was all a bit too dodgy – but it was all out in the open.
OK, so let’s come back to 2(a) and 2(b) – the choice of whether to chop (“bobbit”, as we say in programming 😉 or fudge.
2(a), bobbiting, gives you a more accurate view of the correlation/mapping for the period you leave in, but because the period is shorter it will be more sensitive to that particular period – as Osborn’s comment indicates.
2(b), fudging, is really pretty horrible, granted, but it does leave the *short-term* post-1960 variations in place to correlate with, even though you’ve messed with the long-term trend – so it gives you a little less sensitivity to the time period.
So when I say it’s “reasonable”, what I mean is, it’s a reasonable way to make the best of some data which has known, documented problems in recent years. Extracting information from imperfect data is what science (and signal processing) is all about. As long as it was done in the open, it’s fine. Now granted the code wasn’t published, and I agree it should be, but Osborn did summarise what it did in the paper.
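To make option 2(a) concrete, here is a toy sketch of chop-then-calibrate (entirely synthetic data and a simple linear fit, not CRU’s actual procedure):
; Toy version of 2(a): calibrate proxy vs. temperature on pre-1960
; years only, then apply the fit to the whole, unmodified proxy.
year=findgen(143)+1850.                 ; 1850-1992
seed=7L
temp=0.005*(year-1850.)+0.1*randomn(seed,143)  ; synthetic temperatures
proxy=2.0*temp+0.05*randomn(seed,143)   ; synthetic proxy tracking temp
proxy(110:*)=proxy(110:*)-0.3           ; impose a post-1960 "decline"
kl=where(year lt 1960)                  ; calibration period only
c=poly_fit(proxy(kl),temp(kl),1)        ; linear fit: temp = c(0)+c(1)*proxy
recon=c(0)+c(1)*proxy                   ; reconstruct from the unmodified proxy
plot,year,temp,xtitle='Year'            ; synthetic "real" temperatures...
oplot,year,recon,linestyle=1            ; ...vs. reconstruction (diverges post-1960)
end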
(BTW, I don’t know if the bit about programming video software in C (actually it’s C++) was meant as a jibe about my qualification to comment – if so, fair enough, if that’s all there was. But I have also spent the last two years building and maintaining woodfortrees.org, which gives me some insight into some of the concepts, here, I think. But I don’t claim to be a Real Statistician!)

December 7, 2009 3:44 am

One more thing I should add: I’m well aware that the result of removing the decline (in either way I described above) means the calibration point will be higher, and so historical values will be reconstructed lower, possibly reducing the MWP.
Whether you think that matters depends whether you think the same process that is causing the decline now was also happening in the Middle Ages. If it’s purely temperature related, it might have done; if it’s some other anthropogenic or random effect, maybe it didn’t. I don’t know either way.
But this is an issue in dendrochronology which has been out in the open and much discussed (not least by Steve M) for a decade now. This code doesn’t change that argument in either direction; it just (predictably) demonstrates that CRU did have code to do what they said they were doing in their papers. Which in a way is a good thing, although it would have been better if the code had been published to begin with – maybe they and others will learn that lesson now.

December 7, 2009 4:06 am

… and (as Gary points out in another thread), I should have said “dendroclimatology”.

Nick Stokes
December 7, 2009 4:06 am

Paul, I think that is a very good analysis. The “fudge” may have been more legitimate than it first appears. valadj is said to be a component of a principal components analysis. In some ways that is just another set of numbers to optimise a fit, but it may bring in more non-temp data.

Per-Axel
December 7, 2009 6:21 am

This was not convincing to me, as I don’t understand it. Could you please explain what you have found in more simple terms, understandable to somebody who doesn’t understand a shit about programming? And please use simple words. I’m a stupid Swedish journalist, you know.

Bob Kutz
December 7, 2009 7:51 am

Somebody somewhere knows how that code was used and whether it was in fact used to influence any peer-reviewed journal articles.
For them to simultaneously tout peer review, withhold data and methodology, and make the dubious claim that there is no ‘proof’ that this code was ever used is Kafkaesque.
If this code made it into a published article, those who allowed it were engaged in fraud. For them to remain silent on the issue at this time should be the final nail in the coffin of their professional careers. For those who peer-reviewed any such articles, it should be a deathblow for their careers as well. Claiming to have peer reviewed something is different from actually having done so. That is tantamount to fraud as well.
Somebody needs to produce a list of any papers that incorporated this data manipulation program. Failure by CRU/EAU to do so should be the end of tax funded research at this institution.
This is really sick stuff.

Bob Kutz
December 7, 2009 8:12 am

Tenuc (00:11:04) :
To carry your analogy a bit further; we’ve got emails where they discuss disposing of the body (raw data), and now the body is missing.
We’ve not only got method, motive and opportunity, we’ve got a missing body and emails where they openly discuss their intent to dispose of it.
I don’t know how the science community works, but I know several prosecutors who would love to have this much to go on in the courtroom. If this were a criminal case (at least in the U.S.) we’d be talking about pleading guilty and turning state’s evidence in exchange for a life sentence. Somehow in the scientific community, a bunch of the co-conspirators act as character witnesses, with the understanding that that will be sufficient to render a verdict of not guilty.
Finally, after having read the comments here, it seems this code is potentially a bit more innocuous than I first interpreted it to be. However, in light of the missing raw data, emails that display a clear contempt for the scientific method, and actual criminal intent with respect to FOI, this code should be just another stick in the funeral pyre. Let us hope.

JJ
December 7, 2009 8:36 am

WFT:
“1) Recognise (as is well known) that the post 1960 tree data doesn’t track real temperatures, and that if you include them in your analysis it’s going to screw things up. Assuming you just don’t want to give up and go home at this point, you have to do something about it.”
[snip]? You’re trying to come up with a proxy for real temperatures. If you are truly ‘Recognising (as is well known) that post 1960 tree data doesnt track real temperatures’, then it is time to give up and go home.
That, or direct considerable effort to locating the mechanism that causes the divergence – taking care to diligently consider all possible mechanisms, not just ones that you can bluff and bluster into operating only post 1960.
Those are the two legitimate, scientific responses to the ‘divergence problem’. The rest is nothing more or less than fraud dressing itself up as science. Ignoring adverse results is no more scientific than hiding them, and it is only ever so slightly less egregious.

December 7, 2009 8:55 am

JJ: I think if you find the tree data does track temperatures pretty well for 110 years, but doesn’t for the last 40, you could argue it’s not Game Over and try to do the best you can. But I agree, it certainly demands a rain check, and I haven’t yet seen anything that really satisfies me to explain it, either (but I’ve only been looking at this particular issue for a couple of weeks).
My real point is just this: this whole issue with divergence was already well known to those in the field, and all that’s been “discovered” here is the code that implements techniques (dubious or not) that were already described by the authors in various papers. It’s interesting to discuss the finer points of dendroclimatology as to whether what they did was valid; but to claim this is a SMOKING GUN THAT RENDERS THE WHOLE OF AGW A FARCE (paraphrasing) is just overkill.

JJ
December 7, 2009 9:30 am

WFT,
“JJ: I think if you find the tree data does track temperatures pretty well for 110 years, but doesn’t for the last 40, you could argue it’s not Game Over and try to do the best you can.”
That is not science! Anything can be proven by simply ignoring data that disagree with the desired conclusion. That is religionism. Faith healers have an impressive track record, if you don’t count the dead people.
“But I agree, it certainly demands a rain check , and I haven’t yet seen anything that really satisfies me to explain it, either (but I’ve only been looking at this particular issue for a couple of weeks).”
Don’t worry. The Team has been searching for the cause of the divergence with the same dogged determination that OJ has been applying toward finding the real killer. I’m sure an answer is just around the corner.
“My real point is just this: this whole issue with divergence was already well known to those in the field, …”
Should not some of your ‘real point’ be unmitigated anger that they have been ignoring adverse results for a decade?
“… and all that’s been “discovered” here is the code that implements techniques (dubious or not) that were already described by the authors in various papers.”
I agree that we don’t know the context of the code snippet dissected here, and have argued to that effect above. We risk losing the force of the point by perseverating over how they did it (if in fact that is what this code snippet represents), rather than concentrating on the fact that THEY DID IT.
JJ

D
December 7, 2009 9:37 am

woodfortrees (Paul Clark) (08:55:48) :
JJ: I think if you find the tree data does track temperatures pretty well for 110 years, but doesn’t for the last 40, you could argue it’s not Game Over and try to do the best you can.

I am not a scientist but this seems like nonsense to me because I do not see how data for 110 years can tell us anything definitive about conditions on a planet that has been around for an estimated 4.5 billion years. Am I wrong? Do we have any data that goes back 4.5 billion years?

Invariant
December 7, 2009 10:04 am

I think we are jumping to conclusions here. The dump contains 177 FORTRAN files, 655 PRO files, 24 RAW files, 848 RW files, 365 DAT files and 1157 OUT files – it’s obvious that the whistleblower who prepared this dump did not select a large number of irrelevant files…

exmodeler
December 7, 2009 10:05 am

Just a note about the adjustment factor array. It sure looks similar to the “tobs” correction factors in use by our US temperature data massagers.
Has anyone checked this out?

TKl
December 7, 2009 10:41 am

JJ (09:30:52) :
“I agree that we dont know the context of the code snippet dissected here, and have argued to that effect above. We risk losing the force of the point by perserverating over how they did it (if in fact that is what this code snippet represents), rather than concentrating on the fact that THEY DID IT.”
I agree with your statement. But one argument was: “the corrected values are not used.”
A little survey of the sources shows that corrected values are used all over the place, not only in briffa_sep98_e.pro. Usually there is a hint not to plot beyond 1960, but why “adjust” these values if you do not want to use them in any context?

JRandom
December 7, 2009 12:33 pm

“To carry your analogy a bit further; we’ve got emails where they discuss disposing of the body (raw data), and now the body is missing.”
I see many claims to that effect…but where’s the PROOF?!
It’s completely dishonest to make a subjective interpretation of an email that is taken out of context and then bandy that about as the truth.
I see many claiming “We have emails showing they are hiding data!”
No, you don’t… You have emails that SUGGEST they might…but the fact is they could also suggest many other things.
I’m not sure exactly what data you speak of, but much of the data that was supposedly being “suppressed” is in fact available from the sources it originally came from!
CRU collected data from various sources…so they might very well have been justified in saying it was not their place to give it out …..or there are many many legit reasons for them to “suppress” it.
My biggest problem in this whole scandal is so called “skeptics” taking bits and pieces out of context and bellowing “THIS IS ABSOLUTE PROOF OF XXXX” ….based on a SUBJECTIVE INTERPRETATION.
We need a thorough investigation and ALL The facts to come out before we can make any absolute claims.

JRandom
December 7, 2009 12:36 pm

I just want to clarify that I think there is certainly a lot to investigate here, and there may have been wrongdoing etc.
…but many people are trying to claim it is cut and dry:
That there is absolute proof of wrongdoing… or they are blowing up the extent and severity of said wrongdoing…
…and those people are either lying to everyone else or lying to themselves.
They are a disgrace to the very idea of being skeptical.

JRandom
December 7, 2009 12:46 pm

Another thing:
So many are loudly proclaiming the fact that the scientists have motivation to fake the data…
Where’s the proof of this?
Where’s the proof that they are making more money than they would if they didn’t support the “conspiracy”?
Also I’m surprised that some still don’t get it:
The point is NOT “Was the data processed (or output or whatever) by this code published or used”
the point is “Was this output data used WITHOUT people being told that it was processed by this function”

December 7, 2009 3:00 pm

D: They were only trying to go back 1,000 years or so, so 110 years of data is a fair proportion of the total.