When the CRU emails first made it into news stories, there was an immediate reaction from the head of CRU, Dr. Phil Jones, over this passage in an email:
From a yahoo.com news story:
In one leaked e-mail, the research center’s director, Phil Jones, writes to colleagues about graphs showing climate statistics over the last millennium. He alludes to a technique used by a fellow scientist to “hide the decline” in recent global temperatures. Some evidence appears to show a halt in a rise of global temperatures from about 1960, but is contradicted by other evidence which appears to show a rise in temperatures is continuing.
Jones wrote that, in compiling new data, he had “just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (i.e., from 1981 onwards) and from 1961 for Keith’s to hide the decline,” according to a leaked e-mail, which the author confirmed was genuine.
Dr. Jones responded.
However, Jones denied manipulating evidence and insisted his comment had been taken out of context. “The word ‘trick’ was used here colloquially, as in a clever thing to do. It is ludicrous to suggest that it refers to anything untoward,” he said in a statement Saturday.
OK, fine. But how, Dr. Jones, do you explain this?
There's also a file of code in the collection of emails and documents from CRU. A commenter named Neal on Climate Audit writes:
People are talking about the emails being smoking guns, but I find the remarks in the code, and the code itself, more of a smoking gun. The code is so hacked around to give predetermined results that it shows the bias of the coder. In other words, the code is made to ignore inconvenient data to show what the author wants it to show. The code, after a quick scan, is quite a mess. Anyone with any pride would be too ashamed to let it out for public viewing. As examples of bias, take a look at the following remarks from the MANN code files:
Here’s the code with the comments left by the programmer:
function mkp2correlation,indts,depts,remts,t,filter=filter,refperiod=refperiod,$
datathresh=datathresh
;
; THIS WORKS WITH REMTS BEING A 2D ARRAY (nseries,ntime) OF MULTIPLE TIMESERIES
; WHOSE INFLUENCE IS TO BE REMOVED. UNFORTUNATELY THE IDL5.4 p_correlate
; FAILS WITH >1 SERIES TO HOLD CONSTANT, SO I HAVE TO REMOVE THEIR INFLUENCE
; FROM BOTH INDTS AND DEPTS USING MULTIPLE LINEAR REGRESSION AND THEN USE THE
; USUAL correlate FUNCTION ON THE RESIDUALS.
;
pro maps12,yrstart,doinfill=doinfill
;
; Plots 24 yearly maps of calibrated (PCR-infilled or not) MXD reconstructions
; of growing season temperatures. Uses "corrected" MXD - but shouldn't usually
; plot past 1960 because these will be artificially adjusted to look closer to
; the real temperatures.
;
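As an aside for non-programmers: the workaround described in the first routine's comment, removing other series' influence by multiple linear regression and then correlating the residuals, is a standard recipe for partial correlation. Here is a minimal Python sketch of that idea (my own illustration under that reading of the comment; it is not CRU's code, which is IDL):

import numpy as np

def partial_corr(x, y, z):
    # Regress x and y on the series in z (multiple linear regression),
    # then correlate the residuals, as the comment above describes.
    # z is expected as shape (ntime,) or (ntime, nseries).
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    Z = np.asarray(z, float).reshape(len(x), -1)
    Z = np.column_stack([np.ones(len(x)), Z])  # add an intercept term
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # residuals of x
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # residuals of y
    return np.corrcoef(rx, ry)[0, 1]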
And later, the same "shouldn't usually plot past 1960" comment appears again in another routine:
;
; Plots (1 at a time) yearly maps of calibrated (PCR-infilled or not) MXD
; reconstructions of growing season temperatures. Uses "corrected" MXD - but
; shouldn't usually plot past 1960 because these will be artificially adjusted
; to look closer to the real temperatures.
;
You can claim an email you wrote years ago was "taken out of context", but a programmer making notes in the code does so to document what the code is actually doing at that stage, so that anyone who looks at it later can figure out why this function doesn't plot past 1960. In this case, it is not allowing all of the temperature data to be plotted. Growing-season data (summer months, when the new tree rings are formed) past 1960 is thrown out because "these will be artificially adjusted to look closer to the real temperatures", which implies some post-processing routine.
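Mechanically, what the comment asks for is simple to reproduce. A minimal Python sketch of "don't plot past 1960" (the data here is invented for illustration; this is not code from the archive):

import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1900, 1995)                        # invented time axis
rng = np.random.default_rng(0)
mxd = 0.05 * np.cumsum(rng.normal(size=years.size))  # invented stand-in series

keep = years <= 1960                                 # "shouldn't usually plot past 1960"
plt.plot(years[keep], mxd[keep], label="reconstruction, truncated at 1960")
plt.xlabel("Year")
plt.ylabel("Anomaly (arbitrary units)")
plt.legend()
plt.show()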
Spin that, spin it to the moon if you want. I'll believe programmer notes over the word of somebody who stands to gain from suggesting there's nothing "untoward" about it.
Either the data tells the story of nature or it does not. Data that has been “artificially adjusted to look closer to the real temperatures” is false data, yielding a false result.
For more details, see Mike’s Nature Trick
UPDATE: By way of verification….
The source files with the comments that are the topic of this thread are in this folder of the FOI2009.zip file
/documents/osborn-tree6/mann/oldprog
in the files
maps12.pro
maps15.pro
maps24.pro
The first two files are dated 1/18/2000, and the maps24.pro file 11/10/1999, so it fits timeline-wise with Dr. Jones's email mentioning "Mike's Nature trick", which is dated 11/16/1999, six days later.
UPDATE2: Commenter Eric at the Climate Audit Mirror site writes:
================
From documents\harris-tree\recon_esper.pro:
; Computes regressions on full, high and low pass Esper et al. (2002) series,
; anomalies against full NH temperatures and other series.
; CALIBRATES IT AGAINST THE LAND-ONLY TEMPERATURES NORTH OF 20 N
;
; Specify period over which to compute the regressions (stop in 1960 to avoid
; the decline
;
Note the wording here “avoid the decline” versus “hide the decline” in the famous email.
===============
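For readers unfamiliar with calibration code: stopping the regression period at 1960 amounts to something like the following. This is a hypothetical Python sketch with invented data and names, not the recon_esper.pro code:

import numpy as np

def calibrate(proxy, temps, years, stop=1960):
    # Least-squares calibration of a proxy against temperature, using only
    # years up to `stop` (cf. "stop in 1960 to avoid the decline").
    mask = years <= stop
    slope, intercept = np.polyfit(proxy[mask], temps[mask], 1)
    return slope, intercept

# Invented example data.
rng = np.random.default_rng(1)
years = np.arange(1900, 1995)
temps = 0.01 * (years - 1900) + rng.normal(scale=0.1, size=years.size)
proxy = 2.0 * temps + rng.normal(scale=0.2, size=years.size)

slope, intercept = calibrate(proxy, temps, years)
reconstruction = slope * proxy + intercept  # then applied to the full record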
I'll give Dr. Jones and CRU the benefit of the doubt; maybe these are not "untoward" issues, but these things scream for rational explanations. Transparency, and the ability to replicate all of this years ago, would have gone a long way toward correcting problems and assuaging concerns.
"BBC 2 Newsnight, with Paxman as presenter, just had a report on the leaks; it was pretty even-handed."
Glenn Beck had a long segment on his show this evening. It wasn't even-handed; it was scathing. Loved it.
Interesting change in the UEA statement:
Original:
http://enviroknow.com/2009/11/21/east-anglia-university-cru-climate-hacking/
Current: (17.45 November 23)
http://www.uea.ac.uk/mac/comm/media/press/2009/nov/homepagenews/CRU-update
The comment by Dr. Phil Jones has been dropped.
Jesse (21:24:11) :
Once again, you guys are making mountains out of anthills. This is just normal data processing, per the post-1960 divergence problem, as shown by NUMEROUS sources. This is what happens when a bunch of uninformed amateurs try to "debunk" real scientists. Leave the science to the scientists and go back to your day jobs as custodians, Wal-Mart employees and laborers.
REPLY: So what do you do down there in Norman? NSSL? U of OK? You might be surprised at the sort of professionals who frequent this site. I invite them to sound off. – A
It is comments like this that cause elitism. What is a scientist except someone who tries to understand through experimentation and observation?
What do you need? A PhD? If I have one, then will you say that I am a scientist? Just trying to get the lay of the land here. Do I have to agree with AGW before I am ruled a scientist? Publish findings? Use a peer-review process? Oh wait, if you don't agree with AGW you don't get peer reviewed… Hmmm… almost sounds like collusion to me, but I suppose I will simply go back to my day job, which obviously can only be at Wal-Mart…
By the way, your comment was in point of fact very offensive. Do you not believe in challenging authority? If you are wrong in the challenge, all the authority has to do is provide proof of how they are right. If they are wrong… well, if they are honest about it, they can still be an authority who simply incorporated additional information into their knowledge base, making them an even better authority. The scientists in these emails have actively colluded (which we had evidence of before) in keeping data out of the hands of people like Steve McIntyre. While they are sure their data supports their beliefs, Steve has already shown that they have messed up more than once on some pretty major pieces of information, both taking seats of honor in the IPCC reports.
Perhaps they are a little worried about actually being reviewed by someone better than they are. I mean, the idea of peer review is that you have someone on your level review your work, not someone better, lol…
Not that you care about any of this; you have already classified us as people who, in your mind, perform meaningless tasks in society. Which I feel, again, is highly offensive both to us and to people who have those jobs.
“This is what they did — these climate “scientists” on whose unsupported word the world’s classe politique proposes to set up an unelected global government this December in Copenhagen, with vast and unprecedented powers to control all formerly free markets, to tax wealthy nations and all of their financial transactions, to regulate the economic and environmental affairs of all nations, and to confiscate and extinguish all patent and intellectual property rights.
The tiny, close-knit clique of climate scientists who invented and now drive the “global warming” fraud — for fraud is what we now know it to be — tampered with temperature data so assiduously that, on the recent admission of one of them, land temperatures since 1980 have risen twice as fast as ocean temperatures.”
http://pajamasmedia.com/blog/viscount-monckton-on-global-warminggate-they-are-criminals-pjm-exclusive/
http://www.eastangliaemails.com/emails.php?page=1&pp=25&kw=manipulate
Seems that Phil Jones et al. are totally obsessed with WUWT and Climate Audit – so they panic when Steve McIntyre publishes his analysis, and admit they have no idea whether he's correct or not. The only one who might understand it is a certain Tim Melvin, who is a "loose cannon" according to Mann, so he shouldn't be contacted directly.
Here is what is described as a “climate case study”
http://www.powerlineblog.com/archives/2009/11/024995.php
climatebeagle (16:35:27)
That's amusing. It says:
“The selective publication of some stolen emails and other papers taken out of context is mischievous and cannot be considered a genuine attempt to engage with this issue in a responsible way.”
They’re the ones who wrote the damn things!
So let me get this straight. Tree ring proxies are not to be trusted when they diverge from the modern thermometer record, but when they erase the Medieval Warm Period, they are gospel. Do I have that right?
It reminds me of something I learned about newspapers a long time ago. If you have ever had first-hand involvement in a story that was reported in the newspaper, you realize that half of what is written is wrong, but when you read the next story about which you have no prior knowledge, you find yourself taking the whole thing at face value. What’s wrong with this picture?
These leaked files prove that Jones & Co. have stopped being scientists and started using data the way a drunkard uses a lamppost, for support rather than illumination.
As I've read through the e-mails, the data "dropped after 1960" is dendrochronological data — tree rings. As you recall from your vast experience in this issue, tree-ring data correlates very well with other temperature measures until about 1960, and then it tails off as if temperatures declined. However, thermometer readings from the same places don't show a drop. So the data aren't used where they cease to be informative. Critics, unused to serious scientific studies, will argue that all the dendro data should be scrapped — but that only strengthens the trends of the other data toward warming, as I read it.
There is a mystery as to what happened in about 1960 and afterward which impinged on the growth of trees. It may be additional pollution. It may be acid rain. It may be insect plagues (though that should be regional, shouldn’t it?). Critics again may argue the data should be dropped, but the “divergence issue” is well known, described in papers, and of course you’re well aware of the debate in all its incarnations.
I don't follow the dendro stuff that closely. My suspicion is that the forests used for the tree-ring samples were afflicted by air pollution in the latter 40 years of the 20th century, and that this caused a decline in growth which shows up, in a study that tries to correlate growth with temperature, as equivalent to a decline in temperature. Critics should be wary here: if the cause is anthropogenic, it damages the case against warming even more. If air pollution itself masks the results of warming, then there is a double whammy to deal with. Surely you've covered this issue before, Mr. Watts.
If you read the e-mails, you learn that the “Nature trick” is to impose hard measures of temperature on the charts — real data, as opposed to predictions or projections from measures based on hypothesis or theory. The “trick” is to make the charts more honest, to expose errors in models, and generally to shed more light.
Are you really opposed to using real measures of temperature in place of calculated values? Why?
Oh, that's encouraging: the fate of the world hanging on shade-tree coders.
I have no formal training in coding and yet I have written numerical calculation code that flew on the F-16 (taken from a routine I wrote for the A-320). It is not where you do your work (under the shade tree) but how well thought out and documented the code is.
When I started in programming (around 1977) there were very few schools teaching the subject. Many engineers learned to do it by reading manuals and magazine articles. Structured code and good documentation are the keys to good code. Good design is a help as well.
After a while with old code you need to stop patching and do a bottom up/top down redesign. It just gets too crufty otherwise.
The problem I see is that in the early days there were no standards enforced.
Going back and trying to properly document undocumented code is very difficult. That is why for “real code” the documentation is done when the particular routine is written.
I noticed Gavin was taking a lot of hard questions on RC; it must make for a long night, but kudos to him for that. I'm not sure if my code questions got pulled or are just still in moderation (a few hours; I'll still assume the latter). In any case, I've dug further, and I think the answer to question 1 is yes (I'm really surprised that is allowed through, though). Question 2 I really am not sure of. I read the whole Harry file last night, and it seems that was how 3.0 was approached, but then in other comments Gavin mentioned it was completely independent.
Anyway, this was the post — I'd be interested in what other programmers who have been through "Harry's read me ordeal" think of question 2:
Hi Gavin,
First, kudos to you for this marathon, I understand it must not be fun, and appreciate the huge effort.
I have been a programmer for 20-plus years, but like most people looking at this stuff, not a climate expert in any way. That said, I do understand working with data and the daily travails of code massage. A lot of the things causing arm-waving are just normal day-in-the-life stuff, imo, but I do have two questions.
1) Just to verify my assessment of the Fortran code: it seems that throughout the code there is a cutoff of about 1960 (not always exactly that) where proxy data is replaced, weighted, or blended with other, more accurate measurements, because the proxy data does not match a known signal. Some of the comments use unfortunate language in retrospect, but people should try to make comments clear in any case, so I have no issue with those. So it isn't a comment issue; I just want to be sure I understand correctly that the pre-1960 and post-1960 data is coming from, or influenced by, two different sets… is that a fair assessment?
2) The Harry file seems normal enough (not best practices, I'm sure, but the poor guy had his work cut out for him with that data, ugh! Hat tip to him). It seems to have started as just improving existing code, but for most of the remainder it is a log of creating the 3.0 datasets and code (is that correct?). It seemed to me that, in general, the new set isn't considered correct until it gives a close match to the old set. Was that the goal (and I understand that can be a legitimate goal), or was 3.0 meant to be a second set of data/code to compare against and verify the first?
Thanks very much for your time,
Robin
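For anyone trying to picture what "replaced, weighted, or blended" in Robin's question 1 could mean in practice, a generic cross-fade between two series looks like this (a hypothetical Python sketch, not anything from the archive):

import numpy as np

def blend(proxy, instrumental, years, cutoff=1960, ramp=10):
    # Weight is 0 up to the cutoff, then ramps linearly to 1 over `ramp`
    # years, cross-fading from the proxy to the instrumental series.
    w = np.clip((years - cutoff) / ramp, 0.0, 1.0)
    return (1.0 - w) * proxy + w * instrumental

years = np.arange(1900, 1995)
merged = blend(np.zeros(years.size), np.ones(years.size), years)
# merged is 0.0 through 1960, then climbs to 1.0 by 1970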
Tom in Texas (16:35:11) :
After years of "THE CONSENSUS AMONG SCIENTISTS IS……" from the BBC, an even-handed report from Auntie (the BBC) on climate = Glenn Beck being scathing to the nth degree.
I think the only reason there was an even-handed report, or even a report at all, on the "emails" was because they've been taking a caning all day on their blogs for censorship and hiding behind legalities.
This blog
http://www.bbc.co.uk/blogs/thereporters/richardblack/2009/11/copenhagen_countdown_17_days.html?s_sync=1
was shut down on Friday because:
” Update 2309: Because comments were posted quoting excerpts apparently from the hacked Climate Research Unit e-mails, and because there are potential legal issues connected with publishing this material, we have temporarily removed all comments until we can ensure that watertight oversight is in place.”
Then was reopened today with this:
“Update 2 – 0930 GMT Monday 23 November: We have now re-opened comments on this post. However, legal considerations mean that we will not publish comments quoting from e-mails purporting to be those stolen from the University of East Anglia, nor comments linking to other sites quoting from that material.”
After they watched the posts all day, there was this:
“Update 3 – 2116 GMT Monday 23 November: As lots of material apparently from the stolen batch of CRU e-mails is now in the public domain, we will not from now on be removing comments simply because they quote from these e-mails.
However, an important couple of caveats: a) the authenticity of most of the material has not to our knowledge been confirmed, and b) it would be easy when posting quotes to break inadvertently some of the House Rules – such as the one barring posting of contact details – which are still in operation and which will see comments being blocked.
In addition to our news story and Roger Harrabin’s analysis, those of you enraptured by this issue will probably have noticed Paul Hudson’s post on his climate blog, and Martin Rosenbaum’s post on his Freedom of Information blog. If not – enjoy. There’s also a comment board open at the moment on climate change generally that you might want to plaster.
Again – there’s nothing at all barring comments on the original blog ”
I cannot remember Auntie ever being turned by the general public; the government, yes, but not the public.
They sail on regardless with their liberal left-wing agenda, forgetting that they are paid for by the licence fee, which is paid, under threat of imprisonment, by everybody who owns a TV in the UK.
As an interesting aside, Roger Harrabin, one of their climate reporters, claims to have received the "emails" on the 12th of October. I'm not sure he didn't mean the 12th of November, which seems to have been some kind of red-letter day for the original poster.
Monbiot over at The Guardian in Surly Semi-apology Shock Horror!
He can't resist erecting a huge straw man, and he also suggests that it's a few bad apples, but it's still a major shift. He calls for Jones's defenestration, for crying out loud.
http://www.guardian.co.uk/commentisfree/cif-green/2009/nov/23/global-warming-leaked-email-climate-scientists?showallcomments=true#comment-51
If you read the e-mails, you learn that the “Nature trick” is to impose hard measures of temperature on the charts — real data, as opposed to predictions or projections from measures based on hypothesis or theory. The “trick” is to make the charts more honest, to expose errors in models, and generally to shed more light.
Are you really opposed to using real measures of temperature in place of calculated values? Why?
What the divergence tells us is that tree rings are not a good way to measure temperature because we lack all the relevant confounding variables. Like rainfall. Air quality (volcanic eruption?). CO2 content of the atmosphere. Cloud cover. etc.
To say something caused the divergence is true. But if you discard the later data you also have to discard the prior data until you know the cause.
And it is dishonest in the extreme to append "real" values to a chart of calculated values without making that explicit each and every time the chart is presented and the data used.
The idea of deriving temperatures to an accuracy of 0.1 deg C, or even 0.5 deg C, from tree rings is a triumph of self-delusion over error bars.
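The error-bar point can be made concrete with a toy calibration. All the numbers below are invented; the point is only that residual scatter, not instrument precision, sets the floor on reconstruction accuracy:

import numpy as np

rng = np.random.default_rng(2)
temps = rng.normal(loc=10.0, scale=1.0, size=100)      # "true" temps, deg C
proxy = 0.5 * temps + rng.normal(scale=0.4, size=100)  # noisy proxy response

slope, intercept = np.polyfit(proxy, temps, 1)
residual_sd = np.std(temps - (slope * proxy + intercept), ddof=2)
print(f"1-sigma reconstruction error: {residual_sd:.2f} deg C")
# Prints roughly 0.6 deg C for this toy setup: far coarser than 0.1 deg C.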
Ed Darrell (18:06:23)
It's unlikely, as trees produce their own volatile organic compounds in greater quantity than humans produce; trees emit three times more pollution than we do.
However, if it's sulphides, most of those were naturally occurring even over the 20th century, and they hardly have any effect on trees anyway; so it's essentially down to ozone and CO2. As CO2 is increasing all over the world, that increases tree growth, whilst ozone has the opposite effect, although since 1960 ozone has been on a decreasing trend.
“UnfrozenCavemanMD (18:02:42) :
These leaked files prove that Jones & Co. have stopped being scientists and started using data the way a drunkard uses a lamppost, for support rather than illumination.”
Very good. I thought of a beehive with a queen (?), workers and drones, but I like yours better.
Haven't seen this posted anywhere yet, but here's another interesting code snippet from \FOIA\documents\osborn-tree6\briffa_sep98_d.pro:
yyy=reform(comptemp(*,2))
;mknormal,yyy,timey,refperiod=[1881,1940]
filter_cru,5.,/nan,tsin=yyy,tslow=tslow
oplot,timey,tslow,thick=5,color=22
yyy=reform(compmxd(*,2,1))
;mknormal,yyy,timey,refperiod=[1881,1940]
;
; Apply a VERY ARTIFICAL correction for decline!!
;
yrloc=[1400,findgen(19)*5.+1904]
valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
2.6,2.6,2.6]*0.75 ; fudge factor
if n_elements(yrloc) ne n_elements(valadj) then message,'Oooops!'
;
yearlyadj=interpol(valadj,yrloc,timey)
;
;filter_cru,5.,/nan,tsin=yyy+yearlyadj,tslow=tslow
;oplot,timey,tslow,thick=5,color=20
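For readers who don't read IDL, the adjustment in that snippet does roughly the following. The yrloc and valadj values are copied from the excerpt above; the Python re-expression is mine:

import numpy as np

# Adjustment "knots": the year 1400, then every 5 years from 1904 through
# 1994, mirroring yrloc = [1400, findgen(19)*5.+1904].
yrloc = np.concatenate(([1400.0], 1904.0 + 5.0 * np.arange(19)))

# The hand-chosen values from the excerpt, scaled by 0.75 (the "fudge factor" line).
valadj = 0.75 * np.array([0, 0, 0, 0, 0, -0.1, -0.25, -0.3, 0, -0.1,
                          0.3, 0.8, 1.2, 1.7, 2.5, 2.6, 2.6, 2.6, 2.6, 2.6])

assert yrloc.size == valadj.size, "Oooops!"  # mirrors the IDL sanity check

timey = np.arange(1400, 1995)                # hypothetical yearly time axis
yearlyadj = np.interp(timey, yrloc, valadj)  # interpol(valadj, yrloc, timey)
# adjusted = yyy + yearlyadj                 # yyy would be the MXD series

Note the shape of valadj: zero or slightly negative through the 1940s, then climbing steeply after about 1960, which is exactly where the adjustment pulls the series upward.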
Note the specific use of the term “fudge factor”.
From the wiki entry on “fudge factor”:
“Some variables in scientific theory are set arbitrarily according to measured results rather than by calculation (for example, Planck’s constant). However, in the case of these fundamental constants, their arbitrariness is usually explicit. To suggest that other calculations may include a “fudge factor” may suggest that the calculation has been somehow tampered with to make results give a misleadingly good match to experimental data.”
http://en.wikipedia.org/wiki/Fudge_factor
How about a link to FOI2009.zip?
As I read through HARRY_READ_ME.txt, I wanted to understand what was so important about the work he was tasked with that he would invest three to four years lamenting over such crappy and disparate data and code. Here's what I've been able to surmise through a personal effort to sort out some history and context for Harry's narrative. Perhaps the following will add some additional insight into Harry's read-me file.
* This was not a side project or a snapshot of some lowly intern's work on an obscure project, as some warmers have suggested, but rather a central and important project: the finalization of the CRU TS3.0 GA (general availability) data product. What is important to understand is that TS3.0 was released as a beta product in 2006, around the time Harry begins his work, and certainly before Harry completes his narrative. Absent any other commentary, it seems most logical to assume his initial objective was a clean-up, verification, and finalization of TS3.0 GA, with a possible intent to produce a TS3.1+, given that his work spanned several years past the release of TS3.0 and clearly makes use of newer data files with data current to 2009.
* One gets the initial impression that Harry doesn't know what he is doing. However, if you dig into his comments, you find that Harry doesn't know what the previous programmers were doing and spends considerable time searching for clues. My impression is that Harry is actually a reasonably good programmer who was equipped with lousy tools and no versioning system, and was handed a completely undocumented, grossly disorganized, and incomplete package of code and data from the former TS2.1 project. From this mess he is tasked with creating TS3.0.
* The data files and code Harry is trying to make sense of are the basis of CRU's published TS2.1 dataset product, consisting of interpolated (on a 0.5-degree latitude-longitude grid) global monthly precipitation, temperature, humidity, cloud cover, diurnal temperature range, frost-day frequency, T-max, T-min, vapor pressure, and wet-day frequency data spanning 1901 to 2002 (Mitchell and Jones, 2005).
* While TS3.0 was released in 2006, it was released as a beta product. Harry's approach to completing/refining TS3.0 appears to be one of creating his own recompilation of TS2.1 and adding in more current data. This would make sense given that he spends much time comparing his ongoing results to the final TS2.1 product. As a side note, CRU publicly comments that one major difference between TS2.1 and TS3.0 is that "no homogenisation is performed in the latter." [http://badc.nerc.ac.uk/data/cru/]. It seems TS3.0 is merely the same horse under a different blanket in terms of methodologies and data-aggregation concepts. TS3.0 is scaled by factors of 10 and 100 (tenths and hundredths of a degree/day, respectively) to allow for the use of integers in the published product. Given that Harry seems happy to achieve an accuracy of 0.5 to one degree at best, using fudge factoring and data glossing, it seems TS3.0's accuracy is probably no better (and potentially worse) than TS2.1's. An alternate view might be that TS3.0 could have been a more accurate product but was "normalized" to the manufactured biases of TS2.1 to maintain a consistent lineage of data products.
* The developer(s) of the TS2.1 dataset are apparently no longer available to answer simple questions; perhaps this is the Team's practical application of the saying, "there comes a time in the evolution of every product when you must kill the engineers and go into production." Harry identifies Tim Mitchell and Mark New as the previous programmers and often questions their methodologies. It is also evident that Tim and Mark lost or deleted relevant sets of source data used in TS2.1 before the project was handed to Harry.
* Harry seems to acknowledge that some of the earlier data may be available to him, but it is deemed unusable due to an obscure (undocumented) history of data handling. In Harry's own words, near the end of his narrative: "I am seriously close to giving up, again. The history of this is so complex that I can't get far enough into it before [my] head hurts and I have to stop. Each parameter has a tortuous history of manual and semi-automated interventions that I simply cannot just go back to early versions and run the update prog. I could be throwing away all kinds of corrections – to lat/lons, to WMOs (yes!), and more."
* It might help some of the Fortran experts looking at the source code files to understand that the final organization of TS3.0 is a 360-lat x 720-long grid that is output as 720 columns and 360 rows per timestep. The first row in each grid is the southernmost (centered on 89.75S); the first column is the westernmost (centered on 179.75W). It might help to look at the end hashtables in this light (a small sketch of this indexing follows this list).
* The version history of the CRU TS product line that I’ve been able to figure out is: TS3.0 is descended from TS2.1: [Mitchell and Jones, 2005: An improved method of constructing a database of monthly climate observations and associated high-resolution grids.], which is descended from TS1.2 and TS2.0: [Mitchell, T.D., et al, 2003 – A comprehensive set of climate scenarios for Europe and the globe], which is ultimately descended from CRU TS1.0 and TS1.1: [New, M., Hulme, M. and Jones, P.D., 2000: Representing twentieth century space-time variability].
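Following up the indexing bullet above: a tiny hypothetical helper (the function is my own illustration, not CRU code) for mapping a row/column pair to its grid-cell center, using the layout described there:

def cell_center(row: int, col: int) -> tuple[float, float]:
    # 360 rows x 720 columns per timestep, 0.5-degree spacing;
    # row 0 centered on 89.75S, column 0 centered on 179.75W.
    lat = -89.75 + 0.5 * row   # south to north
    lon = -179.75 + 0.5 * col  # west to east
    return lat, lon

print(cell_center(0, 0))      # (-89.75, -179.75), the southwest corner cell
print(cell_center(359, 719))  # (89.75, 179.75), the northeast corner cell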
Maybe I’m getting ahead of myself here, but I’ve always had some disdain for Tamino and his (former) superiority complex. It’s perhaps instructive to rehash the essay(s) by co-conspirator Grant “Tamino” Foster on how the HadCru data correlated well with GISS.
Yes, if George Monbiot can apologize, then I can post a link from ClosedMind:
http://tamino.wordpress.com/2008/01/24/giss-ncdc-hadcru/
When the cluster**** that is HadCru can be fully determined, what’s the statistical chance of GISS being such a closely correlated cluster**** by coincidence ??
Over to you Senator Inhofe ……..
Chris Wright (05:11:41) :
Well, what do you suppose he meant? What kind of adjustment is not ‘artificial’?
The reason I didn't detail the rest of the quote is that it doesn't matter. The comment is saying: don't plot beyond 1960! The notion that they massaged the data to look better beyond 1960, and then tell you not to plot it, is just self-contradictory.
There's a huge amount of code here, and if you dig through it, you'll probably find better gotchas than this. But you still can't make much of it. Unknown commenters, writing in unknown circumstances ten or more years ago, were not trying to give a careful explanation of the science. It's possible the writer didn't know much of the science; it seems to be mainly a graphics output routine. The key message in the code is: don't use after 1960 (and they didn't). The programmer didn't need to take a lot of care to get the scientific reason right.
@Ed Darrell – Suppose for a moment that the dendro data is actually correct (although a good argument can be made that it is not temperature being measured, but multiple aspects of plant growth requirements). Let's say for the sake of argument that a thousand years of tree rings are accurate but suddenly diverge from observed temperature data around 1960. Well, either the trees' habitat changed, or the observed temperature data did. Considering the UHI and other biases that have been uncovered, it is equally reasonable to think that the tree growth is the same as it always was, and it is the temperature data that is wrong!
Joanie <— not a scientist, just a middle-aged, sedentary housewife
PWilson, don't forget acid rain. The effect of acid rain would indeed be to reduce the growth of tree rings. SO2, NO2 and NOx all rain out as acids, and none of these compounds are produced by most trees, especially not in the concentrations that came from coal-fired power plants in the era 1955-1980.
Here's an EPA explanation of the effects of acid rain on a forest. You're aware, I hope, that CO2 can't improve the growth of most trees, because they are limited by water, temperature, or nitrogen before CO2 becomes a limiting factor. Additional CO2 to a western forest of lodgepole or ponderosa pine can't help, because there is not naturally enough water to allow the trees to sink the additional carbon. Consequently, more CO2 in the air would simply be a contribution to the acidity of fog and rain, either of which will limit the growth of these trees, especially in the usually alkaline soils of these forests.
Trees produce no SO2, no NOx, no significant particulates, and forests often resorb VOCs.
I think your information on forests and air pollution is skewed, but I admit I’ve worked air pollution only in forests in North America, hardwoods, softwoods, desert and rainforest. I’m interested in your sources that say trees produce more pollution than humans. Generally forest declines result from human-generated air pollution, and were it true that forests produce more pollution, forests should be dying out around the world from their own existence. That doesn’t make sense.
I think you've grokked it pretty well. We have this problem with a lot of modern data sets — same thing in carbon dating, for example. In carbon dating, after human actions started changing environmental factors on a global scale, the usual measures of the age of an animal from death to now became difficult for the past 200 years. More carbon, and consequently more carbon-14, in the above-water environment. With careful calibrations, correction factors have been calculated, and some dating of things from the last 200 years has been done very successfully. But corrections have to be made, different from those used when dating critters that died between, say, 1820 and 50,000 years ago (some hotshots have stretched carbon dating to objects close to 100,000 years old, but it's generally not recommended).
For most of the time humans have been on this planet, trees grew without interference from humans. In the past 200 years there has been a goodly amount of interference. Particulate air pollution both reduces sunlight and clogs the stomata on the leaves of trees in industrial areas. Sulfur oxides and nitrogen oxides assault the stomata directly and acidify fog and rain, which leaches good minerals out of the soil and makes harmful minerals available to damage tree roots. The benefits that could result from CO2 generally don't obtain because of the usual limiting factors on tree growth — water, light (and heat), nitrogen, and carbon dioxide availability — carbon availability is fourth, and one of the other factors limits growth so that available CO2 cannot be utilized fully, let alone extra CO2.
Generally, then, we can correlate tree rings with temperature well through most of human history. What caused the divergence after 1960? I suspect acid rain, but I haven’t read the papers. There are a host of other causes possible, depending a lot on where the sampled trees were.
The issue here is, should we discount the previous thousand years of data because the data go off the rails in 1960, or should we just dismiss the data after 1960? There’s a separate issue about whether the tree rings were the only accurate set of data after 1960, recording a decline in temperature as every other instrument recorded an increase, but no one seriously argued that.
If we include the tree-ring data, does that mean warming didn’t happen? I don’t think so.
Housewifery is a good prerequisite for scientists, in my experience. If you’re looking for a change in career, don’t regard your experience as limiting, nor as inapplicable.
Thank you.