McIntyre on Stephen Schneider

An excerpt from Steve’s post at Climate Audit

Schneider replied that he had been editor of Climatic Change for 28 years and that, during that time, nobody had ever requested supporting data, let alone source code; he therefore required a policy from his editorial board approving his requesting such information from an author. He observed that he would not be able to get reviewers if they were required to examine supporting data and source code. I replied that I was not suggesting he make that a condition of all reviews, but that I wished to examine such supporting information as part of my own review, was willing to do so in this specific case (and wanted to, under the circumstances), and asked him to seek approval from his editorial board if that was required.

This episode became an important component of the Climategate emails in the first half of 2004. As it turned out (though it was not a point that I thought about at the time), both Phil Jones and Ben Santer were on the editorial board of Climatic Change. Some members of the editorial board (e.g. Pfister) thought it would be a good idea to require Mann to provide supporting code as well as data. Jones and Santer, however, lobbied hard and prevailed on code, though not on data: they defeated any requirement that Mann supply source code, but Schneider did adopt a policy requiring authors to supply supporting data.

I therefore reiterated my request as a reviewer for supporting data – including the residuals that Climategate letters show Mann had supplied to CRU (described as his “dirty laundry”). Mann and his coauthors did not supply the requested data, and I accordingly submitted a review to Climatic Change observing that Mann et al had flouted the new policy on providing supporting data. The submission was not published. I observed on another occasion that Jones and Mann (2004) contained a statement slagging us, based on a check-kiting citation to this rejected article.

During this exchange, I attempted to write thoughtfully to Schneider about processes of due diligence, drawing on my own experience and on Ross’ experience in econometrics. The correspondence was fairly lengthy; Schneider’s responses were chatty and cordial and he seemed fairly engaged, though the Climategate emails of the period perhaps cast a slightly different light on events.

Following the establishment of a data policy at Climatic Change, I requested data from Gordon Jacoby – which led to the “few good men” explanation of non-archiving (see CA in early 2005) – and from Lonnie Thompson (leading to the first archiving of any information from Dunde, Guliya and Dasuopu, if only summary 10-year data inconsistent with other versions). Here Schneider accomplished something that almost no one else has been able to do – get data from Lonnie Thompson – something that, in itself, shows Schneider’s stature in the field.

It was very disappointing to read Schneider’s description of these fairly genial exchanges in his book last year. Schneider stated:

The National Science Foundation has asserted that scientists are not required to present their personal computer codes to peer reviewers and critics, recognizing how much that would inhibit scientific practice.

A serial abuser of legalistic attacks was Stephen McIntyre, a statistician who had worked in Canada for a mining company. I had had a similar experience with McIntyre when he demanded that Michael Mann and colleagues publish all their computer codes for peer-reviewed papers previously published in Climatic Change. The journal’s editorial board supported the view that the replication efforts do not extend to personal computer codes with all their undocumented subroutines. It’s an intellectual property issue as well as a major drain on scientists’ productivity, an opinion with which the National Science Foundation concurred, as mentioned.

This was untrue in important particulars and a very unfair account of our 2004 exchange. At the time, Schneider did not express any hint that the exchange was unreasonable. Indeed, the exchange had the positive outcome of Climatic Change adopting data archiving policies for the first time.

As I noted above, at his best, Schneider was engaging and cheerful – qualities that I prefer to remember him by. I was unaware of his personal battles or that he ironically described himself as “The Patient from Hell” – a title that seems an honorable one.

Read more at Climate Audit

tallbloke
July 21, 2010 12:05 am

As a graduate of the History and Philosophy of Science, and knowing something of the way institutional insiders cover each other’s backs against outside auditors, I think Steve McIntyre’s untiring efforts to force openness and the provision of the data and code required for replication represent a turning point in the pursuit of science. The computer age has changed the game, and the old boy network which has been running science has not kept up with the times. They have been running scared of being exposed to critical outside reviewers.
May Schneider rest in peace, let’s bury the defunct peer review system with him.
Time to drag the institutions of science into the modern world.

jcrabb
July 21, 2010 12:09 am

…a little soon?

jcrabb
July 21, 2010 12:22 am

For all the criticism of Global temperature reconstructions, no one has created a current Global reconstruction showing the current temperature rise to be within ‘normal’ parameters over the last 1000 years; all Global reconstructions, including even Loehle’s, show current Global temps being the highest for over a thousand years.

toby
July 21, 2010 12:25 am

As a working scientist, with a modest record of publications, I completely support Dr Schneider’s perspective on release of code. What is in question is replication, and a critic should be able to write his or her own code in order to test results with the same data.
The code you write yourself when doing a paper is not for commercial use and not user-friendly. It is usually idiosyncratic and uncommented (or at least poorly commented). Giving it to somebody implies spending valuable time explaining all the wrinkles as well – a waste of time with someone who should be able to do the job themselves.
That being said, I did release Matlab code to a requestor, but only with great trepidation and unease. Luckily, he never came back with questions. Release of code should be a personal choice of the author.
I also believe that the peer review process (for all its faults) is an invaluable bulwark between science and non-science. While I have been singed badly by the process (Ouch!), I also found it improved greatly the papers I and my colleagues did get published.
As a current reader of “Science as a Contact Sport”, I greatly regret the passing of a great scientist and a great man in Stephen Schneider.

paulsnz
July 21, 2010 12:26 am

Not evil, Just wrong.

Mike Haseler
July 21, 2010 12:29 am

When I created the petition on the CRU, I (mistakenly) called it the Climate Research Unit, because it never occurred to me that anyone would call it anything else.
I notice that Mr Schneider’s journal was called “Climatic Change”.
Has anyone else noticed how anything simple in global warming immediately gets a much longer, more complicated name? Manmade warming = Anthropogenic Global Warming? To me this is always an attempt to make something appear more complicated than it really is. Anthropogenic Global Warming sounds scientific, sounds as if it is important. Manmade warming sounds simple, is easy to understand and encourages everyone to have their input – and immediately makes it obvious that the people being “got at” are not some nebulous “others” but ordinary people like you and me.
As for “climatic”: “climate change” is a change in the climate. However, “climatic change” is a change of a climatic nature. The difference is subtle, but when the simpler, more straightforward version is perfectly adequate, the use of “climatic” rather than “climate” implies that they are trying to say something so different that it warrants the use of ridiculous language to highlight the importance of using climatic.
Personally, it sounds to me like pretentious claptrap intended to make a third-rate subject sound as if it had some scientific credibility.

Martin Brumby
July 21, 2010 12:33 am

No-one likes to speak ill of the dead. But I think this piece from Steve and the following great piece from Phelim McAleer are entirely appropriate.
http://www.noteviljustwrong.com/blog/general/464-stephen-schneiderdeath-of-an-unrepentant-hypocrite
If Schneider was a protagonist in some obscure academic spat about String Theory then I would certainly take the view “if you can’t think of anything nice to say then keep your mouth shut.”
But this was a scientist who happily promoted more than one shroud waving ‘scenario’ – absolutely careless of the enormous consequences of his advocacy. There are real people who have suffered and who suffer now because of the malicious and incompetent dogma Schneider promoted.

DC
July 21, 2010 12:45 am

Opening line, ‘excerpt’ rather than ‘except’.

tallbloke
July 21, 2010 12:48 am

toby says:
July 21, 2010 at 12:25 am (Edit)
As a working scientist, with a modest record of publications, I completely support Dr Schneider’s perspective on release of code. What is in question is replication, and a critic should be able to write his or her own code in order to test results with the same data.

If you take a look at Steve McIntyre’s site climateaudit.org you’ll see he has done that many, many times. The point is, he didn’t get the same results, and the description of the methodology given in the papers was inadequate for replication purposes. So he then had to try to work out what the paper wasn’t saying. A time-consuming reverse engineering job.
If climate ‘scientists’ such as Mann, Briffa, Santer and Jones want to keep the IP of their code, then they must be clear with the description of methodology in the papers.
The reason they haven’t been is obvious now, thanks to Steve’s work in uncovering the shoddy statistical malfeasance these goons are guilty of.

mike sphar
July 21, 2010 12:53 am

I cannot imagine a scientist not expecting to have his work examined. This bit about being “idiosyncratic and uncommented” is hogwash. And don’t get me started on “valuable time”. If you make a claim, you should be able to validate that claim in public, with somebody else doing the driving; otherwise it is meaningless and not true science.

mike sphar
July 21, 2010 12:54 am

[snip]

Rick Bradford
July 21, 2010 12:54 am

Schneider: “We need to get some broadbased support, to capture the public’s imagination. That, of course, entails getting loads of media coverage. So we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have. This ‘double ethical bind’ we frequently find ourselves in cannot be solved by any formula. Each of us has to decide what the right balance is between being effective and being honest. I hope that means being both.”

Tenuc
July 21, 2010 12:55 am

“…This episode became an important component of Climategate emails in the first half of 2004. As it turned out (though it was not a point that I thought about at the time), both Phil Jones and Ben Santer were on the editorial board of Climatic Change…”
This is how the supposed ‘consensus’ of cargo cult climate science was maintained. I suspect that similar arrangements are to be found in other scientific disciplines. Otherwise the various bits of ‘pixie dust’ – dark matter, the graviton, CO2 etc. – needed to maintain the mainstream group-think would be laughed out of court.
Real science is about finding truth; without openness, honesty and trust it just becomes another political football.

Andrew30
July 21, 2010 12:56 am

toby says: July 21, 2010 at 12:25 am
“Giving it to somebody implies spending valuable time explaining all the wrinkles as well – a waste of time with someone who should be able to do the job themselves.”
Computer code is a complete and unambiguous description of a function. If you can read the language you can easily understand the function. It is no different from a complex formula in physics or a molecular description in chemistry; if you understand the language you can understand the description.
The idea that computer source code is somehow different from any other shorthand used to describe a complex system is a complete fallacy.
The idea that I could publish a paper in physics and not include the formulas, but instead tell the reader to go write their own formulas is an indefensible position.
The people asking for the code can read the code just as easily as a physicist can read a formula.
There is no excuse.
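
[Editorial illustration of Andrew30’s point – a hypothetical sketch, not part of his comment. Anyone who can read Python can see exactly what this computes, just as anyone who can read the notation can read the formula:]

# The Stefan-Boltzmann law, j = sigma * T^4, written as code.
# The code is as complete and unambiguous a description as the formula itself.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_flux(temperature_k):
    """Black-body radiated flux in W/m^2 for a temperature in kelvin."""
    return SIGMA * temperature_k ** 4

print(radiated_flux(288.0))  # roughly 390 W/m^2 at Earth's mean surface temperature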

July 21, 2010 1:04 am

Many years ago I did some statistical database programming, and I know well how “wrinkles” in the code can produce desirable results.
The whole process of computer modeling is always suspect: starting with the quality of the data entered into the model (who measured the data and how, who sorted it out and how, who decided which data to use, etc.), and ending with the more than likely possibility of a conscious and/or unconscious desire to arrive at the predetermined result.
A good scientist must be very careful with this whole process, commenting and explaining every step on the way, and — if he is doing a publicly funded research — be ready to provide his code to any requesting party.
Now, when trillions of dollars and millions of livelihoods depend on walking the AGW party line, complete transparency and honesty in science is indispensable, and any attempt to hide information or to silence opponents is condemning evidence of wrongdoing.

tallbloke
July 21, 2010 1:10 am

jcrabb says:
July 21, 2010 at 12:22 am (Edit)
all Global reconstructions, including even Loehle’s, show current Global temps being the highest for over a thousand years.

They show solar activity being at an 8000 year high too.
But to address your point more directly, given the archaeological finds on ice-covered mountain passes in the European Alps, it seems it was even warmer in the Roman period. And even warmer than that in the high Bronze Age.
Temperature goes up, temperature goes down. Get used to it. Especially the going down bit.

Andrew30
July 21, 2010 1:14 am

tallbloke says: July 21, 2010 at 12:48 am
“The reason they haven’t been is obvious now,”
You only have to look at some code.

stan stendera
July 21, 2010 1:36 am

[bridge too far. Sorry about that Stan. ~ ctm]

Ryan
July 21, 2010 1:38 am

“the various bits of ‘pixie dust’ – dark matter, the graviton, CO2 e.t.c. – needed to maintain the mainstream group-think would be laughed out of court.”
If all those great minds had been employed looking for what the people need rather than pixie dust, perhaps the world would be a better place? Imagine if Stephen Hawking’s mind had been employed looking for a cure for AIDS rather than black holes. Socialist science is all about forcing taxpayers to pay for the search for pixie dust, when great needs lie elsewhere.

peakbear
July 21, 2010 1:41 am

toby says:
July 21, 2010 at 12:25 am
“As a working scientist, with a modest record of publications, I completely support Dr Schneider’s perspective on release of code. What is in question is replication, and a critic should be able to write his or her own code in order to test results with the same data.”
As tallbloke has already pointed out, complete replication/verification of the work is necessary and if it is computer based then that means the exact method and algorithms must be released.
I’ve worked in both science (climate modelling) and industry (IT), and one thing I can clearly see is that research institutions are way behind in IT skills. You’re trying to advocate each researcher manually working away, duplicating others’ work to get the specific results for themselves. For things like temperature reconstructions and GCMs, what is needed is a configuration/build tool set up to bring in all the data/algorithms and then allow replication/adjustments at the press of a button.
The Climategate code release showed the code to be pretty much the same thing I worked on nearly 20 years ago; the IT industry has moved on massively since then. Surely anything publicly funded should be open source anyway, and if your research is commercially viable perhaps it should be done by industry, in which case you can do what you want, but not releasing the code would make replication very difficult.
I almost dread to say it, but what a lot of research institutions need is some decent project management to enable better efficiency. I know this goes against the grain of what a traditional PhD/postdoc does (working individually), but working together would allow much better work to be done and would also be more rewarding.
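
[Editorial illustration of peakbear’s “press of a button” point – a hypothetical sketch; the file names and the toy smoothing step are invented, not any group’s actual pipeline:]

# replicate.py -- hypothetical one-command replication driver
import csv

def load_series(path):
    """Read a two-column CSV of (year, value) rows."""
    with open(path, newline="") as f:
        return [(int(year), float(value)) for year, value in csv.reader(f)]

def run():
    data = load_series("proxies.csv")  # archived input data
    years = [y for y, _ in data]
    values = [v for _, v in data]
    # Toy stand-in for a published method: a centred 3-point moving average.
    smoothed = [sum(values[i:i + 3]) / 3 for i in range(len(values) - 2)]
    with open("result.csv", "w", newline="") as f:
        csv.writer(f).writerows(zip(years[1:-1], smoothed))

if __name__ == "__main__":
    run()  # anyone can rerun the whole analysis with: python replicate.py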

July 21, 2010 1:58 am

jcrabb: July 21, 2010 at 12:22 am
For all the criticism of Global temperature reconstructions, no one has created a current Global reconstruction showing the current temperature rise to be within ‘normal’ parameters over the last 1000 years; all Global reconstructions, including even Loehle’s, show current Global temps being the highest for over a thousand years.
Archaeological findings and written records confirm the temperature was warmer.

July 21, 2010 2:00 am

Mike Haseler says:
I notice that Mr Schneider’s journal was called “Climatic Change”…Personally, it sounds to me like pretentious claptrap intended to make a third-rate subject sound as if it had some scientific credibility.

Mike, the historical dimension of the terminology is more interesting than you might suspect. It went from ‘global warming’ to ‘climate change,’ but before AGW there was ‘climatic change.’ (For a taste see the 22 essays compiled in ‘Climatic Change’, Shapley ed., 1953.) And hence the name of the journal edited by Schneider – the name has remained the same but the subject has changed. At that time it was mostly about geological-scale change – ice ages.
Likewise the Climatic Research Unit was established to allow the Met Office’s H H Lamb to continue his work uninterrupted on climatic change in historical time (that is, not geological time). After Brooks, Lamb was the leader in this field. His graph (view it here) was the source for that sketch you always see from the IPCC 1st assessment showing the medieval ‘climate optimum’ (another change there!). The founder of the ‘Climatic’ Research Unit remained a sceptic of AGW until he died in the late 1990s, but you won’t find that in Trevor Davies’ bio, nor in Wikipedia. So I wonder if we would be better not to snub climatic but to reclaim it!

July 21, 2010 2:02 am

A scientist would never make the following statement:
we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have
Stephen Schneider will be remembered for things like this.
Despite being in Portuguese, please check out the three videos at http://ecotretas.blogspot.com/2009/10/ecologistas-troca-tintas.html. They will give an extra idea of Schneider.
Ecotretas

David
July 21, 2010 2:08 am

toby says:
July 21, 2010 at 12:25 am
Wholeheartedly agree.

Peter Miller
July 21, 2010 2:13 am

Real Climate’s eulogy on Schneider is an absolute classic of exaggeration and distortion – for that reason alone, it is worth a read.
For example: ‘We honor Steve by raising our voices, and by speaking out when powerful “forces of unreason” seek to misrepresent our science’.

JTinTokyo
July 21, 2010 2:23 am

Tallbloke says:
“If you take a look at Steve McIntyre’s site climateaudit.org you’ll see he has done that many, many times. The point is, he didn’t get the same results, and the description of the methodology given in the papers was inadequate for replication purposes. So he then had to try to work out what the paper wasn’t saying. A time-consuming reverse engineering job.”
Tallbloke is correct. An empirical paper should make clear the methodology used to derive the results. I understand Toby’s unease in releasing code as I am uncomfortable in doing the same regarding my own models for my own work in business. However, I always attempt to make clear exactly how I derived my results so that my results can be replicated and my clients generally want to understand how I derived those results. That academics would attempt to obfuscate their methodology while at the same time hiding behind the skirts of “peer review” when their claims have the potential of affecting the lives of hundreds of millions, if not billions, is preposterous.

Eric (skeptic)
July 21, 2010 2:27 am

Schneider seems to have had a dichotomous personal and public persona. From what I have read, everyone who came into contact with him had great personal admiration for him. At the same time he seemed to scorn those he didn’t know, such as those of us in the great unwashed masses of the public. We were apparently not capable of appreciating the big picture and could not be trusted with the facts.

Patrick M.
July 21, 2010 2:40 am

toby said:
“The code you write yourself when doing a paper is not for commercial use and not user-friendly. It is usually idiosyncratic and uncommented (or at least poorly commented). Giving it to somebody implies spending valuable time explaining all the wrinkles as well – a waste of time with someone who should be able to do the job themselves.”
As a professional software developer I can state that if your code is “idiosyncratic and uncommented (or at least poorly commented)”, it is also likely to have bugs. That’s the major issue with your statement above: you are assuming that there are no bugs in the original code. So if somebody tries to replicate your results and fails because the original has bugs, where does that leave us? Obviously, if the original programmer didn’t have time to comment their code then they’re not going to have time to review the code of someone who is trying to replicate their results. So then what?
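
[Editorial illustration – a hypothetical sketch of the kind of bug Patrick M. means. Both lines run without complaint; only a comment records which one the author intended:]

# Two "means" that silently differ -- a classic uncommented-code bug.
temps = [14.1, 14.3, 13.9, 14.6]

mean_buggy = sum(temps) / (len(temps) - 1)  # divides by n-1, a habit carried over from variance formulas
mean_meant = sum(temps) / len(temps)        # the arithmetic mean the author actually intended

print(mean_buggy, mean_meant)  # 18.966... vs 14.225 -- without the code, a reviewer cannot tell which was run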

Jackie
July 21, 2010 2:43 am

Excellent post tallbloke.
Corrupt, inept, delusional scientists fear the blogosphere more than anything else.
The biggest threat to them from the blogosphere is truth.
It is the blogosphere that has allowed discussion to reopen on what was “settled science”. It is the blogosphere that has allowed discussion to reopen on settled “alternative energies”. It is the blogosphere that has allowed discussion to reopen in mainstream media about settled “environmentalism”.
The blogosphere is much bigger than one scientist or one concerned scientists club. For these people, that is the real problem with the blogosphere: the truth rises and collective intelligence prevails irrespective of education, affiliation, award, status, political belief or any other label. This is the hard part for celebrity scientists to accept. There are many, many better and more intelligent people rising to the surface in the blogosphere who are showing better scientific understanding.

Frank
July 21, 2010 3:00 am

Rick Bradford (July 21, 2010 at 12:54 am) took Schneider’s quote (see below) about telling scary stories out of context, a gross thing to do right now. When dealing with McIntyre and Mann, did Schneider succeed in keeping his scientific and public ethics from interfering? McIntyre appears to think he tried.
The real problem with Schneider’s ethics lies where science interfaces with the public, especially at the IPCC. The IPCC wants us to believe that its reports, particularly the main body, are pure science, not documents advocating legislation. I don’t think Schneider would ever advocate “telling scary stories” in an IPCC report. On the other hand, there is little doubt that scientists are happy to “make little mention of any doubts we might have” in the main body of an IPCC report, and they go along with “making simplified dramatic statements” in the SPMs. “Hide the decline” is a classic case of “making little mention of any doubts we might have”. “1998 was the warmest year of the millennium” was a classic example of a “simplified dramatic statement”. Schneider may have personally done an excellent job of handling his “ethical double bind”, but the climate science community has failed. They want to “save the planet”, not follow the IPCC’s charter to provide unbiased scientific information which will enable legislatures to chart an optimum path between the costs and benefits of fossil fuels. In public policy and law, we don’t trust humans to handle “ethical double binds”. When scientists think they can do so, they create a situation equivalent to allowing one person to serve as prosecutor, judge and jury. When scientists permit themselves to have (or tolerate colleagues with) large egos, they abandon the obligation to “include all the doubts, the caveats, the ifs, ands, and buts”. Doubt and ego rarely mix well.
“On the one hand, as scientists we are ethically bound to the scientific method, in effect promising to tell the truth, the whole truth, and nothing but — which means that we must include all the doubts, the caveats, the ifs, ands, and buts. On the other hand, we are not just scientists but human beings as well. And like most people we’d like to see the world a better place, which in this context translates into our working to reduce the risk of potentially disastrous climatic change. To do that we need to get some broadbased support, to capture the public’s imagination. That, of course, entails getting loads of media coverage. So we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have. This ‘double ethical bind’ we frequently find ourselves in cannot be solved by any formula. Each of us has to decide what the right balance is between being effective and being honest. I hope that means being both.” (Quoted in Discover, pp. 45–48, Oct. 1989.)

Ken Hall
July 21, 2010 3:12 am

“For all the criticism of Global temperature reconstructions, no one has created a current Global reconstruction showing the current temperature rise to be within ‘normal’ parameters over the last 1000 years; all Global reconstructions, including even Loehle’s, show current Global temps being the highest for over a thousand years.”
This is so wrong in so many ways…
Where do I start? “Global reconstruction” – well, this suggests a computer model. My response? The models are incomplete and so are clearly wrong. If you cannot model cloud activity properly, you cannot model climate. If you need to “add an anthropogenic forcing” to make your model work, then that is just saying that we had to make something up to get our flawed model to work! If the model was right, the anthropogenic signal would be patently clear in the tropics in the form of the tropospheric hot spot. It is not there, but you leave it in the model to make the model fit with the adjusted and homogenized temperature record taken from a diminished list of global temperature stations, which have seen a massive reduction in the number of stations in COLD locations.
The models are wrong, worthless and corrupt. End of that story.
“temperature rise to be within ‘normal’ parameters over the last 1000 years”
What is “normal”, and how and why have you defined “normal” as being something other than what nature is showing us now? There is a massive amount of geological, anthropological, geographical, cultural and historical evidence showing clearly that the world was much warmer within the last 1000 years. Farmsteads frozen in Greenland tundra are but one example, vineyards in Yorkshire another. But that is inconvenient to climate scientists, who revert to their unscientific habits of hiding or ignoring inconvenient evidence and become deniers of truth.
The current global temperatures are NOT unusual, and the climate has warmed, and cooled, by a far greater extent AND at a far more rapid rate than now, hundreds of times in the past before man was ever on this ocean covered rock we call Earth.

Yarmy
July 21, 2010 3:17 am

jcrabb says:
July 21, 2010 at 12:22 am
For all the criticism of Global temperature reconstructions, no one has created a current Global reconstruction showing the current temperature rise to be within ‘normal’ parameters over the last 1000 years; all Global reconstructions, including even Loehle’s, show current Global temps being the highest for over a thousand years.

It doesn’t.
http://www.ossfoundation.us/projects/environment/global-warming/myths/loehle-temperature-reconstruction/image/image_view_fullscreen
Note I’m not making any assertions about whether this is a better or worse reconstruction than any of the others (and as Phil Jones himself said, he doesn’t think it’s possible to do anyway!).

Christopher Hanley
July 21, 2010 3:31 am

jcrabb (12:22 am) says:
“….all Global reconstructions, including even Loehle’s, show current Global temps being the highest for over a thousand years….”
The Greenland Ice Sheet Project 2 temperature Holocene reconstruction shows that the Little Ice Age was the coldest period for about 4000 years.
http://westinstenv.org/wp-content/postimage/Easterbrook_graph.jpg

July 21, 2010 3:53 am


At 12:25 am on 21 July 2010, Toby had written:

As a working scientist, with a modest record of publications, I completely support Dr Schneider’s perspective on release of code. What is in question is replication, and a critic should be able to write his or her own code in order to test results with the same data. The code you write yourself when doing a paper is not for commercial use and not user-friendly. It is usually idiosyncratic and uncommented (or at least poorly commented). Giving it to somebody implies spending valuable time explaining all the wrinkles as well – a waste of time with someone who should be able to do the job themselves.

Others have made a similar point, but amplification and reinforcement are worth the effort, particularly with regard to the second of these two paragraphs.
My own “modest record of publications” is in clinical research, in which a great deal of what is done – on the pharmaceuticals and medical device manufacturers’ dime – has to be submitted to one or another office of the U.S. Food & Drug Administration and/or the European Medicines Agency (formerly EMEA). In order to facilitate the manufacturers’ compliance with the regulations of these agencies, clinical investigators withhold nothing from their reports, and the manufacturers are in turn expected to report the results of these clinical trials completely and truthfully.
Look to the suppression of adverse safety data by Merck Pharmaceuticals in the case of their VIGOR clinical trial of rofecoxib (Vioxx), the discovery of which in 2004 precipitated the withdrawal of that product from the market and multi-billion-dollar lawsuits for damages done to patients as the result of that and other suppression of pertinent safety information.
Really good example of what happens when corporate suits try to evade the rigor of scientific method as well as regulatory compliance.
The analytical methods used to assess clinical data are also emphatically “not for commercial use and not user-friendly.”
This notwithstanding, if anybody asks a clinical investigator or author “How did you get from the raw data in your study to these results you’ve collated in your reports?” the response is always “Here y’go.”
If you’re not willing to put “[t]he code you write yourself when doing a paper” in front of the whole damned world, why the hell did you write the paper in the first place?
A published paper is never anything more than a holographic sketch of the research work upon which it remarks. The machinery by which you got from your observations to your conclusions really ought to be as lucid and well-reasoned as anything that passes review and gets published, ought it not?
And if it’s not, if it’s kludgy and clumsy and embarrassingly inelegant, as long as it’s valid, how the hell can you make any kind of argument for withholding it from anybody who is interested enough in it to invest the skull-sweat required to examine it?
“Here y’go” does not imply “spending valuable time explaining all the wrinkles” at all.
You might add: “This stuff is pretty messy,” and ask: “If you find any mistakes I’ve made, would you please get back to me about it?”
Of course, that assumes you’re honest enough to admit that you’re capable of making mistakes.

July 21, 2010 3:54 am

Breathtaking post from “toby”.
Point 1 – as has been pointed out, if you’re writing ‘idiosyncratic’ poorly-commented code, then it’s likely to be wrong. More importantly, if you look at your own code and think ‘no-one but me is ever going to be able to understand this’, then you yourself will also be unable to understand it in six months’ time. Write the bloody code properly, for god’s sake.
Point 2 – no one is asking researchers to release professional-quality, properly-commented code (ignoring your implied lack of concern with governments making multi-billion dollar decisions based on amateur-quality, uncommented code). Another programmer will be able to read your code, even if it takes longer than it should – you can always release it with a ‘no support’ rider, just like all free software, if you don’t have time to answer their questions. However, if you’re embarrassed to let others see your crappy code, or you’re worried that someone will find that bug that you just couldn’t figure out, then perhaps you need to (a) take some programming lessons, and (b) add a caveat to the end of your papers: “software has not been reviewed, tested or QCd”.
Point 3 – If you don’t release code, then for replication to be possible, you *must* provide a fully-detailed functional specification of your code. This will, of course, take many many hours. Why not just release the code, which will take approx 30 seconds? The answer, as you kindly told us, is that the code is not fit to be seen in public. Why should we have any confidence in it?
I’m also a professional programmer, as are other commenters above, and this kind of crap really annoys me. I wish a few climate researchers would start posting their colleagues’ code to http://thedailywtf.com – I think we’d all learn a lot.

Ross Jackson
July 21, 2010 4:01 am

As another professional coder:
Instead of the code, a high level description of the algorithm to process the data would be sufficient. That would include all statistical methods and should be sufficient to enable data processing to be duplicated.
And if the “code” was properly written, the algorithm should be well documented in any case.
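
[Editorial illustration of Ross Jackson’s point – a hypothetical sketch in which the docstring states the algorithm fully enough that someone else could reimplement it without ever seeing this source:]

import statistics as st

def reconstruct(proxies, instrumental):
    """Toy composite-plus-scale reconstruction (invented example).

    Algorithm, stated fully enough to reimplement independently:
      1. Standardise each proxy series to zero mean and unit variance.
      2. Average the standardised series year by year into a composite.
      3. Rescale the composite to the mean and standard deviation of
         the instrumental record (a crude stand-in for formal calibration).
    """
    standardised = [[(x - st.mean(p)) / st.stdev(p) for x in p] for p in proxies]
    composite = [st.mean(vals) for vals in zip(*standardised)]
    m, s = st.mean(instrumental), st.stdev(instrumental)
    return [m + s * c for c in composite]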

LearDog
July 21, 2010 4:01 am

The IP argument is a pure smokescreen and they know it. In academia – if you publish it first – it IS your Intellectual Property.
Unless of course one wants to be paid for ones unique answer?

David
July 21, 2010 4:11 am

Jackie says:
July 21, 2010 at 2:43 am
Excellent post tallbloke.
‘Corrupt, inept, delusional scientists fear the blogosphere more than anything else.
The biggest threat to them from the blogosphere is truth.”
———————————————————————————–
Since only about 5% of climate scientists argue against global warming, either totally or guardedly, are the other 95% “corrupt, inept, delusional”? Fear the blogosphere? Fear what, hundreds of thousands of wannabe scientists who collectively are the beholders of the truth?
———————————————————————————–
It is the blogosphere that has allowed discussion to reopen on what was “settled science”. It is the blogosphere that has allowed discussion to reopen on settled “alternative energies”. It is the blogosphere that has allowed discussion to reopen in mainstream media about settled “environmentalism”
———————————————————————————-
How can something be re-opened if it wasn’t closed to begin with?
———————————————————————————–
The blogosphere is much bigger than one scientist or one concerned scientists club. For these people, that is the real problem with the blogosphere: the truth rises and collective intelligence prevails irrespective of education, affiliation, award, status, political belief or any other label. This is the hard part for celebrity scientists to accept. There are many, many better and more intelligent people rising to the surface in the blogosphere who are showing better scientific understanding.
————————————————————————————-
“The truth rises and collective intelligence prevails…”? How about collective ignorance and collective denial? Do they collectively disappear? How do you qualify and quantify the many, many better and more intelligent people? Are they qualified, or do they have a self-awarded status? Do you read all the scientific papers and understand them? Are you a scientist? I read many scientific papers and readily confess that there are some papers I don’t understand and some I do. I will never comment on a scientific paper I haven’t read or criticise one that I don’t fully understand. That is out of common courtesy to the scientist concerned.
I see too many idiotic comments in the blogosphere to realise that the truth faces a formidable barrier on its way to the surface.

Quinn the Eskimo
July 21, 2010 4:18 am

Worrying about intellectual property rights in computer code developed for an academic paper is revealing.
IP rights matter when you are trying to make money from the creation.
Climate scientists hoarding their code based on alleged IP rights are entrepreneurs in the competitive market for grant money, and their code and their data are the products they sell into this market. The Climategate emails showed the CRU boys and their collaborators treated data, both raw and processed, as proprietary IP as well. Indeed, they show that even collaborators on the same papers did not share their secret blend of herbs and spices with each other out of “respect” for each other’s “proprietary” work.
From the outside, this viewpoint seems fundamentally wrong. The work is either commercial, proprietary and for profit, or it is not. It is either the academic pursuit of science and knowledge for its own sake – and for the sake of “saving the planet” – or it is not.
The way they act tells us which it is.

Joe Lalonde
July 21, 2010 4:23 am

The current science is based on concepts of theories and observations that can be manipulated as changes occur.
Good hard physical evidence is not a part of this due to the current system that keeps the system in place for funding grants. Unless this system changes, the science will always be incorrect as science is not just one area of study.
Just to study the complexity of water is a massive area in itself due to all the factors that water can do and change. The planet having a stable speed and heat variation are extremely important.
The speed of this planet is not a consideration in lab testing, along with the EXACT positioning of this planet’s co-ordinates, which makes many theories incorrect. This would include quantum mechanics and deep space travel, time travel theories, etc. (To find a sister particle or travel, you need to be exact, as our solar system is traveling in space).
Even Darwin made the mistake that our planet evolved changes when new chemicals or effects were added worldwide to evolve to the current species we have today.
Who is an expert or reviewer of new science when they have not been exposed to it?
A scientific formula theory is very hard to break even with massive amounts of physical evidence.

BBk
July 21, 2010 4:29 am

“I cannot imagine a scientist not expecting to have his work examined, This bit about being “idiosyncratic and uncommented” is hogwash. and don’t get me started on “valuable time”.”
It’s rather like a mathematician publishing a paper proving some white-whale conjecture. Four pages of equations are left out of the middle because of “IP” issues, but everyone is just supposed to believe them that it works. That’s unacceptable in math, but apparently totally fine in science.

RockyRoad
July 21, 2010 4:34 am

toby says:
July 21, 2010 at 12:25 am
As a working scientist, with a modest record of publications, I completely support Dr Schneider’s perspective on release of code. What is in question is replication, and a critic should be able to write his or her own code in order to test results with the same data.
The code you write yourself when doing a paper is not for commercial use and not user-friendly. It is usually idiosyncratic and uncommented (or at least poorly commented).
——————-Reply
Then, as a computer systems engineer, I’d call such “code” gobbledygook or something else (your choice of demonic definitions). If your code isn’t “user-friendly” (and by that I understand you to mean nobody can understand it), or it is “idiosyncratic” (again, the computer instructions probably don’t do what they were intended to do), or “uncommented” (meaning you probably couldn’t figure out what the code was doing when you wrote it, and certainly down the line that perception got no better), then you are a lazy, confused programmer.
But in climate science it hasn’t been about replicability at all–it is about the end results, and in this way some argue that the end justifies the means. Take some scientist that has an ideological rather than scientific objective and he can take a set of data and, applying his proprietary black box (because how DARE anybody wish to look inside) come up with all sorts of strange and often idiosyncratic results. (Idiosyncratic code produces, at best, idiosyncratic results.)
Indeed, the code is exactly what provides answers as to how the data is being manipulated; what assumptions are being applied, and even what fudge factors are being dreamed up. As in the Harry_Read_Me file, it is obvious something terribly wrong was/is going on at the CRU. For them to continue to hide behind all sorts of indefensible excuses is not science at all. It is whitewash. It is refusing to “show your work”.
And neither is the CRU’s supposition correct that one can hide code, as your argument also futilely attempts. All my own code is commented, and my specification documents run into the thousands of pages – fully indexed, illustrated, cross-referenced and footnoted. And while this is documentation and code for commercial applications, were I a scientist trying to convince colleagues and policy makers of some nefarious doom mankind is supposedly perpetrating on the Earth, displaying the code by which you come to such a conclusion is the only, I repeat, the ONLY choice you have. To do otherwise is egalitarian snobbery of the worst kind. That your code is poorly written or embarrassing is no justifiable reason to hide it.

Jack Simmons
July 21, 2010 4:39 am

Back in the day, I learned to make a lot of comments in my code.
When debugging or answering a client’s question, I would always have to ask myself, “What was I thinking?”.
When tracking down a production problem, it was always a relief, sometimes comic, to come across some comments.
My favorite was the programmer’s comment: *** Don’t blame me. They made me do it this way. ***
If you don’t comment your code, expect it to be eventually tossed.

DirkH
July 21, 2010 4:40 am

Frank says:
July 21, 2010 at 3:00 am
“Rick Bradford (July 21, 2010 at 12:54 am) took Schneider’s quote (see below) about telling scary stories out of context, a gross thing to do right now. […]”
Thanks for providing more context about the infamous Schneider quote, Frank.
The problem I have is I’m not surprised. The context is exactly in line with the shortened version; the shortened version does not distort the meaning at all.
Choosing between efficiency and honesty, that’s what he said and that’s what he meant and yes, it’s gross – not quoting it, but the fact that he said it and meant it.

Jack Simmons
July 21, 2010 4:42 am

jcrabb says:
July 21, 2010 at 12:22 am

For all the criticism of Global temperature reconstructions, no one has created a current Global reconstruction showing the current temperature rise to be within ‘normal’ parameters over the last 1000 years; all Global reconstructions, including even Loehle’s, show current Global temps being the highest for over a thousand years.

Then all the Global reconstructions are wrong.
Apparently, from the first portion of your statement, it is not possible for model builders to get it right.

wws
July 21, 2010 4:59 am

A truly fitting memorial to Prof. Schneider will be if the last attempts at passing a “climate bill” in the Senate result in that idea being buried along with him.

David
July 21, 2010 5:01 am

Yarmy says:
July 21, 2010 at 3:17 am
jcrabb says:
July 21, 2010 at 12:22 am
When I follow your link, it takes me to a graph which has the following http address (not URL) http://www.NCASI.org/programs/areas/climate/LoehleE&E2007.csv
Your link takes me to the Ossfoundation, which said the following;
“Myths vs. Facts in Global Warming: This news and analysis section addresses substance of arguments such as “global warming is a hoax”, “global warming is a fiction”, “global warming is created to make money for Al Gore”. The main fallacy noted is that most arguments are facts out of context while others are simply false representations. When the facts pertaining to the arguments are viewed in context relevance becomes obvious. GLOBAL WARMING IS HAPPENING AND IT IS HUMAN CAUSED. (my capitalising)
http://www.ossfoundation.us/projects/environment/global-warming/myths/.
The graph you used is cherry picked and out of context. It came from Loehle E & E in a paper called “A 2000-Year Global Temperature Reconstruction Based on Non-Tree-ring Proxies.”

John Egan
July 21, 2010 5:05 am

Even if there is profound disagreement with someone –
To publish something like this a day after a person dies is unseemly.
It should be removed.

Ken Harvey
July 21, 2010 5:06 am

Were I a Maths teacher and set an equation for a student to solve, and he gave me the correct solution of 2.348527336 with no explanation as to how he got there, I would be disinclined to give him a mark. If, upon enquiry, he told me that he had used a method of his own devising which was his personal intellectual property and which I would, in all likelihood, find difficult to understand, then I would be inclined to kick him out of class. A computer model is nothing more than a mechanical aid to quickly solving an equation that would take great time to do manually. We need both the data and the code to establish whether our climatologist is competent to do long-winded, but essentially simple, sums.

Eric (skeptic)
July 21, 2010 5:25 am

It looks to me like the full context of the Schneider quote is that he favored leaving out caveats and uncertainties in certain public presentations. That may be OK in some cases (e.g. weather forecasting), but as some people have pointed out, perhaps not for trillion-dollar interventions in the economy.
Another possible problem is that Schneider, as a computer modeler, may have believed that his computer models expressed uncertainty (e.g. run-to-run differences) when in fact they were insufficiently detailed to be accurate or contained actual errors.

Joe Lalonde
July 21, 2010 5:35 am

Proxies (gotta love that word!) are used in place of having the actual data and are acquired in many different areas and forms. Essentially theories to back up the data.
Is the “normal” public aware that many areas use proxies and how they were created? How proxies affect the overall final numbers?
Science is following the actual evidence, not following the theory to create an outcome.
If Anthony had the magic money wand and said “I am giving grants to any scientific theories that are credible”, the different ideas and different mindsets would show how each of us thinks differently; yet other influences and ideas from others seem to alter the final outcome when allowed to follow the scientific trail. What we have acquired today in knowledge is not the same as 10 or 20 years ago in our own path of thought.

Richard111
July 21, 2010 5:39 am

If “personal computer codes with all their undocumented subroutines” are not cleaned up and offered with the data how will the author ever learn from any mistakes?

RockyRoad
July 21, 2010 5:43 am

John Egan says:
July 21, 2010 at 5:05 am
Even if there is profound disagreement with someone –
To publish something like this a day after a person dies is unseemly.
It should be removed.
————-Reply:
Nope. While I respect the memory of Stephen Schneider (from the perspective of a fellow human that will meet the same fate in the near future), his passing will either elevate his work or cause it to be evaluated realistically. Besides, I’m betting if all the above comments were glowing reviews, you’d have no problem with it. That’s a double standard.

Yarmy
July 21, 2010 5:46 am

David says:
July 21, 2010 at 5:01 am
The graph you used is cherry picked and out of context. It came from Loehle E & E in a paper called “A 2000-Year Global Temperature Reconstruction Based on Non-Tree-ring Proxies.”

The url of the graph is irrelevant: it’s just the first one google spat out when I went looking for it. The point is that – contrary to JCrabb’s assertion – the Loehle reconstruction does not show that current temps are unprecedented in 1000 years.

Andrew Zalotocky
July 21, 2010 5:54 am

Andrew30 says:
“If you can read the language you can easily understand the function”
If you can read the language you should be able to work out what the function is doing, but it won’t be easy if the code is an “idiosyncratic and uncommented” mess. More importantly, the fact that you can understand what it’s doing doesn’t necessarily mean you can understand why it’s doing it. Comments are necessary to explain the purpose of each part of the code and the assumptions behind it. If you don’t have that information it is impossible to be sure the code is actually doing what it was intended to do.
The act of writing comments also forces you to make your assumptions explicit, thus making it more likely that any flaws in the methodology will be spotted at that stage. It ensures that you will have this information to hand if you have to modify the code in future, as you will probably have forgotten some or all of it by then. Comments are an integral part of good code.
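
[Editorial illustration of Andrew Zalotocky’s point – a hypothetical two-line example. The code alone says only “subtract 14.0”; the comment is what records the assumption behind it:]

BASELINE_C = 14.0  # ASSUMPTION: global mean surface temperature, deg C, for a chosen reference period

def anomaly(temp_c):
    """Temperature anomaly relative to the stated baseline."""
    return temp_c - BASELINE_C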

Richard Tol
July 21, 2010 5:57 am

Steve Schneider was a great man.
Reproducibility is a cornerstone of science. If that requires publishing your code, so be it.

Kate
July 21, 2010 5:58 am

tallbloke at 12:05 am
“…May Schneider rest in peace, let’s bury the defunct peer review system with him…”
This is the real world of science, not the washed and scrubbed corporate media version.
Editors and scientists alike insist on the pivotal importance of peer review. The mistake is to have thought that peer review was anything more than a crude means of discovering the acceptability — not the validity — of a new finding.
The big lie is to portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. The system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong.

July 21, 2010 5:59 am

Chris Long says:
July 21, 2010 at 3:54 am
Breathtaking post from “toby”.

I am, like you, a professional programmer, and everything that you said is wrong because it is from the point of view of a programmer tasked with producing commercial code (whether for sale or use within a business). Science is very, very different.

Point 1 – as has been pointed out, if you’re writing ‘idiosyncratic’ poorly-commented code, then it’s likely to be wrong.

No, that’s not at all true. There is a lot of code out in the world that works perfectly well, and is almost untouchable because it’s so convoluted. In particular, code that evolves (as is almost always the case in a research project), and in a situation where the lifetime of the code is going to be short (i.e. ancillary to one short research project, not a system with a lifespan of five to fifteen years), is not going to be robust from a programmer’s point of view. The point in that situation is to get to the answer with the least possible effort, not to produce “great code.”
And before you go on about how better code saves time in the long run, if you think that then you’re not listening and appreciating the difference in the situation. Commercial code has a clear objective from the start. It has an important goal of being stable and maintainable. Not so with research.

Point 2 – …Another programmer will be able to read your code, even if it takes longer than it should…

But it’s not going to be read by a programmer; it’s going to be read by other scientists who only use programming as much as they need to. Scientists work in science, and use programming as a tool. Again, it’s a whole different world than you are used to, and forcing your own personal experience and paradigm onto a different environment, then declaring other people wrong, is not valid.

(a) take some programming lessons

This is just arrogant, and obnoxious. Just about everyone here should take science lessons before they post, but it doesn’t stop an obscene amount of arrogance, flavored with ignorance, from polluting the blogosphere. Scientists work hard at what they do, and put their time where best needed. You need to take some science lessons before making a comment like that.

Point 3 – If you don’t release code, then for replication to be possible, you *must* provide a fully-detailed functional specification of your code.

No, and the reason you don’t understand this is because you don’t understand either science or the peer-review system. The point is not for someone else to run the same program on the same data and get the same result. The point is not for someone to go through a program with a fine tooth comb looking for logic errors.
The way science works is that someone should be able to take the same hypothesis, and the same readily available data, and completely on their own perform an experiment that yields results consistent with the original study. If it doesn’t, there’s doubt about the hypothesis, and it’s left to science to determine how the two studies differed and obtained different results. But this is how you test a hypothesis, not by demanding detailed notes. It would be like walking into Madame Curie’s lab and insisting on using her lab equipment to repeat her experiments.
A major failing of every bystander in the issue of climate change is to apply their own personal, limited experiences and training to the problem, and then to believe that they, over everyone else, understand it best. There are far too many intelligent but ill-equipped engineers posting with hubris their confident (and wrong) assessment of climate science.
One would think that intelligent people would realize that the first step in attacking a new problem is research and education, and well beyond what people here bother to undertake. As an engineer, do you apply everything you did on the last project to the next, or do you learn what’s needed for the new project? Do you take an assignment in Python, but insist on trying to write Python code using Java techniques, then complain when things don’t work right that it’s because Python is a bad language?
One would also think that intelligent people would keep an open mind, and assume that if there’s something that doesn’t look right, that maybe the problem is a gap in their own education and experience, and not with the people who have spent a lifetime training for and working on a problem.
Note to world: study, learn, keep an open mind, and try to realize that arrogance and unjustified self confidence are failings, not virtues.

Ken Harvey
July 21, 2010 6:00 am

If Einstein had said “E = mc², but I am not going to tell you what the symbols stand for nor how I got there”, who would have taken any notice?

July 21, 2010 6:07 am

Quinn the Eskimo says:
July 21, 2010 at 4:18 am
Worrying about intellectual property rights in computer code developed for an academic paper is revealing.
IP rights matter when you are trying to make money from the creation.

Spoken like a true capitalist, and very, very wrong.
Not everything in the world is measured in money. Obviously a scientist is trying to build a reputation and advance his career. Many scientists have worked on the same problem at the same time, basically in a race to be first to the end result. Competition occurs everywhere in human society, not just in financial ventures. It occurs in science.
This crazy idea that somehow all scientists working on the same problem should be part of one, big, happy team is ludicrous. People are entitled to their own intellectual property, and they’re entitled to try to be “the one” to publish that next great ground breaking study. And if some of the information from their last paper is serving as the foundation for the next, then no, they do not and should not have to share it.
It would be like insisting that all businesses share all information, because the ultimate goal is a stronger economy for the country. Someone tried that once, and it failed miserably (see the U.S.S.R. and communism).

red432
July 21, 2010 6:16 am

peakbear says:
“””
I almost dread to say it, but what a lot of research institutions need is some decent project management to enable better efficiency. I know this goes against the grain of what a traditional PhD/postdoc is (working individually), but working together would allow much better work to be done and would also be more rewarding.
“””
In my experience you could use some psychologists and mediators also. I’ve seen large “simulations” built in bits by groups of “scientists” who hated each other and couldn’t stand being in the same room together. Needless to say, the results of the simulation were astounding.

July 21, 2010 6:16 am

Slightly off-topic, but it always grates on me to read about ‘computer codes’. Where did this idea come from? Code, in the context of programming source code, is an aggregate noun and does not require a plural form. You wouldn’t say that a beach contains thousands of tons of sands.
A single computer program is compiled from ‘code’; a suite of computer programs is also compiled from ‘code’. A large project would have hundreds of thousands of lines of ‘code’. The Climategate archive contained a lot of ‘code’.
Does anyone know what grammatical rules people are using when they decide to talk about ‘computer codes’, plural?

Ryan
July 21, 2010 6:29 am

: I get really fed up with the nonsense that seeks to protect climate scientists on the basis that they are somehow super-intelligent humans and the rest of us are not worthy to challenge their output. In passing, I remember the invention of the atom bomb, the cover-up of BSE and the release of Thalidomide as rather obvious examples where science outdid itself in releasing the product of its hyper-intelligence on the semi-retarded masses who simply couldn’t cope with its supreme creativity.
I, for one, only have an IQ of 137. I couldn’t POSSIBLY be expected to understand how a thermometer is used to measure temperature over long periods of time, nor to understand that perhaps some trees grow better in certain climates, producing a higher density of tree rings. After all, I gave up the possibility of a lifetime doing research for a doctorate at a third-rate university such as the University of East Anglia to make MONEY in a high-tech industry. What a fool! I could have stood shoulder to shoulder with such giants as Dr Jones.
Naturally I, along with all my fellow dimwits, should volunteer for the gas chambers so that planet earth can be left to be the playground of such scientific luminaries as Al Gore, who will regale the children of this new Utopia with stories of an Earth’s core at millions of degrees Celsius, unchallenged by those of us who know no better.

Scott B
July 21, 2010 6:39 am

Frank says:
July 21, 2010 at 3:00 am
“Rick Bradford (July 21, 2010 at 12:54 am) took Schneider’s quote (see below) about telling scary stories out of context, a gross thing to do right now. When dealing with McIntyre and Mann, did Schneider succeed in keeping his scientific and public ethics from interfering? McIntyre appears to think he tried.”
How is that quote out of context?
Schneider wanted to wear two hats, one as scientist, one as activist. But the only reason anyone took his opinion seriously was because they thought, as a scientist, he would stick to the facts. Instead, he used his scientist hat to “trick” people (trick being a good way to get people to believe something that is not proven) into giving more weight to his activist views than they deserved.
Did he ever say, “OK, now I’m wearing my activist hat, so while what I say is not supported by the science, I think it eventually will be, and is too important to let the facts prevent us from taking action anyway”? I doubt it. That would ruin the trick.

vigilantfish
July 21, 2010 6:41 am

Frank says:
July 21, 2010 at 3:00 am
Rick Bradford (July 21, 2010 at 12:54 am) took Schneider’s quote (see below) about telling scary stories out of context, a gross thing to do right now.
———————
Most of us here are familiar with the longer version of the quote. Does including the preamble, with its high-sounding justification for choosing between honesty and efficacy, mean the ends justify the means? “And like most people we’d like to see the world a better place, which in this context translates into our working to reduce the risk of potentially disastrous climatic change. To do that we need to get some broadbased support, to capture the public’s imagination. That, of course, entails getting loads of media coverage. So we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have.” Most of us would like to see the world a better place, but it is totalitarian to assume that one’s personal version of what is ‘better’, or a specific interest group’s, should be imposed on others through the use of distortions. That, unfortunately, is the legacy of Schneider: a ‘science’ that serves a totalitarian agenda.

July 21, 2010 6:41 am

sphaerica,
You leave a vitally important element out of all your comments: replication.
You try to cover that omission by talking about intellectual property rights, but that is a red herring argument any time there is public money involved — and in the climate sciences, public money is always involved.
Without replication, falsification is extremely difficult. McIntyre was able to reverse engineer Mann’s fraudulent hokey stick through exceptional diligence, and because of some oversights by Mann.
But to this day, 12 years after MBH98, Michael Mann still refuses to publicly archive his code and methodologies — which were paid for by millions of taxpayers. His work product is not Michael Mann’s personal intellectual property. It belongs to the people who paid for it.
People like Schneider gave Michael Mann a free pass to unethically withhold the details of his work, by saying it’s OK to tell half-truths [which as we know are whole lies] in order to advance their agenda.
Refusal to cooperate in replication means that we are expected to take Mann’s word. But “trust me” is not an answer when so much is at stake. Since replication is an essential part of the scientific method, what Mann is doing is not science. It is anti-science propaganda, and enabling that propaganda is a universal tactic of the alarmist contingent. If the alarmist crowd started to honestly work within the scientific method, the whole CO2=CAGW edifice would promptly come tumbling down like the house of cards that it is.

July 21, 2010 6:46 am

With all due respect to Prof. Schneider, he was a scientist very much centered on the public arena. With that go all the positives and negatives of being a public figure. At his unfortunate passing, it seems fitting to reflect on all aspects of his prominently public life. My emphasis is on all. I further suggest that the negative aspects be treated in good taste.
John

July 21, 2010 6:55 am

sphaerica says:
July 21, 2010 at 5:59 am
Chris Long says:
July 21, 2010 at 3:54 am
Breathtaking post from “toby”.
I am, like you, a professional programmer, and everything that you said is wrong because it is from the point of view of a programmer tasked with producing commercial code (whether for sale or use within a business). Science is very, very different.

I understand the points you’re making, but I (clearly) disagree. Of course, programs written to support a single paper do not need to meet the same maintainability and quality standards that commercial software does. My own work involves writing a lot of one-off programs (analysing clinical trials data) but that does not mean that it’s ok for them to contain errors or to be incomprehensible.

But it’s not going to be read by a programmer, it’s going to be read by other scientists who only use programming as much as they need to.

You seem to be saying that it’s ok to write incomprehensible code because nobody else is going to try to understand it anyway. That doesn’t fill me with confidence, and is why I (somewhat flippantly) encouraged Toby to educate himself on the benefits of well-documented code.

No, and the reason you don’t understand this is because you don’t understand either science or the peer-review system. The point is not for someone else to run the same program on the same data and get the same result. … The way science works is that someone should be able to take the same hypothesis, and the same readily available data, and completely on their own perform an experiment that yields results consistent with the original study. If it doesn’t, there’s doubt about the hypothesis, and it’s left to science to determine how the two studies differed and obtained different results.

No, I think you missed the point. Of course, in an ideal world, the replication of a study would involve writing brand new code to implement the same processes, thus confirming the results – that’s what we do in my shop: dual programming. However, for a paper to be replicable, it has to accurately describe its methods – you can’t just say ‘we took the data and ran it through our secret algorithm’, nor can you just say ‘we took the data and applied a reverse time-independent wibbleflab test’. Many climate science papers do not adequately describe their methods, as Steve McIntyre has amply documented, and as Phil Jones has admitted. It would be like Marie Curie publishing on the radioactivity of uranium without describing the electrometer apparatus used to make the measurements.
It would be extremely easy for researchers to accurately document their methods by simply releasing their code. The source code developed for the paper forms an ideal description of the methods (assumptions, mistakes and all). Researchers’ reluctance to release code is a major problem, and your post, while giving me a few valuable pointers on how science works, does not address this.
This seems to be a very common misunderstanding in the ‘release your code’ debate. The point is not that people want to run exactly the same code on the same data; nor is it that people want to pick holes in researchers’ programming techniques. The point is simply that the code, which already exists and is sitting on a machine somewhere, provides a complete and unambiguous description of what was actually done with the data, at no extra cost to the researchers.
I would be interested to know whether you can provide a good reason for not providing either (a) the code or (b) an equally full description of the methods.
As a follow-up question – in your opinion, if a paper relying on computer data processing does *not* adequately describe the algorithms used (by whatever method) and thus anyone trying diligently to replicate the paper’s results is thwarted, should the original paper be disregarded?
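To make that concrete, here is a minimal sketch (Python, with a hypothetical smoothing step; the window length and edge handling below are illustrative, not anyone’s actual method). A paper that says only “the data were smoothed” pins down none of the choices below; the code pins down all of them:

import numpy as np

def smooth(series, window=11):
    # One specific, unambiguous method: an 11-point centered moving
    # average, with the ends padded by repeating the edge values.
    # Every choice a verbal description could leave open is settled here.
    padded = np.pad(series, window // 2, mode="edge")
    kernel = np.ones(window) / window
    return np.convolve(padded, kernel, mode="valid")

Two labs can argue about what “smoothed” means; they cannot argue about what this function does.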

DennisA
July 21, 2010 6:57 am

Perhaps we should remember the e-mail from Phil Jones, who said he was “cheered” in a strange way, by news of the sudden death of John L. Daly, who died of a heart attack in 2004.
There is a lot of insight into Professor Schneider’s attitude to science, the public and the media at http://stephenschneider.stanford.edu/Mediarology/MediarologyFrameset.html?http://stephenschneider.stanford.edu/Mediarology/Mediarology.html
Masses of links….

Scott B
July 21, 2010 7:02 am

sphaerica says:
July 21, 2010 at 6:07 am – “It would be like insisting that all businesses share all information, because the ultimate goal is a stronger economy for the country.”
No, it would be like requiring all publicly traded companies to provide audited financial statements, so we could know that their claims of profits and growth are legitimate. That way, no one gets ripped off by investing with a company that is misrepresenting itself. And we absolutely require that, because all public companies greatly benefit from appearing to be successful, in both increased investments and prestige.
As you point out, scientists have the same motivation to appear successful. But if we are unable to confirm their findings, how can we know these findings are legitimate? Further, as you said, what if “some of the information from their last paper is serving as the foundation for the next?” Wouldn’t it be possible for a scientist to appear to be very successful, publishing several papers, each building on the last, solving many difficult problems without anyone ever being able to confirm the results?
It seems like this is exactly what happened with the hockey stick and all of its spin-offs. A lot of garbage data was massaged with the programming to appear to deliver a telling message. But since no one could review the programming, no one could confirm or deny the results. Then Mann and other scientists built upon the hockey stick data and code, all coming to “better and better” conclusions, without anyone being able to confirm that dendrochronology was actually a useful measure of temperature.

vigilantfish
July 21, 2010 7:05 am

sphaerica says:
July 21, 2010 at 6:07 am
The way science works is that someone should be able to take the same hypothesis, and the same readily available data, and completely on their own perform an experiment that yields results consistent with the original study. If it doesn’t, there’s doubt about the hypothesis, and its left to science to determine how the two studies differed and obtained different results. But this is how you test a hypothesis, not by demanding detailed notes. It would be like walking into Madam Curie’s lab and insisting that you use her lab equipment to repeat her experiments.
—————–
As has been argued here and at Climate Audit and at many other websites many times, the data is not readily available. Why else did Steve McIntyre have to battle for years to get the data? You also don’t understand how science works. There have been many instances in which scientists could not replicate results because descriptions of equipment and methods do not exactly recapture the procedures that were used. Some scientific inquiries during scientific disputes have required other scientists to come into a lab to observe the original scientists at work; occasionally the scientists involved in making a claim unconsciously perform some manoeuvre that turns out to be critical to the experiment. There are cases in which perfectly valid results cannot be replicated until scientists at other institutions learn the exact techniques used by observing those who achieved the original breakthrough.
For example, as John C. Bailar III noted in “The Role of Data Access in Scientific Replication”, a paper presented to “Access to Research Data: Risks and Opportunities”, Committee on National Statistics, National Academy of Sciences:
In chemistry, I recall hearing about a specific, real case of a new chemical synthesis in which, despite the best efforts of the originator and the replicator, the latter could not get a specific synthesis to work, until the two of them followed the protocol side by side and found that the critical difference was in whether a metal stirring rod touched the side of the beaker during mixing.
Canadian fisheries biologists, who created the foundation of a thriving but embattled fish farming industry in the Bay of Fundy and in British Columbia, attempted to set up fish culture operations for years, based on descriptions published by successful Norwegian scientists. It was not until Canadian scientists spent a season in Norway observing (rather than reading about) the techniques and technologies involved that Canadian fish farming actually began to succeed.
Therefore if scientists had not been able to recreate Mme. Curie’s experiments just using her written accounts, it would have been quite reasonable to request a direct demonstration from her, or to use her laboratory equipment in the attempt. Likewise, since there is no way on earth that climate ‘science’ can be properly audited without some knowledge of how the data was manipulated, by a provision of algorithms or code, the requests for information by McIntyre and other individuals who have a real respect for scientific understanding should be universally accepted as falling within sound scientific practice. And not just accepted, but actively supported by scientists everywhere!

Andrew30
July 21, 2010 7:08 am

Andrew Zalotocky says: July 21, 2010 at 5:54 am
“More importantly, the fact that you can understand what it’s doing doesn’t necessarily mean you can understand why it’s doing it”
The paper explains the thinking and the why, the formula or program is the what and the how.
Comments are like the pictures in a children’s book: they are not the story, they are only an illustration. Comments are often not up to date and almost never corrected once they are entered. Computers do not run comments.
Anyone who debugs comments is doomed to fail, debug only code.
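A two-line illustration of the point, in Python (hypothetical, of course), where the comment tells one story and the code tells another; only the second one runs:

def adjust(temps):
    # Convert the readings from Fahrenheit to Celsius.
    # (That is what the comment says; the code below forgot the 5/9
    # scaling, and no amount of reading the comment will reveal that.)
    return [t - 32 for t in temps]

Debug the code, not the comment.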

vigilantfish
July 21, 2010 7:15 am

sphaerica says:
July 21, 2010 at 6:07 am
People are entitled to their own intellectual property, and they’re entitled to try to be “the one” to publish that next great groundbreaking study. And if some of the information from their last paper is serving as the foundation for the next, then no, they do not and should not have to share it.
————–
Ahh, yes, if only John von Neumann had not leaked the paper that revealed how to make computers, and the blueprints for computer architecture, John Presper Eckert and John Mauchly could have been computer billionaires, and we’d all be paying royalties to their estates. So what if the work was 100% financed by tax-payers?
The question of intellectual property is simple only if one is privately funded. Corporate scientists have no intellectual property rights that are independent of the corporate interest they serve, so why should tax-funded scientists be able to lord it over us, tell us how to live our lives, serve up scary stories to get more funding, and then refuse to turn over the basic data and algorithms? Sure there are no problems if the science remains esoteric and removed from political and moral questions, but when a public agenda is being backed up by science, nothing should be secret. Something’s a little askew in your universe.

Nuke
July 21, 2010 7:22 am

The National Science Foundation has asserted that scientists are not required to present their personal computer codes to peer reviewers and critics, recognizing how much that would inhibit scientific practice.

Whoa!

Pamela Gray
July 21, 2010 7:27 am

Hmm. I am just a one-article has-been researcher, but I still have my raw data on a disk, still have the computer program used to do the statistical analysis, still have the schematics of the electronic equipment used, still have hard copies of the raw tracings, still have drafts of the final article. If someone were to ask for these items, I would provide certified copies right now. The fact that others have duplicated our work, at the same facility as well as at competing labs, is a feather in my cap, not an opportunity to hoard so that we can continue to claim king of the hill.
Sphaerica, I’m thinking that arrogance can be seen in the mirror.

Nuke
July 21, 2010 7:29 am

jcrabb says:
July 21, 2010 at 12:22 am
For all the criticism of Global temperature reconstructions, no one has created a current Global reconstruction showing the current temperature rise to be within ‘normal’ parameters over the last 1000 years; all Global reconstructions, including even Loehle’s, show current Global temps being the highest for over a thousand years.

The exact opposite is true. Nobody has shown 20th century warming to be unusual or unprecedented.

Nuke
July 21, 2010 7:33 am

@sphaerica
A computer model run is not an experiment. Computer models do not output data or facts. The code used to build these models is part of the process.

tallbloke
July 21, 2010 7:36 am

sphaerica says:
July 21, 2010 at 5:59 am
A major failing of every bystander in the issue of climate change is to apply their own personal, limited experiences and training to the problem, and then to believe that they, over everyone else, understand it best. There are far too many intelligent but ill-equipped engineers posting with hubris their confident (and wrong) assessment of climate science.
Note to world: study, learn, keep an open mind, and try to realize that arrogance and unjustified self-confidence are failings, not virtues.

Do you believe that should apply to climate ‘scientists’ too? Is it not possible that these groupthinking cliques have arrived at a “wrong assessment of climate science” through their mutual pal review system?

Nuke
July 21, 2010 7:38 am

Computer code is deterministic. Run the same code with the same data and the results are always the same.
The point is not whether somebody can replicate the results of somebody else’s computer model. Computer model runs are not experiments. The point is whether the computer model properly performs as claimed. Are the algorithms expressed properly? What values were assumed for the various forcings? And yes, is it full of bugs?
None of these things tell us if the model accurately models the real world. Only comparing model results to actual observation will do that. A computer model is an expression or illustration of an hypothesis.
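Verifying the deterministic part is mechanical once code and data are in hand; a sketch in Python (analyse() is a stand-in for whatever processing a paper performed, not anyone’s actual method):

import hashlib, json

def analyse(data):
    # Stand-in for the published processing chain.
    return sorted(data)

def digest(result):
    # Hash the output so two runs can be compared exactly, bit for bit.
    return hashlib.sha256(json.dumps(result).encode()).hexdigest()

data = [3.1, 2.7, 9.4]
# Same code, same data: the digests must match on every run.
assert digest(analyse(data)) == digest(analyse(data))

Whether the model matches the real world is, as said above, a separate question that only observation can answer.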

Jacob
July 21, 2010 7:47 am

Andrew30 says: July 21, 2010 at 12:56 am: – He is right.
Computer code is a language that trained people understand. I am a software engineer, and I know. I never hesitate to hand over code when a colleague asks me; I know he will understand. (I do not write many comments, on principle; the code itself is the best comment.)
Sometimes I need to be paid for the code (for my work!), but I don’t think that applies in Mann’s case.
Suppose there is a fragment in the paper written in, say, Spanish. Could Mann refrain from divulging this piece, stating “it’s not documented, you would not understand it”? If it’s part of the paper or its supporting calculations, it must be divulged.
There is no excuse for not posting the code, except the desire to prevent replication. That the NSF agreed to this is appalling.

Tenuc
July 21, 2010 7:54 am

tallbloke says: @sphaerica
July 21, 2010 at 7:36 am
“Do you believe that should apply to climate ‘scientists’ too? Is it not possible that these group-thinking cliques have arrived at a “wrong assessment of climate science” through their mutual pal review system?”
Ah, Tallbloke, you seem to have forgotten that Sphaerica is talking about the faith-based cargo-cult ‘science’ of CAGW, where the usual rules of Popperian falsification do not apply. The cosy and lucrative CAGW cabal can only survive via cronyism and deliberate obfuscation of the truth. It’s about as scientific as reading tea leaves!
It’s an easy mistake to make:-)

trbixler
July 21, 2010 8:00 am

“Just take my word for it”: by that standard, Philosophiæ Naturalis Principia Mathematica would not have needed to be published or understood. Somehow the concept of proof of an assertion is of no interest in ‘climate science’. Newton would not have enjoyed these times.

Dave Springer
July 21, 2010 8:02 am

toby says:
July 21, 2010 at 12:25 am
As a working scientist, with a modest record of publications, I completely support Dr Schneider’s perspective on release of code. What is in question is replication, and a critic should be able to write his or her own code in order to test results with the same data.
The code you write yourself when doing a paper is not for commercial use and not user-friendly. It is usually idiosyncratic and uncommented (or at least poorly commented). Giving it to somebody implies spending valuable time explaining all the wrinkles as well – a waste of time with someone who should be able to do the job themselves.

If by “idiosyncratic and uncommented” you mean rushed, amateur, and buggy then I’ll agree. But that’s the whole point. Raw data, if transformed by software, should be accompanied by at least the executable form of the software. No one’s sloppy source code is exposed there but it makes it much easier to test its functionality. Why make replication harder unless it’s because the author fears there will be flaws found? That doesn’t display much respect for the scientific method.
Scientists are supposed to encourage others in any reasonable way they can in replication. We’re not talking about the author buying beakers and reactants for others to use in replication here. Software and the source code thereof are virtually free of time and cost to distribute. Put a copy on an FTP server and it’s done. Do you not support the scientific method, Toby, and reasonable efforts to assist others in replication?
I’m not a climatologist but as a programmer with decades of experience in commercial software development I must point out that a climatologist has as much expertise in writing software as I do in global circulation modeling. I understand quite well why a climatologist wouldn’t want a software expert examining his program source code. I think you know why too, Toby.
That said, there should be no question of releasing source code that was developed with public funding unless there is some national security reason for withholding it. That comes under the rubric of Freedom of Information law. The public has a legal right to it. Code developed with private funds can of course be released or not as the author wishes, but withholding it should at least be viewed as not fully supportive of the scientific method, and cause for suspicion that the work may have deliberately hidden flaws.

Pamela Gray
July 21, 2010 8:09 am

The posts may be talking about two different types of codes used for two different types of research. There are raw data adjustment and infill codes, and then there are model codes. In either case, the results of these researchers are being used to create public law. Therefore the public has a complete right to see this work displayed in all its details, regardless of whether or not we understand it. Those types of restrictions were once used to deny the right to vote.
For temperature purposes, the code has to do with developing a running average and trend line from swiss-cheese data, via infills and adjustments; in other words, the creation of data instead of staying with the typical 999 missing-data flag, plus adjustments to data related to the researcher’s idea of contamination. It is the created/adjusted data code (adjustments to raw data and data infill) that is of interest and that is apparently not available to the degree it should be. The very premise of the research sits wholly on this code. The code is the experiment being done on raw data and should be made available.
Model codes are another thing. While they also should be readily available and are also “the experiment”, these codes try to replicate forcings and feedbacks.
Both types of code should be available for other researchers and the public, to examine and critique, duplicate, or improve.
A point about accusations of arrogance. I bought the damned stuff, the lab, the equipment, the lab assistants, the cost of publication, and I could be asked to pay even more for it if it is used for the creation of laws. So excuse me all to hell but I want to know just what it is I am buying. If that makes me arrogant, fine. I’ll be arrogant, especially when someone else’s hand is in my back pocket. If there ever was a case between who should be and has the right to be arrogant, it would be the paying public, not the researchers. As my grandma used to say to me, you need an attitude adjustment. And shortly thereafter, I got a knuckle on my head. If researchers don’t like that, then get the hell out of the kitchen.

jaypan
July 21, 2010 8:16 am

Some commenters (toby, David) seem to suggest that the way original data are “processed, enhanced and presented” is the IP of the inventor/programmer and not for the eyes of the public or critics. Well, this explains a lot.
Did I miss the “sarc/” somewhere?

July 21, 2010 8:17 am

Those arguing to distraction that computer code supporting academic papers need not be released, commented, easy to read, or readable by non-programmers are missing the essential point. What is required to reproduce and/or replicate the findings of a study is the methodology used to analyse the data and draw the conclusion.
In the absence of fully documented methodology, the code performing the analysis/homogenising/etc should be released. If you do not wish to release your code, you must instead be prepared to explain your methodology fully and in detail in another format on request.
What is NOT acceptable, and is in this circumstance the net result of those arguing that code may be protected by IP, is that NEITHER the code NOR the methodology is provided for effective replication. It is against this background of withholding the entire methodology, or critical portions of methods of analysis, that the call to release code has been made. It is because the claim has been made, that the mechanism is too complex to explain, that those who wish to test replication have requested the programming code instead.
If you release an academic paper making assertions and drawing conclusions, you diminish to NOTHING the scientific value of your paper at the first sign of resistance to requests to reveal your methodology, if you have not already fully defined it within the pages of your paper, which, arguably, you should have done anyway.
Those arguing that code should be for code monkeys and not for scientists to use in the study and subsequent replication of findings ought to be distinguishing themselves from those who would argue that methodology is itself subject to intellectual property rights. The latter argument has no place in academic or scientific research.

R Connelly
July 21, 2010 8:17 am

Stephen Schneider was a great man and a great scientist. Not perfect, but great.
There has never been a great reason for not releasing the data and the code. It could be (and has been) argued that it’s OK to withhold if you have future papers in the works. However, when the research and papers are publicly funded, it all belongs to the public and should be released ASAP.
Actually, IIRC, I believe that Dr. Mann now releases all his data/code for his recent papers, as do many other researchers.

Chris B
July 21, 2010 8:18 am

Schneider reminds me of the Wizard in “The Wizard of OZ”,……… except the Wizard helped Dorothy get back to Kansas after he was discovered to be the ordinary man behind the illusion.
Who’s gonna help us get back to “Kansas” now that Schneider has passed?

Dave Springer
July 21, 2010 8:18 am

@Nuke
“Computer code is deterministic. Run the same code with the same data and the results are always the same.”
If identical results from identical input is a design goal then I’d agree with that in principle but there are a great many possible pitfalls in reality. This is especially true when floating point calculations are involved and the underlying hardware, firmware, and ancillary software is not identical.
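A concrete instance (Python here, though the effect is language-independent): floating-point addition is not associative, so anything that reorders operations, say a different compiler optimisation level or a different processor count, can change the low-order digits:

x = (0.1 + 0.2) + 0.3   # 0.6000000000000001
y = 0.1 + (0.2 + 0.3)   # 0.6
print(x == y)           # False: same numbers, different order, different bits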

Dave Springer
July 21, 2010 8:30 am

Long

This seems to be a very common misunderstanding in the ‘release your code’ debate. The point is not that people want to run exactly the same code on the same data; nor is it that people want to pick holes in researchers’ programming techniques. The point is simply that the code, which already exists and is sitting on a machine somewhere, provides a complete and unambiguous description of what was actually done with the data, at no extra cost to the researchers.

Bingo!

dp
July 21, 2010 8:31 am

To distill this story to its critical component, we are left with this unsettling fact: the term “peer reviewed” is meaningless. In fact it’s insulting to claim that one has written peer-reviewed papers as a means of validating one’s effort, and utterly vain to state that one has peer-reviewed other papers. It is a failed process in desperate need of overhaul. I’ve added the term to my list of weasel words.

k winterkorn
July 21, 2010 8:31 am

Stephen Schneider was no doubt brilliant and charming. He was also clearly a post-normal scientist and an apocalyptic grandstander. His jump to Global Cooling in the 1970s, and then his turnabout jump to Global Warming, shows either a lack of scientific discipline in his thinking, or fraud in his public presentation of his beliefs, or some of both.

Alan F
July 21, 2010 8:33 am

Nuke,
Given the end result wished for, I can write in f77 (I still have all my notes, even) as many nested procedures as are required to accomplish it from any set of numbers, and so can any other IT dweeb schooled in the late ’80s. The bad bit here appears to be that academic coding requires little if any outside debugging and certainly scarce documentation, while industrial coding (my job) is checked here, back in Germany, and once again here in good old Canuckville before it makes its way into any systems or machinery. Bad code in information systems costs coin; in machinery, lives; and neither is allowable in business. Why should the script kiddies in academia, whose code is setting the stage for %GDP-level spending, be afforded any less vigilance?

Dave Springer
July 21, 2010 8:42 am

sphaerica says:
July 21, 2010 at 6:07 am

The way science works is that someone should be able to take the same hypothesis, and the same readily available data, and completely on their own perform an experiment that yields results consistent with the original study. If it doesn’t, there’s doubt about the hypothesis, and it’s left to science to determine how the two studies differed and obtained different results. But this is how you test a hypothesis, not by demanding detailed notes. It would be like walking into Madame Curie’s lab and insisting that you use her lab equipment to repeat her experiments.

I thought science was supposed to be about building upon the shoulders of giants. The way you describe it, science is more about reinventing the wheel.

July 21, 2010 8:44 am

An odd fundamental contradiction exists in the CAGW line of reasoning as follows:
Most CAGW proponents say we must act quickly now to save ourselves from “...the end of the world as we know it...”, and yet most CAGW advocates actively defend some mainstream climate researchers’ attempts to block/delay access to data, methodology and code from research performed with public funds. The contradiction for these CAGW proponents is that, if it is so important to save the world, then it is criminal of them to support researchers who block/delay access to the info.
My thanks to R.E.M. for their great song... it is so quotable in the current CAGW era. :)
What would be their purpose in maintaining such an obvious contradiction? This is not a rhetorical question. Seriously.
John

jorgekafkazar
July 21, 2010 8:45 am

Sphaerica says (among numerous other ridiculous assertions): “The point in that situation is to get to the answer with the least possible effort, not to produce ‘great code.’”
I’ve seen code produced with the “least possible effort.” It is shoddy, impenetrable, and incompetent. It is, ipso facto, laden with errors both large and small. We’re dealing with a situation in which trillions of dollars and billions of lives will be risked. Accepting anything less than “great code” is indefensible.

Jeff
July 21, 2010 9:04 am

Toby ???
excuse me but WHAT TF are you talking about …
If you publish results and raw data without your code then what is the reviewer actually doing ? making up their own CODE ??? are you kidding me … the work being reviewed is the code you dolt …

Roger Knights
July 21, 2010 9:12 am

“Has anyone else noticed how anything simple in global warming immediately gets a much longer more complicated name? Manmade-warming = Anthropogenic Global Warming? To me this is always an attempt to make something appear more complicated than it really is.”

More likely it was an attempt to duck a handbag-bashing. (Due to use of the M-word.)

Jeff
July 21, 2010 9:13 am

As background, I have spent much of the last 18 years testing input data being run thru software and looking at the output data and validating that the code did exactly what it was supposed to do … not having either the actual code or a detailed description of exactly what the code has done in each and every possible scenario would cause me to fail a program without running a single test … I would have nothing to test … and in my world untested = failed …
No, I don’t need the code but without a detailed description of how the code acts on the data I would be unable to “manually” replicate its behavior to validate its output …
It is simply beyond belief that anyone could think the code is not what is being tested by reviewers …

Kate
July 21, 2010 9:26 am

k winterkorn says at 8:31 am
“Stephen Schneider…His jump to Global Cooling in the 1970s, and then his turnabout jump to Global Warming shows either a lack of scientific discipline in his thinking or fraud in his public presentation of his beliefs, or some of both.”
…Stephen Schneider has been around a long time, and by the time the great “new ice age” scare was over, he knew his way around the political/academic/scientific grant-giving government machine like it was his own living room. Stephen Schneider wanted fame, recognition, and his bills paid, and he saw his big chance when the global warming rocket was about to take off, so he rushed on board, dumping all his previous work on global cooling as fast as possible. No doubt he was hoping that nobody would notice, or if they did notice they wouldn’t care enough to make a big deal about it.

john a
July 21, 2010 9:27 am

re: jcrabb’s assertion that no one has said that the late 20th century rate of temperature change has been seen before in the last 1,000 years…
Monckton’s presentations cite several since the Middle Ages; it’s one of his central points, that temperature can change abruptly under natural conditions.

George E. Smith
July 21, 2010 9:31 am

“”” jcrabb says:
July 21, 2010 at 12:22 am
For all the criticism of Global temperature reconstructions, no one has created a current Global reconstruction showing the current temperature rise to be within ‘normal’ parameters over the last 1000 years; all Global reconstructions, including even Loehle’s, show current Global temps being the highest for over a thousand years. “””
That’s an odd conclusion; a very odd conclusion.
We are talking about the global average, over some significant period of time, of a variable that on any given day can range over more than 150 deg C (some people suggest as much as a 180 deg C range); a variable for which credible data for almost 3/4 of the planetary total simply is not available (if you take the remote “ground” areas of the world, that is over 3/4 of the total area); and the area that has had some sort of credible monitoring has not even been followed with any consistency over the last 150 years. Stations are added or subtracted, and moved to new locations; and the purveyors of what limited measurement has been done claim measurable differences of hundredths of a degree and decadal changes of maybe a tenth of a degree; and you believe that no period in the last 1000 years has been warmer than the present.
I assume that you are fully aware of the concept of “1/f noise”, a very common source of random noise in many ordinary physical systems, where the observed amplitude of random fluctuations can range without limit, with a spectral distribution such that the observed instantaneous noise amplitude grows as the inverse of the frequency of occurrence. 1/f noise is present in all electronic signal processing systems; and the 1/f character of low frequency noise has been confirmed down to as low a frequency as anybody has ever cared to spend the time observing. The growth without limit does not violate any energy laws, since events of higher power occur with ever diminishing frequency, so they are spread out over an increasing period of time. It is a trivial exercise to prove that 1/f noise contains an equal noise power in each octave of frequency range.
So it is quite unlikely that short (<30 year) intervals of warmer (or colder) temperatures than those observed recently have never occurred in the last 1000 years. The recent period of “global warming” itself is less than 30 years, from the mid 1970s to the mid 1990s; and it has been on a down slope for the last 10 to 15 years or so.
And you can’t get away with that sleight-of-hand trick of saying that no reconstruction proves that recent warmth is within natural variability.
The burden of proof that it is NOT natural rests on those who would argue that it is NOT natural.
There's plenty of documented anecdotal history of warmer epochs within that 1000 year range; and why limit it to 1000 years; what about the last 10,000 years ?
Quite apart from the noise spectrum aspects of temperature fluctuations; there's the whole problem of measurement rigor; which doesn't even come close to satisfying known governing laws and principles of information theory or sampled data system theory.
And in the end; the matching up of global and historic CO2 data (all the way back to the IGY in 1957/8) to the temperature fluctuations is even worse than our knowledge of either variable.
There’s not anyone on this planet who can even tell us definitively what is the correct time delay to use in Stephen Schneider’s “Climate Sensitivity” equation:
T2 − T1 = cs · log(CO2,2 / CO2,1)
Here T2 and T1 are two mean global surface temperature values (averaged over some time intervals), and CO2,2 and CO2,1 are atmospheric CO2 relative molecular abundances at two time periods, not necessarily coincident with the temperature epochs.
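To make the formula concrete, a back-of-the-envelope evaluation in Python (the sensitivity value is purely illustrative, chosen so that a doubling of CO2 gives 3 deg C; a natural log is assumed, and a log-10 convention would merely rescale cs):

import math

def delta_T(cs, co2_1, co2_2):
    # Schneider-style relation: T2 - T1 = cs * log(CO2,2 / CO2,1).
    return cs * math.log(co2_2 / co2_1)

cs = 3.0 / math.log(2)            # illustrative only: 3 deg C per doubling
print(delta_T(cs, 280.0, 560.0))  # 3.0 for a doubling
print(delta_T(cs, 280.0, 390.0))  # about 1.4 for 280 -> 390 ppm

Note that nothing in the formula, or the code, says anything about the time delay between the CO2 change and the temperature response; that is exactly the missing piece.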
Purported data have tried to relate those two variables with time delays between the temperature and CO2 observations that can be anywhere within +/- 1000 years or more.
Nobel Laureate Al Gore has published data in his famous book showing that the best correlation for any moves of Temperature and CO2 occurs for about an 800 year delay between Temperature changes and atmospheric CO2 changes; yet he insists that the CO2 changes are what caused the Temperature changes 800 years earlier.
Right now we are just 800 years delayed from the so-called mediaeval warm period, which history shows was warmer than today; and we are now in the midst of the rising CO2 abundance that apparently caused the mediaeval warm period (well, according to Al Gore, and he’s a Nobel Laureate, so he should know).
You've got the shoe on the wrong foot, jcrabb. It's up to the Deniers (of natural cause) to prove it is NOT natural cause.

Nuke
July 21, 2010 9:32 am

Dave Springer says:
July 21, 2010 at 8:18 am
@Nuke
“Computer code is deterministic. Run the same code with the same data and the results are always the same.”
If identical results from identical input is a design goal then I’d agree with that in principle but there are a great many possible pitfalls in reality. This is especially true when floating point calculations are involved and the underlying hardware, firmware, and ancillary software is not identical.

I think we are in agreement here. You’re talking about changes in the runtime environment. These are variables which must be controlled and accounted for.
It’s possible to get one result from environment A and different results when moving to environment B. But if you run the same code and same data repeatedly on A, then the same results are expected each run. Run on B with the same code and data and the results may not match A. But each run on B should have the same results.

EthicallyCivil
July 21, 2010 9:32 am

On the publication of source code: Appendix A of my Masters thesis is the solver I wrote based on the equations developed in the thesis, plus the unit tests of the “closed form” solution test problems.
The results section of the thesis was predictive modelling of the novel aerodynamic device we were building. Compared to climate modelling it was a simple problem, but it had design impacts on the experimental work we were doing. My advisers wouldn’t accept the results without the source code and test code; why should we expect less of those whose conclusions are driving an overhaul of the global economy?
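The practice is easy to illustrate: check the numerical code against a case with a known closed-form answer before trusting it on the real problem. A toy version in Python (a trapezoidal integrator tested against an exact integral; the tolerance is illustrative):

def trapezoid(f, a, b, n=10000):
    # Simple trapezoidal-rule integrator.
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

# Closed-form check: the integral of x^2 on [0, 1] is exactly 1/3.
approx = trapezoid(lambda x: x * x, 0.0, 1.0)
assert abs(approx - 1.0 / 3.0) < 1e-8, approx

Pass the closed-form cases first; only then does the predictive run mean anything.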

Bernie
July 21, 2010 9:33 am

sphaerica:
Your comments miss the point. To replicate many data analysis results in climate science, the actual analysis needs to be available. It is as simple as that. All the issues about commented code, convoluted code, etc., are secondary. The IP defence or trade secret defence is another red herring. Lastly, the CRU emails clearly indicate the primary motivations, among those using idiosyncratic statistical procedures to prepare and analyze the raw data, for denying access to both data and code.

Nuke
July 21, 2010 9:35 am

Alan F says:
July 21, 2010 at 8:33 am
Nuke,
Given the end result wished for, I can write in f77 (I still have all my notes, even) as many nested procedures as are required to accomplish it from any set of numbers, and so can any other IT dweeb schooled in the late ’80s. The bad bit here appears to be that academic coding requires little if any outside debugging and certainly scarce documentation, while industrial coding (my job) is checked here, back in Germany, and once again here in good old Canuckville before it makes its way into any systems or machinery. Bad code in information systems costs coin; in machinery, lives; and neither is allowable in business. Why should the script kiddies in academia, whose code is setting the stage for %GDP-level spending, be afforded any less vigilance?

Absolutely agree. Are there no Software Engineering or Computer Science grad students available to assist with climate research?

Editor
July 21, 2010 9:37 am

Modern programming and scripting languages (notably anything labeled VISUAL) use ridiculously-long-variable-names-that-are-intended-to-be-self-documenting. Older languages of the sort I spent a quarter century working with as a business/manufacturing programmer tended to be a lot more terse – and variables and arrays often needed to be re-used – especially in constructions like
FOR Q=1 TO 20
READ (Q) A$, B$, C$
GOSUB nnnn
NEXT Q
or even worse….
Q$=”FILE1FILE2FILE3FILE4FILE5″
FOR I=2 TO LEN(Q$)/5
CLOSE(1)
OPEN(1) Q$((((I-1)*5)+1),5)
READ (1) A$,B$,C$
GOSUB nnnn
NEXT I
REM There is are two bugs in this program. Where are they?
Comments were often critical. My programs were always heavily commented, which cost little since comments are ignored when compiled for execution. The number of times I’ve had to de-bug someone else’s code and was faced with an undocumented GOSUB and was left wondering “just what the hell was THAT all about?”… sometimes you can only figure it out by stepping through the program with various starting values… and voila! it works fine EXCEPT in this one instance where a calendar month has two full moons….

Reed Coray
July 21, 2010 9:42 am

Shortly before his death, Dr. Stephen Schneider gave the Stanford Magazine an interview which was reported in their July/August 2010 issue. I would like to comment on a few of Dr. Schneider’s remarks.
“The primary lasting impact will be that it has delayed climate policy by a year or two—which, if the Congress tips away from Democrats, could delay it by eight or more.”
Well, at last we have a concrete example of a “climate change” tipping point: the change from a Democrat-controlled Congress to a Republican-controlled one.
When Dr. Schneider was asked the question: “Why do you think it’s wrong to give equal public consideration to most climate-change dissenters?” in part his response was:
It is completely appropriate in covering two-party politics, if you [cover] the Democrat to [cover] the Republican. In fact, if somebody didn’t do that, they would not [be considered] fair and balanced. It is completely inappropriate, if there’s an announcement of the new cancer drug for pediatric leukemia [with] a panel of three doctors from various hospitals, to then give equal time to the president of the herbalist society, who says that modern medicine is a crock. They wouldn’t even put that person on the air, so why put on petroleum geologists—who know as much about climate as we climatologists know about drilling for oil—because they’ve studied one climate change a hundred million years ago?
Why is it inappropriate? Inappropriate to whom? It may be a waste of time. It may be silly. It may even be dumb. But it’s a stretch to call it “completely inappropriate.” Mark Twain wrote a story called “The Man That Corrupted Hadleyburg”. The story is about a town whose claim to fame is honesty. The town’s self-image is so wedded to its reputation for honesty that for years the inhabitants are kept from all temptation lest they succumb and destroy the town’s reputation. In the minds of non-residents, it isn’t long before “arrogance” overtakes “honesty” as the town’s primary descriptor. Eventually the town’s arrogance offends a visitor who believes that because the town’s citizens are seldom tempted, their honesty is only skin deep. Unknown to the other leading citizens, the offended stranger tempts each of the town’s stalwarts with a large sum of money. To a man, they succumb to the temptation. Dr. Schneider and his ilk remind me of the town’s leading citizens, both in regard to arrogance and to the belief that non-exposure to nonsensical ideas is the path to salvation.
The reason that we do not ask focus groups of farmers and auto workers to determine how to license airplane pilots and doctors is they have no skill at that. And we do not ask people with PhDs who are not climatologists to tell us whether climate science is right or wrong, because they have no skill at that, particularly when they’re hired by the fossil-fuel industry because of their PhDs to cast doubt. So here is where balance is actually false reporting.”
BALANCE is false reporting! Since when? Reporting is reporting. False reporting is incorrectly describing real-world events, either deliberately or inadvertently. If a group of people claims a secret society exists below the Earth’s surface, reporting what the group says may not serve a useful purpose, but it isn’t “false reporting”.
But climate risks occur at the level of the planet, where there is no management other than agreements among willing countries.”
And what should we do if the number of “willing countries” is small? As much as I believe the political left is leading this country to ruin, I believe the principle of self-government trumps my personal beliefs. If the people want socialism, so be it. After all, when a doctor says you’ll die unless all of your limbs are amputated, the final decision is not the doctor’s, it’s yours.
At least in the old days when we had a Fourth Estate that did get the other side—yes, they framed it in whether it was more or less likely to be true, the better ones did—at least everybody was hearing more than just their own opinion.”
Now I’m confused. I thought Dr. Schneider said it was “completely inappropriate” to give both sides, the “panel of three doctors” and the “president of the herbalist society”, equal time. The question I’d like to ask Dr. Schneider, but now cannot, is: “Who gets to decide what is appropriate for balanced reporting and what is inappropriate?”
What we have to do is convince the bulk of the public, that amorphous middle.”
So I guess even Dr. Schneider believes the unwashed masses have some say in their own destiny.
But now, given the new media business-driven model, where they fired most specialists and the only people left in the newsroom are general-assignment reporters who have to do a grown-up’s job, how are they going to be able to discern the north end of a southbound horse?”
In the case of some reporters as well as some scientists, it’s as simple as looking over your shoulder.

adamskirving
July 21, 2010 9:45 am

@ Nuke
“Computer code is deterministic. Run the same code with the same data and the results are always the same. ”
I agree with the thrust of your post, but I would like to emphasise that Dave Springer is not being snarky. Hardware and firmware may have faults that mean the same code produces different results on different machines. Worse still there are certain bugs like ‘race conditions’ in code that sometimes cause programs run on the same machine to exhibit errors at apparently random intervals. That’s why I was taught to not only make my code available on request, but to keep a record of hardware used, operating system and version, compiler and version, and so on.
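Recording the environment costs almost nothing; here is a sketch in Python of the kind of record that could be written alongside every set of results (the filename is hypothetical):

import json, platform, sys

def environment_record():
    # Capture enough of the runtime environment to make
    # "works on my machine" discrepancies diagnosable later.
    return {
        "python": sys.version,
        "os": platform.platform(),
        "machine": platform.machine(),
        "processor": platform.processor(),
    }

with open("run_environment.json", "w") as f:
    json.dump(environment_record(), f, indent=2)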

July 21, 2010 9:54 am

Re : Loehle’s reconstruction
Loehle’s data from:
http://www.ncasi.org/programs/areas/climate/LoehleE&E2007.csv
appear to be perfectly reasonable. On the other hand my view may be biased.
http://www.vukcevic.talktalk.net/LFC1.htm
I would love to know what Dr. Loehle would have to say about the above ‘misuse’ of his data.

TomB
July 21, 2010 10:07 am

I think one of the great difficulties in releasing source code would be version control. It would be important to get the version of the code actually used to produce whatever result is being analyzed, not necessarily the current version. Given what I’ve seen in the Climategate code, I doubt that a good version control system was being employed.
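One cheap, partial remedy, sketched in Python on the assumption that the analysis lives in a git repository (the file name and payload are invented): stamp every output with the exact source revision that produced it, so the version question at least has an answer later.

import subprocess

def code_version():
    # The commit hash of the source tree this run actually used.
    return subprocess.check_output(
        ["git", "rev-parse", "HEAD"], text=True
    ).strip()

with open("results.txt", "w") as out:
    out.write("# produced by source revision " + code_version() + "\n")
    out.write("0.73\n")  # stand-in for the real result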

Rod Smith
July 21, 2010 10:11 am

Computer Code: Many decades ago I was handed about 4000 lines of assembler language to debug. Labels were limited to six characters. Those 4000-odd lines had exactly two comments. The first said, “On the first day, I coded.” The last said, “And on the seventh day I rested.”

DirkH
July 21, 2010 10:12 am

Nuke says:
July 21, 2010 at 7:38 am
“Computer code is deterministic.[…]”
Only if you do it right. Think parallel systems. The climatologists use multiprocessor machines; in that setting, determinism is a quality that must be engineered into the code, and by default a program will be nondeterministic. Everybody who now says “oh, that’s basic” has never debugged a multithreaded or multiprocessing program.
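One does not even need threads to see the problem: floating-point addition is not associative, so any change in the order in which a parallel machine happens to accumulate a sum changes the low-order digits. A single-machine Python illustration (the data are invented):

import random

random.seed(0)
values = [random.uniform(-1e10, 1e10) for _ in range(100000)]

print(sum(values))            # one summation order
print(sum(reversed(values)))  # same numbers, opposite order
# The two totals typically differ in the trailing digits, because
# each ordering rounds differently along the way.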

Andrew30
July 21, 2010 10:13 am

Robert E. Phelan says: July 21, 2010 at 9:37 am
Q$=”FILE1FILE2FILE3FILE4FILE5″
FOR I=2 TO LEN(Q$)/5
CLOSE(1)
OPEN(1) Q$((((I-1)*5)+1),5)
READ (1) A$,B$,C$
GOSUB nnnn
NEXT I
REM There are two bugs in this program. Where are they?
1. The initial value for Q$, ”FILE1FILE2FILE3FILE4FILE5″, appears to be a single literal value, which is likely an error unless there is one input file called FILE1FILE2FILE3FILE4FILE5.
2. The statement FOR I=2 TO LEN(Q$)/5 will not execute the loop, since the length of Q$ is 1 (from the initialization on the first line) and the initial value of I is 2, which already exceeds 1/5.
3. The statement CLOSE(1) would have failed if the loop were executed (which it will not be; see 2), since file 1 is not yet open.
4. The evaluation of ((((I-1)*5)+1),5) will not be executed (see 2), but if it were, the result (((2-1)*5)+1) = 6 as a first index is out of range for the contents of Q$ (length 1, and a single-dimensional array), and the use of a second index of 5 on a single-dimensional array (if not caught by the compiler) would also be out of range. Either would result in a run-time memory violation.
5. After the read there is no test to check whether any values were assigned to A$, B$ or C$; they are therefore undefined.
6. When run, the program will do nothing, very quickly (see 2).
That’s six, not two; now, where is nnnn, and does this language scope variables such that they are visible to nnnn? 🙂

George E. Smith
July 21, 2010 10:15 am

Seems to me that this “code review” business has a simple solution.
Back in the days when science could be done on the back of an envelope, people wrote down specific mathematical equations that defined precisely how data was to be processed to “show a connection”.
Planck’s Radiation Law for black-body radiation is NOT wishy-washy. It is mathematically specific and nothing empirical enters into it. All parameters are derived from fundamental physical constants whose values are known to extremely high precision; and publishing that equation tells any user how to make use of the information.
In the field of Patents, we have a situation, where an “author” also known in this case as an “inventor” publishes for all to see; sufficient information about his previously secret invention to enable ANYONE (having ordinary skill in the art of the invention) to replicate the invention and to use the information to build on for his own purposes; and patent law requires that such teaching takes place in the patent that is finally approved.
IN RETURN for disclosing to ALL how to practice the invention or make use of it, the “author/inventor” is granted exclusive rights to the use and benefits of the invention; and to license the invention to others if (s)he chooses and for monetary gain.
And in most International Patent law; the first to publish gets the rights whether he is the inventor or not.
So why not use the same system for “Scientific Inventions”?
You want to get credit for some scientific breakthrough or discovery; you publish your paper that teaches ANYONE (having ordinary skill in the art) how to use/replicate/expand on your discovery. That could include releasing the code that turned your raw observations into your published results.
Well of course if you don’t teach others how to “do it” and somebody else comes along; and DOES publish code or whatever that shows how to manipulate your data to replicate your results; then of course (S)HE should be the person who gets the academic recognition and credit for the results.
Lots of companies choose to hold key information they obtain as a “Trade Secret”; like the recipe for Coca Cola; rather than Patent it. They are gambling that they can protect their secret from discovery by someone else. But if somebody quite independently should discover the exact same concoction and publish it for all to see; then CC would be screwed; and the re-discoverer would derive the benefit (or other consequence) of his discovery.
So there you have it. If you want credit for some new knowledge; then disclose ALL that is necessary to use; and/or replicate it.
Or keep it to yourself as a “Trade Secret”.
When I do a lens design (for my boss); I really don’t design a lens. I design a “Merit Function” which describes in some formal way what I PERSONALLY consider to be a good lens design for whatever the end purpose is.
The off-the-shelf optimisation software simply manipulates whatever variables I supplied, until it minimises the value of that Merit Function. The software couldn’t care less about lenses or lens design; it simply follows whatever algorithms its authors put into it, to find some minimum value for the function that I claimed constitutes a good lens.
Whenever I send such a design out to be manufactured; the whole design file goes out with it so others can evaluate the performance to see if the lens meets its goals. They can do any kind of evaluation; make any kind of changes or do whatever they like with my design.
The one thing they do not get, is the merit function; which after all is nothing more than my opinion of what a good lens is for the prescribed purpose. Others may have a different description of what is a good lens; and they are free to make up their own description and modify my design as they choose; either improve it if they have a better idea than mine; or royally screw it up, if they don’t. My boss; and our competitors can easily prove for themselves how my design performs; although the competitor may have to buy a product, and reverse engineer my lens.
They just can’t get into my head to find out what I know about lens design. And yes the company archives do have every single thing I have ever done and tried safely stored; so if I get hit by a truck this afternoon; then they finally can learn to appreciate how much I really do know about lens design. They paid for it; it belongs to them.
So if the work for which you expect credit requires computer code to teach others about it, then you should publish the code too, before somebody else does, if you want the credit.

hotrod ( Larry L )
July 21, 2010 10:21 am

Patrick M. says:
July 21, 2010 at 2:40 am
toby said:
“The code you write yourself when doing a paper is not for commercial use and not user-friendly. It is usually idiosyncratic and uncommented (or at least poorly commented). Giving it to somebody implies spending valuable time explaining all the wrinkles as well – a waste of time with someone who should be able to do the job themselves.”
As a professional software developer I can state that if your code is “idiosyncratic and uncommented (or at least poorly commented)”, it is also likely to have bugs. That’s a major issue with your statement above: you are assuming that there are no bugs in the original code. So if somebody tries to replicate your results and fails because the original has bugs, where does that leave us? Obviously, if the original programmer didn’t have time to comment their code then they’re not going to have time to review the code of someone who is trying to replicate their results. So then what?

Good points. In the analogy of computer code to a mathematical expression in a paper you have a good example of the crux of the problem. If a person writes a paper that includes a formula 2+2=5, it can quickly and easily be tested by the reader and found to be in error. If they write a description of the process then it gets a bit more complicated and harder to verify.
Even if the author honestly attempts to write a correct description of the manipulations the code performs with some sort of pseudo code and commenting, that still does not necessarily mean his methods can be accurately duplicated.
As anyone who works in IT knows, having spent hours or days chasing down an obscure bug that only shows up when a specific combination of inputs is used (while working flawlessly on common inputs), the code does not always do what you think it does. This is the heart of the matter.
Just because you tell me you perform steps x, y and z with inputs a, b and c does not necessarily mean that your code actually does that in all cases. For example, it might add x + y and multiply by 2*z if a and b are less than 5*c, but if they sum to a value greater than 5*c it might actually add x+y+c and multiply by 2*(z-c).
No reviewer could find that sort of error by writing their own code unless they exactly duplicated the coding error the author used.
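Here is that hypothetical written out as a Python sketch (every name and number is invented): a reviewer coding from the description alone would reproduce described() and never discover actual().

# What the paper says the code does:
def described(x, y, z, a, b, c):
    return (x + y) * 2 * z

# What the code actually does, per the hypothetical above:
def actual(x, y, z, a, b, c):
    if a + b <= 5 * c:
        return (x + y) * 2 * z        # agrees on common inputs...
    return (x + y + c) * 2 * (z - c)  # ...silently diverges on rare ones

print(described(1, 2, 3, 1, 1, 1))  # 18
print(actual(1, 2, 3, 1, 1, 1))     # 18 -- matches
print(actual(1, 2, 3, 9, 9, 1))     # 16 -- the hidden branch
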
Computer code output can only be accurately duplicated, and its output verified, by testing the exact same source code compiled by the exact same compiler on the same hardware.
How many people remember the floating point bug in early PC’s?
It was on the Intel P5 Pentium floating point unit (FPU).
(note wiki reference)
http://en.wikipedia.org/wiki/Pentium_FDIV_bug
A friend of mine worked in a tax assessment office where they did calculations out to something like 6 or 8 decimal places to eliminate rounding errors. The code that worked perfectly on their older PCs suddenly started having errors when they upgraded to the newer computers, because of the then-unknown floating point calculation error in the CPUs.
http://www.intel.com/standards/floatingpoint.pdf
http://cache-www.intel.com/cd/00/00/33/01/330130_330130.pdf
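The widely circulated litmus test for that bug, rendered in Python for illustration (it originally circulated as a spreadsheet calculation):

# On a flawed Pentium FPU this famously printed 256;
# on correct hardware it prints 0, to within rounding.
x, y = 4195835.0, 3145727.0
print(x - (x / y) * y)
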
Researchers need to move past “idiosyncratic and poorly documented code” for research that could be used as the basis for trillion-dollar decisions that will affect billions of people. Even professional code crunchers at Microsoft and other large players in the IT industry routinely crank out buggy software despite years of testing and software control measures.
Everyone knows from personal experience that commercial software written by professional coders, and validated through formal beta testing programs, always contains bugs that take years to find when used by millions of everyday users.
The presumption that one-off research code is bug-free is absolutely absurd!
The presumption should be that it is riddled with bugs, and validation and replication of research results should operate on that assumption. That means that the only legitimate replication of research results would be one that includes an outside audit of the research code and its behavior under all imaginable inputs and conditions.
Software with a large line count is statistically far more likely to be buggy than correct. Even if you find a bug, fixing it might introduce yet another, more serious bug. In code running to thousands or millions of lines, bugs are unavoidable.
It is safe to say that all the existing climate model codes contain literally hundreds if not thousands of bugs. Much of this code is written not by professional programmers but by professionals in another field who happen to write software to assist their research, so the likelihood of code errors is probably higher than you would expect in professional commercial software, which, as we all know, is almost certain to contain multiple bugs, many not showing up until the package has been in widespread use for years by millions of users.
http://www.guardian.co.uk/technology/2006/may/25/insideit.guardianweeklytechnologysection
http://www.nist.gov/director/planning/loader.cfm?csModule=security/getfile&pageid=53212
Larry

July 21, 2010 10:26 am

Not being computer programmer oriented, I have a question.
Aren’t there computer software industry standards similar to mechanical engineering’s ASME, AWS and ANSI? If so, do any standards govern the areas of computer software development and control?
John

Jim
July 21, 2010 10:30 am

Giving up the data but not the code is rather like a chemist supplying the data from his or her measurements in an experiment, but not describing the apparatus. How could you have confidence in Millikan’s oil drop experiment, or even understand the numbers, without knowing the apparatus and how it worked? The measurements are rendered meaningless. As for obscure code, all the programming languages I use provide for comments. Comments are essential because if you look at your code a year from now, even you might not understand what you were trying to do. Sloppy coding practice is no excuse and could be looked upon as unprofessional. I know if I don’t include comments, I get that tag!

Quinn the Eskimo
July 21, 2010 10:35 am

sphaerica says:
July 21, 2010 at 6:07 am
The obligation to disclose data and methods depends on the context, which is my point. Commercial science is under no obligation to disclose data, methods or code, and intellectual property law exists to protect, reward and encourage such invention. National security related research also must be kept secret.
Climate science, the basis of the ultra-radical demand to decarbonize the economy to forestall doom and catastrophe, stands on a different footing. The notion that work in this area should be protected intellectual property while at the same time serving as the foundation of ultra-radical public policy is ludicrous and indefensible.
Further, the norms of purely academic research are fundamentally irreconcilable with the claim of protected intellectual property if disclosure is necessary to replication/falsification. Three separate British Royal scientific societies have said so explicitly in commenting on Climategate.
Finally, if the CRU boys had disclosed their code, they would have disclosed the many gems in the Harry_Read_Me.txt file, such as this one:

printf,1,’IMPORTANT NOTE:’
printf,1,’The data after 1960 should not be used. The tree-ring density’
printf,1,’records tend to show a decline after 1960 relative to the summer’
printf,1,’temperature in many high-latitude locations. In this data set’
printf,1,’this “decline” has been artificially removed in an ad-hoc way, and’
printf,1,’this means that data after 1960 no longer represent tree-ring’
printf,1,’density variations, but have been modified to look more like the’
printf,1,’observed temperatures.’

That’s the “intellectual property” the CRU boys were hiding.

adamskirving
July 21, 2010 10:50 am

@hotrod
The floating point error wasn’t in the P5 but in the P1, i.e. the fifth generation of the x86, hence the name Pentium (5). /Pedant off

July 21, 2010 10:50 am

Anthony Watts says:
July 21, 2010 at 8:45 am
sphaerica says:
July 21, 2010 at 6:07 am
People are entitled to their own intellectual property, and they’re entitled to try to be “the one” to publish that next great ground breaking study. And if some of the information from their last paper is serving as the foundation for the next, then no, they do not and should not have to share it.
=============================
I assume then that you will complain on my behalf, loudly, to Mr. Menne and to Director Karl of NCDC who “borrowed” my data from the surfacestations project when it was 43% complete, over my written objections, and over the objections of my co-authors, ignoring us all and revoking all professional courtesy in order to preempt the paper we are now finishing with the data at 87%?
Seems to me that “climate science” gets a free pass when it suits them.
I look forward to seeing your signed complaint letter to them.

I’ll take it from this post that you agree with my position.

REPLY:
I take it from your reply you won’t defend me being abused in exactly the situation you describe. – Anthony

July 21, 2010 10:58 am

Smokey, Chris Long, Scott B., vigilantfish, and all the others…
You still don’t understand how science works. Publishing an exact recipe for how to prove something so that someone else can repeat the exact same process is not going to get anyone anywhere. That’s not how it’s done, or how it should be done.
And you won’t get any traction with me by using McIntyre as any sort of example. If anything, his inclusion in an argument is further evidence of how little you understand how science works. Hint: it’s not engineering, and most of the comments posted here fail to understand the difference.
And for Pamela Gray, Nuke, Tallbloke, and other angry posters… sorry, I’m not biting. Spew your hatred, innuendo and evil mad scientist conspiracies all you want. I’m not interested, because it has no place in the real world. It’s not even worth discussing.

Editor
July 21, 2010 11:17 am

Andrew30 says: July 21, 2010 at 10:13 am
Good try, Andrew, but in Business BASIC, an antique language I spent lots of years working with (too many, probably), that code fragment will execute when supplied with line numbers.
Q$ is in fact a literal value or, as we called it, a string. Most of the funky code is trying to parse Q$, a fairly common technique when opening and closing a number of files. The function LEN(Q$) returns the length of the string, in this case 25; dividing that length by 5 gives me 5, so the loop in this case is FOR I=2 TO 5. The advantage here is that I can add another file name to my string and not have to change anything else in the code.
In some versions a CLOSE statement may generate an error, but often the default is to simply fall through to the next line. If you want special handling, you can do CLOSE(1, ERR=nnnn)
The “nnnn” notation was meant to suggest a line number for generic purposes. Putting a real line number in, even one that doesn’t exist, will result in the program defaulting to the line after that line number. If there is no subsequent line, the program ends as if it had hit an “END” statement.
The two errors are in the parsing of Q$ in the OPEN statement. It will OPEN and READ files 2, 3 and 4, but not 1 and 5. If I were working with a series of identical files containing data for a given year and my GOSUB was populating an array prior to generating say, a trend, the output might look just fine, especially if I had lots of files.
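For the curious, a rough Python transcription of the fragment’s parsing (the Business BASIC loop-bound and substring semantics are approximated here, which is exactly the kind of dialect detail this exchange turns on):

q = "FILE1FILE2FILE3FILE4FILE5"

# FOR I=2 TO LEN(Q$)/5, reading Q$(start,len) as a 5-character substring:
for i in range(2, len(q) // 5 + 1):
    name = q[(i - 1) * 5 : (i - 1) * 5 + 5]
    print("open and read", name)

# In this transcription the loop visits FILE2 through FILE5 and silently
# skips FILE1; starting the loop at 1 would visit every file. Whether
# FILE5 survives in the original depends on whether the dialect's FOR
# loop includes its upper bound.
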
Context is everything. By the way, which language were your comments based on?

July 21, 2010 11:21 am

One commenter alleges that he does not like sharing code “because of the time wasted to try to explain it to someone.”
As a part-time professional “coder”, I’d say your rationale for not releasing source code, to put it mildly, stinks of sloth and ineptitude. If I did not document, did not clearly comment, did not make MAINTAINABLE any of my professionally produced “coding”, I would be either fired, not hired on contract, or not able to obtain future contract work once my poor documentation and “idiosyncratic” code was revealed. Frankly my Dear, I DO give a damn, and regard ANY work done using computer analysis which does not have “traceability” as WORTHLESS. (See “Photoshop Job” for an insight into this.)
I’d recommend this fellow try to do some:
1. FDA Approved coding.
2. FAA Approved coding.
3. NRC Approved coding.
I’ve worked with or on all three. A CENTURY from now, someone should be able to run my code written under these rules: make it work, do verifications, and do modifications based mostly on the “code listing” itself.
Max

July 21, 2010 11:23 am

sphaerica says:
July 21, 2010 at 10:58 am

You still don’t understand how science works. Publishing an exact recipe for how to prove something so that someone else can repeat the exact same process is not going to get anyone anywhere. That’s not how it’s done, or how it should be done.
And you won’t get any traction with me by using McIntyre as any sort of example. If anything, his inclusion in an argument is further evidence of how little you understand how science works. Hint: it’s not engineering, and most of the comments posted here fail to understand the difference.


———————-
sphaerica,
Re your most recent comment: in summary, are you saying that what science “is” is only worth discussing if it agrees with your concept?
John

Nuke
July 21, 2010 11:24 am

sphaerica says:
July 21, 2010 at 10:58 am
Smokey, Chris Long, Scott B., vigilantfish, and all the others…
You still don’t understand how science works. Publishing an exact recipe for how to prove something so that someone else can repeat the exact same process is not going to get anyone anywhere. That’s not how it’s done, or how it should be done.
And you won’t get any traction with me by using McIntyre as any sort of example. If anything, his inclusion in an argument is further evidence of how little you understand how science works. Hint: it’s not engineering, and most of the comments posted here fail to understand the difference.
And for Pamela Gray, Nuke, Tallbloke, and other angry posters… sorry, I’m not biting. Spew your hatred, innuendo and evil mad scientist conspiracies all you want. I’m not interested, because it has no place in the real world. It’s not even worth discussing.

Well!
Are you one of those people who make assertions and then cut off any attempt to discuss and digest? Do you think anybody who disagrees with you is full of hatred? What exactly did I say to you that falls into the realm of “hatred, innuendo and evil mad scientist conspiracies”? And for the record, I haven’t “spewed” anything since I was a small child.
What really makes this post so unbelievable is you seem to be saying computer code represents the real world, as if a model and the actual climate are indistinguishable.
In the real world, you have to show your work. You can’t just present a paper and say “these are the results” without showing how you got to those results.
You keep forgetting computer models do not output facts.
Get over yourself.

July 21, 2010 11:25 am

In response to Mike Haseler’s comments about simple terms being complicated by science, I remember the quip of Dr. Howard Hayden who, remarking on how science uses ‘anthropogenic’ instead of ‘man made’ global warming, said “why use two syllables when five will do!”

Editor
July 21, 2010 11:30 am

sphaerica says: July 21, 2010 at 10:58 am
sphaerica, you are obviously the one who has no clue how science works. The first step is replication: can I get the same results doing exactly what the scientist claims he did? Following the recipe didn’t work so well for cold fusion. Trying to replicate someone else’s results using your own data and methods and codes is the second step… but if the results don’t match, you get the sort of response we’ve seen all too often from climate science: “Obviously they were too inept to do it right.”
Hint: Failure to replicate with identical methods sometimes indicates “fraud”. Unwillingness to enable such replication and the kind of disparagement that we have seen coming from you and others strongly suggests it. Science is being corrupted by people just like you.

July 21, 2010 11:37 am

REPLY: I take it from your reply you won’t defend me being abused in exactly the situation you describe. – Anthony

I would, on principle, but in fact I find most of what you do to be reprehensible and so no, I would not defend you in any way. In principle you may be right (I haven’t looked into the particulars, and don’t particularly care, since I find your work on the surface record to be a valueless distraction), but being right in one small area while being so very wrong in so many others is not suddenly going to get me on or by your side.
But the question at hand stands, and you dodged it: do you believe every bit of knowledge in science should be public property, or do you believe that you were wronged, and that demanding complete 100% transparency is not always appropriate? Do you believe that the way to conduct science is not to nitpick other people’s work, and to demand that they do the heavy lifting for each and every self-declared auditor, but rather for scientists to work independently, using a variety of methods and approaches, in order to achieve more robust and defensible conclusions?
REPLY: I asked first, and you dodged with your own question. I believe the way to conduct publicly funded science is with full disclosure and open code accountability. Private science should also do so when its papers are published, as I will do with my surfacestations paper when published (assuming people like CRU won’t lobby to keep it unpublished, as we’ve seen in the Climategate emails).
Having big government horn in on a private science project before completion just to CYA is reprehensible, but you are blind to it.
You find me reprehensible; I find that not only do you hide behind the cowardly comfort of anonymity, as do many of your ilk, but you are hypocritical on your own stated positions as well. You have no honor and exemplify the worst of the worst anon-trolls. I didn’t ask you to take sides, only to act on your own position, and you showed your true colors. I have no further tolerance for you.
– Anthony

July 21, 2010 11:46 am

engineering = applied sciences
developmental engineering = engineering research = doing science while trying to achieve a practical end
looks like science to me
[if it looks like a duck, and smell/tastes/sounds/feels/reproduces like a duck, voila . . . a duck]
John

tonyb
Editor
July 21, 2010 11:53 am

Vukcevic
As always you post an interesting graph. The correlation of the GMF to the cold climate from 1660 to 1700 is particularly intriguing. Is the recent trend of the GMF giving us any hints as to where our climate may be headed over the next few decades?
tonyb

tallbloke
July 21, 2010 11:59 am

sphaerica says:
July 21, 2010 at 10:58 am (Edit)
And for Pamela Gray, Nuke, Tallbloke, and other angry posters… sorry, I’m not biting. Spew your hatred, innuendo and evil mad scientist conspiracies all you want. I’m not interested, because it has no place in the real world. It’s not even worth discussing.

Lol.
Thanks for playing.
Now naff off.

RayG
July 21, 2010 12:00 pm

Re Pamela Gray’s comment at: Pamela Gray says: July 21, 2010 at 8:09 am
I add a hearty “Amen!”

July 21, 2010 12:07 pm

Anthony,
I am a real fan, but especially so when you bring up the anonymity topic.
Thank you and your team for all the work it takes to make this place an independent venue. You do honor to the long standing (ancient) traditions of open forums.
John

Ben of Houston
July 21, 2010 12:38 pm

To Mr. Toby,
With all respect to your position, a sophomore in their first numerical methods class could explain exactly why you need to present your code as part of a review. If everything is perfect but a count is off (i.e., the infamous “count++ should have been ++count”), a mathematical function can give reasonable-looking-but-wrong answers. Small deficiencies within code that takes weeks to write and debug are often key to problems in a calculation. It is absurd to think that a reviewer should be able to recreate the calculation, run it, debug it, and then decipher that the problem was with the original and not the recreation, all in their side position of reviewing papers. If a statistician is to properly review the work, they need to review the actual code, not just read a summary of the method and then check a “Looks OK” box. I cannot tell you the number of times I have seen people describe the code, then looked at it and found that, due to bugs, it behaves nothing like the summary. Now, if your code is idiosyncratic and uncommented, then that is no different from writing your laboratory results in gibberish shorthand. If you would not fail a student for incomprehensible notes in the lab, then I must think less of you. If you expect a different standard from yourself, you have no place performing research.
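The count++ example is C; in the same spirit, here is a Python sketch of an off-by-one that yields exactly such a reasonable-looking-but-wrong answer (the data are invented):

# Intended: the mean of the last 10 measurements.
data = list(range(100))  # stand-in for real measurements

window = data[-10:]      # correct: the last 10 values
print(sum(window) / len(window))  # 94.5

window_bad = data[-10:-1]  # off by one: silently drops the newest value
print(sum(window_bad) / len(window_bad))  # 94.0 -- plausible, but wrong
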
sphaerica, Logic 101: if you cannot support a statement then you should not make it. While your arguments about confidentiality hold in a corporate environment, they do not apply to peer reviewers, who can and should be subject to confidentiality and proper-use agreements.

July 21, 2010 12:39 pm

tonyb says: July 21, 2010 at 11:53 am
Is the recent trend of the GMF giving us any hints as to where our climate may be headed over the next few decades?
Professional climate experts with huge funds and the ‘Cray’ supercomputers do predicting; I do it for fun. I wouldn’t put any bet on either.
“With high hope for the future, no prediction is ventured.” A.L.
Times have changed since.

Spence_UK
July 21, 2010 12:51 pm

Sphaerica shows little or no experience or knowledge of science, unfortunately.

It would be like walking into Madam Curie’s lab and insisting that you use her lab equipment to repeat her experiments.

Indeed, and there are many examples of exactly that happening in science. If Sphaerica actually knew his/her scientific history, she/he would be aware of it.
One of the most famous examples is Prosper-René Blondlot’s discovery of “N-rays”. If you have never heard of them before, it is because they do not exist: but their very existence is an object lesson in scientific experimentation.
Blondlot conducted an experiment in which he claimed to discover a new form of radiation, which he called N-rays. Of course, being a new discovery in science, many people tried to replicate the experiment. But the replications were ambiguous: many labs claimed to replicate the results (some even claiming the discovery as their own), while others failed to replicate them. Naturally, disputes arose as to whether those that failed to replicate the results had correctly executed the experiment – since so many others had claimed success.
The issue caused much fuss at the time. It was resolved when one of those sceptical of the results, the US scientist Robert Wood, went to Blondlot’s lab to monitor his experiment. In “auditing” Blondlot’s experiment, he found that by double-blinding the experiment, in Blondlot’s lab, using Blondlot’s experimental setup and staff, the positive results disappeared. By showing unequivocally that it was Blondlot’s experiment that failed, rather than his own setup, he left no ambiguity, no room for doubt. The issue was resolved in a way that replication could not have achieved.
Note that this is a great example of science operating in exactly the way Sphaerica said it doesn’t. Of course, Blondlot’s experiment was a fairly simple setup. There is far more scope for error in software code than in even a simple experiment: the code can involve a sequence of statistical analyses which are often not fully or correctly documented.
Perhaps Sphaerica should consider a little more humility in his/her claims until learning a bit more about the history of scientific research.

Editor
July 21, 2010 1:12 pm

Point to Ponder
There is no such thing as “private science”. Lord alone knows what those guys in private, commercially-funded labs are doing, but if their processes and methods are not open to public scrutiny, then it is not science. The public nature of science is in many ways its most critical aspect. Kind of reminds me of those smirking Extenz commercials on late night TV, “this is real science…” yeah.

DirkH
July 21, 2010 1:35 pm

John Whitman says:
July 21, 2010 at 10:26 am
“Not being computer programmer oriented, I have a question.
Aren’t there computer software industry standards like similar to mechanical engineering’s ASME, AWS, ANSI? If so, do any standards govern the areas of computer software development/control?”
Yes, in safety-related applications like railways, avionics and power plants, especially nuclear. In the EU you have SIL0 to SIL4, safety integrity levels 0 to 4, and the higher levels require you to choose a coding style that you are able to justify; the easiest way to do this is to pick an industry-standard set of guidelines like MISRA or, in Germany for railways, something like MÜ 8004.
You can define your own coding style, but you will have to document it and you will have to convince the authorities that what you’re doing is state of the art.

Gail Combs
July 21, 2010 2:10 pm

Pamela Gray says:
July 21, 2010 at 8:09 am
The posts may be talking about two different types of codes used for two different types of research.
…. I bought the damned stuff, the lab, the equipment, the lab assistants, the cost of publication, and I could be asked to pay even more for it if it is used for the creation of laws. So excuse me all to hell but I want to know just what it is I am buying. If that makes me arrogant, fine. I’ll be arrogant, especially when someone else’s hand is in my back pocket. If there ever was a case between who should be and has the right to be arrogant, it would be the paying public, not the researchers. As my grandma used to say to me, you need an attitude adjustment. And shortly thereafter, I got a knuckle on my head. If researchers don’t like that, then get the hell out of the kitchen.
_________________________________________________________________________
I agree. If it cannot be replicated, it is not science.
If it is paid for with my tax dollars then I own it, not the scientist who did the research, and therefore I have the right to see it… ALL of it.
I would like to add to that. If a scientist is going to be that protective over his research he should not be publishing in the first place. Get out of academia and go work for industry.
What is the definition of publication anyway?
Definition: Publication is the act of offering something for the general public to inspect or scrutinize. It means to convey knowledge or give notice. Making something known to the community at large, exhibiting, displaying, disclosing, or revealing….
Notice that, scientists? Publication is the act of offering something for the general public to inspect or scrutinize. Therefore “Climate Scientists” are not even meeting the definition of publishing, much less the definition of science; they only meet the definition of propaganda.
Definition: propaganda: In general, a message designed to persuade its intended audience to think and behave in a certain manner. Thus advertising is commercial propaganda. In specific, institutionalized and systematic spreading of information and/or disinformation, usually to promote a narrow political or religious viewpoint. Originally, propaganda meant an arm of the Roman Catholic church responsible for ‘de propaganda fidei,’ propagation of the faith. It acquired negative connotations in the 20th century when totalitarian regimes (principally Nazi Germany) used every means to distort facts and spread total falsehoods.
Now compare that to Schneider statement:
“We need to get some broadbased support, to capture the public’s imagination. That, of course, entails getting loads of media coverage. So we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have. This ‘double ethical bind’ we frequently find ourselves in cannot be solved by any formula. Each of us has to decide what the right balance is between being effective and being honest. I hope that means being both.”

Jan Zeman
July 21, 2010 2:12 pm

Almost everything in this science of CAGW is overstated – in 1979 Stephen Schneider was telling us CO2 would rise 20% by the end of the century and “will double” by the middle of the 21st – http://www.youtube.com/watch?v=pB2ugPM0cRM .
In fact it is now 2010 and CO2 has risen 13.7% since 1979 (5.5% since 2000), so he overstated the figure by about 2.4 times reality.
It reminds me of the fishermen who show with their hands how big the fish they caught was, to win attention. It is more like activism than serious science…

Billy Liar
July 21, 2010 2:14 pm

Ecotretas says:
July 21, 2010 at 2:02 am
It is enlightening to watch the video of a Channel 4 TV programme called ‘The Global Warming Conspiracy’ on Ecotretas’ website.
It was made in 1990 and it is quite obvious that climate science has not progressed at all in the intervening 20 years. The names have changed: in the film Wigley was running the CRU and Houghton was at the UK Met Office. Michaels, Lindzen, Spencer, Schneider and a few others who have dropped from the scene (probably because of their sceptic views) definitely look 20 years younger but still say exactly the same things as now.
In fact, some things have got worse. When asked difficult questions the tone of the reply was somewhat less angry 20 years ago.
It is depressing how little has really changed – despite billions of dollars in funding there is no more certainty in the hypothesis of global warming than there was 20 years ago.

July 21, 2010 2:16 pm

Robert E. Phelan says:
July 21, 2010 at 1:12 pm

Point to Ponder
There is no such thing as “private science”. Lord alone knows what those guys in private, commercially-funded labs are doing, but if their processes and methods are not open to public scrutiny, then it is not science. The public nature of science is in many ways its most critical aspect. Kind of reminds me of those smirking Extenz commercials on late night TV, “this is real science…” yeah.


——————
Robert,
Sure, I can see that there must be independent checks and balances in a scientific process for it to remain objectively focused. That can be done privately. I see no reason why individual people involved (I purposely didn’t say scientists) in a well set up process could not all be private citizens funded privately and even confidentially. Why public?
Are you implying by “public” that governments and newspapers are necessary for the scientific process?
NOTE: In the case of using government funding for areas like climate science I see that the process needs to be public.
John

harvey
July 21, 2010 2:27 pm

[snip – calling people sociopaths won’t help]

Peter S
July 21, 2010 2:31 pm

toby says:
July 21, 2010 at 12:25 am
“What is in question is replication, and a critic should be able to write his or her own code in order to test results with the same data.
The code you write yourself when doing a paper is not for commercial use and not user-friendly. It is usually idiosyncratic and uncommented (or at least poorly commented). Giving it to somebody implies spending valuable time explaining all the wrinkles as well – a waste of time with someone who should be able to do the job themselves.”
Crap code is crap code.
Some basic fundamentals of good programming.
You should always write your code to the best standards, because even throwaway stuff will get reused if it does anything remotely useful.
It is not quicker to write sloppy code, because you never get it 100% right the first time, and debugging always takes longer.
You won’t have time to go back and write it properly, so do it right, right from the start.
If your code is not good enough for public display then it probably does not work properly anyway.
Yes, if you publish a complete formula then someone else can write their own program to replicate the results, but you have just doubled the possibility of bugs affecting the results.
You don’t see the presenters at Top Gear building vehicles from manufacturers’ blueprints so that they can replicate the performance of a new vehicle. The manufacturer provides a complete and working example to be test driven.
It’s about time the academic community was held to the same standards the rest of the world works to.

Nuke
July 21, 2010 2:35 pm

harvey says:
July 21, 2010 at 2:27 pm
*shakes his head*
I cannot believe the coldness in the comments here for a person who has died.
Some of these commentators have no soul.
Sociopaths almost.
Can you not look at the body of work this person produced and applaud his attempt to create order from chaos?

Are we discussing the work or the person?
My condolences to his family.

vigilantfish
July 21, 2010 3:03 pm

sphaerica says:
July 21, 2010 at 10:58 am
“Smokey, Chris Long, Scott B., vigilantfish, and all the others…
You still don’t understand how science works.”
———–
I hope I understand how science works – I have a Ph.D. and a professorial career in exactly that field: the study of how science works (i.e. the history and philosophy of science). Try again.

Editor
July 21, 2010 3:20 pm

John Whitman says: July 21, 2010 at 2:16 pm
Hi, John. Daughter spent an hour at CKS before flying on to Phnom Penh. She’ll have a whole seven hours on return.
What I meant about public science: if it’s proprietary, if it is for limited distribution, it can call itself what it likes, it’s not science. Science is what happens in journals, seminars, demonstrations, blogs, magazines and newspapers. If it’s a secret, it’s not science. It might even be correct, but it’s not science. If it’s not public, it is not science.

RoyFOMR
July 21, 2010 3:46 pm

Sphaerica, sphaerica, sphaerica.
I feel for you mate, I really do. Your intelligence is clearly A1, your literacy may even exceed your intelligence and your passion pushes your literacy into a poor second place.
All those gifts, IMO, are somehow rendered adrift by an immaturity of sensibility that refuses to accept that you may have misjudged those that you seem to admire so much.
Sir, if you can’t see transgressions in the behaviour of those whom you appear to fully support, if you are incapable of understanding that a good sausage doesn’t need to push the sizzle, then you should stick to your gifts and recognize your shortcomings.
I could pile on but that would be unhelpful. I’ll even resist the cheap shot of translating the dog-Latin of your ndp.
Sir, don’t confuse consensus with certainty or authority with adolescent admiration but, most of all, don’t reject advice when it is freely and helpfully offered.

Andrew30
July 21, 2010 3:49 pm

Robert E. Phelan says: July 21, 2010 at 11:17 am
“By the way, which language were your comments based on?”
None, I did not try to locate the syntax, just guessing at it as pseudo-code.

harvey
July 21, 2010 4:08 pm

[snip – stop all this now]

DirkH
July 21, 2010 4:14 pm

Here’s a video interview with Stephen Schneider from 1979.
Talks about rises in CO2 and how this might become a problem in the future.
http://www.climatesciencewatch.org/index.php/csw/details/stephen-schneider-in-1979/

July 21, 2010 4:17 pm

By vigilantfish on July 21, 2010 at 3:03 pm

I hope I understand how science works – I have a Ph.D. and a professorial career in exactly that field: the study of how science works ( i.e. history and philosophy of science. ) Try again.


——-
vigilantfish,
I have been interested in the history of philosophy for some years now, particularly as related to the science and reasoning concepts of western culture/civilization. Any recent good books you can recommend? I would appreciate it.
[This is a blackberry & adirondack chair . . Beware of odd spelling & grammar]
John

harvey
July 21, 2010 4:23 pm

I am sorry.
I cannot speak the truth anymore on this blog.
I have been censored.
bye.
[you’ve been snipped for saying people here are sociopaths, and for using the middle finger example in your last post. if that is your idea of “truth” then perhaps you should consult a professional about it, we don’t need that sort of commentary here ~mod]

fp
July 21, 2010 4:24 pm

Sometimes I wonder how I can doubt AGW when so many reputable scientists say that it’s true. But after reading toby’s and sphaerica’s account of how science works these days, it becomes clear how we got into this mess. If you can’t reproduce the results it’s not science. Why on earth would anyone place any trust in climate models unless those models could be examined in detail?
I was telling my wife (who isn’t on board with my AGW skepticism) what I read here and she refused to believe that’s how science is conducted. She figures I got it off some wacky conspiracy theory website.
[to read more wacky, see http://sphaerica.wordpress.com ~ mod]

Editor
July 21, 2010 4:28 pm

Andrew30 says:
July 21, 2010 at 3:49 pm
Robert E. Phelan says: July 21, 2010 at 11:17 am
“By the way, which language were your comments based on?”
None, I did not try to locate the syntax, just guessing at it as pseudo-code.
Ahhh… I’ve always been fascinated by the effect of language on perceptions. Looks like it kinda works in technical areas… my illustration was pseudo-code, but based on BASIC rules and perspectives… kind of a technical creole. Based on “C” it would have been different… or (shudder!) COBOL.

RoyFOMR
July 21, 2010 4:46 pm

harvey says:
July 21, 2010 at 4:23 pm
I am sorry.
I cannot speak the truth anymore on this blog.
I have been censored.
bye.
I feel for you Harvey, I really do.
I feel for your isolationism, I feel for your refusal to let truth intrude into your mindset but, most of all, I feel for your inability to see when you’re a j**k.
Come back when you feel better. You’re more than welcome.

July 21, 2010 4:52 pm

By Robert E. Phelan on July 21, 2010 at 3:20 pm

Hi, John. Daughter spent an hour at CKS before flying on to Phnom Penh.
What I meant about public science: if it’s proprietary, if it is for limited distribution, it can call itself what it likes, it’s not science. Science is what happens in journals, seminars, demonstrations, blogs, magazines and newspapers. If it’s a secret, it’s not science. It might even be correct, but it’s not science. If it’s not public, it is not science.


——–
Hey Robert,
Yeh, I was in Kaohsiung a few weeks ago for a family ceremony. Good for your daughter. I haven’t forgotten to send pics, but the pics from the ’70s are hardcopy only and in storage 3000 miles away. I will send some recent unrecognizable pics.
Regarding science necessarily being public, OK. But don’t be surprised some morning to hear some dudes/dudettes went to the nearest star and back in a day and a half without gov’t or press knowledge : )
John

July 21, 2010 5:10 pm

By DirkH on July 21, 2010 at 1:35 pm

Yes, in safety-related applications like railways, avionics and power plants, especially nuclear. In the EU you have SIL0 to SIL4, safety integrity levels 0 to 4, and the higher levels require you to choose a coding style that you are able to justify; the easiest way to do this is to pick an industry-standard set of guidelines like MISRA or, in Germany for railways, something like MÜ 8004.
You can define your own coding style, but you will have to document it and you will have to convince the authorities that what you’re doing is state of the art.


——-
DirkH,
Sorry that I am so late to respond. Thanks for the info.
Software development as an industry is captivating.
John

July 21, 2010 5:24 pm

At 5:58 am on 21 July 2010, Kate had written:

Editors and scientists alike insist on the pivotal importance of peer review. The mistake is to have thought that peer review was anything more than a crude means of discovering the acceptability — not the validity — of a new finding.

The big lie is to portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. The system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong.

Just like everything else human beings do. This bit of light extemporanea on the part of Kate offers such a profundity of wisdom and perspicacity that the last of her sentences in those two cited paragraphs should appear in bold print on the masthead of every scientific periodical:

The system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong.


A bit of “truth in advertising” sorely needed in this era of overweening government thugs and irresponsible media root-weevils “hot for certainties.”
Caveat goddam emptor, people.
Of course, there’s always my favorite Cromwell quote (even a stopped clock can be right once in a while):

“I beseech you in the bowels of Christ think it possible you may be mistaken.”

MikeD
July 21, 2010 5:54 pm

As someone with a strong computer science background, I find this IP argument weak. If you explain how you manipulated the data with enough detail for someone to replicate your results, then you have essentially delivered the IP. The rest is just typing for anyone who knows how to program. The logic, not the exact syntax, is typically where the legitimate IP lies. Furthermore, these guys aren’t breaking any great new ground on the computer science front with their severely antiquated “university level tech.” Perhaps with some of the advanced modeling programs (though I have suspicions otherwise), but not with their vanilla utilitarian apps.
In business, a math genius would hand the software guy a mathematical equation or algorithm, and the comp-sci guy would “translate” that into machine language, hopefully in a well-documented and easy-to-maintain manner. In “university level tech” you often see the math guy trying to do the comp-sci part himself, using some vastly obsolete language and with limited knowledge of exactly how to get the best results out of the code (usually with a couple of classes of cross-training and zero real-world experience).
So in short, hiding the source code serves no IP purpose if they’ve accurately described the logic to the point of easy replication. It just allows them to hide their potentially poorly written code and avoid questions related to it. In the case of Jones, Mann, et al., I suspect on occasion they did NOT accurately represent the logic to sufficient levels for people to replicate the experiment, and did not want people to see exactly how they arrived at their numbers.
The university science “g-ds” are often far less than they would like their students and grant providers to know. Through obfuscation and secrecy they can avoid having to admit “that’s over my head.” It reminds me of being accused of plagiarism at university because I got bored with DOS command-line programming and used assembly to integrate a mouse-driven, windowing (but single-threaded) GUI along with graphing/charting libraries into a simple telemetry test app assignment. The professor didn’t understand how it was done, so how could a 3000-level course student have done it? A professor with sizeable endowments, grants, multiple PhDs from prestigious institutions across the globe, well published, and a “genius” in the AI field. Explaining the source line by line with comments deleted settled that one. But perhaps I should have just screamed IP and had the professor grade my work with nothing but the data.

Editor
July 21, 2010 7:05 pm

John Whitman says: July 21, 2010 at 4:52 pm
don’t be surprised some morning to hear some dudes/dudettes went to the nearest star and back in a day and a half without gov’t or press knowledge : )
Yeah, and I keep telling my students that the first star ship is not gonna be called “Enterprise”…. it will be called “Tien Shan” – and when the dudettes get back they’ll casually mention that the colony left behind calls itself the Han Minh Tien Shang Kuo…..

Steve in SC
July 21, 2010 8:00 pm

As per the teachings of my mother, I will withhold comment on the demise of [Dr.] Schneider.

Steve McIntyre
July 21, 2010 8:04 pm

In the case at hand, Mann had sent some of the data that I had requested (residuals) to CRU describing it as his “dirty laundry”. After Climatic Change established a data policy, Mann withdrew his submission rather than enable me as a reviewer to see his “dirty laundry”.

Steve Garcia
July 21, 2010 10:36 pm

(Off topic comment, which I rarely do…)
@ Tenuc July 21, 2010 at 12:55 am:

This is how the supposed ‘consensus’ of cargo cult climate science was maintained. I suspect that similar arrangements are to be found in other scientific disciplines. Otherwise the various bits of ‘pixie dust’ – dark matter, the graviton, CO2 e.t.c. – needed to maintain the mainstream group-think would be laughed out of court.

Thank you for mentioning dark matter, one of the silliest ideas ever to come out of a scientist’s brain. We can’t see it. It doesn’t block light, even over billions of light years. But it is there. And it is 90% of the matter in the universe. R-i-g-h-t. . . .
Cosmologist guys, it might be time to go back and see where you drove the train off the tracks.
And Tenuc, you might want to throw the Higgs boson in there, too.

tallbloke
July 22, 2010 2:20 am

John Whitman says:
July 21, 2010 at 4:17 pm (Edit)
vigilantfish,
I have been interested in the history of philosophy for some years now, particularly as related to the science and reasoning concepts of western culture/civilization. Any recent good books you can recommend? I would appreciate it.

Hi John,
I’m also a HPS graduate, not as well qualified as Vigilantfish though. I always find that going straight to the original work is better than reading endless commentaries, so I’m going to recommend some oldies but goldies:
The Structure of Scientific Revolutions: Thomas Kuhn
Against Method: Paul Feyerabend
Laboratory Life: Bruno Latour & Steve Woolgar

July 22, 2010 3:22 am

Richard Black BBC is on the CAGW media blitz as well
A Stephen Schneider piece, which ends up linking scepticism to an extreme group…
That was a choice.
He could equally have linked to a positive story, with Anthony Watts, Steve McIntyre, Bishop Hill, etc. writing respectful pieces regarding Schneider’s death, following many MAINSTREAM sceptical/pro people meeting at the Climategate (Guardian) debate and having drinks together afterwards.
Yet he chooses some group I’d never heard of, with some extreme comments in its forums… as if the extreme/left eco-type groups don’t have some nutters in their forums as well…
Am I being too sensitive about the BBC? I expect better from them.
http://www.bbc.co.uk/blogs/thereporters/richardblack/2010/07/i_didnt_know_stephen_schneider.html#comments

papertiger
July 22, 2010 4:42 am

You ever watch one of those climate change docs on PBS where the show ends when you throw a slipper at Prof. Schneider? (Because right after throwing the shoe you change the channel?)
SS holds the Tiger house record for being beaned in absentia.

Nuke
July 22, 2010 6:57 am

adamskirving says:
July 21, 2010 at 9:45 am
@ Nuke
“Computer code is deterministic. Run the same code with the same data and the results are always the same. ”
I agree with the thrust of your post, but I would like to emphasise that Dave Springer is not being snarky. Hardware and firmware may have faults that mean the same code produces different results on different machines. Worse still there are certain bugs like ‘race conditions’ in code that sometimes cause programs run on the same machine to exhibit errors at apparently random intervals. That’s why I was taught to not only make my code available on request, but to keep a record of hardware used, operating system and version, compiler and version, and so on.

I’m not sure how we even got on to this topic. I believe my point was in response to somebody (was it sphaerica?) saying the data and the program could be released, but not the source code.
In controlled conditions (notice the qualifier this time), a computer program should produce the same results when run with the same data. There is a whole industry of software testing built around that principle. Software still gets released with bugs, and one reason is that the test environment does not accurately duplicate the production environment.
If Mann or Hansen, for example, released their data and the executable programs they used to create their outputs, we would expect our results to match theirs. That would prove nothing, because we expect software to work this way.
A computer model run is not an experiment. If we run independent experiments in the real world and get the same results, then we may be on to something. (Let’s assume the experiment was conducted using proper methodology, and remove those types of possible objections from the discussion for now.)
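
A minimal sketch, in Python, of the two points raised in this exchange: an unprotected shared counter whose final value can vary from run to run (the race condition adamskirving describes), plus a record of the environment the run happened on. This is illustrative only; the names here (add_many, counter) are made up for the example and are not code from any commenter or any climate paper.

    import sys
    import threading
    import platform

    counter = 0  # shared state, deliberately left unprotected

    def add_many(n):
        """Increment the shared counter n times, without a lock."""
        global counter
        for _ in range(n):
            counter += 1  # read-modify-write is not atomic; updates can be lost

    threads = [threading.Thread(target=add_many, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Expected 400000; depending on the interpreter version and the machine,
    # the total may fall short, by a different amount each run -- which is the point.
    print("counter =", counter)

    # Record the environment, as adamskirving was taught to do.
    print("python  =", sys.version.split()[0])
    print("machine =", platform.machine())
    print("system  =", platform.system(), platform.release())

Running the script twice on the same machine, or once each on two different machines, is enough to see why “same code, same data” does not by itself guarantee the same answer.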

Steve Garcia
July 22, 2010 8:41 am

@ LearDog says July 21, 2010 at 4:01 am:

The IP argument is a pure smokescreen and they know it. In academia – if you publish it first – it IS your Intellectual Property.

I recently had reason to look this up:
Just FYI, that doesn’t just apply to academia. Under US copyright law, anything written is copyright-protected the moment you finish it. There is, therefore, no need to formally register the copyright. You MAY, but it really isn’t necessary.
I was surprised to find all this out, BTW.

July 22, 2010 9:47 am

tallbloke says:
July 22, 2010 at 2:20 am

John Whitman says:
July 21, 2010 at 4:17 pm (Edit)
vigilantfish,
I have been interested in the history of philosophy for some years now, particularly as related to the science and reasoning concepts of western culture/civilization. Are there any recent good books you can recommend? I would appreciate it.

Hi John,
I’m also an HPS graduate, though not as well qualified as Vigilantfish. I always find that going straight to the original work is better than reading endless commentaries, so I’m going to recommend some oldies but goldies:
The Structure of Scientific Revolutions: Thomas Kuhn
Against Method: Paul Feyerabend
Laboratory Life: Bruno Latour & Steve Woolgar

————–
tallbloke,
Hey, thanks. I think it would be a fun day if somehow there was a post on the history of Western philosophy focused specifically on the science thread, from its beginnings prior to Ancient Greece.
I would like to add that although my curiosity is now focused only on Western philosophy, my intention is not to minimize the philosophic history of other civilizations.
John

July 22, 2010 9:49 am

At 5:59 am on 21 July 2010, sphaerica had written:

The way science works is that someone should be able to take the same hypothesis, and the same readily available data, and completely on their own perform an experiment that yields results consistent with the original study. If it doesn’t, there’s doubt about the hypothesis, and it’s left to science to determine how the two studies differed and obtained different results. But this is how you test a hypothesis, not by demanding detailed notes. It would be like walking into Madame Curie’s lab and insisting that you use her lab equipment to repeat her experiments.

Repeating another investigator’s experiment is merely a method of error-checking, undertaken not only to confirm the original thinker’s methods, observations, and results but also to gain a better appreciation of how the published conclusions were arrived at.
Almost always, such repeat work is undertaken by a scientist who is nagged by a suspicion that the original investigator has put forth something not only extraordinarily interesting but also inadequately examined and therefore puzzling.
“Damn. That’s funny….”
The hallmark of scientific curiosity.
The “walking into Madame Curie’s lab” comment is an astonishing over-reach. No request is being made to take over another person’s physical resources, thereby denying the investigator the use of anything he might require for further work. Providing a digital copy of one’s computer code doesn’t destroy or otherwise render unworkable the investigator’s original code itself, does it?
The effort to reproduce nothing more than the originator’s analysis of the information gained through his observations – which is the sort of fiddlin’ practice by which any intellectually active person sharpens his skills and strengthens his understanding of the subject under examination – is the “poor man’s experimentation,” the perfectly harmless effort of someone who hasn’t got the time or materials to do everything the originator had done.
If the originator is an intellectually honest individual, there can never be any objection made about this kind of thing.
After all, the inquisitive emulator of your interpretations may catch you in a mistake, and such corrective input is always welcome.
Isn’t it?

July 22, 2010 10:27 am

Robert E. Phelan says:
July 21, 2010 at 7:05 pm

John Whitman says: July 21, 2010 at 4:52 pm
don’t be surprised some morning to hear some dudes/dudettes went to the nearest star and back in a day and a half without gov’t or press knowledge : )

Yeah, and I keep telling my students that the first star ship is not gonna be called “Enterprise”…. it will be called “Tien Shan” – and when the dudettes get back they’ll casually mention that the colony left behind calls itself the Han Minh Tien Shang Kuo…..
—————————
Robert,
Wasn’t it Isaac Asimov (of SF fame) who maintained that it was most likely to be people from the Chinese culture who would colonize the stars first?
John

Mark
July 22, 2010 5:25 pm

In the linked article “Lessons of Climatology Apply as a Vicious Front Moves In”, Schneider states “So I decided to use the techniques of climate prediction to increase my survival odds.”
Given that the techniques were his, it’s no wonder the man is now dead!

Matt in Houston
July 23, 2010 3:45 pm

Sphaerica,
You, sir, are a buffoon. You clearly appear to be intelligent. You clearly appear to have some logical skills. Unfortunately you have not learned to apply those gifts properly.
Normally in the course of debate I refrain from making ad hominem attacks, because they do not serve the debate; but in your case you have done nothing but make them against all reasonable responses to your laughable idea of science. That idea, of course, we must extrapolate from your ridiculous musings. In addition to your ad hominem attacks, you follow the standard bad debate tactics of those who know they cannot win on the merits: red herrings and distractions without substantiation.
Your position is indefensible, and you know it, as does everyone else here with a clue.
When you come to realize that the real world does not run inside a computer, nor inside your twisted version of “science”, I am certain most of the people who read and participate in Mr. Watts’ EXCELLENT SCIENCE site will be more than happy to engage in a real and civil discussion with you.
Until then…
You Sir ARE AN ABHORRENT manifestation of what “science” is becoming in this world.

July 23, 2010 6:00 pm


At 3:45 pm on 23 July 2010, Matt in Houston had written:

Normally in the course of debate I refrain from making ad hominem attacks….

Don’t worry, Matt. You haven’t yet perpetrated anything properly characterized as argumentum ad hominem, which is not simple insult but rather a logical fallacy of a specific type. I quote from the Nizkor Project Web page on the subject:
An Ad Hominem is a general category of fallacies in which a claim or argument is rejected on the basis of some irrelevant fact about the author of or the person presenting the claim or argument. Typically, this fallacy involves two steps. First, an attack against the character of the person making the claim, her circumstances, or her actions is made (or the character, circumstances, or actions of the person reporting the claim). Second, this attack is taken to be evidence against the claim or argument the person in question is making (or presenting). This type of “argument” has the following form:
1. Person A makes claim X.
2. Person B makes an attack on person A.
3. Therefore A’s claim is false.
The reason why an Ad Hominem (of any kind) is a fallacy is that the character, circumstances, or actions of a person do not (in most cases) have a bearing on the truth or falsity of the claim being made (or the quality of the argument being made).

People who use “ad hominem” as what they think to be a fancy-word condemnation of verbal abuse (thereby arrogating to themselves the aroma of erudition) are almost invariably a buncha stupid schmucks.