McIntyre on Stephen Schneider

An excerpt from Steve’s post at Climate Audit

Schneider replied that he had been editor of Climatic Change for 28 years and, during that time, nobody had ever requested supporting data, let alone source code, and he therefore required a policy from his editorial board approving his requesting such information from an author. He observed that he would not be able to get reviewers if they were required to examine supporting data and source code. I replied that I was not suggesting that he make that a condition of all reviews, but that I wished to examine such supporting information as part of my review, was willing to do so in my specific case (and wanted to do so under the circumstances) and asked him to seek approval from his editorial board if that was required.

This episode became an important component of the Climategate emails in the first half of 2004. As it turned out (though it was not a point that I thought about at the time), both Phil Jones and Ben Santer were on the editorial board of Climatic Change. Some members of the editorial board (e.g. Pfister) thought that it would be a good idea to require Mann to provide supporting code as well as data, but Jones and Santer lobbied hard and prevailed on code, though not on data: they defeated any requirement that Mann supply source code, while Schneider did adopt a policy requiring authors to supply supporting data.

I therefore reiterated my request as a reviewer for supporting data – including the residuals that Climategate letters show that Mann had supplied to CRU (described as his “dirty laundry”). The requested supporting data was not supplied by Mann and his coauthors, and I accordingly submitted a review to Climatic Change observing that Mann et al. had flouted the new policy on providing supporting data. The submission was not published. I observed on another occasion that Jones and Mann (2004) contained a statement slagging us, based on a check-kiting citation to this rejected article.

During this exchange, I attempted to write thoughtfully to Schneider about processes of due diligence, drawing on my own experience and on Ross’ experience in econometrics. The correspondence was fairly lengthy; Schneider’s responses were chatty and cordial and he seemed fairly engaged, though the Climategate emails of the period perhaps cast a slightly different light on events.

Following the establishment of a data policy at Climatic Change, I requested data from Gordon Jacoby – which led to the “few good men” explanation of non-archiving (see CA in early 2005) – and from Lonnie Thompson (leading to the first archiving of any information from Dunde, Guliya and Dasuopu, if only summary 10-year data inconsistent with other versions). Here Schneider accomplished something that almost no one else has been able to do – get data from Lonnie Thompson – something that, in itself, shows Schneider’s stature in the field.

It was very disappointing to read Schneider’s description of these fairly genial exchanges in his book last year. Schneider stated:

The National Science Foundation has asserted that scientists are not required to present their personal computer codes to peer reviewers and critics, recognizing how much that would inhibit scientific practice.

A serial abuser of legalistic attacks was Stephen McIntyre, a statistician who had worked in Canada for a mining company. I had had a similar experience with McIntyre when he demanded that Michael Mann and colleagues publish all their computer codes for peer-reviewed papers previously published in Climatic Change. The journal’s editorial board supported the view that the replication efforts do not extend to personal computer codes with all their undocumented subroutines. It’s an intellectual property issue as well as a major drain on scientists’ productivity, an opinion with which the National Science Foundation concurred, as mentioned.

This was untrue in important particulars and a very unfair account of our 2004 exchange. At the time, Schneider did not express any hint that the exchange was unreasonable. Indeed, the exchange had the positive outcome of Climatic Change adopting data archiving policies for the first time.

As I noted above, at his best, Schneider was engaging and cheerful – qualities that I prefer to remember him by. I was unaware of his personal battles or that he ironically described himself as “The Patient from Hell” – a title that seems an honorable one.

Read more at Climate Audit


179 Comments
JTinTokyo
July 21, 2010 2:23 am

Tallbloke says:
“If you take a look at Steve McIntyre’s site climateaudit.org you’ll see he has done that many many times. The point is, he didn’t get the same results, and the description of the methodology given in the papers was inadequate for replication purposes. So he then had to try to work out what the paper wasn’t saying. A time consuming reverse engineering job.”
Tallbloke is correct. An empirical paper should make clear the methodology used to derive the results. I understand toby’s unease about releasing code, as I am similarly uncomfortable releasing the models I build for my own work in business. However, I always attempt to make clear exactly how I derived my results so that they can be replicated, and my clients generally want to understand how I derived them. That academics would attempt to obfuscate their methodology while hiding behind the skirts of “peer review”, when their claims have the potential to affect the lives of hundreds of millions, if not billions, is preposterous.

Eric (skeptic)
July 21, 2010 2:27 am

Schneider seems to have had a dichotomous personal and public persona. From what I have read, everyone who came into contact with him had great personal admiration for him. At the same time he seemed to scorn those he didn’t know, such as those of us in the great unwashed masses of the public. We were apparently not capable of appreciating the big picture and could not be trusted with the facts.

Patrick M.
July 21, 2010 2:40 am

toby said:
“The code you write yourself when doing a paper is not for commercial use and not user-friendly. It is usually idiosyncratic and uncommented (or at least poorly commented). Giving it to somebody implies spending valuable time explaining all the wrinkles as well – a waste of time with someone who should be able to do the job themselves.”
As a professional software developer I can state that if your code is “idiosyncratic and uncommented (or at least poorly commented)”, it is also likely to have bugs. That’s a major issue with your statement above: you are assuming that there are no bugs in the original code. So if somebody tries to replicate your results and fails because the original has bugs, where does that leave us? Obviously, if the original programmer didn’t have time to comment their code, then they’re not going to have time to review the code of someone who is trying to replicate their results. So then what?
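To make the point concrete, here is a contrived Python sketch – every name and number is invented for illustration, and this is not anyone’s actual analysis code:

def baseline_anomaly(temps, start, end):
    """Return each value minus the mean of temps[start:end]."""
    base = sum(temps[start:end]) / (end - start)
    return [t - base for t in temps]

temps = [14.1, 14.3, 13.9, 14.6, 15.2]
# Intended baseline: the first three "years". An author who privately
# treats (start, end) as inclusive will call (0, 2) and silently average
# only two values. Both calls below return plausible-looking anomalies,
# so a replicator working from the paper alone cannot tell which one
# produced the published figures.
print(baseline_anomaly(temps, 0, 2))   # the bug
print(baseline_anomaly(temps, 0, 3))   # the intention

Without the code, a failed replication leaves everyone arguing about whether the discrepancy lies in the data, the method, or a slip like this one.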

Jackie
July 21, 2010 2:43 am

Excellent post tallbloke.
Corrupt, inept, delusional scientists fear the blogosphere more than anything else.
The biggest threat to them from the blogosphere is truth.
It is the blogosphere that has allowed discussion to reopen on what was “settled science”. It is the blogosphere that has allowed discussion to reopen on settled “alternative energies”. It is the blogosphere that has allowed discussion to reopen in the mainstream media about settled “environmentalism”.
The blogosphere is much bigger than one scientist or one concerned scientists’ club. For these people, that is the real problem with the blogosphere: the truth rises and collective intelligence prevails irrespective of education, affiliation, award, status, political belief or any other label. This is the hard part for celebrity scientists to accept. There are many, many better and more intelligent people rising to the surface in the blogosphere who are showing better scientific understanding.

Frank
July 21, 2010 3:00 am

Rick Bradford (July 21, 2010 at 12:54 am) took Schneider’s quote (see below) about telling scary stories out of context, a gross thing to do right now. When dealing with McIntyre and Mann, did Schneider succeed in keeping his scientific and public ethics from interfering? McIntyre appears to think he tried.
The real problem with Schneider’s ethics lies where science interfaces with the public, especially at the IPCC. The IPCC wants us to believe that its reports, particularly the main body, are pure science, not documents advocating legislation. I don’t think Schneider would ever advocate “telling scary stories” in an IPCC report. On the other hand, there is little doubt that scientists are happy to “make little mention of any doubts we might have” in the main body of an IPCC report, and they go along with “making simplified dramatic statements” in the SPMs. “Hide the decline” is a classic case of “making little mention of any doubts we might have”. “1998 was the warmest year of the millennium” was a classic example of a “simplified dramatic statement”. Schneider may have personally done an excellent job of handling his “ethical double bind”, but the climate science community has failed. They want to “save the planet”, not follow the IPCC’s charter to provide unbiased scientific information which will enable legislatures to chart an optimum path between the costs and benefits of fossil fuels. In public policy and law, we don’t trust humans to handle “ethical double binds”. When scientists think they can do so, they create a situation equivalent to allowing one person to serve as prosecutor, judge and jury. When scientists permit themselves to have (or tolerate colleagues with) large egos, they abandon the obligation to “include all the doubts, the caveats, the ifs, ands, and buts”. Doubt and ego rarely mix well.
“On the one hand, as scientists we are ethically bound to the scientific method, in effect promising to tell the truth, the whole truth, and nothing but — which means that we must include all the doubts, the caveats, the ifs, ands, and buts. On the other hand, we are not just scientists but human beings as well. And like most people we’d like to see the world a better place, which in this context translates into our working to reduce the risk of potentially disastrous climatic change. To do that we need to get some broadbased support, to capture the public’s imagination. That, of course, entails getting loads of media coverage. So we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have. This ‘double ethical bind’ we frequently find ourselves in cannot be solved by any formula. Each of us has to decide what the right balance is between being effective and being honest. I hope that means being both.” (Quoted in Discover, pp. 45–48, Oct. 1989.)

Ken Hall
July 21, 2010 3:12 am

“For all the criticism of Global temperature reconstructions, no one has created a current Global reconstruction showing the current temperature rise to be within ‘normal’ parameters over the last 1000 years, all Global reconstructions, including even Loehle’s, show current Global temps being the highest for over a thousand years.”
This is so wrong in so many ways…
Where do I start? “Global reconstruction”? Well, this suggests a computer model. My response? The models are incomplete and so are clearly wrong. If you cannot model cloud activity properly, you cannot model climate. If you need to “add an anthropogenic forcing” to make your model work, then that is just saying that we had to make something up to get our flawed model to work! If the model were right, the anthropogenic signal would be patently clear in the tropics in the form of the tropospheric hot spot. It is not there, but you leave it in the model to make the model fit the adjusted and homogenized temperature record, taken from a diminished list of global temperature stations which has seen a massive reduction in the number of stations in COLD locations.
The models are wrong, worthless and corrupt. End of that story.
“temperature rise to be within ‘normal’ parameters over the last 1000 years”
What is “normal”, and how and why have you defined “normal” as being something other than what nature is showing us now? There is a massive amount of geological, anthropological, geographical, cultural and historical evidence showing clearly that the world was much warmer within the last 1000 years. Farmsteads frozen in Greenland tundra are but one example, vineyards in Yorkshire another. But that is inconvenient to climate scientists, who revert to their unscientific habits of hiding or ignoring inconvenient evidence and become deniers of truth.
The current global temperatures are NOT unusual; the climate has warmed, and cooled, by a far greater extent AND at a far more rapid rate than now, hundreds of times in the past, before man was ever on this ocean-covered rock we call Earth.

Yarmy
July 21, 2010 3:17 am

jcrabb says:
July 21, 2010 at 12:22 am
For all the criticism of Global temperature reconstructions, no one has created a current Global reconstruction showing the current temperature rise to be within ‘normal’ parameters over the last 1000 years, all Global reconstructions, including even Loehle’s, show current Global temps being the highest for over a thousand years.

It doesn’t.
http://www.ossfoundation.us/projects/environment/global-warming/myths/loehle-temperature-reconstruction/image/image_view_fullscreen
Note I’m not making any assertions about whether this is a better or worse reconstruction than any of the others (and as Phil Jones himself said, he doesn’t think it’s possible to do anyway!).

Christopher Hanley
July 21, 2010 3:31 am

jcrabb (12:22 am) says:
“….all Global reconstructions, including even Loehle’s, show current Global temps being the highest for over a thousand years….”
The Greenland Ice Sheet Project 2 Holocene temperature reconstruction shows that the Little Ice Age was the coldest period for about 4000 years.
http://westinstenv.org/wp-content/postimage/Easterbrook_graph.jpg

July 21, 2010 3:53 am


At 12:25 am on 21 July 2010, Toby had written:

As a working scientist, with a modest record of publications, I completely support Dr Schneider’s perspective on release of code. What is in question is replication, and a critic should be able to write his or her own code in order to test results with the same data. The code you write yourself when doing a paper is not for commercial use and not user-friendly. It is usually idiosyncratic and uncommented (or at least poorly commented). Giving it to somebody implies spending valuable time explaining all the wrinkles as well – a waste of time with someone who should be able to do the job themselves.

Others have made a similar point, but amplification and reinforcement is worth an effort, particularly with regard to the second of these two paragraphs.
My own “modest record of publications” is in clinical research, in which a great deal of what is done – on the pharmaceutical and medical device manufacturers’ dime – has to be submitted to one or another office of the U.S. Food & Drug Administration and/or the European Medicines Agency (formerly EMEA). In order to facilitate the manufacturers’ compliance with the regulations of these agencies, clinical investigators withhold nothing from their reports, and the manufacturers are in turn expected to report the results of these clinical trials completely and truthfully.
Look to the suppression of adverse safety data by Merck Pharmaceuticals in the case of their VIGOR clinical trial of rofecoxib (Vioxx), the discovery of which in 2004 precipitated the withdrawal of that product from the market and multi-billion-dollar lawsuits for damages done to patients as the result of that and other suppression of pertinent safety information.
A really good example of what happens when corporate suits try to evade the rigor of the scientific method as well as regulatory compliance.
The analytical methods used to assess clinical data are also emphatically “not for commercial use and not user-friendly.”
This notwithstanding, if anybody asks a clinical investigator or author “How did you get from the raw data in your study to these results you’ve collated in your reports?” the response is always “Here y’go.”
If you’re not willing to put “[t]he code you write yourself when doing a paper” in front of the whole damned world, why the hell did you write the paper in the first place?
A published paper is never anything more than a holographic sketch of the research work upon which it remarks. The machinery by which you got from your observations to your conclusions really ought to be as lucid and well-reasoned as anything that passes review and gets published, ought it not?
And if it’s not, if it’s kludgy and clumsy and embarrassingly inelegant, as long as it’s valid, how the hell can you make any kind of argument for withholding it from anybody who is interested enough in it to invest the skull-sweat required to examine it?
“Here y’go” does not imply “spending valuable time explaining all the wrinkles” at all.
You might add: “This stuff is pretty messy,” and ask: “If you find any mistakes I’ve made, would you please get back to me about it?”
Of course, that assumes you’re honest enough to admit that you’re capable of making mistakes.

July 21, 2010 3:54 am

Breathtaking post from “toby”.
Point 1 – as has been pointed out, if you’re writing ‘idiosyncratic’ poorly-commented code, then it’s likely to be wrong. More importantly, if you look at your own code and think ‘no-one but me is ever going to be able to understand this’, then you yourself will also be unable to understand it in six months’ time. Write the bloody code properly, for god’s sake.
Point 2 – no one is asking researchers to release professional-quality, properly-commented code (ignoring your implied lack of concern with governments making multi-billion dollar decisions based on amateur-quality, uncommented code). Another programmer will be able to read your code, even if it takes longer than it should – you can always release it with a ‘no support’ rider, just like all free software, if you don’t have time to answer their questions. However, if you’re embarrassed to let others see your crappy code, or you’re worried that someone will find that bug that you just couldn’t figure out, then perhaps you need to (a) take some programming lessons, and (b) add a caveat to the end of your papers: “software has not been reviewed, tested or QC’d”.
Point 3 – if you don’t release code, then for replication to be possible, you *must* provide a fully-detailed functional specification of your code. This will, of course, take many, many hours. Why not just release the code, which will take approx 30 seconds? The answer, as you kindly told us, is that the code is not fit to be seen in public. Why should we have any confidence in it?
I’m also a professional programmer, as are other commenters above, and this kind of crap really annoys me. I wish a few climate researchers would start posting their colleagues’ code to http://thedailywtf.com – I think we’d all learn a lot.

Ross Jackson
July 21, 2010 4:01 am

As another professional coder:
Instead of the code, a high-level description of the algorithm used to process the data would be sufficient. That would include all statistical methods and should be enough to enable the data processing to be duplicated.
And if the “code” were properly written, the algorithm should be well documented in any case.
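To illustrate how far a plain-language spec goes, here is a minimal Python sketch of a generic “standardize and average” composite – an invented toy method for illustration, not any particular published reconstruction:

import statistics

def composite(proxies, base):
    """Spec, as a paper might state it: (1) standardize each series to
    mean 0 and standard deviation 1 over the base period; (2) average
    the standardized series year by year."""
    standardized = []
    for series in proxies:
        ref = series[base[0]:base[1]]
        mu, sd = statistics.mean(ref), statistics.stdev(ref)
        standardized.append([(x - mu) / sd for x in series])
    return [statistics.mean(vals) for vals in zip(*standardized)]

proxies = [[0.1, 0.4, 0.2, 0.9], [1.2, 1.0, 1.5, 2.1]]
print(composite(proxies, (0, 3)))  # composite over four "years"

The two numbered sentences in the docstring are the whole specification; anyone can independently rewrite the dozen lines beneath them and compare outputs.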

LearDog
July 21, 2010 4:01 am

The IP argument is a pure smokescreen and they know it. In academia – if you publish it first – it IS your Intellectual Property.
Unless, of course, one wants to be paid for one’s unique answer?

David
July 21, 2010 4:11 am

Jackie says:
July 21, 2010 at 2:43 am
Excellent post tallbloke.
“Corrupt, inept, delusional scientists fear the blogosphere more than anything else.
The biggest threat to them from the blogosphere is truth.”
———————————————————————————–
Since only about 5% of climate scientists argue against global warming, either totally or guardedly, are the other 95% “corrupt, inept, delusional”? Fear the blogosphere? Fear what, hundreds of thousands of wannabe scientists who collectively are the beholders of the truth?
———————————————————————————–
It is the blogosphere that has allowed discussion to reopen on what was “settled science”. It is the blogosphere that has allowed discussion to reopen on settled “alternative energies”. It is the blogosphere that has allowed discussion to reopen in the mainstream media about settled “environmentalism”.
———————————————————————————-
How can something be re-opened if it wasn’t closed to begin with?
———————————————————————————–
The blogosphere is much bigger than one scientist or one concerned scientists’ club. For these people, that is the real problem with the blogosphere: the truth rises and collective intelligence prevails irrespective of education, affiliation, award, status, political belief or any other label. This is the hard part for celebrity scientists to accept. There are many, many better and more intelligent people rising to the surface in the blogosphere who are showing better scientific understanding.
————————————————————————————-
“The truth rises and collective intelligence prevails…”? How about collective ignorance and collective denial? Do they collectively disappear? How do you qualify and quantify the many, many better and more intelligent people? Are they qualified, or do they have a self-awarded status? Do you read all the scientific papers and understand them? Are you a scientist? I read many scientific papers and readily confess that there are some papers I don’t understand and some I do. I will never comment on a scientific paper I haven’t read or criticise one that I don’t fully understand. That is out of common courtesy to the scientist concerned.
I see too many idiotic comments in the blogosphere not to realise that the truth faces a formidable barrier on its way to the surface.

Quinn the Eskimo
July 21, 2010 4:18 am

Worrying about intellectual property rights in computer code developed for an academic paper is revealing.
IP rights matter when you are trying to make money from the creation.
Climate scientists hoarding their code based on alleged IP rights are entrepreneurs in the competitive market for grant money, and their code and their data are the products they sell into this market. The Climategate emails showed the CRU boys and their collaborators treated data, both raw and processed, as proprietary IP as well. Indeed, they show that even collaborators on the same papers did not share their secret blend of herbs and spices with each other out of “respect” for each other’s “proprietary” work.
From the outside, this viewpoint seems fundamentally wrong. The work is either commercial, proprietary and for profit, or it is not. It is either the academic pursuit of science and knowledge for its own sake – and for the sake of “saving the planet” – or it is not.
The way they act tells us which it is.

Joe Lalonde
July 21, 2010 4:23 am

The current science is based on concepts of theories and observations that can be manipulated as changes occur.
Good hard physical evidence is not a part of this due to the current system that keeps the system in place for funding grants. Unless this system changes, the science will always be incorrect as science is not just one area of study.
Just to study the complexity of water is a massive area in itself due to all the factors that water can do and change. The planet having a stable speed and heat variation are extremely important.
The speed of this planet is not a consideration in lab testing, along with the EXACT positioning of this planet’s co-ordinates, which makes many theories incorrect. This would include quantum mechanics and deep space travel, time travel theories, etc. (To find a sister particle, or to travel, you need to be exact, as our solar system is traveling in space.)
Even Darwin made the mistake that our planet evolved changes when new chemicals or effects were added worldwide to evolve to the current species we have today.
Who is an expert or reviewer of new science when they have not been exposed to it?
A scientific formula theory is very hard to break even with massive amounts of physical evidence.

BBk
July 21, 2010 4:29 am

“I cannot imagine a scientist not expecting to have his work examined. This bit about being ‘idiosyncratic and uncommented’ is hogwash. And don’t get me started on ‘valuable time’.”
It’s rather like a mathematician publishing a paper proving some white-whale conjecture. Four pages of equations are left out of the middle because of “IP” issues, but everyone is just supposed to believe that it works. That’s unacceptable in math, but apparently totally fine in science.

RockyRoad
July 21, 2010 4:34 am

toby says:
July 21, 2010 at 12:25 am
As a working scientist, with a modest record of publications, I completely support Dr Schneider’s perspective on release of code. What is in question is replication, and a critic should be able to write his or her own code in order to test results with the same data.
The code you write yourself when doing a paper is not for commercial use and not user-friendly. It is usually idiosyncratic and uncommented (or at least poorly commented).
——————-Reply
Then, as a computer systems engineer, I’d call such “code” gobbledygook or something else (your choice of demonic definitions). If your code isn’t “user-friendly” (and by that I understand you to mean nobody can understand it), or it is “idiosyncratic” (again, the computer instructions probably don’t do what they were intended to do), or “uncommented” (meaning you probably couldn’t figure out what the code was doing when you wrote it, and certainly down the line that perception got no better), then you are a lazy, confused programmer.
But in climate science it hasn’t been about replicability at all – it is about the end results, and in this way some argue that the end justifies the means. Take some scientist who has an ideological rather than scientific objective and he can take a set of data and, applying his proprietary black box (because how DARE anybody wish to look inside), come up with all sorts of strange and often idiosyncratic results. (Idiosyncratic code produces, at best, idiosyncratic results.)
Indeed, the code is exactly what provides answers as to how the data is being manipulated, what assumptions are being applied, and even what fudge factors are being dreamed up. As the Harry_Read_Me file makes obvious, something terribly wrong was/is going on at the CRU. For them to continue to hide behind all sorts of indefensible excuses is not science at all. It is whitewash. It is refusing to “show your work”.
Nor is the CRU’s supposition correct that one can legitimately hide code, as your argument also futilely attempts to establish. All my own code is commented, and my specification documents run into the thousands of pages – fully indexed, illustrated, cross-referenced and footnoted. And while this is documentation and code for commercial applications, were I a scientist trying to convince colleagues and policy makers of some nefarious doom mankind is supposedly perpetrating on the Earth, displaying the code by which you came to such a conclusion is the only, I repeat, the ONLY choice you have. To do otherwise is elitist snobbery of the worst kind. That your code is poorly written or embarrassing is no justifiable reason to hide it.
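A contrived Python example of what only the code can reveal – the constant and names below are invented for illustration, not taken from any real program:

# No methods section saying "proxies were calibrated against the
# instrumental record" would ever expose a constant like this one.
CALIBRATION_FUDGE = 1.17  # undocumented post-calibration scaling

def calibrated(series):
    """Center a proxy series, then apply the hidden factor."""
    mean = sum(series) / len(series)
    return [(x - mean) * CALIBRATION_FUDGE for x in series]

print(calibrated([0.2, 0.5, 0.1, 0.8]))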

Jack Simmons
July 21, 2010 4:39 am

Back in the day, I learned to make a lot of comments in my code.
When debugging or answering a client’s question, I would always have to ask myself, “What was I thinking?”.
When tracking down a production problem, it was always a relief, sometimes comic, to come across some comments.
My favorite was the programmer’s comment: *** Don’t blame me. They made me do it this way. ***
If you don’t comment your code, expect it to be eventually tossed.

DirkH
July 21, 2010 4:40 am

Frank says:
July 21, 2010 at 3:00 am
“Rick Bradford (July 21, 2010 at 12:54 am) took Schneider’s quote (see below) about telling scary stories out of context, a gross thing to do right now. […]”
Thanks for providing more context about the infamous Schneider quote, Frank.
The problem I have is that I’m not surprised. The context is exactly in line with the shortened version; the shortened version does not distort the meaning at all.
Choosing between effectiveness and honesty – that’s what he said and that’s what he meant. And yes, it’s gross: not the quoting of it, but the fact that he said it and meant it.

Jack Simmons
July 21, 2010 4:42 am

jcrabb says:
July 21, 2010 at 12:22 am

For all the criticism of Global temperature reconstructions, no one has created a current Global reconstruction showing the current temperature rise to be within ‘normal’ parameters over the last 1000 years, all Global reconstructions, including even Loehle’s, show current Global temps being the highest for over a thousand years.

Then all the Global reconstructions are wrong.
Apparently, from the first portion of your statement, it is not possible for model builders to get it right.

wws
July 21, 2010 4:59 am

A truly fitting memorial to Prof. Schneider will be if the last attempts at passing a “climate bill” in the Senate result in that idea being buried along with him.

David
July 21, 2010 5:01 am

Yarmy says:
July 21, 2010 at 3:17 am
jcrabb says:
July 21, 2010 at 12:22 am
When I follow your link, it takes me to a graph which has the following http address (not the page URL): http://www.NCASI.org/programs/areas/climate/LoehleE&E2007.csv
Your link takes me to the OSS Foundation, which says the following:
“Myths vs. Facts in Global Warming: This news and analysis section addresses substance of arguments such as “global warming is a hoax”, “global warming is a fiction”, “global warming is created to make money for Al Gore”. The main fallacy noted is that most arguments are facts out of context while others are simply false representations. When the facts pertaining to the arguments are viewed in context relevance becomes obvious. GLOBAL WARMING IS HAPPENING AND IT IS HUMAN CAUSED.” (my capitalising)
http://www.ossfoundation.us/projects/environment/global-warming/myths/
The graph you used is cherry-picked and out of context. It came from Loehle’s Energy & Environment paper, “A 2000-Year Global Temperature Reconstruction Based on Non-Tree-ring Proxies.”

John Egan
July 21, 2010 5:05 am

Even if there is profound disagreement with someone –
To publish something like this a day after a person dies is unseemly.
It should be removed.

Ken Harvey
July 21, 2010 5:06 am

Were I a maths teacher and set an equation for a student to solve, and he gave me the correct solution of 2.348527336 with no explanation as to how he got there, I would be disinclined to give him a mark. If, upon enquiry, he told me that he had used a method of his own devising which was his personal intellectual property and which I would, in all likelihood, find difficult to understand, then I would be inclined to kick him out of class. A computer model is nothing more than a mechanical aid to quickly solving an equation that would take great time to do manually. We need both the data and the code to establish whether our climatologist is competent to do long-winded, but essentially simple, sums.
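A toy Python illustration, with invented numbers: two defensible ways of computing a “trend” from identical data give different answers, which is exactly why the marking scheme needs the working and not just the result.

data = [13.8, 14.0, 14.4, 14.1, 14.9]

# Method A: trend as the simple endpoint difference per step.
trend_a = (data[-1] - data[0]) / (len(data) - 1)

# Method B: ordinary least-squares slope over the same points.
n = len(data)
xbar = (n - 1) / 2
ybar = sum(data) / n
trend_b = (sum((i - xbar) * (y - ybar) for i, y in enumerate(data))
           / sum((i - xbar) ** 2 for i in range(n)))

print(trend_a, trend_b)  # 0.275 vs 0.23: same data, different answer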

Eric (skeptic)
July 21, 2010 5:25 am

It looks to me like the full context of the Schneider quote is that he favored leaving out caveats and uncertainties in certain public presentations. That may be OK in some cases (e.g. weather forecasting), but as some people have pointed out, perhaps not for trillion-dollar interventions in the economy.
Another possible problem is that Schneider, as a computer modeler, may have believed that his computer models expressed uncertainty (e.g. run-to-run differences) when in fact they were insufficiently detailed to be accurate or contained actual errors.
