Guest Post by Willis Eschenbach
I’ve been involved in climate science for a while now; this is not my first rodeo. And I’ve read so many pseudo-scientific studies that I’m starting to develop a list of signs that indicate when all is not well with a particular piece of work.
One sign is whether, how, and when they cite the IPCC “Bible”, their “IPCC Fourth Assessment Report”. The previous report was called the “T. A. R.” for “Third Assessment Report”, but the most recent one is called “AR4” rather than the “F. A. R.”, presumably to avoid using the “F-word”. This report is thousands upon thousands of pages of … of … of a complex mix of poorly documented “facts”, carefully selected computer model runs, good science, blatantly political screeds from Greenpeace and the World Wildlife Fund, excellent science, laughable errors, heavily redacted observations, poor science, “data” which turns out to be computer model output, claims based on unarchived data, things that are indeed known and correctly described, shabby science, alarmist fantasies, things they claim are known that aren’t known or are incorrectly described, post-normal science, overstated precision, and understated uncertainty. That covers most of the AR4, at least.
Since many of the opinions expressed therein are vague waffle-mouthed mush, loaded with “could” and “may” and “the chance of” and “we might see by 2050”, you can find either support or falsification within its pages for almost any position you might take.
I have an “IPCC fail-scale” that runs from 1 to 30. The higher the number, the more likely it is that the paper will be quoted in the next IPCC report, and thus the less likely it is that the paper contains any actual science.
I’d seen some high-scoring papers, but a team of unknowns has carried off the prize, and very decisively, with a perfect score of 30 out of 30. So how does my “IPCC Fail-Scale” work, and how did the newcomers walk off with the gold?
First, there are three categories, “how”, “whether”, and “when”. They are each rated from zero to ten. The most important of these is how they cite the IPCC report in the text. If they cite it as something like “IPCC Fourth Assessment Report: Climate Change 2007 (AR4), Volume I, pages 37-39 and p. 40, Footnote [3]”, they get no points at all. That’s far too scientific and too specific. You could quickly use that citation to see if it supports their claims, without blindly searching and guessing at what they are citing. No points at all for that.
If they cite it as “IPCC Fourth Assessment Report: Climate Change 2007 (AR4), Volume I” I award them five points for leaving out the page and paragraph numbers. They get only two points if they just omit the paragraph. And they get eight points if they leave out the volume. Leaving out a URL so their version can’t be found gets a bonus point. But to get the full ten points, they have to disguise the report in the document. They can’t be seen to be building their castles on air. So how did the winning paper list the IPCC Fourth Assessment Report in their study?
They list it in the text as “Solomon 2007”. That’s absolutely brilliant. I had to award the full ten points just for style. Plus they stuck the landing, because Susan Solomon is indeed listed as the chief culprit in the IPCC documents, and dang, I do like the way they got around advertising that they haven’t done their homework. 10 full points.
Next, when do they cite it, meaning where does it sit in the reference list? Newcomers to the field sometimes cite it way at the end of their study (0 to 5 points) or in the middle somewhere (6 to 9 points). But if you have real nerve, you throw it in as your very first reference. That’s what got them the so-called “brownie point”, the extra score named after the color of their nose, the final point that improves their chances of being in the Fifth Assessment Report. Once again, 10 out of 10 points to the winner: “Solomon 2007” is the first reference out of the box.
Finally, do they cite the IPCC at all? Of course, not citing the IPCC Report greatly improves the odds that the authors have actually read, understood, and classified the IPCC document as a secondary source, so no points if they don’t cite it, ten points if they do. One point per occurrence for citing it indirectly through one of their citations, to a maximum of eight. And of course, the winner has ten points in this category as well.
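For fun, the whole rubric can be captured in a few lines of code. This is strictly a toy sketch of the fail-scale as just described: the point values come from the text above, while the function names, category labels, and inputs are all invented for illustration.

```python
# Toy scorer for the "IPCC fail-scale" rubric described above.
# Point values are from the post; everything else is illustrative.

HOW_POINTS = {
    "full_citation": 0,        # volume, pages, and paragraph all given
    "no_paragraph": 2,         # only the paragraph omitted
    "no_page_or_paragraph": 5, # page and paragraph numbers omitted
    "no_volume": 8,            # volume omitted too
    "disguised": 10,           # e.g. listed simply as "Solomon 2007"
}

WHERE_POINTS = {
    "end": 5,                  # cited at the end of the references (0-5)
    "middle": 9,               # somewhere in the middle (6-9)
    "first": 10,               # the very first reference
}

def whether_points(cited, indirect_occurrences=0):
    """Zero if the IPCC report is not cited at all, ten if it is cited
    directly, or one point per indirect citation, capped at eight."""
    if cited:
        return 10
    return min(indirect_occurrences, 8)

def fail_scale(how, where, cited, indirect_occurrences=0, no_url=False):
    """Sum the three categories to get a score out of 30."""
    score = HOW_POINTS[how]
    if no_url and score < 10:
        score += 1             # bonus point for omitting a findable URL
    return score + WHERE_POINTS[where] + whether_points(cited, indirect_occurrences)

# The winning paper: disguised citation, first reference, cited directly.
print(fail_scale("disguised", "first", cited=True))  # 30
```

Nothing here is science either, of course; it just shows how the three categories sum to the perfect 30.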
And what is this paragon of scientific studies, this ninja reference-master of analyses, this brazen grab by the newcomers for the crown?
Quite appropriately, it is a study which shows that when the Arctic is warmer, we should expect Northern winters to be colder.
Lately there have been a string of bitterly cold winters … who would have guessed? Well, as the authors of the study point out, none of the climate models guessed it, that’s for sure.
The study is “Arctic warming, increasing snow cover and widespread boreal winter cooling“, by Judah L Cohen, Jason C Furtado, Mathew A Barlow, Vladimir A Alexeev and Jessica E Cherry. This study proves once again that in the topsy-turvy world of climate science, all things are explainable by the AGW hypothesis … but only in hindsight.
It’s also a curious study in that the authors, who are clearly AGW supporters, are baldly stating that the climate models are wrong, and trying to explain why they are wrong … man, if I say the models are wrong, I get my hand slapped by the AGW folks, but these authors can say it no problem. It does put them into a difficult position, though, explaining why their vaunted models got it wrong.
Finally, if they are correct that a warmer Arctic means cooler winters, then for the average Arctic temperature to be rising, the summers would have to be much, much warmer. I haven’t seen any data supporting that, but I could have missed it. In fact, thinking about cooling winters, one of the longest-standing claims was that CO2 warming was going to lead to warming winters in the extra-tropics and polar regions … what happened to that claim?
CONCLUSIONS in no particular order
• I have no idea if what they are claiming, about snow and cold being the result of warming, is correct or not. They say:
Understanding this counterintuitive response to radiative warming of the climate system has the potential for improving climate predictions at seasonal and longer timescales.
And they may be right in their explanation. My point was not whether they are correct. I just do love how every time the models are shown to be wrong, it has the “possibility of improving climate predictions”. It’s never “hmmm … maybe there’s a fundamental problem with the models.” It’s always the Panglossian “all is for the best in the best of all possible worlds.” From their perspective, this never ever means that the models were wrong up until now. Instead, it just makes them righter in the future. They’ve been making them righter and even righterer for so long that any day now we should reach righterest, and in all that time, the models have never been wrong. In fact, we are advised to trust them because they are claimed to do so well …
• Mrs. Henninger, my high school science teacher, had very clear rules about references. The essence of it was the logical scientific requirement that the reader be able to unambiguously identify exactly what you were referencing. For example, I couldn’t list “The Encyclopedia Britannica, Volume ‘Nox to Pat'” as a reference in a paper I submitted to her. I’d have gotten the paper back with a huge red slash through that reference, and deservedly so.
Now imagine if I’d cited my source as just “The Encyclopedia Britannica”? A citation to “The Encyclopedia Britannica” is worse than no citation, because it is misleading. It lends a scientifically deceptive mask of actual scholarship to a totally unsupported claim. And as a result …
• Citing the IPCC TAR in its entirety, without complete volume, page, and if necessary paragraph numbers, is an infallible mark of advocacy disguised as science. It means that the authors have drunk the koolaid, and that the reviewers are asleep at the switch.
• Mrs. Henninger also would not let us cite secondary sources as being authoritative. If we wanted a rock to build on, it had to, must, was required to refer to the original source. Secondary sources like citing Wikipedia were anathema to her. The Encyclopedia Britannica was OK, but barely, because the articles in the Britannica are signed by the expert who wrote each article. She would not accept Jones’s comments on Smith’s work except in the context of discussing Smith’s work itself.
But the IPCC is very upfront about not doing a single scrap of science themselves. They are just giving us their gloss on the science, a gloss from a single highly-slanted point of view that assumes what they are supposed to be setting out to establish.
As a result, the IPCC Reports are a secondary source. In other words, if there is something in the IPCC report that you are relying on, you need to specify the underlying original source. The IPCC’s comments on the original source are worthless, they are not the science you are looking for.
• If the global climate models were as good as their proprietors claim, if the models were based on physical principles as the programmers insist … how come they all missed it? How come every one of them, without exception, got the wrong answer about cold wintertimes?
• And finally, given that the models are unanimously wrong on the decadal scale, why would anyone place credence in the unanimity of their predictions of the upcoming Thermageddon™ a century from now? Seriously, folks, I’ve written dozens of computer models, from the simple to the very complex. They are all just solid, fast-calculating embodiments of my beliefs, ideas, assumptions, errors, and prejudices. Any claim that my models make is nothing more than my beliefs and errors made solid and tangible. And my belief gains no extra credibility simply because I have encoded it plus the typical number of errors into a computer program.
If my beliefs are right, then my model will be accurate. But all too often, my models, just like everyone’s models, end up being dominated by my errors and my prejudices. Computer climate models are no different. The programmers didn’t believe that Arctic warming would cause cooler winters, so guess what? The models agree; they say that Arctic warming will cause warmer winters. Fancy that. Now that the modelers think it will happen, guess what future models will do.
Now think about their century-long predictions, and how they can only reflect the programmers’ beliefs, prejudices, and errors … here is the part that many people don’t seem to understand about models:
The climate models cannot show whether our beliefs are correct or not, because they are just the embodiment of our beliefs. So the fact that their output agrees with our beliefs means nothing. People keep conflating computer model output and evidence. The only thing it is evidence of is the knowledge, assumptions, and theoretical mistakes of the programmers. It is not evidence about the world, it is only evidence of the programmers’ state of mind. And if the programmers don’t believe in cooling winters accompanying Arctic warming, the models will show warmer winters. As a result, the computer models all agreeing that the winters will be warmer is not evidence about the real world. No matter how many of the models agree, no matter how much the modelers congratulate each other on the agreement between their models, it’s still not evidence.
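The point can be made concrete with a deliberately silly toy. None of this comes from any real climate model; the function, the “sensitivity” parameter, and the numbers are all invented, purely to show that when every model shares the same built-in assumption, their unanimous agreement tells you about the assumption, not about the world.

```python
# A toy "climate model" whose output is driven entirely by a parameter
# the programmer chose. Many such models agreeing is evidence of a
# shared belief, not evidence about the real climate.

def winter_model(arctic_warming_degC, winter_sensitivity):
    """Predicted change in mid-latitude winter temperature (deg C).
    winter_sensitivity encodes the programmer's belief: positive means
    "Arctic warming warms winters", negative means it cools them."""
    return winter_sensitivity * arctic_warming_degC

# Ten modeling groups who all happen to share the same prior belief ...
beliefs = [0.5] * 10   # everyone assumes warming leads to warmer winters
predictions = [winter_model(2.0, b) for b in beliefs]

# ... will agree unanimously, but the agreement was baked in from the start.
assert all(p > 0 for p in predictions)  # "warmer winters", by construction
```

Flip the sign of every `winter_sensitivity` and the same ten models will unanimously “predict” colder winters instead; the unanimity is worth exactly as much either way.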
My best to all,
w.

Well said: “models cannot show whether our beliefs are correct or not, because they are just the embodiment of our beliefs”
A good example is how Dr Hansen’s treatment of the Earth’s surface as being a blackbody (even though heat transfers by several means other than radiation) is incorporated into the models via the infamous 33 degrees of warming figure. They calculate radiation on the basis of it transferring all thermal energy as in a blackbody, and then double count the “extra” energy in conduction, convection, evaporation etc. Then they assume backradiation adds back some of the thermal energy that was radiated (or was it conducted?) because they assume it will be converted to thermal energy (perhaps by those mass-less photons colliding with it and causing friction or something) – but it isn’t, and so it doesn’t.
See top of my Home page (updated today) http://climate-change-theory.com for more detail on this.
I despair. Trying to teach a bunch of 9th graders the importance of proper citations is obviously futile when I see supposed scholars getting Willis’ full Monty of 30 points. I should require them to read this thread.
Isn’t it grand to know that each error we find only serves to improve the models?
What an accomplishment.
So productive!
You are wrongly critical of all models in general. The climate AGW “models” are indeed calcified prejudice, protected by fudging the data.
And you say ” I’ve written dozens of computer models, from the simple to the very complex. They are all just solid, fast-calculating embodiments of my beliefs, ideas, assumptions, errors, and prejudices. Any claim that my models make is nothing more than my beliefs and errors made solid and tangible” ???
WTF! You don’t calibrate your models, run them against an independent test set, align your initial guesses with objective reality? I’ve written dozens of models in the past 6 months, all data-driven, self-learning, and objectively testable and verifiable. Many years ago I wrote several discrete simulation models. They are also testable and verifiable. What’s wrong with your models? If they contain only your preconceptions, why bother to write them at all?
Take Stanford’s free Machine Learning class – advanced track. You’ll feel better when you can write objective models.
Conrad
maybe co-author Jessica E Cherry was cherry-picking the citations?
I agree with all.
Let us at least make science cleaner and sharper, following rigorous rules, if nothing else. Then some of us skeptics might be persuaded a bit more.
So, in summary:
1) Hearsay is not evidence
2) The Map is not the Territory
Mrs. Henninger reaches from the grave through the hand of Willis Eschenbach. Her teachings live on through her red pencil. We seem to learn and remember most from the mistakes we make, when they are pointed out to us and we are held accountable. Science seems to work best under this scenario. It does take a certain attitude, and maybe even personality, to grow from one’s errors. One’s goals need to be aligned with the philosophy of science, an awe and wonderment at how this whole thing works. I kind of understand how one becomes attached to one’s creation, the love of Pygmalion. Unfortunately for climate scientists, there is no Aphrodite to give models life and tell us the future. Instead of gods and goddesses intervening in our work, we scientists are left with rules of engagement: rules on how we do things, how we proceed, and what can be said from the machinations of our models. When we violate such rules, stretch the truth a bit, or don’t back up our claims to the satisfaction of others, then Mrs. Henninger needs to revisit us, reach out, and draw a red line through our fanciful presentation.
conrad clark says:
February 1, 2012 at 8:04 pm
Jeez, conrad, come down off of your high horse. Of course I do that. I told you up front, this is not my first rodeo. You should pay attention when a man says that, and adjust your foolish expectations accordingly.
The last six months? I don’t know how old you are, conrad, but I’ve been writing computer programs for 49 years now, nearly half a century. Your assumed air of superiority from your “dozens of models in the past 6 months” is unwarranted.
My models are like everyone else’s models including yours. If your claim is that you’ve never written models with errors, you are lying. If you have written models with errors, you’re no better than me. Your choice.
I said above:
So yes, I do test my models. And guess what? Some of them are 100% right. And you know what else? Even though I test them, some of them still turn out wrong.
I would say that’s the nature of models, that they can fool you. However, clearly in the past I failed to realize that there are perfect programmers like yourself, who never have errors in their models. Thanks for straightening me out on that.
Seems like what you should do, conrad, is write climate models. I mean, since your models are all perfect, you could write a climate model with no flaws, and we could all go home.
w.
Mr. Eschenbach: “…the most recent one is called ‘AR4’ rather than the ‘F. A. R.’, presumably to avoid using the ‘F-word’.”
Or perhaps because there was already a FAR, First Assessment Report.
Seconding everything Willis said in response to conrad clark.
conrad clark wrote:
WTF! You don’t calibrate your models, run them against an independent test set, align your initial guesses with objective reality?
Even with the best models and independent test, there are still questions regarding interpolation versus extrapolation.
They’re still just embodiments of beliefs, even if they fit well within the interpolated regions.
w.
Strawman much? I never claimed or implied that my models/programs are without error. I never claimed to be a perfect programmer. Trying to write a climate model with no flaws??? Where did I imply that I would attempt that? My models are all perfect? Where is “perfect” in my comment?
My 1st paying programming job was in 1964. I’ve done very well with that, slowly and with much effort improving and keeping up. You should try that. (that’s a snark vs your “get off your high horse” comment).
It seems that the word “model” doesn’t mean the same thing to the two parties in this conversation. You should try to keep opinion and prejudice out of your programs (I won’t call them models anymore, since we apparently disagree on definitions). Look up “objectivity (philosophy)” and “science wars” in Wikipedia.
Hmmm: do these thoughts apply to the GHG theory?
PS—conrad, you say:
I just took a look. It’s a class on machine learning and neural networks … what use is that to climate science? Iterative models are the order of the day in climate science, my friend, get with the picture.
In any case, I was doing work in neural networks in 1986, and was involved in studying evolutionary machine learning systems as well at that time. So I fear you are about, oh, maybe a quarter century behind the times with your advice as far as I’m concerned.
w.
Thank you Willis.
Reminds me of my time working on my Master’s degree. I used a statistical analysis program to analyze my data. Lo and behold, the output was marvelous and supported my theory. Unfortunately, that is why there are students and there are professors. I learned that statistics tell you pretty much whatever you want them to. I had used the wrong variables and even the wrong statistical test.
sceptical says:
February 1, 2012 at 9:05 pm
As Foghorn Leghorn used to say, “Son, I say son, it’s just a joke, y’know, a bit of humor.”
w.
[SNIP: -REP]
w.
Re 1986 neural networks and machine learning. Were you in the same IBM SRI class as me? Believe me, you need to be dragged into the 21st century.
Iterative models don’t seem to work with climate science (or am I misreading the lack of actual predictions)?
Conrad
Pandora looks super cool! I’d like to go there someday … oh wait, it’s fiction, but it looks so real!
conrad clark says:
February 1, 2012 at 9:21 pm
Oh, please. You came in criticizing my claim, getting on my case because I admit that I have written models with errors. You speculated that I “don’t calibrate [my] models, run them against an independent test set, align [my] initial guesses with objective reality?,” as though I were some newbie.
Now you are whining about a strawman, and butter wouldn’t melt in your mouth, oh, no, you never said anything like that at all.
Look, conrad. Either lead, follow, or get out of the way. Coming in to school me on models isn’t working. I know models. Here is what I said, and I hold to it:
I see nothing in there that I would change. And yes, I do test my models, I do calibrate them, I do run them against independent data, and I have done so for years.
w.
I agree, all models, once bug-free (lol), should be validated against reality.
This is where the GCMs seem to be having very major issues.
Maybe it’s time to apply the circular file. 😉
I just saw a chart today that I had saved knowing that it would be perfect for one of these threads. Little did I suspect Willis would have the perfect post to respond to with it. Enjoy!
What they say vs what it actually means:
“It has long been known” — I didn’t look up the original reference.
“A definite trend is evident” — The data are practically meaningless.
“While it has not been possible to provide definite answers to the questions” — An unsuccessful experiment, but I still hope to get it published.
“Three of the samples were chosen for detailed study” — The other results didn’t make any sense.
“Typical results are shown.” — This is the prettiest graph.
“These results will be in a subsequent report.” — I might get around to this sometime, if published/funded.
“A careful analysis of obtained data.” — Three pages of notes were obliterated when I knocked over a glass of beer.
“After additional study by my colleagues.” — They didn’t understand it, either.
“Thanks are due to Joe Blotz for assistance with the experiment and to Cindy Adams for valuable discussions.” — Mr. Blotz did the work and Ms. Adams explained to me what it meant.
“A highly significant area for exploratory study.” — A totally useless topic selected by my committee.
“In my experience.” — Once.
“In case after case.” — Twice.
“In a series of cases.” — Three times.
“It is believed that.” — I think.
“Correct within an order of magnitude.” — Wrong.
“According to statistical analysis.” — Rumor has it.
“It is clear that much additional work will be required before a complete understanding of this phenomenon occurs.” — I don’t understand.
“A statistically-oriented projection of the significance of these findings.” — A wild guess.
“It is hoped that this study will stimulate further investigations in this field.” — I quit.
All credit to: Scientists Research Paper Chart
JM
You read my mind. Or I read yours. Only yesterday I’d started to wonder how the IPCC reports were being cited. For example, many are happy to cite it when referencing the 2–4 °C temperature increase predicted for this century. Yet as you say, this is only a review document.
w.
OK, go quote yourself some more. That’s an iterative model that seems to work.
Conrad
One can see it this way:
If we take the assumption that warming makes winters colder, we can speculate that past glacial eras were actually triggered by global warming.
Now we really have to worry about the coming ice age…