Cites and Signs of the Times

Guest Post by Willis Eschenbach

I’ve been involved in climate science for a while now; this is not my first rodeo. And I’ve read so many pseudo-scientific studies that I’m starting to develop a list of signs that indicate when all is not well with a particular piece of work.

One sign is whether, how, and where they cite the IPCC “Bible”, their “IPCC Fourth Assessment Report”. The previous report was called the “TAR”, for “Third Assessment Report”, but the most recent one is called “AR4” rather than the “FAR”, presumably to avoid using the “F-word”. This report is thousands upon thousands of pages of … of … of a complex mix of poorly documented “facts”, carefully selected computer model runs, good science, blatantly political screeds from Greenpeace and the World Wildlife Fund, excellent science, laughable errors, heavily redacted observations, poor science, “data” which turns out to be computer model output, claims based on unarchived data, things that are indeed known and correctly described, shabby science, alarmist fantasies, things they claim are known that aren’t known or are incorrectly described, post-normal science, overstated precision, and understated uncertainty. That covers most of the AR4, at least.

Since many of the opinions expressed therein are vague waffle-mouthed mush, loaded with “could” and “may” and “the chance of” and “we might see by 2050”, you can find either support or falsification within its pages for almost any position you might take.

I have an “IPCC Fail-Scale” that runs from 0 to 30. The higher the number, the more likely it is that the paper will be quoted in the next IPCC report, and thus the less likely it is that the paper contains any actual science.


I’ve seen some high-scoring papers, but a team of unknowns has carried off the prize, and very decisively, with a perfect score of 30 out of 30. So how does my “IPCC Fail-Scale” work, and how did the newcomers walk off with the gold?

First, there are three categories: “how”, “where”, and “whether”, each rated from zero to ten. The most important of these is how they cite the IPCC report in the text. If they cite it as something like “IPCC Fourth Assessment Report: Climate Change 2007 (AR4), Volume I, pages 37-39 and p. 40, Footnote [3]”, they get no points at all. That’s far too scientific and far too specific; you could quickly use that citation to see whether it supports their claims, without blindly searching and guessing at what they are citing.

If they cite it as “IPCC Fourth Assessment Report: Climate Change 2007 (AR4), Volume I”, I award them five points for leaving out the page and paragraph numbers. They get only two points if they just omit the paragraph, and eight points if they leave out the volume. Leaving out a URL, so their version can’t be found, earns a bonus point. But to get the full ten points, they have to disguise the report in the document; they can’t be seen to be building their castles on air. So how did the winning paper list the IPCC Fourth Assessment Report in their study?

They list it in the text as “Solomon 2007”. That’s absolutely brilliant; I had to award the full ten points just for style. Plus they stuck the landing, because Susan Solomon is indeed listed as the chief culprit in the IPCC documents, and dang, I do like the way they got around advertising that they haven’t done their homework. Ten full points.

Next, where do they cite it? Newcomers to the field sometimes cite it way at the end of their study (zero to five points) or in the middle somewhere (six to nine points). But if you have real nerve, you throw it in as your very first reference. That’s what got them the so-called “brownie point”, the extra score named after the color of their nose, the final point that improves their chances of being cited in the Fifth Assessment Report. Once again, ten out of ten points to the winner: “Solomon 2007” is the first reference out of the box.

Finally, do they cite the IPCC at all? Of course, not citing the IPCC Report greatly improves the odds that the authors have actually read, understood, and classified the IPCC document as a secondary source, so no points if they don’t cite it, and ten points if they do. Citing it indirectly, through one of their references, gets one point per occurrence, to a maximum of eight. And of course, the winner takes ten points in this category as well.
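
For the programmers in the audience, here is the whole Fail-Scale boiled down to a toy Python sketch. To be clear, every function name, category encoding, and number in it is my own invention for illustration; it is nothing but the rubric above made executable:

```python
# Toy scorer for the "IPCC Fail-Scale" described above. All names and
# encodings here are hypothetical, purely for illustration.

HOW_POINTS = {
    "volume_page_paragraph": 0,  # fully traceable citation: no points at all
    "volume_and_page": 2,        # paragraph omitted
    "volume_only": 5,            # page and paragraph omitted
    "report_only": 8,            # volume omitted as well
    "disguised": 10,             # e.g. citing AR4 as "Solomon 2007"
}

def score_how(style, omits_url=False):
    """The 'how' category: the vaguer the citation, the more points."""
    points = HOW_POINTS[style]
    if omits_url and points < 10:
        points += 1              # bonus point: their version can't be located
    return min(points, 10)      # category capped at ten

def score_where(position, total_refs):
    """The 'where' category: first reference out of the box scores highest."""
    if position == 1:
        return 10                # the "brownie point" slot
    if position <= total_refs // 2:
        return 9                 # in the middle somewhere: six to nine points
    return 5                     # way at the end: zero to five points

def score_whether(cites_directly, indirect_occurrences=0):
    """The 'whether' category: citing the IPCC at all earns full marks."""
    if cites_directly:
        return 10
    return min(indirect_occurrences, 8)  # one point per indirect cite, max 8

# The winner: disguised citation, first reference, cited directly.
total = score_how("disguised") + score_where(1, 40) + score_whether(True)
print(total)  # 30, a perfect score
```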

And what is this paragon of scientific studies, this ninja reference-master of analyses, this brazen grab by the newcomers for the crown?

Quite appropriately, it is a study which shows that when the Arctic is warmer, we should expect Northern winters to be colder.

Lately there has been a string of bitterly cold winters … who would have guessed? Well, as the authors of the study point out, none of the climate models guessed it, that’s for sure.

The study is “Arctic warming, increasing snow cover and widespread boreal winter cooling”, by Judah L. Cohen, Jason C. Furtado, Mathew A. Barlow, Vladimir A. Alexeev and Jessica E. Cherry. This study proves once again that in the topsy-turvy world of climate science, all things are explainable by the AGW hypothesis … but only in hindsight.

It’s also a curious study in that the authors, who are clearly AGW supporters, baldly state that the climate models are wrong, and try to explain why they are wrong … man, if I say the models are wrong, I get my hand slapped by the AGW folks, but these authors can say it, no problem. It does put them in a difficult position, though: explaining why their vaunted models got it wrong.

Finally, if they are correct that a warmer Arctic brings cooler winters, then for the average Arctic temperature to be rising, it would have to be much, much warmer in the summers. I haven’t seen any data supporting that, but I could have missed it. In fact, thinking about cooling winters, one of the longest-standing claims was that CO2 warming was going to lead to warmer winters in the extra-tropics and polar regions … what happened to that claim?

CONCLUSIONS in no particular order

• I have no idea if what they are claiming, about snow and cold being the result of warming, is correct or not. They say:

Understanding this counterintuitive response to radiative warming of the climate system has the potential for improving climate predictions at seasonal and longer timescales.

And they may be right in their explanation. My point is not whether they are correct. I just love how every time the models are shown to be wrong, it has the “potential for improving climate predictions”. It’s never “hmmm … maybe there’s a fundamental problem with the models.” It’s always the Panglossian “all is for the best in the best of all possible worlds.” From their perspective, this never ever means that the models were wrong up until now. Instead, it just makes them righter in the future. They’ve been making them righter and even righterer for so long that any day now we should reach righterest, and in all that time, the models have never been wrong. In fact, we are advised to trust them because they are claimed to do so well …

• Mrs. Henninger, my high school science teacher, had very clear rules about references. The essence of them was the logical, scientific requirement that the reader be able to unambiguously identify exactly what you were referencing. For example, I couldn’t list “The Encyclopedia Britannica, Volume ‘Nox to Pat'” as a reference in a paper I submitted to her. I’d have gotten the paper back with a huge red slash through that reference, and deservedly so.

Now imagine if I’d cited my source as just “The Encyclopedia Britannica”. A citation to “The Encyclopedia Britannica” is worse than no citation, because it is misleading. It lends a scientifically deceptive mask of actual scholarship to a totally unsupported claim. And as a result …

Citing an IPCC Assessment Report in its entirety, without complete volume, page, and if necessary paragraph numbers, is an infallible mark of advocacy disguised as science. It means that the authors have drunk the Kool-Aid, and that the reviewers are asleep at the switch.

• Mrs. Henninger also would not let us cite secondary sources as being authoritative. If we wanted a rock to build on, it had to, it must, it was required to refer to the original source. Secondary sources like Wikipedia were anathema to her. The Encyclopedia Britannica was OK, but barely, because the articles in the Britannica are signed by the expert who wrote each one. She would not accept Jones’s comments on Smith’s work except in the context of discussing Smith’s work itself.

But the IPCC is very upfront about not doing a single scrap of science themselves. They are just giving us their gloss on the science, a gloss from a single highly-slanted point of view that assumes what they are supposed to be setting out to establish.

As a result, the IPCC Reports are a secondary source. In other words, if there is something in the IPCC report that you are relying on, you need to specify the underlying original source. The IPCC’s comments on the original source are worthless; they are not the science you are looking for.

• If the global climate models were as good as their proprietors claim, if the models were based on physical principles as the programmers insist … how come they all missed it? How come every one of them, without exception, got the wrong answer about cold winters?

• And finally, given that the models are unanimously wrong on the decadal scale, why would anyone place credence in the unanimity of their predictions of the upcoming Thermageddon™ a century from now? Seriously, folks, I’ve written dozens of computer models, from the simple to the very complex. They are all just solid, fast-calculating embodiments of my beliefs, ideas, assumptions, errors, and prejudices. Any claim that my models make is nothing more than my beliefs and errors made solid and tangible. And my belief gains no extra credibility simply because I have encoded it plus the typical number of errors into a computer program.

If my beliefs are right, then my model will be accurate. But all too often, my models, just like everyone’s models, end up being dominated by my errors and my prejudices. Computer climate models are no different. The programmers didn’t believe that Arctic warming would cause cooler winters, so guess what? The models agree: they say that Arctic warming will cause warmer winters. Fancy that. Now that the modelers think it will happen, guess what future models will do.
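
To make that concrete, here is a deliberately silly toy model. The “feedback” number is nothing but the programmer’s belief, typed in by hand, so the output can only ever agree with that belief. Every name and number below is invented for illustration; this is not anyone’s actual model:

```python
# A deliberately silly toy: the "prediction" is just the programmer's
# hard-coded belief echoed back with a little arithmetic around it.

def toy_winter_model(arctic_warming_C, assumed_winter_feedback):
    """Return modeled mid-latitude winter temperature change (degrees C).

    assumed_winter_feedback is the programmer's belief, typed in by hand:
    positive means warming winters, negative means cooling winters.
    """
    return assumed_winter_feedback * arctic_warming_C

# Yesterday's belief: Arctic warming means warmer winters everywhere.
print(toy_winter_model(2.0, +0.5))   # +1.0 C, and "the models agree!"

# Today's belief, after a string of cold winters: flip the sign.
print(toy_winter_model(2.0, -0.5))   # -1.0 C, hindsight made solid
```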

Now think about their century-long predictions, and how they can only reflect the programmers’ beliefs, prejudices, and errors … here is the part that many people don’t seem to understand about models:

The climate models cannot show whether our beliefs are correct or not, because they are just the embodiment of our beliefs. So the fact that their output agrees with our beliefs means nothing. People keep conflating computer model output with evidence. The only thing model output is evidence of is the knowledge, assumptions, and theoretical mistakes of the programmers. It is not evidence about the world; it is only evidence of the programmers’ state of mind. And if the programmers don’t believe in cooling winters accompanying Arctic warming, the models will show warmer winters. As a result, the computer models all agreeing that the winters will be warmer is not evidence about the real world. No matter how many of the models agree, no matter how much the modelers congratulate each other on the agreement between their models, it’s still not evidence.

My best to all,

w.

132 Comments

conrad clark
February 5, 2012 12:28 am

Anders says:
“My own programming skills are very basic, but we do use models such as you describe in our research.”
Conrad says – If that is the case, you can relatively easily improve your skills and critical abilities re experiment design, and gain a deeper understanding (statistically and data-wise) of what is right, wrong or stupid. I will again promote the Stanford Machine Learning (ML) class. It’s free, it’s HARD (the advanced track, which also requires that you rapidly learn one of the goofy but powerful languages: Octave (free) or MATLAB (not free)). At the end you will have a good grasp of perhaps 15-30% (a very fuzzy number) of what current models do in the real world. As you describe your job, however, ML may be much more than 30% relevant.
Conrad

Brian H
February 5, 2012 11:13 am

Willis Eschenbach says:
February 3, 2012 at 12:10 pm

As far as I recall, all you have ever done is show up on my threads and make snide comments about my writings. I …

Don’t feel specially favoured, W.; she/it/he does the same thing over at Climate Etc. Sneering ad-hom denigrations of Judith. Not even worth the brief preliminary skim needed to verify that it’s just more of the same-old.
It’s a calling, I guess.

Brian H
February 5, 2012 10:25 pm

@contrad;
“3. Adjusting the model as more data comes in.”
As I once observed to a pro-tweakist, as soon as you adjust a model (or hypothesis), especially by tuning to new data, you have done a Reset, and must start your validation testing all over, back to square one. So for GCMs, that means a new 10 (or 15, or 17, or 30, etc.) year period starts — during which your predictions are “frozen” and stand as-is, unfiddled, live or die!
Not a popular POV in Climastrology.

Brian H
February 5, 2012 10:28 pm

typo: @contrad → @conrad

Brian H
February 5, 2012 10:50 pm

Willis;
Don’t know if it’s related to the Rashef study you linked, but have you heard of Eureqa? Finds standard and novel “rules” that are implicit in any raw data. E.g., derived the laws of motion from observations of a jointed dual pendulum. Free download from Cornell.

Brian H
February 5, 2012 10:52 pm

typo: Rashef → Reshef

Hilary Ostrov (aka hro001)
February 6, 2012 2:33 am

climatereason says: February 2, 2012 at 12:06 am

I am currently reviewing the draft AR5 […] I was quickly astonished by the amount of assertion, speculation and conjecture that was immediately apparent within the areas I am competent to comment on.
I had a very circular argument with the IPCC in Geneva when I queried a comment that ‘unpublished research has shown’ regarding a particularly contentious aspect of ocean warming. I queried as to who authored the unpublished research and what it actually said, which brought forth a string of emails between us as I asked for a copy (which we are permitted to see).
This culminated in the surprising (to me) response that only cited (unpublished or otherwise) material, i.e. that with a number and a corresponding reference to the authors, could be provided. A (seemingly) wild assertion with no citation could not be provided as it…er…had no citation. [emphasis added -hro]

Unbelievable, Tony! Perhaps they are trying to “hide” the fact that this uncited “unpublished research” is a blogpost (which their new, improved “rules” have decreed is not acceptable as a reference source!) And speaking of the IPCC and “peer-reviewed” / non-peer-reviewed …
tokyoboy says: February 2, 2012 at 4:44 pm

I wonder if Ms. Laframboise has regarded references to FAR, SAR and TAR as “peer-reviewed” ones or “grey” ones during her team auditing of the AR4 citation.

As one who participated in this Citizen Audit project, I can confirm that the only references we considered as “peer-reviewed” were those that bore all the hallmarks of a journal; references to any of the IPCC reports did not meet the criteria for “peer-reviewed”.
[Pls See: http://www.noconsensus.org/ipcc-audit/quality-assurance.php for details]
