Cites and Signs of the times

Guest Post by Willis Eschenbach

I’ve been involved in climate science for a while now; this is not my first rodeo. And I’ve read so many pseudo-scientific studies that I’m starting to develop a list of signs that indicate when all is not well with a particular piece of work.

One sign is whether, how, and when they cite the IPCC “Bible”, the “IPCC Fourth Assessment Report”. The previous report was called the “T.A.R.”, for “Third Assessment Report”, but the most recent one is called “AR4” rather than the “F.A.R.”, presumably to avoid using the “F-word”. This report is thousands upon thousands of pages of … of … of a complex mix of poorly documented “facts”, carefully selected computer model runs, good science, blatantly political screeds from Greenpeace and the World Wildlife Fund, excellent science, laughable errors, heavily redacted observations, poor science, “data” which turns out to be computer model output, claims based on unarchived data, things that are indeed known and correctly described, shabby science, alarmist fantasies, things they claim are known that aren’t known or are incorrectly described, post-normal science, overstated precision, and understated uncertainty. That covers most of the AR4, at least.

Since many of the opinions expressed therein are vague waffle-mouthed mush, loaded with “could” and “may” and “the chance of” and “we might see by 2050”, you can find either support or falsification within its pages for almost any position you might take.

I have an “IPCC fail-scale” that runs from 0 to 30. The higher the number, the more likely it is that the paper will be quoted in the next IPCC report, and thus the less likely it is that the paper contains any actual science.


I’d seen some high-scoring papers, but a team of unknowns has carried off the prize, and very decisively, with a perfect score of 30 out of 30. So how does my “IPCC Fail-Scale” work, and how did the newcomers walk off with the gold?

First, there are three categories, “how”, “whether”, and “when”. They are each rated from zero to ten. The most important of these is how they cite the IPCC report in the text. If they cite it as something like “IPCC Fourth Assessment Report: Climate Change 2007 (AR4), Volume I, pages 37-39 and p. 40, Footnote [3]”, they get no points at all. That’s far too scientific and too specific. You could quickly use that citation to see if it supports their claims, without blindly searching and guessing at what they are citing. No points at all for that.

If they cite it as “IPCC Fourth Assessment Report: Climate Change 2007 (AR4), Volume I”, I award them five points for leaving out the page and paragraph numbers. They get only two points if they just omit the paragraph. And they get eight points if they leave out the volume. Leaving out a URL, so their version can’t be found, gets a bonus point. But to get the full ten points, they have to disguise the report in the document. They can’t be seen to be building their castles on air. So how did the winning paper list the IPCC Fourth Assessment Report in their study?

They list it in the text as “Solomon 2007”. That’s absolutely brilliant. I had to award the full ten points just for style. Plus they stuck the landing, because Susan Solomon is indeed listed as the chief culprit in the IPCC documents, and dang, I do like the way they got around advertising that they haven’t done their homework. 10 full points.

Next, where do they cite it? Newcomers to the field sometimes cite it way at the end of their study (0 to 5 points) or in the middle somewhere (6 to 9 points). But if you have real nerve, you throw it in as your very first reference. That’s what got them the so-called “brownie point”, the extra score named after the color of their nose, the final point that improves their chances of being in the Fifth Assessment Report. Once again, 10 out of 10 points to the winner: “Solomon 2007” is the first reference out of the box.

Finally, do they cite the IPCC at all? Of course, not citing the IPCC Report at all greatly improves the odds that the authors have actually read, understood, and classified the IPCC document as a secondary source, so: no points if they don’t cite it, 10 points if they cite it. One point per occurrence for citing it indirectly through one of their citations, to a maximum of 8. And of course, the winner has ten points in this category as well.
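The three-part tally above can be sketched in code. This is a tongue-in-cheek illustration only: the categories are mine as described, but the function name and argument names below are purely hypothetical.

```python
def ipcc_fail_scale(how_pts, where_pts, whether_pts):
    """Sum the three 0-10 category scores of the "IPCC fail-scale".

    Hypothetical illustration: the scoring categories are as described
    above, but this function and its argument names are invented here.
    """
    for pts in (how_pts, where_pts, whether_pts):
        if not 0 <= pts <= 10:
            raise ValueError("each category is scored from 0 to 10")
    return how_pts + where_pts + whether_pts

# The winning paper: "Solomon 2007" in the text (10 for "how"),
# first reference in the list (10 for "where"), and cited directly
# rather than traced to primary sources (10 for "whether"):
score = ipcc_fail_scale(how_pts=10, where_pts=10, whether_pts=10)
# score == 30, a perfect score
```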

And what is this paragon of scientific studies, this ninja reference-master of analyses, this brazen grab by the newcomers for the crown?

Quite appropriately, it is a study which shows that when the Arctic is warmer, we should expect Northern winters to be colder.

Lately there have been a string of bitterly cold winters … who would have guessed? Well, as the authors of the study point out, none of the climate models guessed it, that’s for sure.

The study is “Arctic warming, increasing snow cover and widespread boreal winter cooling“,  by Judah L Cohen, Jason C Furtado, Mathew A Barlow, Vladimir A Alexeev and Jessica E Cherry. This study proves once again that in the topsy-turvy world of climate science, all things are explainable by the AGW hypothesis … but only in hindsight.

It’s also a curious study in that the authors, who are clearly AGW supporters, are baldly stating that the climate models are wrong, and trying to explain why they are wrong … man, if I say the models are wrong, I get my hand slapped by the AGW folks, but these authors can say it no problem. It does put them into a difficult position, though, explaining why their vaunted models got it wrong.

Finally, if they are correct that a warmer Arctic has cooler winters, then for the average Arctic temperature to be rising, it would have to be much, much warmer in the summers. I haven’t seen any data supporting that, but I could have missed it. In fact, thinking about cooling winters, one of the longest-standing underlying claims was that CO2 warming was going to lead to warmer winters in the extra-tropics and polar regions … what happened to that claim?

CONCLUSIONS in no particular order

• I have no idea if what they are claiming, about snow and cold being the result of warming, is correct or not. They say:

Understanding this counterintuitive response to radiative warming of the climate system has the potential for improving climate predictions at seasonal and longer timescales.

And they may be right in their explanation. My point was not whether they are correct. I just do love how every time the models are shown to be wrong, it has the “possibility of improving climate predictions”. It’s never “hmmm … maybe there’s a fundamental problem with the models.” It’s always the Panglossian “all is for the best in the best of all possible worlds.” From their perspective, this never ever means that the models were wrong up until now. Instead, it just makes them righter in the future. They’ve been making them righter and even righterer for so long that any day now we should reach righterest, and in all that time, the models have never been wrong. In fact, we are advised to trust them because they are claimed to do so well …

• Mrs. Henninger, my high school science teacher, had very clear rules about references. The essence of it was the logical scientific requirement that the reader be able to unambiguously identify exactly what you were referencing. For example, I couldn’t list “The Encyclopedia Britannica, Volume ‘Nox to Pat'” as a reference in a paper I submitted to her. I’d have gotten the paper back with a huge red slash through that reference, and deservedly so.

Now imagine if I’d cited my source as just “The Encyclopedia Britannica”. A citation to “The Encyclopedia Britannica” is worse than no citation, because it is misleading. It lends a scientifically deceptive mask of actual scholarship to a totally unsupported claim. And as a result …

Citing an IPCC Assessment Report in its entirety, without complete volume, page, and if necessary paragraph numbers, is an infallible mark of advocacy disguised as science. It means that the authors have drunk the koolaid, and that the reviewers are asleep at the switch.

• Mrs. Henninger also would not let us cite secondary sources as being authoritative. If we wanted a rock to build on, it had to, must be, was required to refer to the original source. Secondary sources like citing Wikipedia were anathema to her. The Encyclopedia Britannica was OK, but barely, because the articles in the Britannica are signed by the expert who wrote each article. She would not accept Jones’s comments on Smith’s work except in the context of discussing Smith’s work itself.

But the IPCC is very upfront about not doing a single scrap of science themselves. They are just giving us their gloss on the science, a gloss from a single highly-slanted point of view that assumes what they are supposed to be setting out to establish.

As a result, the IPCC Reports are a secondary source. In other words, if there is something in the IPCC report that you are relying on, you need to specify the underlying original source. The IPCC’s comments on the original source are worthless, they are not the science you are looking for.

• If the global climate models were as good as their proprietors claim, if the models were based on physical principles as the programmers insist … how come they all missed it? How come every one of them, without exception, got the wrong answer about cold wintertimes?

• And finally, given that the models are unanimously wrong on the decadal scale, why would anyone place credence in the unanimity of their predictions of the upcoming Thermageddon™ a century from now? Seriously, folks, I’ve written dozens of computer models, from the simple to the very complex. They are all just solid, fast-calculating embodiments of my beliefs, ideas, assumptions, errors, and prejudices. Any claim that my models make is nothing more than my beliefs and errors made solid and tangible. And my belief gains no extra credibility simply because I have encoded it plus the typical number of errors into a computer program.

If my beliefs are right, then my model will be accurate. But all too often, my models, just like everyone’s models, end up being dominated by my errors and my prejudices. Computer climate models are no different. The programmers didn’t believe that Arctic warming would cause cooler winters, so guess what? The models agree; they say that Arctic warming will cause warmer winters. Fancy that. Now that the modelers think it will happen, guess what future models will do.

Now think about their century-long predictions, and how they can only reflect the programmers’ beliefs, prejudices, and errors … here is the part that many people don’t seem to understand about models:

The climate models cannot show whether our beliefs are correct or not, because they are just the embodiment of our beliefs. So the fact that their output agrees with our beliefs means nothing. People keep conflating computer model output and evidence. The only thing it is evidence of is the knowledge, assumptions, and theoretical mistakes of the programmers. It is not evidence about the world, it is only evidence of the programmers’ state of mind. And if the programmers don’t believe in cooling winters accompanying Arctic warming, the models will show warmer winters. As a result, the computer models all agreeing that the winters will be warmer is not evidence about the real world. No matter how many of the models agree, no matter how much the modelers congratulate each other on the agreement between their models, it’s still not evidence.

My best to all,

w.


132 Comments
John Marshall
February 2, 2012 2:11 am

Doug Cotton, Sorry it is my computer. Asking it to print your article, through several routes, produces the overprinting on page 2. Your web page is OK. I thought Windows 7 sorted these problems.
Do you have a PDF version?

Richards in Vancouver
February 2, 2012 2:15 am

Anders, I agree with you. But don’t you mean “a bit less overbearing” rather than “a bit more…”?
We have been treated to a head-butting contest between two antagonists, each of whom has much good to offer. But their obvious self-regard is beyond any level to which mere mortals such as you and I dare aspire.

Disko Troop
February 2, 2012 2:15 am

Ken Hall. The one thing you forgot to mention was THE DRIVER. He is the one guy who can actually come back in and say, this is wrong. All the computer programmers in the world cannot come up with that final response. This is why drivers and test drivers are so highly valued. Some are faster than others and they get the champagne and the kudos, but every team has to have a driver who can come back in and say WHY the car isn’t going as fast as the models and simulations. Schumacher was an example of the driver that could do both, drive fast and identify design flaws on the track. Prost was another one. What is needed in climatology is fewer modelers and a few more guys out on the track doing the work. It would help if even one of them occasionally looked up from his keyboard and glanced out of the window at the weather.

Fredrick Lightfoot
February 2, 2012 2:22 am

Conrad Clark,
Write us a program showing/explaining the relevance of the number 9 to infinity
Willis, what a wonderful world we live in, and you Sir, make it more worth while.

Eric (skeptic)
February 2, 2012 2:36 am

The authors of that study ought to look out the window. This year the polar jet is strong, there is little blocking and cold air is bottled up in Canada and Alaska (where Jan was 20F below normal in many places). The models “predicted” this back in the 90’s and early 2000’s. I put quotes around that word because models don’t predict anything. Also the polar jet was strengthening at that time, so the models matched up with reality. Those model results directly contradict the claims of the last few winters that Arctic warming or low sea ice was responsible for continental cooling, here in the US and in Europe.
The original models seem more applicable since the strong polar jet theory has a physical basis in a cooling stratosphere. One of the problems with verifying the theory is that the stratosphere is cooling from lowered solar ultraviolet and that seems like a more probable cause of the recent colder Northern winters.

Anders Valland
February 2, 2012 2:49 am

Willis, get a grip. A pissing contest on who has done the most advanced programming for the longest time gets you nowhere – and even if you say you don’t care, for the CAGW believers you give them food for telling everybody else what a jerk you are. I care, because I really think you are not the jerk you act like sometimes. So – you were both wrong, doing the “best defense is attack”-variety has no effect with me.
Conrad, from what you write I guess you haven’t been here long. Willis’s position on models is quite clear in my view; he has been very vocal on the issue of models vs. reality (AKA observations), where observations trump models any time. Although that is not very apparent if this post is all you have read from Willis. I would like to know what you think when you say that the Machine Learning class has any relevance to this – do you feel neural networks and self-learning models should be used for modelling climate? Do you think that is feasible, given the complexity of the issue?

Snotrocket
February 2, 2012 3:38 am

John Marshall, February 2, 2012 at 2:11 am
“Do you have a PDF version?”
John, if it’s any help, and if you have a Kindle, Chrome has a gizmo that allows you to send any web article straight to your Kindle (you don’t get the comments, though).
Thanks for a great post Willis. I once had a go at what could be called modelling, but which we then called robotics. I had to code a PC to interact with and control six systems. The initial code was about 2k lines. And it worked. But then, we sat back and tossed around all (ALL?? Hah!!) the ‘what ifs’. That added another 20k lines to the code. It was pretty small beer in those days, but fun, and very educational. It worked, but it was NEVER perfect.

Steve Keohane
February 2, 2012 3:56 am

Straight on to sunrise Willis, your bearings are fine. Quite frankly, I don’t know where you find the time nor fortitude to sieve these brain damaging theses. I certainly appreciate your ability to do so. Thank you sir.

ImranCan
February 2, 2012 4:04 am

The fact that it only takes them 5 words into the abstract to use the word “consensus” tells you absolutely everything you need to know. “Consensus” is a word that has application in politics – it has no place at all in science. In fact the application of the word “consensus” in science actually prevents the evolution of scientific theories towards scientific truth. The use of the word consensus in science stifles alternative views and the evolution towards truth because the required skepticism that goes with the formulation of alternative views immediately puts those who articulate it on the outside of society. In the context of scientific progress it is a blocker. As an example, we do not have a “consensus” that the world is spherical. That is something which is quite simply recognised as a fact, made clear by countless and repeatable observations that match scientific theory and with evidence from multiple sources and angles. The ground-truthing of that particular fact did not go through some magical phase called “consensus”.
It only took 5 words.

February 2, 2012 4:17 am

I agree with Willis that models cannot prove that your theory is correct, but they can prove that it is wrong.
Model output says “if my theory is correct, I expect the following real-life behaviour”. If said behaviour is observed, the theory may be correct or you may just be lucky. However, if the behaviour is not observed then clearly the theory is wrong.
If numerous attempts fail to prove the theory wrong, then you may have some hope that the theory is correct, but never certainty.
Note: the above fails if you “tune” the model with known results. As soon as you depart from pure theory and add in fudge-factors, the results no longer say anything useful about the theory.

Tony McGough
February 2, 2012 4:24 am

Thanks for the interesting article.
Willis Eschenbach is always a good read, in spite of (or perhaps because of) being a stroppy blighter, who knows his own mind only too well. Read him on his terms, and you will be better informed and possibly a little wiser.

MarkW
February 2, 2012 4:47 am

conrad clark says:
February 1, 2012 at 10:12 pm

You criticize Willis for what he says. Willis quotes himself to show that you misunderstood what he said.
Then you criticize Willis for quoting himself.
You claim to have been writing for years, yet your behavior is nothing more than that of a poorly schooled grad student.

H.R.
February 2, 2012 5:03 am

“The study is “Arctic warming, increasing snow cover and widespread boreal winter cooling“, by Judah L Cohen, Jason C Furtado, Mathew A Barlow, Vladimir A Alexeev and Jessica E Cherry. This study proves once again that in the topsy-turvy world of climate science, all things are explainable by the AGW hypothesis … but only in hindsight.”
================================================================
How many points for “More handwaving than a Rose Bowl Parade?”

Frank K.
February 2, 2012 5:54 am

conrad clark says:
February 1, 2012 at 9:35 pm
w.
Re 1986 neural networks and machine learning. Were you in the same IBM SRI class as me? Believe me, you need to be dragged into the 21st century.
Iterative models dont seem to work with climate science (or am I misreading the lack of actual predictions)?
Conrad

Conrad, before you start saying stuff like “iterative models don’t seem to work”, please educate yourself on how numerical methods work for solving systems of partial differential equations, which is what a climate model is at its core. Essentially, you are numerically solving a set of discretized equations which estimate the time rate of change of key physical variables such as air temperature, velocity, pressure, moisture content, etc. (if you have a coupled ocean model, then the time rate of change of ocean current velocities, temperature, etc. will be determined). There are many submodels associated with radiation, cloud formation, aerosol transport, etc. which also are solved in support of the basic equations. You then “march” the numerical solution iteratively over a “short” time step (1 time step = several hours for a climate model) to get the solution at the next (future) time level. Do this thousands and thousands of times to cover days, months, or years of prediction time.
The problem is that the equations being solved are non-linear and coupled, with many different characteristic time scales. Even with the much simpler compressible Navier-Stokes equations, it can be very difficult to get solutions to basic problems like separated flow over a wing at high angle of attack. Climate simulations are at least one order of magnitude more complicated. In addition, with these kinds of problems, then solutions can be highly sensitive to initial conditions and boundary conditions. And depending on your assumptions, you may or may not get a valid solution – nothing can be guaranteed for non-linear systems!
I’m not against using models for climate predictions, except that many research groups (particularly NASA-GISS and Model E) are sloppy in how they document what they’re solving. In Model E’s case, there is NO one place that you can find all the equations being solved adequately documented! And the FORTRAN code is not well written at all. They have some word descriptions which are a bunch of unverified fluff and say almost nothing about their numerics – and this is the code being used by Hansen and his cronies to assert that the “missing heat” is 0.58 W/m^2 and that humans are responsible!!
On the other hand, groups like the GFDL at Princeton and NCAR do a great job with their models, both in coding and documentation. Which leads me to the inevitable question about redundancy – why do we need to be funding dozens of these climate models??? Get a competent group to develop a single code and go with that. It would save a LOT of money and remove the uncertainties associated with presenting climate model results from disparate groups (and then averaging them as ensembles – yikes!).
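Frank K.’s description of “marching” a discretized equation forward over short time steps can be shown with a toy example that assumes nothing about any real climate code: a forward-Euler step applied to a single relaxation equation, dT/dt = -k(T - T_eq).

```python
# Toy illustration of iterative time-stepping (forward Euler), in the spirit of
# Frank K.'s description above. This is NOT a climate model: one equation, one
# variable, no feedbacks; it only shows the "march over short time steps" pattern.
def march(T0, T_eq, k, dt, n_steps):
    T = T0
    for _ in range(n_steps):
        T = T + dt * (-k * (T - T_eq))  # one explicit time step
    return T

# A warm start relaxing toward equilibrium: after many small steps, the
# numerical solution closes in on T_eq.
final = march(T0=300.0, T_eq=280.0, k=0.1, dt=0.01, n_steps=10000)
# final is close to 280.0
```

The same scheme blows up if the step is too large (here, whenever k*dt exceeds 2), which echoes Frank K.’s point that nothing is guaranteed: stability and accuracy depend on choices the modeler makes, not just on the underlying equations.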

Jim Turner
February 2, 2012 6:00 am

“And finally, given that the models are unanimously wrong on the decadal scale, why would anyone place credence in the unanimity of their predictions of the upcoming Thermageddon™ a century from now?”
I don’t know (though we can all guess) but they still do.
http://www.telegraph.co.uk/earth/earthnews/9038988/Climate-change-will-make-UK-new-holiday-destination.html
At least they are no longer suggesting that warmer means unmitigated disaster – a big step for some.

February 2, 2012 6:04 am

Mrs. Henninger must have been a real gem. Not only did she have you writing research papers, but she actually schooled you in proper citation! We never got further than elementary textbooks. My ‘science teacher’, Mr. Cooper (who also coached girls’ softball), believed that rockets would not work in space, because “they had nothing to push against.” Citing Willy Ley (The Conquest of Space) and Newton was of no use.
/Mr Lynn

KNR
February 2, 2012 6:52 am

In many areas such incorrect referencing can lead to undergrads work being failed, perhaps only in climate ‘science’ would it be regarded as acceptable professional standard for published research.

DennisA
February 2, 2012 7:17 am

sceptical says:
February 1, 2012 at 9:05 pm
“Or perhaps because there was already a FAR, First Assessment Report.”
Maybe they should have called AR4, “2 FAR”

kcom
February 2, 2012 7:23 am

“Isn’t it grand to know that each error we find only serves to improve the models? What an accomplishment. So productive!”
Yes, and any day now we’ll have the geometry of those epicycles completely worked out. Every error we find in predicting planetary motion improves them just that little bit more. Won’t the future be grand!

Crispin in Waterloo
February 2, 2012 7:30 am

@Anders
“Willis position on models is quite clear in my view, he has been very vocal on the issue of models vs. reality (AKA observations), where observations trump models anytime.”
One of the ultimate crimes against science is to get a set of observations and compare them to what spits out of a model in which one has invested a lot of time. Then, correct the observations to align with the modelled output and publish the model as a better and more complete record of what is real. This has to stand as the essence of lunacy. Utter madness. Even alchemists had more common sense and logic than that. It passes beyond wilful blindness into the realm of madness for only a madman thinks altering reality will make his fantasy come true.
Climate ‘science’ is the only branch of anything where such lunacy is given a plinth from which to dictate social, economic and political action. Normally, in the land of pre-post-normal science, exposure of craziness or fraud or incompetence is rewarded with oblivion.
It would be interesting, in a morbidly fascinating way, to see the CAGW climate science community itself modelled to predict their collective behaviour. There would be a problem finding suitable analogous models in the animal world because even weasels are not that perfidious. It will have to be done using publications, websites and personal observations. Given the level of consistency seen thus far, and the trend to replace reality with models, Willis’ 0-30 scale may be a useful metric for making chart predictions about the content, warmings, predictions and facts one might find in the coming AR5.

Agnostic
February 2, 2012 7:47 am

Willis, I agree again with anders:
http://wattsupwiththat.com/2012/02/01/cites-and-signs-of-the-times/#comment-882480
The problem is, if you conduct a debate in such combative terms, it detracts from the very laudable and interesting points you make. You should relish the opportunity to take on someone like Conrad to drive home your point.
Also, we normally associate such combativeness with defensiveness. It’s been noted that those on a losing side of an argument start attacking their detractors rather than their arguments. As far as I am able to tell, you are nowhere near losing this specific point as an argument, so no need for the defensiveness. And I am not sure justifying it by pointing to the blog’s popularity or your prolificacy as a guest poster is wise either, given that we generally object to argument by consensus or argument by authority.
As far as engagement with Conrad is concerned, I am nowhere near qualified. But you are….so engage! Don’t beat him up straight away at least. And while you are at ‘im the rest of us can learn something by way of the exchange. We don’t like the way RC deride anyone who questions the orthodoxy, so let’s not start here.

Ken in the Keys
February 2, 2012 7:48 am

Thinking back to several years spent at the Cavendish Laboratory, and several decades spent in high-tech research, I never, ever, heard the term “THE SCIENCE” used by a scientist. On the few occasions when this terminology surfaced, it usually came from lawyers or public-relations flacks. Maybe Willis doesn’t need his ingenious “scoring” system for spotting the fakers and poseurs — any time “THE SCIENCE” pops up in a paper or discussion, we can discount the source!

February 2, 2012 7:51 am

Can I cite this as “Eschenbach W in Watts A, 2012”?

PhilH
February 2, 2012 8:03 am

Mr Lynn: reminds me of a science teacher I had in high school who said that you could put a fan on the back of a sailboat and it would drive the boat.

February 2, 2012 8:04 am

Willis:
Could I put my hand up in a Tee, and ask for a TIME OUT between you and Conrad?
One of my specialties is FINITE ELEMENT ANALYSIS. When you write up an FEA program, and you make the “stiffness matrix” for the elements, it’s based on the well established “mechanics of materials”.
If I just use the PURE MATH of the mechanics of materials, and write up a proper set of stiffness matrices, and do a model…say of a pressure vessel, when I run it, if I do the simplest evaluation (say the hoop stress, at the center of a vessel) I should match to about the numerical accuracy of the material properties. (Commensurately, in the literature, we can actually find historic TESTING on various PV’s and match their measured strains (displacements) with our FEA and know it’s a fairly robust system.)
There are a multitude of “closed, limited variable” systems for which this works extremely well.
HOWEVER the Atmosphere is neither a completely closed, nor a limited variable(s) system.
It is hugely complex. I really liken some of the Atm models to attempts to model the economic system and therefore, say the stock market.
As Willis says, this WILL be dominated by prejudices, “beliefs” and the results will depend as much on the programmer as any data or manipulation thereof.
Max
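Max’s sanity check, comparing an FEA run against the closed-form result, comes down to a one-line formula in the thin-walled case. A hypothetical worked example (the numbers below are mine, not from the comment):

```python
def thin_wall_hoop_stress(p, r, t):
    """Closed-form thin-walled pressure-vessel hoop stress: sigma = p * r / t.

    An FEA model assembled from correct stiffness matrices should reproduce
    this value at mid-vessel to roughly the accuracy of the material
    properties, which is the kind of validation Max describes.
    """
    return p * r / t

# Example: 2 MPa internal pressure, 0.5 m radius, 10 mm wall thickness.
sigma = thin_wall_hoop_stress(p=2.0e6, r=0.5, t=0.01)
# sigma is approximately 1e8 Pa (100 MPa)
```

The point of the comparison is exactly Max’s: for a closed, limited-variable system there is an independent benchmark to validate the model against, which the atmosphere does not offer.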