Guest Post by Willis Eschenbach
I was reading a study published in November 2011 in Science mag, paywalled of course. It’s called “The Pace of Shifting Climate in Marine and Terrestrial Ecosystems”, by Burrows et al. (abstract here, hereinafter B2011). However, I believe that the Supplementary Online Information (SOI) may not be paywalled, and it is here.
The study has 19 authors, clear proof of the hypothesis that the quality of the science is inversely proportional to the square of the number of named authors. The study has plenty of flash, something akin to what the song calls “27 8×10 color glossy photos with circles and arrows and a paragraph on the back of each one”, like the following:
Figure 1 from B2011. ORIGINAL CAPTION: (A) Trends in land (Climatic Research Unit data set CRU TS3.1) and ocean (Hadley Centre data set HadISST 1.1) temperatures for 1960–2009, with latitude medians (red, land; blue, ocean).
It’s interesting how they don’t waste any time. In the very first sentence of the study, they beg the conclusion of the paper. Surely that must break the existing land speed record. The paper opens by saying:
Climate warming is a global threat to biodiversity (1).
I’d have thought that science was about seeing if a warming of a degree or two in a century might be a global threat to biodiversity, and if so, exactly which bio might get less diverse.
I would have expected them to establish that through scientific studies of the plants and animals of our astounding planet. Observations. Facts. Analyses of biodiversity in areas that have warmed. But of course, since they state it as an established fact in the very first sentence, all the observations and evidence and analyses must surely have been laid out in reference (1).
So I looked in the list of references to identify reference (1), expecting to find a hard-hitting scientific analysis with observations and facts that showed conclusively that plants and animals around the globe hate warming and that it damages them and saps their vital bodily fluids.
It was neither encouraging, nor entirely unexpected, to find that reference (1) is entitled “Global Biodiversity Scenarios for the Year 2100”.
Again the paper is paywalled (there must be a better way to do science), but the abstract is here. The abstract says:
Scenarios of changes in biodiversity for the year 2100 can now be developed based on scenarios of changes in atmospheric carbon dioxide, climate, vegetation, and land use and the known sensitivity of biodiversity to these changes. This study identified a ranking of the importance of drivers of change, a ranking of the biomes with respect to expected changes, and the major sources of uncertainties.
There you have it, folks. They didn’t bother looking at the real world at all. Instead, they had their computer models generate some “scenarios of change” for what the world might look like in 2100. These model results represent the current situation as projected forwards a century by carefully following, in the most scientificalistic and mathematically rigorous manner, the prejudices and preconceptions of the programmers who wrote the model.
But they didn’t just release the model forecasts. That wouldn’t be science, and more to the point, it entails the risk that people might say “wait a minute … what does a glorified adding machine know about what’s gonna happen in a century, anyway?” Can’t have that.
So first, they intensively studied the results in the most intensive and studious manner. They pored over them, they weighed and measured them, they pieced them and plotted them and mapped them, they took their main conclusion and “washed it in permanganate with carbolated soap” as the poet has it, they pondered the eigenvectors, they normalized the results and standardized them and area-adjusted them and de-normalized them again. That is the kind of mystical alchemy that transmutes plain old fallible computer model results into infallible golden Science.
And what did they find? To no one’s surprise, they found conclusive proof that the programmers’ prejudices and preconceptions were 100% correct, that plants and animals despise warming, and they do all they can to avoid warm places. They showed beyond doubt that even the slightest warming over a century is intolerable to wildlife, that there are only costs and no benefits from gradual warming, and … wait, say what?
In other words, the B2011 study is models all the way down. No one has shown that a few degrees of warming over a century is a “global threat to biodiversity”; that is a very poorly supported hypothesis, not a fact. If the feared warming does occur, the majority of the warming is projected to be at night, in the winter, in the extratropics. Call me crazy, but I don’t foresee huge effects on biodiversity if midnights in Siberia in December are minus 37° rather than minus 40° … sure, every change brings changes, and if it warms there will be some, but I don’t see any evidence supporting a “global threat to biodiversity”.
In any case, I started out by looking at the results of the first study, B2011, but I got totally sidetractored by the error bars on their results shown in Figure 1. (That’s like being sidetracked but with a lot more pull.) They used a tiny, 1° x 1° grid size, and given the scarcity of temperature observations in many parts of the world, I wondered how they dealt with the uneven spacing of the ground stations. At that size, many of the grids wouldn’t have a single temperature station. So I looked to see how they handled the error estimate for the temperature trend in a 1° x 1° gridcell that contained no temperature stations at all. Interesting philosophical question, don’t you think? What are the error bars on your results when you have zero data?
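To make that question concrete, here is a minimal sketch in plain Python (my own illustration, not anything from B2011, and the numbers are invented) of where the error bar on a trend actually comes from. It computes an ordinary least-squares trend and its standard error for a single gridcell's temperature record; with fewer than three observations the residual degrees of freedom vanish and the error is undefined, and with zero observations there is nothing to compute at all:

```python
import math

def trend_with_stderr(years, temps):
    """Ordinary least-squares trend (slope) and its standard error.

    Requires at least 3 points: with fewer, the residual degrees of
    freedom (n - 2) vanish and the trend uncertainty is undefined,
    let alone with zero observations in the gridcell.
    """
    n = len(years)
    if n < 3:
        raise ValueError("cannot estimate trend uncertainty from %d point(s)" % n)
    mean_t = sum(years) / n
    mean_y = sum(temps) / n
    sxx = sum((t - mean_t) ** 2 for t in years)
    sxy = sum((t - mean_t) * (y - mean_y) for t, y in zip(years, temps))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_t
    # Residual scatter about the fitted line, with n - 2 degrees of freedom
    resid_ss = sum((y - (intercept + slope * t)) ** 2
                   for t, y in zip(years, temps))
    stderr = math.sqrt(resid_ss / (n - 2) / sxx)
    return slope, stderr

# A gridcell with a few real observations gives a trend AND an error bar:
years = [1960, 1970, 1980, 1990, 2000, 2009]
temps = [14.1, 14.0, 14.3, 14.4, 14.6, 14.7]
slope, se = trend_with_stderr(years, temps)
print(slope, se)

# A gridcell with no stations gives neither:
try:
    trend_with_stderr([], [])
except ValueError as err:
    print(err)
```

The point is not the arithmetic but the degrees of freedom: the error bar on a trend is built from the scatter of real observations about that trend, and an empty gridcell has no scatter to measure.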
I was amazed by their error procedure, which is what led me to write this post. Here’s what the B2011 SOI says about error estimates for their work:
We do not reflect uncertainty for our estimates or attempt statistical tests because all of our input data include some degree of model-based interpolation. Here we seek only to describe broad regional patterns; more detailed modeling will be required to reflect inherent uncertainty in specific smaller-scale predictions.
So … using model-based interpolation somehow buys you a climate indulgence releasing you from needing to display your error estimates? If you use model results as input data, you can just blow off “statistical tests”? This “post-normal science” is sure easier than the regular kind.
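To see why interpolated input data makes statistical tests more necessary rather than less, consider a toy experiment (my own sketch in Python; the five-year station spacing and the noise levels are invented for illustration). A sparse station record is infilled by linear interpolation, and the infilled series comes out far smoother than the reality it stands in for, so any error bars computed naively from it would be deceptively tight:

```python
import random
import statistics

random.seed(0)

# A synthetic "true" yearly temperature record: slow trend plus weather noise.
years = list(range(1960, 2010))
true_temps = [0.01 * (y - 1960) + random.gauss(0.0, 0.3) for y in years]

# Pretend only every fifth year was actually observed (sparse stations).
observed = {y: t for y, t in zip(years, true_temps) if (y - 1960) % 5 == 0}

def infill(year):
    """Fill a gap by linear interpolation between the nearest observed years."""
    if year in observed:
        return observed[year]
    lo = max(y for y in observed if y < year)
    hi = min(y for y in observed if y > year)
    w = (year - lo) / (hi - lo)
    return (1 - w) * observed[lo] + w * observed[hi]

last = max(observed)
infilled = [infill(y) for y in years if y <= last]
truth = [t for y, t in zip(years, true_temps) if y <= last]

def diff_scatter(series):
    """Standard deviation of the year-to-year changes in a series."""
    return statistics.stdev([b - a for a, b in zip(series, series[1:])])

# The infilled series is much smoother than the real one it replaces:
print(diff_scatter(truth), diff_scatter(infilled))
```

The interpolation smooths away most of the year-to-year variance, which is exactly the variance an honest uncertainty estimate would be built from. That is an argument for doing the statistics carefully, not for skipping them.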
It was not enough that their first sentence, the underlying rock on which their paper is founded, the alleged “danger” their whole paper is built around, exists only in the spectral midnight world of computer models, wherein any fantasy can be given a realistic-looking appearance and heft and ostensible substance.
Indeed, I might suggest that we are witnessing the birth of a new paradigm. The movie industry has been revolutionized by CGI, or “computer-generated imagery”. This includes imagery so realistic it is hard to distinguish from images of the actual world. Here’s an example:
Figure 2. Computer generated fractal image of an imaginary high mountain meadow. Image Source.
CGI has saved the movie industry millions of dollars. Instead of requiring expensive sets or filming on location, they can film anywhere that is comfortable, and fill in the rest with CGI.
We may be seeing the dawn of the same revolution in science, using what can only be described as CGR, or “computer-generated reality”. I mean, actual reality seems to specialize in things like bad weather and poisonous snakes and muddy streams filled with leeches, and it refuses to arrange itself so that I can measure it easily. Plus it’s hard to sneak up on the little critters to find out what they’re actually doing; somehow they always seem to hear my footsteps.

But consider the CGR mice and rabbits and small animals that live in the lovely high CGR meadows shown in Figure 2. When the temperature rises there in the high meadow, it’s easy for me to determine how far the shrews and rock coneys that live in the meadow will have to move. Using CGR, a man can do serious, rigorous, and most importantly, fundable scientific study without all the messy parts involving slipping on rocks and wet boots and sleeping on the ground and mosquitoes and sweating. Particularly the sweating part; I suspect that many of those CGR guys only sweat when there’s emotional involvement.

Personally, I think they are way ahead of their time. They’re already 100% into CGR, because studying actual reality is soooo twentieth century. Instead, they are studying the effects of CG climate on CG foxes preying on CG voles, in the computer-generated reality of the high mountain meadow shown above … to my dismay, CGR seems to be the wave of the future of climate science.
But it’s not bad enough that they have forsaken studying real ecosystems for investigating cyberworlds. In addition, they are asserting a special exemption from normal scientific practices, specifically because they have built their study, not on the rock of solid scientific investigation of the real world, but on the shifting sand of conclusions based on their CGR world. It reminds me of the guy who kills his parents, and then wants special treatment because he’s an orphan … you can’t choose to study CGR, and then claim that the fact that you are not studying actual reality somehow exempts you from the normal requirements of science.
Finally, they’ve modeled the global temperature on a 1° x 1° grid, but they say they need “more detailed modeling”. Now, that’s a curious claim in itself, but it also brings up an interesting question, viz:
They say they can’t give error estimates or uncertainty bounds on their current work because they are using modeled results as input data … and their proposed cure for this is “more detailed modeling” to “reflect inherent uncertainty”?
I’d rave about this, but it’s a peaceful morning and the sun is shining. And besides, in response to the urging of my friends, not to mention the imprecations of my detractors, I’ve given up my wicked ways. I’m a reformed cowboy, but it’s a work in progress, and it looks like I have to reform some more, no news there. So let me simply say that this is an example of post-normal, post-reality climate “science” and peer-review at its worst. Why does using a model somehow make you exempt from the normal scientific requirement to make error estimates and conduct statistical tests?
Sadly, this is all too typical of what passes for climate science these days, models all the way down. Far too much of climate science is merely the study of CGR, and special exemptions apply …
My regards, as always, to everyone.