The Birth of CGR Science

Guest Post by Willis Eschenbach

I was reading a study published in November 2011 in Science mag, paywalled of course. It’s called “The Pace of Shifting Climate in Marine and Terrestrial Ecosystems”, by Burrows et al. (abstract here, hereinafter B2011). However, I believe that the Supplementary Online Information (SOI) may not be paywalled, and it is here.

The study has 19 authors, clear proof of the hypothesis that the quality of the science is inversely proportional to the square of the named authors. The study has plenty of flash, something akin to what the song calls “28 color glossy photos with circles and arrows and a paragraph on the back of each one”, like the following:

Figure 1 from B2011.  ORIGINAL CAPTION: (A) Trends in land (Climate Research Unit data set CRU TS3.1) and ocean (Hadley Centre data set HadISST 1.1) temperatures for 1960–2009, with latitude medians (red, land; blue, ocean).

It’s interesting how they don’t waste any time. In the very first sentence of the study, they beg the conclusion of the paper. Surely that must break the existing land speed record. The paper opens by saying:

Climate warming is a global threat to biodiversity (1). 

I’d have thought that science was about seeing if a warming of a degree or two in a century might be a global threat to biodiversity, and if so, exactly which bio might get less diverse.

I would have expected them to establish that through scientific studies of the plants and animals of our astounding planet. Observations. Facts. Analyses of biodiversity in areas that have warmed. But of course, since they state it as an established fact in the very first sentence, all the observations and evidence and analyses must surely have been laid out in reference (1).

So I looked in the list of references to identify reference (1), expecting to find a hard-hitting scientific analysis with observations and facts that showed conclusively that plants and animals around the globe hate warming and that it damages them and saps their vital bodily fluids.

It was neither encouraging, nor entirely unexpected, to find that reference (1) is entitled “Global Biodiversity Scenarios for the Year 2100”.

Again the paper is paywalled (there must be a better way to do science), but the abstract is here. The abstract says:

ABSTRACT

Scenarios of changes in biodiversity for the year 2100 can now be developed based on scenarios of changes in atmospheric carbon dioxide, climate, vegetation, and land use and the known sensitivity of biodiversity to these changes. This study identified a ranking of the importance of drivers of change, a ranking of the biomes with respect to expected changes, and the major sources of uncertainties.

There you have it, folks. They didn’t bother looking at the real world at all. Instead, they had their computer models generate some “scenarios of change” for what the world might look like in 2100. These model results represent the current situation as projected forwards a century by carefully following, in the most scientificalistic and mathematically rigorous manner, the prejudices and preconceptions of the programmers who wrote the model.

But they didn’t just release the model forecasts. That wouldn’t be science, and more to the point, it entails the risk that people might say “wait a minute … what does a glorified adding machine know about what’s gonna happen in a century, anyway?” Can’t have that.

So first, they intensively studied the results in the most intensive and studious manner. They pored over them, they weighed and measured them, they pieced them and plotted them and mapped them, they took their main conclusion and “washed it in permanganate with carbolated soap” as the poet has it, they pondered the eigenvectors, they normalized the results and standardized them and area-adjusted them and de-normalized them again. That is the kind of mystical alchemy that transmutes plain old fallible computer model results into infallible golden Science.

And what did they find? To no one’s surprise, they found conclusive proof that the programmers’ prejudices and preconceptions were 100% correct, that plants and animals despise warming, and they do all they can to avoid warm places. They showed beyond doubt that even the slightest warming over a century is intolerable to wildlife, that there are only costs and no benefits from gradual warming, and … wait, say what?

In other words, the B2011 study is models all the way down. No one has shown that a few degrees of warming over a century is a “global threat to biodiversity”; that is a very poorly supported hypothesis, not a fact. If the feared warming does occur, the majority of the warming is projected to be at night, in the winter, in the extratropics. Call me crazy, but I don’t foresee huge effects on biodiversity if midnights in Siberia in December are minus 37° rather than minus 40° … sure, every change brings changes, and if it warms there will be some, but I don’t see any evidence supporting a “global threat to biodiversity”.

In any case, I started out by looking at the results of the first study, B2011, but I got totally sidetractored by the error bars on the results shown in Figure 1. (That’s like being sidetracked but with a lot more pull.) They used a tiny 1° x 1° grid size, and given the scarcity of temperature observations in many parts of the world, I wondered how they dealt with the uneven spacing of the ground stations. At that size, many of the grids wouldn’t have a single temperature station. So I looked to see how they handled the error estimate for the temperature trend in a 1° x 1° gridcell that contained no temperature stations at all. Interesting philosophical question, don’t you think? What are the error bars on your results when you have zero data?

I was amazed by their error procedure, which is what led me to write this post. Here’s what the B2011 SOI says about error estimates for their work:

We do not reflect uncertainty for our estimates or attempt statistical tests because all of our input data include some degree of model-based interpolation. Here we seek only to describe broad regional patterns; more detailed modeling will be required to reflect inherent uncertainty in specific smaller-scale predictions.

So … using model based interpolation somehow buys you a climate indulgence releasing you from needing to display your error estimates? If you use model results as input data, you can just blow off “statistical tests”? This “post-normal science” is sure easier than the regular kind.
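To make the contrast concrete, here is a minimal sketch of the regular kind of error analysis. It is my own illustration with invented numbers, not anything taken from B2011 or its data, but it shows how a per-gridcell trend and its error bar are ordinarily computed from actual observations, and what you are left with when a gridcell has no observations at all:

# Minimal sketch (invented numbers, not B2011's data or method): an ordinary
# least-squares trend and its standard error for one gridcell, and what
# happens when the gridcell contains no observations at all.
import numpy as np

def trend_with_stderr(years, temps):
    """Return (trend, standard error of the trend), both in deg C per decade."""
    years = np.asarray(years, dtype=float)
    temps = np.asarray(temps, dtype=float)
    ok = ~np.isnan(temps)
    if ok.sum() < 3:
        # No (or too few) observations: there is no trend and no error bar.
        return np.nan, np.nan
    x, y = years[ok], temps[ok]
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    # Standard error of the slope, from the residual scatter of the real data.
    se = np.sqrt(np.sum(resid ** 2) / (len(x) - 2) / np.sum((x - x.mean()) ** 2))
    return slope * 10.0, se * 10.0

years = np.arange(1960, 2010)
rng = np.random.default_rng(0)

# A gridcell that actually has stations: a small warming trend plus weather noise.
observed = 0.015 * (years - 1960) + rng.normal(0.0, 0.3, years.size)
print(trend_with_stderr(years, observed))   # trend with an honest error bar

# A gridcell with zero stations: every value missing.
empty = np.full(years.size, np.nan)
print(trend_with_stderr(years, empty))      # (nan, nan): nothing to put error bars on

The point of the sketch is simple: the error bar comes from the scatter of real observations around the fitted line, and an empty gridcell gives you nothing to compute it from, which is exactly the question B2011 declines to answer.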

It was not enough that their first sentence, the underlying rock on which their paper is founded, the alleged “danger” their whole paper is built around, exists only in the spectral midnight world of computer models, wherein any fantasy can be given a realistic-looking appearance and heft and ostensible substance.

Indeed, I might suggest that we are witnessing the birth of a new paradigm. The movie industry has been revolutionized by CGI, or “computer-generated imagery”. This includes imagery so realistic it is hard to distinguish from images of the actual world. Here’s an example:

Figure 2. Computer generated fractal image of an imaginary high mountain meadow. Image Source.

CGI has saved the movie industry millions of dollars. Instead of requiring expensive sets or filming on location, they can film anywhere that is comfortable, and fill in the rest with CGI.

We may be seeing the dawn of the same revolution in science, using what can only be described as CGR, or “computer-generated reality”. I mean, the actual reality seems to specialize in things like bad weather and poisonous snakes and muddy streams filled with leeches, and it refuses to arrange itself so that I can measure it easily. Plus it’s hard to sneak up on the little critters to find out what they’re actually doing, somehow they always seem to hear my footsteps.

But consider the CGR mice and rabbits and small animals that live in the lovely high CGR meadows shown in Figure 2. When the temperature rises there in the high meadow, it’s easy for me to determine how far the shrews and rock coneys that live in the meadow will have to move. Using CGR a man can do serious, rigorous, and most importantly, fundable scientific study without all the messy parts involving slipping on rocks and wet boots and sleeping on the ground and mosquitoes and sweating. Particularly the sweating part, I suspect that many of those CGR guys only sweat when there’s emotional involvement.

Personally, I think they are way ahead of their time, they’re already 100% into CGR, because studying actual reality is soooo twentieth century. Instead, they are studying the effects of CG climate on CG foxes preying on CG voles, in the computer-generated reality of the high mountain meadow shown above … to my dismay, CGR seems to be the wave of the future of climate science.

But it’s not bad enough that they have forsaken studying real ecosystems for investigating cyberworlds. In addition, they are asserting a special exemption from normal scientific practices, specifically because they have built their study, not on the rock of solid scientific investigation of the real world, but on the shifting sand of conclusions based on their CGR world. It reminds me of the guy who kills his parents, and then wants special treatment because he’s an orphan … you can’t choose to study CGR, and then claim that the fact that you are not studying actual reality somehow exempts you from the normal requirements of science.

Finally, they’ve modeled the global temperature on a 1° x 1° grid, but they say they need “more detailed modeling”. Now, that’s a curious claim in itself, but it also brings up an interesting question, viz:

They say they can’t give error estimates or uncertainty bounds on their current work because they are using modeled results as input data … and their proposed cure for this is “more detailed modeling” to “reflect inherent uncertainty”?

I’d rave about this, but it’s a peaceful morning and the sun is shining. And besides, in response to the urging of my friends, not to mention the imprecations of my detractors, I’ve given up my wicked ways. I’m a reformed cowboy, but it’s a work in progress, and it looks like I have to reform some more, no news there. So let me simply say that this is an example of post-normal, post-reality climate “science” and peer-review at its worst. Why does using a model somehow make you exempt from the normal scientific requirement to make error estimates and conduct statistical tests?

Sadly, this is all too typical of what passes for climate science these days, models all the way down. Far too much of climate science is merely the study of CGR, and special exemptions apply …

My regards, as always, to everyone.

w.


156 Comments
January 22, 2012 6:02 am

‘a physicist’ linked to a preposterous model-based ‘study’ that says:
“To convert habitat loss to species loss, the principles of island ecology are applied… from this one can predict how many species should become extinct… These doomed species…” &etc.
Rank speculation. If 100,000 species go extinct per year, then in 70 years there won’t be any species left. Crap ‘studies’ like that are nothing but grant-trolling nonsense. The authors would go extinct themselves if they actually had to work for a living, instead of coasting on tenure and writing propaganda for the WWF.
‘A physicist’ says: “And yes, wildlife biologists do regard those studies as being right.” I highly doubt that. Produce verifiable evidence that a majority of wildlife biologists take that position, or retract. By “verifiable” I mean that they were asked specifically if the extinction numbers and rates claimed in the link are correct, and that they answered Yes.

R Kcin
January 22, 2012 11:42 am

GIGO = garbage in Gospel out?

January 22, 2012 11:24 pm

>>
Willis Eschenbach says:
January 22, 2012 at 3:14 am
Jim, see Dr. Craig Loehle’s post on our peer-reviewed paper showing that EO Wilson was very wrong …
<<
Willis,
Thanks for the link. I remember the post, and your post about the missing bodies.
Jim

Septic Matthew
January 23, 2012 10:15 am

Willis Eschenbach: My point is that if I, the US taxpayer, am paying for the research to be done, then I should be able to read the results I paid for without some journal getting in the game at all. One way would be to give the journals say three months and then NSF posts it on their website … I don’t know how it might work. I’m just saying that a system that means that people in the developing world do not have access to scientific results because a school library in Lesotho can’t afford to buy what I already paid for doesn’t make sense to me.
Well, you did not in fact pay very much of America’s $3 trillion annual budget in the first place; and in the second place the publication of the results is an additional cost beyond the cost of the research: either you have to pay more in taxes to cover that cost, or you have to pay a subscription cost for the articles that you want. One way or another, getting the research result to Lesotho is yet another additional cost.
Only a few ras clots like myself were asking “Where are the corpses?”
They have decayed or been eaten, along with the trillions of corpses of the species that have not gone extinct. I grant you that Wilson’s computation is only an educated guess, but demanding the corpses of extinct species of salamanders, ants and fungi is absurd.
The paper was not science,
More absurdity: actual science exceeds the boundaries placed upon it by science commentators. The paper is no worse than the paper by Einstein, Podolsky and Rosen, or the much ridiculed paper on gravitational singularities by Oppenheimer and Snyder, or a few of Paul Dirac’s unsuccessful papers. I should say “not necessarily much worse”. These complex models are analogous to the precursors of the periodic table: with continuous work, of a larger scale than 19th century analytical chemistry, they’ll eventually be reliable. The original atomic theory was based on inaccurate measurements, and when measurement techniques and laboratory skills improved, they revealed serious anomalies in the theory. Not for decades did anyone learn what the nature of the error was. What you call “not science”, namely the publication of a model that is not accurate enough, has many examples in the history of science.
The study has 19 authors, clear proof of the hypothesis that the quality of the science is inversely proportional to the square of the named authors.
Possibly you are unaware of the papers on gene sequencing and gene mapping that have long authorship lists, or the papers in particle physics that have long authorship lists.
If I had to choose, and possibly I have to, I’d bet against the predictions of this model, but I’d hedge my bet somehow. If I were a nation, I’d be investing in alternative energy supplies and other expensive projects as a guard in case they turn out to be right.

Roguewave
January 25, 2012 9:24 am

W.E. documenting the death of empiricism in climate science under the avalanche of virtual “proofs” by science’s new castrati.
