Guest Post by Willis Eschenbach
I was reading a study just published in Science mag, pay-walled of course. It’s called “The Pace of Shifting Climate in Marine and Terrestrial Ecosystems”, by Burrows et al. (hereinafter B2011). However, I believe that the Supplementary Online Information (SOI) may not be paywalled, and it is here. The paper itself has all kinds of impressive looking graphs and displays, like this one:
Figure 1. Temperature change 1960-2009, from B2011. Blue and red lines on the left show the warming by latitude for the ocean (blue) and the land (red).
I was interested in their error bars on this graph. They were using a 1° x 1° grid size, and given the scarcity of observations in many parts of the world, I wondered how they dealt with the uneven spacing of the ground stations, the lack of data, “infilling”, and other problems with the data itself. I finally found the details regarding how they dealt with uncertainty in their SOI. I was astounded by their error estimation procedure, which was unlike any I’d ever seen.
Here’s what the B2011 SOI says about uncertainty and error bars (emphasis mine):
We do not reflect uncertainty for our estimates or attempt statistical tests because …
Say what? No error bars? No statistical tests? Why not?
The SOI continues with their reason for the missing error bars. It is because:
… all of our input data include some degree of model-based interpolation. Here we seek only to describe broad regional patterns; more detailed modeling will be required to reflect inherent uncertainty in specific smaller-scale predictions.
So … using model based interpolation somehow buys you a climate indulgence releasing you from needing to calculate error estimates for your work? If you use a model you can just blow off all “statistical tests”? When did that change happen? And more to the point, why didn’t I get the memo?
Also, they’ve modeled the global temperature on a 1° x 1° grid, but they say they need “more detailed modeling” … which brings up two interesting questions. The first question is: what will a 0.5° x 0.5° (“more detailed”) model tell us that a 1° x 1° model doesn’t? I don’t get that at all.
The second question is more interesting, viz:
They say they can’t give error estimates now because they are using modeled results … and their proposed cure for this problem is “more detailed modeling”???
I’d rave about this, but it’s a peaceful morning, the sun is out after yesterday’s storm, and my blood is running cool, so let me just say that this is a shabby, childish example of modern climate “science” (and “peer-review”) at its worst. Why does using a model somehow mean you can’t make error estimates or conduct statistical tests?
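For what it’s worth, a first-cut error estimate for a gridded temperature trend is not hard to produce. Below is a minimal sketch (in Python, with made-up numbers standing in for one grid cell’s 1960-2009 annual means): fit an ordinary least squares trend and report the slope with an approximate 95% interval. This is only an illustration of the kind of thing one could do; it ignores autocorrelation and the extra uncertainty from the interpolation and infilling discussed above, and it is not what B2011 did.

```python
# Minimal sketch: an OLS trend plus standard error for one grid cell's
# 1960-2009 annual means. The temperature series is synthetic (invented),
# purely to show that an error estimate is straightforward to compute.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1960, 2010)                                        # 50 annual values
temps = 0.015 * (years - 1960) + rng.normal(0.0, 0.25, years.size)   # fake data

# design matrix: intercept + centered year
X = np.column_stack([np.ones(years.size), years - years.mean()])
beta, ss_res, _, _ = np.linalg.lstsq(X, temps, rcond=None)

dof = years.size - 2                                 # degrees of freedom
sigma2 = ss_res[0] / dof                             # residual variance
cov = sigma2 * np.linalg.inv(X.T @ X)                # covariance of the estimates
slope, slope_se = beta[1], np.sqrt(cov[1, 1])

# a naive 95% interval; real data would also need an autocorrelation correction
print(f"trend = {slope*10:.3f} +/- {1.96*slope_se*10:.3f} deg C per decade")
```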
Sadly, this is all too typical of what passes for climate science these days … and the AGW supporters wonder why their message isn’t getting across?
w.
Not only does this sort of junk pass as science, it makes news worldwide.
Following is a sample of the media coverage of this study:
The Atlantic: New Evidence That Climate Change Threatens Marine Biodiversity
ABC Australia: Climate change affecting oceans faster: Study
Times of India: Marine life ‘needs to swim faster to survive climate change’
Sky News Australia: Aussie marine life climate change threat
The Australian: Marine life in climate change hot water
Softpedia: Species will have to move fast to adapt to climate change
Fish Update: Climate shifts could leave some species homeless, new research shows
Deccan Chronicle (India): Marine life ‘needs to swim faster to survive climate change’
FishNewsEU.com: Climate warming poses serious conservation challenge for marine life
Sometimes it feels like trying to swim against a strong current
I’ve reproduced the original graphic showing the appropriate degree of uncertainty.

Oh, by the way, the full paper can be accessed here.
W,
You are so in the past here. Don’t you understand that the customer is always right? Error bars just confuse the statistics, confound the democratic process and bring into question the motives of our great and good political masters!
Never forget that these savants have an inordinately large amount of Tax$ to spend. If they mess up the odd Trillion or two by innocently sending a few Mill into their accounts, then should we blame them?
At least they tried to save our planet and make a few bucks, unlike you. All that I’ve noted about what you’ve done to date is no more than a few brilliant and heart-warming essays, a realistic hypothesis or two and a determined effort to take Post-Modernist Science back into the enlightenment!
You’ve clearly no shame. Unlike them.
keith says:
“This reminds me of the NASA GISS 1200km ‘averaging’ – no attempt to account for significance. I remember using their online tool and reducing the distance to around 200kms – suddenly all these gaps appear all over the globe where previously there was massive warming and the global temp increase dropped by a third…”
I have a beef with some of what goes into GISS. Large areas are likely too often represented by a thermometer at an Arctic location with above-regional-average local surface albedo feedback, or by a land thermometer while the area represented includes a lot of ocean.
There is another decadal temperature trend map of the world, from January 1979 to sometime recently, and it avoids surface station siting issues, growth of urban effects on surface station thermometers, and large regions represented by thermometers in local warming hotspots:
1st global map image (lower troposphere), in:
http://www.remss.com/msu/msu_browse.html
That one omits the “pole holes” within 7.5 degrees of the poles, and other areas on the basis of little or no lower troposphere above the surface (elevation above 3 km or within 20 degrees of the South Pole).
Their argument against using statistics on computer models is valid. Models contain man-made fudge factors called parameters. Statistical measures like averages, means and confidence intervals of model outputs are arbitrary – they are what programmers choose them to be.
Another example of this that has amazed me is the concept of the ensemble mean. It’s as if a report of climate modeling were some kind of opinion poll. Ensemble statistics describe how well research groups agree on parameters, and as a consequence on the outputs of the computer runs that they have chosen to publish.
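To make that concrete, here is a tiny sketch (with invented numbers) of what an ensemble mean and spread actually measure; nothing here comes from the paper or from any real model archive.

```python
# Tiny sketch of an "ensemble mean": the spread describes how much the chosen
# model runs agree with one another, not how close any of them is to reality.
# The run values are invented for illustration.
import numpy as np

runs = np.array([2.1, 2.8, 3.4, 1.9, 2.6])   # hypothetical warming projections, deg C

ensemble_mean = runs.mean()
ensemble_spread = runs.std(ddof=1)           # inter-run standard deviation

print(f"ensemble mean   = {ensemble_mean:.2f} deg C")
print(f"ensemble spread = {ensemble_spread:.2f} deg C (agreement among runs only)")
```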
Baa Humbug says:
November 6, 2011 at 7:51 pm
Thanks, Baa. Sometimes people ask why I bother. It’s because I can’t just sit quiet while this kind of nonsense is being put forward and then goes round the world three times. Like they say … better to light one small forest fire than to curse the darkness …
Or something like that.
w.
And that’s why I love ya
AGW belief systems have errors which are relegated to the faith-based error-processing model employed by South Park, and then it’s GONE.
Willis – ask them for the error bars and expect the response:
“Why should I give you my error bars when you only want to find fault in it?”
I don’t understand any of this. The modeling methodology they are using should be about the same as I use for a coal deposit or a metallic ore body. The reason one develops such models is the statistical confidence that is part of the spatial (grid cell) calculation (ore bodies are static, so the methods work very well). Other important model results include the maximum distance between data points needed to achieve confidence, and any directionality trends. That is, depending on the element or factor being modeled, the confidence distance will often vary between them, and the cells for each may have different shapes: squares, rectangles or sometimes triangles and parallelograms.
If you don’t look at all these factors and work out the probability or confidence for each, you can be down the garden path faster than a loaded haul truck heading down a 10% grade. Sounds to me like the authors are on just such a ride. My mining-related advice: jump, boys, before I need to attend your funeral.
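For readers unfamiliar with the ore-body workflow the commenter describes, here is a bare-bones sketch of the “confidence distance” idea: an experimental semivariogram, whose flattening point suggests how far apart samples can be before one tells you little about another. The sample locations and values are invented; a real study would use measured drill-hole or station data.

```python
# Bare-bones experimental semivariogram, illustrating the "confidence distance"
# idea from geostatistics. All sample locations and values below are invented.
import numpy as np

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(60, 2))                       # hypothetical sample locations
vals = np.sin(xy[:, 0] / 15.0) + rng.normal(0, 0.2, 60)      # hypothetical grades

# pairwise distances and half squared differences (the semivariance cloud)
d = np.sqrt(((xy[:, None, :] - xy[None, :, :]) ** 2).sum(axis=-1))
g = 0.5 * (vals[:, None] - vals[None, :]) ** 2
iu = np.triu_indices(len(vals), k=1)
d, g = d[iu], g[iu]

# bin by lag distance; where the semivariance levels off is roughly the range,
# i.e. the distance beyond which samples stop informing one another
for lo, hi in zip(range(0, 50, 10), range(10, 60, 10)):
    m = (d >= lo) & (d < hi)
    if m.any():
        print(f"lag {lo:2d}-{hi:2d}: semivariance = {g[m].mean():.3f}  (pairs: {m.sum()})")
```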
Though this paper is more detailed and houses more pretty graphs than that infamous toilet paper about the 4 dead polar bears, it nonetheless rivals that paper in the best a$$ wiper stakes.
What the authors of this paper have done is taken the CRU TS3.1 data for land and the Hadley HadISST1.1 data for oceans covering the period 1960 to 2009.
From the above data they have determined that at various regions of the world, spring arrives X days earlier and autumn arrives X days later. They have also plotted the spatial coverage of these changes and concluded that unless species are able to shift their range quickly enough, they will be adversely affected.
But have any of the authors jumped on a boat or squirmed into a diving suit? NO
Have they named a single species, nay, a single specimen, that has shifted range due to these changes? NO
They open the paper with a bullshyte statement taken from a paper (O. E. Sala et al., “Global biodiversity scenarios for the year 2100”) and finish with an equally bullshyte statement.
No wonder Willis gets all hot under the collar
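As a rough illustration of the “spring arrives X days earlier” arithmetic described a few comments up: dividing a long-term warming trend by the rate at which temperature changes through the season gives a shift in days. The numbers below are invented, and this is only my reading of the general approach, not code or values from the paper.

```python
# Back-of-envelope "seasonal shift" arithmetic: invented numbers only.
warming_trend = 0.25        # deg C per decade at some hypothetical location
spring_warming_rate = 0.10  # deg C per day as spring advances there

shift_days_per_decade = warming_trend / spring_warming_rate
print(f"spring-like temperatures arrive ~{shift_days_per_decade:.1f} days earlier per decade")
```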
Dennis Nikols, P. Geo. says:
November 6, 2011 at 11:14 pm
Geologists often have that reaction when they read about the climate. The estimation of these kinds of uncertainties is routine in other fields of endeavor.
w.
MrX says:
WOW! They believe that the “garbage-in, garbage-out” rule doesn’t apply to them, which also implies that uncertain input will produce uncertain output. KUDOS! They’ve just broken a universal rule of computing (and hence of models). They will be renowned worldwide in computer science literature for this feat. NOPE!
Except as an example of how NOT to do things.
Wonder if they are aware of the actual shape of the Earth, or if they have treated the “squares” on a Mercator projection as being real 🙂 since they mention using a “1° x 1° grid”.
Quote of the week: “…buys you a climate indulgence …”
Nice work Mr Eschenbach.
As the current global data have about 1200 stations (and even at the peak it was less than 8000), the use of any gridding with more cells than that is just pointless. It is just going to hide the ignorance of the data behind an ever larger number of ‘fantasy cells’ filled with a homogenized data food product.
So what they are saying is that it’s awful hard to put an error bar on all those 1 degree grid boxes as they are (all but about 1200 of them) filled with a fantasy value anyway. What is the error bar on a fantasy? Why, even more meaningless than the non-data created to ‘fill’ it…
I don’t know how their cells are constructed, but a 360 circle of 360 degrees of longitude I think gives about 129600 cells. Compare with 1200 actual thermometer values in the present.
Yeah, that’s a lot of fantasy “values”… So again I ask: How do you calculate error bars on those fantasy values?…
If the objective was to generate a pretty picture with lots of orange bits, then they have succeeded in that.
A point in that direction is that they look to have used a Mercator projection or similar, which is a poor choice where comparative areas are significant. It massively exaggerates the areas at high northern and southern latitudes, including all those intense orange areas in Canada, Alaska, Siberia, etc. For example, Greenland is actually about 2/3 the land area of India, not 4-5 times larger as it appears on this map.
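The Mercator exaggeration the commenter mentions is easy to quantify: the projection inflates areas by roughly a factor of sec²(latitude). The latitudes below are rough illustrative picks, not values taken from the paper’s map.

```python
# Approximate Mercator area inflation, which grows like sec^2(latitude).
# Latitudes are rough illustrative choices only.
import math

def mercator_area_scale(lat_deg: float) -> float:
    """Factor by which Mercator inflates areas at the given latitude."""
    return 1.0 / math.cos(math.radians(lat_deg)) ** 2

for place, lat in [("equator", 0.0), ("India (~22 N)", 22.0),
                   ("Canada/Siberia (~60 N)", 60.0), ("Greenland (~72 N)", 72.0)]:
    print(f"{place:22s} areas inflated ~{mercator_area_scale(lat):4.1f}x")
```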
I take great delight in your tilting at windmills, Willis, and have a chuckle at the stupidity of some of the nonsense science. That said, I would like to go off topic: your deductions about the tropical thermostat and the almost constant heat input regardless of changing parameters were very good.
There remain four other thermostats that are a tad more perplexing: the two temperate zones, like piggy in the middle, hot on one side and cold on the other, and the two poles, both totally different in aspect but thermostats nonetheless.
Your odd analytical brain may be able to make some sense of their workings and tie them back to the tropical input. I am not even half clever enough to do this, but I have a sneaking suspicion that you can do it.
E.M.Smith says:
November 7, 2011 at 1:22 am
“”I don’t know how their cells are constructed, but a 360 circle of 360 degrees of longitude I think gives about 129600 cells. Compare with 1200 actual thermometer values in the present.””
Actually it is 360 degrees of longitude x 180 degrees of latitude, for half that many cells: 64,800. So if the data points were distributed with their nearest neighbors more than 1 degree apart, 1 out of every 54 grid cells would contain real data.
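The arithmetic in that correction is easy to verify, and it also shows that a “1° x 1° cell” is not even a fixed area. The ~1,200-station figure is the one used in this thread, not something checked independently here.

```python
# Checking the comment's arithmetic: a 1 x 1 degree global grid has
# 360 * 180 = 64,800 cells; with ~1,200 reporting stations (the thread's
# figure), at most about 1 cell in 54 can hold a real measurement.
import math

cells = 360 * 180
stations = 1200
print(f"cells = {cells}, at best one real value per {cells / stations:.0f} cells")

# 1-degree cells also shrink toward the poles:
R = 6371.0  # mean Earth radius, km
def cell_area_km2(lat_deg: float) -> float:
    """Area of a 1 x 1 degree cell whose lower edge sits at lat_deg."""
    lat1, lat2 = math.radians(lat_deg), math.radians(lat_deg + 1)
    return (math.pi / 180.0) * R**2 * (math.sin(lat2) - math.sin(lat1))

for lat in (0, 45, 80):
    print(f"1 x 1 degree cell starting at {lat:2d} N: {cell_area_km2(lat):8.0f} km^2")
```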
Statistics? We don’t need no steenkin’ statistics…
This is not science.
It is complete and utter propaganda fed to the media as “a peer-reviewed paper” to give it more weight in the eyes of the public.
All it actually does is give science another black eye. NO Science was done. Some idiot took a false computer model and used it to come up with “conclusions” designed to scare the (self-snip) out of the public.
A skeptic could do the same thing, showing a “rapid descent” into an ice age based on a model, temperatures falling in the last 10 to 15 years, and the end of the Holocene, starting with a CAGW paper.
Hi Willis,
thanks for this interesting post. I work on the same issue as you regarding how to deal with inevitable uncertainties. I met you in Chicago in 2010 and would like to send a private email to you. Please send me your contact address; the one I have from the Heartland literature 2010 is invalid.
best
Michael
[Done -w.]
I do kinda wonder if these folks have ever heard of or seen ‘2001’, and even if they have, did they come away with any sort of clue as to what the heck went on in there? Maybe they really are as utterly dumb and completely vacant as the love interest was in that recent YouTube video (where the sceptic took fire and burned down to a cinder).
Also, have they ever come across Schrödinger’s Cat (hopefully not, as we all hope it’s still alive and well inside its box), and would they care to repeat the experiment with their own children as ‘The Cat’ and their computer determining what happened inside the box? Would they allow the peer reviewers (the peeps who bought the computer, are paying their wages and heating their offices) to tweak said computer and check their results? Would they go there or allow that with their own kids at stake? How do we go about asking ’em?
They don’t estimate the error on SWAGs because it’s a constant. One hundred percent (100%).
I’ve seen this before, and I can tell you the real reason.
1: They either have no fricken clue what the actual error is due to incompetence, or
2: They calculated it to be something insane and discarded the result as impossible, not realizing that this meant their data was worthless.