Uncertain about Uncertainty

Guest Post by Willis Eschenbach

I was reading a study just published in Science mag, pay-walled of course. It’s called “The Pace of Shifting Climate in Marine and Terrestrial Ecosystems”, by Burrows et al. (hereinafter B2011). However, I believe that the Supplementary Online Information (SOI) may not be paywalled, and it is here. The paper itself has all kinds of impressive looking graphs and displays, like this one:

Figure 1. Temperature change 1960-2009, from B2011. Blue and red lines on the left show the warming by latitude for the ocean (blue) and the land (red).

I was interested in their error bars on this graph. They were using a 1° x 1° grid size, and given the scarcity of observations in many parts of the world, I wondered how they dealt with the uneven spacing of the ground stations, the lack of data, “infilling”, and other problems with the data itself. I finally found the details regarding how they dealt with uncertainty in their SOI. I was astounded by their error estimation procedure, which was unlike any I’d ever seen.
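For reference, the kind of error bar I was looking for is not hard to produce. Here is a minimal sketch (my own illustration with made-up numbers, not anything from B2011) of a least-squares warming trend for a single grid cell together with the standard error of that trend, which is exactly what an error bar would be built from:

```python
# A minimal sketch (not from B2011) of an error bar on a gridded trend:
# an ordinary-least-squares slope for one cell's annual mean temperatures,
# plus the standard error of that slope.
import numpy as np

def trend_with_error(years, temps):
    """Return (slope, standard error of slope) in degrees per year."""
    years = np.asarray(years, dtype=float)
    temps = np.asarray(temps, dtype=float)
    n = len(years)
    x = years - years.mean()
    slope = np.sum(x * (temps - temps.mean())) / np.sum(x**2)
    resid = temps - (temps.mean() + slope * x)
    # Residual variance with n-2 degrees of freedom (OLS assumes independent
    # errors; autocorrelation in real station data would widen this further).
    se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum(x**2))
    return slope, se

# Hypothetical cell: 50 years (1960-2009) of noisy annual means.
rng = np.random.default_rng(0)
years = np.arange(1960, 2010)
temps = 0.02 * (years - 1960) + rng.normal(0, 0.3, size=years.size)
slope, se = trend_with_error(years, temps)
print(f"trend = {slope:.4f} +/- {2*se:.4f} degC/yr (approx. 95% interval)")
```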

Here’s what the B2011 SOI says about uncertainty and error bars (emphasis mine):

We do not reflect uncertainty for our estimates or attempt statistical tests because …

Say what? No error bars? No statistical tests? Why not?

The SOI continues with their reason for omitting error bars. It is because:

… all of our input data include some degree of model-based interpolation. Here we seek only to describe broad regional patterns; more detailed modeling will be required to reflect inherent uncertainty in specific smaller-scale predictions.

So … using model-based interpolation somehow buys you a climate indulgence releasing you from needing to calculate error estimates for your work? If you use a model, you can just blow off all “statistical tests”? When did that change happen? And more to the point, why didn’t I get the memo?

Also, they’ve modeled the global temperature on a 1° x 1° grid, but they say they need “more detailed modeling” … which brings up two interesting questions. First question is, what will a 0.5° x 0.5° (“more detailed”) model tell us that a 1° x 1° model doesn’t tell us? I don’t get that at all.

The second question is more interesting, viz:

They say they can’t give error estimates now because they are using modeled results … and their proposed cure for this problem is “more detailed modeling”???

I’d rave about this, but it’s a peaceful morning, the sun is out after yesterday’s storm, and my blood is running cool, so let me just say that this is a shabby, childish example of modern climate “science” (and “peer-review”) at its worst. Why does using a model somehow mean you can’t make error estimates or conduct statistical tests?
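Nothing about model-based interpolation prevents it. Here is a minimal sketch (my own, with an interpolation error I simply assumed for illustration; B2011 gives no such number) of propagating that uncertainty through to the final statistic by brute-force Monte Carlo:

```python
# A minimal sketch (assumed error sizes, not anything from B2011) of how
# interpolation uncertainty can be propagated rather than ignored: perturb
# the interpolated grid within an assumed error, recompute the statistic,
# and report the spread of the results.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 10x10 grid of interpolated temperature trends (degC/decade)
# with an assumed 1-sigma interpolation error per cell.
trend_grid = rng.normal(0.15, 0.05, size=(10, 10))
interp_error = np.full((10, 10), 0.10)   # assumed, for illustration only

n_draws = 10_000
means = np.empty(n_draws)
for i in range(n_draws):
    perturbed = trend_grid + rng.normal(0.0, interp_error)
    means[i] = perturbed.mean()          # the statistic of interest

lo, hi = np.percentile(means, [2.5, 97.5])
print(f"area-mean trend: {trend_grid.mean():.3f} degC/decade "
      f"(95% interval {lo:.3f} to {hi:.3f})")
```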

Sadly, this is all too typical of what passes for climate science these days … and the AGW supporters wonder why their message isn’t getting across?

w.

78 Comments
Adam Gallon
November 6, 2011 2:50 pm

Peer reviewed was it?

Truthseeker
November 6, 2011 2:51 pm

Willis, don’t you know that since the models are correct, there are no errors, and since our “peer reviewed” paper agrees with the models, it must be correct? How do they know their models are correct? Because they are used by peer reviewed papers, of course! CAGW climate “science” use of circular arguments means that they will soon end up disappearing up their own … (fill in orifice of choice).

Patrik
November 6, 2011 2:55 pm

I guess they want to spare the media proponents of CAGW, and the Al Gore types of this world, the hassle of removing the error bars before using the graphs to spread alarm.

Boels069
November 6, 2011 2:56 pm

The question is: can one better the measurement uncertainty by using statistical methods?
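As a toy illustration of the trade-off (made-up numbers, nothing from the paper): averaging many independent readings does shrink the random part of the error by roughly 1/sqrt(N), but a shared systematic error never averages away.

```python
# Toy illustration: averaging N independent readings reduces random error,
# but a common systematic bias survives the averaging. Numbers are made up.
import numpy as np

rng = np.random.default_rng(2)
true_value, n, sigma_random, bias = 15.0, 100, 0.5, 0.3

readings_random = true_value + rng.normal(0, sigma_random, n)
readings_biased = readings_random + bias   # same systematic offset everywhere

print(f"random-only mean error: {abs(readings_random.mean() - true_value):.3f}")
print(f"with shared bias:       {abs(readings_biased.mean() - true_value):.3f}")
```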

terrybixler
November 6, 2011 2:56 pm

Why should a political agenda have error bars? What is needed is a change of political agenda where faulty science is not the goal.

Ryan Welch
November 6, 2011 2:57 pm

So because they use interpolation and computer models there is no need to report uncertainty or error bars? They might as well say “because we say it, it is infallible.” How can anyone call that science?

November 6, 2011 2:57 pm

Completely ruins the peer review process. I’m a relative simpleton with statistics and the like, but to arrogantly state that they don’t need to do any of that is just plain childish. If you are postponing analysis until later, how the hell can you publish such incomplete material? Pal review, plain and simple.

Nick Shaw
November 6, 2011 3:02 pm

Because they use really, really nice graphics?


Brad
November 6, 2011 3:17 pm

And the areas of greatest increase look like either urban heat islands or areas without many stations…

SOYLENT GREEN
November 6, 2011 3:19 pm

Seems they took Curry’s advice. 😉

MrX
November 6, 2011 3:26 pm

WOW! They believe that the “garbage-in, garbage-out” rule doesn’t apply to them, even though that rule also implies that uncertain input will produce uncertain output. KUDOS! They’ve just broken a universal rule of computing (and hence of models). They will be renowned worldwide in the computer science literature for this feat. NOPE!

Eric (skeptic)
November 6, 2011 3:41 pm

Because they are lazy. What they should be doing is assessing model uncertainty and subsequent uncertainty of their results. http://classes.soe.ucsc.edu/ams206/Winter05/draper.pdf What they are really saying is that they like pretty pictures.

Steve C
November 6, 2011 3:42 pm

In other model runs, it was demonstrated that, in climate science models, pure drivel readily displaces ordinary drivel.

kim
November 6, 2011 3:44 pm

And they wonder why it’s not warming.
=============

DirkH
November 6, 2011 3:46 pm

Since the model calculations introduce a small error at every time step (unless one believes they perform a 100% realistic computation), the error bars would, after a few time steps, comprise the entire possible state space, rendering the output meaningless. This is the reason why every model run produces vastly different results. GCMs are modern pinball machines.
Or maybe Pachinko machines, as they’re massively parallelized.
Picture of ten Japanese climate modelers at work:
http://aris.ss.uci.edu/rgarfias/japan99/scenes.html
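A toy sketch of that compounding (an illustrative iterated model with an assumed per-step error, nothing like a real GCM): inject a small random error at every step and an ensemble of runs started from the same state spreads steadily apart.

```python
# Toy sketch of per-step error accumulation: identical initial states,
# a small random perturbation each step, and the ensemble fans out.
import numpy as np

rng = np.random.default_rng(3)
n_runs, n_steps, step_error = 20, 500, 0.01

state = np.zeros(n_runs)          # all runs start from the same state
for _ in range(n_steps):
    state = 0.999 * state + rng.normal(0.0, step_error, n_runs)

print(f"ensemble spread after {n_steps} steps: "
      f"min={state.min():.3f}, max={state.max():.3f}, std={state.std():.3f}")
```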

KnR
November 6, 2011 3:50 pm

Another awful paper that, if handed in as an essay by an undergraduate, would have been failed.
The worst issue is that it passed peer review, and that those who should know better, and should have called them out for this kind of trick, have said nothing. Standard practice in climate science, perhaps, but still an awful way to do science.

Don Horne
November 6, 2011 3:51 pm

Not even Pal-Review. More like Rubber Stamped & Fast-Tracked, doncha know?

November 6, 2011 4:08 pm

This reminds me of the NASA GISS 1200 km ‘averaging’ – no attempt to account for significance. I remember using their online tool and reducing the distance to around 200 km – suddenly gaps appeared all over the globe where previously there was massive warming, and the global temperature increase dropped by a third…
Basically manufactured warming. If significance were actually used in the calculations, that mean figure should not change, to my mind; but of course, why let the correct use of measurements get in the way of a good graphic…
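A toy sketch of that effect (my own simplified distance-limited averaging with invented station positions, not the actual GISS code): with a 1200 km radius a sparse network “covers” nearly every grid point, but shrink the radius to 200 km and the gaps open up.

```python
# Toy sketch of distance-limited coverage: fraction of grid points that have
# at least one station within a given radius, for a sparse made-up network.
import numpy as np

rng = np.random.default_rng(4)
stations = rng.uniform(0, 5000, size=(30, 2))    # 30 stations on a 5000 km square
grid_x, grid_y = np.meshgrid(np.linspace(0, 5000, 50), np.linspace(0, 5000, 50))
grid = np.column_stack([grid_x.ravel(), grid_y.ravel()])

def coverage(radius_km):
    """Fraction of grid points with at least one station within radius_km."""
    d = np.linalg.norm(grid[:, None, :] - stations[None, :, :], axis=2)
    return (d.min(axis=1) <= radius_km).mean()

print(f"coverage at 1200 km: {coverage(1200):.0%}")
print(f"coverage at  200 km: {coverage(200):.0%}")
```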

GeologyJim
November 6, 2011 4:10 pm

Looks like output from the same algorithm that produced the Steig et al paper on Antarctic warming (2008?) – used as the cover illustration on Nature, I believe.
That was the study where measurement errors (falsely warm – hmmmm, who wouldda guessed?) were smeared across the whole continent. Gee, they were pretty casual about uncertainty too.
In post-normal science, uncertainty only “matters” if the results contradict the established “truth”

Kaboom
November 6, 2011 4:11 pm

Belief systems are absolute and contain no error.

Frank Stembridge
November 6, 2011 4:29 pm

So what makes anyone so sure they actually did not run any stats? Maybe they did, didn’t like the results, and “censored” them. Wouldn’t be the first time.

November 6, 2011 4:30 pm

“We do not reflect uncertainty for our estimates or attempt statistical tests because
… all of our input data include some degree of model-based interpolation. Here we seek only to describe broad regional patterns; more detailed modeling will be required to reflect inherent uncertainty in specific smaller-scale predictions”
“Say what? No error bars? No statistical tests? Why not?”
What’s the matter, Willis, don’t you understand plain English?
It’s no wonder you don’t get asked to peer review stuff; look what a mess you would have made of this peer reviewed paper.
Had they given it to you, it might never have been published. You really should be ashamed of yourself. /Sarc

Gary Hladik
November 6, 2011 4:35 pm

“We do not reflect uncertainty for our estimates or attempt statistical tests because all of our input data include some degree of model-based interpolation.”
They may not be claiming perfection here. Maybe they realize that estimating “uncertainty” in a fantasy experiment is just another exercise in fantasy, i.e. it’s pointless.
Or maybe pushing the “uncertainty” estimates into a second paper is just a good way to pad CVs.

Paul Nevins
November 6, 2011 4:35 pm

So what we have here is a WAG. Why publish a WAG? Or at least, shouldn’t it be in a sci-fi magazine?
