Guest Post by Willis Eschenbach
I was reading a study just published in Science mag, pay-walled of course. It’s called “The Pace of Shifting Climate in Marine and Terrestrial Ecosystems”, by Burrows et al. (hereinafter B2011). However, I believe that the Supplementary Online Information (SOI) may not be paywalled, and it is here. The paper itself has all kinds of impressive looking graphs and displays, like this one:
Figure 1. Temperature change 1960-2009, from B2011. Blue and red lines on the left show the warming by latitude for the ocean (blue) and the land (red).
I was interested in their error bars on this graph. They were using a 1° x 1° grid size, and given the scarcity of observations in many parts of the world, I wondered how they dealt with the uneven spacing of the ground stations, the lack of data, “infilling”, and other problems with the data itself. I finally found the details regarding how they dealt with uncertainty in their SOI. I was astounded by their error estimation procedure, which was unlike any I’d ever seen.
Here’s what the B2011 SOI says about uncertainty and error bars (emphasis mine):
We do not reflect uncertainty for our estimates or attempt statistical tests because …
Say what? No error bars? No statistical tests? Why not?
The SOI continues with their reason why no error bars. It is because:
… all of our input data include some degree of model-based interpolation. Here we seek only to describe broad regional patterns; more detailed modeling will be required to reflect inherent uncertainty in specific smaller-scale predictions.
So … using model based interpolation somehow buys you a climate indulgence releasing you from needing to calculate error estimates for your work? If you use a model you can just blow off all “statistical tests”? When did that change happen? And more to the point, why didn’t I get the memo?
Also, they’ve modeled the global temperature on a 1° x 1° grid, but they say they need “more detailed modeling” … which brings up two interesting questions. The first question is, what will a 0.5° x 0.5° (“more detailed”) model tell us that a 1° x 1° model doesn’t? I don’t get that at all.
The second question is more interesting, viz:
They say they can’t give error estimates now because they are using modeled results … and their proposed cure for this problem is “more detailed modeling”???
I’d rave about this, but it’s a peaceful morning, the sun is out after yesterday’s storm, and my blood is running cool, so let me just say that this is a shabby, childish example of modern climate “science” (and “peer-review”) at its worst. Why does using a model somehow mean you can’t make error estimates or conduct statistical tests?
Sadly, this is all too typical of what passes for climate science these days … and the AGW supporters wonder why their message isn’t getting across?
w.
It’s pretty bad when computer games are more rigorous than scientific papers, but it looks like they’ve got a good start on something new for the XBox 360.
Anyway Willis, a puzzle occurred to me about ten minutes ago, regarding the assumption that adding an absorbing IR layer must decrease emissions, which involves the narrow frequency bands of absorption spectra. To avoid talking in nanometers, I’ll just use two colors, green and yellow.
I have a light source that emits 100 green photons, and high above is an optical filter that always absorbs half of all green photons, so 50 photons always escape. Trying to cut down on the green glow to the sky, I add another filter in between the source and the top filter that absorbs 50% of green photons, but then re-emits 40 yellow photons (a little red-shifted from the original green), so the net effect is to block only 10% of the photons moving through it.
So the original setup was 100 green photons emitted, 50 absorbed, 50 passed to the sky.
In the new setup, with an extra absorbing filter, 100 green photons are emitted, 50 pass through the new filter unchanged, 10 are stopped, and 40 new yellow photons are added. Passing through the final filter, the 50 green photons are cut down to 25, but the 40 yellow go right on through, so 25 green and 40 yellow hit the sky. That’s a total of 65 photons emitted to the sky instead of 50, because I added an extra absorbing layer.
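The photon bookkeeping above can be sketched in a few lines of code (a toy model only, using the made-up filter numbers from the thought experiment, not any real spectroscopy):

```python
# Toy bookkeeping for the two-filter thought experiment.
source_green = 100

# Original setup: one top filter absorbs half of all green photons.
escaped_original = source_green // 2  # 50 escape to the sky

# New setup: an intermediate filter absorbs 50 green photons
# and re-emits 40 yellow ones at a longer wavelength.
green_after_mid = source_green - 50       # 50 green survive the new filter
yellow_emitted = 40                       # re-emitted yellow photons

green_after_top = green_after_mid // 2    # top filter halves the green: 25
yellow_after_top = yellow_emitted         # green-only filter passes all yellow

escaped_new = green_after_top + yellow_after_top
print(escaped_original, escaped_new)  # 50 65
```

The point of the sketch is just that the total leaving the top can go up (50 to 65) even though each filter only ever removes photons at its own wavelength.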
It’s not CO2 and H2O in MODTRAN, but it does illustrate that simplistic assumptions about adding an absorbing gas might not stand up experimentally.
Who needs data when god has handed you a model?
Note the small g.
Willis,
You write:
“So … using model based interpolation somehow buys you a climate indulgence releasing you from needing to calculate error estimates for your work? If you use a model you can just blow off all “statistical tests”? When did that change happen? And more to the point, why didn’t I get the memo?”
You probably got the memo, but it was destroyed on access by your “FOI antivirus”, the very best!
Very good post, thanks!
By the way, please check http://www.oarval.org/ThermostatBW.htm (B/W version of your Thermostat Hypothesis in ARVAL)
Also http://www.oarval.org/ClimateChangeBW.htm B/W version of my Climate Change page.
These Black on White versions were suggested by JohnWho, a WUWT commenter (Thanks!).
This paper links in nicely to Prof Trenberth’s opinion piece that climatology should be exempt from testing against a null hypothesis. Both aim at lowering the demarcation between science & non-science.
Peer Review should be a means of quality control. It should be a check to say that the conclusions are supported, and arrived at by appropriate means. Instead the control seems to increasingly be on agreement with the consensus.
http://wattsupwiththat.com/2011/11/03/trenberth-null-and-void/
Nice map. Very “robust.” Looks like a definite warming trend in Faith-based Model World.
Andres Valencia says:
November 6, 2011 at 4:49 pm
These Black on White versions were suggested by JohnWho, a WUWT commenter (Thanks!).
You’re welcome, of course, but are you sure it was me?
Wouldn’t want to slight anyone.
Oops, now I remember.
Yes, I think the pages look much better in BW than with the red text on black background.
Lots of good information there Andres – thank you for bringing it all together.
Maybe climatomodelists have seen too many sunsets and only see reds and oranges in everything they produce.
Would any faculty accept this as even a Bachelor’s thesis? I mean, where’s the beef?
DirkH says: “Picture of ten Japanese climate modelers at work:
http://aris.ss.uci.edu/rgarfias/japan99/scenes.html ”
Did you notice that guy in the back, hiding the decline?
I didn’t read the statement quite the way you did, Willis:
We do not reflect uncertainty for our estimates or attempt statistical tests because all of our input data include some degree of model-based interpolation.
means: “Our model-based interpolation is so friggin’ complicated that we weren’t able to figure out how to do any significance tests.”
Here we seek only to describe broad regional patterns
means “Here we seek only to give you the impression most of the world is red-hot. Let it soak into your mind. Don’t think too hard; it’s just a crude bit of brain-washing. Drink up the Kool-Aid!”
more detailed modeling will be required to reflect inherent uncertainty in specific smaller-scale predictions.
means “The inherent uncertainty will be revealed when that specific small scale region known as “Hell” freezes over.”
With these people, it’s just ‘any tool at hand’ to make their case…no matter how inappropriate. It wasn’t a ‘mistake’ or ‘lazy’…it’s just what it took to make the numbers support their conclusions.
I was wading through the paper when I came across this little gem:
“We therefore excluded latitudes within 20° of the equator from our
calculations of global values for rates of seasonal shift and obscured them in the Figures.”
Let’s see…latitudes “within” 20 degrees of the equator. That would be everything from 20N to 20S. First of all, that’s what, roughly 1/3 of the surface of the globe? Second, if one breaks down HadCrut or GISS etc. by latitude, one soon discovers that the tropics…uhm, that would be from 20N to 20S…are extremely stable temperature-wise and have hardly changed at all in comparison to the temperate zones.
So…they’re trying to calculate rates of change of the climate “on average” on earth, and they begin by excluding about 1/3 of the data that also happens to have the lowest rate of change?
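The “about 1/3” figure is easy to check: for a sphere, the fraction of surface area between latitudes ±φ is sin(φ), so the band from 20S to 20N covers sin(20°) of the globe:

```python
import math

# Fraction of a sphere's surface between latitudes -phi and +phi is sin(phi).
phi = math.radians(20)
fraction = math.sin(phi)
print(f"{fraction:.3f}")  # 0.342, i.e. about a third of the Earth's surface
```

So excluding the tropics drops roughly 34% of the planet from the “global” average.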
Perhaps the lack of error bars is just a ruse to distract us from the really Really REALLY deep flaws in the paper? I quit reading at that point; maybe there’s some sort of logical explanation, but I just can’t be bothered to spend any more time trying to ferret one out.
Estimates without error? Total rubbish
Jr Researcher: Sorry sir, I don’t understand. What are we doing again?
Sr Researcher: I told you. We’re analyzing this data to show that the velocity of climate change is increasing.
Jr: Uhm… but we haven’t analyzed it yet, so how can we-
Sr: Shut up. You’re a grad student. Do you want to continue to be a grad student?
Jr: Well, uhm, yes…
Sr: OK then. Shut up and listen. We’re going to show that the velocity of climate change is increasing using this data. You with me so far?
Jr: Yes….
Sr: Good. Now, we’ve got all this data, and we’re going to multiply it all by the factors calculated in this modeling program.
Jr: Uhm… but… sir… that program is just a random number generator.
Sr: Yes it is. It is a computer model of random numbers. We generate those and apply them to the data. When we’re done, we graph the output. Here, you do the first one.
Jr: Uhm…ok. Here’s the graph. Looks like gibberish.
Sr: That’s because it is gibberish. We may have to run the analysis thousands of times to get the graph we want.
Jr: Won’t that take us a long time sir?
Sr: No. It will take YOU a long time. Let me know when you are done. Here’s a copy of the graph we’re looking for. Let me know when you find it.
Jr: Yes sir. Anything else sir?
Sr: Yes. Don’t forget to delete all the data and all the graphs that didn’t work out.
It’s all about “Faith”, Willis!
… all of our input data include some degree of model-based interpolation. Here we seek only to describe broad regional patterns; more detailed modeling will be required to reflect inherent uncertainty in specific smaller-scale predictions.
Silly me, I was trying to read this in English, when of course it was written in b****cks!
When in doubt about your data, always use the words “modelling” “model-based interpolation” to reflect inherent uncertainty in your vacuous grasp of basic science.
So, the map sez the world is pretty much warming at 5K per century at the moment? Sounds like a Climate Science model, all right.
Nice article. It is one of my constant gripes.
Exciting alarmist results – FUN and GLAMOROUS!
Error bars – dull unnecessary wet-blanket killjoys…
Kaboom says:
Belief systems are absolute and contain no error.
Exactly!
The other significant point must surely be that if their “data” includes model results, it isn’t data.
The AGW theory needs a few peer reviewed papers locating the “missing heat.” This paper seems to meet that need- and in time for the next IPCC summary.
By chance was the research supported by a grant from anyone?
From the abstract: “These indices give a complex mosaic of predicted range shifts and phenology changes that deviate from simple poleward migration and earlier springs or later falls.”
That’s already an improvement over what has gone before.
A convincing method of estimating the error bars, other than a Monte Carlo or bootstrapping method, might be hard to devise for this problem. Their note basically announces that’s a job for someone else. It is, as you write, hard to believe that Science would approve publication.
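For readers unfamiliar with the idea: in its simplest form, a bootstrap error estimate just resamples the data with replacement many times and looks at the spread of the recomputed statistic. A toy sketch (the trend values are invented for illustration; the real problem is vastly harder because the grid cells are neither independent nor raw observations):

```python
import random
import statistics

def bootstrap_se(data, stat=statistics.mean, n_resamples=2000, seed=42):
    """Estimate the standard error of a statistic by resampling with replacement."""
    rng = random.Random(seed)
    estimates = [
        stat([rng.choice(data) for _ in data])
        for _ in range(n_resamples)
    ]
    return statistics.stdev(estimates)

# Hypothetical per-grid-cell warming trends (degrees C per decade)
trends = [0.1, 0.3, 0.2, 0.5, 0.4, 0.25, 0.35]
print(bootstrap_se(trends))  # standard error of the mean trend
```

Even this crude version would have let the authors attach some uncertainty to their headline numbers; the point is that “we used models” is not a reason the calculation cannot be attempted.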
They are not just saying that they couldn’t calculate the uncertainty range for their paper. They are saying it can’t be done. And they are correct.
On top of the uncertainty generated in the model, which one commenter noted above will rapidly approach the complete set of possible model states, they would need to consider the uncertainty in the underlying gridded temperature set that was used to tune the model.
You get uncertainty from the way the temperature measurements are averaged both in space and time.
Then you get uncertainty created by data adjustments and interpolation.
Then you have to consider measurement uncertainty in the underlying station data that went into the gridded temperature product.
The uncertainty in this paper approaches infinity, there isn’t enough computing power in North America to calculate it in less than a year.
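Even setting aside the hyperbole, if those layers of uncertainty were independent (a generous assumption), they would at minimum combine in quadrature. A sketch with entirely made-up component values, just to show the mechanics:

```python
import math

# Hypothetical, invented uncertainty components (degrees C) for a gridded trend
components = {
    "spatial/temporal averaging": 0.10,
    "adjustments & interpolation": 0.15,
    "station measurement error": 0.05,
}

# Independent errors add in quadrature: total = sqrt(sum of squares)
combined = math.sqrt(sum(v ** 2 for v in components.values()))
print(round(combined, 3))  # 0.187
```

Correlated errors (which these surely are) make the combined uncertainty larger still, which is the commenter’s point.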
Paul Nevins says at November 6, 2011 at 4:35 pm
It was.