Uncertain about Uncertainty

Guest Post by Willis Eschenbach

I was reading a study just published in Science mag, pay-walled of course. It’s called “The Pace of Shifting Climate in Marine and Terrestrial Ecosystems”, by Burrows et al. (hereinafter B2011). However, I believe that the Supplementary Online Information (SOI) may not be paywalled, and it is here. The paper itself has all kinds of impressive looking graphs and displays, like this one:

Figure 1. Temperature change 1960-2009, from B2011. Blue and red lines on the left show the warming by latitude for the ocean (blue) and the land (red).

I was interested in their error bars on this graph. They were using a 1° x 1° grid size, and given the scarcity of observations in many parts of the world, I wondered how they dealt with the uneven spacing of the ground stations, the lack of data, “infilling”, and other problems with the data itself. I finally found the details regarding how they dealt with uncertainty in their SOI. I was astounded by their error estimation procedure, which was unlike any I’d ever seen.

Here’s what the B2011 SOI says about uncertainty and error bars (emphasis mine):

We do not reflect uncertainty for our estimates or attempt statistical tests because …

Say what? No error bars? No statistical tests? Why not?

The SOI continues with their reason why no error bars. It is because:

… all of our input data include some degree of model-based interpolation. Here we seek only to describe broad regional patterns; more detailed modeling will be required to reflect inherent uncertainty in specific smaller-scale predictions.

So … using model based interpolation somehow buys you a climate indulgence releasing you from needing to calculate error estimates for your work? If you use a model you can just blow off all “statistical tests”? When did that change happen? And more to the point, why didn’t I get the memo?

Also, they’ve modeled the global temperature on a 1° x 1° grid, but they say they need “more detailed modeling” … which brings up two interesting questions. First question is, what will a 0.5° x 0.5° (“more detailed”) model tell us that a 1° x 1° model doesn’t? I don’t get that at all.

The second question is more interesting, viz:

They say they can’t give error estimates now because they are using modeled results … and their proposed cure for this problem is “more detailed modeling”???

I’d rave about this, but it’s a peaceful morning, the sun is out after yesterday’s storm, and my blood is running cool, so let me just say that this is a shabby, childish example of modern climate “science” (and “peer-review”) at its worst. Why does using a model somehow mean you can’t make error estimates or conduct statistical tests?

Sadly, this is all too typical of what passes for climate science these days … and the AGW supporters wonder why their message isn’t getting across?

w.

78 Comments
Adam Gallon
November 6, 2011 2:50 pm

Peer reviewed was it?

Truthseeker
November 6, 2011 2:51 pm

Willis, don’t you know that since the models are correct, there are no errors, and since our “peer reviewed” paper agrees with the models, it must be correct? How do they know their models are correct? Because they are used by peer reviewed papers, of course! CAGW climate “science” use of circular arguments means that they will soon end up disappearing up their own … (fill in orifice of choice).

Patrik
November 6, 2011 2:55 pm

I guess they want to spare the media proponents of CAGW and the Al Gore types of this world the hassle of removing the error bars before using the graphs to spread alarm.

Boels069
November 6, 2011 2:56 pm

The question is: can one better the measurement uncertainty by using statistical methods?
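A standard answer, for what it’s worth: averaging N independent measurements shrinks the purely random part of the error like 1/√N, but does nothing at all for systematic bias. A toy sketch with made-up numbers:

```python
import random
import statistics

random.seed(42)
TRUE_VALUE = 20.0
BIAS = 0.5    # systematic error: no amount of averaging removes this
NOISE = 2.0   # random error: averaging shrinks this like 1/sqrt(N)

def measure():
    """One simulated measurement: truth plus bias plus random noise."""
    return TRUE_VALUE + BIAS + random.gauss(0.0, NOISE)

singles = [measure() for _ in range(1000)]
means_of_100 = [statistics.mean(measure() for _ in range(100))
                for _ in range(1000)]

print(statistics.stdev(singles))        # ~2.0
print(statistics.stdev(means_of_100))   # ~0.2, about 10x smaller
print(statistics.mean(means_of_100))    # ~20.5: the bias survives intact
```

So statistics can “better” the random scatter, but never the bias, and never the uncertainty in values that were interpolated rather than measured.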

terrybixler
November 6, 2011 2:56 pm

Why should a political agenda have error bars? What is needed is a change of political agenda where faulty science is not the goal.

Ryan Welch
November 6, 2011 2:57 pm

So because they use interpolation and computer models there is no need to report uncertainty or error bars? They might as well say that because we say it it is infallible. How can anyone call that science?

Mike Bromley the Kurd
November 6, 2011 2:57 pm

Completely ruins the peer review process. I’m a relative simpleton with statistics and the like, but to arrogantly state that they don’t need to do any of that is just plain childish. If you are postponing analysis until later, how the hell can you publish such incomplete material? Pal review, plain and simple.

Nick Shaw
November 6, 2011 3:02 pm

Because they use really, really nice graphics?


Brad
November 6, 2011 3:17 pm

And the areas of greatest increase look like either urban heat islands or areas without many stations…

SOYLENT GREEN
November 6, 2011 3:19 pm

Seems they took Curry’s advice. 😉

MrX
November 6, 2011 3:26 pm

WOW! They believe that the “garbage-in, garbage-out” rule – which also implies that uncertain input will produce uncertain output – doesn’t apply to them. KUDOS! They’ve just broken a universal rule of computing (and hence models). They will be renowned worldwide in computer science literature for this feat. NOPE!

Eric (skeptic)
November 6, 2011 3:41 pm

Because they are lazy. What they should be doing is assessing model uncertainty and subsequent uncertainty of their results. http://classes.soe.ucsc.edu/ams206/Winter05/draper.pdf What they are really saying is that they like pretty pictures.

Steve C
November 6, 2011 3:42 pm

In other model runs, it was demonstrated that, in climate science models, pure drivel readily displaces ordinary drivel.

kim
November 6, 2011 3:44 pm

And they wonder why it’s not warming.
=============

DirkH
November 6, 2011 3:46 pm

Since the model calculations introduce a small error on every time step (unless one believes they perform a 100% realistic computation), the error bars would after a few time steps comprise the entire possible state space, rendering the output meaningless. This is the reason why every model run produces vastly different results. GCMs are modern pinball machines.
Or maybe Pachinko machines, as they’re massively parallelized.
Picture of ten Japanese climate modelers at work:
http://aris.ss.uci.edu/rgarfias/japan99/scenes.html
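The compounding-error point above can be sketched with a toy random walk; this is purely illustrative (made-up step size, nothing from an actual GCM), but it shows how independent runs of a model that accumulates a small per-step error drift apart:

```python
import random

def run_model(steps, step_error, seed):
    """Toy 'model run': each time step adds a small random error."""
    rng = random.Random(seed)
    state = 0.0
    for _ in range(steps):
        state += rng.gauss(0.0, step_error)
    return state

# Fifty independent runs of the same toy model diverge from one another;
# the spread grows roughly like step_error * sqrt(steps).
finals = [run_model(steps=1000, step_error=0.1, seed=s) for s in range(50)]
spread = max(finals) - min(finals)
print(spread)
```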

KnR
November 6, 2011 3:50 pm

Another awful paper that, if handed in as an essay by an undergraduate, would have been failed.
The worst issue is that it passed peer review, and that those who should know better and called out this type of trick have said nothing. Standard practice in climate science perhaps, but still an awful way to do science.

Don Horne
November 6, 2011 3:51 pm

Not even Pal-Review. More like Rubber Stamped & Fast-Tracked, doncha know?

November 6, 2011 4:08 pm

This reminds me of the NASA GISS 1200km ‘averaging’ – no attempt to account for significance. I remember using their online tool and reducing the distance to around 200km – suddenly all these gaps appeared all over the globe where previously there was massive warming, and the global temp increase dropped by a third…
Basically manufactured warming – if significance were actually used in the calcs, that mean figure should not change, in my mind; but of course, why let the correct usage of measurements get in the way of a good graphic…

GeologyJim
November 6, 2011 4:10 pm

Looks like output from the same algorithm that produced the Steig et al paper on Antarctic warming (2008?) – used as the cover illustration on Nature, I believe.
That was the study where measurement errors (falsely warm – hmmmm, who wouldda guessed?) were smeared across the whole continent. Gee, they were pretty casual about uncertainty too.
In post-normal science, uncertainty only “matters” if the results contradict the established “truth”

Kaboom
November 6, 2011 4:11 pm

Belief systems are absolute and contain no error.

Frank Stembridge
November 6, 2011 4:29 pm

So what makes anyone so sure they actually did not run any stats? Maybe they did, didn’t like the results, and “censored” them. Wouldn’t be the first time.

November 6, 2011 4:30 pm

“We do not reflect uncertainty for our estimates or attempt statistical tests because
… all of our input data include some degree of model-based interpolation. Here we seek only to describe broad regional patterns; more detailed modeling will be required to reflect inherent uncertainty in specific smaller-scale predictions”
“Say what? No error bars? No statistical tests? Why not?”
What’s the matter Willis, don’t you understand plain English?
It’s no wonder you don’t get asked to peer review stuff; look what a mess you would have made of this peer reviewed paper.
Had they given it to you, it might never have been published. You really should be ashamed of yourself. /Sarc

Gary Hladik
November 6, 2011 4:35 pm

“We do not reflect uncertainty for our estimates or attempt statistical tests because all of our input data include some degree of model-based interpolation.”
They may not be claiming perfection here. Maybe they realize that estimating “uncertainty” in a fantasy experiment is just another exercise in fantasy, i.e. it’s pointless.
Or maybe pushing the “uncertainty” estimates into a second paper is just a good way to pad CVs.

Paul Nevins
November 6, 2011 4:35 pm

So what we have here is a WAG. Why publish a WAG? Or at least shouldn’t it be in a Sci fi magazine?

November 6, 2011 4:45 pm

It’s pretty bad when computer games are more rigorous than scientific papers, but it looks like they’ve got a good start on something new for the XBox 360.
Anyway Willis, a puzzle occurred to me about ten minutes ago, regarding the assumption that adding an absorbing IR layer must decrease emissions, which involves the narrow frequency bands of absorption spectra. To avoid talking in nanometers, I’ll just use two colors, green and yellow.
I have a light source that emits 100 green photons, and high above is an optical filter that always absorbs half of all green photons, so 50 photons always escape. Trying to cut down on the green glow to the sky, I add another filter in between the source and the top filter that absorbs 50% of green photons, but then emits 40 yellow photons (a little red shifted from the original green), but still blocking 10% of the photons moving through it.
So the original setup was 100 green photons emitted, 50 absorbed, 50 passed to the sky.
In the new setup, with an extra absorbing filter, 100 green photons are emitted, 50 pass through the new filter unchanged, 10 are stopped, and 40 new yellow photons are added. Passing through the final filter, the 50 green photons are cut down to 25, but the 40 yellow go right on through, so 25 green and 40 yellow hit the sky. That’s a total of 65 photons emitted to the sky instead of 50, because I added an extra absorbing layer.
It’s not CO2 and H2O in MODTRAN, but it does illustrate that simplistic assumptions about adding an absorbing gas might not stand up experimentally.
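As a sanity check on the arithmetic, the photon bookkeeping above can be tallied in a few lines (the counts are the hypothetical numbers from the thought experiment, not physics):

```python
# Photon bookkeeping for the two-filter thought experiment above.
# All numbers are the commenter's hypothetical values.
green_emitted = 100

# Original setup: a single top filter absorbs half of all green photons.
escaped_original = green_emitted // 2           # 50 green reach the sky

# New setup: a middle filter absorbs 50 green photons, re-emits 40 of
# them as yellow, and permanently blocks the remaining 10.
green_after_middle = green_emitted - 50         # 50 green continue upward
yellow_after_middle = 40                        # re-emitted, red-shifted

# The top filter still absorbs half the green photons but passes yellow.
green_escaped = green_after_middle // 2         # 25
yellow_escaped = yellow_after_middle            # 40

escaped_new = green_escaped + yellow_escaped
print(escaped_original, escaped_new)            # 50 vs 65
```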

RS
November 6, 2011 4:46 pm

Who needs data when god has handed you a model?
Note the small g.

November 6, 2011 4:49 pm

Willis,
You write:
“So … using model based interpolation somehow buys you a climate indulgence releasing you from needing to calculate error estimates for your work? If you use a model you can just blow off all “statistical tests”? When did that change happen? And more to the point, why didn’t I get the memo?”
You probably got the memo, but it was destroyed on access by your “FOI antivirus”, the very best!
Very good post, thanks!
By the way, please check http://www.oarval.org/ThermostatBW.htm (B/W version of your Thermostat Hypothesis in ARVAL)
Also http://www.oarval.org/ClimateChangeBW.htm B/W version of my Climate Change page.
These Black on White versions were suggested by JohnWho, a WUWT commenter (Thanks!).

manicbeancounter
November 6, 2011 4:57 pm

This paper links in nicely to Prof Trenberth’s opinion piece that climatology should be exempt from testing against a null hypothesis. Both aim at lowering the demarcation between science & non-science.
Peer Review should be a means of quality control. It should be a check to say that the conclusions are supported, and arrived at by appropriate means. Instead the control seems to increasingly be on agreement with the consensus.
http://wattsupwiththat.com/2011/11/03/trenberth-null-and-void/

Al Gored
November 6, 2011 5:08 pm

Nice map. Very “robust.” Looks like a definite warming trend in Faith-based Model World.

November 6, 2011 5:10 pm

Andres Valencia says:
November 6, 2011 at 4:49 pm
These Black on White versions were suggested by JohnWho, a WUWT commenter (Thanks!).

You’re welcome, of course, but are you sure it was me?
Wouldn’t want to slight anyone.

November 6, 2011 5:19 pm

Oops, now I remember.
Yes, I think the pages look much better in BW than with the red text on black background.
Lots of good information there Andres – thank you for bringing it all together.

November 6, 2011 5:20 pm

Maybe climatomodelists have seen too many sunsets and only see reds and oranges in everything they produce.

November 6, 2011 5:21 pm

Would any faculty accept this as even a Bachelor’s thesis? I mean, where’s the beef!

jorgekafkazar
November 6, 2011 5:21 pm

DirkH says: “Picture of ten Japanese climate modelers at work:
http://aris.ss.uci.edu/rgarfias/japan99/scenes.html
Did you notice that guy in the back, hiding the decline?

jorgekafkazar
November 6, 2011 5:33 pm

I didn’t read the statement quite the way you did, Willis:
We do not reflect uncertainty for our estimates or attempt statistical tests because all of our input data include some degree of model-based interpolation.
means: “Our model-based interpolation is so friggin’ complicated that we weren’t able to figure out how to do any significance tests.”
Here we seek only to describe broad regional patterns
means “Here we seek only to give you the impression most of the world is red-hot. Let it soak into your mind. Don’t think too hard; it’s just a crude bit of brain-washing. Drink up the Kool-Aid!”
more detailed modeling will be required to reflect inherent uncertainty in specific smaller-scale predictions.
means “The inherent uncertainty will be revealed when that specific small scale region known as “Hell” freezes over.”

rc
November 6, 2011 5:41 pm

With these people, it’s just ‘any tool at hand’ to make their case…no matter how inappropriate. It wasn’t a ‘mistake’ or ‘lazy’…it’s just what it took to make the numbers support their conclusions.

November 6, 2011 5:46 pm

I was wading through the paper when I came across this little gem:
“We therefore excluded latitudes within 20° of the equator from our
calculations of global values for rates of seasonal shift and obscured them in the Figures.”
Let’s see… latitudes “within” 20 degrees of the equator. That would be everything from 20N to 20S. First of all, that’s roughly what? 1/3 of the surface of the globe? Second, if one breaks down HadCrut or GISS etc. by latitude, one soon discovers that the tropics… uhm, that would be from 20N to 20S… are extremely stable temperature-wise and have changed almost not at all in comparison to the temperate zones.
So… they’re trying to calculate rates of change of the climate “on average” on earth, and they begin by excluding about 1/3 of the data that also happens to have the lowest rate of change?
Perhaps the lack of error bars is just a ruse to distract us from the really Really REALLY deep flaws in the paper? I quit reading at that point; maybe there’s some sort of logical explanation, but I just can’t be bothered to spend any more time trying to ferret one out.
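For what it’s worth, the “roughly 1/3” estimate checks out: on a sphere, the fraction of surface area between latitudes ±φ is sin φ, so the ±20° band is about 34% of the globe:

```python
import math

# Fraction of a sphere's surface lying between latitudes 20S and 20N.
# The area between the equator and latitude phi is proportional to sin(phi).
phi = math.radians(20)
frac = math.sin(phi)
print(round(frac, 3))  # 0.342 -- about a third of Earth's surface
```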

David L
November 6, 2011 5:54 pm

Estimates without error? Total rubbish

November 6, 2011 5:59 pm

Jr Researcher: Sorry sir, I don’t understand. What are we doing again?
Sr Researcher: I told you. We’re analyzing this data to show that the velocity of climate change is increasing.
Jr: Uhm… but we haven’t analyzed it yet, so how can we-
Sr: Shut up. You’re a grad student. Do you want to continue to be a grad student?
Jr: Well, uhm, yes…
Sr: OK then. Shut up and listen. We’re going to show that the velocity of climate change is increasing using this data. You with me so far?
Jr: Yes….
Sr: Good. Now, we’ve got all this data, and we’re going to multiply it all by the factors calculated in this modeling program.
Jr: Uhm… but… sir… that program is just a random number generator.
Sr: Yes it is. It is a computer model of random numbers. We generate those and apply them to the data. When we’re done, we graph the output. Here, you do the first one.
Jr: Uhm…ok. Here’s the graph. Looks like gibberish.
Sr: That’s because it is gibberish. We may have to run the analysis thousands of times to get the graph we want.
Jr: Won’t that take us a long time sir?
Sr: No. It will take YOU a long time. Let me know when you are done. Here’s a copy of the graph we’re looking for. Let me know when you find it.
Jr: Yes sir. Anything else sir?
Sr: Yes. Don’t forget to delete all the data and all the graphs that didn’t work out.

Pete H
November 6, 2011 6:00 pm

Its all about “Faith” Willis!

Andrew Harding
Editor
November 6, 2011 6:02 pm

… all of our input data include some degree of model-based interpolation. Here we seek only to describe broad regional patterns; more detailed modeling will be required to reflect inherent uncertainty in specific smaller-scale predictions.
Silly me, I was trying to read this in English, when of course it was written in b****cks!
When in doubt about your data, always use the words “modelling” “model-based interpolation” to reflect inherent uncertainty in your vacuous grasp of basic science.

Brian H
November 6, 2011 6:15 pm

So, the map sez the world is pretty much warming at 5K per century at the moment? Sounds like a Climate Science model, all right.

Lew Skannen
November 6, 2011 6:15 pm

Nice article. It is one of my constant gripes.
Exciting alarmist results – FUN and GLAMOROUS!
Error bars – dull unnecessary wet-blanket killjoys…

Ryan Welch
November 6, 2011 6:22 pm

Kaboom says:
Belief systems are absolute and contain no error.
Exactly!

November 6, 2011 6:33 pm

The other significant point must surely be that if their “data” includes model results, it isn’t data.

Mark M
November 6, 2011 6:37 pm

The AGW theory needs a few peer reviewed papers locating the “missing heat.” This paper seems to meet that need- and in time for the next IPCC summary.
By chance was the research supported by a grant from anyone?

Septic Matthew
November 6, 2011 6:50 pm

From the abstract: “These indices give a complex mosaic of predicted range shifts and phenology changes that deviate from simple poleward migration and earlier springs or later falls.”

That’s already an improvement over what has gone before.
A convincing method of estimating the error bars, other than a Monte Carlo or bootstrapping method, might be hard to devise for this problem. Their note basically announces that’s a job for someone else. It is, as you write, hard to believe that Science would approve publication.

Matt
November 6, 2011 6:57 pm

They are not just saying that they couldn’t calculate the uncertainty range for their paper. They are saying it can’t be done. And they are correct.
On top of the uncertainty generated in the model, which one commenter noted above will rapidly approach the complete set of possible model states, they would need to consider the uncertainty in the underlying gridded temperature set that was used to tune the model.
You get uncertainty from the way the temperature measurements are averaged both in space and time.
Then you get uncertainty created by data adjustments and interpolation.
Then you have to consider measurement uncertainty in the underlying station data that went into the gridded temperature product.
The uncertainty in this paper approaches infinity; there isn’t enough computing power in North America to calculate it in less than a year.

dp
November 6, 2011 7:42 pm

Paul Nevins says at November 6, 2011 at 4:35 pm

So what we have here is a WAG. Why publish a WAG? Or at least shouldn’t it be in a Sci fi magazine?

It was.

Baa Humbug
November 6, 2011 7:51 pm

Not only does this sort of junk pass as science, it makes news worldwide.
Following is a sample of the media coverage of this study:
The Atlantic: New Evidence That Climate Change Threatens Marine Biodiversity
ABC Australia: Climate change affecting oceans faster: Study
Times of India: Marine life ‘needs to swim faster to survive climate change’
Sky News Australia: Aussie marine life climate change threat
The Australian: Marine life in climate change hot water
Softpedia: Species will have to move fast to adapt to climate change
Fish Update: Climate shifts could leave some species homeless, new research shows
Deccan Chronicle (India): Marine life ‘needs to swim faster to survive climate change’
FishNewsEU.com: Climate warming poses serious conservation challenge for marine life
Sometimes it feels like trying to swim against a strong current

November 6, 2011 7:52 pm

I’ve reproduced the original graphic showing the appropriate degree of uncertainty.
uncertainty

Baa Humbug
November 6, 2011 7:58 pm

Oh by the way the full paper can be accessed here

RoyFOMR
November 6, 2011 8:26 pm

W,
You are so in the past here. Don’t you understand that the customer is always right? Error-bars just confuse the statistics, confound the democratic process and bring into question the motives of our great and good political masters!,
Never forget that these savants have an inordinately large amount of Tax$ to spend. If they mess up the odd Trillion or two by innocently sending a few Mill into their accounts, then should we blame them?
At least they tried to save our planet and make a few bucks, unlike you. All that I’ve noted about what you’ve done to date is no more than a few brilliant and heart-warming essays, a realistic hypothesis or two, and a determined effort to take Post-Modernist Science back into the enlightenment!
You’ve clearly no shame. Unlike them.

November 6, 2011 8:30 pm

keith says:
“This reminds me of the NASA GISS 1200km ‘averaging’ – no attempt to account for significance. I remember using their online tool and reducing the distance to around 200kms – suddenly all these gaps appear all over the globe where previously there was massive warming and the global temp increase dropped by a third…”
I have a beef with some of what goes into GISS. Large areas are likely too-often represented by a thermometer at an Arctic location having above-regional-average local surface albedo feedback, or a land thermometer while the area represented includes a lot of ocean area.
There is another decadal temperature trend map of the world, from January 1979 to sometime recently, and it avoids surface station siting issues, growth of urban effects on surface station thermometers, and large regions represented by thermometers in local warming hotspots:
1st global map image (lower troposphere), in:
http://www.remss.com/msu/msu_browse.html
That one omits the “pole holes” within 7.5 degrees of the poles, and other areas on basis of little or no lower troposphere above the surface (elevation above 3 km or within 20 degrees of the South Pole).

Cherry Pick
November 6, 2011 9:06 pm

Their argument against using statistics on computer models is valid. Models contain man-made fudge factors called parameters. Statistical measures like averages, means and confidence intervals of model outputs are arbitrary – they are what the programmers choose them to be.
Another example of this that has amazed me is the concept of the ensemble mean. It’s as if a report of climate modeling were some kind of opinion poll. Ensemble statistics describe how well research groups agree on parameters, and as a consequence on the outputs of the computer runs that they have chosen to publish.

Baa Humbug
November 6, 2011 9:43 pm

Sometimes people ask why I bother. It’s because I can’t just sit quiet while this kind of nonsense is being put forward and then goes round the world three times. Like they say … better to light one small forest fire than to curse the darkness …
Or something like that.

And that’s why I love ya

paulsNZ
November 6, 2011 9:49 pm

AGW belief systems have errors, which are relegated to the faith-based error processing model employed by South Park, and it’s GONE.

John Trigge
November 6, 2011 10:17 pm

Willis – ask them for the error bars and expect the response:
“Why should I give you my error bars when you only want to find fault in it?”

November 6, 2011 11:14 pm

I don’t understand any of this. The modeling methodology they are using should be about the same as I use for a coal deposit or a metallic ore body. The reason one develops such models is the statistical confidence that is part of the spatial (grid cell) calculation (ore bodies are static, so the methods work very well). Other important model results include the maximum distance between data points needed to achieve confidence, and any directionality trends. That is, depending on the element or factor being modeled, the confidence distance will often vary between them, and the cells for each may have different shapes: squares, rectangles, or sometimes triangles and parallelograms.
If you don’t look at all these factors and work out the probability or confidence for each, you can be down the garden path faster than a loaded haul truck heading down a 10% grade. Sounds to me like the authors are on just such a ride. My mining-related advice: jump, boys, before I need to attend your funeral.

Baa Humbug
November 6, 2011 11:15 pm

Though this paper is more detailed and houses more pretty graphs than that infamous toilet paper about the 4 dead polar bears, it nonetheless rivals that paper in the best a$$ wiper stakes.
What the authors of this paper have done is taken the CRU TS3.1 data for land and the Hadley HadISST1.1 data for oceans, covering the period 1960 to 2009.
From the above data they have determined that at various regions of the world, spring arrives X days earlier and autumn arrives X days later. They have also plotted the spatial coverage of these changes and concluded that unless species are able to shift their range quick enough, they will be adversely affected.
But have any of the authors jumped on a boat or squirmed into a diving suit? NO
Have they named a single species, nay, a single specimen, that has shifted range due to these changes? NO
They open the paper with

“Climate warming is a global threat to biodiversity”,

a bullshyte statement taken from a paper (O. E. Sala et al., “Global biodiversity scenarios for the year 2100”), and finish with an equally bullshyte statement

“Maps of the velocity of climate change and seasonal shift show the areas where the threat to biodiversity from organisms’ need to rapidly track thermal conditions by shifting distributions and retiming seasonal thermal events may be greatest; these areas may coincide with high biodiversity, especially in the oceans.”

No wonder Willis gets all hot under the collar

Mark
November 7, 2011 12:53 am

MrX says:
WOW! They believe that the “garbage-in, garbage-out” rule doesn’t apply to them, which also implies that uncertain input will produce uncertain output. KUDOS! They’ve just broken a universal rule of computing (and hence models). They will be renown worldwide in computer science literature for this feat. NOPE!
Except as an example of how NOT to do things.
I wonder if they are aware of the actual shape of the Earth, or if they have treated the “squares” on a Mercator projection as being real, since they mention using a “1° x 1° grid”. 🙂

Mike Jowsey
November 7, 2011 12:56 am

Quote of the week: “…buys you a climate indulgence …”
Nice work Mr Eschenbach.

E.M.Smith
Editor
November 7, 2011 1:22 am

As the current global data have about 1200 stations (and even at the peak it was less than 8000), the use of any gridding with more cells than that is just pointless. It is just going to hide the ignorance of the data behind an ever larger number of ‘fantasy cells’ filled with a homogenized data food product.
So what they are saying is that it’s awfully hard to put an error bar on all those 1 degree grid boxes, as they are (all but about 1200 of them) filled with a fantasy value anyway. What is the error bar on a fantasy? Why, even more meaningless than the non-data created to ‘fill’ it…
I don’t know how their cells are constructed, but a 360 circle of 360 degrees of longitude I think gives about 129600 cells. Compare with 1200 actual thermometer values in the present.
Yeah, that’s a lot of fantasy “values”… So again I ask: How do you calculate error bars on those fantasy values?…

Jim Turner
November 7, 2011 3:03 am

If the objective was to generate a pretty picture with lots of orange bits, then they have succeeded.
A point in that direction is that they look to have used a Mercator projection or similar, which is a poor choice where comparative areas are significant. It massively exaggerates the area of higher and lower latitudes, including all those intense orange areas in Canada, Alaska, Siberia, etc. For example, Greenland is actually about 2/3 the land area of India, not 4-5 times larger as it appears on this map.

wayne Job
November 7, 2011 3:16 am

I take great delight in your tilting at windmills, Willis, and have a chuckle at the stupidity of some of the nonsense science. That said, I would like to go off topic: your deductions about the tropical thermostat and the almost constant heat input regardless of changing parameters were very good.
There remain four other thermostats that are a tad more perplexing: the two temperate zones, like piggy in the middle, hot on one side and cold on the other, and the two poles, both totally different in aspect but thermostats nonetheless.
Your odd analytical brain can maybe make some sense of their workings and tie them back to the tropic input. I am not even half clever enough to do this, but I have a sneaking suspicion that you can.

November 7, 2011 3:19 am

E.M.Smith says:
November 7, 2011 at 1:22 am
“”I don’t know how their cells are constructed, but a 360 circle of 360 degrees of longitude I think gives about 129600 cells. Compare with 1200 actual thermometer values in the present.””
Actually it is 360 degrees of longitude X 180 degrees of latitude, for half that many cells: 64,800. So if the data points were distributed with their nearest neighbors more than 1 degree apart, only 1 out of every 54 grid cells would contain real data.
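The corrected cell count, and the implied ratio of grid cells to real thermometers, is easy to verify:

```python
# 1-degree grid over the whole globe.
lon_cells = 360            # degrees of longitude
lat_cells = 180            # degrees of latitude (90S to 90N)
total_cells = lon_cells * lat_cells

stations = 1200            # rough current station count cited above
print(total_cells)                # 64800
print(total_cells // stations)    # 54 -- about 1 cell in 54 holds real data
```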

UK Sceptic
November 7, 2011 3:38 am

Statistics? We don’t need no steenkin’ statistics…

Gail Combs
November 7, 2011 4:07 am

This is not science.
It is complete and utter propaganda fed to the media as ” a peer-reviewed paper” to give it more weight in the eyes of the public.
All it actually does is give science another black eye. NO Science was done. Some idiot took a false computer model and used it to come up with “conclusions” designed to scare the (self-snip) out of the public.
A skeptic could do the same thing, showing a “rapid descent” into an ice age based on a model, temperatures falling in the last 10 to 15 years, and the end of the Holocene, starting with a CAGW paper.

Lesson from the past: present insolation minimum holds potential for glacial inception 2007
Abstract
The community of climatologists predicts a progressive global warming [IPCC Fourth Assessment Report—Climate Change, 2007. The Scientific Basis. Cambridge University Press, Cambridge] that will not be interrupted by a glacial inception for the next 50 ka [Berger and Loutre, 2002. An exceptionally long Interglacial ahead? Science 297, 1287–1288]. These predictions are based on continuously increasing anthropogenic greenhouse gas emissions and on the orbital forcing that will provide only muted insolation variations for the next 50 ka. To assess the potential climate development without human interference, we analyse climate proxy records from Europe and the North Atlantic of Marine Isotope Stage (MIS) 11 (423–362 ka BP), an interval when insolation variations show a strong linear correlation with those of the recent past and the future. This analysis suggests that the insolation minimum at 397 ka BP, which provides the best available analogue to the present insolation minimum, terminated interglacial conditions in Europe. At that time, tundra–steppe vegetation spread in Central Europe and pine forests dominated in the eastern Mediterranean region. Because the intensities of the 397 ka BP and present insolation minima are very similar, we conclude that under natural boundary conditions the present insolation minimum holds the potential to terminate the Holocene interglacial. Our findings support the Ruddiman hypothesis [Ruddiman, W., 2003. The Anthropogenic Greenhouse Era began thousands of years ago. Climate Change 61, 261–293], which proposes that early anthropogenic greenhouse gas emission prevented the inception of a glacial that would otherwise already have started.

http://www.sciencedirect.com/science/article/pii/S0277379107002715
And this (deleted) Article

Abrupt Climate Change: Should We Be Worried? – Woods Hole Oceanographic Institution
Robert B. Gagosian
President and Director
Woods Hole Oceanographic Institution
Prepared for a panel on abrupt climate change at the
World Economic Forum
Davos, Switzerland, January 27, 2003
Most of the studies and debates on potential climate change, along with its ecological and economic impacts, have focused on the ongoing buildup of industrial greenhouse gases in the atmosphere and a gradual increase in global temperatures. This line of thinking, however, fails to consider another potentially disruptive climate scenario. It ignores recent and rapidly advancing evidence that Earth’s climate repeatedly has shifted abruptly and dramatically in the past, and is capable of doing so in the future.
Fossil evidence clearly demonstrates that Earth’s climate can shift gears within a decade….

But the concept remains little known and scarcely appreciated in the wider community of scientists, economists, policy makers, and world political and business leaders. Thus, world leaders may be planning for climate scenarios of global warming that are opposite to what might actually occur…
REFERENCES:
1 “Are We on the Brink of a New Little Ice Age?”—testimony to the US Commission on Ocean Policy, September 25, 2002, by T. Joyce and L. Keigwin (Woods Hole Oceanographic Institution).
2 Abrupt Climate Change: Inevitable Surprises, US National Academy of Sciences, National Research Council Committee on Abrupt Climate Change, National Academy Press, 2002.
3 “Thermohaline Circulation, the Achilles’ Heel of Our Climate System: Will Man-Made CO2 Upset the Current Balance?” in Science, Vol. 278, November 28, 1997, by W. S. Broecker (Lamont-Doherty Earth Observatory, Columbia University).
4 “Rapid Freshening of the Deep North Atlantic Ocean Over the Past Four Decades,” in Nature, Vol. 416, April 25, 2002, by B. Dickson (Centre for Environment, Fisheries, and Aquaculture Science, Lowestoft, UK), I. Yashayaev, J. Meincke, B. Turrell, S. Dye, and J. Hoffort.
5 “Decreasing Overflow from the Nordic Seas into the Atlantic Ocean Through the Faroe Bank Channel Since 1950,” in Nature, Vol. 411, June 21, 2001, by B. Hansen (Faroe Fisheries Laboratory, Faroe Islands), W. Turrell, and S. Østerhus.
6 “Increasing River Discharge to the Arctic Ocean,” in Science, Vol. 298, December 13, 2002, by B. J. Peterson (Marine Biological Laboratory), R. M. Holmes, J. W. McClelland, C. J. Vörösmarty, R. B. Lammers, A. I. Shiklomanov, I. A. Shiklomanov, and S. Rahmstorf.
7 “Ocean Observatories,” in Oceanus, Vol. 42, No. 1, 2000, published by the Woods Hole Oceanographic Institution.
8 The Little Ice Age: How Climate Made History 1300-1850, by Brian Fagan (University of California, Santa Barbara), Basic Books, 2000.
9 “Cultural Responses to Climate Change During the Late Holocene,” in Science, Vol. 292, April 27, 2001, by P. B. deMenocal (Lamont-Doherty Earth Observatory, Columbia University).
10 “Holocene Climate Instability: A Prominent, Widespread Event 8,200 Years Ago,” in Geology, Vol. 26, No. 6, 1997, by R. B. Alley and T. Sowers (Pennsylvania State University), P. A. Mayewski, M. Stuiver, K. C. Taylor, and P. U. Clark.
11 “A High-Resolution Absolute-Dated Late Pleistocene Monsoon Record From Hulu Cave, China,” in Science, Vol. 294, December 14, 2001, by Y. J. Wang (Nanjing Normal University, China), H. Cheng, R. L. Edwards, Z. S. An, J. Y. Wu, C. C. Shen, and J. A. Dorale.
Originally published: February 10, 2003
Last updated: September 3, 2009

(Wayback snapshot: http://web.archive.org/web/20091118222058/http://www.whoi.edu/page.do?pid=12455&tid=282&cid=9986)
Notice the usual “get more grants free” card that is played:

…..It is important to clarify that we are not contemplating a situation of either abrupt cooling or global warming. Rather, abrupt regional cooling and gradual global warming can unfold simultaneously. Indeed, greenhouse warming is a destabilizing factor that makes abrupt climate change more probable. A 2002 report by the US National Academy of Sciences (NAS) said, “available evidence suggests that abrupt climate changes are not only possible but likely in the future, potentially with large impacts on ecosystems and societies.”2…

Looks like he is saying we can have a “regional” Ice Age along with global warming… (Rolls eyes)

November 7, 2011 4:13 am

Hi Willis,
thanks for this interesting post. I work on the same issue as you regarding how to deal with inevitable uncertainties. I met you in Chicago in 2010 and would like to send a private email to you. Please send me your contact address; the one I have from the Heartland literature 2010 is invalid.
best
Michael
[Done -w.]

Pete in Cumbria UK
November 7, 2011 4:15 am

I do kinda wonder if these folks have ever heard of or seen ‘2001’, and even if they have, whether they came away with any sort of clue as to what the heck went on in there. Maybe they really are as utterly dumb and completely vacant as the love interest was in that recent YouTube video (where the sceptic caught fire and burned down to a cinder).
Also, have they ever come across Schrödinger’s Cat (hopefully not, as we all hope it’s still alive and well inside its box), and would they care to repeat the experiment with their own children as ‘The Cat’ and their computer determining what happened inside the box? Would they allow the peer reviewers (the peeps who bought the computer, are paying their wages and heating their offices) to tweak said computer and check their results? Would they go there, or allow that, with their own kids at stake? How do we go about asking ’em?

Chris B
November 7, 2011 6:18 am

They don’t estimate the error on SWAGs because it’s a constant. One hundred percent (100%).

Ben of Houston
November 7, 2011 8:18 am

I’ve seen this before, and I can tell you the real reason.
1: They have no fricken clue what the actual error is, due to incompetence, or
2: They calculated it to be something insane and discarded the result as impossible, not realizing that this meant their data was worthless.
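[For readers wondering what the “missing” calculation even looks like: the standard error of a fitted warming trend is a few lines of arithmetic, which makes its omission all the more striking. Below is a minimal sketch using synthetic annual anomalies for 1960–2009; the data, seed, and noise level are invented for illustration and have nothing to do with B2011’s actual inputs.]

```python
import numpy as np

# Hypothetical 50-year annual temperature anomaly series (1960-2009).
# Synthetic data for illustration only -- NOT the B2011 inputs.
rng = np.random.default_rng(42)
years = np.arange(1960, 2010)
temps = 0.013 * (years - 1960) + rng.normal(0.0, 0.1, years.size)

# Ordinary least-squares trend (deg C per year) and its standard error --
# the basic quantity an error bar on a warming-rate map would reflect.
n = years.size
x = years - years.mean()               # center the predictor
slope = np.sum(x * temps) / np.sum(x**2)
intercept = temps.mean()
residuals = temps - (intercept + slope * x)

# Residual variance with n - 2 degrees of freedom (slope and intercept fitted)
s2 = np.sum(residuals**2) / (n - 2)
se_slope = np.sqrt(s2 / np.sum(x**2))

# A rough 95% confidence interval on the decadal trend
print(f"trend = {10*slope:.3f} +/- {10*1.96*se_slope:.3f} deg C/decade")
```

[This is the simplest possible version (independent errors, no autocorrelation correction); real station data would need more care, which is rather the point of asking for error bars in the first place.]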

higley7
November 7, 2011 11:54 am

What’s being overlooked is that the error bars are indeed there. They are simply off the charts on both sides.

Economart
November 7, 2011 3:14 pm

They are using the error bars to club the skeptics.
GM

Tenuc
November 8, 2011 2:43 am

Willis, you forgot that of course they don’t need error bars. The climate of the virtual world that has been created must, by definition, be 100% correct. The fact that it is based on bad data about the real world, wrong assumptions about real-world climate systems, and the need to prove global warming is happening is irrelevant – as is this paper, if you need to know what is going on in the real world.