# Shakun, Not Stirred, and Definitely Not Area-Weighted

I’d like to highlight one oddity in the Shakun et al. paper, “Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation” (Shakun2012), which I’ve discussed here and here. They say:

The data were projected onto a 5°x5° grid, linearly interpolated to 100-yr resolution and combined as area-weighted averages.

The oddity I want you to consider is the area-weighting of the temperature data from a mere 80 proxies.

Figure 1. Gridcells of latitude (North/South) and longitude (East/West)

What is area-weighting, and why is it not appropriate for this data?

“Area-weighting” means that you give more weight to some data than others, based on the area of the gridcell where the data was measured. Averaging by gridcell and then area-weighting attempts to solve two problems. The first problem is that we don’t want to overweight an area where there are lots of observations. If some places have 3 observations and others have 30 observations in the same area, that’s a problem if you simply average the data. You will overweight the places with lots of data.

I don’t like the usual solution, which is to use gridcells as shown in Figure 1, and then take a distance-weighted average of the observations in each gridcell, measured from the gridcell’s center. This at least attenuates the problem of overweighting neighboring proxies, by averaging them together within gridcells … but like many a solution, it introduces a new problem.

The next step, area-averaging, attempts to solve the new problem introduced by gridcell averaging. The problem is that, as you can see from Figure 1, gridcells come in all different sizes. So if you have a value for each gridcell, you can’t just average the gridcell values together. That would over-weight the polar regions, and under-weight the equator.

So instead, after averaging the data into gridcells, the usual method is to do an “area-weighted average”. Each gridcell is weighted by its area, so a big gridcell gets more weight, and a small gridcell gets less weight. This makes perfect sense, and it works fine, if you have data in all of the gridcells. And therein lies the problem.
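To make the procedure concrete, here is a minimal sketch of that kind of area-weighted averaging (my own illustration, not Shakun et al.’s code; the cell locations and values are invented):

```python
import math

def cell_weight(lat_center_deg):
    # On a 5°x5° grid, a cell's area is very nearly proportional to the
    # cosine of the latitude of the cell's center.
    return math.cos(math.radians(lat_center_deg))

def area_weighted_average(cell_values):
    # cell_values: {(lat_center, lon_center): gridcell average temperature}
    # Only occupied cells enter the average; empty cells contribute nothing.
    total_w = sum(cell_weight(lat) for (lat, _lon) in cell_values)
    return sum(cell_weight(lat) * v
               for (lat, _lon), v in cell_values.items()) / total_w

# Hypothetical example: three occupied cells out of 2592
cells = {(72.5, -40.0): 2.0,   # near-polar cell
         (37.5, 140.0): 2.0,   # mid-latitude cell
         (-2.5, 145.0): 2.0}   # near-equatorial cell
print(area_weighted_average(cells))  # 2.0 -- equal values, so the weights cancel
```

With equal cell values the weighting is invisible; the trouble starts when sparse, unequal values meet very unequal weights.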

For the Shakun 2012 gridcell and area-averaging, they’ve divided the world into 36 gridcells from Pole to Pole and 72 gridcells around the Earth. That’s 36 times 72 equals 2592 gridcells … and there are only 80 proxies. This means that most of the proxies will be the only observation in their particular gridcell. In the event, the 80 proxies occupy 69 gridcells, or about 3% of the gridcells. No less than 58 of the gridcells contain only one proxy.

Let me give an example to show why this lack of data is important. To illustrate the issue, suppose for the moment that we had only three proxies, colored red, green, and blue in Figure 2.

Figure 2. Proxies in Greenland, off of Japan, and in the tropical waters near Papua New Guinea (PNG).

Now, suppose we want to average these three proxies. The Greenland proxy (green) is in a tiny gridcell. The PNG proxy (red) is in a very large gridcell. The Japan proxy (blue) has a gridcell size that is somewhere in between.

But should we give the Greenland proxy just a very tiny weight, and weight the PNG proxy heavily, simply because of the gridcell sizes? No way. There is no ex ante reason to weight any one of them differently.

Remember that area weighting is supposed to adjust for the area of the planet represented by that gridcell. But as this example shows, that’s meaningless when data is sparse, because each data point represents a huge area of the surface, much larger than a single gridcell. So area averaging is distorting the results, because with sparse data the gridcell size has nothing to do with the area represented by a given proxy.

And as a result, in Figure 2, we have no reason to think that any one of the three should be weighted more heavily than another.
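For the record, here are the relative area weights the three gridcells in Figure 2 would get (a quick calculation, assuming approximate cell-center latitudes of 72.5°N, 37.5°N, and 2.5°S):

```python
import math

# Approximate gridcell-center latitudes for the three hypothetical proxies
# (locations assumed for illustration)
latitudes = {"Greenland (green)": 72.5, "Japan (blue)": 37.5, "PNG (red)": -2.5}

for name, lat in latitudes.items():
    # Relative weight of a 5x5 cell ~ cos(latitude of cell center)
    print(name, round(math.cos(math.radians(lat)), 3))
# Greenland ~ 0.301, Japan ~ 0.793, PNG ~ 0.999: the PNG proxy gets over
# three times the weight of the Greenland proxy, for no physical reason.
```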

All of that, to me, is just more evidence that gridcells are a goofy way to do spherical averaging.

In Section 5.2 of the Shakun2012 supplementary information, the authors say that areal weighting changes the shape of the claimed warming, but does not strongly affect the timing. However, they do not show the effect of areal weighting on their claim that the warming proceeded from south to north.

My experiments have shown me that the use of a procedure I call “cluster analysis averaging” gives better results than any gridcell based averaging system I’ve tried. For a sphere, you use the great-circle distance between the various datapoints to define the similarity of any two points. Then you just use simple averaging at each step in the cluster analysis. This avoids both the inside-the-gridcell averaging and the between-gridcell averaging … I suppose I should write that analysis up at some point, but so many projects, so little time …
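To give the flavor of it, here is a toy version of the cluster-averaging idea (illustrative only, not my actual analysis code; among other simplifications it uses a naive midpoint for merged cluster locations rather than the proper spherical midpoint):

```python
import math

def great_circle_deg(p, q):
    # Central angle in degrees between two (lat, lon) points given in degrees
    la1, lo1, la2, lo2 = map(math.radians, (p[0], p[1], q[0], q[1]))
    c = (math.sin(la1) * math.sin(la2)
         + math.cos(la1) * math.cos(la2) * math.cos(lo1 - lo2))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

def cluster_average(points, values):
    # Agglomerative averaging: repeatedly merge the two closest clusters,
    # simple-averaging their values (and, naively, their locations)
    clusters = list(zip(points, values))
    while len(clusters) > 1:
        i, j = min(((a, b) for a in range(len(clusters))
                    for b in range(a + 1, len(clusters))),
                   key=lambda ab: great_circle_deg(clusters[ab[0]][0],
                                                   clusters[ab[1]][0]))
        (pi, vi), (pj, vj) = clusters[i], clusters[j]
        merged = (((pi[0] + pj[0]) / 2, (pi[1] + pj[1]) / 2), (vi + vj) / 2)
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return clusters[0][1]

# Two nearly co-located proxies reading 10, one distant proxy reading 0:
pts = [(0.0, 0.0), (0.0, 1.0), (0.0, 100.0)]
print(cluster_average(pts, [10.0, 10.0, 0.0]))  # 5.0 -- the close pair
# collapses to one vote instead of outvoting the lone distant proxy
# (a simple mean of the three would give 6.67)
```

No gridcells anywhere, so there is nothing to rotate and no cell sizes to argue about.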

One final point about the Shakun analysis. The two Greenland proxies show a warming over the transition of ~ 27°C and 33°C. The other 78 proxies show a median warming of about 4°C, with half of them in the range from 3° to 6° of warming. Figure 3 shows the distribution of the proxy results:

Figure 3. Histogram of the 80 Shakun2012 proxy warming since the most recent ice age. Note the two Greenland ice core temperature proxies on the right.

It is not clear why the Greenland ice core proxies should be so far out of line with the others. It seems doubtful that, if most of the world warmed by about 3°-6°C, Greenland would warm by 30°C. If it were my study, I’d likely remove the two Greenland proxies as wild outliers.

Regardless of the reason that they are so different from the others, the authors’ areal-weighting scheme means that the Greenland proxies will be only lightly weighted, removing the problem … but to me that feels like fortuitously offsetting errors, not a real solution.

A good way to conceptualize the issue with gridcells is to imagine that the entire gridding system shown in Figs. 1 & 2 were rotated by 90°, putting the tiny gridcells at the equator. If the area-averaging is appropriate for a given dataset, this should not change the area-averaged result in any significant way.

But in Figure 2, you can see that if the gridcells all came together down by the red dot rather than up by the green dot, we’d get a wildly different answer. If that were the case, we’d weight the PNG proxy (red) very lightly, and the Greenland proxy (green) very heavily. And that would completely change the result.
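You can put rough numbers on that thought experiment. Using invented proxy locations and warmings in the ballpark of Figure 3 (~30°C for Greenland, ~4°-5°C for the others), and weighting each proxy by the cosine of its latitude measured in the grid’s own frame:

```python
import math

def central_angle_deg(p, q):
    # Angle in degrees between two (lat, lon) points given in degrees
    la1, lo1, la2, lo2 = map(math.radians, (p[0], p[1], q[0], q[1]))
    c = (math.sin(la1) * math.sin(la2)
         + math.cos(la1) * math.cos(la2) * math.cos(lo1 - lo2))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

# Hypothetical locations and deglacial warmings (degrees C) for the three proxies
proxies = [((72.5, -40.0), 30.0),   # Greenland (green)
           ((37.5, 140.0), 5.0),    # Japan (blue)
           ((-2.5, 145.0), 4.0)]    # PNG (red)

def area_weighted(pole):
    # A cell's weight is cos(latitude) in the grid's own frame, which is
    # sin(angular distance from the point where the grid's cells converge).
    w = [math.sin(math.radians(central_angle_deg(p, pole))) for p, _ in proxies]
    return sum(wi * v for wi, (_, v) in zip(w, proxies)) / sum(w)

print(round(area_weighted((90.0, 0.0)), 1))    # normal grid: 8.1
print(round(area_weighted((-2.5, 145.0)), 1))  # grid rotated so the tiny cells
# converge near the PNG proxy: 19.8 -- same data, wildly different answer
```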

And for the Shakun2012 study, with only 3% of the gridcells containing proxies, this is a huge problem. In their case, I say area-averaging is an improper procedure.

w.

## 164 thoughts on “Shakun, Not Stirred, and Definitely Not Area-Weighted”

1. The data were projected onto a 5°x5° grid, linearly interpolated to 100-yr resolution and combined as area-weighted averages.
Where do they say that they weight with the area of each grid cell? The way I would weight would be to divide the globe into a number of equal-area pieces [not the grid cells, obviously, as they don’t have equal area] and then calculate the average value of a proxy by computing the average of the grid cells that fall into each equal area piece, then average all the pieces. This is the standard [and correct] way of doing it. Why do you think they didn’t do it this way?

2. The two Greenland proxies show a warming over the transition of ~ 27°C and 33°C. The other 78 proxies show a median warming of about 4°C

Could it be that the Greenland proxies are closer to polar, and the others more tropical? I’ve not followed where the proxies originate, but I suspect a warming-vs-latitude chart might show something.

3. Leif Svalgaard says:
April 9, 2012 at 11:28 am
then calculate the average value of a proxy by computing the average of the grid cells
One could argue if that average should be weighted by the area of each grid cell. I’m inclined not to, but in any event the variation of the grid cell areas with each ‘piece’ would be small, so it may not matter.

4. the grid cell areas within each ‘piece’ would be small, so it may not matter

5. W. ,
I just thought of a way that CO2 could lead temp during a transition.

Latent Heat. When ice is melting, temperature doesn’t rise. CO2 would be released though.

You would have an initial warming without much CO2 release (pretty much just where water is warmed). As older ice melts, CO2 would be released in greater amounts. Because of the large volume of melt, temperature would not rise much, since a very large amount of heat would be used to break molecular bonds. As the rate of melt decreases, temp would begin to rise faster.

6. Willis Eschenbach says:

Leif Svalgaard says:
April 9, 2012 at 11:28 am

The data were projected onto a 5°x5° grid, linearly interpolated to 100-yr resolution and combined as area-weighted averages.

Where do they say that they weight with the area of each grid cell? The way I would weight would be to divide the globe into a number of equal-area pieces [not the grid cells, obviously, as they don’t have equal area] and then calculate the average value of a proxy by computing the average of the grid cells that fall into each equal area piece, then average all the pieces. This is the standard [and correct] way of doing it. Why do you think they didn’t do it this way?

Hey, Leif, good to hear from you. You can do it with equal-area cells as you suggest. However, many folks (including apparently Shakun et al., since they don’t mention equal-area anywhere) just area weight by the area of the 5°x5° gridcells. For example, GISS uses equal-area gridcells, whereas HadCRUT uses area-weighted 5°x5° gridcells.

w.

7. Pull My Finger says:

Climate reconstructions are like picking three games over the 20+ seasons Ty Cobb played and determining his career batting average. You could state that Ty Cobb was worthless, batting .000 for his career, by picking three games where he was 0 for 4; or that he was a career .800 hitter who didn’t play very often, by picking games where he went 4 for 4, 0 for 1, and one game where he didn’t play. We all know this is an absurd way to rate baseball players, but somehow it is perfectly acceptable for determining the economic fate of the world.

8. Joe Public says:

Willis, some of the comments should be in degrees K.

9. xham says:

Wouldn’t a geodesic grid system be the way to go?

10. Pull My Finger says:

As a former GIS analyst and cartographer, I can state with grim authority that when it comes to spatial statistical analysis, the vast majority of people across all disciplines haven’t a clue as to what they are doing when it comes to studies like this one.

11. Willis Eschenbach says:
April 9, 2012 at 11:41 am
You can do it with equal-area cells as you suggest. However, many folks (including apparently Shakun et al., since they don’t mention equal-area anywhere) just area weight by the area of the 5°X5° gridcells.
How do you know that they just weight by grid cells? If I had written the paper, I would also not have elaborated on how to weight by area, as one would simply just do it correctly, so no need to dwell on the obvious, just say ‘area weighted’ implying the correct meaning.

12. JJ says:

Leif Svalgaard says:

This is the standard [and correct] way of doing it. Why do you think they didn’t do it this way?

Willis’ point seems to be that they did do it that way, but that it is not correct for them to have done so. For the reasons given, I concur.

13. BioBob says:

I have a hypothetical for you all.

Is there something wrong with admitting that there is not enough data, nor is the available data of good enough quality, to draw the feverishly desired conclusions? Is there something wrong with admitting that we just do not know all the answers, or even most of them, or even a few of them?

“I can live with doubt and uncertainty and not knowing. I think it’s much more interesting to live not knowing than to have answers which might be wrong.” Richard Feynman

14. Phil Cartier says:

The big problem with this paper is that they don’t have a significant amount of data. If there is only data in 3% of the gridcells, any kind of averaging and projecting to the whole globe is meaningless, unless all the proxies have samples equally spaced around the globe. Even then, the data is just too sparse to be meaningful.

As I was reading the discussion I was thinking you need an equal-area projection type of approach, and then I saw Dr. Svalgaard’s comment. I agree. I also question why anyone would attempt to extrapolate ~80 points, which aren’t necessarily representing the same thing, to the entire globe in the first place. Shakun2012 proves nothing, demonstrates little except its authors’ questionable motives, and fails to apply the scientific method with the degree of rigor needed to qualify as anything other than fiction, or wishful thinking.

16. Jean Parisot says:

Gridding is a major problem. The areas should be related by content, such as: %land, %ice, annual sunload, etc., with the related proxies used therein. At least use a cell based on the true Earth shape, not a spherical map projection.

Has any of this been analyzed by spatial statisticians?

17. You know my feelings about this already. Using 5×5 lat/long cells is enormously stupid. I personally favor a rescalable icosahedral map — one that can be subdivided into enough grid cells that important geographic features — coastlines, mountain ranges, deserts, larger islands, forests, cities — are typically resolvable, so that a continental map done in cells is recognizable (and can be done at double the resolution easily).

That still doesn’t resolve the many other problems with Shakun — the utter dominance of coastal sites, for example, in a world where the odds are at least 10 or 20 to 1 against a pin dropped at random onto the planet landing within 50 or 100 miles of a coast. But it does handle the problem with cell weights a lot better.

I don’t think the climate scientists realize how important it is to address and resolve this geometric issue before they spend all the time they spend making blind assertions about “global average temperature”. Numerical integration on the surface of the sphere — which is what this is, whether or not they understand that — is a difficult business, and one that is almost impossible to carry out correctly in spherical polar coordinates. There is an entire literature on the subject, complete with alternative tilings and coordinatizations. But even using the Jacobian doesn’t fix spherical polar coordinates when one relies on a sparse grid of samples, let alone a sparse grid of samples biased by their geography.

For example, is there anyone who thinks that a coastal sample from Greenland (assuming one could generate such a thing from the same span of times) would in any way resemble an interior glacial ice core? Or that a glacial ice core from Greenland in any way would resemble a sample drawn from (say) Minnesota or Idaho or Alaska or Mongolia? Yet there are two samples from there and none from Minnesota, which is probably a far more typical inland environment.

rgb

18. thelastdemocrat says:

The geocoding / geospatial analysis people have excellent ways of addressing limited information across geography. They would not use a lat/long approach, but would use a mosaic approach, which would look more like a soccer ball pattern.

19. thelastdemocrat says:

There is another way to look at the issue of 80 data points. If you have a hypothesis, that increased CO2 leads to warming, you could grab one location and see if the data fit. That would not be very conclusive. If you find a reliable pattern across the majority of 80 sites, that would be fairly persuasive. This is what is being done with evidence for the MWP: people keep finding local evidence of a temp pattern fitting MWP.

I am not saying I support this study – just sayin’ that 80 data points, each taken individually as one bit of info, would be persuasive regarding the MWP pattern, and would likewise be persuasive if local CO2-and-temperature data fit a certain pattern.

No, 80 data points can hardly serve to represent global temp. You need decent sampling from each of the various regions – ocean/land, the various latitudinal regions, etc.

20. JJ says:
April 9, 2012 at 11:51 am
Willis’ point seems to be that they did do it that way
My point was: how does Willis know that?

21. Steven Mosher says:

Well, it looks like there are multiple proxies per grid cell. So when they average per grid cell, you will have some grid cells with, say, 2-3 measures while other gridcells have only one measure. Equal-area weighting would not be correct either.

In any case it looks like you get CO2 going from about 180 to 260 and the temp response is 4C. Lukewarmer.

22. What all of this Skakun and a Rattlin’ is really about is that they are (rather shakily) trying to prop up the basic meme hypothesis: CO2 causes warming. That clearly indicates that there is some rather nagging doubt in the AGW community about their holy null hypothesis…and that they are basically in a rush to trowel some gumbo over the rather large leak in their sinking vessel. Grab the trowel, Willis!

23. Mosher, isn’t a 4C response to a 45% increase in CO2 physically impossible?

24. Steven Mosher says:
April 9, 2012 at 12:24 pm
So when they average per grid cell you will have some grid cells with say 2-3 measures while
other gridcells have only one measure.

I would not see that as a problem. Suppose there were 10,000 measurements in a grid cell and 1 in a neighboring one. Since temperatures are strongly autocorrelated spatially, the 10,000 measurements may not be any better than the 1, so it would be OK to include both cells without taking into account that there are many more in one than in the other.

25. The authors also looked at unweighted data and a “meridional weighting” procedure described in the supplementary, and they also tested the degree to which 80 randomly selected sites in the instrumental record reflect the global average (pretty good). It’s easy to nitpick details or tell what you’d do differently but none of this changes the key conclusions, and they are in line with what some previous papers have argued.

• Anthony Watts says:

Yes, nitpick details like inverted Tiljander and proxies deleted after 1960 never change the conclusions in the type of science Colose supports.

26. Jon says:

You have to weight for latitude. And is the effect of CO2 suddenly linear, and no longer logarithmic?

27. Chris Colose says:
April 9, 2012 at 12:35 pm
the key conclusions, and they are in line with what some previous papers have argued.
that is like arguing that smoking is healthy because so many people do it…

28. Beam me up Scotty says:

It is odd that you are trying to use science to debunk science.

29. srvdisciple says:

Anthony, just a typo note : “Japan proxy(red)” should read “Japan proxy(blue)”

30. Eric Webb says:

Shakun’s paper just seems like more warmist crap to me; they should know that CO2 responds to temperature, not the other way around. I wish we could conduct an experiment to prove that. This post made me wonder if NOAA uses similar tactics to average their data and make it look warm. I know they’ve removed many data stations from cold areas and moved them to cold areas. What also strikes me as odd is when they have random large red circles near the Arctic which are sometimes off by themselves or surrounded by much smaller circles; it makes the data look suspect. In other news, supposedly the US saw its warmest March on record, according to NOAA. I, however, think otherwise, considering that NOAA doesn’t include the UHI effect in their results, and given their pro-AGW views.

31. daved46 says:

Two things.

First, your paragraph under figure 1 says the Japan proxy is red instead of the blue it actually is.

Second, since the point of the paper is showing whether CO2 or temperature leads, I’m not sure why it’s of much interest whether the averaging is done correctly or not. If changing the weights of the individual proxies by a factor of 2 or so can change the lead/trail order, then the data set is useless for deciding the question at hand. You need to go back and examine the individual proxies to see what their physical attributes are.

Two remarks. Obviously you can average out a temperature trend, but can you average out the timing of events? A few decades ago Wally Broecker and friends already noted that the northern hemisphere warming came much later than the southern. So there is nothing new here, except that the first event was the start of the isotope peak in the Antarctic proxies, before the CO2 records. So you can average out whatever you like, but none of this changes the key conclusion: whatever happened, it started in the southern hemisphere, and it was not CO2.

Secondly, good to note that the Greenland ice cores are outliers. They are indeed, and it goes a long way toward falsifying the idea of isotopes in precipitation being a proxy for temperature. Instead, they are a proxy for dewpoint (most basic meteorology), in combination with the rain-out Rayleigh effect. Non calor sed umor: it’s not the heat but the humidity.

33. JDN says:

Gridding is probably inappropriate, period!

If you start by picking 80 surface stations around the world, many of them won’t correlate with a square or geodesic grid. As an example, this year in the USA, average temperatures will correlate well on a north-south line on the east coast thanks to so much air moving up the coast inland during the winter. In most years, however, temperature will correlate well along roughly east-west great circles, at least with the midwest temperatures. None of this correlation lies on a regular grid. How can any grid be justified for sparse data if you were to use actual temperature measurements, much less proxies?

34. George E. Smith; says:

Well, wake me up when these gridders discover the theory of sampled-data systems and the Nyquist theorem. In the meantime, 80 samples of “anything” is just that: 80 samples of anything; and it isn’t necessarily DATA about ANYTHING.

35. Steve from Rockwood says:

I was thinking about how Shakun’s “averaging” worked the other day. My original concern was the high correlation in some areas (such as sites in Greenland) and the low correlation in other areas, both one site to the next closest and also mid-latitude sites with either Greenland or Antarctica. So the proxy series contain apples and oranges.

After reading the grid was 5 x 5 degrees I did a cross-plot of LAT versus LON from the Metadata. Just in the SH from -90 to -45 LAT there are 648 cells and only 6 data points with no more than one data point per cell. How can this be a global average if less than 1% of the cells in the SH have a single point?

In the NH it is almost as bad. Of the 648 cells from 45 to 90 degree LAT there are 9 data points.
So from 45 degrees to each pole there are 15 data points in almost 1300 cells.

To make matters worse, most of the data points are clustered around the same latitude (varying along longitude), especially near +30 and also the equator.

From 100 degrees west to 180 west from pole to pole (1440 cells at 5×5) there are 6 proxies.

36. Steve from Rockwood says:
April 9, 2012 at 1:11 pm
How can this be a global average if less than 1% of the cells in the SH have a single point?
since you have the data already tallied it would be nice if you could post a lat-long grid of the number of data points in each cell.

37. Steve from Rockwood:
If you have the data as a text file or an Excel file, send it to me leif@leif.org and I’ll graph it.

38. Steven Mosher says:

Leif Svalgaard says:
April 9, 2012 at 12:32 pm (Edit)

I would not see that as a problem. Suppose there were 10,000 measurements in a grid cell and 1 in a neighboring one. Since temperatures are strongly autocorrelated spatially, the 10,000 measurements may not be any better than the 1, so it would be OK to include both cells without taking into account that there are many more in one than in the other.

###############

That’s not the problem. The problem is when you have 10,000 measures in one cell reading 20C and ONE measure in the next cell reporting 10C. A simple average gives you 15C. Now, if the 10,000 agree with the one neighbor, then averaging is not a problem. So, the method I use (Nick Stokes’s, actually) is inverse-density weighting.

39. Steven Mosher says:

JDN

“If you start by picking 80 surface stations around the world, many of them won’t correlate with a square or geodesic grid. As an example, this year in the USA, average temperatures will correlate well on a north-south line on the east coast thanks to so much air moving up the coast inland during the winter. In most years, however, temperature will correlate well along roughly east-west great circles, at least with the midwest temperatures. None of this correlation lies on a regular grid. How can any grid be justified for sparse data if you were to use actual temperature measurements, much less proxies?”

The correlation length is a function of latitude and season. In the end, whether you use regular gridding, Voronoi tessellation, EOFs, or kriging, the answer comes out the same.

1. CRU: equal-angle grid
2. GISS: equal-area grid
3. Nick Stokes: equal angle (with inverse density) AND Voronoi tessellation
4. NCDC: EOFs
5. Berkeley Earth: kriging

The answer given by each and every one of these approaches to averaging spatial data is … THE SAME. OK, differences the size of mousenuts.

40. Steven Mosher says:
April 9, 2012 at 1:27 pm
That’s not the problem. The problem is when you have 10000 measures in one cell reading
20C and ONE measure in the next cell reporting 10C. A simple average gives you 15C.

And that would be correct, as there is not much extra information in the 10,000-measure average. Imagine you increase that to 1,000,000,000 measurements, with a thermometer every square meter: the average [20C] would not change significantly, yet the 10C data point would be swamped out by all those new measurements with no new information.

41. Steven Mosher says:

George, let us know when you discover spatial autocorrelation.

42. Steven Mosher says:

Eric Webb says:
April 9, 2012 at 12:54 pm (Edit)
Shakun’s paper just seems like more warmist crap to me, they should know that CO2 responds to temperature, not the other way around.

###################

It’s actually BOTH. Added CO2 will warm the earth and the ocean will respond by outgassing more CO2.

43. Steven Mosher says:
April 9, 2012 at 1:36 pm
It’s actually BOTH. Added CO2 will warm the earth and the ocean will respond by outgassing more CO2.
nice positive feedback loop there…

44. Steven Mosher says:

Chris Colose says:
April 9, 2012 at 12:35 pm (Edit)
The authors also looked at unweighted data and a “meridional weighting” procedure described in the supplementary, and they also tested the degree to which 80 randomly selected sites in the instrumental record reflect the global average (pretty good).

##############

Ya, it looks like nobody read the SI. Looking at the 80 sites they picked and the latitudinal distribution, I would say the 80 locations they have would do pretty well. Then again, you are talking to some folks who think that Viking settlements represent the entire world and that frost fairs in England can reconstruct the temperature in Australia.

Here is what you will find, Chris. When a skeptic has one data point they like, they forget about the global average. When they have 80 they don’t like, they crow about the small number.

60 optimally chosen sites are enough. I’m not surprised they did well with 80.

45. Rogelio escobar says:

So does water, methane, nitrogen, etc. … excess heat is mostly lost to space, so it’s all c%%%. I strongly recommend you read the recent papers of Lindzen, Spencer, etc., who are actually atmospheric physicists, BTW.

46. Andreas says:

Why 5°x5° grid boxes?

The answer is in the supplementary information (http://www.nature.com/nature/journal/v484/n7392/extref/nature10915-s1.pdf)

Shakun et al tested other methods and found that they lead to quite similar results; only the amplitude of warming differs, by about 0.7°C, which doesn’t influence their conclusions (there’s an instructive graph in the supplement).

More important, it’s a convenient choice, because they tested with present HadCRUT temperature data whether their proxy locations are representative of average global temperature (they are, quite well). HadCRUT uses 5°x5° grids, so this choice made comparison easy.

47. pochas says:

Steven Mosher says:
April 9, 2012 at 1:34 pm

“Added CO2 will warm the earth and the ocean will respond by outgassing more CO2.”

Very good, Steven!

48. Gary Pearse says:

If you have a limited number of data points, you are going to have a poor temp estimation no matter what. Perhaps rather than gridcell averaging, it would be better to give each data point (possibly adjusted to sea level) a latitudinal band, with the idea that temp roughly decreases with distance from the equator. It would be a lousy estimate, but probably better than the gridcell weighted-average one. The bands would be area-weighted, since they get shorter as you move toward the poles.

49. Ian H says:

As I understand it previous studies compared CO2 as measured from ice cores with temperatures as measured from ice cores. In other words essentially the same samples were used to determine both temperature and CO2. Consequently although absolute dates were inaccurate the relative dating of the CO2 rise and the temperature rise could be measured with much greater precision. By contrast this study attempts compare completely unrelated measures of temperature and CO2. I can’t see how anything like the precision in measurement of relative timing that can be achieved comparing ice cores to ice cores can be obtained.

The only justification offered for rejecting the ice core to ice core comparison method with all its advantages is a poorly explained speculation that the bulk of the world might have warmed somewhat later than the poles. Even if one were to accept that this were so, it still would not admit the conclusion that CO2 caused the warming. That is because you would still have to explain why THE POLES warmed BEFORE CO2 levels rose, a result which this study does nothing to invalidate, and which remains known with much greater certainty (because it is obtained by comparing like with like) than the results in this paper, which, as pointed out here, seems to be attempting to paper over a lack of accurate data with dubious statistics.

50. Mosh

There are numerous scientific studies that suggest there is some sort of temperature relationship between British climate and that in other parts of the world. I thought you liked scientific studies?

By the way, over at the other place I asked for the links to the papers you cited (Julio etc.); they’ve become lost in the mass of posts on both threads since you made your comment.
Tonyb

51. Greg House says:

Posted on April 9, 2012 by Willis Eschenbach
“Area-weighting” means that you give more weight to some data than others, based on the area of the gridcell where the data was measured…
===========================================
Willis, you are probably right about handling the gridcells to calculate “global temperature” and trends; however, the whole thing can only be correct if the thermometer data is representative of the gridcells.

To put it simply, if you have a thermometer in a certain area, you can only use it for area weighting if you can prove that the thermometer’s data is representative of the whole area. If you cannot prove it, then you cannot do the weighting.

What we know for sure is that the thermometer data (if collected correctly) is representative of the box containing the thermometer. You cannot just draw a gridcell around the box on the map and claim the thermometer represents the whole gridcell. But unfortunately this is what some climate scientists do.

So, even if the operations with gridcells are mathematically correct, the result would be a fiction if there is no scientific proof that the thermometer data is representative of the gridcells.

52. Willis Eschenbach says:

aaron says:
April 9, 2012 at 11:39 am

W. ,
I just thought of a way that CO2 could lead temp during a transition.

Latent Heat. When ice is melting, temperature doesn’t rise. CO2 would be released though.

Interesting thought … not sure how you’d determine if that were the case, though.

Perhaps the place to look would be the CO2 records from the South Pole. During Antarctic summer (Dec-Jan-Feb) there’s a significant meltback of sea ice. If the melting ice released CO2, we’d expect to see a corresponding variation in the CO2 down there in the south during that time.

… OK, just took a look. Bad news, the annual increase in CO2 at the south pole starts around March and ends in September … and that’s when the ice is forming, not melting as your theory suggests.

Sorry,

w.

53. Willis Eschenbach says:

xham says:
April 9, 2012 at 11:46 am

Wouldn’t a geodesic grid system be the way to go?

You are right, xham. And there are a variety of ways to have equal-area gridcells, not just geodesic. But I still say averaging by any kind of gridcell introduces problems. That's why I use cluster analysis averaging, because of the errors introduced by any system of gridcells.

These problems just get worse when you have scarce data. You might end up with three adjacent gridcells that contain most of the data, and one gridcell in a distant area. The fact that the gridcells have the same area is meaningless in that situation.

w.

Nice work again Willis!
Off-topic Salby video!

For anyone interested in Murry Salby's game-changing work on the carbon cycle, here is a video where his slides are shown. It's a fantastic lecture!

55. Jon says:

Weighting is approximately in accordance with the fraction of the global area a grid box covers.
A 5×5 box at the Equator has double the area of a 5×5 box around 60N or 60S. At the poles a 5×5 box has hardly any area at all.
It goes as the cosine of the latitude. Cos of 0 is 1, cos of 10 is 0.985, cos of 20 is 0.94, cos of 30 is 0.866, cos of 40 is 0.766, cos of 50 is 0.643, cos of 60 is 0.5, cos of 70 is 0.342, cos of 80 is 0.174, and cos of 90 is 0.0.
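[For illustration, Jon's cosine weights can be recomputed in a few lines of Python; this is just his table redone, not anything from the Shakun paper:]

```python
import math

# Cosine-of-latitude weights for gridcells, as Jon lists them:
# weight(lat) = cos(lat), so a cell's weight shrinks toward the poles.
weights = {lat: math.cos(math.radians(lat)) for lat in range(0, 91, 10)}
for lat, w in weights.items():
    print(f"cos({lat}) = {w:.3f}")
```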

56. Steve from Rockwood says:

Leif Svalgaard says:
April 9, 2012 at 1:22 pm
Steve from Rockwood:
If you have the data as a text file or an Excel file, send it to me leif@leif.org and I’ll graph it.
————————————————-
On its way.

57. cui bono says:

Steven Mosher says (April 9, 2012 at 1:36 pm)
“Its actually BOTH. Added CO2 will warm the earth and the ocean will respond by outgassing more CO2.”

And that’s why the temperature has been going up exponentially since the end of the ice age?
/definite sarc

58. Theo Goodwin says:

rgbatduke says:
April 9, 2012 at 12:09 pm

“I don’t think the climate scientists realize how important it is to address and resolve this geometric issue before they spend all of the time that they spend making blind assertions about “global average temperature”. Numerical integration on the surface of the sphere — which is what this is, whether or not they understand that — is difficult business, and one that it is almost impossible to carry out correctly in spherical polar coordinates. There is an entire literature on the subject, complete with alternative tilings and coordinatizations. But even using the Jacobian doesn’t fix spherical polar coordinates when one relies on a sparse grid of samples, let alone a sparse grid of samples biased by their geography.”

Yes, climate scientists have proved time and again that they care little or nothing for the assumptions that underlie their assertions. They are quite happy to take any two temperature readings and treat them as comparable though the two readings exist in geographic areas that bear no resemblance to one another. Thank God that you, Willis, and other serious scientists are willing to “call them out” on this most egregious systemic error.

59. Jon says:

A 5×5 box from Equator to either 5 N or S is weighted 0.9990. A 5×5 box from the poles to either 85N or S is weighted 0.04362.
That means that the area at the 5×5 grid at Equator is 23 times larger than the 5×5 grid at the poles.
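[Jon's 23:1 ratio checks out exactly: the area of the sphere between two latitudes is proportional to the difference of the sines of those latitudes. A quick illustrative check in Python, not from the paper:]

```python
import math

def band_area_fraction(lat1, lat2):
    """Fraction of a sphere's surface between latitudes lat1 and lat2 (degrees)."""
    return (math.sin(math.radians(lat2)) - math.sin(math.radians(lat1))) / 2.0

equator_band = band_area_fraction(0, 5)    # 5-degree band touching the equator
polar_band = band_area_fraction(85, 90)    # 5-degree band touching the pole
print(equator_band / polar_band)           # roughly 23, as Jon says
```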

60. Dung says:

Just for the record, this paper is totally at odds with the recent paper by Lu et al., due to be published this month.
Using ikaite crystals and their heavy-oxygen content, Lu claimed that the Medieval Warm Period was not confined to the Northern Hemisphere but also occurred in Antarctica. This suggests a close correlation between Antarctic temperature and global temperature.

61. Steve from Rockwood says:

Steven Mosher says:
April 9, 2012 at 1:27 pm

Leif Svalgaard says:
April 9, 2012 at 12:32 pm (Edit)

I would not see that as a problem. Suppose there were 10,000 measurements in a grid cell and 1 in a neighboring one. Since temperatures are strongly autocorrelated spatially, the 10,000 measurements may not be any better than the 1, so it would be OK to include both cells without taking into account that there are many more in one than in the other.

###############

That's not the problem. The problem is when you have 10,000 measures in one cell reading 20C and ONE measure in the next cell reporting 10C. A simple average gives you 15C.
Now, if the 10,000 agree with the one neighbor, then averaging is not a problem. So the method I use (Nick Stokes's, actually) is inverse-density weighted.

This is not even close to the problem. Up to 90% of the cells have NO DATA at all. Only a few cells have more than one point. So perhaps Mosher can repeat his averaging discussion with 1 sample in one cell and no data in the surrounding 50 cells.

A quick review of the proxies shows that some are highly auto correlated spatially, but only where they are located close together. This may sound like a stupid observation but localized high correlation and regional correlation are very different. Local correlation only gives you confidence to extrapolate outward to cells that are in the vicinity of the closely correlated points (such as in one area of Greenland). This process only gets you a small amount of grid coverage. Now take that average and move it several thousand kilometers to an area that is physically very different. The extrapolation should be meaningless. If a proxy in central Canada does not correlate well with a proxy in Greenland and there is no data between the two points, how can you say that an average proxy between the two points exists midway distance-wise and is a real estimate of the actual proxy at that location?

62. aaron says:

W., the seasonality of CO2 is due to geographic and biological factors, not temperature. Might look at CO2 over a full PDO cycle; unfortunately I don't think we have that data yet.

For the melt rate, we can look at sea level.

63. Nick Stokes says:

I largely agree with Willis here. I think they probably did as he described and that area weighting by cells is not helpful. If they did something else, I can’t see any reason for invoking 5×5 cells. And if they did want to use cell-based weighting, much larger cells would have been better.

But I also note Chris Colose's point that they looked at other methods as well. In fact there is a section (4) in their SI which does, among other things, a Monte Carlo test to see if their average is sensitive to spatial randomness. And the answer seems to be no. That section 4 is headed "How well do the proxy sites represent the globe?"

So while a better weighting scheme could have been used, it would likely make little difference.

64. I find it hard to believe it is really 2012 and we are having this discussion. I could understand it if it were the 13th century and people were just beginning to figure such things out, but now, with all our scientific understanding, this strikes me as very elementary. Surely there is a standard way of doing this.

65. bubbagyro says:

Steve from Rockwood is correct.
Not only as he states, but if you regard each of the 80 proxies (I have done so) you will see that very few of the proxies are from high latitudes. Most are from very temperate areas (15-25°C). Of course: that is where the proxy-meisters like to go! Where it is not too cold. So the underweighting of critical zones has another positive feedback. I call this the "Caribbean Proxy Feedback".

66. bubbagyro says:

TomT says:
April 9, 2012 at 2:57 pm

Surely there is a standard way of doing this.

There is. H.G. Wells wrote about the only sure way. It is going back in time with a thermometer and a canister of lithium hydroxide.

67. Leif Svalgaard says: April 9, 2012 at 12:32 pm
……………..
Steven Mosher says: April 9, 2012 at 12:24 pm
……………..
The least flawed way to set the area weighting is triangulation. No output of a single station is used alone; instead, each triangle's area is given the average of its three corner stations.
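[For what it's worth, the triangle weighting vukcevic describes might look something like this: a flat-plane sketch with made-up stations and planar areas, not the spherical triangles a real calculation would need:]

```python
def tri_area(p1, p2, p3):
    """Area of a planar triangle via the cross-product (shoelace) formula."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0

def triangulated_mean(points, temps, triangles):
    """Area-weighted mean: each triangle (a tuple of three indices into
    points/temps) contributes the average of its three corner stations,
    weighted by its area."""
    total_area = 0.0
    weighted = 0.0
    for i, j, k in triangles:
        a = tri_area(points[i], points[j], points[k])
        weighted += a * (temps[i] + temps[j] + temps[k]) / 3.0
        total_area += a
    return weighted / total_area

# Four illustrative stations forming two triangles
points = [(0, 0), (10, 0), (0, 10), (10, 10)]
temps = [20.0, 22.0, 18.0, 21.0]
triangles = [(0, 1, 2), (1, 3, 2)]
print(triangulated_mean(points, temps, triangles))
```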

68. Interstellar Bill says:

In line with rgbatduke:
From occasional browsing on Google Scholar I've found 55 PDFs on spherical geometry, grids, and data collection, a mere dilettante's sample.

The Warmistas badly need the classic text “Statistical Analysis of Spherical Data” by Fisher.

My previous aerospace work includes orbital coverage analysis that encountered data gridding problems similar to those of irregularly situated thermometers. One similarity was how the Earth’s rotational velocity makes latitude paramount.

Fisher’s methods are overwhelmingly successful for such applications, but they also provide a metric for data sufficiency, which I gather this 80-point load would fail. Must be why they ignore Fisher.

69. Steve from Rockwood says:
April 9, 2012 at 2:37 pm
Up to 90% of the cells have NO DATA at all. Only a few cells have more than one point. So perhaps Mosher can repeat his averaging discussion with 1 sample in one cell and no data in the surrounding 50 cells.
Here is the distribution of proxies [from Steve]: http://www.leif.org/research/Shakun-Proxies.png

70. Dyspeptic Curmudgeon says:

Looks like none of these guys have ever heard of plotting on an equal-area stereonet (or Schmidt net). In geology, when you want to know about groupings of data points, you plot on an equal-area net and count the points falling within a circle representing 1% of the total area. In practice you use a plot with a 10 cm radius and count the points falling within a 1 cm radius of your major lat/long intersections. There is no need to "correct" for area (i.e., multiply by sin(latitude)) as that is taken care of already.

A Schmidt net of lat/long lines is constructed by calculating the projection of any point on the interior of a half globe into the horizontal plane of the globe *as seen from a point one radius above the centre of the plane*. Somewhere here I have the equations for these which I used long long ago on a galaxy far far away, to do this (in Fortran on an IBM360 (with output to a lineprinter!!) and later on a series of HP calculators.)

Quick course on stereonet usage here: http://folk.uib.no/nglhe/e-modules/Stereo%20module/1%20Stereo%20new.swf

But Willis is essentially correct that there is no meaningful way to average such a paucity of readings over such a large area. A stereonet might be usefully applied to compute an 'average temperature' for any particular 1% of the area, but this is totally unhelpful for any area where there ARE no records, and is explicitly based upon a number of assumptions about the homogeneity of the data represented. It is one thing to determine the "average pole to the plane" for a couple of hundred individual measurements of strike and dip and use the resultant average as representative of the plane being measured, and quite another to do so for temperatures. If you are unsure of what I mean here in reference to planes, think of averaging many GPS position readings taken while stationary and using a minimum RMS distance measure to find the "position". Schmidt nets sorta, kinda do that on paper… sorta, without the RMS. Uses the Mark 1 eyeball.

It would of course be beyond hope that any of the warmists would think about talking to geologists or even GIS people about methodologies of plotting data representing 3 dimensional spatial arrangements.

71. Dyspeptic Curmudgeon says:

Arrgh. Memory FAIL. A Schmidt net projection is 'seen' from a point square root of 2 times the radius above the center of the plane, not the radius distance (that's for Wulff nets).

72. Mosher: “and that frost fairs in England can reconstruct the temperature in australia”

“During the Great Frost of 1683–84, the worst frost recorded in England”

The first sub-zero DJF HADCET was 1684. There have only been three: 1684, 1740 and 1963.

Does Mosher have thermometer records from Australia in 1684?

73. trevor says:

From a geostatistical point of view, my critique of the interpolation technique used is as follows:
a) The grid cells are too small. The cell size should 'match' the density of the data so that most (if not all) of the cells have a sample point. Very roughly, and without revisiting the dataset presented in Willis E's post, I would say the cells need to be at least 20x bigger.
b) The data is extremely clustered. Kriging (a type of averaging) has the effect of declustering the data and may have proved useful. However, as a general comment, extrapolating the data beyond the limited sample points is problematic and will mislead. Ask yourself the question: if you had to invest sizeable capital on the bet that a proxy point in Japan accurately predicted the value near Midway, how confident would you be in the proxy sample dataset?
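[One of the effects trevor mentions, kriging declustering the data, can be mimicked with a much cruder cell-declustering weight: each sample is down-weighted by the number of samples sharing its gridcell, so a tight cluster counts like a single point. A toy Python sketch with invented sample locations, not the Shakun proxies:]

```python
from collections import Counter

def declustered_mean(lons, lats, values, cell=20.0):
    """Cell-declustered mean: weight each sample by 1/(samples in its cell)."""
    cells = [(int(lon // cell), int(lat // cell)) for lon, lat in zip(lons, lats)]
    counts = Counter(cells)
    weights = [1.0 / counts[c] for c in cells]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Three clustered samples in one cell, one lone sample in another
lons = [1.0, 2.0, 3.0, 55.0]
lats = [1.0, 2.0, 3.0, 55.0]
values = [10.0, 10.0, 10.0, 20.0]
print(declustered_mean(lons, lats, values))  # each cell counts once -> 15.0
```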

74. Willis, I am beginning to think that the more you try to debunk Shakun's paper (and you are doing a great job of it, on the one hand), the more Shakun likes it.

By now it is beginning to look (to me) like we have lost sight of the basic question as to what came first; – was it CO2 or was it global warming?

It looks like the Shakun et al. paper is only telling us what we already knew, i.e. that the Earth did recover from glaciation. –

Can we from now on expect to have “skeptics” who believe that not only is CO2 capable of causing warming, on a global scale of about 1°C but also that it was responsible for the “Full Blown Global Warming” (FBGW) – feed-backs and all – at least in 80 places on the planet?

Or what am I missing here?

75. JDN says:

@Steven Mosher says:
April 9, 2012 at 1:33 pm
…whether you use regular gridding, verroni tesselation, EOFs, or kridging the answer comes out the same….

Is that good or bad? :) I think you missed that my comment related to sparse data. If you have some proof that 80 data points correlate well with current global temperature using any gridding scheme, that would be good for Shakun. However, I’m also skeptical that gridding works if you don’t pick the grid boundaries judiciously. But, you appear to believe that “improved” gridding procedures don’t improve anything. True?

There was an Ig Nobel awarded recently based on the idea that random promotions would improve a company, rather than performance evaluations: (http://www.guardian.co.uk/education/2010/nov/01/random-promotion-research). Do you also feel that gridding using randomly sized cells would be just as good as regular gridding? It would be hilarious, if true.

76. Oh, and by the way, the CO2 released in melting "Antarctic Sea Ice" this year is only what was trapped there last year and should, as Willis says, make no difference.

77. DocMartyn says:

On a rotating planet I would have thought this analysis pretty stupid. Why not just do lines of latitude?

78. mfo says:

A very clear explanation of yet another problem in the way Shakun et al. calculated their global temperatures. I think writing up your idea of “cluster analysis averaging” would be very useful. Great-circle formulae for air navigation seemed very complex to me.

I appreciate why an equal-area map projection would continue to pose problems. But just for interest there is a great Java tool demonstrating different map projections from the Instituto de Matemática e Estatística da Universidade Federal Fluminense in Brazil here:

http://www.uff.br/mapprojections/mp_en.html

From the description of how to use it:

“To rotate the globe, press the left button mouse over its surface, keep the button pressed and, then, drag the mouse. To zoom in or to zoom out the globe, keep the key “s” pressed, click with the left button mouse over the globe and, then, drag the mouse.

“To mark a point on the Earth’s surface, keep the key “i” pressed, press the left button over the globe and, then, drag the mouse. The latitude and longitude of this point will be displayed in the tab “Position” on the right side of the applet. In this same tab, there is a tool that computes the distance between capitals. The corresponding geodesical arc is drawn on the globe’s surface. The applet also draws the loxodrome curve joining the two places. “

79. Kev-in-UK says:

Dyspeptic Curmudgeon says:
April 9, 2012 at 3:21 pm

I read Willis's post and was thinking the same thing regarding stereonets, before reading through the comments and coming across yours. But anyways, my memory is probably worse than yours, and it's far too late (12.30 am here) after a long weekend to try and engage brain!
Interesting you mention the old Fortran (77?) and HPs too; I spent some time programming those blighters… still, it was marginally easier than ZX80 or 650 m/code! LOL

80. David A. Evans says:

I can see a 30°C rise in temperature at the poles as feasible. Why? Because it's arid!
Why are you all so concentrated on temperature?
It’s not alone relevant!
Haven’t any of you heard of humidity & enthalpy?
I give up! you’re fighting on their terms and so will never win!

81. Leif Svalgaard says:
April 9, 2012 at 3:16 pm
Steve from Rockwood says:
April 9, 2012 at 2:37 pm
Up to 90% of the cells have NO DATA at all. Only a few cells have more than one point. So perhaps Mosher can repeat his averaging discussion with 1 sample in one cell and no data in the surrounding 50 cells.
Here is the distribution of proxies [from Steve]

Sorry: looked at the distribution. Didn’t look TOO bad, all things considered. But the weighting could screw it up, I suppose ….

What conclude, thee?

82. Sparks says:
April 9, 2012 at 4:46 pm
“‘Nul points’ for the sun!”
Ha, classic! Quick, someone get the shovel.

Days with no spots are expected in a low cycle even at solar maximum, e.g. compare cycle 14 and 24:

83. Rogelio escobar says:

oh no!

84. beng says:

*****
pochas says:
April 9, 2012 at 1:47 pm

Steven Mosher says:
April 9, 2012 at 1:34 pm

“Added C02 will warm the earth and the ocean will respond by outgassing more C02.’

Very good, Steven!
*****

Except, regarding the current situation, he only scored 50%…….

85. Willis Eschenbach says:

Chris Colose says:
April 9, 2012 at 12:35 pm

… none of this changes the key conclusions, and they are in line with what some previous papers have argued.

“Some” indicates more than two. Please provide citations to three previous papers arguing that the CO2 went up before the temperatures, as their paper title claims. Remember, the other papers have to argue that the global warming was preceded by increasing carbon dioxide concentrations during the last deglaciation. I await your citations.

w.

86. AJB says:

vukcevic says, April 9, 2012 at 12:12 pm
Show me again a few days later than April 23. That's my mother's birthday; the sun always sparkles then :-)

87. Bill Illis says:

The data (including the data used in the Shakun paper) says that temperatures increased BEFORE CO2 in Antarctica AND in the southern hemisphere.

They are trying to argue that “global temperatures” lagged behind the CO2 numbers (because the “northern hemisphere” temperatures lagged WAY behind the CO2 numbers).

I’m not sure that is really true.

The northern hemisphere is more complex because there was a lot of ice that needed to melt first before temperatures could increase and the northern hemisphere just has more variability than the south.

I see lots of "lagging" of CO2 behind Greenland temperatures, for example, but the Dansgaard-Oeschger events and the Older (14,500 years ago) and Younger Dryas (12,800 years ago) events make it hard to tell.

Shakun 2012 southern hemisphere temperature stack, northern hemisphere and CO2. Southern hemisphere leading CO2 by 1,400 years.

Extend Greenland and Antarctica out to 30,000 years rather than cutting off the data between 22,000 and 6,500 years ago and a different perspective emerges. Now the northern hemisphere variability is also leading CO2.

And go back through the whole last ice age. Northern hemisphere variability is pronounced and is not responding to CO2 at all.

88. Willis Eschenbach says:

Nick Stokes says:
April 9, 2012 at 2:50 pm

… In fact there is a section (4) in their SI which does, among other things, a Monte Carlo test to see if their average is sensitive to spatial randomness.

Thanks, Nick. I'd looked at that before. Actually, section (4) has nothing about Monte Carlo analysis. They discuss that in section 3. In section 4 they use subsampling to determine how well the proxy sites represent the globe … but they subsample gridcells, not individual temperature stations. They also sub-sample them at random … seems like you'd want to pick 80 individual temperature records near the proxy locations, not gridcell averages, to match against the 80 individual proxies.

In addition, as I commented elsewhere, they did not give enough detail in their Monte Carlo section (3) to determine if it is valid. A proper Monte Carlo analysis is quite hard to do; you have to be very careful with your assumptions. If all they did is add Gaussian random noise, or autocorrelated random noise, I wouldn't expect much difference … the problem is systematic error, not random error.

But then, I hardly expect more from a group that doesn’t even mention autocorrelation, and shows results with 1 sigma errors. Nick, you do realize that their figure 5a showing the changes in trends as you go northwards is 100% statistically insignificant? If you put in the proper 2 sigma error bars, not one of their findings is significant … and that says something very bad about either their knowledge of basic statistics, or their willingness to promote statistically meaningless results.

Not sure which one is worse …

w.

89. Nick Stokes says:

vukcevic says: April 9, 2012 at 3:05 pm
“Least flawed way to set the area weighting is triangulation.”

True. I think Willis is right here because I’ve been through this sequence myself. I did a calc of temperature based on 61 stations worldwide. It worked pretty well in terms of reproducing indices calculated with much larger samples. But I used 5×5 cells weighted as Willis described, and ran up against the same difficulty that the weighting really isn’t helping.

So I weighted by triangulation, again for about 60 stations. It made some difference (better), but not a lot.

90. Steve from Rockwood says:

Doug Proctor says:

April 9, 2012 at 4:54 pm
Leif Svalgaard says:
April 9, 2012 at 3:16 pm
Steve from Rockwood says:
April 9, 2012 at 2:37 pm
Up to 90% of the cells have NO DATA at all. Only a few cells have more than one point. So perhaps Mosher can repeat his averaging discussion with 1 sample in one cell and no data in the surrounding 50 cells.
Here is the distribution of proxies [from Steve]

Sorry: looked at the distribution. Didn’t look TOO bad, all things considered. But the weighting could screw it up, I suppose ….

What conclude, thee?

Trying not to find fault with Shakun et al for the sport of it, let’s assume that distribution at the poles is not relevant to their paper. After all they compare the Antarctic proxies and Greenland proxies to global (all) proxies. So you should look at mid-latitude distribution.

But I have one problem with the paper I can't get over. They acknowledge that the SH warmed first, up to 2,000 years before the NH, and that SH temperatures led CO2 increases. If this is the case, why do we need their complex theory that the earth wobbled, warmed the NH, melted the ice sheets, cut off the circulation of the oceans, leading to SH ocean warming, leading to SH CO2 release, finally leading to SH warming? Why can't we just have: the SH warmed, CO2 was then released, and the NH warmed much later, lagging the CO2 release of the SH?

91. 1) One of the great weaknesses of the tree ring theory of thermometers is that they only reflect conditions during the growing season while proxies like O18 don’t seem to depend on seasons as much. How many of these proxies are organic in nature and – possibly – impacted by season? Shouldn’t these be less valuable than the ones not so tied to seasonal temps?

2) Couldn't we get a good idea of how globally descriptive these proxies are by comparing current instrumental readings from the same points, manipulating them the same way, and seeing how well they describe our current climate?

92. FergalR says:

Why don’t they do it with hexagons?

93. DocMartyn says:

Surely the best way to see if a gridded mean actually gives any realistic description of the datasets is to examine the correlation of the equatorial datasets? The proxies at plus 15 to minus 15 degrees should all give pretty much the same results. Do they?

94. Willis Eschenbach says:

David A. Evans says:
April 9, 2012 at 4:53 pm

I can see a 30°C rise in temperature at the poles as feasible. Why? Because it's arid!
Why are you all so concentrated on temperature?
It’s not alone relevant!
Haven’t any of you heard of humidity & enthalpy?
I give up! you’re fighting on their terms and so will never win!

Oh, please. Not only have I heard of enthalpy, I’ve actually done the calculations on how much difference humidity makes. Have you?

The answer I got is, it doesn’t make all that much difference … if you got some other answer, please show us.

Also, the 30°C rise is not at the poles as you fatuously assume. It’s on the Greenland ice cap, which is not arid at all, but is rather moist compared to say Antarctica, which indeed is arid as you mention.

Finally, let me suggest that you cut down on the attitude, my friend. I may be wrong, but I and most of the commenters are not fools. Yes, we’ve heard of enthalpy. Like they say … it’s not the heat that matters, it’s the humility …

w.

95. Don Monfort says:

I am with Mosher on this one. Pick eighty good trees in the right places and forget about the surface stations. How much money would that save us?

96. numerobis says:

Steven Mosher: when you say veronni do you mean Voronoi? And does that imply each cell is assumed to have a constant temperature in it, each point on Earth has the temperature of the closest proxy? Whereas triangulating means computing the Delaunay and linearly interpolating in each triangle?

In any case, it certainly looks like the original authors addressed this class of objection already. Certainly more proxies would be better, and I’m sure we’ll see followup papers doing exactly that. From that to claiming it was a horrible mistake to publish is quite a stretch.
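[For readers following the Voronoi point: the "constant temperature per cell" reading numerobis asks about is just nearest-neighbour assignment; every location takes the value of its closest proxy. A toy planar sketch with invented proxies, not the paper's method:]

```python
import math

def nearest_proxy_temp(x, y, proxies):
    """proxies: list of (x, y, temp) tuples. Returns the temp of the closest
    proxy, i.e. the value of the Voronoi cell containing (x, y)."""
    return min(proxies, key=lambda p: math.hypot(p[0] - x, p[1] - y))[2]

proxies = [(0.0, 0.0, 10.0), (10.0, 0.0, 20.0)]
print(nearest_proxy_temp(2.0, 1.0, proxies))  # closest to (0,0) -> 10.0
print(nearest_proxy_temp(9.0, 1.0, proxies))  # closest to (10,0) -> 20.0
```

Triangulating instead (Delaunay plus linear interpolation in each triangle) would give smoothly varying values rather than this piecewise-constant field.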

97. Sparks says:

Leif Svalgaard says:
April 9, 2012 at 5:00 pm

“Days with no spots are expected in a low cycle even at solar maximum, e.g. compare cycle 14 and 24:”

Leif,
Not everyone expected this low solar cycle that we’re witnessing.
We haven't had such a low cycle compared to SC14 in our lifetime; it is a bit exciting, don't you think?
How many of these low cycles do you think we can expect in the future?

98. Sparks says:
April 9, 2012 at 8:47 pm
Not everyone expected this low solar cycle that we’re witnessing.
Using a method I and my colleagues pioneered in the 1970s [ http://www.leif.org/research/Using%20Dynamo%20Theory%20to%20Predict%20Solar%20Cycle%2021.pdf ] we did: http://www.leif.org/research/Cycle%2024%20Smallest%20100%20years.pdf and Ken, of course, too: http://adsabs.harvard.edu/abs/2003SPD….34.0603S .
We haven't had such a low cycle compared to SC14 in our lifetime; it is a bit exciting, don't you think?
Very exciting, indeed !
How many of these low cycles do you think we can expect in the future?
At least two, possibly many more.

99. Sparks says:

Leif,
Remember this article from 2004, when the Sun's spot cycle was blamed for amplifying man-made global warming?

“…over the last century the number of sunspots rose at the same time that the Earth’s climate became steadily warmer”, “…the warming is being amplified by gases from fossil fuel burning”

“This latest analysis shows that the Sun has had a considerable indirect influence on the global climate in the past, causing the Earth to warm or chill, and that mankind is amplifying the Sun’s latest attempt to warm the Earth. ”

The BBC 6 July 2004
Sunspots reaching 1,000-year high
http://news.bbc.co.uk/1/hi/3869753.stm

@Willis, sorry for the OT. I'm not convinced at all that CO2 is a driver or catalyst of ice ages; it's funny how the story twists with the tide, from CO2 driving global warming to now driving the ice ages, but it's always man-made (including farm animals, whose flatulence is apparently now an anthropogenic footprint).

100. Are there scientific methods of choosing which gridding method to use, or some other method even, other than selecting the method via darts and a dartboard, or perhaps pulling the method out of a dark cloth bag?
All this seems silly and absurd to me. Using a mathematical model (to put it simply) of a sphere, and testing the different methods against the mathematical ideal, should clearly show how much error to expect from the various methods and which one, if any, would be best. Or is it because this is climate science, and climate science has special properties?

101. JDN says:

@Nick Stokes
It's amazing that the 61-station reconstruction works. Are the temps from the GHCN you are using adjusted or unadjusted? If adjusted, doesn't the entire global dataset used to make the adjustment feed back in some way into your calculations?

102. Andrew says:

“And for the Shakun2012 study, with only 3% of the gridcells containing proxies, this is a huge problem. In their case, I say area-averaging is an improper procedure.”

I think that might be an understatement, Willis, but great work, once again.

I’m reminded of the quote: “It isn’t an optical illusion. It just looks like one.”

Can there be any doubters left? This paper is a travesty of the scientific method. It probably has something to say about integrity too. But it is most certainly junk science. Have we reached the bottom, do you think, or are there further depths yet to plumb?

103. Don Monfort says:

I am with numerobis on this one. Just cobble some more proxies together. Doesn’t matter what or where they are, it’s the average and the interpolation tricks that count. How many sediment proxies are among the 80? Just turn them upside down and we can use them twice. How many are we up to now?

104. Chuck Nolan says:

It seems to me they’re saying that sometimes the temp goes up before co2 and then sometimes co2 goes up before temp. Sounds like they’re not so much related after all…..seems to me.

105. Nick Stokes says:

“JDN says: April 9, 2012 at 9:20 pm”

I’m using unadjusted GHCN data (v2.mean for those older posts). In v2 I found the adjusted gave much the same result – I haven’t tried V3 there.

106. Allan MacRae says:

I'm pretty sure that Shakun2012 is nonsense, and I really appreciate all the hard work done here to poke holes in this paper's "Shakun all over" methodology and logic, BUT:

The big question for me is WHY are the warming alarmists making such a big deal out of Shakun2012? (And for that matter, why are we? OK, I know, it’s just WRONG!)

Are the warmists deliberately trying to shift the debate, possibly because there has been no net global warming for the past 10-15 years? The warmists are clearly losing the “mainstream debate”. Is this a deliberate warmist attempt to obfuscate and to “move the goal posts” to new and better ground?

Note that the question raised by Shakun2012 is not even core to the “mainstream debate”, in which BOTH SIDES START with the assumption that atmospheric CO2 drives temperature and then argue “how much warming will truly occur” – the mainstream debate is about “climate sensitivity” and “water vapour feedbacks” to increasing atmospheric CO2, NOT whether CO2 drives temperature or temperature drives CO2. Both sides concede that CO2 drives temperature (even though they are probably wrong, imo).

Hardly anyone out there is arguing that temperature primarily drives CO2 – I can recall the late Ernst Beck, Jan Veizer (~2003), me (since 2008), Roy Spencer (2008) and Murry Salby (~2011). I should also acknowledge Richard Courtney, who is publicly agnostic on this issue and has had great debates with Ferdinand Engelbeen regarding the “material balance argument”. Sorry if I left anyone out. Oh yes, Kuo et al (1990) and Keeling et al (1995) – see below.

Sadly, Ernst Beck was often dismissed and even disrespected, despite the fact that few if any adequately addressed his data and hypothesis.

Prominent skeptic Fred Singer even suggested recently that those who espoused the argument that temperature primarily drives CO2 were clouding the mainstream debate.

Repeating my earlier post:

Although this question is scientifically crucial, it is not that critical to the current “social debate” about alleged catastrophic manmade global warming (CAGW), since it is obvious to sensible people that IF CO2 truly drives temperature, it is an insignificant driver (climate sensitivity to CO2 is very low; “feedbacks” are negative) and minor increased warmth and increased atmospheric CO2 are both beneficial to humanity AND the environment.

In summary, the “climate skeptics” are trouncing the warming alarmists in the “mainstream CAGW debate”.

————————

First of all Rob, you are possibly on the right track – see Henry’s Law (1803) and the bit about temperature.
http://en.wikipedia.org/wiki/Henry's_law

Next, Shakun et al is nonsense. The paper is a veritable cornucopia of apples and oranges, grapes and bananas – and let’s not forget the watermelons.

It is interesting how often the global warming alarmists choose to ignore the Uniformitarian Principle AND Occam’s Razor.

CO2 lags temperature at all measured time scales, from ~600-800 years in the ice core records on the long glacial-interglacial cycles, to 9 months on much shorter time scales.

We really don’t know how much of the recent increase in atmospheric CO2 is natural and how much is manmade – possibilities range from entirely natural (~600-800 years ago was the Medieval Warm Period) to entirely manmade (the “material balance argument”). I lean towards mostly natural, but I’m not certain.

Although this question is scientifically crucial, it is not that critical to the current “social debate” about alleged catastrophic manmade global warming (CAGW), since it is obvious to sensible people that IF CO2 truly drives temperature, it is an insignificant driver (climate sensitivity to CO2 is very low; “feedbacks” are negative) and minor increased warmth and increased atmospheric CO2 are both beneficial to humanity AND the environment.

In summary, the “climate skeptics” are trouncing the warming alarmists in the “mainstream CAGW debate”.

Back to the crucial scientific question – is the current increase in atmospheric CO2 largely natural or manmade?

Please see this 15 fps AIRS data animation of global CO2 at
http://svs.gsfc.nasa.gov/vis/a000000/a003500/a003562/carbonDioxideSequence2002_2008_at15fps.mp4

It is difficult to see the impact of humanity in this impressive display of nature’s power.

All I can see is the bountiful impact of Spring, dominated by the Northern Hemisphere with its larger land mass, and some possible ocean sources and sinks.

I’m pretty sure all the data is there to figure this out, and I suspect some already have – perhaps Jan Veizer and colleagues.

Best wishes to all for the Easter Weekend.

____________

Keeling et al (1995)
http://www.nature.com/nature/journal/v375/n6533/abs/375666a0.html
Nature 375, 666 – 670 (22 June 1995); doi:10.1038/375666a0
Interannual extremes in the rate of rise of atmospheric carbon dioxide since 1980
C. D. Keeling*, T. P. Whorf*, M. Wahlen* & J. van der Plicht†
*Scripps Institution of Oceanography, La Jolla, California 92093-0220, USA
†Center for Isotopic Research, University of Groningen, 9747 AG Groningen, The Netherlands
OBSERVATIONS of atmospheric CO2 concentrations at Mauna Loa, Hawaii, and at the South Pole over the past four decades show an approximate proportionality between the rising atmospheric concentrations and industrial CO2 emissions. This proportionality, which is most apparent during the first 20 years of the records, was disturbed in the 1980s by a disproportionately high rate of rise of atmospheric CO2, followed after 1988 by a pronounced slowing down of the growth rate. To probe the causes of these changes, we examine here the changes expected from the variations in the rates of industrial CO2 emissions over this time, and also from influences of climate such as El Niño events. We use the 13C/12C ratio of atmospheric CO2 to distinguish the effects of interannual variations in biospheric and oceanic sources and sinks of carbon. We propose that the recent disproportionate rise and fall in CO2 growth rate were caused mainly by interannual variations in global air temperature (which altered both the terrestrial biospheric and the oceanic carbon sinks), and possibly also by precipitation. We suggest that the anomalous climate-induced rise in CO2 was partially masked by a slowing down in the growth rate of fossil-fuel combustion, and that the latter then exaggerated the subsequent climate-induced fall.
Kuo et al (1990)
http://www.nature.com/nature/journal/v343/n6260/abs/343709a0.html
Nature 343, 709 – 714 (22 February 1990); doi:10.1038/343709a0
Coherence established between atmospheric carbon dioxide and global temperature
Cynthia Kuo, Craig Lindberg & David J. Thomson
Mathematical Sciences Research Center, AT&T Bell Labs, Murray Hill, New Jersey 07974, USA
The hypothesis that the increase in atmospheric carbon dioxide is related to observable changes in the climate is tested using modern methods of time-series analysis. The results confirm that average global temperature is increasing, and that temperature and atmospheric carbon dioxide are significantly correlated over the past thirty years. Changes in carbon dioxide content lag those in temperature by five months.

107. James White, a paleo-climatologist at the University of Colorado at Boulder, said changes in stable isotope ratios — an indicator of past temperatures in the Taylor Dome ice core from Antarctica — are almost identical to changes seen in cores from Greenland’s GISP 2 core from the same period.
“The ice cores from opposite ends of the earth can be accurately cross-dated using the large, rapid climate changes in the methane concentrations from the atmosphere that accompanied the warming,” White said.
The evidence from the greenhouse gas bubbles indicates temperatures from the end of the Younger Dryas Period to the beginning of the Holocene some 12,500 years ago rose about 20 degrees Fahrenheit in a 50-year period in Antarctica, much of it in several major leaps lasting less than a decade.

http://www.sciencedaily.com/releases/1998/10/981002082033.htm

Casts considerable doubt on Shakun’s CO2 and temperature dating from the EPICA Dome C ice core. If this paper is correct, the Antarctic Cold Reversal aligned with the Younger Dryas and Shakun’s Antarctic CO2 dating is 1,000 years too early.

108. Jon says:

Scientifically, I would first average all measurements within each 10 or 5 degree latitude band, and then area-weight those band averages. Otherwise the dominant large area around the equator, where little happens (in temperature), would be dominated/colored by the small area towards the poles, where much is happening (in temperature).
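Jon’s band-first scheme is easy to sketch. The proxy values and latitudes below are entirely made up for illustration: average within each 10° latitude band, then weight each band by its true fraction of the sphere’s area, which on a sphere is proportional to sin(top edge) − sin(bottom edge).

```python
import numpy as np

# Hypothetical proxy values with their latitudes (degrees N) -- made-up numbers.
lats = np.array([72.0, 68.0, 41.0, 5.0, -2.0, -44.0])
vals = np.array([1.8, 2.1, 0.9, 0.3, 0.4, 1.1])

band_edges = np.arange(-90, 91, 10)      # 10-degree latitude bands
band_means, band_weights = [], []
for lo, hi in zip(band_edges[:-1], band_edges[1:]):
    in_band = (lats >= lo) & (lats < hi)
    if in_band.any():
        # First average within the band ...
        band_means.append(vals[in_band].mean())
        # ... then weight the band by its share of the sphere's area,
        # proportional to sin(hi) - sin(lo).
        band_weights.append(np.sin(np.radians(hi)) - np.sin(np.radians(lo)))

global_mean = np.average(band_means, weights=band_weights)
print(round(float(global_mean), 3))
```

Note that empty bands simply drop out of the average, which is exactly the question Willis raises in the head post: with only 80 proxies, most of the surface contributes nothing.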

109. ferd berple says:

Leif Svalgaard says:
April 9, 2012 at 1:40 pm
Steven Mosher says:
April 9, 2012 at 1:36 pm
It’s actually BOTH. Added CO2 will warm the earth and the ocean will respond by outgassing more CO2.
nice positive feedback loop there…

Which would make life on earth a physical impossibility, given the volume of CO2 stored in the oceans. Temperature would have run away long ago and cooked the earth.

110. Andrew says:

RE
Steven Mosher says:
@ April 9, 2012 at 1:42 pm

“Here is what you will find Chris. When a skeptic has one data point they like, they forget about the global average. When they have 80 they dont like, they crow about the small number.
60 optimally chosen site is enough. I’m not surprised they did well with 80.”
———————-

That might be true of the planet from whence you came, Steven, but I don’t think you need to be a ‘climate scientist’ to understand that 60 or 80 (and possibly not even 8000) sites would not be adequate to accurately detect relatively small differences in the timing of temperature changes over a period of tens of thousands of years across the varied surface of this planet. And Shakun’s sites were certainly not “optimally chosen” – at least if by that term you require the conditions of 1) credibility and 2) representativeness to be satisfied (see also rgb’s comment @ 12.09pm).

111. Sparks says:

Leif,
:)

112. Willis Eschenbach says:

Oh, yeah, I remember now and far too late that I wanted to comment on their statement that they

… linearly interpolated to 100-yr resolution.

I’m not a fan of interpolation in general. For one thing, it reduces the variance in your record. Why? Because you’re guaranteed to eliminate almost all of the high and low points in the record.

For another, you’re making up data where none exists. You are taking actual observations, and you are turning them into imaginary data.

Now, I don’t mind infilling say one month in a twenty-year record. But when you start replacing a small amount of data with a whole lot of interpolated data, who knows where you’ll end up.

How much of the data in Shakun2012 is real? Well, here’s a histogram of the increase (or decrease) in the number of data points for the individual proxies:

As you can see, for a number of the proxies, for every ten real observations, they’ve replaced them with thirty or forty or more interpolated numbers. One record has only 37 actual data points … and it gets interpolated to 292 imaginary data points.

One problem with this procedure is that when the increase in data points is large, the resulting interpolated dataset is strongly autocorrelated. This causes greater uncertainty (wider error bars) in the trend results that they are using to try to establish their claims in their Fig. 5a.

They have not commented on any of these issues …

w.
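Both effects Willis describes (the variance loss and the induced autocorrelation) can be demonstrated on synthetic data. A minimal sketch, with made-up “observations” every 500 years interpolated to a 100-year grid, in the spirit of the paper’s stated method but not its actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical proxy: independent values observed every 500 years.
t_obs = np.arange(0, 20000, 500)
y_obs = rng.normal(size=t_obs.size)

# Linearly interpolate to 100-year resolution.
t_fine = np.arange(0, int(t_obs[-1]) + 1, 100)
y_fine = np.interp(t_fine, t_obs, y_obs)

def lag1_autocorr(x):
    """Lag-1 sample autocorrelation."""
    x = x - x.mean()
    return float((x[:-1] * x[1:]).sum() / (x * x).sum())

# The interpolated series is strongly autocorrelated even though the
# underlying observations are independent, and its variance shrinks
# because interpolated points always lie between neighbouring observations.
print(lag1_autocorr(y_obs), lag1_autocorr(y_fine))
print(y_obs.var(), y_fine.var())
```

The variance point is structural: every interpolated value is a convex combination of its two bracketing observations, so it can never lie outside them.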

113. jorgekafkazar says:

“…In the event, the 80 proxies occupy 69 gridcells, or about 3% of the gridcells.”

Then the Shakun study should clearly have error bars that reach to the moon.

It’s hard to create data where none exist.

114. P. Solar says:

This study is just like Mann’s hockey stick. It is based on the assumption that if you get enough really crappy, unreliable, dubious “proxy” data and shake it really, really hard, all the errors will magically cancel out and the cream will float to the top.

This is just not science. It is banal, juvenile fiddling.

It is a travesty that this sort of garbage ever gets published in so-called learned reviews.

115. tty says:

That the Greenland temperature data are outliers does not mean that they are wrong. One only has to consider conditions on Greenland. At the present time most of Greenland’s icecap is really a huge temperate glacier with quite cold winters, but with temperatures close to or above zero in summer. During the Ice Age the Greenland Icecap was much larger, extending all the way to the edge of the continental shelf. It was also contiguous with the vast North American Ice Cap and partly surrounded by shelf ice, and conditions must have been much like in the interior of East Antarctica today, with temperatures below -50 centigrade for most of the year. The Ice Cap being larger, it must also have been thicker, so the sampling sites may have been as much as 1000 meters higher than at present, which would mean 7-10 degrees cooling by itself.

116. P. Solar says:

BTW Willis, great title. In reality it’s probably more like Shaken AND stirred.
They may have done better to weight the proxies according to the published uncertainty of the time scale. But since they seem quite happy to ignore the need for uncertainty in their own work I guess they would not want to bring up the subject.

How they can produce a paper that reports on relative timing without talking about the uncertainty of the timescales is curious.

117. Somebody says:

“This makes perfect sense” No, it doesn’t. The averaging done by the climate pseudo-scientists makes no sense, no matter how they do it. No matter what kind of numerology you apply to an intensive quantity, you won’t get a ‘world temperature’, since the Earth is not at thermodynamic equilibrium. It does not make any sense to attempt it; a temperature for such a system cannot be defined.

118. Alan Wilkinson says:

Is it in any way realistic to think hemispheric temperature changes lagged each other by some thousands of years? That just triggers my crap detector. My prediction would be that the accuracy of the time estimates is far too poor to support any conclusions wrt sequence.

119. Mac says:

What does the standardized temperature and CO2 proxies graph look like minus the interpolation?

120. J.H. says:

Steven Mosher says:

April 9, 2012 at 1:36 pm

Eric Webb says:
April 9, 2012 at 12:54 pm (Edit)
Shakun’s paper just seems like more warmist crap to me, they should know that CO2 responds to temperature, not the other way around.

###################

It’s actually BOTH. Added CO2 will warm the earth and the ocean will respond by outgassing more CO2.
—————————————————————————————————————-

No… You forget that right at the start of the paper, they say that there is extra solar energy entering the system which melts extra ice on Greenland…. But you forget that bit and concentrate on CO2…. Chicken and egg stuff.

It is the extra energy in the system… CO2 is insignificant.

121. phlogiston says:

Leif Svalgaard says:
April 9, 2012 at 11:28 am
The data were projected onto a 5°x5° grid, linearly interpolated to 100-yr resolution and combined as area-weighted averages.
Where do they say that they weight with the area of each grid cell? The way I would weight would be to divide the globe into a number of equal-area pieces [not the grid cells, obviously, as they don’t have equal area] and then calculate the average value of a proxy by computing the average of the grid cells that fall into each equal area piece, then average all the pieces. This is the standard [and correct] way of doing it. Why do you think they didn’t do it this way?

Why they do this is probably to give greater weighting to the tropics where end-glacial temperature rises were smallest and latest – as described here at the inconvenient skeptic

122. Robbie says:

Come on Mr. Eschenbach and Mr. Easterbrook: Present the rebuttal to Nature magazine and ‘humiliate’ Shakun et al in a scientific way of speaking. That’s how it should be done. Not this way.

123. Nick Stokes says:

Willis,
“One problem with this procedure is that when the increase in data points is large, the resulting interpolated dataset is strongly autocorrelated. This causes greater uncertainty (wider error bars) in the trend results that they are using to try to establish their claims in their Fig. 5a.

They have not commented on any of these issues …”

They did. There’s a whole section (3) in the SI on the Monte Carlo simulation they did to derive error estimates. These involve perturbing the original data and checking the variability of the output. It accounts for the effect of interpolation. They used autocorrelated noise to emulate the original autocorrelation between observations.

Interpolation itself is no big deal. They are down around the limit of time resolution, and the interpolation just eases the mechanics of lining up differently timed data points for analysis. It’s the resolution uncertainty that is the issue; interpolation on that scale doesn’t add to it.
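As a rough illustration of the kind of procedure Nick describes (not the paper’s actual code), a perturbation Monte Carlo with AR(1) “red” noise looks like the sketch below. The trend magnitude, noise parameters and time grid are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

t = np.arange(0.0, 10000.0, 100.0)   # hypothetical 100-yr time grid
signal = 0.0005 * t                  # hypothetical linear warming trend

def ar1_noise(n, phi=0.7, sigma=0.3):
    """AR(1) ('red') noise: each value remembers phi of the previous one."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal(scale=sigma)
    return x

# Perturb the series many times with autocorrelated noise, refit the trend
# each time, and take the spread of the fitted trends as the error bar.
trends = np.array([np.polyfit(t, signal + ar1_noise(t.size), 1)[0]
                   for _ in range(500)])
print(trends.mean(), trends.std())
```

The mean of the fitted slopes recovers the underlying trend; the standard deviation is the Monte Carlo uncertainty. Using autocorrelated rather than white noise widens that spread, which is the point of emulating the original autocorrelation.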

124. Somebody says:

“These involve perturbing the original data and checking the variability of the output.” – this is just an error estimate for the pseudo-scientific models. It is unrelated to reality. If you have a real system that behaves like, let’s say, f(x) = x^2 + e^(very, very large random value), and you model it with f(x) = x^2 + (very, very, very small random value), then if you compare the outputs of the two models you’ll get a very small ‘error’. See how well that relates to reality.

125. Mac says:

The gaps in data are just as important as real data.

It is by only showing all the gaps in the data and all the real data at hand that a proper story can be told of what we know and crucially what we don’t know.

We should not allow statistical licence to tell one particular story.

126. Steve from Rockwood says:

Willis Eschenbach says:
April 9, 2012 at 11:22 pm

Oh, yeah, I remember now and far too late that I wanted to comment on their statement that they

… linearly interpolated to 100-yr resolution.

I’m not a fan of interpolation in general. For one thing, it reduces the variance in your record. Why? Because you’re guaranteed to eliminate almost all of the high and low points in the record.

For another, you’re making up data where none exists. You are taking actual observations, and you are turning them into imaginary data.

This is total nonsense. First, take a look at the metadata column “resolution”. You will find that the average resolution is 200 yrs, and it is as coarse as 600. So resampling to 100 yr adds points at a higher frequency in 70 of the 80 proxies (those sampled coarser than 100 yr). It does not eliminate the high and low points in the record (that only happens when data is resampled down to a lower resolution), and the resulting data is no more imaginary than the real data (the data can be considered over-sampled for this paper).

The sampling issue with Shakun et al is not the linear interpolation of the temperature proxies to 100 yr intervals. In fact you can’t do anything with the data (between proxies) if you don’t resample the data to the same time points. (Sheesh Willis!)

What is a problem is where they go once they map the proxies onto the 5 x 5 degree cells. At this point the earth is so highly under-sampled that they need to do some extra work to convince us why this low sampling is OK.

For example, in the NH you can select some proxies and compare them with the Greenland data, and there is a lot of similarity. But not all the proxies show a good match. In the SH you can compare proxies from New Zealand to Antarctica and, for example, while the NZ proxies show the same early warming they don’t have a 13,775 y BP peak at all. Look at proxy MD97-2120 and Vostok for example. So the Arctic warms by several degrees, the Antarctic warms by 2 degrees during this same time period, and New Zealand warming does not change at all (the rate of warming remains the same right through the peak at 13,775 y BP). I don’t know what the warming event at 13,775 y BP is, but it happens 500 years earlier in Greenland. So why are these distinct short-period events not simultaneously occurring in the NH and SH? How is New Zealand avoiding global climate change?

Another annoying point is the wobble theory to set off the NH warming. If the NH is tilted toward the sun and receives more incoming solar energy to set off the warming, then wouldn’t the SH be tilted away from the sun and receive proportionally less warming? Why is the SH warming before, during and after this sudden earth wobble?

127. Allan MacRae says:

Steven Mosher says: April 9, 2012 at 1:36 pm
It’s actually BOTH. Added CO2 will warm the earth and the ocean will respond by outgassing more CO2.

Leif Svalgaard says: April 9, 2012 at 1:40 pm
nice positive feedback loop there

ferd berple says: April 9, 2012 at 10:38 pm
Which would make life on earth a physical impossibility, given the volume of CO2 stored in the oceans. Temperature would have run away long ago and cooked the earth.

You may be correct “ferd”.

A similar positive feedback loop exists in the CAGW climate computer models, where a small increase in the alleged warming impact of CO2 is multiplied several-fold by the alleged positive feedback of water vapour.

Take out the (bogus) positive water vapour feedback in the models, and there is NO global warming crisis – the models then project a little warming.

Furthermore, there is NO evidence that such positive water vapour feedbacks to CO2 actually exist, and ample evidence to the contrary.

So here we have two different “positive feedbacks” crucial to the global warming alarmist position, both of which are unlikely to exist.

And as you point out, one of the stronger pieces of evidence that these positive feedbacks do not exist is that if they did, life on Earth would be very different, if it existed at all.

128. Mac says:

We need to remember that climate scientists have corrupted peer review, so deconstruction of Shakun2012 could well be limited to the internet. You need only consider Climategate in general, and Steig09 in particular, to understand how nigh on impossible it is to correct flawed papers. Further, the impact of the internet means that the court of public opinion now prevails over the settled science. Publicly revealing the flaws of Shakun2012 carries more weight, for it embarrasses the whole scientific community.

129. Steve from Rockwood says:

Nick Stokes says:
April 10, 2012 at 2:45 am

Willis,
“One problem with this procedure is that when the increase in data points is large, the resulting interpolated dataset is strongly autocorrelated. This causes greater uncertainty (wider error bars) in the trend results that they are using to try to establish their claims in their Fig. 5a.

They have not commented on any of these issues …”

They did. There’s a whole section (3) in the SI on the Monte Carlo simulation they did to derive error estimates. These involve perturbing the original data and checking the variability of the output. It accounts for the effect of interpolation. They used autocorrelated noise to emulate the original autocorrelation between observations.

Interpolation itself is no big deal. They are down around the limit of time resolution, and the interpolation just eases the mechanics of lining up differently timed data points for analysis. It’s the resolution uncertainty that is the issue; interpolation on that scale doesn’t add to it.

Sorry, Nick, you’re wrong. If you re-read section (3) the authors make two important comments:

1. Stacking. They project the data onto 5 x 5 degree cells and linearly interpolate the time series to 100 yr intervals. They do not otherwise account for spatial biases in the data set.

2. The two types of uncertainty they analyse are a) age models and b) temperature calibration. The Monte Carlo method is applied along the proxy time series (in time) not spatially outward (in area or distance).

If you combine 1 and 2 you see they did not do an analysis on the effect of spatially averaging the proxies. This is a major problem with their paper. For a global data set, almost 97% of the data is missing and the remaining data is not uniformly distributed.

Finally, if you believe in the Shakun et al paper, take a close look at their Figure 5. They divide the averaged proxies by latitude. Notice how 60-90S, 30-60S, 0-30S, 0-30N all show warming before CO2 started increasing. Only 30-60N and 60-90N show the lag. The 60-90N proxies really need to be considered as input parameters to their argument so they shouldn’t be used in the “global” mix. Same for the 60-90S as they are trying to compare polar warming to global warming. This leaves 4 regions for comparison, three of which fail their argument. This suggests to me that the NH proxies have too much influence in the averaging process.

130. Pamela Gray says:

Two questions

1. If water vapor is the primary agent of GHG warming, what paleo-proxies exists for increased water vapor? Soil layers?

2. Warming creates tons more ground fuel for catastrophic fires. Might the increased CO2 in ice cores be from such fires? Again, might soil layers demonstrate such global phenomenon?

131. Allan MacRae says:

Pascal Bruckner: The Ideology Of Catastrophe
The Wall Street Journal, 10 April 2012


As an asteroid hurtles toward Earth, terrified citizens pour into the streets of Brussels to stare at the mammoth object growing before their eyes. Soon, it will pass harmlessly by—but first, a strange old man, Professor Philippulus, dressed in a white sheet and wearing a long beard, appears, beating a gong and crying: “This is a punishment; repent, for the world is ending!”

We smile at the silliness of this scene from the Tintin comic strip “L’Étoile Mystérieuse,” published in Belgium in 1941. Yet it is also familiar, since so many people in both Europe and the United States have recently convinced themselves that the End is nigh. Professor Philippulus has managed to achieve power in governments, the media and high places generally. Constantly, he spreads fear: of progress, science, demographics, global warming, technology, food. In five years or in 10 years, temperatures will rise, Earth will be uninhabitable, natural disasters will multiply, the climate will bring us to war, and nuclear plants will explode.

Man has committed the sin of pride; he has destroyed his habitat and ravaged the planet; he must atone.

My point is not to minimize our dangers. Rather, it is to understand why apocalyptic fear has gripped so many of our leaders, scientists and intellectuals, who insist on reasoning and arguing as though they were following the scripts of mediocre Hollywood disaster movies.

Over the last half-century, leftist intellectuals have identified two great scapegoats for the world’s woes. First, Marxism designated capitalism as responsible for human misery. Second, “Third World” ideology, disappointed by the bourgeois indulgences of the working class, targeted the West, supposedly the inventor of slavery, colonialism and imperialism.

The guilty party that environmentalism now accuses—mankind itself, in its will to dominate the planet—is essentially a composite of the previous two, a capitalism invented by a West that oppresses peoples and destroys the Earth.

Environmentalism sees itself as the fulfillment of all earlier critiques. “There are only two solutions,” Bolivian president Evo Morales declared in 2009. “Either capitalism dies, or Mother Earth dies.”

“Our house is burning, but we are not paying attention,” said Jacques Chirac, then president of France, at the World Summit on Sustainable Development in 2002. “Nature, mutilated, overexploited, cannot recover, and we refuse to admit it.”

Sir Martin Rees, a British astrophysicist and former president of the Royal Society, gives humanity a 50% chance of surviving beyond the 21st century. Oncologists and toxicologists predict that the end of mankind should arrive even earlier, around 2060, thanks to a general sterilization of sperm.

One could cite such quotations forever, given the spread of apocalyptic literature. Authors, journalists, politicians and scientists compete in their portrayal of abomination and claim for themselves a hyperlucidity: They alone see the future clearly while others vegetate in the darkness.

The fear that these intellectuals spread is like a gluttonous enzyme that swallows up an anxiety, feeds on it, and then leaves it behind for new ones. When the Fukushima nuclear plant melted down after the enormous earthquake in Japan in March 2011, it only confirmed an existing anxiety that was looking for some content. In six months, some new concern will grip us: a pandemic, bird flu, the food supply, melting ice caps, cell-phone radiation.

The fear becomes a self-fulfilling prophecy, with the press reporting, as though it were a surprise, that young people are haunted by the very concerns about global warming that the media continually broadcast. As in an echo chamber, opinion polls reflect the views promulgated by the media.

We are inoculated against anxiety by the repetition of the same themes, which become a narcotic we can’t do without.

A time-honored strategy of cataclysmic discourse, whether performed by preachers or by propagandists, is the retroactive correction. This technique consists of accumulating a staggering amount of horrifying news and then—at the end—tempering it with a slim ray of hope.

First you break down all resistance; then you offer an escape route to your stunned audience. Thus the advertising copy for the Al Gore documentary “An Inconvenient Truth” reads: “Humanity is sitting on a time bomb. If the vast majority of the world’s scientists are right, we have just ten years to avert a major catastrophe that could send our entire planet’s climate system into a tail-spin of epic destruction involving extreme weather, floods, droughts, epidemics and killer heat waves beyond anything we have ever experienced—a catastrophe of our own making.”

Here are the means that the former vice president, like most environmentalists, proposes to reduce carbon-dioxide emissions: using low-energy light bulbs; driving less; checking your tire pressure; recycling; rejecting unnecessary packaging; adjusting your thermostat; planting a tree; and turning off electrical appliances. Since we find ourselves at a loss before planetary threats, we will convert our powerlessness into propitiatory gestures, which will give us the illusion of action. First the ideology of catastrophe terrorizes us; then it appeases us by proposing the little rituals of a post-technological animism.

But let’s be clear: A cosmic calamity is not averted by checking tire pressure or sorting garbage.

Another contradiction in apocalyptic discourse is that, though it tries desperately to awaken us, to convince us of planetary chaos, it eventually deadens us, making our eventual disappearance part of our everyday routine. At first, yes, the kind of doom that we hear about—acidification of the oceans, pollution of the air—charges our calm existence with a strange excitement. But the certainty of the prophecies makes this effect short-lived.

We begin to suspect that the numberless Cassandras who prophesy all around us do not intend to warn us so much as to condemn us.

In classical Judaism, the prophet sought to give new life to God’s cause against kings and the powerful. In Christianity, millenarian movements embodied a hope for justice against a church wallowing in luxury and vice. But in a secular society, a prophet has no function other than indignation. So it happens that he becomes intoxicated with his own words and claims a legitimacy with no basis, calling down the destruction that he pretends to warn against.

You’ll get what you’ve got coming! That is the death wish that our misanthropes address to us. These are not great souls who alert us to troubles but tiny minds who wish us suffering if we have the presumption to refuse to listen to them. Catastrophe is not their fear but their joy. It is a short distance from lucidity to bitterness, from prediction to anathema.

Another result of the doomsayers’ certainty is that their preaching, by inoculating us against the poison of terror, brings about petrification. The trembling that they want to inculcate falls flat. Anxiety has the last word. We were supposed to be alerted; instead, we are disarmed. This may even be the goal of the noisy panic: to dazzle us in order to make us docile. Instead of encouraging resistance, it propagates discouragement and despair. The ideology of catastrophe becomes an instrument of political and philosophical resignation.

Mr. Bruckner is a French writer and philosopher whose latest book is “The Paradox of Love” (Princeton University Press, 2012). This article, translated by Alexis Cornel, is excerpted from the Spring 2012 issue of City Journal.

132. Steve from Rockwood says:

Ulric Lyons says:
April 10, 2012 at 6:34 am

Steven Mosher says:
April 9, 2012 at 1:42 pm
“…and that frost fairs in England can reconstruct the temperature in australia”

I don't see why not. If it's a cold winter in the north there is more likely to be an El Nino, giving drought and warmer conditions in Australia.
http://en.wikipedia.org/wiki/River_Thames_frost_fairs

There are two aspects to this. The first as Mosher points out is the extent to which local proxies can be treated as regional. This is easy to solve. Compare two distant proxies and where they correlate the effects are regional. Local effects do not correlate.
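A minimal sketch of that correlation test, using made-up synthetic series (all numbers hypothetical): a shared "regional" signal survives in the cross-correlation between two distant sites, while purely local noise at each site does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # number of time steps

# Shared "regional" signal plus independent "local" noise at two distant sites.
regional = np.cumsum(rng.normal(size=n))           # slow common signal (random walk)
site_a = regional + rng.normal(scale=2.0, size=n)  # local noise at site A
site_b = regional + rng.normal(scale=2.0, size=n)  # local noise at site B

# The shared component survives in the cross-correlation; the purely local
# noise, being independent at each site, does not correlate between sites.
r = np.corrcoef(site_a, site_b)[0, 1]
print(f"correlation between distant sites: {r:.2f}")
```

Of course this only tells you the proxies share *some* signal; it cannot tell you whether a missing excursion means the region never experienced it or the proxy failed to record it, which is the second aspect.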

The second aspect requires some help from H.G. Wells. What happens when a regional effect is missing from a proxy record? Does this mean the area never experienced the global effect or that the proxy is wrong?

133. Don Monfort says:

I am with Nick Stokes on this one. Willis is being way too picky about this stuff. It’s the climate science for chrissakes. Willis is like some highbrow sportswriter picking apart a “pro-wrestling” performance. It’s entertainment, Willis. Lighten up!

Steve from Rockwood: please write rebuttal and send to Nature.

134. paulhan says:

Voronoi diagrams strike me as a very elegant way of spreading / smearing / averaging what available data there is. At least with them, one is guaranteed that there is exactly one measurement per cell. Where the boundaries of the cell lie depends on where the neighbouring datapoints are. If the datapoints are close together, then the cells are smaller, which naturally gives them less weight. Using the dual Delaunay triangulation, one can then derive the area of each cell.
It's handy too where a datapoint drops out for a period of time. All that is done is to recalculate the surrounding cells.
Where it would be weak is where there is a very sparse dataset, in which case one could have cells taking up huge areas, but that applies to any other methodology too. And at least each cell has a measurement.
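A rough sketch of the idea (a toy example with hypothetical points on a unit square, not a sphere): rather than constructing the Voronoi polygons explicitly, the cell-area weights can be approximated by Monte Carlo, counting what fraction of random locations fall nearest to each datapoint.

```python
import numpy as np

def voronoi_area_weights(points, n_samples=100_000, seed=0):
    """Approximate each point's Voronoi-cell area fraction of the unit
    square by Monte Carlo: sample random locations and count how many
    fall nearest to each data point."""
    rng = np.random.default_rng(seed)
    samples = rng.uniform(size=(n_samples, 2))
    # squared distance from every sample location to every data point
    d2 = ((samples[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)            # index of nearest data point
    counts = np.bincount(nearest, minlength=len(points))
    return counts / n_samples              # area fractions, summing to 1

# Three points clustered in one corner, one isolated point: the isolated
# point's cell covers about half the square, so it gets the largest weight.
pts = np.array([[0.05, 0.05], [0.10, 0.05], [0.05, 0.10], [0.90, 0.90]])
w = voronoi_area_weights(pts)
print("weights:", w)

values = np.array([1.0, 1.2, 0.8, 3.0])    # hypothetical measurements
print("area-weighted average:", (w * values).sum())
```

The clustered points share the small corner region between them, exactly the behaviour described above.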

135. Jon says:

The focus should be scientifically, what is probably right and what is probably wrong.
NOT who is right and who is wrong.

136. George E. Smith; says:

“”””” Steven Mosher says:

April 9, 2012 at 1:34 pm

george, let us know when you discover spatial auto correlation. “””””

Ah ! spatial auto correlation ; I think you got me there Steven.

In 1958 I signed up for a course in “Autocorrelation of Non-Integrable Discontinuous Functions”.

But then I played hooky, and went fishing on the day they gave the lecture; and of course I lost the text book, so I feel a big gap in my knowledge. I’ve always been puzzled about the non-simultaneity of spatial sampling of time varying functions too. I guess it can all be rectified by averaging; because you can always average ANY set of arbitrary real numbers, and get an average; as well as virtually any defined statistical parameter. Of course none of it relates to, or means anything real, but you can do the motions on the numbers as if it meant something.

Of course the very same thing applies to discrete autocorrelation. The mechanics of the calculation can be applied to any arbitrary numbers, just as can the mechanics of statistical mathematics, and so you will get an autocorrelation value, for spatial or any other functional variable you like. The problem is the same as statistics of arbitrary numbers. It doesn’t necessarily have any connection to anything real. You might as well count the total number of animals per square metre (larger than an ant say) and do statistics or autocorrelations on that, and make some learned report to the World Wildlife Federation.

So perhaps Steven, since some of us missed the lecture, you could enlighten us about it.

137. Jimmy Howbuilt says:

Folks here at WUWT (and Willis Eschenbach in particular) seem to be totally unaware of an article that NASA’s James Hansen wrote, titled Enchanted Rendezvous: John C. Houbolt and the Genesis of the Lunar-Orbit Rendezvous Concept (Monographs in Aerospace History, Series 4, December 1995).

Fans of the history of science and engineering will find plenty of lessons in Hansen’s article that are relevant to climate change. And you can bet, too, that present-day NASA’s administrators haven’t forgotten the lesson that Houbolt and Hansen both preach. And even folks who disagree with Hansen’s climate analysis will find that he does a terrific job of analyzing the processes by which NASA (at its best) reliably makes technical choices that lead to success, rather than disaster.

NASA/Hansen’s Simple Lesson: Nothing good comes of NASA administrators and astronauts over-ruling NASA scientists and engineers.

Just ask the Challenger astronauts, and the Apollo 1 astronauts, and the NASA administrators overseeing those tragic programs, about the catastrophes that have followed when NASA administrators, and NASA professional discipline, bowed to the pressures of politics, schedule, and budget.

It’s significant too, that of more than 300 NASA astronauts, only seven signed the letter. The rest of the astronauts used common sense: Muzzle individual scientists? Bad idea. Because very many scientists agree with Hansen. Does NASA want to be in the business of censoring scientists and engineers en masse? Muzzle selected ideas? That’s a bad idea under all circumstances. And it’s a *worse* idea when NASA administrators are the ones selecting the ideas to be muzzled.

Bottom Line: Quite properly, NASA will do nothing to muzzle its scientists and engineers.

138. Andrew says:

RE
Don Monfort says:
@ April 10, 2012 at 9:24 am
I am with Nick Stokes on this one. Willis is being way too picky about this stuff. It’s the climate science for chrissakes. Willis is like some highbrow sportswriter picking apart a “pro-wrestling” performance. It’s entertainment, Willis. Lighten up!

——

It’s called Grand larceny, not entertainment.

139. Andrew says:

RE
Steve from Rockwood says:
@ April 10, 2012 at 5:40 am

—————

Quite so Steve. That nailed it.

This idea that keeps cropping up among ‘climate scientists’, that linear interpolation of extremely sparse datasets is always appropriate whatever the sampling regime employed, is bizarre. Linear interpolation necessarily assumes that the existing samples are representative of the likely range of the variable under consideration. Given the gross imbalance in sampling of continental interiors versus coastal locations, and, frankly, the gross under-sampling of the Earth’s surface in its totality, this assumption cannot hold. It is not interpolation, but extrapolation by another name.

It’s Fiction. Fantasy. Nonsense. Junk Science.

140. don penman says:

The Antarctic warming preceded the CO2 rise in the Antarctic, and the warming in Greenland preceded the CO2 rise in Greenland; yet we are told the earth warmed due to the increase in CO2, because the warming in Antarctica and Greenland was just local warming. This is very much like the AGW interpretation of why there was no global MWP. I think it is more likely that during the last ice age the NH and the SH had very similar temperatures, because the continents in the NH were covered in ice, which would have prevented them from heating up. I don’t think CO2 drives the climate, and I don’t think this paper shows that it does. I think Willis has made some good points about the statistics and the methods employed in this paper.

141. Don Monfort says:

I am with Jimmy on this one. NASA will do nothing to muzzle its scientists and engineers, as long as they don’t deviate from the CAGW climate consensus party line. They are sensitive to threats to funding. And rightly so, I might add. (Am I doing OK, Jimmy?)

142. Don Monfort says:

Yes Andrew, grand larceny too. I was thinking of a tragi comedie, along the lines of The Gang That Couldn’t Shoot Straight. Or, The China Syndrome meets the Keystone Cops. But whatever it is, it ain’t science.

143. The Navier-Stokes equations describe fluid flow with changes in temperature and density. They are non-linear, chaotic, and show sensitive dependence on initial conditions. That means a state trajectory starting at temperature 0.1 C will differ from a trajectory starting at 0.1001 C, with the difference between the trajectories roughly doubling every few days. That has been known since Edward Lorenz’s 1963 paper “Deterministic Nonperiodic Flow”.

Because of the sensitive dependence on initial conditions, future states cannot be predicted accurately from ANY finite set of past states. Future prediction is not possible. All we can do is react to the current states we measure. Any policy or procedure based on long term prediction of future states is either an error, in those with little knowledge, or a hoax from those who have greater scientific knowledge (or perhaps both!). To the extent that global warming depends on predicting long term future states, it is wrong.
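A minimal sketch of that sensitivity (standard Lorenz-63 parameters; the 1e-4 initial perturbation and the RK4 integrator are arbitrary choices): two nearly identical initial states separate by many orders of magnitude within a few model-time units.

```python
import numpy as np

def lorenz_deriv(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz (1963) system."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(s, dt=0.01):
    """One classical 4th-order Runge-Kutta step."""
    k1 = lorenz_deriv(s)
    k2 = lorenz_deriv(s + 0.5 * dt * k1)
    k3 = lorenz_deriv(s + 0.5 * dt * k2)
    k4 = lorenz_deriv(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Two trajectories differing by 1e-4 in the initial x-coordinate.
a = np.array([1.0, 1.0, 1.0])
b = np.array([1.0001, 1.0, 1.0])
for step in range(1, 2001):
    a, b = rk4_step(a), rk4_step(b)
    if step % 500 == 0:
        print(f"t = {step * 0.01:5.1f}   separation = {np.linalg.norm(a - b):.3g}")
```

By the end of the run the separation has saturated at roughly the size of the attractor itself, which is the point: the individual trajectory is unpredictable past a short horizon.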

144. numerobis says:

Your characterization of chaotic systems is slightly off: if the distance between the two trajectories reliably doubles then you’re not chaotic — 2^t is a very clean little function, being off by epsilon just means being off by (1+e)^t later. Perturbations have to mix, so that a small perturbation in initial conditions leads to either a small or large perturbation after some time. That’s the hallmark of chaos. However, the mixture doesn’t need to be uniform, which is why you can get useful information about climate. So for example, I can’t predict what the weather will be on June 24 at mid-day. But I can predict that it is very likely to be much warmer than the weather on April 10 at midnight.

145. michael hart says:

“So for example, I can’t predict what the weather will be on June 24 at mid-day. But I can predict that it is very likely to be much warmer than the weather on April 10 at midnight.”
-Not at the South Pole. So, moving Northwards, at what point might that statement become true?

146. numerobis says:

You got me localized, I must live in the northern hemisphere given my claim. Send in the ICBMs! I’ll help you out: June 24 is a major holiday around here. It should reduce your cost.

147. Don Monfort says:

My much more educated alter ego, Don M, is correct. And your counter example is silly (I think). You are relying on non-chaotic and very predictable seasonal and day/night variations to make your prediction. Very dumb, numerobis (I think). Perhaps my alter ego will back me up on this.

148. numerobis says:

What’s dumb about responding to “weather is chaotic and thus you can’t know anything about it” with an example that shows that actually you can know something?

You put your finger straight on the point: if you have a decent model for the climate (namely, it’s typically warmer in June than April — not true everywhere, mind you — and it’s typically warmer at noon than midnight) you can make probabilistic predictions.
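A minimal sketch of that distinction, using the fully chaotic logistic map as a stand-in (a toy system, not a climate model): individual trajectories from nearly identical starts decorrelate completely, yet the statistics of an ensemble of trajectories are stable and predictable.

```python
import numpy as np

def logistic(x, steps):
    """Iterate the fully chaotic logistic map x -> 4x(1-x)."""
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

# Two nearly identical starting points: their individual futures decorrelate.
print("trajectory A:", logistic(0.2, 100))
print("trajectory B:", logistic(0.2 + 1e-10, 100))

# But an ensemble of starting points settles onto a stable statistical
# distribution, so statements about the statistics remain predictable.
rng = np.random.default_rng(1)
ensemble = logistic(rng.uniform(0.01, 0.99, size=100_000), 1000)
print("ensemble mean:", round(ensemble.mean(), 3))   # close to 0.5
```

That is the analogue of "I can't predict the weather on June 24, but I can predict it will very likely be warmer than April 10 at midnight."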

149. Willis Eschenbach says:

Nick Stokes says:
April 10, 2012 at 2:45 am
Willis,

“One problem with this procedure is that when the increase in data points is large, the resulting interpolated dataset is strongly autocorrelated. This causes greater uncertainty (wider error bars) in the trend results that they are using to try to establish their claims in their Fig. 5a.

They have not commented on any of these issues …”

They did. There’s a whole section (3) in the SI on the Monte Carlo simulation they did to derive error estimates. These involve perturbing the original data and checking the variability of the output. It accounts for the effect of interpolation. They used autocorrelated noise to emulate the original autocorrelation between observations.

Interpolation itself is no big deal. They are down around the limit of time resolution, and the interpolation just eases the mechanics of lining up differently timed data points for analysis. It’s the resolution uncertainty that is the issue; interpolation on that scale doesn’t add to it.

Thanks as always Nick, but they didn’t comment on the issue of how the autocorrelation affects their error bars. All they did was say (in section 3, as you point out) that they used autocorrelated disturbances in their Monte Carlo analysis, which is an entirely different thing.

I’m sorry, but interpolation on any scale increases autocorrelation, and autocorrelation, despite your soothing words, can be a “big deal”. It is particularly a big deal when they jack up the number of data points by a factor of three or four, as they have done with a number of their proxies. That can have a huge effect on the error bars in two ways: first, by artificially increasing “N”, the number of data points, and second, by increasing the autocorrelation. I see no evidence that they are aware of or have corrected for these known issues.
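A rough sketch of the “artificially increasing N” point, using the standard AR(1) effective-sample-size rule of thumb, N_eff = N(1-r1)/(1+r1) (the synthetic white-noise series and 200-yr spacing are illustrative assumptions, not the paper’s data): interpolating to 100-yr steps roughly doubles the raw N, but it also inflates the lag-1 autocorrelation, so the number of effectively independent points barely changes.

```python
import numpy as np

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation."""
    x = x - x.mean()
    return (x[:-1] * x[1:]).sum() / (x * x).sum()

def n_eff(x):
    """AR(1) effective sample size, N*(1-r1)/(1+r1) (Quenouille/Bartlett)."""
    r = lag1_autocorr(x)
    return len(x) * (1 - r) / (1 + r)

rng = np.random.default_rng(0)
t = np.arange(0, 20000, 200)     # 200-yr sampling, a typical proxy resolution
y = rng.normal(size=t.size)      # white noise: r1 near 0, n_eff near N

t_fine = np.arange(0, t[-1] + 1, 100)   # linearly interpolate to 100-yr steps
y_fine = np.interp(t_fine, t, y)

print("raw N:       ", y.size, "->", y_fine.size)
print("lag-1 r:     ", round(lag1_autocorr(y), 2), "->", round(lag1_autocorr(y_fine), 2))
print("effective N: ", round(n_eff(y), 1), "->", round(n_eff(y_fine), 1))
```

Error bars computed from the raw interpolated N, without this kind of adjustment, would be too narrow.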

w.

150. Willis Eschenbach says:

Steve from Rockwood says:
April 10, 2012 at 4:46 am

Willis Eschenbach says:
April 9, 2012 at 11:22 pm

Oh, yeah, I remember now and far too late that I wanted to comment on their statement that they

… linearly interpolated to 100-yr resolution.

I’m not a fan of interpolation in general. For one thing, it reduces the variance in your record. Why? Because you’re guaranteed to eliminate almost all of the high and low points in the record.

For another, you’re making up data where none exists. You are taking actual observations, and you are turning them into imaginary data.

This is total nonsense. First, take a look at the Metadata column “resolution”. You will find that the average resolution is 200 yrs and is as coarse as 600. So sub-sampling to 100 yr will add to the higher frequency variation in 70 of the 80 proxies (those sampled at coarser than 100 yr). It does not eliminate the high and low points in the record (this only happens when data is sampled down to a lower resolution) and the resulting data is no more imaginary than the real data (the data can be considered over-sampled for this paper).

If you have data with a high peak of say +5 at say every fifty years, and a low point of say -5 every fifty years, and you interpolate them to 100 year resolution, you will get a straight line … so yes, you do lose the high and low points, because they don’t coincide with the 100 year interpolation time. And this is true also with randomly spaced points—unless they fall at exactly the 100 year resolution, you will lose their highs and lows. It is also true whether you are increasing or decreasing the resolution (up-sampling or down-sampling).

Here’s an example of what interpolation does. I’ve used data from this very paper to show how the peaks and valleys get cut off.

As you can see, other than the peak at 2400 yrs BP, which falls right at 2400 years and doesn’t get interpolated, the rest of the peaks and valleys are cut down. This affects the standard deviation, which is 0.80 in the original data in the graph, and only 0.62 in the interpolated graph … and this in turn affects the claimed accuracy of things like average of the data.
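A minimal sketch of the same effect on a synthetic proxy (hypothetical numbers, not the paper’s series): because every interpolated value is a weighted average of two neighbouring observations, highs and lows that fall off the 100-yr grid are clipped, and the standard deviation shrinks.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic proxy: irregular sample times roughly 50 yr apart, with plenty
# of high-frequency ups and downs (a made-up series for illustration).
t = np.sort(rng.uniform(0, 20000, size=400))
y = 0.3 * np.sin(2 * np.pi * t / 400) + rng.normal(size=t.size)

# Linear interpolation onto an even 100-yr grid.
t_grid = np.arange(0, 20001, 100)
y_grid = np.interp(t_grid, t, y)

# Every interpolated value lies between its two bracketing observations,
# so the extremes are cut down and the variance is reduced.
print("std of original:    ", round(y.std(), 2))
print("std of interpolated:", round(y_grid.std(), 2))
print("max:", round(y.max(), 2), "->", round(y_grid.max(), 2))
```

The interpolated maximum can never exceed the observed maximum, and the minimum can never fall below the observed minimum; that is exactly the peak-cutting shown in the figure.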

The sampling issue with Shakun et al is not the linear interpolation of the temperature proxies to 100 yr intervals. In fact you can’t do anything with the data (between proxies) if you don’t resample the data to the same time points. (Sheesh Willis!)

You can do all kinds of things with data that is not evenly sampled, Steve. Look at all of the work that I’ve done in these four papers on Shakun, and not a scrap of interpolation in sight. Sheesh, indeed.

This is total nonsense. First, take a look at the Metadata column “resolution”. You will find that the average resolution is 200 yrs and is as coarse as 600. So sub-sampling to 100 yr will add to the higher frequency variation in 70 of the 80 proxies (those sampled at coarser than 100 yr). It does not eliminate the high and low points in the record (this only happens when data is sampled down to a lower resolution) and the resulting data is no more imaginary than the real data (the data can be considered over-sampled for this paper).

Again, if you are interpolating to an exact 100 years it does indeed eliminate all of the low and high points that do not fall at the exact point of interpolation. And it doesn’t matter what frequency you sampled the original data at.

And yes, the results are imaginary. You are correct that if you have a data point at 12,135 yr BP, and another at 12,615 years BP, you can draw a straight line between them and interpolate the values every hundred years. But those are NOT THE TEMPERATURES for those points in time. They are imaginary numbers, not observations.

w.

PS—I’d advise you to save your “sheesh” for something where you actually know what you are talking about …

151. Dyspeptic Curmudgeon says:

Kev-in-UK says:
April 9, 2012 at 4:29 pm

“I read Willis post and was thinking the same thing regarding stereo nets …
Interesting you mention the old Fortran (77?)and HP’s too ”
Fortran IV with Watfor, iirc. It was 1968-70. Then on an HP9100B in 1971, and a bunch of HPs: HP-65, 67, 41, 48SX, and finally 48GX, of which I have two, the youngest of which dates to 1996.

152. Steve from Rockwood says:

Willis Eschenbach says:
April 11, 2012 at 12:40 pm

153. Steve from Rockwood says:

Dyspeptic Curmudgeon says:
April 11, 2012 at 1:31 pm

Watfor I believe stood for Waterloo Fortran (as in University of Waterloo). I believe that university also spun out RIM (Research in Motion – Blackberry). My first book on programming was “Fortran IV for Programmers” which had me wondering if they wrote books for other people, such as “Fortran IV for Managers”.

154. Steve from Rockwood says:
April 11, 2012 at 5:51 pm
My first book on programming was “Fortran IV for Programmers” which had me wondering if they wrote books for other people, such as “Fortran IV for Managers”.
Nowadays it would be “Fortran IV for Dummies”…

155. Don Monfort says:

Numerdoobis

Like I said, dumb. And now, it appears, dishonest. My much better-educated alter ego did not say:

“weather is chaotic and thus you can’t know anything about it”

You made that up. If you are going to quote somebody, don’t make crap up out of thin air. The learned Don M stated:

“To the extent that global warming depends on predicting long term future states, it is wrong.”

You should shut up now.

156. numerobis says:

“Future prediction is not possible.” I know, I’m a complete idiot for believing that this was a claim that it was impossible to predict the future of a chaotic system — though, I insist, an honest idiot. But instead of merely putting me down, perhaps you could elevate me to your moderate level of education at least, if not the erudite heights reached by your alter ego. What does DonM mean, exactly?