I’d like to highlight one oddity in the Shakun et al. paper, “Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation” (Shakun2012), which I’ve discussed here and here. They say:
The data were projected onto a 5°x5° grid, linearly interpolated to 100-yr resolution and combined as area-weighted averages.
The oddity I want you to consider is the area-weighting of the temperature data from a mere 80 proxies.
Figure 1. Gridcells of latitude (North/South) and longitude (East/West)
What is area-weighting, and why is it not appropriate for this data?
“Area-weighting” means that you give more weight to some data than others, based on the area of the gridcell where the data was measured. Averaging by gridcell and then area-weighting attempts to solve two problems. The first problem is that we don’t want to overweight an area where there are lots of observations. If some places have 3 observations and others have 30 observations in the same area, that’s a problem if you simply average the data. You will overweight the places with lots of data.
I don’t like the usual solution, which is to use gridcells as shown in Figure 1, and then take a distance-weighted average from the center of the gridcell for each gridcell. This at least attenuates some of the problem of overweighting of neighboring proxies by averaging them together in gridcells … but like many a solution, it introduces a new problem.
The next step, area-averaging, attempts to solve the new problem introduced by gridcell averaging. The problem is that, as you can see from Figure 1, gridcells come in all different sizes. So if you have a value for each gridcell, you can’t just average the gridcell values together. That would over-weight the polar regions, and under-weight the equator.
So instead, after averaging the data into gridcells, the usual method is to do an “area-weighted average”. Each gridcell is weighted by its area, so a big gridcell gets more weight, and a small gridcell gets less weight. This makes perfect sense, and it works fine, if you have data in all of the gridcells. And therein lies the problem.
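To make the mechanics concrete, here is a minimal sketch of the gridcell-then-area-weight procedure in Python. The 5°x5° grid and the cos(latitude) weights are the standard choices; the three proxy locations and warming values are invented purely for illustration and are not the Shakun2012 data.

```python
import numpy as np

# Invented illustration only: latitude/longitude in degrees, one warming value per proxy
lats = np.array([72.6, 36.0, -2.0])     # roughly Greenland, off Japan, near PNG
lons = np.array([-38.5, 141.0, 147.0])
vals = np.array([30.0, 5.0, 3.5])       # made-up warming values, deg C

cell = 5.0  # 5 x 5 degree grid, as in Shakun et al.

# Assign each proxy to a gridcell
ilat = np.floor((lats + 90.0) / cell).astype(int)
ilon = np.floor((lons + 180.0) / cell).astype(int)

# Average all proxies that fall in the same gridcell
cells = {}
for i, j, v in zip(ilat, ilon, vals):
    cells.setdefault((i, j), []).append(v)

# The area of a lat/lon gridcell is proportional to cos(latitude at the cell centre),
# so that is the weight each occupied gridcell gets
num = den = 0.0
for (i, j), v in cells.items():
    lat_centre = -90.0 + (i + 0.5) * cell
    w = np.cos(np.radians(lat_centre))
    num += w * np.mean(v)
    den += w

print("area-weighted average:", num / den)
```

With only three occupied gridcells, you can see at a glance that it is the cos(latitude) weights, not the data, that decide who dominates the average.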
For the Shakun 2012 gridcell and area-averaging, they've divided the world into 36 gridcells from Pole to Pole and 72 gridcells around the Earth. That's 36 times 72, or 2,592 gridcells … and there are only 80 proxies. This means that most of the proxies will be the only observation in their particular gridcell. In the event, the 80 proxies occupy 69 gridcells, or about 3% of the total. No fewer than 58 of those gridcells contain only one proxy.
Let me give an example to show why this lack of data is important. To illustrate the issue, suppose for the moment that we had only three proxies, colored red, green, and blue in Figure 2.
Figure 2. Proxies in Greenland, off of Japan, and in the tropical waters near Papua New Guinea (PNG).
Now, suppose we want to average these three proxies. The Greenland proxy (green) is in a tiny gridcell. The PNG proxy (red) is in a very large gridcell. The Japan proxy (blue) has a gridcell size that is somewhere in between.
But should we give the Greenland proxy just a very tiny weight, and weight the PNG proxy heavily, because of the gridcell size? No way. There is no ex ante reason to weight any one of them more heavily than the others.
Remember that area weighting is supposed to adjust for the area of the planet represented by that gridcell. But as this example shows, that’s meaningless when data is sparse, because each data point represents a huge area of the surface, much larger than a single gridcell. So area averaging is distorting the results, because with sparse data the gridcell size has nothing to do with the area represented by a given proxy.
And as a result, in Figure 2, we have no reason to think that any one of the three should be weighted more heavily than another.
All of that, to me, is just more evidence that gridcells are a goofy way to do spherical averaging.
In Section 5.2 of the Shakun2012 supplementary information, the authors say that areal weighting changes the shape of the claimed warming, but does not strongly affect the timing. However, they do not show the effect of areal weighting on their claim that the warming proceeds from south to north.
My experiments have shown me that the use of a procedure I call “cluster analysis averaging” gives better results than any gridcell based averaging system I’ve tried. For a sphere, you use the great-circle distance between the various datapoints to define the similarity of any two points. Then you just use simple averaging at each step in the cluster analysis. This avoids both the inside-the-gridcell averaging and the between-gridcell averaging … I suppose I should write that analysis up at some point, but so many projects, so little time …
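I haven't written that analysis up, but here is a rough sketch of one way to read that recipe: repeatedly merge the two closest points (by great-circle distance), replacing them with their simple average located at the spherical midpoint. The proxy positions and values are invented, and the merge-to-midpoint detail is shorthand for the idea, not a finished method.

```python
import numpy as np

def to_xyz(lat, lon):
    """Unit vector on the sphere for a lat/lon point given in degrees."""
    phi, lmb = np.radians(lat), np.radians(lon)
    return np.array([np.cos(phi) * np.cos(lmb),
                     np.cos(phi) * np.sin(lmb),
                     np.sin(phi)])

def gc_dist(p, q):
    """Angular great-circle distance between two unit vectors, in radians."""
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))

# Invented example proxies: positions and warming values (deg C)
pts = [to_xyz(72.6, -38.5), to_xyz(71.0, -40.0),
       to_xyz(36.0, 141.0), to_xyz(-2.0, 147.0), to_xyz(-41.0, 175.0)]
vals = [30.0, 27.0, 5.0, 3.5, 4.0]

# Agglomerative averaging: repeatedly merge the two closest points, replacing
# them with their simple mean value placed at the spherical midpoint
while len(vals) > 1:
    n = len(vals)
    i, j = min(((a, b) for a in range(n) for b in range(a + 1, n)),
               key=lambda ab: gc_dist(pts[ab[0]], pts[ab[1]]))
    mid = pts[i] + pts[j]
    mid /= np.linalg.norm(mid)            # midpoint on the sphere
    merged = 0.5 * (vals[i] + vals[j])    # simple average at each step
    pts = [p for k, p in enumerate(pts) if k not in (i, j)] + [mid]
    vals = [v for k, v in enumerate(vals) if k not in (i, j)] + [merged]

print("cluster-averaged warming:", vals[0])
```

Note how the two nearby Greenland-ish points get merged into one value early on, so they count once rather than twice, without any grid at all.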
One final point about the Shakun analysis. The two Greenland proxies show a warming over the transition of ~ 27°C and 33°C. The other 78 proxies show a median warming of about 4°C, with half of them in the range from 3° to 6° of warming. Figure 3 shows the distribution of the proxy results:
Figure 3. Histogram of the 80 Shakun2012 proxy warming since the most recent ice age. Note the two Greenland ice core temperature proxies on the right.
It is not clear why the warming in the Greenland ice core proxies should be so far out of line with the others. It seems doubtful that, if most of the world warmed by about 3°-6°C, Greenland would warm by 30°C. If it were my study, I'd likely remove the two Greenland proxies as wild outliers.
Regardless of the reason that they are so different from the others, the authors' areal-weighting scheme means that the Greenland proxies will be only lightly weighted, removing the problem … but to me that feels like fortuitously offsetting errors, not a real solution.
A good way to conceptualize the issue with gridcells is to imagine that the entire gridding system shown in Figs. 1 & 2 were rotated by 90°, putting the tiny gridcells at the equator. If the area-averaging is appropriate for a given dataset, this should not change the area-averaged result in any significant way.
But in Figure 2, you can see that if the gridcells all came together down by the red dot rather than up by the green dot, we’d get a wildly different answer. If that were the case, we’d weight the PNG proxy (red) very lightly, and the Greenland proxy (green) very heavily. And that would completely change the result.
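Here is a rough numerical version of that thought experiment, re-using the little area-weighting sketch from above, again with invented proxy values. Rotating the coordinates by 90° (so the converging gridcells sit somewhere else entirely) changes the answer:

```python
import numpy as np

def area_weighted(lats, lons, vals, cell=5.0):
    """Gridcell average followed by cos(latitude) area weighting (as in the sketch above)."""
    ilat = np.floor((lats + 90.0) / cell).astype(int)
    ilon = np.floor((lons + 180.0) / cell).astype(int)
    cells = {}
    for i, j, v in zip(ilat, ilon, vals):
        cells.setdefault((i, j), []).append(v)
    num = den = 0.0
    for (i, j), v in cells.items():
        w = np.cos(np.radians(-90.0 + (i + 0.5) * cell))
        num += w * np.mean(v)
        den += w
    return num / den

def rotate_90(lats, lons):
    """Rotate all coordinates 90 degrees about the x-axis, so the N pole lands on the equator."""
    phi, lmb = np.radians(lats), np.radians(lons)
    x = np.cos(phi) * np.cos(lmb)
    y = np.cos(phi) * np.sin(lmb)
    z = np.sin(phi)
    y2, z2 = z, -y                      # the 90-degree rotation
    return np.degrees(np.arcsin(z2)), np.degrees(np.arctan2(y2, x))

# Invented proxies: Greenland-ish, Japan-ish, PNG-ish warming values
lats = np.array([72.6, 36.0, -2.0])
lons = np.array([-38.5, 141.0, 147.0])
vals = np.array([30.0, 5.0, 3.5])

print("original grid:", round(area_weighted(lats, lons, vals), 2))
rlats, rlons = rotate_90(lats, lons)
print("rotated grid :", round(area_weighted(rlats, rlons, vals), 2))
```

If the weighting were really telling us something about the planet rather than about the grid, the two numbers would be close.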
And for the Shakun2012 study, with only 3% of the gridcells containing proxies, this is a huge problem. In their case, I say area-averaging is an improper procedure.
w.
Steven Mosher says:
April 9, 2012 at 1:36 pm
Eric Webb says:
April 9, 2012 at 12:54 pm
Shakun's paper just seems like more warmist crap to me, they should know that CO2 responds to temperature, not the other way around.
###################
It's actually BOTH. Added CO2 will warm the earth and the ocean will respond by outgassing more CO2.
—————————————————————————————————————-
No… You forget that right at the start of the paper, they say that there is extra solar energy entering the system which melts extra ice on Greenland…. But you forget that bit and concentrate on CO2…. Chicken and egg stuff.
It is the extra energy in the system… CO2 is insignificant.
Leif Svalgaard says:
April 9, 2012 at 11:28 am
The data were projected onto a 5°x5° grid, linearly interpolated to 100-yr resolution and combined as area-weighted averages.
Where do they say that they weight with the area of each grid cell? The way I would weight would be to divide the globe into a number of equal-area pieces [not the grid cells, obviously, as they don’t have equal area] and then calculate the average value of a proxy by computing the average of the grid cells that fall into each equal area piece, then average all the pieces. This is the standard [and correct] way of doing it. Why do you think they didn’t do it this way?
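For concreteness, the scheme I have in mind looks roughly like the sketch below; the number of pieces and the proxy values are invented for illustration only. Bands of equal sin(latitude) crossed with equal longitude sectors all have the same area, so a plain mean over the occupied pieces is already area-correct:

```python
import numpy as np

# Invented proxy data for illustration
lats = np.array([72.6, 36.0, -2.0, -41.0])
lons = np.array([-38.5, 141.0, 147.0, 175.0])
vals = np.array([30.0, 5.0, 3.5, 4.0])

n_bands, n_lon = 6, 12   # 6 x 12 = 72 equal-area pieces (an arbitrary, illustrative choice)

# Bands of equal sin(latitude) all enclose the same area on the sphere
band_edges = np.linspace(-1.0, 1.0, n_bands + 1)[1:-1]
band = np.digitize(np.sin(np.radians(lats)), band_edges)
sector = np.floor((lons + 180.0) / (360.0 / n_lon)).astype(int)

# Average whatever falls inside each occupied piece
pieces = {}
for b, s, v in zip(band, sector, vals):
    pieces.setdefault((b, s), []).append(v)

# Every piece has equal area, so a simple mean over the occupied pieces is area-correct
piece_means = [np.mean(v) for v in pieces.values()]
print("equal-area average:", np.mean(piece_means))
```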
Why they do this is probably to give greater weighting to the tropics where end-glacial temperature rises were smallest and latest – as described here at the inconvenient skeptic
Come on Mr. Eschenbach and Mr. Easterbrook: Present the rebuttal to Nature magazine and 'humiliate' Shakun et al in a scientific way of speaking. That's how it should be done. Not this way.
Willis,
“One problem with this procedure is that when the increase in data points is large, the resulting interpolated dataset is strongly autocorrelated. This causes greater uncertainty (wider error bars) in the trend results that they are using to try to establish their claims in their Fig. 5a.
They have not commented on any of these issues …”
They did. There’s a whole section (3) in the SI on the Monte Carlo simulation they did to derive error estimates. These involve perturbing the original data and checking the variability of the output. It accounts for the effect of interpolation. They used autocorrelated noise to emulate the original autocorrelation between observations.
Interpolation itself is no big deal. They are down around the limit of time resolution, and the interpolation just eases the mechanics of lining up differently timed data points for analysis. It’s the resolution uncertainty that is the issue; interpolation on that scale doesn’t add to it.
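Schematically, that kind of Monte Carlo looks like the toy sketch below. The series, the AR(1) noise parameters and the diagnostic are all invented for illustration; they are not the paper's actual settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_noise(n, rho=0.6, sigma=0.3):
    """Red (AR(1)) noise: each value keeps a fraction rho of the previous one."""
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.normal(0.0, sigma)
    return e

# Invented 'proxy' series on a 100-yr grid, purely for illustration
t = np.arange(0, 20000, 100)
proxy = 4.0 / (1.0 + np.exp(-(t - 12000) / 1500.0))   # smooth deglacial-looking ramp, deg C

# Perturb the series many times and look at the spread of some derived quantity,
# here the time at which the series first reaches half of its total rise
n_mc = 1000
midpoints = []
for _ in range(n_mc):
    perturbed = proxy + ar1_noise(len(t))
    midpoints.append(t[np.argmax(perturbed >= 2.0)])

print("spread (std dev) of the half-rise timing, yr:", np.std(midpoints))
```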
“These involve perturbing the original data and checking the variability of the output.” – this is just an error estimate for the pseudo-scientific models. It is unrelated to reality. If you have a real system that behaves like, say, f(x) = x^2 + e^(very, very large random value), and you model it with f(x) = x^2 + a very, very, very small random value, then comparing the outputs of the two models will give you a very small 'error'. See how well that relates to reality.
The gaps in data are just as important as real data.
It is by only showing all the gaps in the data and all the real data at hand that a proper story can be told of what we know and crucially what we don’t know.
We should not allow statistical licence to tell one particular story.
Willis Eschenbach says:
April 9, 2012 at 11:22 pm
This is total nonsense. First, take a look at the Metadata column “resolution”. You will find that the average resolution is 200 yrs and is as coarse as 600. So sub-sampling to 100 yr will add to the higher frequency variation in 70 of the 80 proxies (those sampled at coarser than 100 yr). It does not eliminate the high and low points in the record (this only happens when data is sampled down to a lower resolution) and the resulting data is no more imaginary than the real data (the data can be considered over-sampled for this paper).
The sampling issue with Shakun et al is not the linear interpolation of the temperature proxies to 100 yr intervals. In fact you can’t do anything with the data (between proxies) if you don’t resample the data to the same time points. (Sheesh Willis!)
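To be clear about what that resampling step involves, here is a toy sketch with two made-up proxy series put onto a common 100-yr grid:

```python
import numpy as np

# Two invented proxies sampled at different, irregular times (yr BP) and resolutions
t1 = np.array([400, 900, 1500, 2200, 2800]); v1 = np.array([3.1, 3.4, 4.0, 4.6, 4.9])
t2 = np.array([250, 850, 1450, 2050, 2650]); v2 = np.array([2.8, 3.0, 3.9, 4.4, 4.8])

# Common 100-yr grid covering the overlap of the two records
t_common = np.arange(450, 2601, 100)

# Linear interpolation onto shared time points; only then can the proxies be
# stacked, averaged, lagged or correlated against one another
v1i = np.interp(t_common, t1, v1)
v2i = np.interp(t_common, t2, v2)

print("stack (simple mean of the two):", np.round((v1i + v2i) / 2.0, 2))
```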
What is a problem is where they go once they map the proxies onto the 5 x 5 degree cells. At this point the earth is so highly under-sampled that they need to do some extra work to convince us why this low sampling is OK.
For example, in the NH you can select some proxies and compare them with the Greenland data and there is a lot of similarity. But not all the proxies show a good match. In the SH you can compare proxies from New Zealand to Antarctica and, for example, while the NZ proxies show the same early warming they don't have a 13,775 y BP peak at all. Look at proxy MD97-2120 and Vostok, for example. So the Arctic warms by several degrees, the Antarctic warms by 2 degrees during this same time period, and New Zealand warming does not change at all (the rate of warming remains the same right through the peak at 13,775 y BP). I don't know what the warming event at 13,775 y BP is, but it happens 500 years earlier in Greenland. So why are these distinct short-period events not occurring simultaneously in the NH and SH? How is New Zealand avoiding global climate change?
Another annoying point is the wobble theory to set off the NH warming. If the NH is tilted toward the sun and receives more incoming solar energy to set off the warming, then wouldn't the SH be tilted away from the sun and receive proportionally less warming? Why is the SH warming before, during and after this sudden earth wobble?
Steven Mosher says: April 9, 2012 at 1:36 pm
It’s actually BOTH. Added CO2 will warm the earth and the ocean will respond by outgassing more CO2.
Leif Svalgaard says: April 9, 2012 at 1:40 pm
nice positive feedback loop there
…
ferd berple says: April 9, 2012 at 10:38 pm
Which would make life on earth a physical impossibility, given the volume of CO2 stored in the oceans. Temperature would have run away long ago and cooked the earth.
You may be correct “ferd”.
A similar positive feedback loop exists in the CAGW climate computer models, where a small increase in the alleged warming impact of CO2 is multiplied several-fold by the alleged positive feedback of water vapour.
Take out the (bogus) positive water vapour feedback in the models, and there is NO global warming crisis – the models then project a little warming.
Furthermore, there is NO evidence that such positive water vapour feedbacks to CO2 actually exist, and ample evidence to the contrary.
So here we have two different "positive feedbacks" crucial to the global warming alarmist position, both of which are unlikely to exist.
And as you point out, one of the stronger pieces of evidence that these positive feedbacks do not exist is that if they did, life on Earth would be very different, if it existed at all.
We need to remember that climate scientists have corrupted peer review, so deconstruction of Shakun2012 could well be limited to the internet. You need only consider Climategate in general, and Steig09 in particular, to understand how nigh-on impossible it is to correct flawed papers. Further, the impact of the internet means that the court of public opinion now prevails over the settled science. Publicly revealing the flaws of Shakun2012 carries more weight, for it embarrasses the whole scientific community.
Nick Stokes says:
April 10, 2012 at 2:45 am
Sorry, Nick, you’re wrong. If you re-read section (3) the authors make two important comments:
1. Stacking. They project the data onto 5 x 5 degree cells and linearly interpolate the time series to 100 yr intervals. They do not otherwise account for spatial biases in the data set.
2. The two types of uncertainty they analyse are a) age models and b) temperature calibration. The Monte Carlo method is applied along the proxy time series (in time) not spatially outward (in area or distance).
If you combine 1 and 2 you see they did not do an analysis on the effect of spatially averaging the proxies. This is a major problem with their paper. For a global data set, almost 97% of the data is missing and the remaining data is not uniformly distributed.
Finally, if you believe in the Shakun et al paper, take a close look at their Figure 5. They divide the averaged proxies by latitude. Notice how 60-90S, 30-60S, 0-30S, 0-30N all show warming before CO2 started increasing. Only 30-60N and 60-90N show the lag. The 60-90N proxies really need to be considered as input parameters to their argument so they shouldn’t be used in the “global” mix. Same for the 60-90S as they are trying to compare polar warming to global warming. This leaves 4 regions for comparison, three of which fail their argument. This suggests to me that the NH proxies have too much influence in the averaging process.
Steven Mosher says:
April 9, 2012 at 1:42 pm
“…and that frost fairs in England can reconstruct the temperature in australia”
I don't see why not. If it's a cold winter in the north there is more likely to be an El Nino, giving drought and warmer conditions in Australia.
http://en.wikipedia.org/wiki/River_Thames_frost_fairs
http://sites.google.com/site/medievalwarmperiod/Home/historic-el-nino-events
Two questions
1. If water vapor is the primary agent of GHG warming, what paleo-proxies exist for increased water vapor? Soil layers?
2. Warming creates tons more ground fuel for catastrophic fires. Might the increased CO2 in ice cores be from such fires? Again, might soil layers demonstrate such a global phenomenon?
Pascal Bruckner: The Ideology Of Catastrophe
The Wall Street Journal, 10 April 2012
A time-honored strategy of cataclysmic discourse, whether performed by preachers or by propagandists, is the retroactive correction. This technique consists of accumulating a staggering amount of horrifying news and then—at the end—tempering it with a slim ray of hope. First you break down all resistance; then you offer an escape route to your stunned audience.
As an asteroid hurtles toward Earth, terrified citizens pour into the streets of Brussels to stare at the mammoth object growing before their eyes. Soon, it will pass harmlessly by—but first, a strange old man, Professor Philippulus, dressed in a white sheet and wearing a long beard, appears, beating a gong and crying: “This is a punishment; repent, for the world is ending!”
We smile at the silliness of this scene from the Tintin comic strip “L’Étoile Mystérieuse,” published in Belgium in 1941. Yet it is also familiar, since so many people in both Europe and the United States have recently convinced themselves that the End is nigh. Professor Philippulus has managed to achieve power in governments, the media and high places generally. Constantly, he spreads fear: of progress, science, demographics, global warming, technology, food. In five years or in 10 years, temperatures will rise, Earth will be uninhabitable, natural disasters will multiply, the climate will bring us to war, and nuclear plants will explode.
Man has committed the sin of pride; he has destroyed his habitat and ravaged the planet; he must atone.
My point is not to minimize our dangers. Rather, it is to understand why apocalyptic fear has gripped so many of our leaders, scientists and intellectuals, who insist on reasoning and arguing as though they were following the scripts of mediocre Hollywood disaster movies.
Over the last half-century, leftist intellectuals have identified two great scapegoats for the world's woes. First, Marxism designated capitalism as responsible for human misery. Second, "Third World" ideology, disappointed by the bourgeois indulgences of the working class, targeted the West, supposedly the inventor of slavery, colonialism and imperialism.
The guilty party that environmentalism now accuses—mankind itself, in its will to dominate the planet—is essentially a composite of the previous two, a capitalism invented by a West that oppresses peoples and destroys the Earth.
Environmentalism sees itself as the fulfillment of all earlier critiques. “There are only two solutions,” Bolivian president Evo Morales declared in 2009. “Either capitalism dies, or Mother Earth dies.”
“Our house is burning, but we are not paying attention,” said Jacques Chirac, then president of France, at the World Summit on Sustainable Development in 2002. “Nature, mutilated, overexploited, cannot recover, and we refuse to admit it.”
Sir Martin Rees, a British astrophysicist and former president of the Royal Society, gives humanity a 50% chance of surviving beyond the 21st century. Oncologists and toxicologists predict that the end of mankind should arrive even earlier, around 2060, thanks to a general sterilization of sperm.
One could cite such quotations forever, given the spread of apocalyptic literature. Authors, journalists, politicians and scientists compete in their portrayal of abomination and claim for themselves a hyperlucidity: They alone see the future clearly while others vegetate in the darkness.
The fear that these intellectuals spread is like a gluttonous enzyme that swallows up an anxiety, feeds on it, and then leaves it behind for new ones. When the Fukushima nuclear plant melted down after the enormous earthquake in Japan in March 2011, it only confirmed an existing anxiety that was looking for some content. In six months, some new concern will grip us: a pandemic, bird flu, the food supply, melting ice caps, cell-phone radiation.
The fear becomes a self-fulfilling prophecy, with the press reporting, as though it were a surprise, that young people are haunted by the very concerns about global warming that the media continually broadcast. As in an echo chamber, opinion polls reflect the views promulgated by the media.
We are inoculated against anxiety by the repetition of the same themes, which become a narcotic we can’t do without.
First you break down all resistance; then you offer an escape route to your stunned audience. Thus the advertising copy for the Al Gore documentary “An Inconvenient Truth” reads: “Humanity is sitting on a time bomb. If the vast majority of the world’s scientists are right, we have just ten years to avert a major catastrophe that could send our entire planet’s climate system into a tail-spin of epic destruction involving extreme weather, floods, droughts, epidemics and killer heat waves beyond anything we have ever experienced—a catastrophe of our own making.”
Here are the means that the former vice president, like most environmentalists, proposes to reduce carbon-dioxide emissions: using low-energy light bulbs; driving less; checking your tire pressure; recycling; rejecting unnecessary packaging; adjusting your thermostat; planting a tree; and turning off electrical appliances. Since we find ourselves at a loss before planetary threats, we will convert our powerlessness into propitiatory gestures, which will give us the illusion of action. First the ideology of catastrophe terrorizes us; then it appeases us by proposing the little rituals of a post-technological animism.
But let’s be clear: A cosmic calamity is not averted by checking tire pressure or sorting garbage.
Another contradiction in apocalyptic discourse is that, though it tries desperately to awaken us, to convince us of planetary chaos, it eventually deadens us, making our eventual disappearance part of our everyday routine. At first, yes, the kind of doom that we hear about—acidification of the oceans, pollution of the air—charges our calm existence with a strange excitement. But the certainty of the prophecies makes this effect short-lived.
We begin to suspect that the numberless Cassandras who prophesy all around us do not intend to warn us so much as to condemn us.
In classical Judaism, the prophet sought to give new life to God’s cause against kings and the powerful. In Christianity, millenarian movements embodied a hope for justice against a church wallowing in luxury and vice. But in a secular society, a prophet has no function other than indignation. So it happens that he becomes intoxicated with his own words and claims a legitimacy with no basis, calling down the destruction that he pretends to warn against.
You’ll get what you’ve got coming! That is the death wish that our misanthropes address to us. These are not great souls who alert us to troubles but tiny minds who wish us suffering if we have the presumption to refuse to listen to them. Catastrophe is not their fear but their joy. It is a short distance from lucidity to bitterness, from prediction to anathema.
Another result of the doomsayers’ certainty is that their preaching, by inoculating us against the poison of terror, brings about petrification. The trembling that they want to inculcate falls flat. Anxiety has the last word. We were supposed to be alerted; instead, we are disarmed. This may even be the goal of the noisy panic: to dazzle us in order to make us docile. Instead of encouraging resistance, it propagates discouragement and despair. The ideology of catastrophe becomes an instrument of political and philosophical resignation.
Mr. Bruckner is a French writer and philosopher whose latest book is “The Paradox of Love” (Princeton University Press, 2012). This article, translated by Alexis Cornel, is excerpted from the Spring 2012 issue of City Journal.
Ulric Lyons says:
April 10, 2012 at 6:34 am
There are two aspects to this. The first, as Mosher points out, is the extent to which local proxies can be treated as regional. This is easy to solve: compare two distant proxies, and where they correlate, the effects are regional. Local effects do not correlate.
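A toy sketch of that test, with invented series (a shared regional signal plus independent local noise at each site):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented anomaly series for two distant sites: a shared regional signal
# (a slow random walk) plus independent local noise at each site
regional = rng.normal(size=200).cumsum()
site_a = regional + rng.normal(scale=2.0, size=200)
site_b = regional + rng.normal(scale=2.0, size=200)

r = np.corrcoef(site_a, site_b)[0, 1]
print(f"correlation between the two sites: {r:.2f}")
```

Where the shared signal dominates, the correlation is high; purely local effects would drive it toward zero.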
The second aspect requires some help from H.G. Wells. What happens when a regional effect is missing from a proxy record? Does this mean the area never experienced the global effect or that the proxy is wrong?
I am with Nick Stokes on this one. Willis is being way too picky about this stuff. It’s the climate science for chrissakes. Willis is like some highbrow sportswriter picking apart a “pro-wrestling” performance. It’s entertainment, Willis. Lighten up!
Steve from Rockwood: please write rebuttal and send to Nature.
Voronoi diagrams strike me as a very elegant way of spreading / smearing / averaging what available data there is. At least with them, one is guaranteed that there is exactly one measurement per cell. Where the boundaries of the cell are depends on where the neighbouring datapoints are. If the datapoints are close together, then the cells are smaller, which naturally gives them less weight. Using Delaunay triangulation, one can then derive the area of the cell.
It's handy too where a datapoint drops out for a period of time. All that is done is to recalculate the surrounding cells.
Where it would be weak is where there is a very sparse dataset, in which case, one could have cells taking up huge areas, but that applies to any other methodology too. And at least each cell has a measurement.
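A rough sketch of the idea on a sphere, using scipy's SphericalVoronoi (the area calculation needs a reasonably recent scipy release). The proxy locations and values are invented for illustration:

```python
import numpy as np
from scipy.spatial import SphericalVoronoi

def to_xyz(lat, lon):
    """Unit vector on the sphere for a lat/lon point given in degrees."""
    phi, lmb = np.radians(lat), np.radians(lon)
    return [np.cos(phi) * np.cos(lmb), np.cos(phi) * np.sin(lmb), np.sin(phi)]

# Invented proxy locations and warming values, purely for illustration
lats = [72.6, 71.0, 36.0, -2.0, -41.0, 10.0]
lons = [-38.5, -40.0, 141.0, 147.0, 175.0, -60.0]
vals = np.array([30.0, 27.0, 5.0, 3.5, 4.0, 4.5])

pts = np.array([to_xyz(la, lo) for la, lo in zip(lats, lons)])

# One Voronoi cell per proxy; the cell's area on the unit sphere becomes its weight
sv = SphericalVoronoi(pts, radius=1.0, center=np.zeros(3))
sv.sort_vertices_of_regions()
areas = sv.calculate_areas()     # available in recent scipy releases

print("Voronoi-weighted average:", np.average(vals, weights=areas))
print("plain average:           ", vals.mean())
```

The two closely spaced northern points end up sharing what would otherwise be one big cell, so neither of them dominates.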
The focus should be scientifically, what is probably right and what is probably wrong.
NOT who is right and who is wrong.
“”””” Steven Mosher says:
April 9, 2012 at 1:34 pm
george, let us know when you discover spatial auto correlation. “””””
Ah! spatial auto correlation; I think you got me there, Steven.
In 1958 I signed up for a course in “Autocorrelation of Non-Integrable Discontinuous Functions”.
But then I played hooky, and went fishing on the day they gave the lecture; and of course I lost the text book, so I feel a big gap in my knowledge. I’ve always been puzzled about the non-simultaneity of spatial sampling of time varying functions too. I guess it can all be rectified by averaging; because you can always average ANY set of arbitrary real numbers, and get an average; as well as virtually any defined statistical parameter. Of course none of it relates to, or means anything real, but you can do the motions on the numbers as if it meant something.
Of course the very same thing applies to discrete autocorrelation. The mechanics of the calculation can be applied to any arbitrary numbers, just as can the mechanics of statistical mathematics, and so you will get an autocorrelation value, for spatial or any other functional variable you like. The problem is the same as statistics of arbitrary numbers. It doesn’t necessarily have any connection to anything real. You might as well count the total number of animals per square metre (larger than an ant say) and do statistics or autocorrelations on that, and make some learned report to the World Wildlife Federation.
So perhaps, Steven, since some of us missed the lecture, you could enlighten us about it.
Folks here at WUWT (and Willis Eschenbach in particular) seem to be totally unaware of an article that NASA's James Hansen wrote titled Enchanted Rendezvous: John C. Houbolt and the Genesis of the Lunar-Orbit Rendezvous Concept (Monographs in Aerospace History, Series 4, December 1995).
Fans of the history of science and engineering will find plenty of lessons in Hansen's article that are relevant to climate change. And you can bet, too, that present-day NASA administrators haven't forgotten the lesson that Houbolt and Hansen both preach. And even folks who disagree with Hansen's climate analysis will find that he does a terrific job of analyzing the processes by which NASA (at its best) reliably makes technical choices that lead to success, rather than disaster.
NASA/Hansen’s Simple Lesson: Nothing good comes of NASA administrators and astronauts over-ruling NASA scientists and engineers.
Just ask the Challenger astronauts, and the Apollo 1 astronauts, and the NASA administrators overseeing those tragic programs, about the catastrophes that have followed when NASA administrators, and NASA professional discipline, bowed to the pressures of politics, schedule, and budget.
It's significant too, that of more than 300 NASA astronauts, only seven signed the letter. The rest of the astronauts used common sense: Muzzle individual scientists? Bad idea, because very many scientists agree with Hansen. Does NASA want to be in the business of censoring scientists and engineers en masse? Muzzle selected ideas? That's a bad idea under all circumstances. And it's a *worse* idea when NASA administrators are the ones selecting the ideas to be muzzled.
Bottom Line: Quite properly, NASA will do nothing to muzzle its scientists and engineers.
RE
Don Monfort says:
April 10, 2012 at 9:24 am
I am with Nick Stokes on this one. Willis is being way too picky about this stuff. It’s the climate science for chrissakes. Willis is like some highbrow sportswriter picking apart a “pro-wrestling” performance. It’s entertainment, Willis. Lighten up!
——
It’s called Grand larceny, not entertainment.
RE
Steve from Rockwood says:
April 10, 2012 at 5:40 am
—————
Quite so Steve. That nailed it.
This idea that keeps cropping up among 'climate scientists', that linear interpolation of extremely sparse datasets is always appropriate whatever the sampling regime employed, is bizarre. Linear interpolation necessarily assumes that the existing samples are representative of the likely range of the variable under consideration. Given the gross imbalance in sampling of continental interiors versus coastal locations, and, frankly, the gross under-sampling of the Earth's surface in its totality, this assumption cannot hold. It is not interpolation – but extrapolation by another name.
It’s Fiction. Fantasy. Nonsense. Junk Science.
The Antarctic warming preceded the CO2 rise in the Antarctic, and the warming in Greenland preceded the CO2 rise in Greenland; however, the earth warmed due to the increase in CO2, because the warming in Antarctica and Greenland was just local warming. This is very much like the AGW interpretation of why there was no global MWP. I think it is more likely that during the last ice age the NH and the SH could have had very similar temperatures, because the continents were covered in ice in the NH, which would have prevented them heating up. I don't think CO2 drives the climate, and I don't think this paper shows that. I think Willis has made some good points about the statistics and the methods employed in this paper.
I am with Jimmy on this one. NASA will do nothing to muzzle its scientists and engineers, as long as they don’t deviate from the CAGW climate consensus party line. They are sensitive to threats to funding. And rightly so, I might add. (Am I doing OK, Jimmy?)
Yes Andrew, grand larceny too. I was thinking of a tragicomedy, along the lines of The Gang That Couldn't Shoot Straight. Or, The China Syndrome meets the Keystone Cops. But whatever it is, it ain't science.
The Navier-Stokes equations describe fluid flow with changes in temperature and density. They are non-linear, chaotic, and show sensitive dependence on initial conditions. That means a state trajectory with temperature 0.1 C will differ from a trajectory with 0.1001 C, with the difference between the trajectories doubling every few days. That has been known since Edward Lorenz's 1963 paper "Deterministic Nonperiodic Flow".
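That divergence is easy to demonstrate numerically. Here is a toy integration of the Lorenz (1963) system with two initial conditions differing by one part in ten thousand; the parameters and step size are the standard textbook choices, nothing specific to climate models:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) system."""
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

# Two trajectories whose initial conditions differ by one part in ten thousand
a = np.array([1.0, 1.0, 20.0])
b = np.array([1.0001, 1.0, 20.0])

for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 500 == 0:
        print(f"t = {step * 0.01:5.1f}   separation = {np.linalg.norm(a - b):.4f}")
```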
Because of the sensitive dependence on initial conditions, future states can not be predicted accurately from ANY finite set of past states. Future prediction is not possible. All we can do is react to the current states we measure. Any policy or procedure based on long term prediction of future states is either an error, in those with little knowledge, or a hoax from those who have greater scientific knowledge, (or perhaps both!). To the extent that global warming depends on predicting long term future states, it is wrong.