Proxy spikes: The missed message in Marcott et al

Story submitted by WUWT reader Nancy Green

There is a message in Marcott that I think many have missed. Marcott tells us almost nothing about how the past compares with today because of the resolution problem, and the authors recognize this in their FAQ. The probability function is specific to the resolution at which it is derived. Thus, you cannot infer the probability function for a high resolution series from a low resolution series, because you cannot recover a high resolution signal from a low resolution signal. Trying to do so produces nonsense.

However, what Marcott does tell us is still very important and I hope the authors of Marcott et al will take the time to consider. The easiest way to explain is by analogy:

Fifty years ago astronomers searched extensively for planets around other stars using low resolution equipment. They found none and concluded that they were unlikely to find any at the existing resolution. However, some scientists and the press generalized this further to say there were unlikely to be planets around other stars at all, because none had been found.

This is the argument that since we haven’t found 20th century equivalent spikes in low resolution paleo proxies, they are unlikely to exist. However, this is a circular argument and it is why Marcott et al has gotten into trouble. It didn’t hold for planets and now we have evidence that it doesn’t hold for climate.

What astronomy found instead was that as we increased the resolution we found planets. Not just a few, but almost everywhere we looked. This is completely contrary to what the low resolution data told us and this example shows the problems with today’s thinking. You cannot use a low resolution series to infer anything reliable about a high resolution series.

However, the reverse is not true. What Marcott is showing is that in the high resolution proxies there is a temperature spike. This is equivalent to looking at the first star with high resolution equipment and finding a planet. Finding a planet around the first star tells us we are likely to find planets around many stars.

Thus, what Marcott is telling us is that we should expect to find a 20th century type spike in many high resolution paleo series. Rather than being an anomaly, the 20th century spike should appear in many places as we improve the resolution of the paleo temperature series. This is the message of Marcott and it is an important message that the researchers need to consider.

Marcott et al: You have just looked at your first star with high resolution equipment and found a planet. Are you then to conclude that since none of the other stars show planets at low resolution, that there are no planets around them? That is nonsense. The only conclusion you can reasonably make is that as you increase the resolution of other paleo proxies, you are more likely to find spikes in them as well.
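To make the resolution point concrete, here is a minimal sketch in Python with made-up numbers (nothing from Marcott’s actual data): a one-degree, century-long spike in an annual-resolution series nearly vanishes once the series is averaged at a proxy-like 300-year resolution.

```python
# A minimal sketch with invented numbers: a brief temperature spike in a
# high-resolution series largely disappears once the series is averaged
# at proxy-like (~300-year) resolution.
import numpy as np

years = np.arange(0, 11000)                      # 11,000 years, annual resolution
temps = np.zeros_like(years, dtype=float)
temps[5000:5100] += 1.0                          # a 1 deg.C spike lasting 100 years

# Crude low-resolution "proxy": average over non-overlapping 300-year bins
bin_width = 300
n_bins = len(years) // bin_width
low_res = temps[:n_bins * bin_width].reshape(n_bins, bin_width).mean(axis=1)

print("Spike height at annual resolution: %.2f deg.C" % temps.max())
print("Spike height after 300-year averaging: %.2f deg.C" % low_res.max())
# ~1.00 vs ~0.33: the averaged series greatly attenuates the spike.
```

The spike is still there in the underlying series; the low resolution version simply cannot resolve it, which is why its absence from low resolution proxies says so little.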

==============================================================

As a primer for this, our own “Charles the Moderator” submitted this low resolution Marcott proxy plot with Jo Nova’s plot of the Vostok ice core proxy overlaid to match the time scale. Yes, the vertical scales don’t match (the tick spacing and offsets differ numerically), but this image is solely for entertainment purposes in the context of this article, and it does make the point visually.

Spikes anyone? – Anthony

[Figure: Marcott low resolution proxy reconstruction with Jo Nova’s Vostok ice core plot overlaid]

(Added) Study: Recent heat spike unlike anything in 11,000 years. Research released Thursday in the journal Science uses fossils of tiny marine organisms to reconstruct global temperatures …. It shows how the globe for several thousands of years was cooling until an unprecedented reversal in the 20th century. — Seth Borenstein, The Associated Press, March 7th

Note: If somebody can point me to a comma delimited file of both the Marcott and Vostok datasets, I’d be happy to add a plot on a unified axis, or if you want to do one, leave a link to the finished image in comments using a service like Tinypic, Imageshack or Flickr. – Anthony


219 Comments
richardscourtney
April 6, 2013 2:22 am

Mark Bofill, Werner Brozek and everybody except sceptical:
The GLOBAL temperature does rise and fall 3.8deg.C during each year. And Hemispheric temperatures seasonally fluctuate much more than this.
I will explain this later in this post and try to do it in language sceptical may be capable of understanding.
At April 4, 2013 at 8:55 pm sceptical wrote a post addressed to me which said in total

richardscourtney,
I really tried to get through your whole post but was unable to. I had to stop reading after you wrote, “Each year mean global temperature rises by 3.8 deg.C from June to January and falls by 3.8 deg.C from January to June.
So, the warming which you want to exaggerate is about a fifth of the rise experienced during 6 months of each year.”
Please, could you reference where you got this from. I am at a loss as to how you came up with that number but feel it explains so much about your posts.

That post states that
(a) an anonymous troll chooses not to read information I provide because the troll does not like the information
but
(b) the troll demands I provide him/her/them/it with more information for the troll to not read.
I chose to ignore that nonsense because I have better things to do than to flatter the ego of an anonymous troll by spending time providing references the troll says he/she/they/it will not read.
At April 5, 2013 at 5:39 pm the troll again demanded that I provide the reference for him/her/them/it to not read.
I will not do that. Instead, I intended to provide information which would enable the troll to use the ‘Wayback Machine’ to find it. Thus, if the troll really wanted the information then the troll could do the homework which the troll demanded of me.
However, that intention has been prevented by Werner who – in his typically useful, helpful and informative manner – has provided a link which I did not know of and which is one such requested reference. I am grateful for the link but disappointed that the troll has had his/her/their/its question answered without being required to do any homework.
THE REASON GLOBAL TEMPERATURE VARIES THROUGHOUT EACH YEAR
The Earth is warmed by the Sun and the Earth goes round the Sun. This passage around the Sun is called the Earth’s orbit.
The Earth’s orbit is an ellipse – not a circle – so the Earth moves towards and away from the Sun as it makes an orbit. One orbit is one year.
The Earth obtains more warming from the Sun when it is nearest to the Sun than when it is furthest from the Sun. The Earth is nearest to the Sun in December/January and furthest in June/July.
The Earth gets hotter when most warmed: this is similar to a person getting warmer when moving nearer to a fire.
The Earth’s axis is tilted relative to the perpendicular to the plane of the Earth’s orbit. Ooops! Strike that last sentence: clearly, the troll will not ‘get it’. I will try again.
The Earth leans over so its Northern half gets more Sun for half the year, and this is called Northern Hemisphere summer. And the Earth’s Southern half gets more Sun for the other half of the year, and this is called Southern Hemisphere summer.
The Northern Hemisphere is mostly covered in land and the Southern Hemisphere is mostly covered in oceans. Oceans are made of water, and water needs a lot more heat than land to warm by the same amount. This means the average temperature of the Northern Hemisphere varies more than the average temperature of the Southern Hemisphere throughout the year.
The average temperature of the Earth is the average including both the Northern and Southern Hemispheres.
Variation in heating of the Earth around its orbit and the different responses of the Earth’s Hemispheres combine to give a variation of the average temperature of the globe throughout the year. This variation is +/- 3.8 degrees Celsius with the Earth being hottest in January of each year.
Global temperature is usually quoted as ‘anomalies’. These are the differences in temperature from the Earth’s average temperature over a period of 30 years. The anomalies for an individual month (e.g. April) are obtained as differences from an average of 30 of those months (e.g. 30 Aprils).
So, in a year when January and June have the same global temperature anomaly (e.g. 0.0 deg.C) then the global average temperature in January was 3.8 deg.C higher than in June.
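A minimal numerical sketch of that arithmetic (the baselines below are invented; only the 3.8 deg.C difference between them matters):

```python
# Invented 30-year baseline means for two months that differ by 3.8 deg.C
baseline_month_a = 12.0   # deg.C, hypothetical baseline for one month
baseline_month_b = 15.8   # deg.C, hypothetical baseline for another month

# Suppose this year each month happens to equal its own baseline
actual_month_a = 12.0
actual_month_b = 15.8

anomaly_a = actual_month_a - baseline_month_a   # 0.0 deg.C
anomaly_b = actual_month_b - baseline_month_b   # 0.0 deg.C

print(anomaly_a, anomaly_b)                     # both anomalies are 0.0
print(actual_month_b - actual_month_a)          # yet the absolute difference is 3.8 deg.C
```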
In times past NASA GISS posted actual monthly global temperatures on its web site. NASA GISS removed this information from its web site when it was pointed out (e.g. by me) that the twentieth century global warming was put into perspective by this information on global temperature variation throughout each year.
Global temperature rose ~0.8 deg.C throughout the twentieth century. This is about a fifth of the rise in global temperature which occurs during 6 months of each year.
NASA GISS now only posts global temperature anomalies.
Richard

richardscourtney
April 6, 2013 6:44 am

Werner Brozek:
Thankyou for your post addressed to me at April 5, 2013 at 5:47 pm.
Following several attempts to answer your specific questions in different ways – and cognisant of the need to use language comprehensible to onlookers – I have decided to provide this general answer in hope that it adequately covers all the issues you raise.
A full and proper consideration of these issues requires reference to text books concerning the use of statistical procedures as part of the philosophy of science. This very brief answer is my attempt at an overall view of confidence limits.
Nothing is known with certainty, but some things can be inferred from a data set to determine probabilities of being ‘right’ and ‘wrong’. These probabilities are the ‘confidence’ which can be stated.
As illustration, consider a beach covered in pebbles.
There are millions of the pebbles. A random sample of, say, 100 pebbles is collected and each pebble is weighed. This provides 100 measurements each of the weight of an individual pebble. From this an average weight of a pebble can be calculated. One such average is the ‘mean’ and it is obtained by dividing the total weight of all the pebbles by the number of pebbles (in this case, dividing by 100).
The pebbles on the beach can then be said to have the deduced mean weight.
However, none of the pebbles in the sample may have a weight equal to the obtained average. In fact, none of the millions of pebbles on the beach may have a weight equal to that average. Any average – including a mean – is a statistical construct and is not reality.
(This difference of an average from reality is demonstrated by the average – i.e. mean – number of legs on people. On average people have fewer than two legs because nobody has more than two and a few people have fewer than two.)
In addition to considering the mean weight of pebbles in the sample, one can also determine the ‘distribution’ of weights in the sample. Every pebble may be within 1 gram of the mean weight. In this case, there is high probability that any pebble collected from the beach will have a weight within 1 gram of the obtained average. But that does NOT indicate there are no pebbles on the beach which are 10 grams heavier than the obtained average. (This leads to the wider discussions of sampling and randomness which I am ignoring.)
Importantly, there is likely to be a distribution of weights such that most pebbles in the sample each have a weight near the mean weight, and a few pebbles have weights much lower and much higher than that average. This may provide a symmetric distribution of weights within the sample. However, the sample may not have a symmetric distribution because no pebble can weigh less than nothing, but a few pebbles may be much, much heavier than the mean weight: in this case, the sample is said to be ‘skewed’.
Assuming the sample is symmetric then it is equally likely that a pebble will be within a range of weights heavier or lighter than the mean weight. (If the sample is skewed in the suggested manner then the likely range of weights heavier than mean weight is greater than the likely range of weights lighter than that average.) These ranges are the + and – ‘errors’ of the average and have determined probabilities.
The probable error range of +/-X at 99% confidence says that 99 out of 100 pebbles will probably be within X of the mean.
The probable error range of +/-X at 95% confidence says that 95 out of 100 (i.e. 19 out of 20) pebbles will probably be within X of the mean.
The probable error range of +/-X at 90% confidence says that 90 out of 100 (i.e. 9 out of 10) pebbles will probably be within X of the mean.
etc.
And that is all the confidence limits say; nothing more.
Therefore, if the weights of two pebbles are within the probable error range then at the stated confidence they cannot be distinguished as being different from the sample mean. And if a ‘heavy’ pebble has a weight within the probable error then that pebble’s weight says nothing about the sample mean.
In the above example, the sample mean is a statistical construct obtained from individual measurements of pebbles. And it is meaningless in the absence of a probable error with stated confidence: such an absence removes any indication of what the weight of a pebble from the beach is likely to be. And every pebble has equal chance of being within the +/- range of the mean.
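As a minimal sketch of the above (an entirely made-up pebble population, just to fix the idea):

```python
import numpy as np

rng = np.random.default_rng(0)
# An entirely made-up, skewed "beach" of one million pebble weights in grams
beach = rng.gamma(shape=4.0, scale=10.0, size=1_000_000)

sample = rng.choice(beach, size=100, replace=False)  # a random sample of 100 pebbles
mean = sample.mean()
std = sample.std(ddof=1)

# Roughly 95% of pebbles are expected within about two standard deviations of the mean
lo, hi = mean - 2.0 * std, mean + 2.0 * std
fraction_within = np.mean((beach >= lo) & (beach <= hi))

print(f"sample mean = {mean:.1f} g, ~95% range = ({lo:.1f}, {hi:.1f}) g")
print(f"fraction of all pebbles actually inside that range: {fraction_within:.3f}")
```

The exact fraction depends on the shape of the population, which is the point about skew made above.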
The linear trend of a time series is also a statistical construct obtained from individual measurements. It has confidence limits with stated probability and it, too, is meaningless without such confidence limits.
At stated confidence a trend is equally likely to have any value within its limits of probable error.
(This is the same as any pebble is equally likely to have any weight within its limits of probable error from the mean weight.)
Therefore, if the trend is 0.003 ±0.223 °C/decade at 95% confidence then there is a 19:1 probability that the trend is somewhere between
(0.003-0.223) = 0.220 °C/decade and (0.003+0.223) = 0.226 °C/decade.
And, in this example, 0.220 °C/decade is not discernibly different from 0.226 °C/decade or from any value between them. There is a range of values from 0.220 °C/decade to 0.226 °C/decade which are not discernibly different. And similar is true for all probable error ranges.
Importantly, this lack of discernible difference is not affected by whether or not the range straddles 0.000 °C/decade.
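As a minimal sketch of what such limits mean in practice (synthetic monthly anomalies with no underlying trend at all; the numbers are invented, not taken from any data set):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
months = np.arange(16 * 12)                          # 16 years of monthly anomalies
anomalies = rng.normal(0.0, 0.15, size=months.size)  # no underlying trend at all

res = stats.linregress(months, anomalies)
slope_per_decade = res.slope * 120                   # convert deg.C/month to deg.C/decade
ci95 = 1.96 * res.stderr * 120                       # approximate 95% confidence limits

print(f"trend = {slope_per_decade:+.3f} +/- {ci95:.3f} deg.C/decade (95% confidence)")
# The +/- range will almost always straddle zero here, and every value inside
# it -- positive, negative or zero -- is statistically indistinguishable from
# every other value inside it.
```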
I hope this is adequately clear.
Richard

Mark Bofill
April 6, 2013 7:13 am

Werner, Richard, thanks. It’s always a good day when I learn something.

richardscourtney
April 6, 2013 7:49 am

Friends:
I made a stupid typing error in my post at April 6, 2013 at 6:44 am.
This does not affect my argument but may cause confusion.
I wrote:

Therefore, if the trend is 0.003 ±0.223 °C/decade at 95% confidence then there is a 19:1 probability that the trend is somewhere between
(0.003-0.223) = 0.220 °C/decade and (0.003+0.223) = 0.226 °C/decade.
And, in this example, 0.220 °C/decade is not discernibly different from 0.226 °C/decade or from any value between them. There is a range of values from 0.220 °C/decade to 0.226 °C/decade which are not discernibly different. And similar is true for all probable error ranges.

It seems that another of my keys is getting dodgy.
Obviously, I should have written
Therefore, if the trend is 0.003 ±0.223 °C/decade at 95% confidence then there is a 19:1 probability that the trend is somewhere between
(0.003-0.223) = -0.220 °C/decade and (0.003+0.223) = 0.226 °C/decade.
And, in this example, -0.220 °C/decade is not discernibly different from 0.226 °C/decade or from any value between them. There is a range of values from -0.220 °C/decade to 0.226 °C/decade which are not discernibly different. And similar is true for all probable error ranges.
Sorry.
Richard

Werner Brozek
April 6, 2013 8:32 am

Hello Richard, Thank you for your posts at 6:44 and 7:49. All is clear there. However in your post from 2:22, you had the warm and cold months backwards. The earth is actually hottest when furthest from the sun.
You wrote: “This variation is +/- 3.8 degrees Celsius with the Earth being hottest in January of each year.”
It should read: This variation is +/- 3.8 degrees Celsius with the Earth being hottest in July of each year.

Nancy Green
April 6, 2013 9:09 am

richardscourtney says:
April 6, 2013 at 2:22 am
“Each year mean global temperature rises by 3.8 deg.C from June to January and falls by 3.8 deg.C from January to June.”
Global temperature rose ~0.8 deg.C throughout the twentieth century. This is about a fifth of the rise in global temperature which occurs during 6 months of each year.
=============
Wow! I’d forgotten all about this. That certainly puts the “alarm” over global warming into perspective!! Could one use this 3.8C to provide a ballpark estimate as to natural variability?
What if one considered this 3.8C year to year to be similar to waves on the ocean? With no change in forcings we would expect some waves to be naturally smaller and others to be naturally larger. We already have a good body of work to estimate peak waves, so in theory it might work here.
I’ll leave it to others to do the formal math. Here is a quick estimate. The standard deviation of the data is the square root of the variance. Thus:
68% = (sqrt(3.8/2)) = 1.4 C
95% = 2(sqrt(3.8/2)) = 2.8 C
99.7% = 3(sqrt(3.8/2)) = 4.1 C
This would suggest that 0.8 C is well within natural variability. Most of the time variability should be within 1.4 C, but we shouldn’t be surprised by a natural variability of 2.8 C, and in extreme cases we could see 4.1 C.
This would suggest that any attempt to keep average temperatures within 2.0 C is futile. I’d be interested in comments. Does this guesstimate seems reasonable? Maybe there is another paper here?
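For anyone who wants to check the arithmetic, here is the guesstimate spelled out (with the big assumption, noted above, that half the 3.8 C annual swing can be treated as a variance):

```python
# A minimal sketch reproducing the back-of-envelope arithmetic above, which
# treats half the 3.8 deg.C annual swing as a variance (a large assumption).
import math

annual_swing = 3.8                     # deg.C, peak-to-peak seasonal variation
sigma = math.sqrt(annual_swing / 2.0)  # ~1.38 deg.C

for k, label in [(1, "68%"), (2, "95%"), (3, "99.7%")]:
    print(f"{label}: +/- {k * sigma:.1f} deg.C")
# 68%: +/- 1.4, 95%: +/- 2.8, 99.7%: +/- 4.1  (matching the figures quoted above)
```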

phi
April 6, 2013 12:02 pm

Climate variability is much lower than that. Decadal averages are a good parameter with respect to the effect on the economy. At this scale sigma is 2 or 3 tenths of a degree at the regional level. It has probably been thus for several millennia, including the twentieth century. 0.8 °C of secular warming is a misunderstanding. Weather station thermometers do not measure changes in regional temperatures, while proxies do the job, if only vaguely.

richardscourtney
April 6, 2013 12:09 pm

Werner Brozek:
Thank you for your post at April 6, 2013 at 8:32 am.
I am writing both to thank you for the correction and to draw attention to it.
Richard

richardscourtney
April 6, 2013 12:57 pm

Nancy Green and phi:
I am replying to your posts at April 6, 2013 at 9:09 am and April 6, 2013 at 12:02 pm, respectively.
Global climate variability is a statistical construct and – as I explained in my “statistics primer” post to Werner at April 6, 2013 at 6:44 am – a statistical construct is not ‘real’: it is a formulation of the definition used to obtain it.
Hence, comments such as “Climate variability is much lower than that” are statements of what the commenter means by “climate variability”.
Transition from a glacial to an interglacial state is a variation in global climate.
This poses several problems.
Firstly, what definition of “climate variability” is appropriate?
There are several global climate parameters which could be assessed as indicators of climate variability; e.g. temperature, total system heat content, precipitation, etc.
If global temperature were chosen then the seasonal variation each year would indicate that ~4deg.C is well within natural variability because that variation occurs each year.
Using global temperature anomaly enables political objectives such as avoiding a 2deg.C rise. In global temperature terms this is hopeless because global temperature rises by double that during each year.
However, global temperature anomaly has been chosen. This seems to be a strange choice that has been made to advance political – not scientific – objectives.
The second problem is that the chosen indicator of climate variability is certainly not a good choice. It is based on the assumption of AGW and not on any real science.
It is difficult to see a problem if the global temperature anomaly were to rise by 2deg.C when the actual global temperature varies by 4deg.C each year.
However, an increase to the ‘seasonal’ variation by 4deg.C would be problematic but global temperature anomaly may not indicate it: colder winters would cancel hotter summers in a calculated anomaly.
Simply, until a sensible, rational and scientific definition of climate variability is obtained there cannot be a sensible, rational and scientific determination of climate variability.
Several people have done similar calculations to those in Nancy’s post.
And several people have used the change in solar radiative forcing during each year in an attempt to estimate climate sensitivity.
But I think all such calculations are pointless. The basic point is that ‘climate variability’ is a statistical construct. No such construct is ‘real’: it is an expression of its definition (as I explained in my post at April 6, 2013 at 6:44 am). And nobody has provided a rational definition of ‘climate variability’.
But I can, do, and will say that a rise in global temperature anomaly of 2deg.C would be very unlikely to have discernible effects when global temperature varies by double that each year.
Anyway, those are my thoughts on the subject, and I hope they are helpful to your thoughts.
Richard

D B Stealey
April 6, 2013 1:03 pm

sceptical,
Richard Courtney’s April 4, 2013 at 2:35 pm comment on the Null Hypothesis is accurate: current climate parameters, including global temperatures, precipitation, extreme weather events, etc., are neither unusual nor unprecedented. All current climate parameters have been exceeded during the Holocene, therefore the Null Hypothesis has not been falsified.
Richard also makes a cogent point when he states that the change in global temperature over the past century and a half has been extremely small. It is not unusual for global temperatures to fluctuate by tens of degrees, on decadal time scales. A period of more than 150 years with only a minuscule 0.8ºC temperature fluctuation is unusually benign. We are fortunate to be living in this “Goldilocks” climate. Things could be much worse.
Finally, there is no scientific evidence showing that CO2 is the major cause of global warming. In fact, there is no empirical evidence showing that CO2 is the cause of any global warming. It is my personal belief that AGW may exist as a minor forcing. But that is only my belief, as there are no testable measurements showing that CO2 causes global warming. Without any real world measurements, my belief is merely a conjecture.
In any case, the possible effect of CO2 can be completely disregarded, as it is at best a 3rd-order forcing — which is swamped by second-order forcings, and which are in turn swamped by first-order forcings. Observing a putative AGW effect in that context shows how very inconsequential it is.

Nancy Green
April 6, 2013 1:41 pm

phi says:
April 6, 2013 at 12:02 pm
Climate variability is much lower than that.
============
The unexpected leveling of temps over the past 16 years suggests that natural variability has been grossly under-estimated — which has led mainstream climate science astray. As a result of this poor estimate, climate science mistakenly attributes almost every change to human activity. This has caused the climate models to go off the rails.
A more reasoned explanation is that their estimates of natural variability are wrong. Given that we see a 3.8 C variation in the overall temp of the earth’s surface every year, it is hard to believe this doesn’t induce some sort of instability in the climate system. At the very least this oscillation should induce all sorts of harmonics and sympathetic vibrations that should be visible at scales much larger than a year.
One obvious answer is that by reducing temperatures to anomalies climate science has hidden the true magnitude of this vibration. In effect climate science has placed a set of noise-cancelling headphones on the climate data, which has fooled the models into underestimating the amount of noise in the system.
note: my ballpark guesstimate of 1.9 as the variance looks a bit low. 2.0 would probably have been better, but it doesn’t significantly affect the result.

Nancy Green
April 6, 2013 1:56 pm

richardscourtney says:
April 6, 2013 at 12:57 pm
===========
Thanks Richard, that makes a great deal of sense.
My intuition tells me that if the difference in temperature over 1 year was closer to 0 C than 3.8 C, there would be less variability in the temps year to year, decade to decade. If instead of 3.8 the difference was 13.8 C then I would expect the variability to be greater year to year and decade to decade.
Is there any indication that this might be the case?

richardscourtney
April 6, 2013 2:27 pm

Nancy Green:
In your post at April 6, 2013 at 1:41 pm you say

Given that we see a 3.8 C variation in the overall temp of the earth’s surface every year, it is hard to believe this doesn’t induce some sort of instability in the climate system. At the very least this oscillation should induce all sorts of harmonics and sympathetic vibrations that should be visible at scales much larger than a year.

YES!
I have been saying this in many places – including on WUWT – for years.
re. your question to me in your post at April 6, 2013 at 1:56 pm.
I share your intuition, but I know of no indication that it is correct and do not know what such an indication would be.
Please note that the concept of radiative forcing as the driver of climate change is completely without demonstration: it may be correct but has been adopted without any indication it is right.
The climate system exhibits bi-stability (i.e. stable in glacial and interglacial states). This is indicative of the chaotic system having two strange attractors. If this possibility is the reality then the concepts of climate variability and radiative forcing are both mistaken. All we see is the system constantly adjusting to its acting attractor while constantly being disturbed in that adjustment by the oscillations you mention in your paragraph I have quoted.
In my opinion ‘climate scientists’ have forgotten that the most profound scientific statement is
“We don’t know”. And climate science has been halted in its advance for three decades because they have forgotten it.
Richard

Werner Brozek
April 6, 2013 4:17 pm

Nancy Green says:
April 6, 2013 at 1:56 pm
My intuition tells me that if the difference in temperature over 1 year was closer to 0 C than 3.8 C, there would be less variability in the temps year to year, decade to decade.
richardscourtney says:
April 6, 2013 at 2:27 pm
I share your intuition, but I know of no indication that it is correct and do not know what such an indication would be.
What I would often tell my physics students was to imagine an extreme situation and see what conclusions you would draw from it. So in this case, let us assume two very different scenarios. In one case, assume the earth is not tilted on its axis at all and that the orbit is a perfect circle. Then the temperature difference would be closer to 0. In the other case, assume the earth is tilted at 90 degrees instead of 23.5. And also assume the orbit is highly elliptical so the distance varies like a comet’s throughout the year. That would obviously give a huge variation! And in terms of our present discussion, I do not see how very wild swings in anomalies from year to year can be avoided. When put into its proper perspective as Richard has done, an increase of 2 C does not seem that much any more. Thank you for that!
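A minimal sketch of the orbit part of that thought experiment (only the inverse-square dependence of solar flux on distance; tilt and the land/ocean asymmetry are left out):

```python
# Top-of-atmosphere solar flux scales as 1/r^2, so orbital eccentricity alone
# sets a floor on the annual variation in heating.
def flux_variation(eccentricity):
    """Ratio of perihelion flux to aphelion flux for a given eccentricity."""
    r_min = 1.0 - eccentricity   # perihelion distance (units of the semi-major axis)
    r_max = 1.0 + eccentricity   # aphelion distance
    return (r_max / r_min) ** 2

for e, name in [(0.0, "perfect circle"), (0.0167, "Earth today"), (0.5, "comet-like")]:
    print(f"{name:15s} e={e:<6} flux ratio = {flux_variation(e):.2f}")
# perfect circle: 1.00 (no variation), Earth today: ~1.07 (about 7%), comet-like: 9.00
```

Even today’s nearly circular orbit gives roughly 7% more sunshine at perihelion (early January) than at aphelion (early July).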

Nancy Green
April 6, 2013 11:10 pm

richardscourtney says:
April 6, 2013 at 2:27 pm
The climate system exhibits bi-stability
==========
The 600 million year paleo record certainly shows this, with stable states at 11C and 22C. This suggests our present state of 14.5C is inherently unstable, and that our present “natural variability” could be quite large and highly non-linear. It also suggests that most standard statistical methods will deliver spurious results.
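Here is a toy sketch of what such bi-stability can look like (a noise-driven double-well toy, not a climate model; the 11 C and 22 C values are simply the two states mentioned above):

```python
import numpy as np

rng = np.random.default_rng(2)
cold_state, warm_state = 11.0, 22.0           # the two stable states, deg.C
midpoint = 0.5 * (cold_state + warm_state)

temperature = 14.5                            # start near today's ~14.5 C
trace = []
for _ in range(20000):
    # Weak pull toward whichever stable state is nearer, plus random "weather" noise
    target = cold_state if temperature < midpoint else warm_state
    temperature += 0.02 * (target - temperature) + rng.normal(0.0, 0.45)
    trace.append(temperature)

trace = np.array(trace)
print("fraction of time nearer the 11 C state:", round(float(np.mean(trace < midpoint)), 2))
print("fraction of time nearer the 22 C state:", round(float(np.mean(trace >= midpoint)), 2))
# The trajectory lingers near one state and occasionally "flickers" to the other.
```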

richardscourtney
April 7, 2013 1:51 am

Nancy Green:
I am between duties I have to perform today so I apologise if this post is perfunctory or I fail to give adequately quick replies to further messages to me today. I will certainly try to ‘catch up’ this evening.
Firstly, I take this opportunity to say that I like your desire to evaluate: it is lack of this desire which most offends me about the bulk of what is called ‘climate science’. Thank you.
I am replying to your post addressed to me at April 6, 2013 at 11:10 pm. It says

richardscourtney says:
April 6, 2013 at 2:27 pm

The climate system exhibits bi-stability

==========
The 600 million year paleo record certainly shows this, with stable states at 11C and 22C. This suggests our present state of 14.5C is inherently unstable, and that our present “natural variability” could be quite large and highly non-linear. It also suggests that most standard statistical methods will deliver spurious results.

I agree.
Indeed, in my opinion the problem is greater than you suggest.
Transition between the two states has been in the form of ‘flickers’ which shift between the states in periods of a few decades. The system often switched between the two states in a series of flickers until it stabilised in one or other of the states.
This poses the questions as to why the two stable states exist and why we are not now in one of them.
I repeat that ‘climate science’ has stalled because it has forgotten the importance of recognising what is known to be not known. A starting point for understanding the variability of the existing global climate would be an attempt to understand the constraints (i.e. true boundary conditions) of the climate system in each of its observed states including that which now exists.
Until these constraints are understood then factors which could shift the system to its 11C or 22C states cannot be known. I usually try to avoid this subject because it excuses scare-mongering about imagined ‘tipping points’. However, as you say, it has fundamental importance to any investigation of climate changes which may or may not happen.
It also raises your concern about statistical analysis. Climatology is a statistical science by definition, but ‘climate science’ as currently practiced displays astonishing ignorance of fundamental statistical limitations. For example, a NASA GISS scientist has come to WUWT and made statements about confidence limits that display a total lack of understanding concerning what confidence limits do – and do not – indicate.
Linear trends are the standard method for assessing climate changes but – as you say – non-linearity is the norm for all climate behaviours. This alone justifies your suggestion that “most standard statistical methods will deliver spurious results”. Applying an assumption of linearity to a non-linear process is guaranteed to provide “spurious results”.
There are real problems with ‘climate science’ and your post I am answering highlights them. Science is about seeking the closest approximation to ‘truth’ which we can obtain. But ‘climate science’ as currently practiced is about providing evidence which shows AGW is a problem. Indeed, this thread is about your very fine explanation of why one attempt to provide such evidence is flawed according to basic principles of science, logic and statistics.
Reality is often a problem (as anybody exposed to a hurricane will say) and it is time to consider what the realities of climate are. As D B Stealey says, we are in a “Goldilocks climate”, and I think it would be useful to know why because the 11C and 22C climates are more typical of past global climates.
Is the present climate state stable or likely to switch to one of the other states? I don’t know. Nobody knows. And until we understand the nature of existing climate nobody can know or can knowledgably apply appropriate statistical analyses of climate.
Adopting and promoting pet theories of climate – be they radiative forcing, chaotic attractors, AGW, or anything else – is pure hubris. Nemesis followed Hubris.
Richard

Nancy Green
April 7, 2013 7:47 am

richardscourtney says:
April 7, 2013 at 1:51 am
Applying an assumption of linearity to a non-linear process is guaranteed to provide “spurious results”.
==============
The problems inherent in solving multivariate non-linear equations make it very tempting to approximate the problems using linear methods. And in many industries this can make sense – because you can validate the results to see if the approximation is valid.
One example that comes to mind is the oil industry and ground penetrating sonar data. A fast solution can be found by iteration of linear least squares approximations. If your result is not accurate you won’t find oil and you are quickly out of business. If your method is valid you will find oil and your product will be in demand. This quickly eliminates errors.
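As a minimal sketch of that “iterate linear least squares” idea (a generic curve fit, nothing to do with any real exploration code): linearise the non-linear model around the current guess, solve the small linear problem, and repeat.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 5.0, 50)
true_a, true_k = 2.0, 0.8
y = true_a * np.exp(-true_k * t) + rng.normal(0.0, 0.05, t.size)  # noisy "measurements"

a, k = 1.0, 1.0                                     # initial guess for the parameters
for _ in range(20):                                 # Gauss-Newton style iterations
    model = a * np.exp(-k * t)
    residual = y - model
    # Jacobian of the model with respect to (a, k) at the current guess
    J = np.column_stack([np.exp(-k * t), -a * t * np.exp(-k * t)])
    step, *_ = np.linalg.lstsq(J, residual, rcond=None)  # the linear least squares sub-problem
    a, k = a + step[0], k + step[1]

print(f"fitted a = {a:.2f}, k = {k:.2f} (true values were 2.0 and 0.8)")
```

The shortcut is only trustworthy because the fit can be validated against the measurements afterwards, which is exactly the feedback loop argued below to be missing in climate work.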
Similarly in weather forecasting: if your forecast is no good this will quickly become apparent. Indeed, in Australia it was discovered that simply forecasting yesterday’s actual weather for today was more accurate than the government weather bureau’s forecasts.
However, in climate science there is no FACTUAL feedback loop to separate bad science from good science, because of the 30 year lag between weather and climate. By the time someone discovers your science is worthless you are retired. The only true feedback comes in the form of other people’s OPINIONS about your work, which makes the field inherently political. It matters not how accurate your results are; rather, what matters is how accurate people believe they are.
This leads to the “David Copperfield” effect in climate science. Dress up a simple trick in an elaborate stage setting and people will forget that they are seeing an illusion.

richardscourtney
April 7, 2013 10:13 am

Nancy Green:
re your reply to me at April 7, 2013 at 7:47 am.
Yes. You are right and I stand corrected. I should have written;
Applying an assumption of linearity to a non-linear process is guaranteed to provide “spurious results” unless the approximation of linearity can be validated empirically.
However, I don’t feel chastened because I was discussing ‘climate science’ and – as you say – in that context the practicalities of the required empirical validation are usually insurmountable.
Anyway, I take your point and thank you for the correction.
And I like your point about “no factual feedback loop to separate bad science from good science”.
I will use it if you don’t mind my using it.
Richard

Keitho
Editor
April 9, 2013 6:39 am

This seems to have a grip on what was done too …
