Circular Logic not worth a Millikelvin

Guest post by Mike Jonas

A few days ago, on Judith Curry’s excellent ClimateEtc blog, Vaughan Pratt wrote a post, “Multidecadal climate to within a millikelvin”, which provided the content and underlying spreadsheet calculations for a poster presentation at the AGU Fall Conference. I will refer to the work as “VPmK”.

VPmK was a stunningly unconvincing exercise in circular logic – a remarkably unscientific attempt to (presumably) provide support for the IPCC model[s] of climate – and should be retracted.

Background

The background to VPmK was outlined as “Global warming of some kind is clearly visible in HadCRUT3 [] for the three decades 1970-2000. However the three decades 1910-1940 show a similar rate of global warming. This can’t all be due to CO2 []“.

The aim of VPmK was to support the hypothesis that “multidecadal climate has only two significant components: the sawtooth, whatever its origins, and warming that can be accounted for 99.98% by the AHH law []“,

where

· the sawtooth is a collection of “all the so-called multidecadal ocean oscillations into one phenomenon“, and

· AHH law [Arrhenius-Hofmann-Hansen] is the logarithmic formula for CO2 radiative forcing with an oceanic heat sink delay.

The end result of VPmK was shown in the following graph


Fig.1 – VPmK end result.

where

· MUL is multidecadal climate (ie, global temperature),

· SAW is the sawtooth,

· AGW is the AHH law, and

· MRES is the residue MUL-SAW-AGW.

Millikelvins

As you can see, and as stated in VPmK’s title, the residue was just a few millikelvins over the whole of the period. The smoothness of the residue, but not its absolute value, was entirely due to three box filters being used to remove all of the “22-year and 11-year solar cycles and all faster phenomena“.
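For readers unfamiliar with the term, a box filter is simply a centered moving average, and cascading three of them suppresses everything with a period at or below roughly the longest window. A minimal Python sketch of the idea (the window widths are illustrative, not necessarily those used in the VPmK spreadsheet):

import numpy as np

def box_filter(series, width):
    # A box filter is just a centered moving average of the given width.
    kernel = np.ones(width) / width
    return np.convolve(series, kernel, mode="same")

def triple_box_smooth(annual_temps, widths=(22, 15, 9)):
    # Cascade three box filters.  A box of width 22 completely nulls cycles of
    # 22 and 11 years (any period that divides the width), and the shorter
    # windows mop up remaining sub-decadal wiggles.  The widths here are
    # illustrative placeholders, not the values used in the VPmK spreadsheet.
    smoothed = np.asarray(annual_temps, dtype=float)
    for w in widths:
        smoothed = box_filter(smoothed, w)
    return smoothed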

If the aim of VPmK is to provide support for the IPCC model of climate, naturally it would remove all of those things that the IPCC model cannot handle. Regardless, the astonishing level of claimed accuracy shows that the result is almost certainly worthless – it is, after all, about climate.

The process

What VPmK does is to take AGW as a given from the IPCC model – complete with the so-called “positive feedbacks” which for the purpose of VPmK are assumed to bear a simple linear relationship to the underlying formula for CO2 itself.
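For concreteness, the AHH law can be written out in a few lines: Hofmann’s raised exponential for CO2 (a preindustrial base plus an anthropogenic excess that doubles on a fixed timescale), Arrhenius’ logarithmic dependence of temperature on CO2, and Hansen’s ocean heat-sink delay applied as a simple lag. The Python sketch below is a paraphrase of that law; the parameter values are round illustrative numbers, not the values used in the spreadsheet.

import numpy as np

def co2_ppmv(year, c_pre=280.0, doubling_years=32.5, ref_year=1790.0):
    # Hofmann-style raised exponential: the anthropogenic excess above the
    # preindustrial level doubles every `doubling_years` (values illustrative).
    year = np.asarray(year, dtype=float)
    return c_pre + 2.0 ** ((year - ref_year) / doubling_years)

def agw_kelvin(year, sens_per_doubling=2.8, ocean_delay_years=15.0):
    # Arrhenius: warming proportional to log2 of the CO2 ratio, lagged by a
    # fixed ocean heat-sink delay (Hansen).  Both parameters are placeholders.
    lagged = co2_ppmv(np.asarray(year, dtype=float) - ocean_delay_years)
    return sens_per_doubling * np.log2(lagged / 280.0)

years = np.arange(1850, 2011)
agw = agw_kelvin(years)   # a smooth, accelerating warming curve like AGW in Fig.1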

VPmK then takes the difference (the “sawtooth”) between MUL and AGW, and fits four sinewaves to it (there is provision in the spreadsheet for five, but only four were needed). Thanks to the box filters, a good fit was obtained.
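Outside the spreadsheet, that curve-fitting step is a few lines of Python. This is a generic least-squares sketch rather than the Solver setup in the VPmK workbook, and the starting guesses are arbitrary:

import numpy as np
from scipy.optimize import curve_fit

def sum_of_sines(t, *params):
    # params holds (amplitude, period, phase) triples, one per sinewave.
    y = np.zeros_like(t, dtype=float)
    for amp, period, phase in zip(params[0::3], params[1::3], params[2::3]):
        y += amp * np.sin(2.0 * np.pi * (t - phase) / period)
    return y

def fit_sawtooth(years, sawtooth, n_waves=4):
    # Arbitrary starting guesses: amplitudes of ~0.1 K, periods spread out.
    guess = []
    for k in range(n_waves):
        guess += [0.1, 150.0 / (k + 1), 0.0]
    popt, _ = curve_fit(sum_of_sines, years, sawtooth, p0=guess, maxfev=20000)
    return popt   # with heavily smoothed data, a close fit is unsurprising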

Given that four parameters can fit an elephant (great link!), absolutely nothing has been achieved and it would be entirely reasonable to dismiss VPmK as completely worthless at this point. But, to be fair, we’ll look at the sawtooth (“The sinewaves”, below) and see if it could have a genuine climate meaning.

Note that in VPmK there is no attempt to find a climate meaning. The sawtooth which began life as “so-called multidecadal ocean oscillations” later becomes “whatever its origins“.

The sinewaves

The two main “sawtooth” sinewaves, SAW2 and SAW3, are:


Fig.2 – VPmK principal sawtooths.

(The y-axis is temperature.) The other two sinewaves, SAW4 and SAW5, are much smaller, and just “mop up” what divergence remains.

It is surely completely impossible to support the notion that the “multidecadal ocean oscillations” are reasonably represented to within a few millikelvins by these perfect sinewaves (even after the filtering). This is what the PDO and AMO really look like:


Fig.3 – PDO.

(link) There is apparently no PDO data before 1950, but there is some information here.


Fig.4 – AMO.

(link)

Both the PDO and AMO trended upwards from the 1970s until well into the 1990s. Neither sawtooth is even close. The sum of the sawtooths (SAW in Fig.1) flattens out over this period when it should mostly rise quite strongly. This shows that the sawtooths have been carefully manipulated to “reserve” the 1970-2000 temperature increase for AGW.


Fig.5 – How the sawtooth “reserved” the 1980s and 90s warming for AGW.

 

Conclusion

VPmK aimed to show that “multidecadal climate has only two significant components”, AGW and something shaped like a sawtooth. But VPmK then simply assumed that AGW was a component, called the remainder the sawtooth, and had no clue as to what the sawtooth was but used some arbitrary sinewaves to represent it. VPmK then claimed to have shown that the climate was indeed made up of just these two components.

That is circular logic and appallingly unscientific. The poster presentation should be formally retracted.

[Blog commenter JCH claims that VPmK is described by AGU as “peer-reviewed”. If that is the case then retraction is important. VPmK should not be permitted to remain in any “peer-reviewed” literature.]

Footnotes:

1. Although VPmK has so little value, I would nevertheless like to congratulate Vaughan Pratt for having the courage to provide all of the data and all of the calculations in a way that made them relatively easy to check. If only this approach had been taken by other climate scientists from the start, virtually all of the heated and divisive climate debate could have been avoided.

2. I first approached Judith Curry, and asked her to give my analysis of Vaughan Pratt’s (“VP”) circular logic equal prominence to the original by accepting it as a ‘guest post’. She replied that it was sufficient for me to present it as a comment.

My feeling is that posts have much greater weight than comments, and that using only a comment would effectively let VP get away with a piece of absolute rubbish. Bear in mind that VPmK has been presented at the AGU Fall Conference, so it is already way ahead in public exposure anyway.

That is why this post now appears on WUWT instead of on ClimateEtc. (I have upgraded it a bit from the version sent to Judith Curry, but the essential argument is the same). There are many commenters on ClimateEtc who have been appalled by VPmK’s obvious errors. I do not claim that my effort here is in any way better than theirs, but my feeling is that someone has to get greater visibility for the errors and request retraction, and no-one else has yet done so.

 


Comments
RobertInAz
December 15, 2012 6:40 pm

Mike Rossander says:
December 14, 2012 at 1:18 pm
Mike Rossander is my hero!

chris y
December 15, 2012 7:17 pm

By showing that a climate sensitivity to CO2 of 0 C/W/m^2 gives as good a fit as (or a better fit than) the consensus climate sensitivity, Mike Rossander and Mike Jonas have completely rubbished the poster.
Thanks for your efforts!
Now assume a climate sensitivity to CO2 of -3 C/W/m^2. That should be fun! I predict an almost perfect fit once again can be achieved with the correct weighting of suitable sinusoids.

December 15, 2012 9:00 pm

Scafetta doesn’t share his code or his data. He is a fraud.

December 15, 2012 10:47 pm

Vukcevic says that:
he has discovered/invented the following formula
AMO = SSN x Fmf
where:
AMO = Atlantic Multidecadal Oscillation (or de-trended N.H. temp)
SSN = Sunspot number with polarity
Fmf = frequency of the Earth’s magnetic field ripple (undulation).
He calls above arithmetic sum ‘Geo-Solar Cycle’
http://www.vukcevic.talktalk.net/GSC1.htm
The calculations are accurate, but he is unable to provide a valid physical mechanism.
Svalgaard says that:
Vukcevic – writes nonsense, pseudo science, suffers from Dunning-Kruger mental aberration, making up data, deceptive in the extreme (implies fraud), honesty matters (implies dishonest)
In the years gone by, where I come from, the above attributes had to be earned. Fortunately that is not the case any more. Similar pronouncements were often repeated by the self-appointed ‘guardians of eternal truth’ regardless of geography or historic epoch.
(Mosher forwarded the relevant Excel file to you too.)

Matthew R Marler
December 15, 2012 11:18 pm

Mike Jonas: Every single argument of VP’s in support of the process that he used with IPCC-AGW applies absolutely equally to the exact same process with AGW = zero.
On that we agree. The test between the two models will be made with the next 20 years’ worth of data. Having found filters and estimated coefficients, they are not free to modify those coefficients willy-nilly to get good fits each time the data disconfirm their model forecast.

December 16, 2012 6:21 am

vukcevic says:
December 15, 2012 at 10:47 pm
AMO = SSN x Fmf
AMO = Atlantic Multidecadal Oscillation (or de-trended N.H. temp)
SSN = Sunspot number with polarity
Fmf = frequency of the Earth’s magnetic field ripple (undulation).
He calls above arithmetic sum ‘Geo-Solar Cycle’

Zeroth: Is it AMO or [manipulated] temps? Not the same.
First, the formula uses multiplication [I assume that is what the ‘x’ stands for], so is not a sum.
Second, which SSN is used? The International [Zurich] SSN or the Group SSN?
Third, the ‘polarity’ of the SSN is nonsense. You might talk about the polarity of the HMF, but then that should go from maximum to maximum [when the polar fields change]. In any case, the polarity that is important for the interaction with the Earth is the North-South polarity which changes from hour to hour.
Fourth, ‘frequency’ is not a magnetic field [you said that you added two magnetic fields].
Fifth, ‘ripple’ is what? and undulation?
Sixth, ‘Earth’s field’, measured where? And why there?
In the years gone by, where I come from, the above attributes had to be earned.
With the expertise you perfected back then, you are still earning it in earnest now.
Similar pronouncements were often repeated by the self-appointed ‘guardians of eternal truth’ regardless of geography or historic epoch.
Nonsense is nonsense no matter where and when.

December 16, 2012 10:35 am

lsvalgaard says:
December 16, 2012 at 6:21 am
……..
Some of the points you raise are explained beforehand, see my post above
The above formula is a summary in its most abstract form. I have also made it clear that, for simplicity, I refer to all four basic arithmetic operations as a ‘sum’, but for your benefit (see the link above) each of the arithmetic operations has been specifically itemized in the Excel file description.
You also have access to my article, which is over 20 pages long, contains 39 illustrations of which 35 are my own product, and many of the questions you pose are elaborated in detail in the article. You also have the Excel file with further information.
You will appreciate that the blog is not capable of furnishing full reproduction, therefore reading of the article is a prerequisite, which of course you are welcome to do.
Thank you for the note, once the final publication (this is the first draft) is composed the points you made will be fully considered.
For the time being you may consider these snippets of information as ‘unofficial leaks’ from a future publication (a practice currently in ‘high vogue’ on the fringes of climate science) and treat them as such, or else ignore them.
Thanks again for your attention.

December 16, 2012 11:09 am

lsvalgaard says:
December 16, 2012 at 6:21 am
……
May I add, I feel highly privileged and grateful that most, if not all, of your attention and time during the last few days, on this blog and elsewhere, has been devoted to ‘ironing out’ any inadvertent inconsistencies that may be found in the draft of my article, in preference to Dr. Pratt’s paper, which surely must be this year’s most important contribution to the understanding of anthropogenic warming.
Thank you again, sir.

December 16, 2012 11:24 am

vukcevic says:
December 16, 2012 at 11:09 am
May I add, I feel highly privileged and grateful that most, if not all, of your attention and time during the last few days, on this blog and elsewhere, has been devoted to ‘ironing out’ any inadvertent inconsistencies that may be found in the draft of my article
As I said, your descriptions here on WUWT have been deceptive and your ‘paper’ is incomprehensible. You cannot presume that anybody would try to decipher what you actually mean. The purpose of publication is to make the paper comprehensible, so that a [scientifically literate] reader with only cursory knowledge of the subject can understand your claims by simply skimming it [your version is much too dense with details and loose ends]. You could start by responding to my seven points, right here on WUWT. As things stand, the paper is still nonsense, and you commit the deadly sin of arguing with the referee instead of responding concisely to the points raised.

Vaughan Pratt
December 17, 2012 3:22 am

Mike Rossander is the first, and so far only, person to respond to my challenge to provide an alternative description of modern secular (> decadal) climate to the one I presented at the AGU Fall Meeting 12 days ago. I only wish there were more Mike Rossanders so as to increase the chances of obtaining a meaningful such description. Thank you, Mike!
Although Mike’s magic number of 99.992% may seem impressive, it’s unclear to me how a single number addresses the conclusion of my poster, which starts as follows.
“We infer from this analysis that the only significant contributors to modern multidecadal climate are SAW and AGW, plus a miniscule amount from MRES after 1950.”
If one views MRES as merely the “unexplained” portion of modern secular climate then 99.992% does indeed beat 99.99%.
However a cursory glance at these charts reveals two clear differences between the upper and lower charts, respectively mine and Mike’s.
1. On the left, the upper chart is “within a millikelvin” (as measured by standard deviation) for the “quiet” century from 1860-1960. It then moves up until 1968, quietens back down to 1990, then moves up again. At no point does it go below the century-long quiet period. (Ignore the decade at each end, secular or multidecadal climate can’t be measured accurately to within a single decade.)
I justify my “within a millikelvin” title by claiming that, although the fluctuations from 1860 to 1960 are certainly “unexplained” (R2 is defined as unity minus the “unexplained variance” relative to the total variance), the non-negative bumps thereafter admit explanations, namely non-greenhouse impacts of growing population and technology in conjunction with the adoption of emissions controls. Their clear pattern therefore makes it unreasonable to count them as part of the unexplained variance.
The lower chart on the other hand simply wiggles up and down throughout the entire period, and moreover with a standard deviation three times as large. It is just as happy to go negative as positive, and it draws absolutely no distinction between the low-tech low-population 19th century and the next century. WW1 consisted largely of firing Howitzers, killing many millions of soldiers with bayonets and machine guns, and dropping bricks from planes, while WW2 consisted largely of blowing cities and dams partially or in some cases completely to smithereens with conventional and nuclear weapons of devastating power. The clear consequences of this trend led to the cancellation of WW3 by mutual agreement.
There is not a trace of this progression in Mike’s chart, just oscillations both above and below the line. Moreover they even die down a little near the end—what clearer proof could you ask for that the increasing human population and its technology cannot be having any impact whatsoever on the climate?
2. On the right, the upper chart shows in orange the Hale or magnetic cycle of the Sun. The lower one does the same except that it is much messier and gives no reason to suppose that a cleaner picture is possible.
Now let’s look at how far one needs to bend over in order to cope with zero AGW. Here are the ten coefficients Mike and I are using to express the sawtooth shape, expressed in natural units rather than the incomprehensible slider units. For shifts this is the fraction of a sawtooth period t to be shifted by, so for example 0.37t means a shift of 37% of the sawtooth period. For scale it is the attenuation of the harmonics, so for example 0.37X means an attenuation down to 37% of full strength for that harmonic in the case of a perfect sawtooth. 0 and X are synonymous with 0X and 1X respectively.
Mike:
Shifts: 0.092848t 0.268605t 0.335671t 0.246856t 0.198283t
Scales: 1.48903X 1.38624X 2.23807X 0 0.78158X
(The 0 for Scale4 is clearly a bug in how the problem was presented to Solver.)
Me:
Shifts: 0 0 0 t/40 t/40
Scales: 0 X X X/8 X/2
If we take the number of bits needed to represent these coefficients as a measure of how much information each of us has to pump into the formulas to force them to fit the data, the difference should be clear to those who can count bits. Also note that all of my shifts are way smaller than all of Mike’s shifts. His five sine waves bear no resemblance whatsoever to the harmonics of a sawtooth.
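In code, that shift/scale parameterization amounts to the following (a minimal sketch rather than the spreadsheet formulas; the sawtooth period and overall amplitude here are placeholders):

import numpy as np

def saw_from_harmonics(years, shifts, scales, period=151.0, amplitude=0.2):
    # Sum of the first five harmonics of a sawtooth of the given period.
    # A perfect sawtooth has harmonic n at relative amplitude 1/n; here each
    # harmonic is attenuated by scales[n-1] (1 = full sawtooth strength) and
    # shifted by shifts[n-1] * period years.  Period and amplitude are
    # illustrative placeholders, not fitted values.
    t = np.asarray(years, dtype=float)
    y = np.zeros_like(t)
    for n, (shift, scale) in enumerate(zip(shifts, scales), start=1):
        y += (scale / n) * np.sin(2.0 * np.pi * n * (t - shift * period) / period)
    return amplitude * y

# The two coefficient sets quoted above:
saw_vp   = saw_from_harmonics(np.arange(1850, 2011),
                              shifts=(0, 0, 0, 1/40, 1/40),
                              scales=(0, 1, 1, 1/8, 1/2))
saw_mike = saw_from_harmonics(np.arange(1850, 2011),
                              shifts=(0.092848, 0.268605, 0.335671, 0.246856, 0.198283),
                              scales=(1.48903, 1.38624, 2.23807, 0, 0.78158))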
My question to Mike, and to anyone else who shares Mike’s and my interest in modern secular climate analysis, is, can one create a more plausible MRES, a cleaner HALE cycle, and a less obviously contrived collection of coefficients, while still setting climate sensitivity to zero?
If this can be done we would have a much stronger case against global warming.
(Incidentally, the reason the logic looks circular to Mike Jonas is that iterative least-squares fitting is circular: it entails a loop, and loops are circular. The correct question is not whether the loop is circular but whether it converges.)

Reply to  Vaughan Pratt
December 18, 2012 6:48 am

Vaughan.
I have used a statistical curve fitting approach in testing one component of the AGW model that you assume to be valid. That component is the assumption that anthropogenic emissions have caused all of the atmospheric increase in CO2. Read http://www.retiredresearcher.wordpress.com.

mikerossander
December 17, 2012 11:39 am

Good afternoon, Vaughan. I think you may have misunderstood the intent of my comment. And perhaps of the original criticism in the post above. My analysis was trivial, limited and deeply flawed. It had to be so because it was based on no underlying hypothesis about the factors being modeled (other than the ClimSens control value). It was an exercise that took about 15 min and was intended only to illustrate the dangers of over-fitting one’s data.
For example, you argue above that the fact that the shifts in your scenario are smaller is an element in favor of that scenario. Unless there is a physical reason why small shifts should be preferred, that claim is unjustifiable. A shift of zero may be perfectly legitimate – or totally irrelevant. The statistics are only useful to the extent that they illuminate some underlying physical process.
By the same token, you can’t say that the coefficients are “contrived” unless you have a physical understanding that indicates what they SHOULD be.
The closeness of R2 is also essentially irrelevant. Minor changes to the parameters drove that value off the 0.99x values quite easily. We can reasonably interpret an R2 of 0.2 as “bad” and an R2 of 0.8 as “good”, but the statement that “an R2 of 0.99992 is better than 0.9990” is well beyond the reliability of the statistics to honestly say. On the contrary, given everything we know about the randomness of the input data (and about the known uncertainties of the measurement techniques), an R2 that high is almost certainly evidence that we have gone beyond the underlying data. There’s too little noise left in the solution. I say that because my physical model includes assumptions about the existence and magnitude of human error in data collection, transcription, etc. and that those errors should be random, not systematic.
What you need (and what neither of us has done) is a formal test of overfitting. Unfortunately, without an assumed physical model, I don’t know of any reliable way to structure that test. A common approach would be to run Student’s t-test on each parameter in isolation, comparing the results of the model at the set parameter value vs the hypothesized null value. But your model does not make apparent what the null value ought to be. As noted above, we cannot blindly assume that it is zero.
An alternate approach would be to restructure the model so you can feed an element of noise into all your data and rerun the analysis a few thousand times. Parameters which remain relevant across the noisy samples are probably more reliable. That’s not an approach that can be easily built in Excel, however.
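In Python, though, it is only a dozen lines. A bare-bones sketch, assuming a fit_model function that returns the fitted parameter vector; the noise level is a placeholder for the assumed measurement error:

import numpy as np

def parameter_stability(series, fit_model, noise_sd=0.05, n_trials=2000, seed=0):
    # Refit the model to many noise-perturbed copies of the data.  Parameters
    # whose fitted values stay tight across trials (and well away from their
    # null values) are the ones that can be trusted; the rest are probably
    # fitting noise.  noise_sd is a placeholder for the assumed data error.
    rng = np.random.default_rng(seed)
    series = np.asarray(series, dtype=float)
    fits = [fit_model(series + rng.normal(0.0, noise_sd, series.shape))
            for _ in range(n_trials)]
    return np.asarray(fits)

# e.g. spread = parameter_stability(hadcrut_annual, fit_model).std(axis=0)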
Having said all that, you are certainly more familiar with your model than I ever will be. (The organization of your workbook is well above average but reverse-engineering someone else’s excel spreadsheet is almost always an exercise in futility.) Maybe you see a way to run a proper test against the parameters that I don’t.
One last thought. I ran my trivial test with the hypothesis that ClimSens should equal zero. There is no reason why that is necessarily the appropriate null, either. ClimSens really should be its own parameter, also included in the statistical tests of overfitting.

Vaughan Pratt
December 18, 2012 9:08 pm

Hi Mike. All your points are eminently sensible, particularly as regards the dangers of over-fitting (your main concern). That was also a concern of mine, and is why I locked down 5 degrees of freedom in SAW.
One way to structure a test of overfitting is to compare the dimensions of the data and model spaces. To my mind the best protection against overfitting is to keep the latter smaller than the former. When it can be made a lot smaller one can claim to have a genuine theory. When only a little smaller, or barely at all, it is not much better than a mere description from some point of view (namely that of a choice of basis). I would say my Figure 10 was more the latter, argued as follows.
For this data, 161 years of annualized HadCRUT3 anomalies is a point in the 160-dimensional space of all anomaly time series centered on the same interval, one dimension being lost to the definition of anomaly. My F3 filter projects this onto the 14-dimensional space of the first seven harmonics of a 161-year period. However it attenuates harmonics 6 and 7 down into the noise, making 10 dimensions (two per harmonic) a more reasonable estimate, perhaps 12 if you can crank up the R2 really high to bring up the signal-to-noise ratio.
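In code, such a projection is simply a matter of keeping the lowest Fourier coefficients of the record; a generic sketch (not the actual F3 box-filter cascade, which only approximates this):

import numpy as np

def project_onto_harmonics(anomalies, n_harmonics=7):
    # Keep the mean plus the first n_harmonics Fourier components of the
    # full-record period (two dimensions, sine and cosine, per harmonic)
    # and discard everything faster.
    x = np.asarray(anomalies, dtype=float)
    coeffs = np.fft.rfft(x)
    coeffs[n_harmonics + 1:] = 0.0
    return np.fft.irfft(coeffs, n=len(x))

# e.g. secular = project_onto_harmonics(hadcrut3_annual_anomalies)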
In principle my model has 14 dimensions, of which I lock down 5 leaving 9. You locked down the 3 AGW dimensions leaving the 11 SAW dimensions. However two of these only partially benefited you because Solver wanted to drive Scale4 negative. I’m guessing you left the box checked that told Solver not to use negative coefficients, so it stopped moving Scale4 when it hit 0, which in turn made tShift4 ineffective. The Evolutionary solver might have been able to add 1250 (half a period) to tShift4 discontinuously to simulate Scale4 going negative.
9 dimensions for the model space is dangerously close to the 10 dimensions of the data space, so I’m within a dimension of being as guilty of overfitting as you. The real difference is not dimensional however but choice of basis: sine waves in your case, a more mixed basis in mine including 3 dimensions for the evident rise that I’ve modeled as log(1+exp(t)) per the known physics of radiative forcing and increasing CO2.
Rather than using a t-test I’d be inclined to move the data and model dimensions apart by dropping the 4th and 5th harmonics altogether (since they aren’t really carrying their weight in increasing R2) while halving the period of F3. The data space would then have 20-24 dimensions and the model space 6.
But I would then spend 5-6 dimensions in describing HALE as a smoothly varying sine wave. That would greatly reduce the contribution of HALE to the unexplained variance, at the price of reducing the 6:20 gap to 11-12:20-24. That’s still an 8-13 dimensional gap between the data and model spaces, which I would interpret as not being at serious risk of overfitting. 3 of the HALE dimensions describe the basic 20-year oscillation, which the remaining 2-3 modulate.
As Max Manacker insightfully remarked at Climate Etc. a day or so ago, it’s not the number of parameters that counts so much as whether they’re the right ones. Whether parameters are meaningful depends heavily on the choice of basis for the space. Do they have any physical relevance?
Physics aside, your suggestion of feeding noise in would be a straightforward way of quantifying the dimension gap empirically, one that also takes into account the attenuation by F3 of harmonics 6 and 7 as well as the role of R2, all in one test, so that would be very nice. I have this on my list of things to look into; hopefully the other things won’t push it down too far.
I would have replied sooner except that I spent today following up on the suggestion to try other ClimSens values besides 0 and 2.83, made by both you and “Bill” on CE. Very interesting results, more on this later as this comment is already so deep into tl;dr territory that I ought to submit it for the 2013 literature Nobel.

Vaughan Pratt
December 18, 2012 9:43 pm

Hi fhaynie. I’m having trouble reading your first figure about dependence on latitude. When I click on it I get only an arctic plot. Is there some way I can blow it up to a readable size?
Your emphasis on carbon 13 and 14 is commendable, but I must confess I have thought less about them than the raw CDIAC emissions and land-use-change data since 1750. These can be converted to ppmv contributions using 5140 teratonnes as the mass of the atmosphere and 28.97 as its average molecular weight. One GtC of emitted CO2 therefore contributes 28.97/12/5.14 = 0.47 ppmv CO2 to the atmosphere.
CDIAC says that in 2010 the anthropogenic contribution including land use changes was 10.6 GtC. For that year Mauna Loa recorded an increase of 2.13 ppmv. The former translates to a contribution of 10.6 * 0.47 = 4.98 ppmv. Hence 2.13/4.98 = 42.7% of our contribution was retained in the atmosphere in 2010, with the remaining 57.3% being presumably taken up by the ocean, plants, soil, etc.
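The same arithmetic as a few lines of Python, using the 2010 figures quoted above:

ATM_MASS_GT = 5.14e6          # mass of the atmosphere in Gt (5140 teratonnes)
MW_AIR, MW_C = 28.97, 12.0    # mean molecular weight of air; atomic weight of carbon

ppmv_per_gtc = MW_AIR / MW_C / (ATM_MASS_GT / 1e6)   # 28.97/12/5.14 = 0.47 ppmv per GtC

emitted_gtc   = 10.6          # CDIAC 2010 emissions incl. land-use change, GtC
observed_ppmv = 2.13          # Mauna Loa rise in 2010, ppmv

contributed_ppmv  = emitted_gtc * ppmv_per_gtc        # about 4.98 ppmv
airborne_fraction = observed_ppmv / contributed_ppmv  # about 0.427, the 42.7% above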
It would be very interesting to compare this with your analysis based on molecular species of CO2. Do you have an estimate of the robustness of this sort of analysis?
I very much like this direction you’re pursuing, it could lead to useful insights. What feedback have you had from those knowledgeable about this sort of approach? And are there more detailed papers I can read on this?

December 19, 2012 7:37 am

Vaughan,
I am in the process of using similar techniques doing mass balances dividing the earth’s surface into five regions. If you are interested, you can find my email address at http://www.kidswincom.net.
I will gladly share what I am doing as well as thoughts on the mistakes we can make using curve fitting programs. I have had some long email conversations with two individuals that promote the global mass balance you cite. They have a strong vested interest in being right. Most of the favorable comments did not go into any technical detail.
