Circular Logic not worth a Millikelvin

Guest post by Mike Jonas

A few days ago, on Judith Curry’s excellent ClimateEtc blog, Vaughan Pratt wrote a post, “Multidecadal climate to within a millikelvin”, which provided the content and underlying spreadsheet calculations for a poster presentation at the AGU Fall Conference. I will refer to the work as “VPmK”.

VPmK was a stunningly unconvincing exercise in circular logic – a remarkably unscientific attempt to (presumably) provide support for the IPCC model[s] of climate – and should be retracted.

Background

The background to VPmK was outlined as “Global warming of some kind is clearly visible in HadCRUT3 [] for the three decades 1970-2000. However the three decades 1910-1940 show a similar rate of global warming. This can’t all be due to CO2 []”.

The aim of VPmK was to support the hypothesis that “multidecadal climate has only two significant components: the sawtooth, whatever its origins, and warming that can be accounted for 99.98% by the AHH law []”,

where

· the sawtooth is a collection of “all the so-called multidecadal ocean oscillations into one phenomenon”, and

· AHH law [Arrhenius-Hofmann-Hansen] is the logarithmic formula for CO2 radiative forcing with an oceanic heat sink delay.
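For orientation, the AHH form is easy to sketch in code: warming proportional to log2 of the CO2 concentration, lagged to stand in for the heat-sink delay. This is a minimal illustration only; the sensitivity, lag and baseline below are placeholder values of mine, not Pratt’s:

```python
import numpy as np

def ahh_warming(co2_ppm, sensitivity_k=3.0, lag_years=15):
    """Illustrative AHH-style warming: logarithmic in CO2, with a crude
    fixed lag standing in for the oceanic heat-sink delay.
    All parameter values are placeholders, not Pratt's."""
    co2 = np.asarray(co2_ppm, dtype=float)
    lagged = np.empty_like(co2)
    lagged[lag_years:] = co2[:-lag_years]  # shift the series by the lag
    lagged[:lag_years] = co2[0]            # pad the start with the first value
    return sensitivity_k * np.log2(lagged / 280.0)  # 280 ppm ~ preindustrial
```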

The end result of VPmK was shown in the following graph


Fig.1 – VPmK end result.

where

· MUL is multidecadal climate (ie, global temperature),

· SAW is the sawtooth,

· AGW is the AHH law, and

· MRES is the residue MUL-SAW-AGW.
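Note that this decomposition is exact by construction: whatever SAW and AGW fail to capture is simply labelled MRES. A minimal sketch with made-up series:

```python
import numpy as np

t = np.arange(1850, 2011)                            # years, illustrative
saw = 0.2 * np.sin(2 * np.pi * (t - 1850) / 75.0)    # stand-in sawtooth
agw = 0.8 * np.log2((280 + 0.5 * (t - 1850)) / 280)  # stand-in AHH shape
mul = saw + agw                                      # pretend this is MUL

mres = mul - saw - agw      # the residue is *defined* as whatever is left
print(np.abs(mres).max())   # exactly zero here, by construction
```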

Millikelvins

As you can see, and as stated in VPmK’s title, the residue was just a few millikelvins over the whole of the period. The smoothness of the residue, but not its absolute value, was entirely due to three box filters being used to remove all of the “22-year and 11-year solar cycles and all faster phenomena”.
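For reference, a box filter is just a centered moving average, and cascading three of them strongly suppresses everything faster than their combined width. A minimal sketch; the widths here are placeholders, not the ones in the spreadsheet:

```python
import numpy as np

def box_filter(x, width):
    """Centered moving average (box filter) over `width` samples."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

def triple_box(x, widths=(11, 15, 21)):
    """Cascade three box filters; widths are illustrative placeholders."""
    for w in widths:
        x = box_filter(x, w)
    return x
```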

If the aim of VPmK is to provide support for the IPCC model of climate, naturally it would remove all of those things that the IPCC model cannot handle. Regardless, the astonishing level of claimed accuracy shows that the result is almost certainly worthless – it is, after all, about climate.

The process

What VPmK does is to take AGW as a given from the IPCC model – complete with the so-called “positive feedbacks” which for the purpose of VPmK are assumed to bear a simple linear relationship to the underlying formula for CO2 itself.

VPmK then takes the difference (the “sawtooth”) between MUL and AGW, and fits four sinewaves to it (there is provision in the spreadsheet for five, but only four were needed). Thanks to the box filters, a good fit was obtained.
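To see how little machinery such a fit requires, here is a sketch of fitting a handful of sinewaves to a smoothed residual by least squares. The function and starting guesses are mine, not taken from the spreadsheet:

```python
import numpy as np
from scipy.optimize import curve_fit

def sum_of_sines(t, *p):
    """Sum of sinewaves; p holds (amplitude, period, phase) triples."""
    y = np.zeros_like(t, dtype=float)
    for amp, period, phase in zip(p[0::3], p[1::3], p[2::3]):
        y += amp * np.sin(2 * np.pi * t / period + phase)
    return y

# Hypothetical usage: `years` and `sawtooth` would come from the data.
# p0 sets four (amplitude, period, phase) triples as starting guesses.
# p0 = [0.2, 150, 0, 0.15, 75, 0, 0.05, 55, 0, 0.03, 30, 0]
# popt, _ = curve_fit(sum_of_sines, years, sawtooth, p0=p0)
```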

Given that four parameters can fit an elephant (great link!), absolutely nothing has been achieved and it would be entirely reasonable to dismiss VPmK as completely worthless at this point. But, to be fair, we’ll look at the sawtooth (“The sinewaves”, below) and see if it could have a genuine climate meaning.

Note that in VPmK there is no attempt to find a climate meaning. The sawtooth which began life as “so-called multidecadal ocean oscillations” later becomes “whatever its origins“.

The sinewaves

The two main “sawtooth” sinewaves, SAW2 and SAW3, are:


Fig.2 – VPmK principal sawtooths.

(The y-axis is temperature.) The other two sinewaves, SAW4 and SAW5, are much smaller, just “mopping up” what divergence remains.

It is surely completely impossible to support the notion that the “multidecadal ocean oscillations” are reasonably represented to within a few millikelvins by these perfect sinewaves (even after the filtering). This is what the PDO and AMO really look like:


Fig.3 – PDO.

(link) There is apparently no PDO data before 1950, but some information here.


Fig.4 – AMO.

(link)

Both the PDO and AMO trended upwards from the 1970s until well into the 1990s. Neither sawtooth is even close. The sum of the sawtooths (SAW in Fig.1) flattens out over this period when it should mostly rise quite strongly. This shows that the sawtooths have been carefully manipulated to “reserve” the 1970-2000 temperature increase for AGW.


Fig.5 – How the sawtooth “reserved” the 1980s and 90s warming for AGW.

 

Conclusion

VPmK aimed to show that “multidecadal climate has only two significant components”, AGW and something shaped like a sawtooth. But VPmK then simply assumed that AGW was a component, called the remainder the sawtooth, and had no clue as to what the sawtooth was but used some arbitrary sinewaves to represent it. VPmK then claimed to have shown that the climate was indeed made up of just these two components.

That is circular logic and appallingly unscientific. The poster presentation should be formally retracted.

[Blog commenter JCH claims that VPmK is described by AGU as “peer-reviewed”. If that is the case then retraction is important. VPmK should not be permitted to remain in any “peer-reviewed” literature.]

Footnotes:

1. Although VPmK was of so little value, nevertheless I would like to congratulate Vaughan Pratt for having the courage to provide all of the data and all of the calculations in a way that made it relatively easy to check them. If only this approach had been taken by other climate scientists from the start, virtually all of the heated and divisive climate debate could have been avoided.

2. I first approached Judith Curry, and asked her to give my analysis of Vaughan Pratt’s (“VP”) circular logic equal prominence to the original by accepting it as a ‘guest post’. She replied that it was sufficient for me to present it as a comment.

My feeling is that posts have much greater weight than comments, and that using only a comment would effectively let VP get away with a piece of absolute rubbish. Bear in mind that VPmK has been presented at the AGU Fall Conference, so it is already way ahead in public exposure anyway.

That is why this post now appears on WUWT instead of on ClimateEtc. (I have upgraded it a bit from the version sent to Judith Curry, but the essential argument is the same). There are many commenters on ClimateEtc who have been appalled by VPmK’s obvious errors. I do not claim that my effort here is in any way better than theirs, but my feeling is that someone has to get greater visibility for the errors and request retraction, and no-one else has yet done so.

 

Comments
James
December 13, 2012 8:45 am

Assume AGW is a flat line and repeat the analysis. When the fit remains near perfect, trumpet the good news that AGW is no more!

December 13, 2012 8:47 am

Another failed attempt to force a dynamic, poorly understood, under-sampled natural process into some kind of linear deterministic logic system. It simply will not work. This foolishness is not worth any further time or effort on the part of serious scientists.

Steveta_uk
December 13, 2012 8:55 am

What VP has said repeatedly on JC’s site is basically that if you can provide a better fit, please do.
Nobody that I’ve seen has yet done so. Now I’m not in any way suggesting that what VP has done is in any way useful science. But still, can you not simply alter the 4 or 5 sine waves to show that you can provide just as good a fit without the AHH curve?
If you can, please present it here.
And if you cannot, then VP remains uncontested.

December 13, 2012 8:56 am

Why is it that the rationalizations of the Warmistas are beginning to remind me of Ptolemy and The Almagest?

December 13, 2012 9:01 am

I worked in the Banking Industry for most of my adult life. During that time, many people would be applying for finance for this business or that business – maybe a mortgage, maybe a loan.
All would arrive with their shiny spreadsheet proving their business model was viable and would soon show profitability.
I never saw any proposed business plan to the Bank that didn’t show remarkable profit – certainly none ever predicted a loss.
Nonetheless, the vast majority of those business plans would fail abysmally.
Just goes to show, any spreadsheet can be made to produce whatever results the author wants – just tweak here or tweak there.
Now, about milli-kelvins?
Andi

December 13, 2012 9:04 am

“VPmK was a stunningly unconvincing exercise in circular logic – a remarkably unscientific attempt to (presumably) provide support for the IPCC model[s] of climate – and should be retracted.”
1. its not circular.
2. its not a proof or support for models.
3. you cant retract a poster.
4. This is basically the same approach that many here praise when scafetta does it.
Basically he is showing that GIVEN the truth of AGW, the temperature series can be explained by a few parameters. GIVEN is the key; you misunderstand the logic of his approach.

Taphonomic
December 13, 2012 9:11 am

“[Blog commenter JCH claims that VPmK is described by AGU as “peer-reviewed”. If that is the case then retraction is important. VPmK should not be permitted to remain in any “peer-reviewed” literature.]”
Describing VPmK as peer-reviewed is incorrect. Abstracts published by AGU for either poster sessions or presentations made at the meeting are not peer-reviewed. There are quite a few comments at the blog after JCH on this topic.

Matthew R Marler
December 13, 2012 9:13 am

VPmK was a stunningly unconvincing exercise in circular logic – a remarkably unscientific attempt to (presumably) provide support for the IPCC model[s] of climate – and should be retracted.
That is over-wrought. Vaughan Pratt described exactly what he did and found, and published the data that he used and his result. If the temperature evolution of the Earth over the next 20 years matches his model, then people will be motivated to find whatever physical process generates the sawtooth. If not, his model will be disconfirmed along with plenty of other models. Lots of model-building in scientific history has been circular over the short-term: in “The Structure of Scientific Revolutions” Thomas Kuhn mentions Ohm’s law as an example, and Einstein’s special relativity; lots of people have noted the tautology of F = dm/dt where m here stands for momentum.
Pratt merely showed that, with the data in hand, it is possible to recover the signal of the CO2 effect with a relatively low-dimensional filter. No doubt, the procedure is post hoc. The validity of the approach will be tested by data not used in fitting the functions that he found.

Matthew R Marler
December 13, 2012 9:14 am

Steven Mosher wrote: 4. This is basically the same approach that many here praise when scafetta does it.
I agree with that.

richardscourtney
December 13, 2012 9:23 am

Steven Mosher:
You enumerate four points in your post at December 13, 2012 at 9:04 am. I address each of them in turn.
1. its not circular.
(Clearly, it is “circular” in that it removes everything from the climate data except what the climate models emulate then says the result of the removal agrees with what the climate emulate when tuned to emulate it.)
2. its not a proof or support for models.
(Agreed, it is nonsense.)
3. you cant retract a poster.
(Of course you can! All you do is publish a statement saying it should not have been published, and you publish that statement in one or more of the places where the “poster” was published; e.g. in this case, on Judith Curry’s blog.)
4. This is basically the same approach that many here praise when scafetta does it.
(So what! Many others – including me – object when Scafetta does it. Of itself that indicates nothing.)
The poster by Vaughan Pratt only indicates that Pratt is a prat: live with it.
Richard

richardscourtney
December 13, 2012 9:27 am

OOOps! I wrote
(Clearly, it is “circular” in that it removes everything from the climate data except what the climate models emulate then says the result of the removal agrees with what the climate emulate when tuned to emulate it.)
Obviously I intended to write
(Clearly, it is “circular” in that it removes everything from the climate data except what the climate models emulate then says the result of the removal agrees with what the models emulate when tuned to emulate it.)
Sorry.
Richard

Arno Arrak
December 13, 2012 9:31 am

I agree. I commented about it on Curry’s blog and called it worthless. I was particularly annoyed that he used HadCRUT3 which is error-ridden and anthropogenically distorted. I could see that he was using his computer skills to create something out of nothing and did not understand why that sawtooth did not go away. That millikelvin claim is of course nonsense and was simply part of his applying his computer skills without comprehending the data he was working with. I suggested that he write a program to find and correct those anthropogenic spikes in HadCRUT and others.

December 13, 2012 9:33 am

On the sidelines of V. Pratt’s blog presentation there was a secondary discussion between myself and Dr. Svalgaard about the far more realistic causes of the climate change. Since Dr. S. often does peer review on articles relating to solar matters, leaving the trivia out, I consider our exchanges an ‘unofficial peer review of my calculations’; no mechanism is considered in the article, just the calculations. This certainly was not ‘friendly’ review, although result may not be conclusive, I consider it a great encouragement.
http://www.vukcevic.talktalk.net/PR.htm
If there are any scientists who are occasionally involved in ‘peer review’ type processes, I would welcome the opportunity to submit my calculations. My email is as in my blog id followed by @yahoo.com.

Editor
December 13, 2012 9:35 am

Vaughan presented an interesting idea which has been roundly tested by many commenters in a spirit of science, with hotly contested views exchanged within a framework of courtesy as Vaughan defended his ideas. Personally I’m not convinced that CET demonstrates his theory; in fact I think it shows he is wrong. But if every post, whether here or at climate etc, was discussed in such a thorough manner everyone would gain, whatever side of the fence they are on.
Tonyb

December 13, 2012 10:07 am

vukcevic says:
December 13, 2012 at 9:33 am
This certainly was not ‘friendly’ review, although result may not be conclusive, I consider it a great encouragement
It seems that in the twisted world of pseudo-science even a flat-out rejection is considered a great encouragement.

David L. Hagen
December 13, 2012 10:21 am

Steveta_uk
Re Pratt’s “if you can provide a better fit, please do.”
Science progresses by “kicking the tires”. Models are only as robust as the challenges put to them and their ability to provide better predictions when compared against hard data – not politics.
The proof of the pudding is in the eating. Currently the following two models show better predictive performance than IPCC’s models that average 0.2C/decade warming:
Relationship of Multidecadal Global Temperatures to Multidecadal Oceanic Oscillations Joseph D’Aleo and Don Easterbrook, Evidence-Based Climate Science. Elsevier 2011, DOI: 10.1016/B978-0-12-385956-3.10005-1
Nicola Scafetta, Testing an astronomically based decadal-scale empirical harmonic climate model versus the IPCC (2007) general circulation climate models Journal of Atmospheric and Solar-Terrestrial Physics 80 (2012) 124–137
For others views on CO2, see Fred H. Haynie The Future of Global Climate Change
No amount of experimentation can ever prove me right; a single experiment can prove me wrong. Albert Einstein

Tim Clark
December 13, 2012 10:26 am

“22-year and 11-year solar cycles and all faster phenomena”.
What happened to longer term cycles?
Oh, they must be AGW./sarc

jorgekafkazar
December 13, 2012 10:27 am

Curve-fitting is not proof of anything, especially when the input data is filtered. That heat-sink delay also needs some scrutiny. Worse, the data time range is fairly short, in geological terms. On top of that, a four component wave function? Get real.
It’s wiggle-matching, with some post hoc logic thrown in. I’m underwhelmed.

P. Solar
December 13, 2012 10:34 am

Mosh: 4. This is basically the same approach that many here praise when scafetta does it.
Sorry, what N. Scafetta does is fit all parameters freely and see what results. What Pratt did was fit his exaggerated 3K per doubling model; see what’s left, then make up a totally unfounded waveform to eliminate it. Having thus eliminated it, he screwed up his maths and managed to also eliminate the huge discrepancy that all 3K sensitivity models have after 1998.
Had he got the maths correct it would have been circular logic. As presented to AGU it was AND STILL IS a shambles.
Attribution to AMO PDO is fanciful. The whole work is total fiction intended to remove the early 20th c. warming that has always been a show-stopper for CO2 driven AGW.
At the current state of the discussion on Curry’s site, he has been asked to state whether he recognises there is an error or stands by the presentation as given to AGU and published on Climate etc.
At the time of this posting, no news from Emeritus Prof Vaughan Pratt.

Bill Illis
December 13, 2012 10:37 am

The IPCC says that the current total forcing (all sources – RCP 6.0 scenario) is supposed to be about 2.27 W/m2 in 2012.
On top of that, we should be getting some water vapour and reduced low cloud cover feedbacks from this direct forcing so that there should be a total of about 5.0 W/m2 right now.
The amount of warming, however, (the amount that is accumulating in the Land, Atmosphere, Oceans and Ice) is only about 0.5 W/m2.
Simple enough to me.
Climate Science is much like the study of Unicorns and their invisibility cloaks.

December 13, 2012 11:00 am

I want to draw people’s attention to the frequency content of the VPmK SAW2 and SAW3 waveforms. Just by eyeball, these appear to have 75-year and 50-year periods. As Mike Jonas points out, early in the paper VP posits they come from major natural ocean oscillations, but later retreats to a more flexible “whatever its origins.”
I am not going to debate the origins of the low frequency. Take from VPmK only that the temperature record contains significant very low frequency waveforms, wavelengths greater than 25 years, needed to match even heavily filtered temperature records where

three box filters being used to remove all of the “22-year and 11-year solar cycles and all faster phenomena”.

All that is left in the VPmK data is very low frequency content, and there appears to be a lot of it.
My comment below takes the importance of low frequency in VPmK and focuses on BEST: Berkeley Earth, and what to me appears to be minimally discussed wholesale decimation and counterfeiting of low frequency information happening within the BEST process. If you look at what is going on in the BEST process from the Fourier domain, there seems to me to be major losses of critical information content. I first wrote my theoretical objection to the BEST scalpel back on April 2, 2011 in “Expect the BEST, plan for the worst.” I expounded at Climate Audit, Nov. 1, 2011 and some other sites.
My summary argument remains unchanged after 20 months:
1. The Natural climate and Global Warming (GW) signals are extremely low frequency, less than a cycle per decade.
2. A fundamental theorem of Fourier analysis is that the frequency resolution is dω/2π Hz = 1/(N·dt), where dt is the sample interval and N·dt is the total length of the digitized signal (see the numerical sketch after this list).
3. The GW climate signal, therefore, is found in the very lowest frequencies, low multiples of dw, which can only come from the longest time series.
4. Any scalpel technique destroys the lowest frequencies in the original data.
5. Suture techniques recreate long term digital signals from the short splices.
6. Sutured signals have in them very low frequency data, low frequencies which could NOT exist in the splices. Therefore the low frequencies, the most important stuff for the climate analysis, must be derived totally from the suture and the surgeon wielding it. From where comes the low-frequency original data to control the results of the analysis?
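A minimal numerical illustration of point 2, assuming annual sampling:

```python
import numpy as np

# A record N*dt years long resolves frequencies down to 1/(N*dt);
# nothing slower can exist in a short splice.
f_full  = np.fft.rfftfreq(160, d=1.0)  # 160-year record, annual samples
f_slice = np.fft.rfftfreq(10,  d=1.0)  # 10-year splice

print(f_full[1])   # 0.00625 cycles/yr: a 160-year period is resolvable
print(f_slice[1])  # 0.1 cycles/yr: nothing slower than 10 years survives
```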
Have I misunderstood the BEST process? Consider this from Muller (WSJ Eur 10/20/2011)

Many of the records were short in duration, … statisticians developed a new analytical approach that let us incorporate fragments of records. By using data from virtually all the available stations, we avoided data-selection bias. Rather than try to correct for the discontinuities in the records, we simply sliced the records where the data cut off, thereby creating two records from one.

“Simply sliced the data.” “Avoided data-selection bias” – and by the theorems of Fourier embraced high frequency selection bias and created a bias against low frequencies. There is no free lunch here. Look at what is happening in the Fourier Domain. You are throwing away signal and keeping the noise. How can you possibly be improving the signal/noise ratio?
 
Somehow BEST takes all these fragments lacking low frequency, and “glues” them back together to present a graph of temperatures from 1750 to 2010. That graph has low frequency data – but from where did it come? The low frequencies must be counterfeit – contamination from the gluing process, manufacturing what appears to be a low frequency signal from fitting high frequency from slices. This seems so fundamentally wrong I’d sooner believe a violation of the 1st Law of Thermodynamics.
 
A beautiful example of the frequency content that I expect to be found in century scale un-sliced temperature records is found in Liu 2011, Fig. 2, reprinted in WUWT’s “In China there are no hockey sticks”, Dec. 7, 2011. The grey area on the left of the Fig. 2 chart is the area of low frequency, the climate signal. In the Liu study, a lot of the power is in that grey area. It is this portion of the spectrum that BEST’s scalpel removes! Fig. 4 of Liu 2011 is a great illustration of what happens to a signal as you add first the lowest frequency and successively add higher frequencies.
 
Power vs Phase & Frequency is the dual formulation of Amplitude vs Time. There is a one to one correspondence. If you apply a filter to eliminate low frequencies in the Fourier Domain, and a scalpel does that, where does it ever come back? If there is a process in the Berkeley glue that preserves low frequency from the original data, what is it? And where is the peer-review discussion of its validity?
If there is no preservation of the low frequencies the scalpel removes, results from BEST might predict the weather, but not explain climate.

December 13, 2012 11:07 am

The real shame here lies in The Academy. As the author did supply all details behind the work when asked, I must assume it was done in good faith. The problem is that PhDs are being awarded without the proper training/education in statistical wisdom. Anybody can build a model and run the numbers with a knowledge of mathematical nuts and bolts and get statistical “validation.”
But a key element that supports the foundation upon which any statistical work stands seems to be increasingly ignored. That element has a large qualitative side to it, which makes it more subtle and thus less visible. Of course I am speaking of the knowing, understanding, and verifying of all the ASSUMPTIONS (an exercise with a large component of verbal logic) demanded for any particular statistical work to be trustworthy. I had this drilled into me during my many statistics classes at Georgia Tech 30 years ago. Why this aspect seems to be increasingly ignored I can’t say, but I can say that taking assumptions into account can be a large hurdle to any legitimate study, and thus very inconvenient. I imagine publish-or-perish environments and increasing politicization may have much to do here.
The resultant fallout and real crime is that the population of scientists we are cultivating is becoming less and less able to discriminate between the different types of variation that need to be identified so that GOOD and not BAD decisions are more likely. Until the science community begins to take the rigors of statistics seriously, its output must be considered seriously flawed. To do otherwise risks the great scientific enterprise that has achieved so much.

December 13, 2012 11:11 am

lsvalgaard says:
December 13, 2012 at 10:07 am
…………
Currently I am only concerned with the calculations; no particular mechanism is considered in my article, just volumes of data: AMO, CET, N. Hemisphere, Arctic, atmospheric pressure, solar activity, the Earth’s magnetic variability, and comparisons against other known proxies and reconstructions.
Since you couldn’t fail my calculations, you insisted on steering discussion away from the subject (as shown in this condensed version) with all trivia from both sides excluded:
http://www.vukcevic.talktalk.net/PR.htm
Let’s remember:
Dr. L. Svalgaard: “If the correlation is really good, one can live with an as yet undiscovered mechanism.”
I do indeed consider it a great encouragement that you didn’t fail the calculations for
http://www.vukcevic.talktalk.net/GSC1.htm
One step at a time. Thanks for the effort, it’s appreciated. Soon I’ll email Excel data on the
http://www.vukcevic.talktalk.net/SSN-NAP.htm
using 350 years of geological records instead of geomagnetic changes. The two reinforce each other.
We still don’t exactly understand how gravity works, but the maths is 350 years old.
I missed your usual ‘razor-sharp dissection’ of Dr. Pratt’s hypothesis

Rob Dawg
December 13, 2012 11:20 am

I don’t understand the objections to simplifying models until the correct outcome is achieved. After all if the sun really had anything to do with temperature it would get colder at night and warmer during the day.

December 13, 2012 11:21 am

vukcevic says:
December 13, 2012 at 11:11 am
Since you couldn’t fail my calculations
Of course, one cannot fail made-up ‘data’. What is wrong with your approach is to compute a new time series from two unrelated time series, and to call that ‘observed data’.

richardscourtney
December 13, 2012 11:29 am

Stephen Rasey:
re your post at December 13, 2012 at 11:00 am.
Every now and then one comes across a pearl shining on the sand of WUWT comments. The pearls come in many forms.
Your post is a pearl. Its argument is clear, elegant and cogent. Thankyou.
Richard

Jens Bagh
December 13, 2012 11:41 am

Should it not be milliKelvin?

Mooloo
December 13, 2012 11:49 am

Steveta_uk says:
What VP has said repeatedly on JC’s site is basically that if you can provide a better fit, please do.

A “better fit” is not useful. In cases like this the correct model will not provide a better fit, because the correct model has deviations between theory and reality due to noise.
If I have a 100% normally distributed population and take 100 samples then the result will never be an exact normal distribution. I could model a “distribution” that better matched my samples, but it would most certainly not tell me anything useful. In fact it would lead me to believe my actual population was not normal.
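A quick numerical sketch of this point (the seed and sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
sample = rng.normal(loc=0.0, scale=1.0, size=100)

# Even drawn from an exactly normal population, a finite sample shows
# nonzero skew; a model tuned to "fit better" would chase this noise.
mean, sd = sample.mean(), sample.std(ddof=1)
skew = ((sample - mean) ** 3).mean() / sd ** 3
print(mean, sd, skew)   # none of these come out exactly 0, 1, 0
```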
This is why I am highly suspicious of any model that is trained on old data. It is basically an exercise in wiggle matching, not an exercise in getting the underlying physics correct. The best climate models will have pretty poor fit to old temperatures.

December 13, 2012 12:17 pm

Substitute a 1300 year wave length sine with a max at MWP and min at LIA for the AGW function and you will get similar results.

December 13, 2012 12:22 pm

lsvalgaard says:
December 13, 2012 at 11:21 am
vukcevic says:
December 13, 2012 at 11:11 am
Since you couldn’t fail my calculations
Of course, one cannot fail made-up ‘data’. What is wrong with your approach is to compute a new time series from two unrelated time series, and to call that ‘observed data’.
………………..
Wrong Doc.
The magnetometer at Tromso does it every single minute of the day and night.
http://www.vukcevic.talktalk.net/Tromso.htm
In red is the incoming variable solar magnetic field sitting on top of the variable Earth’s magnetic field.
Rudolf Wolf started it with a compass needle, Gauss did it with a bit more sophisticated apparatus, and today numerous geomagnetic stations do it as you listed dozens in your paper on IDV.
So it is OK for Svalgaard of Stanford to derive IDV from changes of two combined magnetic fields, but is not for Vukcevic.
Reason plain and obvious, it would show that the SUN DOES IT !
Here is how the geomagnetic field (Earth + solar) is measured and illustrated by our own Dr. Svalgaard
http://www.leif.org/research/Rudolf%20Wolf%20and%20the%20Sunspot%20Number.ppt#8
and he maintains they are not added together in his apparatus.
Can anyone spot 3 magnets?
Dr. S, are you really serious in suggesting that no changes in the Earth’s field are registered by your apparatus?
Case closed!

Gail Combs
December 13, 2012 12:26 pm

Steveta_uk says:
December 13, 2012 at 8:55 am
What VP has said repeatedly on JC’s site is basically that if you can provide a better fit, please do.
Nobody that I’ve seen has yet done so. Now I’m not in any way suggesting that what VP has done is in any way useful science. But still, can you not simply alter the 4 or 5 sine waves to show that you can provide just as good a fit without the AHH curve?
If you can, please present it here.
And if you cannot, then VP remains uncontested.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
HUH?
Let me get this correct.
I publish a paper showing that the rise and fall of women’s skirts plus a saw tooth pattern provides a good fit to the curve. Since no one can provide a better ‘fit’ than that the paper has to stand?

December 13, 2012 12:43 pm

vukcevic says:
December 13, 2012 at 12:22 pm
So it is OK for Svalgaard of Stanford to derive IDV from changes of two combined magnetic fields, but is not for Vukcevic.
It is OK to derive two time series of the external field driven by the same source, but not to confuse and mix the external and internal fields that have different sources and don’t interact. Case is indeed closed, as you are incapable of learning.

RobertInAz
December 13, 2012 12:47 pm

As mentioned by Steveta_uk and others ….
Rather than engage in histrionics, the way to refute the Vaughan Pratt poster is to create a similar spreadsheet (or modify his spreadsheet) to show a nominal AGW signal.
As nearly as I can tell, Dr. Pratt has done everything responsible skeptics ask:
– Formulated a hypothesis
– Presented all of the supporting data
– Published in an accessible forum
– Asked for feedback.
I have not dropped in on the thread for a couple of days. However, I suspect that if someone has published a spreadsheet model that refutes Dr. Pratt’s, then it would have been mentioned here.
I do not care whether four parameters can fit an elephant. I would like to see someone mathematically refute Dr. Pratt’s model. I took a look and realized I do not have the time to reacquire the expertise to do it. (I had the expertise years ago and even have the optimization code I wrote for my AI class that could be adapted to this problem).
In my case, I strongly believe there are contradictory cases (it is just math and there are a lot of variables), but until someone devotes the mental sweat to create one (maybe Nick Scafetta has per Steven Mosher @ 9:04 am), Dr. Pratt’s result stands as he has described it. He asks people to show the contradictions.
Finally, re circularity. I agree that post hoc curve fitting can be described as circular. All of the GCMs do it to reproduce historical temperature. What Dr. Pratt has done is simplify the curve fitting to a spreadsheet we can all use.

Don Monfort
December 13, 2012 12:52 pm

I am with Steveta, on this one. Unless someone comes up with better numerology, Professor Pratt’s numerology stands.

P. Solar
December 13, 2012 12:56 pm

Steveta_uk says: “What VP has said repeatedly on JC’s site is basically that if you can provide a better fit, please do.”
Why would anyone want to spend time searching for a “better fit” of an exaggerated exponential, bent down by a broken filter, plus a non-physically-attributable wiggle, to ANYTHING?
Please explain the motivation and rewards of such an exercise.

P. Solar
December 13, 2012 1:03 pm

RobertInAz: I have not dropped in on the thread for a couple of days. …. Dr. Pratt’s result stands as he has described it. He asks people to show the contradictions.
Then you ought to do so before commenting, no?
He asks for criticisms but it’s fake openness. He clearly has no intent of admitting even the most blatant errors in his pseudo-paper-poster.
Oops is not in the vocabulary of this great scientific authority.

Gail Combs
December 13, 2012 1:18 pm

jack hudson says:
December 13, 2012 at 11:07 am
……………….
On statistics –
I have noticed that since computers and statistical packages became readily available in the 1980’s there has been a shift away from using a trained statistician to do-it-yourself statistics. ‘Six Sigma’ in industry is an example.
The statistical training I got from the ‘Six Sigma’ program at work was absolute crap. All they taught was how to use the computer program with not even a basic explanation of different types of distribution to go with it or even the warning to PLOT THE DATA so you could see the shape of the distribution. They did not even get into attributes vs variables!
It reminds me of the shift from the use of well trained secretaries who would clean up a technonut’s English and pry the needed info out of him to having everyone write their own reports. My plant manager in desperation insisted EVERYONE in the plant take night courses in English composition.
Too bad Universities do not insist that anyone using statistics must take at least three semesters of Stat.

December 13, 2012 1:22 pm

lsvalgaard says:
December 13, 2012 at 12:43 pm
It is OK to derive two time series of the external field driven by the same source, but not to confuse and mix the external and internal fields that have different sources and don’t interact.
As this apparatus does:
http://www.leif.org/research/Rudolf%20Wolf%20and%20the%20Sunspot%20Number.ppt#8
records combined solar and Earth’s fields
or as Vukcevic does in here:
http://www.vukcevic.talktalk.net/EarthNV.htm
calculates combined solar and Earth’s fields
Do you suggest that the combined field curve that happens to match temperature change in the N. Hemisphere is coincidental, that it just appeared by chance?

Gail Combs
December 13, 2012 1:25 pm

fhhaynie says:
December 13, 2012 at 12:17 pm
Substitute a 1300 year wave length sine with a max at MWP and min at LIA for the AGW function and you will get similar results.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
That is pretty much what the Chinese, Liu Y, Cai Q F, Song H M, et al. did. They used 1324 years.

GRAPH: http://jonova.s3.amazonaws.com/graphs/china/liu-2011-cycles-climate-tibet-web.gif
Figure 4: Decomposition of the main cycles of the 2485-year temperature series on the Tibetan Plateau and periodic function simulation. Top: Gray line, original series; red line, 1324 a cycle; green line, 199 a cycle; blue line, 110 a cycle. Bottom: Three sine functions for different timescales. 1324 a, red dashed line (y = 0.848 sin(0.005 t + 0.23)); 199 a, green line (y = 1.40 sin(0.032 t – 0.369)); 110 a, blue line (y = 1.875 sin(0.057 t + 2.846)); time t is the year from 484 BC to 2000 AD.
http://wattsupwiththat.com/2011/12/07/in-china-there-are-no-hockey-sticks/

December 13, 2012 1:39 pm

@Gail. anyone using statistics must take at least three semesters of Stat.
I agree. Three semesters of statistics would be more generally useful throughout adult life than three semesters of Calc. I’m 34 years out of my B.Sc, 31 from my Ph.D. The college texts I return to most often are my Stat books, Johnson & Leone 1977.
Not that calc isn’t useful. Not that it isn’t required for “Diff_E_Q”. But statistics is one of the only courses that by design gives you training in uncertainty, to quantify what you don’t know.

December 13, 2012 1:41 pm

vukcevic says:
December 13, 2012 at 1:22 pm
records combined solar and Earth’s fields
It records the external and internal fields superposed by Nature and thus existing in Nature
or as Vukcevic does in here:
calculates combined solar and Earth’s fields

no, you calculate a field that does not exist in Nature by combining two that are not physically related
Do you suggest that the combined field curve that happens to match temperature change in the N. Hemisphere is coincidental, that it just appeared by chance?
I say that the quantity you calculate does not exist in Nature and therefore that any correlation is spurious or worse. But I thought your case was closed. Keep it that way, instead of carpet bombing every thread on every blog with it.

David L
December 13, 2012 2:09 pm

Can’t you just take the Fourier transform of the raw data to find the frequency components and phase shifts and whatever else is left over?
I do applaud the guy’s work with sines and exponentials. At least it isn’t the standard linear regression garbage!!!
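In principle an FFT does give the frequency components and phases directly. A naive sketch of the idea (the function is mine; a real record would need detrending and windowing first):

```python
import numpy as np

def dominant_components(y, dt=1.0, n=4):
    """Return (frequency, amplitude, phase) for the n strongest FFT lines."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()                      # drop the zero-frequency term
    spec = np.fft.rfft(y)
    freqs = np.fft.rfftfreq(len(y), d=dt)
    strongest = np.argsort(np.abs(spec))[::-1][:n]
    return [(freqs[i], 2 * np.abs(spec[i]) / len(y), float(np.angle(spec[i])))
            for i in strongest]
```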

December 13, 2012 3:01 pm

lsvalgaard says:
December 13, 2012 at 1:41 pm
………
I calculate combined effect of two variables, as you could calculate combined effect of wind and temperature on the evaporation, but in my case it happens that both variables are magnetic fields.
Are you happy now?
I post on other blogs so the readers there also should be aware. You don’t need to follow me around if you think it is not worth your attention. Why are you so concerned?
In a way I am pleasantly surprised that you are devoting all your attention to my ‘nonsense’ rather than the ‘brilliant’ work of your Stanford colleague, discussed above. Either you think Dr. Pratt’s work is of superb quality or utter rubbish; in either case no comment of yours is required.
Good night.

December 13, 2012 3:22 pm

vukcevic says:
December 13, 2012 at 3:01 pm
I calculate combined effect of two variables, as you could calculate combined effect of wind and temperature on the evaporation, but in my case it happens that both variables are magnetic fields.
Are you happy now?

Wind and temperature and evaporation are physically related. Your inputs are not. That they are both magnetic fields is irrelevant; it makes as much sense to combine them as it would the fields of the Sun and Sirius.
I post on other blogs so the readers also should be aware
I think the readers are ill served with nonsense.
You don’t need to follow me around, if you think it is not worth your attention. Why are you so concerned ?
Because scientists have an obligation to combat pseudo-science and provide the public with correct scientific information. Even though not all do that.
In a way I am pleasantly surprised that you are devoting all your attention to my ‘nonsense’ rather than the ‘brilliant’ work of your Stanford colleague, discussed above.
You should be ashamed of peddling your nonsense, not pleased when found out.
Either you think Dr. Pratt’s work is of a superb quality or utter rubbish
Curve fitting is what it is. Whether one believes in it has little bearing on the mathematical validity of the fitting procedure. I asked him to make an experiment for me and the result was that what he called the ‘solar curve’ was different in solar data and in CET and HadCRUT3 temperature data, and between the latter two as well. This settled the matter for me at least.

Matthew R Marler
December 13, 2012 4:41 pm

Mike Jonas: But the result was still obtained by circular logic.
In filtering, there is a symmetry: if you know the signal, you can find a filter that will reveal it clearly; if you know the noise, you can design a filter to reveal the signal clearly. Pratt assumed a functional form for the signal (he said so at ClimateEtc), and worked until he had a filter that revealed it clearly.
The thought process becomes “circular” if you “complete the circle”, so to speak, and conclude that: since he found what he assumed, then it must be true. My only claim is that, given what he did, the result can be, and should be, tested on future data. I have written about the same regarding the modeling of Vukcevic and Scafetta. I would say the same regarding the curve-fitting of Liu et al cited by Gail Combs above. Elsewhere I have written the same of the modeling of Latif and Tsonis, and of the GCMs. I do not expect any extant model to survive the next 20 years’ worth of data collection, but I think that the data collected to date do not clearly rule out very much — though alarmist predictions made in 1988-1990 look less credible year by year.

Matthew R Marler
December 13, 2012 4:46 pm

Mike Jonas: However, others have looked at NAT. eg, http://wattsupwiththat.com/2010/09/30/amopdo-temperature-variation-one-graph-says-it-all/
I haven’t investigated their workings, so I am not in a position to say whether their graph is worth anything, but at least it is using real data on the PDO and AMO. If they have got it right (NB. that’s an “If”), then they have nailed NAT, and HUM looks to be around a flat zero.

Well said.

December 13, 2012 5:22 pm

I submit that a simpler and better fit of the unfiltered data is 0.573 − 0.973·sin(x/608 + 0.96) + 0.108·sin(x/63 + 1.21) + 0.038·sin(x/20 + 1.46), where x = 2π·year. AGW may be covariant with that 608 year cycle and contributes a little bit to the magnitude of the coefficient −0.973. Most of the residual looks like a three to five year cycle.
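Transcribed into code for anyone who wants to test the claim (coefficients taken verbatim from the comment above, unverified):

```python
import numpy as np

def proposed_fit(year):
    """The commenter's three-sine fit, with x = 2*pi*year."""
    x = 2 * np.pi * np.asarray(year, dtype=float)
    return (0.573
            - 0.973 * np.sin(x / 608 + 0.96)
            + 0.108 * np.sin(x / 63 + 1.21)
            + 0.038 * np.sin(x / 20 + 1.46))

# e.g. proposed_fit(np.arange(1850, 2013))
```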

RACookPE1978
Editor
December 13, 2012 6:24 pm

Marler & Jonas, Vuk, et al. 8<)
The thought process becomes “circular” if you “complete the circle”, so to speak, and conclude that: since he found what he assumed, then it must be true. My only claim is that, given what he did, the result can be, and should be, tested on future data. I have written about the same regarding the modeling of Vukcevic and Scafetta. I would say the same regarding the curve-fitting of Liu et al cited by Gail Combs above. Elsewhere I have written the same of the modeling of Latif and Tsonis, and of the GCMs. I do not expect any extant model to survive the next 20 years’ worth of data collection, but I think that the data collected to date do not clearly rule out very much — though alarmist predictions made in 1988-1990 look less credible year by year.
OK, so try this:
We have a (one) actual real-world temperature record. It has a “lot” of noise in it, but it is the only one that has actual data inside its noise and high-frequency (short-range or month-to-month) variation and its longer-frequency (60?? year and (maybe) 800-1000-1200 year) variations. Behind the noise of recent changes – somewhere “under” the temperatures between 1945 and 2012 – there “might be” a CAGW HUM signal that “might be” related to CO2 levels; and there “might be” a non-CO2 signal related to the UHI effect, starting about 1860 for large cities and tailing off to a steady value, and starting off between 1930 and 1970 for smaller cities and what are now rural areas. Both UHI “signals” would begin at near 0, ramp up as population in the area increases between 5,000 and 25,000, then slow as the area saturates with new buildings and people after 25,000 people.
The satellite data must be assumed correct for the whole earth top to bottom.
The satellite data varies randomly month-to-month by 0.20 degrees. So it appears you must first put an “error band” of +/-0.20 degrees around your thermometer record BEFORE looking for any trends to analyze. Any data within +/-0.20 degrees of any running average can be proven, with today’s instruments for the entire earth, to be just noise.
Then, you try to eliminate the 1000 year longer term cycle – if you see it at all.
Then, after all the “gray band” is eliminated, you can start eliminating the short cycle term. Or, looking at the problem differently, start looking for the potential short cycle.

Doubting Rich
December 13, 2012 6:45 pm

So we take a mathematical process we know can fit any curve (I seem to recall Fourier first showed that a set of sinusoidal oscillations at various frequencies and amplitudes would fit any curve to any accuracy given enough different sine waves; in other words any curve can be expressed as a spectrum) and gasp when it only takes 4 curves to fit to a few mK data that (a) are known to oscillate, so will fit closely with relatively few curves, (b) have the non-oscillatory component arbitrarily removed (see (a)), and (c) don’t actually vary very much, so a few millikelvin is proportionally not as precise a fit as it sounds. Of course we don’t mention that none of the data points can possibly be defined to the level that expressing them in millikelvin has any meaning.

Matthew R Marler
December 13, 2012 9:12 pm

Mike Jonas: VP’s claimed results flowed from his initial assumptions. That’s what makes it circular.
When results follow from assumptions that’s logic or mathematics. It only becomes circular if you then use the result to justify the assumption. You probably recall that Newton showed that Kepler’s laws followed from Newton’s assumptions. Where the “circle” was broken was in the use of Newton’s laws to derive much more than was known at the time of their creation. In like fashion, Einstein’s special theory of relativity derived the already known Lorentz-Fitzgerald contractions; that was a really nice modeling result, but the real tests came later.
I tested the key part of his result (the sawtooth) against existing data (the PDO and AMO) and found that it did not represent the “multidecadal ocean oscillations” as claimed.
Pratt claimed that he did not know the exact mechanism generating the sawtooth. You showed that one possible mechanism does not fit.
I think we agree that Pratt’s modeling does not show the hypothesized CO2 effect to be true. At ClimateEtc I wrote that Pratt had “rescued” the hypothesis, not tested it. That’s all.
Most of what you have written is basically true but “over-wrought”. There is no reason to issue a retraction.

December 13, 2012 9:32 pm

Just because something can be explained by sine waves proves nothing. In VPmK the anthropogenic component could be replaced by another sine wave of long periodicity to represent the rising AGW component.
That said, I am of the opinion that much of the change in global temperatures can indeed be explained by AGW and the AMO (plus TSI (sunspots) and Aerosols (volcanoes)). See for example my discussion of the paper by Zhou and Tung and my own calculations at:
http://www.climatedata.info/Discussions/Discussions/opinions.php
The main conclusion of this work is that AOGCMs have overestimated the AGW component of the temperature increase by a factor of 2 (welcomed by sceptics but not by true believers in AGW) but there is still a significant AGW component (welcomed by true believers but not by sceptics).

December 13, 2012 9:32 pm

Steven Mosher says:
December 13, 2012 at 9:04 am
4. This is basically the same approach that many here praise when scafetta does it.
****************************************
As usual, Mosher continues in his attempts to mislead people about the real merits of my research. Mosher’s contorted reasoning is also discussed here:
http://tallbloke.wordpress.com/2012/08/08/steven-mosher-selectively-applied-reasoning/
I do not expect that my explanation will convince him, because he clearly does not want to understand.
However, for the general reader this is how the case is.
My research methodology does not have anything to do with the curve fitting exercise implemented in Pratt’s poster.
My logic follows the typical process used in science, which is as follows.
1) preliminary analysis of the patterns of the data without manipulation based on given hypotheses.
2) identification of specific patterns: I found specific frequency peaks in the temperature record.
3) search for possible natural harmonic generators that produce similar harmonics: I found them in major astronomical oscillations.
4) search for whether the astronomical oscillations hindcast data outside the data interval studied in (1): I initially studied the temperature record since 1850, and tested the astronomical model against the temperature and solar reconstructions during the Holocene!
5) use a high-resolution model to hindcast the signal (1): I calibrate the model from 1850 to 1950 and check its performance in hindcasting the oscillations in the period 1950-2012, and vice versa.
6) the tested harmonic component of the model is used as a first forecast of the future temperature.
7) wait for the future to see what happens: for example, follow the (at-the-moment-very-good) forecasting performance of my model here
http://people.duke.edu/~ns2002/#astronomical_model_1
There is nothing wrong with the above logic. It is the way science is actually done, although Mosher does not know it.
My modelling methodology is equivalent to the way the ocean tidal empirical models (which are the only geophysical models that really work) have been developed.
Pratt’s approach is quite different from mine; it is the opposite.
He does not try to give any physical interpretation to the harmonics, but interprets the upward trending as surely due to anthropogenic forcing despite the well known large uncertainty in the climate sensitivity to radiative forcing. I did the opposite: I interpret the harmonics first, and state that the upward trending could have multiple causes that also include the possibility of secular/millennial natural variability that the decadal/multidecadal oscillations could not predict.
Pratt did not test his model for hindcasting capabilities, and he cannot do it because he does not have a physical interpretation for the harmonics. I did hindcast tests, because harmonics can be used for hindcast tests.
Pratt’s model fails to interpret the post-2000 temperature, as do all AGW models, which implies that his model is wrong.
My model correctly hindcast the post 2000 temperature: see again
http://people.duke.edu/~ns2002/#astronomical_model_1
In conclusion, Mosher does not understand science, but I cannot do anything for him because he does not want to understand it.
However, many readers in WUWT may find my explanation useful.

December 13, 2012 9:47 pm

Nicola Scafetta says:
December 13, 2012 at 9:32 pm
7) wait the future to see what happens: for example follow the (at-the moment-very-good) forecasting performance of my model here
It fails around 2010 and you need a 0.1 degree AGW to make it fit. I would say that there doesn’t look to be any unique predictability in your model. A constant temperature the past ~20 years fits even better.

ed
December 13, 2012 9:53 pm

Anybody know the formula for the sawtooth (which multidecadal series and what factors) as there is certainly no resemblance to the AMO… by far the most dominant oscillation. The sawtooth presented looks well planned to me; it must have taken a lot of work to construct to get the desired residuals…

December 13, 2012 10:07 pm

lsvalgaard says:
December 13, 2012 at 9:47 pm
It fails around 2010 and you need a 0.1 degree AGW to make it fit.
******************************************
A model cannot fail to predict what it is not supposed to predict. The model is not supposed to predict the fast ENSO oscillations within the time scale of a few years, such as the El Niño peak in 2010. That model uses only the decadal and multidecadal oscillations.

December 13, 2012 10:29 pm

Nicola Scafetta says:
December 13, 2012 at 10:07 pm
That model uses only the decadal and multidecadal oscillations.
Does that model predict serious cooling the next 20-50 years?

December 14, 2012 1:26 am

lsvalgaard says:
December 13, 2012 at 10:29 pm
Nicola Scafetta says:
December 13, 2012 at 10:07 pm
That model uses only the decadal and multidecadal oscillations.
…….
Does that model predict serious cooling the next 20-50 years?

Yes it does
http://www.vukcevic.talktalk.net/CET-NV.htm
see graph 2.
What has that to do with Scafetta? Not much directly, but put in simple terms:
The correlation between the solar and the Earth’s internal variability – which we can only judge by observing the surface effects, or changes in the magnetic fields as an internal proxy – means either:
– the sun affects the Earth, since the effect the other way around appears to be negligible,
– or both are caused by a common factor, possibly planetary configurations.
Surface effects correlation:
http://www.vukcevic.talktalk.net/SSN-NAP.htm
Internal variability (as derived from magnetic fields as a proxy) correlation:
http://www.vukcevic.talktalk.net/TMC.htm
As the CET link shows, the best and the longest temperature record is not immune to such a sun-Earth link, and neither are the relatively reliable shorter recent records from the N. Hemisphere
http://www.vukcevic.talktalk.net/GSC1.htm
You and Matthew R Marler call it meaningless curve fitting, but as long as the curves are derived from data it is foolish to dismiss as nonsense. I admit that I can’t explain the above in satisfactory terms; if you wish to totally reject it you have your reasons, and you were welcome to say that in the past as you are now and in future.

December 14, 2012 1:39 am

The model used here is fine for interpolation, ie to calculate the temperature at time T-t where T is the present and t is positive. So it would be useful to replace the historic temperature record by a formula. If we need to know temperatures at time T+t then this is an extrapolation, which is valid only if the components of the formula represent all the elements of physical reality that determine the evolution of the climate. But this is precisely what has not been shown!
There was a transit of Venus earlier this year and we are told that the next one will be in 2117.
We can be confident in this prediction because we know that the time evolution of the planets is given accurately by the laws of Newton/Einstein.
Climate science contains no equivalent body of knowledge.

December 14, 2012 5:21 am

vukcevic says:
December 14, 2012 at 1:26 am
it is foolish to dismiss as nonsense.
The nonsense part is to make up a data set from two unrelated ones.

December 14, 2012 6:54 am

richardscourtney says:
December 13, 2012 at 11:29 am
Stephen Rasey:
re your post at December 13, 2012 at 11:00 am.
Every now and then one comes across a pearl shining on the sand of WUWT comments. The pearls come in many forms.
Your post is a pearl. Its argument is clear, elegant and cogent. Thankyou.
=========
Agreed.

December 14, 2012 7:09 am

Nicola Scafetta says:
December 13, 2012 at 9:32 pm
My modelling methodology is equivalent to the way the ocean tidal empirical models (which are the only geophysical models that really work) have been developed.
============
Correct. The tidal models are not calculated from first principles in the fashion that climate models try and calculate the climate. The first principles approach has been tried and tried again and found to be rubbish because of the chaotic behavior of nature.

December 14, 2012 7:14 am

lsvalgaard says:
December 14, 2012 at 5:21 am
vukcevic says:
December 14, 2012 at 1:26 am
it is foolish to dismiss as nonsense.
The nonsense part is to make up a data set from two unrelated ones.
=============
That the data sets are unrelated is an assumption. This can never be known with certainty unless one has infinite knowledge. Something that is impossible for human beings.
The predictive power of the result is the test of the assumption. If there is a predictive power greater than chance, then it is unlikely they are unrelated. Rather, they would simply be related in a fashion as yet unknown.

December 14, 2012 7:25 am

ferd berple says:
December 14, 2012 at 7:14 am
That the data sets are unrelated is an assumption.
That they are related is the assumption. Their unrelatedness is derived from what we know about how physics works.

December 14, 2012 7:44 am

lsvalgaard says:
December 14, 2012 at 5:21 am
The nonsense part is to make up a data set from two unrelated ones.
All magnetometer recordings around the world do it, and did it from the time of Rudolf Wolf, as you yourself show here:
http://www.leif.org/research/Rudolf%20Wolf%20and%20the%20Sunspot%20Number.ppt#8
to today at Tromso
http://www.vukcevic.talktalk.net/Tromso.htm
Unrelated? Your own data show otherwise:
http://www.vukcevic.talktalk.net/TMC.htm
How does it compare with Pratt’s CO2 millikelvin?
http://www.vukcevic.talktalk.net/CO2-Arc.htm
The more you keep repeating ‘unrelated’, the more I think you are trying to suppress this from becoming more widely known.

December 14, 2012 7:58 am

I have always figured the models were fits to the data. When I took my global warming class at Stanford, the head of Lawrence Livermore’s climate modeling team argued at first that the models were based on physical formulas, but I argued that they keep modifying the models more and more to match the hindcasting they do, and all the groups do the same. Studies have shown, and he readily admitted, that none of the models predicts any better than the others. In fact only the average of all the models did better than any individual model. Such results are what one expects from a bunch of fits. He acknowledged that they were indeed fits.
If what you are doing is fitting the models, then a Fourier analysis of the data would produce a model like VPmK, which would be a much better fit than the computer models. All that VPmK did was demonstrate that if you want to fit the data to any particular set of formulas, you can do it more easily and with much higher accuracy using a conventional mathematical technique than by trying to make a complex iterative computer algorithm with complex formulas match the data. No wonder they need teams of programmers and professors involved, since they are trying to make such complex algorithms match the data. VPmK’s approach is simpler and far more accurate.
The problem with all fits, however, is that since they don’t model the actual processes involved they are no better at predicting the next data point, and can’t be called “science” in the sense that experiments are done and physical facts uncovered and modeled. Instead we have a numerical process of pure mathematics which has no relationship to the actual physics involved. VPmK’s “model” conveniently shows the cyclical responses fading and the exponential curve overtaking. This gives credence to the underlying assumptions he is trying to promulgate, but it is no evidence that any physical process is actually occurring, so it is as likely as not to predict the next data point. The idea that the effects of all the sun variations and AMO/PDO/ENSO have faded is counter to all intuitive and physical evidence. The existence of 16 years of no trend indicates that whatever effect CO2 is having is being completely masked by the very natural phenomena VPmK diminishes; the 16-year trend would show that, if anything, the natural forcings are much stronger than before. Instead VPmK attributes more of the heat currently in the system to CO2 and reduces the cooling that would be expected from the current natural ocean and sun phenomena. So just as VPmK’s model shows natural phenomena decreasing to zero effect, the actual world shows this is not the case; again, another model not corresponding to reality.

Chas
December 14, 2012 8:16 am

Steveta_UK “If you can, please present it here”:
Here is FOBS without any multidecadal component removed:
http://tinypic.com/r/x4km54/6
If you squint closely you might see that there are two lines, a red one and a blue one; the standard deviation of the residuals over the 100-year period 1900 to 2000 is 0.79 mK.
For fun, here it is run forward to 2100:
http://tinypic.com/r/dwsc9/6
-There is a lot to be said for thinking about sawteeth 🙂

John West
December 14, 2012 8:20 am

Gail Combs says:
[IF] “I publish a paper showing that the rise and fall of women’s skirts plus a saw tooth pattern provides a good fit to the curve. Since no one can provide a better ‘fit’ than that the paper has to stand?”
LOL!
In the spreadsheet:
AGW = ClimSens * (Log2CO2y – MeanLog2CO2) + MeanDATA
Perhaps I’m a little dense, but somebody might have to explain the physics behind that formula to me before I could take any of this seriously.
Also in the spreadsheet is a slider bar for climate sensitivity; given enough time to “play”, one should be able to slide that to 1 C per 2x CO2 and adjust the sawtooth to arrive at the same results, thereby “proving” (not) that climate sensitivity is 1 C / 2x CO2. Seems like a lot of time and effort for nothing to me.
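For what it’s worth, the formula is just a recentered logarithmic law: the model warms by ClimSens kelvin per doubling of CO2, shifted so that model and data share the same mean. A minimal sketch in Python; the CO2 curve and MeanDATA value below are illustrative stand-ins, not the spreadsheet’s series (2.83 is the slider setting discussed later in this thread):

```python
import numpy as np

years = np.arange(1850, 2011)

# Hypothetical Hofmann-style CO2 curve (excess above 280 ppmv growing
# exponentially) -- an illustrative stand-in, not the spreadsheet's data.
co2 = 280.0 + 4.0 * np.exp((years - 1850.0) / 47.0)

clim_sens = 2.83   # K per CO2 doubling; the slider value mentioned in-thread
mean_data = 0.0    # stand-in for MeanDATA, the mean of the observed anomalies

# AGW = ClimSens * (Log2CO2y - MeanLog2CO2) + MeanDATA
log2co2 = np.log2(co2)
agw = clim_sens * (log2co2 - log2co2.mean()) + mean_data
```

Sliding the ClimSens bar, as suggested above, only rescales this single curve; the sawtooth then has to absorb whatever is left over.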

DirkH
December 14, 2012 9:00 am

“The sawtooth which began life as “so-called multidecadal ocean oscillations” later becomes “whatever its origins“.”
A wonderfully scientific approach – we need a fudge factor to save the “AHH theory”, so we’ll simply take the deviation from reality, call it “a sawtooth, whatever its origins”, and the theory is saved.
PARDON ME? Don’t we give the warmists billions of dollars? Can’t we expect at least a little bit of effort from them when they construct their con?

DirkH
December 14, 2012 9:05 am

Stephen Rasey says:
December 13, 2012 at 11:00 am
“I want to draw people’s attention to the frequency content of VPmK SAW2 and SAW3 wave forms. ”
Very powerful! Dang, I didn’t think of that!

vukcevic
December 14, 2012 9:45 am

Here is a note to Dr. Svalgaard, not for his benefit (he knows it, possibly far better than I do) but for the benefit of other readers.
The relatedness of the solar and the Earth’s magnetic fields can be considered in three ways:
1. Influence of the solar field on the Earth’s field – well known, but short-lived, lasting from a few hours to a few days.
2. A common driving force (e.g. planetary) – considered possible, but an insignificant forcing factor.
3. Forces of the same kind integrated at the point of impact by a receptor. Examples of receptors could be: GCRs, saline ions in ocean currents, interactions between the ionosphere and equatorial storms (investigated by NASA), etc.
A simple example of relatedness through a receptor:
Daylight and torchlight are unrelated at their sources and do not interact, but a photocell will happily integrate them. Not only that, there is an interesting process of amplification, which I am very familiar with.
In the older image tubes from before the CCD era (Saticon, Leddicon and Plumbicon), there is an exponential law of response at low light levels from the projected image, which becomes steeper and more linear further up the curve.
A totally ‘unrelated’ light from the back of the tube, known as ‘backlight’ or ‘bias light’, is projected onto the photosensitive front layer. The effect is a so-called ‘black current’ which lifts the ‘image current’ from the low region up the response curve; the result is a more linear response and, surprisingly, a higher output, since the curve is steeper further away from zero.
The two light sources are originally totally unrelated and do not interact with each other in any way, but they are integrated by the receptor; furthermore, an ‘amplification’ of the output current from the stronger source is achieved by the presence of the weaker one.
I know that our host worked in the TV industry and may be familiar with the above.
So I suggest that Dr. Svalgaard abandon the ‘unrelated’ counterpoint and consider the science in the ‘new light’ of my finding:
http://www.vukcevic.talktalk.net/EarthNV.htm

Matthew R Marler
December 14, 2012 10:45 am

vukcevic: You and Matthew R Marler call it meaningless curve fitting,
I don’t think I said that your modeling was “meaningless”; I have said that the test of its “truth” will be how well it fits future data.

vukcevic
December 14, 2012 11:20 am

Re: models and harmonics
I was asked by one of the WUWT participants (we often correspond by email) whether it would be possible to extrapolate CET by a few decades. I had a go.
The first step was to separate the summer and the winter data (using the two months around the two solstices, to see the effect of direct TSI input). Result:
http://www.vukcevic.talktalk.net/MidSummer-MidWinter.htm
This graph was at a later stage presented on a couple of blogs, but Grant Foster (Tamino), Daniel Bailey (Skeptical Science) and Jan Perlwitz (NASA) fell flat on their faces trying to elucidate why there is no apparent warming in 350 years of the summer CET, but gentle warming in the winters for the whole of 3.5 centuries.
Meteorologists know it well: the Icelandic Low, the semi-permanent atmospheric pressure system in the North Atlantic. Its footprint is found in most climatic events of the N. Hemisphere. The strength of the Icelandic Low is the critical factor in determining the path of the polar jet stream over the North Atlantic.
In the winter the IL is located SW of Greenland (driver: the Subpolar Gyre), but in the summer the IL is to be found much further north (most likely driver: the North Icelandic Jet, formed by complex physical interactions between warm and cold currents), which, as the graphs show, had no major ups or downs.
Next step: finding harmonic components separately for the summers and the winters. I used one component common to both and one specific to each of the two seasons, all with periods below 90 years. Using the common and the two individual components, I synthesized the CET, adding the average of the two linear trends. The result is nothing special, but it did indicate that a much older finding of an ‘apparent correlation’ between the CET and N. Atlantic geological records now made more sense.
http://www.vukcevic.talktalk.net/CNA.htm
I digressed; what about the CET extrapolation?
http://www.vukcevic.talktalk.net/CET-NV.htm
Well, it suggests a return to what we had in the 1970s, but that is speculative. Although the CET is 350 years long, I would advise caution: anything longer than 15-20 years is no more than ‘blind faith’.
Note: I am not a scientist, and in no way a climate expert; the only models I have made are electronic ones, both designed and built as working prototypes.

vukcevic
December 14, 2012 11:23 am

Matthew R Marler, since I have quoted you wrongly, I do apologise.

lsvalgaard
December 14, 2012 12:22 pm

vukcevic says:
December 14, 2012 at 9:45 am
3. Forces of same kind integrated at point of impact by the receptor.
So I suggest to Dr. Svalgaard to abondan ‘unrelated’ counterpoint and consider the science in the ‘new light’ of my finding

There is no integrated effect, as the external currents have a short lifetime and decay rapidly. Your ‘findings’ are not science in any sense of that word. You might try to explain in one sentence how you make up the ‘data’ you correlate with. Other people have asked for that too, but you have resisted answering [your ‘paper’ on this is incomprehensible, so a brief, one-sentence summary here might be useful].

Chas
December 14, 2012 12:45 pm

Ha! Mr. Whack-a-mole must have gone to bed 😉
Time for a teeny weeny extrapolation, methinks.
The past 1,500 years’ temperature history (base data in red):
http://tinypic.com/r/2rm7bd3/6
(The five free-phase sine waves, as above)

Mike Rossander
December 14, 2012 1:18 pm

First, let me also congratulate the author for having the courage to provide all of the data and calculations. Such transparency is in the best interests of science.
I also really liked the very first question raised in this thread – “Assume AGW is a flat line and repeat the analysis” – and thought that should be a challenge to take up. What I did may be overly simplistic, so please correct my attempt.
I downloaded the Excel spreadsheet and reset cell V26 (ClimSens) to a value of zero. As expected, the red AGW line on the graph dropped to flat. I then set up some links to the green parameters so they could be dealt with as a single range (a requirement for the Excel Solver Add-in). I played with a few initial parameters to see what they might do, then fired off Solver with the instruction to modify the parameters below with a goal of maximizing cell U35 (MUL R2). No other constraints were applied.
Converging to the parameters below, Solver returned a MUL R2 of 99.992%, very slightly higher than the downloaded result. The gray MRES line in the chart shows very flat. (I think it needs one more constant to bring the two flat lines together but couldn’t find that on the spreadsheet.) Have I successfully fit the elephant? Does this result answer Steveta_uk’s challenge above (Dec 13 at 8:55 am)? Or have I missed something here?
Cell name value
D26 ToothW 2156.84…
G23 Shift 1 1928.48…
G26 Scale 1 1489.03…
H23 Shift 2 3686.05…
H26 Scale 2 1386.24…
I23 Shift 3 4356.71…
I26 Scale 3 2238.07…
J23 Shift 4 3468.56…
J26 Scale 4 0
K23 Shift 5 2982.83…
K26 Scale 5 781.58…
M26 Amp 2235.58…
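For anyone wanting to replicate this kind of Solver exercise outside Excel, a rough sketch follows: five free sine waves fitted to a synthetic smoothed series with climate sensitivity held at zero, reporting R² as above. The series and starting values are invented for illustration; this is not Pratt’s spreadsheet.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = np.arange(1850.0, 2011.0)

# Synthetic stand-in for the box-filtered temperature series (not HadCRUT3).
data = (0.4 * np.sin(2 * np.pi * (t - 1880) / 65)
        + 0.005 * (t - 1850)
        + 0.01 * rng.standard_normal(t.size))

def five_sines(t, *p):
    """Sum of five sine waves; p = (amp, period, phase) * 5. AGW fixed at zero."""
    y = np.zeros_like(t)
    for amp, period, phase in zip(p[0::3], p[1::3], p[2::3]):
        y = y + amp * np.sin(2 * np.pi * (t - phase) / period)
    return y

p0 = [0.4, 65, 1880,  0.2, 150, 1900,  0.1, 300, 1900,
      0.05, 40, 1900,  0.05, 20, 1900]
popt, _ = curve_fit(five_sines, t, data, p0=p0, maxfev=20000)

fit = five_sines(t, *popt)
r2 = 1 - np.sum((data - fit) ** 2) / np.sum((data - data.mean()) ** 2)
print(f"R^2 with climate sensitivity locked at zero: {r2:.5f}")
```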

mikerossander
December 14, 2012 2:14 pm

Update: Might have found that constant. Setting cell D32 to a value of -0.1325 roughly centers the MRES line around zero and makes the gray detail chart visible.

DirkH
December 14, 2012 2:22 pm

Chas says:
December 14, 2012 at 12:45 pm
“Ha! Mr. Whack-a-mole must have gone to bed 😉
Time for a teeny weeny extrapolation, methinks.
The past 1,500 years’ temperature history (base data in red)”
Exactly like in the history books! /sarc
Thanks, Chas. Beautiful.

Chas
December 14, 2012 2:46 pm

Mike, I guess that because you are maximising the R2 you are ending up with two parallel but offset fits (offset by about 32 mK). If you minimised the sum of the residuals you would kill two birds with one stone. This would have to be the sum of the absolute values of the residuals, or the sum of the squared residuals, to stop the negative residuals cancelling out the positive ones. I get the standard deviation of ALL of your residuals to be about 2 mK; VP selected his SD from the best 100-year period to get the ‘less than a millikelvin’ bit, I think.
-I notice that the residuals seem to have a clear sine wave in them with an amplitude of about 5 mK, whilst at the same time you have SAW4 with an amplitude of zero. I wonder if Solver hasn’t converged?
This is all on the basis that I have entered your solutions correctly!
In some ways your fit ought to make more sense to VP than his own; you have the first sine wave, and he is left wondering why his first wave doesn’t exist.
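Chas’s diagnosis is easy to check numerically. Assuming the spreadsheet’s MUL R2 is a squared correlation coefficient (Excel’s RSQ; an assumption on my part, not something stated in the workbook), it is blind to a constant offset between model and data, while a squared-residual criterion is not:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.standard_normal(161).cumsum() / 20    # arbitrary series
model = data + 0.003 * rng.standard_normal(161)  # near-perfect fit
shifted = model + 0.032                          # same fit, offset ~32 mK

for name, m in (("model", model), ("model + 32 mK", shifted)):
    r2 = np.corrcoef(data, m)[0, 1] ** 2   # Excel RSQ-style R^2
    sse = np.sum((data - m) ** 2)          # sum of squared residuals
    print(f"{name}: R^2 = {r2:.6f}, SSE = {sse:.4f}")
# R^2 comes out identical for both; only the squared-residual criterion
# notices (and would remove) the 32 mK offset.
```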

vukcevic
December 14, 2012 2:52 pm

Leif Svalgaard says:
December 14, 2012 at 12:22 pm
how do you make up the ‘data’
…….
The ‘Phil Jones from CRU’ syndrome – unable to read the Excel file?
For how to calculate the spectrum of the changes in the Earth’s magnetic field, see pages 13 & 14; you can repeat the calculations.
Since you are so infuriated by ‘unrelated’ magnetic fields and my ‘art of making up the data’, you should closely examine Fig. 26, in case you missed it. That should make it even more interesting.
See you.

lsvalgaard
December 14, 2012 10:11 pm

vukcevic says:
December 14, 2012 at 2:52 pm
my ‘art of making-up the data’ you should closely examine Fig. 26, in case you did miss it.
You are ducking the question again.

vukcevic
December 15, 2012 4:01 am

lsvalgaard says:
December 14, 2012 at 10:11 pm
………………..
Let’s summarize:
The subject of my article is a calculation which shows that natural temperature variability in the N. Hemisphere is closely correlated with geomagnetic variability; no particular mechanism is considered.
1. You objected: the data was artificially ‘made up’.
– This was rebutted by showing that the ‘new data’ is a simple arithmetic sum of two magnetic fields.
2. You said: this is not valid, since the fields do not interact.
– This was rebutted by showing that interaction is a property of the receptor; e.g. magnetometers do react to the two fields combined. A secondary interaction is also recognized via the induction of electric currents.
3. You said: the currents are of short duration, from a few hours up to a few days, therefore the effect is insignificant.
– This happens on a regular basis, and it may be sufficient to alter the average temperature of about 290K by + or – 0.4K.
4. You are returning to the starting point: ‘made up’ data (see item 1).
– It is not my intention to go forever in circles.
You have made more than 20 posts, here and elsewhere, regarding my finding, with little or no success in invalidating it.
My intention is to get more scientific appraisal; as a next step I have emailed Dr. J. Haigh (from my old university), whose interests include the solar contribution to climate change.
She is a firm supporter of the AGW theory; you can contact her and join forces, if you wish to do so. The content of the email is posted here:
http://wattsupwiththat.com/2012/12/14/another-ipcc-ar5-reviewer-speaks-out-no-trend-in-global-water-vapor/#comment-1173874

lsvalgaard
December 15, 2012 6:13 am

vukcevic says:
December 15, 2012 at 4:01 am
1. You objected: the data was artificially ‘made up’.
– This was rebutted by showing that the ‘new data’ is a simple arithmetic sum of two magnetic fields.

Repeating an error is not a rebuttal, and a simple inspection of your graph shows that your made-up data is not the sum of two ‘magnetic fields’. So, again, how exactly is the data made up?

vukcevic
December 15, 2012 10:06 am

This is by way of explanation of Dr. Svalgaard’s statement ‘you made up data’ (which some would read as ‘you are a pseudo-scientist, possibly even a fraudster’), though I am certain that Dr. S didn’t imply that.
Here we go: since the AMO is a trendless time function (oscillating around zero), it is assumed that the signed SSN, normalized to the AMO values, is an adequate representative of the heliospheric magnetic field at the Earth’s orbit. It would be equally possible to use the McCracken, Lockwood or Svalgaard & Cliver data, but these either lack sufficient resolution or mutually disagree, so the SSN, as the most familiar and internationally accepted data set, is considered best for the purpose.
The Earth’s magnetic field has a number of strong spectral components. One of them has exactly the same period as the Hale cycle (as calculated from the SSN). I could have used it as an undamped oscillator (a clean cosine wave), but the match to the AMO is not as good as with the signed SSN. This points to the SSN as the more likely factor, unless of course the Earth harmonic has the same annual ‘modulation’ in the manner of the SSN, which would be an extraordinary finding. That possibility is considered on page 14, Fig. 25, curve dF(t). For the purpose of comparison to the AMO, a second component is then taken from the Earth spectrum and employed as a clean cosine wave. I suspect that this component is due to a feedback ‘bounce’ caused by propagation delay in the Earth’s interior (see the link to the Hide and Dickey paper), but this is speculative.
It is a huge puzzle why the Earth’s magnetic field should have an oscillation component whose main period is exactly the same as the SSN-derived Hale cycle, but of much stronger intensity than the heliospheric field. I do not think so, but many solar scientists (including yourself) postulate that the solar dynamo has an amplification property. If something of the kind existed within the Earth’s dynamo, it would explain the strong Earth component as well as the Antarctic field: http://www.vukcevic.talktalk.net/TMC.htm. How could this occur? The depth of the Earth’s crust is 20-40 km, and geomagnetic-storm-induced currents reach down to 100 km (Svalgaard), so it is possible that a magnetized bubble of liquid metal is formed and then amplified by the field of the Earth’s dynamo, in the manner of the solar dynamo amplification. This is highly speculative, and despite promoting solar dynamo amplification you will reject geodynamo amplification, but it would explain a lot.
The mathematics of the ‘amplification’ of periodic oscillations is dead simple: cos A + cos B = 2 cos((A+B)/2) · cos((A−B)/2), and vice versa; the result is one short and one long period of oscillation, giving rise to the AMO’s two characteristic periods of 9 and 64 years (see pages 5 & 6). Where this process occurs is not known (it could be in the magnetic field itself, or in the oceans as the receptor of the two oscillations).
Now to the Excel file. The word ‘sum’ is used in its more general meaning, to describe any of the four arithmetic operations as used in the file, but here is a list:
Column 1: Year
Column 2: SSN
Column 3: (+ & – 1 to sign the SSN)
Column 4: Hale cycle – SSN with sign (times)
Column 5: SSN normalized to the AMO (divide)
Column 6: – Earth field oscillator (cos, times, minus, divide)
Column 7: Geo-solar oscillator (times)
Column 8: Geo-solar oscillator moved forward by 15 years
Column 9: AMO
Column 10: AMO 3yma (plus & divide)
So what is all this about: http://www.vukcevic.talktalk.net/GSC1.htm
The annoying fact is that you know all of the above; why you want it all spelt out, God only knows. I am not answering any more questions; have a go at your Stanford colleague and his millikelvins. Instead I shall refer you to the appropriate page in my article, the Excel file you have, and this post.
Thank you and good bye.
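The trigonometric identity itself is standard. Reading the 9- and 64-year figures as the carrier and envelope periods of such a beat (my reading; the article may intend otherwise), the two component periods are pinned down as follows:

```latex
% Beat identity: two cosines of periods T1, T2 combine into a fast
% oscillation times a slow envelope.
\cos\frac{2\pi t}{T_1} + \cos\frac{2\pi t}{T_2}
  = 2\,\cos\!\left(\pi t\left(\tfrac{1}{T_1}+\tfrac{1}{T_2}\right)\right)
       \cos\!\left(\pi t\left(\tfrac{1}{T_1}-\tfrac{1}{T_2}\right)\right)

% Requiring the product's two periods to be 9 yr (carrier) and 64 yr (envelope):
\frac{2}{1/T_1 + 1/T_2} = 9, \qquad \frac{2}{1/T_1 - 1/T_2} = 64
\;\Longrightarrow\;
\frac{1}{T_1} = \frac{1}{9}+\frac{1}{64},\quad
\frac{1}{T_2} = \frac{1}{9}-\frac{1}{64}
\;\Longrightarrow\; T_1 \approx 7.9\ \text{yr},\quad T_2 \approx 10.5\ \text{yr}.
```

Whether anything in the geomagnetic record actually oscillates at roughly 7.9 and 10.5 years is, of course, the whole question.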

tty
December 15, 2012 10:17 am

I suggest that Vaughan Pratt and some commentators here read up a bit on Fourier theory. Any (yes, ANY) periodic or aperiodic continuous function can be decomposed into sine waves to any precision wanted. It follows that you can subtract any arbitrary quantity (for example an IPCC estimate of global warming) from that continuous function, and the remainder can still be decomposed into sine waves just as well as before, though they will be slightly different sine waves.
Note, however, that there is absolutely no requirement that those sine waves have any physical reason or explanation.
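The point is easy to demonstrate: subtract any smooth curve from an arbitrary series, and a fixed basis of sinusoids absorbs the remainder about as well as it fitted the original. A toy sketch with synthetic data (not HadCRUT3):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(161.0)
data = rng.standard_normal(161).cumsum() / 15   # an arbitrary wiggly series

def sine_basis(t, n_harmonics=7, period=161.0):
    """Columns: constant, then sin/cos pairs for harmonics 1..n of `period`."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * k * t / period))
        cols.append(np.cos(2 * np.pi * k * t / period))
    return np.column_stack(cols)

def r2_of_sine_fit(y):
    X = sine_basis(t)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares fit
    resid = y - X @ coef
    return 1.0 - resid.var() / y.var()

agw_like = 0.5 * np.exp((t - 161.0) / 40.0)     # any smooth curve will do
print("R^2, raw series:         ", round(r2_of_sine_fit(data), 4))
print("R^2, series minus curve: ", round(r2_of_sine_fit(data - agw_like), 4))
# The basis neither knows nor cares what was subtracted: both targets
# decompose into (slightly different) sine waves about equally well.
```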

lsvalgaard
December 15, 2012 10:50 am

vukcevic says:
December 15, 2012 at 10:06 am
why you want it all spelt out, God only knows.
What you describe is a perfect example of fake data, selected and made up to fit best, based on invalid physics. That you call it ‘data’ is deceptive in the extreme; the one deceived first is yourself. What happened to your grandiose plan of sending your stuff [before AGU] to the geophysics departments at all major universities?

lsvalgaard
December 15, 2012 11:03 am

vukcevic says:
December 15, 2012 at 10:06 am
why you want it all spelt out, God only knows
What you describe is a perfect example of fake, selected, tortured, and made-up stuff twisted to fit an idea. Calling the result ‘data’ is deceptive in the extreme, the one most deceived being yourself, belying your claim that ‘the “new data” is a simple arithmetic sum of two magnetic fields’.
BTW, what happened to your grandiose plan of carpet-bombing [before AGU] the geophysics departments at all major universities to drum up support for your ideas?

Matthew R Marler
December 15, 2012 11:42 am

Mike Jonas: Regarding my request to Vaughan Pratt to retract. I made the same request on ClimateEtc, to which he replied:
I thought I would mention again that Einstein’s 1905 paper on special relativity showed that: by assuming the speed of light to be constant he could derive the already well-known Lorentz-Fitzgerald contraction, an exercise you would regard as circular because the Lorentz-Fitzgerald contraction was already known, and the mechanism by which the speed of light can be independent of the relative motions of source and receiver (whereas the frequency and wavelength are not so independent) is a mystery.
I don’t mean to embarrass Dr Pratt by elevating him into the Pantheon with Einstein, but the logic of the two theoretical derivations is the same in the two cases: the result is known, and the procedure produces it. The suspicion surrounding Einstein’s result was such that the Swedish Academy awarded him the Nobel Prize for a different 1905 paper, and the general theory did not begin to gain widespread acceptance until the Eddington expedition in 1919, and that was the subject of acrimonious debate.
There is no more reason for Pratt to withdraw this paper than there would have been for Einstein to withdraw his first paper on relativity.

vukcevic
December 15, 2012 12:05 pm

Dr. Svalgaard said:
What you describe is a perfect example of fake data, selected and made up to fit best, based on invalid physics. That you call it ‘data’ is deceptive in the extreme; the one deceived first is yourself.
Calling the result ‘data’ is deceptive in the extreme, the one most deceived being yourself, belying your claim that ‘the “new data” is a simple arithmetic sum of two magnetic fields’.

No need to be so furious; the sun matters, you know.
Hey, this is not just the ‘ordinary garden nonsense’ this time, but something far more valuable.
As another university professor said, I ‘would need to distill the argument into relatively simple points, show a few key figs’, and then I’ll dispatch a few emails.
Have a happy Xmas and New Year.
p.s. Apparently a new fable by Hans Christian Andersen has been discovered; try to get a first-print copy for your grandchildren.
http://www.philstar.com/lifestyle-features/2012/12/14/885985/new-found-tale-could-be-hans-christian-andersens

lsvalgaard
December 15, 2012 12:39 pm

vukcevic says:
December 15, 2012 at 12:05 pm
No need to be so furious; the sun matters, you know.
No need to be so evasive. Honesty matters, you know.
Hey, this is not just the ‘ordinary garden nonsense’ this time, but something far more valuable
The D-K [Dunning-Kruger] effect again. There is nothing valuable at all in your stuff.

P. Solar
December 15, 2012 12:55 pm

lsvalgaard says:
>>
Nicola Scafetta says: “7) wait the future to see what happens: for example follow the (at-the moment-very-good) forecasting performance of my model here”
It fails around 2010 and you need a 0.1 degree AGW to make it fit. I would say that there doesn’t look to be any unique predictability in your model. A constant temperature the past ~20 years fits even better.
>>
I recently read an article by a professor at Stanford, one of the top universities in the U.S., claiming that a 3 deg / 2xCO2 model was accurate to within one thousandth of a degree. But I’m a bit concerned, because he doesn’t know how to do a running mean.
Do you think it matters?

P. Solar
December 15, 2012 2:09 pm

lsvalgaard says: “A constant temperature the past ~20 years fits even better.”
The same could be said of 3K per doubling; it sure as hell isn’t within 1/1000 of a degree whichever way you spin it.

RobertInAz
December 15, 2012 6:40 pm

Mike Rossander says:
December 14, 2012 at 1:18 pm
Mike Rossander is my hero!

chris y
December 15, 2012 7:17 pm

By showing that a climate sensitivity to CO2 of 0 C/W/m^2 gives as good a fit as (or better than) the consensus climate sensitivity, Mike Rossander and Mike Jonas have completely rubbished the poster.
Thanks for your efforts!
Now assume a climate sensitivity to CO2 of -3 C/W/m^2. That should be fun! I predict that an almost perfect fit can once again be achieved with the correct weighting of suitable sinusoids.

December 15, 2012 9:00 pm

Scafetta doesn’t share his code or his data. He is a fraud.

vukcevic
December 15, 2012 10:47 pm

Vukcevic says that:
he has discovered/invented the following formula
AMO = SSN x Fmf
where:
AMO = Atlantic Multidecadal Oscillation (or de-trended N.H. temp)
SSN = Sunspot number with polarity
Fmf = frequency of the Earth’s magnetic field ripple (undulation).
He calls the above arithmetic sum the ‘Geo-Solar Cycle’
http://www.vukcevic.talktalk.net/GSC1.htm
The calculations are accurate, but he is unable to provide a valid physical mechanism.
Svalgaard says that:
Vukcevic – writes nonsense and pseudo-science, suffers from the Dunning-Kruger mental aberration, makes up data, is deceptive in the extreme (implies fraud), honesty matters (implies dishonesty).
In the years gone by, where I come from, the above attributes had to be earned. Fortunately that is not the case any more. Similar pronouncements were often repeated by the self-appointed ‘guardians of eternal truth’, regardless of geography or historic epoch.
(@mosher forwarded the relevant Excel file to you too)

Matthew R Marler
December 15, 2012 11:18 pm

Mike Jonas: Every single argument of VP’s in support of the process that he used with IPCC-AGW applies absolutely equally to the exact same process with AGW = zero.
On that we agree. The test between the two models will be made with the next 20 years’ worth of data. Having found filters and estimated coefficients, they are not free to modify those coefficients willy-nilly to get good fits each time the data disconfirm their model forecast.

lsvalgaard
December 16, 2012 6:21 am

vukcevic says:
December 15, 2012 at 10:47 pm
AMO = SSN x Fmf
AMO = Atlantic Multidecadal Oscillation (or de-trended N.H. temp)
SSN = Sunspot number with polarity
Fmf = frequency of the Earth’s magnetic field ripple (undulation).
He calls the above arithmetic sum the ‘Geo-Solar Cycle’

Zeroth: Is it AMO or [manipulated] temps? Not the same.
First, the formula uses multiplication [I assume that is what the ‘x’ stands for], so it is not a sum.
Second, which SSN is used? The International [Zurich] SSN or the Group SSN?
Third, the ‘polarity’ of the SSN is nonsense. You might talk about the polarity of the HMF, but then that should go from maximum to maximum [when the polar fields change]. In any case, the polarity that is important for the interaction with the Earth is the North-South polarity which changes from hour to hour.
Fourth, ‘frequency’ is not a magnetic field [you said that you added two magnetic fields].
Fifth, what is ‘ripple’? And what is ‘undulation’?
Sixth, ‘Earth’s field’, measured where? And why there?
In the years gone by, where I come from, the above attributes had to be earned.
With the expertise you perfected back then, you are still earning it in earnest now.
Similar pronouncements were often repeated by the self-appointed ‘guardians of eternal truth’ regardless of geography or historic epoch.
Nonsense is nonsense no matter where and when.

vukcevic
December 16, 2012 10:35 am

lsvalgaard says:
December 16, 2012 at 6:21 am
……..
Some of the points you raise have been explained already; see my post above.
The above formula is a summary in its most abstract form. I have also made it clear that, for simplicity, I refer to all four basic arithmetic operations as a ‘sum’; for your benefit (see the link above), each of the arithmetic operations is specifically itemized in the Excel file description.
You also have access to my article, which is over 20 pages long and contains 39 illustrations, of which 35 are my own product; many of the questions you pose are elaborated in detail there. You also have the Excel file with further information.
You will appreciate that the blog is not capable of furnishing a full reproduction, so reading the article is a prerequisite, which of course you are welcome to do.
Thank you for the note; once the final publication is composed (this is the first draft), the points you made will be fully considered.
For the time being you may consider these snippets of information as ‘unofficial leaks’ from a future publication, which is currently in ‘high vogue’ on the fringes of climate science, and treat them as such, or else ignore them.
Thanks again for your attention.

vukcevic
December 16, 2012 11:09 am

lsvalgaard says:
December 16, 2012 at 6:21 am
……
May I add, I feel highly privileged and grateful that most, if not all, of your attention and time during the last few days, on this blog and elsewhere, was devoted to ‘ironing out’ any inadvertent inconsistencies that may be found in the draft of my article, in preference to Dr. Pratt’s paper, which surely must be this year’s most important contribution to the understanding of anthropogenic warming.
Thank you again, sir.

lsvalgaard
December 16, 2012 11:24 am

vukcevic says:
December 16, 2012 at 11:09 am
May I add, I feel highly privileged and grateful that most, if not all, of your attention and time during the last few days, on this blog and elsewhere, was devoted to ‘ironing out’ any inadvertent inconsistencies that may be found in the draft of my article
As I said, your descriptions here on WUWT have been deceptive and your ‘paper’ is incomprehensible. You cannot presume that anybody will try to decipher what you actually mean. The purpose of publication is to make the paper comprehensible enough that a [scientifically literate] reader with only cursory knowledge of the subject can understand your claims by simply skimming it [your version is much too dense with details and loose ends]. You could start by responding to my seven points, right here on WUWT. As things stand, the paper is still nonsense, and you commit the deadly sin of arguing with the referee instead of responding concisely to the points raised.

Vaughan Pratt
December 17, 2012 3:22 am

Mike Rossander is the first, and so far only, person to respond to my challenge to provide an alternative description of modern secular (> decadal) climate to the one I presented at the AGU Fall Meeting 12 days ago. I only wish there were more Mike Rossanders so as to increase the chances of obtaining a meaningful such description. Thank you, Mike!
Although Mike’s magic number of 99.992% may seem impressive, it’s unclear to me how a single number addresses the conclusion of my poster, which starts as follows.
“We infer from this analysis that the only significant contributors to modern multidecadal climate are SAW and AGW, plus a miniscule amount from MRES after 1950.”
If one views MRES as merely the “unexplained” portion of modern secular climate then 99.992% does indeed beat 99.99%.
However a cursory glance at these charts reveals two clear differences between the upper and lower charts, respectively mine and Mike’s.
1. On the left, the upper chart is “within a millikelvin” (as measured by standard deviation) for the “quiet” century from 1860 to 1960. It then moves up until 1968, quietens back down to 1990, then moves up again. At no point does it go below the century-long quiet period. (Ignore the decade at each end; secular or multidecadal climate can’t be measured accurately to within a single decade.)
I justify my “within a millikelvin” title by claiming that, although the fluctuations from 1860 to 1960 are certainly “unexplained” (R2 is defined as unity minus the “unexplained variance” relative to the total variance), the non-negative bumps thereafter admit explanations, namely non-greenhouse impacts of growing population and technology in conjunction with the adoption of emissions controls. Their clear pattern therefore makes it unreasonable to count them as part of the unexplained variance.
The lower chart on the other hand simply wiggles up and down throughout the entire period, and moreover with a standard deviation three times as large. It is just as happy to go negative as positive, and it draws absolutely no distinction between the low-tech low-population 19th century and the next century. WW1 consisted largely of firing Howitzers, killing many millions of soldiers with bayonets and machine guns, and dropping bricks from planes, while WW2 consisted largely of blowing cities and dams partially or in some cases completely to smithereens with conventional and nuclear weapons of devastating power. The clear consequences of this trend led to the cancellation of WW3 by mutual agreement.
There is not a trace of this progression in Mike’s chart, just oscillations both above and below the line. Moreover they even die down a little near the end—what clearer proof could you ask for that the increasing human population and its technology cannot be having any impact whatsoever on the climate?
2. On the right, the upper chart shows in orange the Hale or magnetic cycle of the Sun. The lower one does the same except that it is much messier and gives no reason to suppose that a cleaner picture is possible.
Now let’s look at how far one needs to bend over in order to cope with zero AGW. Here are the ten coefficients Mike and I are using to express the sawtooth shape, expressed in natural units rather than the incomprehensible slider units. For shifts this is the fraction of a sawtooth period t to be shifted by, so for example 0.37t means a shift of 37% of the sawtooth period. For scale it is the attenuation of the harmonics, so for example 0.37X means an attenuation down to 37% of full strength for that harmonic in the case of a perfect sawtooth. 0 and X are synonymous with 0X and 1X respectively.
Mike:
Shifts: 0.092848t 0.268605t 0.335671t 0.246856t 0.198283t
Scales: 1.48903X 1.38624X 2.23807X 0 0.78158X
(The 0 for Scale4 is clearly a bug in how the problem was presented to Solver.)
Me:
Shifts: 0 0 0 t/40 t/40
Scales: 0 X X X/8 X/2
If we take the number of bits needed to represent these coefficients as a measure of how much information each of us has to pump into the formulas to force them to fit the data, the difference should be clear to those who can count bits. Also note that all of my shifts are way smaller than all of Mike’s shifts. His five sine waves bear no resemblance whatsoever to the harmonics of a sawtooth.
My question to Mike, and to anyone else who shares Mike’s and my interest in modern secular climate analysis, is: can one create a more plausible MRES, a cleaner HALE cycle, and a less obviously contrived collection of coefficients, while still setting climate sensitivity to zero?
If this can be done we would have a much stronger case against global warming.
(Incidentally, the reason the logic looks circular to Mike Jonas is that iterative least-squares fitting is circular: it entails a loop, and loops are circular. The correct question is not whether the loop is circular but whether it converges.)
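For readers wanting the yardstick made concrete: a textbook sawtooth of period T decomposes into sine harmonics at periods T/n with amplitudes 1/n and zero phase shifts, which is what near-zero shifts and simple scale fractions are being measured against. A minimal sketch (the period is illustrative only; the poster’s SAW additionally filters and locks some of these harmonics):

```python
import numpy as np

def sawtooth_partial_sum(t, period, n_harmonics):
    """Partial Fourier series of a unit sawtooth: every harmonic n enters
    with zero phase shift and amplitude exactly 1/n."""
    y = np.zeros_like(t, dtype=float)
    for n in range(1, n_harmonics + 1):
        y += np.sin(2 * np.pi * n * t / period) / n
    return (2 / np.pi) * y  # the full series is a unit ramp from +1 to -1

t = np.linspace(0.0, 500.0, 2001)
saw5 = sawtooth_partial_sum(t, period=151.0, n_harmonics=5)
# Five in-phase terms with 1/n scales already trace a recognizable ramp;
# five sine waves with arbitrary shifts and scales (as in the Solver fit
# above) give a generic wiggle with no sawtooth structure at all.
```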

fhaynie
Reply to Vaughan Pratt
December 18, 2012 6:48 am

Vaughan.
I have used a statistical curve-fitting approach to test one component of the AGW model that you assume to be valid. That component is the assumption that anthropogenic emissions have caused all of the atmospheric increase in CO2. Read http://www.retiredresearcher.wordpress.com.

mikerossander
December 17, 2012 11:39 am

Good afternoon, Vaughan. I think you may have misunderstood the intent of my comment, and perhaps of the original criticism in the post above. My analysis was trivial, limited and deeply flawed. It had to be so, because it was based on no underlying hypothesis about the factors being modeled (other than the ClimSens control value). It was an exercise that took about 15 minutes and was intended only to illustrate the dangers of over-fitting one’s data.
For example, you argue above that the smaller shifts in your scenario count in its favor. Unless there is a physical reason why small shifts should be preferred, that claim is unjustifiable. A shift of zero may be perfectly legitimate, or totally irrelevant. The statistics are only useful to the extent that they illuminate some underlying physical process.
By the same token, you can’t say that the coefficients are “contrived” unless you have a physical understanding that indicates what they SHOULD be.
The closeness of the R2 values is also essentially irrelevant. Minor changes to the parameters drove that value off the 0.99x range quite easily. We can reasonably interpret an R2 of 0.2 as “bad” and an R2 of 0.8 as “good”, but the statement that “an R2 of 0.99992 is better than 0.9990” is well beyond what the statistics can honestly support. On the contrary, given everything we know about the randomness of the input data (and about the known uncertainties of the measurement techniques), an R2 that high is almost certainly evidence that we have gone beyond the underlying data. There’s too little noise left in the solution. I say that because my physical model includes assumptions about the existence and magnitude of human error in data collection, transcription, etc., and holds that those errors should be random, not systematic.
What you need (and what neither of us has done) is a formal test of overfitting. Unfortunately, without an assumed physical model, I don’t know of any reliable way to structure that test. A common approach would be to run Student’s t-test on each parameter in isolation, comparing the results of the model at the fitted parameter value vs. the hypothesized null value. But your model does not make apparent what the null value ought to be. As noted above, we cannot blindly assume that it is zero.
An alternative approach would be to restructure the model so you can feed an element of noise into all your data and rerun the analysis a few thousand times; parameters which remain relevant across the noisy samples are probably more reliable (a sketch of this idea follows this comment). That’s not an approach that can easily be built in Excel, however.
Having said all that, you are certainly more familiar with your model than I ever will be. (The organization of your workbook is well above average, but reverse-engineering someone else’s Excel spreadsheet is almost always an exercise in futility.) Maybe you see a way to run a proper test against the parameters that I don’t.
One last thought: I ran my trivial test with the hypothesis that ClimSens should equal zero. There is no reason why that is necessarily the appropriate null, either. ClimSens really should be its own parameter, also included in the statistical tests of overfitting.
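A rough sketch of that noise-injection test outside Excel; the one-sine model, error level and data below are invented placeholders rather than anything from the workbook:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
t = np.arange(161.0)

# Invented stand-ins: a single-sine "truth" plus Gaussian measurement error.
true_amp, true_period, true_phase, noise_sd = 0.4, 65.0, 12.0, 0.02
data = true_amp * np.sin(2 * np.pi * (t - true_phase) / true_period) \
       + noise_sd * rng.standard_normal(t.size)

def model(t, amp, period, phase):
    return amp * np.sin(2 * np.pi * (t - phase) / period)

fits = []
for _ in range(1000):
    # Perturb the data at the assumed error level and refit.
    noisy = data + noise_sd * rng.standard_normal(t.size)
    try:
        popt, _ = curve_fit(model, t, noisy, p0=[0.3, 60.0, 0.0], maxfev=5000)
        fits.append(popt)
    except RuntimeError:
        pass  # a refit that fails to converge is itself a warning sign

fits = np.array(fits)
for name, col in zip(("amp", "period", "phase"), fits.T):
    print(f"{name}: mean {col.mean():+8.3f}, sd {col.std():.3f}")
# Parameters whose spread across refits is small relative to their size are
# probably supported by the data; parameters that wander are fitting noise.
```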

Vaughan Pratt
December 18, 2012 9:08 pm

Hi Mike. All your points are eminently sensible, particularly as regards the dangers of over-fitting (your main concern). That was also a concern of mine, and is why I locked down 5 degrees of freedom in SAW.
One way to structure a test of overfitting is to compare the dimensions of the data and model spaces. To my mind the best protection against overfitting is to keep the latter smaller than the former. When it can be made a lot smaller one can claim to have a genuine theory. When only a little smaller, or barely at all, it is not much better than a mere description from some point of view (namely that of a choice of basis). I would say my Figure 10 was more the latter, argued as follows.
For this data, 161 years of annualized HadCRUT3 anomalies is a point in the 160-dimensional space of all anomaly time series centered on the same interval, one dimension being lost to the definition of anomaly. My F3 filter projects this onto the 14-dimensional space of the first seven harmonics of a 161-year period. However it attenuates harmonics 6 and 7 down into the noise, making 10 dimensions (two per harmonic) a more reasonable estimate, perhaps 12 if you can crank up the R2 really high to bring up the signal-to-noise ratio.
In principle my model has 14 dimensions, of which I lock down 5, leaving 9. You locked down the 3 AGW dimensions, leaving the 11 SAW dimensions. However, two of these only partially benefited you because Solver wanted to drive Scale4 negative. I’m guessing you left the box checked that told Solver not to use negative coefficients, so it stopped moving Scale4 when it hit 0, which in turn made tShift4 ineffective. The Evolutionary solver might have been able to add 1250 (half a period) to tShift4 discontinuously to simulate Scale4 going negative.
9 dimensions for the model space is dangerously close to the 10 dimensions of the data space, so I’m within a dimension of being as guilty of overfitting as you. The real difference is not dimensional, however, but the choice of basis: sine waves in your case, a more mixed basis in mine, including 3 dimensions for the evident rise that I’ve modeled as log(1+exp(t)) per the known physics of radiative forcing and increasing CO2.
Rather than using a t-test I’d be inclined to move the data and model dimensions apart by dropping the 4th and 5th harmonics altogether (since they aren’t really carrying their weight in increasing R2) while halving the period of F3. The data space would then have 20-24 dimensions and the model space 6.
But I would then spend 5-6 dimensions in describing HALE as a smoothly varying sine wave. That would greatly reduce the contribution of HALE to the unexplained variance, at the price of reducing the 6:20 gap to 11-12:20-24. That’s still an 8-13 dimensional gap between the data and model spaces, which I would interpret as not being at serious risk of overfitting. 3 of the HALE dimensions describe the basic 20-year oscillation, which the remaining 2-3 modulate.
As Max Manacker insightfully remarked at Climate Etc. a day or so ago, it’s not the number of parameters that counts so much as whether they’re the right ones. Whether parameters are meaningful depends heavily on the choice of basis for the space. Do they have any physical relevance?
Physics aside, your suggestion of feeding noise in would be a straightforward way of quantifying the dimension gap empirically, one that also takes into account the attenuation by F3 of harmonics 6 and 7 as well as the role of R2, all in one test; that would be very nice. I have this on my list of things to look into; hopefully the other things won’t push it down too far.
I would have replied sooner except that I spent today following up on the suggestion to try other ClimSens values besides 0 and 2.83, made by both you and “Bill” on CE. Very interesting results, more on this later as this comment is already so deep into tl;dr territory that I ought to submit it for the 2013 literature Nobel.

Vaughan Pratt
December 18, 2012 9:43 pm

Hi fhaynie. I’m having trouble reading your first figure about dependence on latitude. When I click on it I get only an arctic plot. Is there some way I can blow it up to a readable size?
Your emphasis on carbon-13 and carbon-14 is commendable, but I must confess I have thought less about them than about the raw CDIAC emissions and land-use-change data since 1750. These can be converted to ppmv contributions using 5140 teratonnes as the mass of the atmosphere and 28.97 as its average molecular weight. One GtC of emitted CO2 therefore contributes 28.97/12/5.14 = 0.47 ppmv CO2 to the atmosphere.
CDIAC says that in 2010 the anthropogenic contribution including land-use changes was 10.6 GtC. For that year Mauna Loa recorded an increase of 2.13 ppmv. The former translates to a contribution of 10.6 * 0.47 = 4.98 ppmv. Hence 2.13/4.98 = 42.8% of our contribution was retained in the atmosphere in 2010, with the remaining 57.2% presumably taken up by the ocean, plants, soil, etc.
It would be very interesting to compare this with your analysis based on the molecular species of CO2. Do you have an estimate of the robustness of this sort of analysis?
I very much like the direction you’re pursuing; it could lead to useful insights. What feedback have you had from those knowledgeable about this sort of approach? And are there more detailed papers I can read on this?
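The conversion arithmetic above is easy to reproduce; a few lines using only the constants quoted in the comment:

```python
# Convert emitted carbon (GtC) to an atmospheric CO2 increment (ppmv),
# using only the constants quoted in the comment above.
M_AIR = 28.97         # g/mol, mean molecular weight of air
M_C = 12.0            # g/mol, atomic weight of carbon
ATM_MASS_GT = 5.14e6  # Gt, mass of the atmosphere (5140 teratonnes)

ppmv_per_gtc = M_AIR / M_C / (ATM_MASS_GT / 1e6)  # 28.97/12/5.14 ~ 0.47

emitted = 10.6        # GtC in 2010, incl. land-use change (CDIAC)
observed = 2.13       # ppmv rise at Mauna Loa, 2010

potential = emitted * ppmv_per_gtc                # ~4.98 ppmv
print(f"{ppmv_per_gtc:.3f} ppmv per GtC")
print(f"airborne fraction: {observed / potential:.1%}")  # ~42.8%
```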

fhaynie
December 19, 2012 7:37 am

Vaughan,
I am in the process of using similar techniques, doing mass balances that divide the earth’s surface into five regions. If you are interested, you can find my email address at http://www.kidswincom.net.
I will gladly share what I am doing, as well as my thoughts on the mistakes we can make using curve-fitting programs. I have had some long email conversations with two individuals who promote the global mass balance you cite. They have a strong vested interest in being right. Most of the favorable comments did not go into any technical detail.