Guest post by Mike Jonas
A few days ago, on Judith Curry’s excellent ClimateEtc blog, Vaughan Pratt wrote a post “Multidecadal climate to within a millikelvin” which provided the content and underlying spreadsheet calculations for a poster presentation at the AGU Fall Conference. I will refer to the work as “VPmK”.
VPmK was a stunningly unconvincing exercise in circular logic – a remarkably unscientific attempt to (presumably) provide support for the IPCC model[s] of climate – and should be retracted.
Background
The background to VPmK was outlined as “Global warming of some kind is clearly visible in HadCRUT3 [] for the three decades 1970-2000. However the three decades 1910-1940 show a similar rate of global warming. This can’t all be due to CO2 []“.
The aim of VPmK was to support the hypothesis that “multidecadal climate has only two significant components: the sawtooth, whatever its origins, and warming that can be accounted for 99.98% by the AHH law []“,
where
· the sawtooth lumps together “all the so-called multidecadal ocean oscillations into one phenomenon”, and
· the AHH law [Arrhenius-Hofmann-Hansen] is the logarithmic formula for CO2 radiative forcing, with an oceanic heat-sink delay.
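For reference, the AHH law as described boils down to the familiar Arrhenius logarithmic dependence applied to a delayed CO2 level. This is a sketch of the general form only – the exact spreadsheet formula is quoted in the comments below, and the symbols here are illustrative:

$$\mathrm{AGW}(t) \approx S \cdot \log_2\frac{C(t-\tau)}{C_0}$$

where $S$ is the sensitivity in kelvin per doubling of CO2, $C(t)$ is the CO2 concentration, $C_0$ is a reference concentration, and $\tau$ is the oceanic heat-sink delay.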
The end result of VPmK was shown in the following graph
Fig.1 – VPmK end result.
where
· MUL is multidecadal climate (i.e., global temperature),
· SAW is the sawtooth,
· AGW is the AHH law, and
· MRES is the residue MUL − SAW − AGW.
Millikelvins
As you can see, and as stated in VPmK’s title, the residue was just a few millikelvins over the whole of the period. The smoothness of the residue, but not its absolute value, was entirely due to three box filters being used to remove all of the “22-year and 11-year solar cycles and all faster phenomena“.
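For readers unfamiliar with box filters: a box filter is just a centered moving average, and a box whose width equals a cycle’s period nulls that cycle exactly (edge effects aside). A minimal sketch in Python – the widths and data here are illustrative, not Pratt’s exact cascade:

```python
import numpy as np

def box_filter(x, width):
    """Centered moving average ('box filter'). On annual data, a box of
    width w nulls any sinusoid of period exactly w years (edges aside)."""
    return np.convolve(x, np.ones(width) / width, mode="same")

years = np.arange(1850, 2011)
# Made-up series: an 11-year "solar cycle" plus noise.
signal = 0.05 * np.sin(2 * np.pi * years / 11) + 0.1 * np.random.randn(years.size)

# Three cascaded box filters; these widths are assumptions, not Pratt's.
smooth = box_filter(box_filter(box_filter(signal, 11), 11), 22)
print(smooth[20:-20].std())  # how much variability survives the filtering
```

Whatever the exact widths, the point stands: the filters, not the physics, guarantee the smoothness.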
If the aim of VPmK is to provide support for the IPCC model of climate, naturally it would remove all of those things that the IPCC model cannot handle. Regardless, the astonishing level of claimed accuracy shows that the result is almost certainly worthless – it is, after all, about climate.
The process
What VPmK does is to take AGW as a given from the IPCC model – complete with the so-called “positive feedbacks” which for the purpose of VPmK are assumed to bear a simple linear relationship to the underlying formula for CO2 itself.
VPmK then takes the difference (the “sawtooth”) between MUL and AGW, and fits four sinewaves to it (there is provision in the spreadsheet for five, but only four were needed). Thanks to the box filters, a good fit was obtained.
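For concreteness, here is a minimal sketch of that kind of fit – not Pratt’s spreadsheet; the stand-in “sawtooth” and the starting guesses are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def saw_model(t, *p):
    """Sum of sinewaves; p holds (amplitude, period, phase) triples."""
    out = np.zeros_like(t)
    for A, T, ph in zip(p[0::3], p[1::3], p[2::3]):
        out = out + A * np.sin(2 * np.pi * t / T + ph)
    return out

t = np.arange(1850.0, 2011.0)
# Stand-in for the filtered MUL - AGW residual (invented for illustration).
residual = 0.10 * np.sin(2 * np.pi * t / 60) + 0.05 * np.sin(2 * np.pi * t / 20)

p0 = [0.1, 55, 0, 0.05, 21, 0, 0.02, 30, 0, 0.01, 15, 0]  # 4 sinewaves = 12 free parameters
popt, _ = curve_fit(saw_model, t, residual, p0=p0, maxfev=20000)
print(np.abs(residual - saw_model(t, *popt)).max())  # worst misfit, in kelvin
```

With twelve free parameters, a heavily smoothed curve will almost always be matched to within millikelvins – which is exactly the point of the next paragraph.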
Given that four parameters can fit an elephant (great link!), absolutely nothing has been achieved and it would be entirely reasonable to dismiss VPmK as completely worthless at this point. But, to be fair, we’ll look at the sawtooth (“The sinewaves”, below) and see if it could have a genuine climate meaning.
Note that in VPmK there is no attempt to find a climate meaning. The sawtooth which began life as “so-called multidecadal ocean oscillations” later becomes “whatever its origins“.
The sinewaves
The two main “sawtooth” sinewaves, SAW2 and SAW3, are:
Fig.2 – VPmK principal sawtooths.
(The y-axis is temperature.) The other two sinewaves, SAW4 and SAW5, are much smaller, just “mopping up” what divergence remains.
It is surely completely impossible to support the notion that the “multidecadal ocean oscillations” are reasonably represented to within a few millikelvins by these perfect sinewaves (even after the filtering). This is what the PDO and AMO really look like:
Fig.3 – PDO.
(link) There is apparently no PDO data before 1950, but some information here.
Fig.4 – AMO.
(link)
Both the PDO and AMO trended upwards from the 1970s until well into the 1990s. Neither sawtooth is even close. The sum of the sawtooths (SAW in Fig.1) flattens out over this period when it should mostly rise quite strongly. This shows that the sawtooths have been carefully manipulated to “reserve” the 1970-2000 temperature increase for AGW.
Fig.5 – How the sawtooth “reserved” the 1980s and 90s warming for AGW.
Conclusion
VPmK aimed to show that “multidecadal climate has only two significant components”, AGW and something shaped like a sawtooth. But VPmK then simply assumed that AGW was a component, called the remainder the sawtooth, and had no clue as to what the sawtooth was but used some arbitrary sinewaves to represent it. VPmK then claimed to have shown that the climate was indeed made up of just these two components.
That is circular logic and appallingly unscientific. The poster presentation should be formally retracted.
[Blog commenter JCH claims that VPmK is described by AGU as “peer-reviewed”. If that is the case then retraction is important. VPmK should not be permitted to remain in any “peer-reviewed” literature.]
Footnotes:
1. Although VPmK was of so little value, I would nevertheless like to congratulate Vaughan Pratt for having the courage to provide all of the data and all of the calculations in a way that made it relatively easy to check them. If only this approach had been taken by other climate scientists from the start, virtually all of the heated and divisive climate debate could have been avoided.
2. I first approached Judith Curry, and asked her to give my analysis of Vaughan Pratt’s (“VP”) circular logic equal prominence to the original by accepting it as a ‘guest post’. She replied that it was sufficient for me to present it as a comment.
My feeling is that posts have much greater weight than comments, and that using only a comment would effectively let VP get away with a piece of absolute rubbish. Bear in mind that VPmK has been presented at the AGU Fall Conference, so it is already way ahead in public exposure anyway.
That is why this post now appears on WUWT instead of on ClimateEtc. (I have upgraded it a bit from the version sent to Judith Curry, but the essential argument is the same). There are many commenters on ClimateEtc who have been appalled by VPmK’s obvious errors. I do not claim that my effort here is in any way better than theirs, but my feeling is that someone has to get greater visibility for the errors and request retraction, and no-one else has yet done so.
Comments
Marler & Jonas, Vuk, et al. 8<)
The thought process becomes “circular” if you “complete the circle”, so to speak, and conclude that, since he found what he assumed, it must be true. My only claim is that, given what he did, the result can be, and should be, tested on future data. I have written the same regarding the modeling of Vukcevic and Scafetta. I would say the same regarding the curve-fitting of Liu et al cited by Gail Combs above. Elsewhere I have written the same of the modeling of Latif and Tsonis, and of the GCMs. I do not expect any extant model to survive the next 20 years’ worth of data collection, but I think that the data collected to date do not clearly rule out very much – though alarmist predictions made in 1988-1990 look less credible year by year.
OK, so try this:
We have a (one) actual real-world temperature record. It has a “lot” of noise in it, but it is the only one that has actual data inside its noise: high-frequency (short-range, or month-to-month) variation, a 60(??)-year variation, and (maybe) an 800-1000-1200-year variation. Behind the noise of recent changes – somewhere “under” the temperatures between 1945 and 2012 – there “might be” a CAGW HUM signal that “might be” related to CO2 levels; and there “might be” a non-CO2 signal related to the UHI effect, starting about 1860 for large cities and tailing off to a steady value, and starting between 1930 and 1970 for smaller cities and what are now rural areas. Both UHI “signals” would begin at near 0, ramp up as the population of the area increases from 5,000 to 25,000, then slow as the area saturates with new buildings and people beyond 25,000.
The satellite data must be assumed correct for the whole earth top to bottom.
The satellite data varies randomly month-to-month by 0.20 degrees. So it appears you must first put an “error band” of +/-0.20 degrees around your thermometer record BEFORE looking for any trends to analyze. Any data within +/-0.20 degrees of any running average is, with today’s instruments covering the entire earth, indistinguishable from noise.
Then, you try to eliminate the 1000-year longer-term cycle – if you see it at all.
Then, after all the “gray band” is eliminated, you can start eliminating the short-cycle term – or, looking at the problem differently, start looking for the potential short cycle.
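A minimal sketch of that screening step, on the simplest reading – flag as noise anything within +/-0.20 degrees of a running average (the series and window length below are made up for illustration):

```python
import numpy as np

def noise_band_mask(temps, window=12, band=0.20):
    """True where an anomaly sits within +/-band degrees of the centered
    running average, i.e. is indistinguishable from instrument noise."""
    running = np.convolve(temps, np.ones(window) / window, mode="same")
    return np.abs(temps - running) <= band

temps = 0.1 * np.random.randn(600)  # 50 years of made-up monthly anomalies
mask = noise_band_mask(temps)
print(f"{mask.mean():.0%} of points fall inside the gray band")
```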
So we take a mathematical process we know can fit any curve (I seem to recall Fourier first showed that a set of sinusoidal oscillations at various frequencies and amplitudes would fit any curve to any accuracy, given enough different sine waves; in other words, any curve can be expressed as a spectrum) and gasp when it takes only 4 curves to fit, to a few mK, data that (a) are known to oscillate, so will fit closely with relatively few curves, (b) have had the non-oscillatory component arbitrarily removed (see (a)), and (c) don’t actually vary very much, so a few millikelvin is proportionally not as precise a fit as it sounds. Of course we don’t mention that none of the data points can possibly be defined to a level at which expressing them in millikelvin has any meaning.
RACookPE1978 – What you say seems to generally make sense. One problem is that every time you enter a new factor (UHI, etc – and there are lots of them) into the equation, you also have to widen your error band.
What seems reasonable to me is that the ocean oscillations are visible in the last century or so of the surface temperature record, and obviously contributed significantly to 20th-century warming. UHI is clearly present too – see Watts 2012, though there is probably more of it in the USA than elsewhere. Other factors such as land clearing, sun + GCRs, volcanoes, etc, need to be added to the mix. The amount “left over” for CO2-driven AGW ends up being a small amount with large error bands.
The failure of the tropical troposphere to warm faster than the surface does seem to indicate that the IPCC estimate of AGW is way too high.
Mike Jonas: VP’s claimed results flowed from his initial assumptions. That’s what makes it circular.
When results follow from assumptions that’s logic or mathematics. It only becomes circular if you then use the result to justify the assumption. You probably recall that Newton showed that Kepler’s laws followed from Newton’s assumptions. Where the “circle” was broken was in the use of Newton’s laws to derive much more than was known at the time of their creation. In like fashion, Einstein’s special theory of relativity derived the already known Lorentz-Fitzgerald contractions; that was a really nice modeling result, but the real tests came later.
I tested the key part of his result (the sawtooth) against existing data (the PDO and AMO) and found that it did not represent the “multidecadal ocean oscillations” as claimed.
Pratt claimed that he did not know the exact mechanism generating the sawtooth. You showed that one possible mechanism does not fit.
I think we agree that Pratt’s modeling does not show the hypothesized CO2 effect to be true. At ClimateEtc I wrote that Pratt had “rescued” the hypothesis, not tested it. That’s all.
Most of what you have written is basically true but “overwrought”. There is no reason to issue a retraction.
That something can be explained by sine waves proves nothing. In VPmK the anthropogenic component could equally be replaced by another sine wave of long periodicity to represent the rising AGW component.
That said, I am of the opinion that much of the change in global temperatures can indeed be explained by AGW and the AMO (plus TSI (sunspots) and Aerosols (volcanoes)). See for example my discussion of the paper by Zhou and Tung and my own calculations at:
http://www.climatedata.info/Discussions/Discussions/opinions.php
The main conclusion of this work is that AOGCMs have overestimated the AGW component of the temperature increase by a factor of 2 (welcomed by sceptics but not by true believers in AGW) but there is still a significant AGW component (welcomed by true believers but not by sceptics).
Steven Mosher says:
December 13, 2012 at 9:04 am
4. This is basically the same approach that many here praise when scafetta does it.
****************************************
As usual, Mosher continues in his attempts to mislead people about the real merits of my research. Mosher’s contorted reasoning is also discussed here:
http://tallbloke.wordpress.com/2012/08/08/steven-mosher-selectively-applied-reasoning/
I do not expect my explanation to convince him, because he clearly does not want to understand.
However, for the general reader, this is how the case stands.
My research methodology does not have anything to do with the curve fitting exercise implemented in Pratt’s poster.
My logic follows the typical process used in science, which is as follows.
1) preliminary analysis of the patterns of the data without manipulation based on given hypotheses.
2) identification of specific patterns: I found specific frequency peaks in the temperature record.
3) search for possible natural harmonic generators that produce similar harmonics: I found them in major astronomical oscillations.
4) a check of whether the astronomical oscillations hindcast data outside the data interval studied in (1): I initially studied the temperature record since 1850, and tested the astronomical model against the temperature and solar reconstructions during the Holocene!
5) use of a high-resolution model to hindcast the signal in (1): I calibrate the model from 1850 to 1950 and check its performance in hindcasting the oscillations in the period 1950-2012, and vice versa (see the sketch after this list).
6) the tested harmonic component of the model is used as a first forecast of the future temperature.
7) wait for the future to see what happens: for example, follow the (at-the-moment very good) forecasting performance of my model here
http://people.duke.edu/~ns2002/#astronomical_model_1
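In outline, the split-sample test of step 5 can be sketched as follows – a generic fixed-period harmonic fit on toy data; the periods are placeholders, not those of the actual astronomical model:

```python
import numpy as np

def fit_harmonics(t, y, periods):
    """Least-squares fit of fixed-period harmonics (linear in the amplitudes)."""
    design = lambda tt: np.column_stack(
        [f(2 * np.pi * tt / P) for P in periods for f in (np.sin, np.cos)]
        + [np.ones_like(tt)])
    coef, *_ = np.linalg.lstsq(design(t), y, rcond=None)
    return lambda tt: design(tt) @ coef

t = np.arange(1850.0, 2013.0)
y = 0.1 * np.sin(2 * np.pi * t / 60) + 0.02 * np.random.randn(t.size)  # toy "temperature"

cal = t < 1950                                  # calibrate on 1850-1950 ...
model = fit_harmonics(t[cal], y[cal], periods=[60, 20])
err = y[~cal] - model(t[~cal])                  # ... then score the 1950-2012 hindcast
print(err.std())                                # out-of-sample error, in kelvin
```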
There is nothing wrong with the above logic. It is the way science is actually done, although Mosher does not know it.
My modelling methodology is equivalent to the way the ocean tidal empirical models (which are the only geophysical models that really work) have been developed.
Pratt’s approach is quite different from mine; it is the opposite.
He does not try to give any physical interpretation to the harmonics, but interprets the upward trending as surely due to anthropogenic forcing, despite the well-known large uncertainty in the climate sensitivity to radiative forcing. I did the opposite: I interpreted the harmonics first, and stated that the upward trending could have multiple causes, including the possibility of secular/millennial natural variability that the decadal/multidecadal oscillations could not predict.
Pratt did not test his model for hindcasting capability, and he cannot, because he does not have a physical interpretation for the harmonics. I did hindcast tests, because harmonics can be used for hindcast tests.
Pratt’s model fails to interpret the post-2000 temperature, as do all AGW models, which implies that his model is wrong.
My model correctly hindcasts the post-2000 temperature: see again
http://people.duke.edu/~ns2002/#astronomical_model_1
In conclusion, Mosher does not understand science, but I cannot do anything for him because he does not want to understand it.
However, many readers in WUWT may find my explanation useful.
Nicola Scafetta says:
December 13, 2012 at 9:32 pm
7) wait for the future to see what happens: for example, follow the (at-the-moment very good) forecasting performance of my model here
It fails around 2010 and you need a 0.1 degree AGW to make it fit. I would say that there doesn’t look to be any unique predictability in your model. A constant temperature over the past ~20 years fits even better.
Does anybody know the formula for the sawtooth (which multidecadal series and what factors)? There is certainly no resemblance to the AMO, by far the most dominant oscillation. The sawtooth presented looks well planned to me; it must have taken a lot of work to construct it to get the desired residuals…
lsvalgaard says:
December 13, 2012 at 9:47 pm
It fails around 2010 and you need a 0.1 degree AGW to make it fit.
******************************************
A model cannot fail to predict what it is not supposed to predict. The model is not supposed to predict the fast ENSO oscillations on the time scale of a few years, such as the El Niño peak in 2010. That model uses only the decadal and multidecadal oscillations.
Nicola Scafetta says:
December 13, 2012 at 10:07 pm
That model uses only the decadal and multidecadal oscillations.
Does that model predict serious cooling the next 20-50 years?
lsvalgaard says:
December 13, 2012 at 10:29 pm
Nicola Scafetta says:
December 13, 2012 at 10:07 pm
That model uses only the decadal and multidecadal oscillations.
…….
Does that model predict serious cooling the next 20-50 years?
Yes it does
http://www.vukcevic.talktalk.net/CET-NV.htm
see graph 2.
What has that to do with Scafetta? Not much directly, but put in simple terms:
The solar and the Earth’s internal variability – which we can only judge by observing the surface effects, or changes in the magnetic fields as an internal proxy – are linked in one of two ways:
– either the sun affects the Earth, since the effect the other way around appears to be negligible,
– or both are driven by a common factor, possibly planetary configurations.
Surface effects correlation:
http://www.vukcevic.talktalk.net/SSN-NAP.htm
Internal variability (as derived from magnetic fields as a proxy) correlation:
http://www.vukcevic.talktalk.net/TMC.htm
As the CET link shows, the best and the longest temperature record is not immune to such a sun-Earth link, and neither are the relatively reliable shorter recent records from the N. Hemisphere
http://www.vukcevic.talktalk.net/GSC1.htm
You and Matthew R Marler call it meaningless curve fitting, but as long as the curves are derived from data it is foolish to dismiss them as nonsense. I admit that I can’t explain the above in satisfactory terms; if you wish to totally reject it you have your reasons, and you are as welcome to say so now as you were in the past and will be in the future.
The model used here is fine for interpolation, i.e. to calculate the temperature at time T-t, where T is the present and t is positive. So it would be useful for replacing the historic temperature record by a formula. If we need to know temperatures at time T+t then this is an extrapolation, which is valid only if the components of the formula represent all the elements of physical reality that determine the evolution of the climate. But this is precisely what has not been shown!
There was a transit of Venus earlier this year and we are told that the next one will be in 2117.
We can be confident in this prediction because we know that the time evolution of the planets is given accurately by the laws of Newton/Einstein.
Climate science contains no equivalent body of knowledge.
Matthew R Marler – Oh the perils of re-editing a comment in a tearing hurry. You correctly point out that what I said “VP’s claimed results flowed from his initial assumptions. That’s what makes it circular” was wrong. The correct statement is “VP’s claimed results are his initial assumptions. That’s what makes it circular.“.
What we have is this: He assumed that climate consisted of IPCC-AGW and something else. His finding was that the climate consisted of IPCC-AGW and something else.
Now, if we had learned something of value about the ‘something else’, then there could have been merit in his argument. But we didn’t. The ‘something else’s only characteristic was that it could be represented by a combination of arbitrary sinewaves and three box filters. The ‘something else’ began its short life as “all the so-called multidecadal ocean oscillations“, but that didn’t last long because it clearly could not be even remotely matched to the actual ocean oscillations. The ‘something else’ ended its short life as a lame “whatever its origins“. The sum total of VP’s argument is precisely zero.
On the ‘something else’ you say of me that “You showed that one possible mechanism does not fit.“. Well, actually I tested the one and only mechanism postulated by VP. There wasn’t anything else that I could test. As I point out above, even VP walked away from that postulated mechanism.
I am bemused by your assertion that VP “had ‘rescued’ the hypothesis” and that “There is no reason to issue a retraction”. There was no rescue of anything, since no argument was presented other than the abovementioned assertions and meaningless curve-fitting. Since the poster has absolutely no merit, the retention of its finding is misleading. The only decent thing for VP to do now is to retract it.
vukcevic says:
December 14, 2012 at 1:26 am
it is foolish to dismiss as nonsense.
The nonsense part is to make up a data set from two unrelated ones.
richardscourtney says:
December 13, 2012 at 11:29 am
Stephen Rasey:
re your post at December 13, 2012 at 11:00 am.
Every now and then one comes across a pearl shining on the sand of WUWT comments. The pearls come in many forms.
Your post is a pearl. Its argument is clear, elegant and cogent. Thank you.
=========
Agreed.
Nicola Scafetta says:
December 13, 2012 at 9:32 pm
My modelling methodology is equivalent to the way the ocean tidal empirical models (which are the only geophysical models that really work) have been developed.
============
Correct. The tidal models are not calculated from first principles in the fashion in which the climate models try to calculate the climate. The first-principles approach has been tried and tried again, and found to be rubbish because of the chaotic behavior of nature.
lsvalgaard says:
December 14, 2012 at 5:21 am
vukcevic says:
December 14, 2012 at 1:26 am
it is foolish to dismiss as nonsense.
The nonsense part is to make up a data set from two unrelated ones.
=============
That the data sets are unrelated is an assumption. This can never be known with certainty unless one has infinite knowledge – something that is impossible for human beings.
The predictive power of the result is the test of the assumption. If there is a predictive power greater than chance, then it is unlikely they are unrelated. Rather, they would simply be related in a fashion as yet unknown.
ferd berple says:
December 14, 2012 at 7:14 am
That the data sets are unrelated is an assumption.
That they are related is the assumption. Their unrelatedness is derived from what we know about how physics works.
lsvalgaard says:
December 14, 2012 at 5:21 am
The nonsense part is to make up a data set from two unrelated ones.
All magnetometer recordings around the world do it, and have done since the time of Rudolf Wolf, as you yourself show here:
http://www.leif.org/research/Rudolf%20Wolf%20and%20the%20Sunspot%20Number.ppt#8
to today at Tromso
http://www.vukcevic.talktalk.net/Tromso.htm
Unrelated? Your own data show otherwise:
http://www.vukcevic.talktalk.net/TMC.htm
How does it compare with Pratt’s CO2 millikelvin?
http://www.vukcevic.talktalk.net/CO2-Arc.htm
The more you keep repeating “unrelated”, the more I think you are trying to suppress this from becoming more widely known.
I have always figured the models were fits to the data. When I took my global warming class at Stanford, the head of Lawrence Livermore’s climate modeling team argued at first that the models were based on physical formulas, but I argued that all the groups keep modifying their models, more and more, to match their hindcasting. Studies have shown – and he readily admitted – that none of the models predicts any better than the others; in fact, only the average of all the models did better than any individual model. Such results are what one expects from a bunch of fits. He acknowledged that they were indeed fits.
If what you are doing is fitting the models, then a Fourier analysis of the data would produce a model like VPmK, which would be a much better fit than the computer models. All that VPmK did was demonstrate that if you want to fit the data to any particular set of formulas, you can do it more easily and with much higher accuracy using a conventional mathematical technique than by trying to make a complex iterative computer algorithm with complex formulas match the data. No wonder they need teams of programmers and professors: they are trying to make such complex algorithms match the data. VPmK’s approach is simpler and far more accurate.
The problem with all fits, however, is that since they don’t model the actual processes involved, they are no better at predicting the next data point, and can’t be called “science” in the sense of experiments being done and physical facts being uncovered and modeled. Instead we have a numerical process of pure mathematics which has no relationship to the actual physics involved. VPmK’s “model” conveniently shows the cyclical responses fading and the exponential curve overtaking them. This gives credence to the underlying assumptions he is trying to promulgate, but it is no evidence that any physical process is actually occurring, so it is as likely as not to predict the next data point. The idea that the effects of the sun’s variations and the AMO/PDO/ENSO have faded runs counter to all intuitive and physical evidence. The existence of 16 years of no trend indicates that whatever effect CO2 is having is being completely masked by the very natural phenomena VPmK diminishes; if anything, the 16-year trend shows that the natural forcings are much stronger than before. Instead, VPmK attributes more of the heat currently in the system to CO2 and reduces the cooling that would be expected from the current natural ocean and sun phenomena. So just as VPmK’s model shows natural phenomena decreasing to zero effect, in the actual world we know that this is not the case – so, again, another model not corresponding to reality.
Steveta_UK “If you can, please present it here”:
Here is FOBS without any Multidecadal removed:
http://tinypic.com/r/x4km54/6
If you squint closely you might see that there are two lines, a red one and a blue one; the standard deviation of the residuals over the 100-year period, 1900 to 2000, is 0.79 mK.
For fun, here it is run forward to 2100:
http://tinypic.com/r/dwsc9/6
There is a lot to be said for thinking about sawteeth 🙂
Gail Combs says:
[IF] “I publish a paper showing that the rise and fall of women’s skirts plus a saw tooth pattern provides a good fit to the curve. Since no one can provide a better ‘fit’ than that the paper has to stand?”
LOL!
In the spreadsheet:
AGW = ClimSens * (Log2CO2y – MeanLog2CO2) + MeanDATA
Perhaps I’m a little dense but somebody might have to explain to me the physics behind that formula before I could take any of this seriously.
Also in the spreadsheet is a slider bar for climate sensitivity, given enough time to “play” one should be able to slide that to 1 C per 2x CO2 and adjust the sawtooth to arrive at the same results, thereby “proving” (not) climate sensitivity is 1 C / 2X CO2. Seems like a lot of time and effort for nothing to me.
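To make the slider point concrete, here is the quoted formula transcribed as code – ClimSens is the slider, and the CO2 series is a made-up stand-in:

```python
import numpy as np

def agw(co2_ppm, clim_sens, mean_data=0.0):
    """AGW = ClimSens * (Log2CO2y - MeanLog2CO2) + MeanDATA, per the spreadsheet."""
    log2co2 = np.log2(co2_ppm)
    return clim_sens * (log2co2 - log2co2.mean()) + mean_data

co2 = np.linspace(290.0, 390.0, 161)  # stand-in CO2 series, 1850-2010

# Changing the sensitivity only rescales the AGW curve; the sawtooth can
# then be re-fitted to absorb the difference, as the comment above notes.
for s in (1.0, 3.0):
    curve = agw(co2, s)
    print(s, round(curve[-1] - curve[0], 3))
```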
“The sawtooth which began life as “so-called multidecadal ocean oscillations” later becomes “whatever its origins“.”
Wonderfully scientific approach – we need a fudge factor to save the “AHH theory”, so we’ll simply take the deviation from reality, call it “a sawtooth, whatever its origin” and the theory is saved.
PARDON ME? Don’t we give the warmists billions of dollars? Can’t we expect at least a little bit of effort from them when they construct their con?
Stephen Rasey says:
December 13, 2012 at 11:00 am
“I want to draw people’s attention to the frequency content of VPmK SAW2 and SAW3 wave forms. ”
Very powerful! Dang, I didn’t think of that!
Here is a note to Dr. Svalgaard, not for his benefit (he knows it, possibly far better than I do) but for the benefit of other readers.
The relatedness of the solar and the Earth’s magnetic fields could be considered in three ways:
1. Influence of the solar field on the Earth’s field – well known, but short-lived, lasting from a few hours to a few days.
2. A common driving force (e.g. planetary) – considered possible, but an insignificant forcing factor.
3. Forces of the same kind integrated at the point of impact by a receptor. Examples of receptors could be: GCR, saline ions in the ocean currents, interactions between the ionosphere and equatorial storms (investigated by NASA), etc.
Simple example of relatedness through a receptor:
Daylight and torch light are unrelated at source and do not interact, but a photocell will happily integrate them; not only that, but there is an interesting process of amplification with which I am very familiar.
In the older type of image tubes from before the CCD era (Saticon, Ledicon and Plumbicon), the response to the projected image follows an exponential law at low light levels, and further up the curve it is steeper and more linear.
A totally ‘unrelated’ light from the back of the tube, known as ‘backlight’ or ‘bias light’, is projected onto the photosensitive front layer. The effect is a so-called ‘black current’ which lifts the image current from the low region up the response curve; the result is a more linear response and, surprisingly, higher output, since the curve is steeper further away from zero.
The two light sources are originally totally unrelated – they do not interact with each other in any way – but they are integrated by the receptor, and furthermore an ‘amplification’ of the output current from the stronger source is achieved by the presence of the weaker one.
I know that our host worked in the TV industry and may be familiar with the above.
So I suggest that Dr. Svalgaard abandon the ‘unrelated’ counterpoint and consider the science in the ‘new light’ of my finding:
http://www.vukcevic.talktalk.net/EarthNV.htm