Guest post by Mike Jonas
A few days ago, on Judith Curry’s excellent ClimateEtc blog, Vaughan Pratt wrote a post “Multidecadal climate to within a millikelvin” which provided the content and underlying spreadsheet calculations for a poster presentation at the AGU Fall Conference. I will refer to the work as “VPmK”.
VPmK was a stunningly unconvincing exercise in circular logic – a remarkably unscientific attempt to (presumably) provide support for the IPCC model[s] of climate – and should be retracted.
Background
The background to VPmK was outlined as “Global warming of some kind is clearly visible in HadCRUT3 [] for the three decades 1970-2000. However the three decades 1910-1940 show a similar rate of global warming. This can’t all be due to CO2 []“.
The aim of VPmK was to support the hypothesis that “multidecadal climate has only two significant components: the sawtooth, whatever its origins, and warming that can be accounted for 99.98% by the AHH law []“,
where
· the sawtooth is a collection of “all the so-called multidecadal ocean oscillations into one phenomenon“, and
· AHH law [Arrhenius-Hofmann-Hansen] is the logarithmic formula for CO2 radiative forcing with an oceanic heat sink delay.
The end result of VPmK was shown in the following graph
Fig.1 – VPmK end result.
where
· MUL is multidecadal climate (ie, global temperature),
· SAW is the sawtooth,
· AGW is the AHH law, and
· MRES is the residue MUL-SAW-AGW.
Millikelvins
As you can see, and as stated in VPmK’s title, the residue was just a few millikelvins over the whole of the period. The smoothness of the residue, but not its absolute value, was entirely due to three box filters being used to remove all of the “22-year and 11-year solar cycles and all faster phenomena“.
If the aim of VPmK is to provide support for the IPCC model of climate, naturally it would remove all of those things that the IPCC model cannot handle. Regardless, the astonishing level of claimed accuracy shows that the result is almost certainly worthless – it is, after all, about climate.
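The box filtering deserves a note: a box filter is just a centered moving average, and a moving average whose width equals a sinusoid's period removes that sinusoid exactly. A minimal sketch on synthetic data (my own illustrative series, not the VPmK spreadsheet):

```python
import numpy as np

def box_filter(x, width):
    """Centered moving average (box filter)."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="valid")

# Monthly synthetic series: a slow trend plus an 11-year cycle.
t = np.arange(0, 80 * 12) / 12.0                 # time in years
trend = 0.005 * t
cycle = 0.1 * np.sin(2 * np.pi * t / 11.0)
series = trend + cycle

# A box filter whose width equals the cycle's period (11 years =
# 132 months) nulls that sinusoid exactly; only the trend survives.
smoothed = box_filter(series, 132)
leftover = smoothed - box_filter(trend, 132)     # the surviving cycle
print(np.abs(leftover).max())                    # effectively zero
```

Cascading three such filters, as VPmK does, removes the 11-year and 22-year cycles and everything faster in one pass.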
The process
What VPmK does is to take AGW as a given from the IPCC model – complete with the so-called “positive feedbacks” which for the purpose of VPmK are assumed to bear a simple linear relationship to the underlying formula for CO2 itself.
VPmK then takes the difference (the “sawtooth”) between MUL and AGW, and fits four sinewaves to it (there is provision in the spreadsheet for five, but only four were needed). Thanks to the box filters, a good fit was obtained.
Given that four parameters can fit an elephant (great link!), absolutely nothing has been achieved and it would be entirely reasonable to dismiss VPmK as completely worthless at this point. But, to be fair, we’ll look at the sawtooth (“The sinewaves”, below) and see if it could have a genuine climate meaning.
Note that in VPmK there is no attempt to find a climate meaning. The sawtooth which began life as “so-called multidecadal ocean oscillations” later becomes “whatever its origins“.
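The elephant point can be made concrete. The sketch below (synthetic data of my own, not the VPmK spreadsheet) fits a constant plus four harmonics of the record length, nine free parameters, to a heavily smoothed random walk; a high R² falls out with no physics whatsoever:

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in "climate" series: a random walk, heavily smoothed
# (loosely analogous to the filtered MUL-AGW residual in the post).
walk = np.cumsum(rng.standard_normal(161))       # 161 "years", 1850-2010
series = np.convolve(walk, np.ones(21) / 21, mode="valid")
n = series.size
t = np.arange(n, dtype=float)

# Constant plus four harmonics of the record length: nine free
# parameters, linear in the sine/cosine amplitudes.
cols = [np.ones(n)]
for k in range(1, 5):
    cols.append(np.sin(2 * np.pi * k * t / n))
    cols.append(np.cos(2 * np.pi * k * t / n))
basis = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(basis, series, rcond=None)
fit = basis @ coef

r2 = 1 - np.sum((series - fit) ** 2) / np.sum((series - series.mean()) ** 2)
print(f"R^2 = {r2:.3f}")
```

The smoothing guarantees the target contains only low frequencies, so a handful of sinusoids is bound to fit it closely; that is why a good fit here proves nothing by itself.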
The sinewaves
The two main “sawtooth” sinewaves, SAW2 and SAW3, are:
Fig.2 – VPmK principal sawtooths.
(The y-axis is temperature). The other two sinewaves, SAW4 and SAW5 are much smaller, and just “mopping up” what divergence remains.
It is surely completely impossible to support the notion that the “multidecadal ocean oscillations” are reasonably represented to within a few millikelvins by these perfect sinewaves (even after the filtering). This is what the PDO and AMO really look like:
Fig.3 – PDO.
(link) There is apparently no PDO data before 1950, but some information here.
Fig.4 – AMO.
(link)
Both the PDO and AMO trended upwards from the 1970s until well into the 1990s. Neither sawtooth is even close. The sum of the sawtooths (SAW in Fig.1) flattens out over this period when it should mostly rise quite strongly. This shows that the sawtooths have been carefully manipulated to “reserve” the 1970-2000 temperature increase for AGW.
Fig.5 – How the sawtooth “reserved” the 1980s and 90s warming for AGW.
Conclusion
VPmK aimed to show that “multidecadal climate has only two significant components”, AGW and something shaped like a sawtooth. But VPmK then simply assumed that AGW was a component, called the remainder the sawtooth, and had no clue as to what the sawtooth was but used some arbitrary sinewaves to represent it. VPmK then claimed to have shown that the climate was indeed made up of just these two components.
That is circular logic and appallingly unscientific. The poster presentation should be formally retracted.
[Blog commenter JCH claims that VPmK is described by AGU as “peer-reviewed”. If that is the case then retraction is important. VPmK should not be permitted to remain in any “peer-reviewed” literature.]
Footnotes:
1. Although VPmK was of so little value, nevertheless I would like to congratulate Vaughan Pratt for having the courage to provide all of the data and all of the calculations in a way that made it relatively easy to check them. If only this approach had been taken by other climate scientists from the start, virtually all of the heated and divisive climate debate could have been avoided.
2. I first approached Judith Curry, and asked her to give my analysis of Vaughan Pratt’s (“VP”) circular logic equal prominence to the original by accepting it as a ‘guest post’. She replied that it was sufficient for me to present it as a comment.
My feeling is that posts have much greater weight than comments, and that using only a comment would effectively let VP get away with a piece of absolute rubbish. Bear in mind that VPmK has been presented at the AGU Fall Conference, so it is already way ahead in public exposure anyway.
That is why this post now appears on WUWT instead of on ClimateEtc. (I have upgraded it a bit from the version sent to Judith Curry, but the essential argument is the same). There are many commenters on ClimateEtc who have been appalled by VPmK’s obvious errors. I do not claim that my effort here is in any way better than theirs, but my feeling is that someone has to get greater visibility for the errors and request retraction, and no-one else has yet done so.
vukcevic: You and Matthew R Marler call it meaningless curve fitting,
I don’t think I said that your modeling was “meaningless”; I have said that the test of its “truth” will be how well it fits future data.
Re: models and harmonics
I was asked by one of the WUWT participants (we often correspond by email) whether it would be possible to extrapolate CET by a few decades. I had a go.
The first step was to separate the summer and the winter data (using the two months around the two solstices, to see the effect of direct TSI input); the result:
http://www.vukcevic.talktalk.net/MidSummer-MidWinter.htm
This graph was at a later stage presented on a couple of blogs, but Grant Foster (Tamino), Daniel Bailey (ScSci) and Jan Perlwitz (NASA) fell flat on their faces trying to elucidate why there is no apparent warming in 350 years of the summer CET, but gentle warming in the winters for the whole of the 3.5 centuries.
Meteorologists know it well: the Icelandic Low, a semi-permanent atmospheric pressure system in the North Atlantic. Its footprint is found in most climatic events of the N. Hemisphere. The strength of the Icelandic Low is the critical factor in determining the path of the polar jet stream over the North Atlantic.
In the winter the IL is located SW of Greenland (driver: the Subpolar Gyre), but in the summer the IL is found much further north (the most likely driver being the North Icelandic Jet, formed by complex physical interactions between warm and cold currents), which as the graphs show had no major ups or downs.
Next step: finding harmonic components separately for the summers and winters. I used one component common to both and one specific to each of the two seasons, all with periods below 90 years. Using the common and the two individual components, I synthesized the CET, adding the average of the two linear trends. The result is nothing special, but it did indicate that a much older finding of an ‘apparent correlation’ between the CET and N. Atlantic geological records now made more sense.
http://www.vukcevic.talktalk.net/CNA.htm
I digressed, what about the CET extrapolation?
http://www.vukcevic.talktalk.net/CET-NV.htm
Well, that suggests a return to what we had in the 1970s; speculative. Although the CET is 350 years long, I would advise caution: anything longer than 15-20 years is no more than ‘blind faith’.
Note: I am not a scientist and in no way a climate expert; the only models I have made are electronic ones, both designed and built as working prototypes.
Matthew R Marler since I have quoted you wrongly, I do apologise.
I mention retraction in my initial post, but I’ll now make it an explicit request:
Vaughan Pratt, please will you issue a formal retraction of your poster “Multidecadal climate to within a millikelvin”.
vukcevic says:
December 14, 2012 at 9:45 am
3. Forces of same kind integrated at point of impact by the receptor.
So I suggest to Dr. Svalgaard to abandon the ‘unrelated’ counterpoint and consider the science in the ‘new light’ of my finding
There is no integrated effect as the external currents have a short life time and decay rapidly. Your ‘findings’ are not science in any sense of that word. You might try to explain in one sentence how you make up the ‘data’ you correlate with. Other people have asked for that too, but you have resisted answering [your ‘paper’ on this is incomprehensible, so a brief, one sentence summary here might be useful].
Ha! Mr.Whack-a-mole must have gone to bed 😉
Time for a teeny weeny extrapolation, methinks.
The past 1,500 years temperature history, (base data in red)
http://tinypic.com/r/2rm7bd3/6
(The five free phase sine waves, as above)
First, let me also congratulate the author for having the courage to provide all of the data and calculations. Such transparency is in the best interests of science.
I also really liked the very first question raised in this thread – “Assume AGW is a flat line and repeat the analysis” and thought that should be a challenge to take up. What I did may be overly simplistic so please correct my attempt.
I downloaded the Excel spreadsheet and reset cell V26 (ClimSens) to a value of zero. As expected, the red AGW line on the graph dropped to flat. I then set up some links to the green parameters so they could be dealt with as a single range (a requirement for the Excel Solver Add-in). I played with a few initial parameters to see what they might do, then fired off Solver with the instruction to modify the parameters below with a goal of maximizing cell U35 (MUL R2). No other constraints were applied.
Converging to the parameters below, Solver returned a MUL R2 of 99.992%, very slightly higher than the downloaded result. The gray MRES line in the chart shows very flat. (I think it needs one more constant to bring the two flat lines together but couldn’t find that on the spreadsheet.) Have I successfully fit the elephant? Does this result answer Steveta_uk’s challenge above (Dec 13 at 8:55 am)? Or have I missed something here?
Cell name value
D26 ToothW 2156.84…
G23 Shift 1 1928.48…
G26 Scale 1 1489.03…
H23 Shift 2 3686.05…
H26 Scale 2 1386.24…
I23 Shift 3 4356.71…
I26 Scale 3 2238.07…
J23 Shift 4 3468.56…
J26 Scale 4 0
K23 Shift 5 2982.83…
K26 Scale 5 781.58…
M26 Amp 2235.58…
Update: Might have found that constant. Setting cell D32 to a value of -0.1325 roughly centers the MRES line around zero and makes the gray detail chart visible.
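For anyone without Excel, Mike Rossander's Solver experiment can be imitated in code: clamp AGW to zero and let an optimizer choose the sine-wave periods, with the amplitudes and phases then solved by linear least squares. This is a rough sketch on synthetic data of my own; the actual spreadsheet cells are not reproduced, and the grid search is a crude stand-in for Solver:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n = 161
t = np.arange(n, dtype=float)
# Synthetic smooth target standing in for the filtered temperature series.
target = np.convolve(np.cumsum(rng.standard_normal(n)),
                     np.ones(21) / 21, mode="same")

def sse_for_periods(periods):
    """Given trial periods, solve amplitudes/phases by linear least
    squares and return the sum of squared residuals."""
    cols = [np.ones(n)]
    for p in periods:
        cols.append(np.sin(2 * np.pi * t / p))
        cols.append(np.cos(2 * np.pi * t / p))
    basis = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(basis, target, rcond=None)
    resid = target - basis @ coef
    return resid @ resid

# Crude stand-in for Solver: grid-search two free periods.
grid = np.arange(20.0, 400.0, 10.0)
best = min(combinations(grid, 2), key=sse_for_periods)
r2 = 1 - sse_for_periods(best) / np.sum((target - target.mean()) ** 2)
print(best, f"R^2 = {r2:.3f}")
```

With the periods free as well, an excellent fit appears even with no AGW term at all, which is exactly the point of the "flat AGW" challenge.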
Chas says:
December 14, 2012 at 12:45 pm
“Ha! Mr.Whack-a-mole must have gone to bed 😉
Time for a teeny weeny extrapolation, methinks.
The past 1,500 years temperature history, (base data in red)”
Exactly like in the history books! /sarc
Thanks, Chas. Beautiful.
Mike, I guess that because you are maximising the R2 you are ending up with two parallel but offset fits (by about 32mK). If you minimised the sum of the residuals you would kill two birds with one stone. This would have to be the sum of the absolute values of the residuals, or the sum of the squared residuals, to stop the negative residuals cancelling out the positive ones. I get the standard deviation of ALL your residuals to be about 2mK; VP selected his SD from the best 100-year period to get the ‘less than a millikelvin’ bit, I think.
-I notice that the residuals seem to have a clear sine wave in them with an amplitude of about 5mK, whilst at the same time you have SAW4 with an amplitude of zero. I wonder if Solver hasn’t converged?
This all is on the basis that I have entered your solutions correctly!
In some ways your fit ought to make more sense to VP than his fit; you have the first sine wave and he is left wondering why his first wave doesn’t exist.
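Chas's point about R2 and offsets is easy to demonstrate: Excel's RSQ is a squared correlation, which is blind to a constant offset, whereas a sum of squared residuals is not. A small sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 50)
y = 2 * x + rng.standard_normal(50)   # noisy data

fit = 2 * x                           # a fit with the right shape...
shifted = fit + 5.0                   # ...and the same fit offset by a constant

def rsq(a, b):
    """Squared Pearson correlation, as Excel's RSQ computes it."""
    return np.corrcoef(a, b)[0, 1] ** 2

def sse(a, b):
    """Sum of squared residuals."""
    return np.sum((a - b) ** 2)

print(np.isclose(rsq(y, fit), rsq(y, shifted)))  # True: R^2 ignores the offset
print(sse(y, fit) < sse(y, shifted))             # True: squared residuals do not
```

This is why maximising R2 can leave two parallel fits a constant apart, and why minimising the squared residuals removes that freedom.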
Leif Svalgaard says:
December 14, 2012 at 12:22 pm
how do you make up the ‘data’
…….
‘Phil Jones from CRU’ syndrome, unable to read the Excel file?
For how to calculate the spectrum of the changes in the Earth’s magnetic field, see pages 13 & 14; you can repeat the calculations.
Since you are so infuriated by ‘unrelated’ magnetic fields and my ‘art of making-up the data’, you should closely examine Fig. 26, in case you missed it. That should make it even more interesting.
See you.
Mike Rossander – Thanks for doing that. Please can you put the result graph online.
—–
Regarding my request to Vaughan Pratt to retract. I made the same request on ClimateEtc, to which he replied:
“The “emperor has no clothes” gambit. Oh, well played, Mike. Mate in one, how could I have overlooked that? 😉
Mike, I would agree with your “simple summary” with two small modifications: replace the first F by F(v) and the second by F(V). My clothes will remain invisible to the denizens of Bill Gates’ case-insensitive world, but Linux hackers will continue to admire my magnificent clothing.
Here F is a function of a 9-tuple v of variables, or free parameters v_1 through v_9, while V is a 9-tuple of reals, or values V_1 through V_9 for those parameters (a valuation in the terminology of logic).
F(v) is a smooth 9-dimensional space whose points are curves expressible as analytic functions of y (smooth because F is an analytic function of the variables and therefore changes smoothly when the variables do). F(V) is one of those curves.
To summarize:
1. I assumed F(v).
2. I found F(V) (just as you said, modulo case)
3. at the surface of F(v) very near F3(HadCRUT3).
That’s all I did. As you say, very simple.
If needed we can always make the simple difficult as follows. With the additional requirement that “near” is defined by the Euclidean or L2 metric (as opposed to say the L1 or taxicab metric), “near” means “least squares.” The least squares approach to estimation is perhaps the most basic of the wide range of techniques treated by estimation theory, on which there is a vast literature.
Least-squares fitting has the downside of exaggerating outliers and the advantage of Euclidean geometry, whose metric is the appropriate one for pre-urban or nontaxicab geometry. Euclidean geometry works just as nicely in higher dimensions as it does in three, thereby leveraging the spatial intuitions of those who think visually rather than verbally.
We picture F(v) as a 9-dimensional manifold (i.e. locally Euclidean) embedded in the higher-dimensional manifold of all possible time series for 1850-2010 inclusive. Without F3 the latter would be a 161-dimensional space. F3 cuts this very high-dimensional space down to a mere 7*2 = 14 dimensions, on the premise that 161/7 = 23 years is the shortest period still barely visible above the prevailing noise despite losing 20 dB. F3(HadCRUT3), H for short, is a point in this 14-dimensional space. The geometrical intuition here is that F3(HadCRUT3) is way closer to F(V) than HadCRUT3, not because F3 moved it anywhere but merely because the dimension decreased. Given two points near and far from a castle wall, the nearest point on the wall to either can be estimated much more accurately for the point near the wall than for the one far away. Whence F3. Isn’t geometry wonderful?
(The factor of two in 7*2 comes from the fact that the space of all sine waves of a given period 161/n years, for some n from 1 to 7, is two-dimensional, having as its two unit vectors sin and cos for that frequency (as first noticed by Fourier?). Letting our imagination run riot, the same factor of 2 is arrived at via De Moivre’s theorem exp(ix) = cos(x) + i sin(x) but that might be too complex for this blog—when I wrote to Martin Gardner in the mid-1970s to complain that his Scientific American column neglected complex numbers he wrote back to say they were a tad too complex for his Scientific American readers.)
We’d like F(V) to be the nearest point of F(v) to H in F(v), i.e. the global minimum, though this may require a big search and we often settle for a local minimum, namely a point F(V) in F(v) that is nearest H among points in the neighborhood of F(V).
In either case MRES is the vector from F(V) to H, that is, H – F(V). Since manifolds are smooth, MRES is normal to (the surface of) F(v) at F(V). Hence very small adjustments to V will not change the length of MRES appreciably, as one would hope with a local minimum.
Hmm, better stop before I spout even more abstract nonsense. 😉“.
As a very capable and experienced physicist once said to me: Nonsense dressed up in complicated technical language is still nonsense.
The “simple summary” to which he referred is on WUWT here:
http://wattsupwiththat.com/2012/12/13/circular-logic-not-worth-a-millikelvin/#comment-1172908
Vaughan Pratt then added a further shot, it seems it was aimed as much at WUWT as at me:
“Mike, you can find my “formal retraction” here (right under the post you just linked to). I wrote “The ‘emperor has no clothes’ gambit. Oh, well played, Mike. Mate in one, how could I have overlooked that? ;)”
Feel free to announce on WUWT that I “formally retracted” with those words. In the spirit of full disclosure do please include the point that only Windows fans would consider that a retraction, not Linux hackers. WUWT readers won’t have a clue what that means and will simply assume I retracted, whereas those at RealClimate will have no difficulty understanding my meaning. Climate Etc. may be more evenly split.“.
vukcevic says:
December 14, 2012 at 2:52 pm
my ‘art of making-up the data’ you should closely examine Fig. 26, in case you did miss it.
You are ducking the question again.
lsvalgaard says:
December 14, 2012 at 10:11 pm
………………..
Let’s summarise:
The subject of my article is a calculation which shows that the natural temperature variability in the N. Hemisphere is closely correlated with the geomagnetic variability; no particular mechanism is considered.
1. You objected: the data was artificially ‘made up’
– this was rebutted by showing that the ‘new data’ is simple arithmetic sum of two magnetic fields.
2. You said: this is not valid since the fields do not interact.
– this was rebutted by showing that interaction is a property of the receptor, e.g. magnetometers react to both combined fields. A secondary interaction is also recognized via the induction of electric currents.
3. You said: the currents are of short duration, from a few hours up to a few days, therefore the effect is insignificant.
– this happens on a regular basis and may be sufficient to alter the average temperature of about 290K by + or – 0.4K.
4. You are returning to the starting point: ‘made up’ data (see item 1)
– It is not my intention to go forever in circles.
You have made more than 20 posts, here and elsewhere, regarding my finding, with very little or no success in invalidating it.
My intention is to get a more scientific appraisal; as a next step I emailed Dr. J. Haig (from my old university), whose interests include the solar contribution to climate change.
She is a firm supporter of the AGW theory; you can contact her and join forces, if you wish to do so. The content of the email is posted here:
http://wattsupwiththat.com/2012/12/14/another-ipcc-ar5-reviewer-speaks-out-no-trend-in-global-water-vapor/#comment-1173874
vukcevic says:
December 15, 2012 at 4:01 am
1. You objected: the data was artificially ‘made up’
– this was rebutted by showing that the ‘new data’ is simple arithmetic sum of two magnetic fields.
Repeating an error is not a rebuttal, and a simple inspection of your graph shows that your made up data is not the sum of two ‘magnetic fields’. So, again, how exactly is the data made up?
This is by way of explanation of Dr. Svalgaard’s statement ‘you made up data’. Taken literally it would mean I am a pseudo-scientist, some would think even possibly a fraudster, but I am certain that Dr. S didn’t imply that.
Here we go: since the AMO is a trend-less time function (oscillating around zero), it is assumed that the signed SSN, normalized to the AMO values, is an adequate representative of the heliospheric magnetic field at the Earth’s orbit. It would be equally possible to use the McCracken, Lockwood or Svalgaard & Cliver data, but these either lack sufficient resolution or mutually disagree, so the SSN, as the most familiar and internationally accepted data set, is considered the best for the purpose.
The Earth’s magnetic field has a number of strong spectral components. One of them has exactly the same period as the Hale cycle (as calculated from the SSN). I could have used it as a non-damped oscillator (a clean Cos wave), but the match to the AMO is not as good as with the signed SSN. This points to the SSN as the more likely factor, unless of course the Earth harmonic has the same annual ‘modulation’ in the manner of the SSN, which would be an extraordinary finding. Such a possibility is considered on page 14, Fig. 25, curve dF(t). For the purpose of comparison to the AMO, a second component is then taken from the Earth spectrum and employed as a clean Cos wave. I suspect that this component is due to a feedback ‘bounce’ caused by propagation delay in the Earth’s interior (see the link to the Hide and Dickey paper), but this is speculative. It is a huge puzzle why the Earth’s magnetic field oscillation should have as its main period one exactly the same as the SSN-derived Hale cycle, but of much stronger intensity than the heliospheric field. I do not think so, but many solar scientists (including yourself) postulate that the solar dynamo has an amplification property.
If something of the kind existed within the Earth’s dynamo, it would explain the strong Earth component as well as the Antarctic field http://www.vukcevic.talktalk.net/TMC.htm. How could this occur? I speculate that, since the depth of the Earth’s crust is 20-40 km and the geomagnetic-storm-induced currents reach down to 100 km (Svalgaard), it is possible that a magnetized bubble of liquid metal is formed and then amplified by the field of the Earth’s dynamo, in the manner of the solar dynamo amplification. Although this is highly speculative, and despite promoting solar dynamo amplification you will reject geo-dynamo amplification, it would explain a lot.
The mathematics of periodic-oscillation ‘amplification’ is dead simple: Cos A + Cos B = 2 Cos((A+B)/2) x Cos((A–B)/2), and vice versa; the result is one short and one long period of oscillation, giving rise to the AMO’s two characteristic periods of 9 and 64 years (see pages 5 & 6). Where this process occurs is not known (it could be in the magnetic field itself, or in the oceans as the receptor of the two oscillations).
Now to the Excel file: the word ‘sum’ is used in its more general meaning, to describe any of the four arithmetic operations as used in the Excel file, but here is a list:
Column 1: Year
Column 2: SSN
Column 3: (+ & – 1 to sign the SSN)
Column 4: Hale cycle – SSN with sign (times)
Column 5: SSN normalized to the AMO (divide)
Column 6: – Earth field oscillator (Cos, times, minus, divide)
Column 7: Geosolar oscillator (times)
Column 8: Geosolar oscillator moved forward by 15 years
Column 9: AMO
Column 10: AMO 3yma (plus & divide)
So what is all this about: http://www.vukcevic.talktalk.net/GSC1.htm The annoying fact is that you know all of the above; why you want it all spelt out god only knows. I am not answering any more questions; have a go at your Stanford colleague and his milliKelvins. Instead, I shall refer you to the appropriate page in my article, the Excel file you have, and this post.
Thank you and good bye.
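The "dead simple" beat mathematics in the comment above can be checked numerically. The two periods below are my own illustrative choices, back-computed from the quoted 9- and 64-year figures, not vukcevic's values:

```python
import numpy as np

t = np.linspace(0.0, 200.0, 4001)   # years
p1, p2 = 7.89, 10.47                # two nearby periods (illustrative)
A = 2 * np.pi * t / p1
B = 2 * np.pi * t / p2

# The identity holds pointwise:
lhs = np.cos(A) + np.cos(B)
rhs = 2 * np.cos((A + B) / 2) * np.cos((A - B) / 2)
print(np.allclose(lhs, rhs))        # True

# The fast factor has period 2/(1/p1 + 1/p2); the slow envelope has
# period 2/(1/p1 - 1/p2).
carrier = 2 / (1 / p1 + 1 / p2)
envelope = 2 / (1 / p1 - 1 / p2)
print(round(carrier, 1), round(envelope, 1))   # roughly 9 and 64 years
```

So two cosines with periods near 8 and 10.5 years do sum to a ~9-year oscillation under a ~64-year envelope; whether anything physical corresponds to those components is a separate question entirely.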
I suggest that Vaughan Pratt and some commentators here read up a bit on Fourier theory. Any (yes, ANY) periodic or aperiodic continuous function can be decomposed into sine waves to any precision wanted. So it follows that you can also subtract any arbitrary quantity (for example an IPCC estimate of global warming) from that continuous function, and it can still be decomposed into sine waves just as well as before, though they would be slightly different sine waves.
However note that there is absolutely no requirement that those sine waves have any physical reason or explanation.
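The Fourier point is easily verified: any finite series reconstructs exactly from its sine/cosine components, and so does the same series with an arbitrary curve subtracted. A minimal sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 161
series = np.cumsum(rng.standard_normal(n))   # any series at all
trend = 0.01 * np.arange(n) ** 1.5           # an arbitrary "AGW-like" curve

# The original series and the series-minus-trend both reconstruct
# exactly from their sine/cosine (DFT) components.
for data in (series, series - trend):
    spectrum = np.fft.rfft(data)
    rebuilt = np.fft.irfft(spectrum, n)
    print(np.allclose(rebuilt, data))        # True both times
```

The decomposition succeeds no matter what was subtracted first, which is exactly why a successful sine-wave fit carries no physical meaning by itself.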
vukcevic says:
December 15, 2012 at 10:06 am
why do you want it all spelt out god only knows.
What you describe is a perfect example of fake data, selected and made up to fit best, based on invalid physics. Calling that ‘data’ is deceptive in the extreme; the one first in line to be deceived is yourself. What happened to your grandiose plan of sending your stuff [before AGU] to the geophysics departments at all major universities?
vukcevic says:
December 15, 2012 at 10:06 am
why do you want it all spelt out god only knows
What you describe is a perfect example of fake, selected, tortured, and made-up stuff twisted to fit an idea. Calling the result ‘data’ is deceptive in the extreme; the one most deceived being yourself, belying your claim that “the ‘new data’ is simple arithmetic sum of two magnetic fields”.
BTW, what happened to your grandiose plan of carpet-bombing [before AGU] geophysics departments at all major universities to drum up support for your ideas?
Mike Jonas: Regarding my request to Vaughan Pratt to retract. I made the same request on ClimateEtc, to which he replied:
I thought I would mention again that Einstein’s 1905 paper on special relativity showed that, by assuming the speed of light to be constant, he could derive the already well-known Lorentz-Fitzgerald contraction: an exercise you would regard as circular, because the Lorentz-Fitzgerald contraction was already known, and because the mechanism by which the speed of light can be independent of the relative motions of source and receiver (whereas the frequency and wavelength are not so independent) is a mystery.
I don’t mean to embarrass Dr Pratt by elevating him into the Pantheon with Einstein, but the logic of the two theoretical derivations is the same in the two cases: the result is known, and the procedure produces it. The suspicion surrounding Einstein’s result was such that the Swedish Academy awarded him the Nobel Prize for a different 1905 paper, and the general theory did not begin to gain widespread acceptance until the Eddington expedition in 1919, and that was the subject of acrimonious debate.
There is no more reason for Pratt to withdraw this paper than there would have been for Einstein to withdraw his first paper on relativity.
Dr. Svalgaard said:
What you describe is a perfect example of fake data, selected and made up to fit best, based on invalid physics. Calling that ‘data’ is deceptive in the extreme; the one first in line to be deceived is yourself.
Calling the result ‘data’ is deceptive in the extreme; the one most deceived being yourself, belying your claim that “the ‘new data’ is simple arithmetic sum of two magnetic fields”.
No need to be so furious, sun matters, you know.
Hey, not just the ‘ordinary garden nonsense’ this time, something far more valuable.
‘would need to distill the argument into relatively simple points, show a few key figs’, as another university professor said, and then I’ll dispatch a few emails.
Have a happy Xmas and N. Year.
p.s. Apparently a new tale by Hans Christian Andersen has been discovered; try to get a first-print copy for your grandchildren.
http://www.philstar.com/lifestyle-features/2012/12/14/885985/new-found-tale-could-be-hans-christian-andersens
vukcevic says:
December 15, 2012 at 12:05 pm
No need to be so furious, sun matters, you know.
No need to be so evasive. Honesty matters, you know.
Hey, not just the ‘ordinary garden nonsense’ this time, something far more valuable
D-K effect again. There is nothing valuable at all in your stuff.
lsvalgaard says:
>>
Nicola Scafetta says: “7) wait the future to see what happens: for example follow the (at-the moment-very-good) forecasting performance of my model here”
It fails around 2010 and you need a 0.1 degree AGW to make it fit. I would say that there doesn’t look to be any unique predictability in your model. A constant temperature the past ~20 years fits even better.
>>
I recently read an article by a professor at Stanford, one of the top universities in the U.S., that claimed a 3 deg. / 2×CO2 model was accurate to within one thousandth of a degree. But I’m a bit concerned because he doesn’t know how to do a running mean.
Do you think it matters ?
lsvalgaard says: A constant temperature the past ~20 years fits even better.
The same could be said of 3K per doubling, it sure as hell isn’t within a 1/1000 degree whatever way you spin it.
Matthew R Marler says “I thought I would mention again that Einstein’s 1905 paper on special relativity showed that: by assuming the speed of light to be constant he could derive the already well-known Lorentz-Fitzgerald contraction, an exercise you would regard as circular because the Lorentz-Fitzgerald contraction was already known[…]“.
That doesn’t look at all logical to me. Circular logic is where your finding is what you assumed in the first place. The fact that you can derive A from B when A is already known doesn’t make the logic circular. Circular is deriving A from A.