By Dr. Nicola Scafetta
It is time to update my widget comparing the global surface temperature, HadCRUT3 (red and blue), the IPCC 2007 projection (green) and my empirical model (black thick curve and cyan area). The model is based on a set of detected natural harmonics (periods of approximately 9.1, 10-11, 20 and 60 years) linked to astronomical cycles, plus a corrected anthropogenic warming projection of about 0.9 °C/century. The yellow curve represents the harmonic model alone, without the corrected anthropogenic warming projection, and represents an average lower limit.
The proposed astronomically based empirical model offers an alternative methodology for reconstructing and forecasting climate changes (on a global scale, at the moment) to the analytical methodology implemented in the IPCC general circulation models. In my paper I show that all IPCC models fail to reconstruct the decadal and multidecadal cycles observed in the temperature since 1850. See details in my publications below.
As the figure shows, the temperature for Jan/2012 was 0.218 °C, a cooling with respect to the Dec/2011 value and about 0.5 °C below the average IPCC projection (the central thin curve in the middle of the green area). Note that this is a very significant discrepancy between the data and the IPCC projection.
On the contrary, the data continue to be in reasonable agreement with my empirical model, which, I remind the reader, has been run as a full forecast since Jan/2000.
In fact, the amplitudes and the phases of the four cycles are essentially determined from the data from 1850 to 2000, and the phases are found to be in agreement with appropriate astronomical orbital dates and cycles, while the corrected anthropogenic warming projection is estimated by comparing the harmonic model, the temperature data and the IPCC models during the period 1970-2000. The latter finding implies that the IPCC general circulation models have overestimated the anthropogenic warming component by about 2.6 times on average, within a range of 2 to 4. See the original papers and the dedicated blog article for details below.
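For readers who wish to experiment, the general structure of such a harmonic-plus-trend model can be sketched in a few lines of Python. The periods follow the text; the amplitudes and phases below are illustrative placeholders, not the fitted values from the papers:

```python
import numpy as np

# Sketch of a harmonic climate model of the kind described in the text:
# a sum of cosine cycles plus a linear anthropogenic trend.
# Periods follow the text (~9.1, 10-11, 20, 60 years); the amplitudes and
# phase epochs below are placeholders, NOT the fitted values from the paper.
PERIODS = [9.1, 10.4, 20.0, 60.0]          # years
AMPLITUDES = [0.04, 0.03, 0.04, 0.11]      # deg C (illustrative)
PHASES = [2000.0, 2002.0, 2001.0, 2001.5]  # year of cycle maximum (illustrative)
TREND = 0.9 / 100.0                        # deg C per year (~0.9 C/century)

def harmonic_model(t, with_trend=True):
    """Temperature anomaly at time t (decimal year), relative to 2000."""
    t = np.asarray(t, dtype=float)
    y = sum(A * np.cos(2 * np.pi * (t - t0) / P)
            for A, P, t0 in zip(AMPLITUDES, PERIODS, PHASES))
    if with_trend:
        y = y + TREND * (t - 2000.0)
    return y

years = np.arange(2000, 2021, 0.25)  # quarterly steps, 2000-2020
anomaly = harmonic_model(years)
print(anomaly.shape)
```

Switching `with_trend` off gives the harmonic-only curve, i.e. the analogue of the yellow lower-limit curve in the widget.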
The widget also attracted some criticisms from readers of WUWT and from skepticalscience.
Anthony asked me to respond to the criticism, and I am happy to do so. I will respond to five points.
- Criticism from Leif Svalgaard.
As many readers of this blog have noted, Leif Svalgaard continuously criticizes my research and studies. In his opinion nothing that I do is right or worthy of consideration.
About my widget, Leif claimed many times that the data already clearly contradict my model: see here 1, 2, 3, etc.
In any case, as I have already responded many times, Leif’s criticism appears to be based on his confusing the time scales and the multiple patterns that the data show. The data show a decadal harmonic trend plus faster fluctuations due to El Niño/La Niña oscillations that have a time scale of a few years. The ENSO-induced oscillations are quite large and evident in the data, with periods of strong warming followed by periods of strong cooling. For example, in the above widget figure the January/2012 temperature is outside my cyan area. This does not mean, as Leif misinterprets, that my model has failed; such a pattern is just due to the present La Niña cooling event. In a few months the temperature will warm again as the El Niño warming phase returns.
My model is not supposed to reconstruct such fast ENSO-induced oscillations, but only the smooth decadal component obtained with a 4-year moving average, as shown in the figure of my original paper: see here for the full reconstruction since 1850, where my models (blue and black lines) reconstruct the 4-year smooth (grey line) well; the figure also clearly highlights the fast and large ENSO temperature oscillations (red) that my model is not supposed to reconstruct.
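For reference, the 4-year smooth mentioned here is, in essence, a centered moving average. A minimal sketch on toy data, assuming a generic 48-month window (not necessarily the exact filter used in the paper):

```python
import numpy as np

def moving_average(x, window):
    """Centered moving average; a simple stand-in for the 4-year smooth
    discussed in the text (window = 48 for monthly data)."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

# Toy monthly series: a slow 60-year cycle plus fast ENSO-like wiggles.
rng = np.random.default_rng(0)
months = np.arange(0, 12 * 30)            # 30 years of monthly data
t = 1980 + months / 12.0
slow = 0.1 * np.cos(2 * np.pi * (t - 2000) / 60.0)
fast = 0.15 * np.sin(2 * np.pi * t / 3.6) + 0.05 * rng.standard_normal(t.size)
smooth = moving_average(slow + fast, 48)  # 4-year (48-month) smooth
```

The 48-month window strongly attenuates the few-year ENSO-like component while preserving the slow decadal modulation, which is the point being made in the text.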
As the widget shows, my model predicts a slight warming trend from 2011 to 2016. This modulation is due to the 9.1-year (lunar/solar) and the 10-11-year (solar/planetary) cycles, which have just entered their warming phase. This decadal pattern should be distinguished from the fast ENSO oscillations, which are expected to produce fast periods of warming and fast periods of cooling during these five years, as happened from 2000 to 2012. Thus, the fact that during a La Niña cooling phase, as right now, the temperature may actually be cooling does not constitute a “proof” that my model is “wrong”, as Leif claimed.
Of course, in addition to twisting numerous facts, Leif has also never acknowledged in his comments the huge discrepancy between the data and the IPCC projection that is evident in the widget. In my published paper [1], I report in figure 6 an appropriate statistical test comparing my model and the IPCC projection against the temperature. Figure 6 is reported below.
The figure reports a kind of chi-squared statistical test between the models and the 4-year smooth temperature component as time progresses. Values close to zero indicate that the model agrees very well with the temperature trend within its error range; values above 1 indicate a statistically significant divergence from the temperature trend. It is evident from the figure above that my model (blue curve) agrees very well with the temperature 4-year smooth component, while the IPCC projection is always worse, and statistically diverges from the temperature since 2006.
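A divergence statistic of this kind can be sketched as follows. This is a generic reduced-chi-square-style measure, not necessarily the exact formula behind figure 6:

```python
import numpy as np

def divergence_stat(model, obs, sigma):
    """Chi-square-like divergence between a model curve and observations,
    normalized by the model's error range: values near 0 mean the model
    tracks the data within its uncertainty; values above 1 indicate a
    significant divergence. A sketch of the kind of statistic described,
    not the exact formula used in the paper."""
    model = np.asarray(model, float)
    obs = np.asarray(obs, float)
    return np.mean(((model - obs) / sigma) ** 2)

obs = np.array([0.40, 0.42, 0.41, 0.43, 0.42])  # smoothed anomalies (toy)
good = obs + 0.01   # model tracking the data closely
bad = obs + 0.30    # model diverging from the data
print(divergence_stat(good, obs, sigma=0.05))   # ~0.04
print(divergence_stat(bad, obs, sigma=0.05))    # ~36
```

A tight model with a small stated error passes this test; a model that sits 0.3 °C off the data fails it badly even with a 0.05 °C error band.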
I do not expect Leif to change his behavior toward me and my research any time soon. I would just advise the readers of this blog, in particular those with modest scientific knowledge, to take his unfair and unprofessional comments with the proper skepticism.
- Criticism about the baseline alignment between the data and the IPCC average projection model.
A reader dana1981 claimed that “I believe Scafetta’s plot is additionally flawed by using the incorrect baseline for HadCRUT3. The IPCC data uses a baseline of 1980-1999, so should HadCRUT.”
This reader also referred to a figure from skepticalscience, shown below for convenience,
that shows a slightly lower baseline for the IPCC model projection relative to the temperature record, which gives the impression of a better agreement between the data and the IPCC model.
The baseline position is irrelevant because the IPCC models have projected a steady warming at a rate of 2.3 °C/century from 2000 to 2020; see IPCC figure SPM.5, and see here with my lines and comments added.
On the contrary, the temperature trend since 2000 has been almost steady, as the figure in the widget clearly shows. Evidently, changing the baseline does not change the slope of the decadal trend! So, moving down the baseline of the IPCC projection to suggest a better agreement with the data is just an illusionist's trick.
In any case, the baseline used in my widget is the correct one, while the baseline used in the figure on skepticalscience is wrong. In fact, the IPCC models have been carefully calibrated to reconstruct the trend of the temperature from 1900 to 2000. Thus, the correct baseline is the 1900-2000 baseline, which is what I used.
To help the readers of this blog check the case for themselves, I sent Anthony the original HadCRUT3 data and the IPCC cmip3 multimodel mean reconstruction record from here. They are in the two files below:
itas_cmip3_ave_mean_sresa1b_0-360E_-90-90N_na-data
As everybody can calculate from the two data records, the 1900-2000 average of the temperature is -0.1402, while the 1900-2000 average of the IPCC model is -0.1341.
This means that to plot the two records on the common 1900-2000 baseline, one needs the following command in gnuplot:
plot "HadCRUT3-month-global.dat", "itas_cmip3_ave_mean_sresa1b_0-360E_-90-90N_na.dat" using 1:($2 - 0.0061)
which for 1850-2040 produces the following graph:
The period since 2000 is exactly what is depicted in my widget.
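The same alignment can be reproduced in, say, Python. The helper functions show how one would compute the means from the files (the two-column year/value layout is an assumption about the data files); the quoted 1900-2000 means reproduce the 0.0061 shift used in the gnuplot command:

```python
import numpy as np

# Sketch of the baseline alignment done with the gnuplot command above.
# The two-column (decimal year, temperature) file layout is an assumption.
def load_series(path):
    data = np.loadtxt(path)
    return data[:, 0], data[:, 1]

def mean_over(years, values, start, end):
    """Mean of the series over the half-open interval [start, end)."""
    mask = (years >= start) & (years < end)
    return values[mask].mean()

# With the 1900-2000 means quoted in the text (-0.1402 for HadCRUT3,
# -0.1341 for the CMIP3 multimodel mean), the shift works out to:
offset = -0.1341 - (-0.1402)
print(round(offset, 4))  # 0.0061, subtracted from the model record
```

Subtracting this offset from the model record places both series on the common 1900-2000 baseline; it shifts the curve vertically and, as noted above, leaves the slopes untouched.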
The figure above also highlights the strong divergences between the IPCC model and the temperature, which are explicitly studied in my papers, proving that the IPCC models are not able to reconstruct any of the natural oscillations observed at multiple scales. For example, look at the 60-year cycle I extensively discuss in my papers: from 1910 to 1940 a strong warming trend is observed in the data, but the warming trend in the model is far lower; from 1940 to 1970 a cooling is observed in the data, while the IPCC model still shows a warming; from 1970 to 2000 the two records present a similar trend (this is the period originally used to calibrate the sensitivities of the models); and the strong divergence observed in 1940-1970 repeats since 2000, with the IPCC model projecting a steady warming at 2.3 °C/century while the temperature shows the steady, harmonically modulated trend highlighted in my widget and reproduced by my model.
As explained in my paper, the failure of the IPCC models to reconstruct the 60-year cycle has large consequences for properly interpreting the anthropogenic warming effect on climate. In fact, the IPCC models assume that the 1970-2000 warming is 100% produced by anthropogenic forcing (compare figures 9.5a and 9.5b in the IPCC report), while the 60-year natural cycle (plus the other cycles) contributed at least 2/3 of the 1970-2000 warming, as proven in my papers.
In conclusion, the baseline of my widget is the correct one (1900-2000). My critics at skepticalscience are simply trying to hide the failure of the IPCC models to reconstruct the 60-year temperature modulation by plotting the IPCC average simulation only since 2000, and by lowering the baseline, apparently to the period 1960-1990, which is not where it should be, because the model is supposed to reconstruct the 1900-2000 period by assumption.
It is evident that lowering the baseline would produce a larger divergence from the temperature data before 1960! So, skepticalscience employed the childish trick of pulling a too-small coversheet over a too-large bed. In any case, if we use the 1961-1990 baseline, the IPCC model should be shifted down by 0.0282, which is just 0.0221 °C below the position depicted in the figure above: not a big deal.
In any case, the position of the baseline is not the point; the issue is the decadal trend. My 1900-2000 baseline is, in any event, in the optimal position.
- Criticism about the chosen low-high boundary levels of the IPCC average projection model (my width of the green area in the widget).
Another criticism, in particular by skepticalscience, regards the width of the boundary (the green area in the widget) that I used. They have argued that
“Most readers would interpret the green area in Scafetta’s widget to be a region that the IPCC would confidently expect to contain observations, which isn’t really captured by a 1-sigma interval, which would only cover 68.2% of the data (assuming a Gaussian distribution). A 2-sigma envelope would cover about 95% of the observations, and if the observations lay outside that larger region it would be substantial cause for concern. Thus it would be a more appropriate choice for Scafetta’s green envelope.”
There are numerous problems with the above skepticalscience’s comment.
First, the width of my green area (which has a starting range of about +/- 0.1 °C in 2000) coincides exactly with what the IPCC has plotted in its figure SPM.5. Below I show a zoom of IPCC’s figure SPM.5.
The two red lines added by me show the width at 2000 (black vertical line). The width between the two horizontal red lines in 2000 is about 0.2 °C, as used in the green area plotted in my widget. The two other black lines enclosing the IPCC error area represent the green area enclosure reported in the widget. Thus, my green area accurately represents what the IPCC has depicted in its figure, as I explicitly state and show in my paper.
Second, skepticalscience claims that the correct comparison requires a 2-sigma envelope, and they added the following figure to support their case:
The argument advanced by skepticalscience is that because the temperature data are within their 2-sigma IPCC model envelope, the IPCC models are not disproved, as my widget would imply. Note that the green curve is not a faithful reconstruction of my model and is too low: compare with my widget.
However, claiming that a model is validated by associating a huge error range with it is a trick to fool people with no statistical understanding.
By the way, contrary to the claim of skepticalscience, in statistics it is the 1-sigma envelope width that is used, not the 2-sigma or 3-sigma one. Moreover, the good model is the one with the smallest error, not the one with the largest error.
In fact, as proven in my paper, my proposed harmonic model has a statistical accuracy of +/- 0.05 °C, within which it well reconstructs the decadal and multidecadal modulation of the temperature: see here.
On the contrary, if we use the skepticalscience figure depicted above, we have in 2000 a 1-sigma error of +/- 0.15 °C and a 2-sigma error of +/- 0.30 °C. These fat error envelopes are between 3 and 6 times larger than that of my harmonic model. Thus, it is evident from the skepticalscience claims themselves that my model is far more accurate than what the IPCC models can guarantee.
Moreover, the claim of skepticalscience that we need to use a 2-sigma error envelope indirectly also proves that the IPCC models cannot be validated according to the scientific method and, therefore, do not belong to the realm of science. In fact, to be validated, a modeling strategy needs to guarantee a sufficiently small error to make it possible to test whether the model is able to identify and reconstruct the visible patterns in the data. These patterns are given by the detected decadal and multidecadal cycles, which have amplitudes below +/- 0.15 °C: see here. Thus, the amplitude of the detected cycles is well below the skepticalscience 2-sigma envelope amplitude of +/- 0.30 °C (they would even be below the skepticalscience 1-sigma envelope amplitude of +/- 0.15 °C).
As I have also extensively proven in my paper, the envelope of the IPCC models is far larger than the amplitude of the temperature patterns that the models are supposed to reconstruct. Thus, those models cannot be properly validated and are useless for making any useful decadal and multidecadal forecast/projection for practical societal purposes, because their associated error is far too large, by admission of skepticalscience itself.
Unless the IPCC models can guarantee a precision of at least +/- 0.05 °C and reconstruct the decadal patterns, as my model does, they cannot compete with it and are useless, all of them.
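For reference, the 68.2% and "about 95%" coverage figures quoted in the skepticalscience comment above are simply the standard Gaussian coverages, which can be checked directly:

```python
from math import erf, sqrt

def gaussian_coverage(k):
    """Fraction of a Gaussian distribution lying within +/- k standard
    deviations: erf(k / sqrt(2)). These are the coverage figures quoted
    in the exchange above."""
    return erf(k / sqrt(2))

print(round(gaussian_coverage(1), 3))  # 0.683, the "68.2%" 1-sigma band
print(round(gaussian_coverage(2), 3))  # 0.954, the "about 95%" 2-sigma band
```

The disagreement in the text is therefore not about these numbers, which both sides accept, but about whether a wider envelope constitutes a fairer test or a weaker one.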
- Criticism about the upcoming HadCRUT4 record.
Skepticalscience has also claimed that
“Third, Scafetta has used HadCRUT3 data, which has a known cool bias and which will shortly be replaced by HadCRUT4.”
The HadCRUT4 record is not available yet. We will see what happens when it becomes available. From the figures reported here, it does not appear that it will change the issue drastically: the difference from HadCRUT3 since 2000 appears to be just 0.02 °C.
In any case, for an optimal matching, the amplitudes of the harmonics of my model may need to be slightly recalibrated, but HadCRUT4 already shows a clearer cooling from 1940 to 1970 that further supports the 60-year natural cycle of my model and further contradicts the IPCC models. See also my paper with Mazzarella, where the HadSST3 record is already studied.
- Criticism about the secular trend.
It has been argued that the important issue is the upward trend, which would confirm the IPCC models and their anthropogenic warming theory.
However, as explained in my paper, once 2/3 of the warming between 1970 and 2000 is attributed to a natural cycle of solar/astronomical origin (or even to an internal ocean cycle alone), the anthropogenic warming trend reproduced by the models is found to be spurious and strongly overestimated. This leaves most of the secular warming trend from 1850 to 2012 as due to secular and millennial natural cycles, which are also well known in the literature.
In my published papers, as clearly stated there, the secular and millennial cycles are not formally included in the harmonic model for the simple reason that they still need to be accurately identified: they cannot be placed arbitrarily, and the global surface temperature is available only since 1850, which is too short a period to accurately locate and identify these longer cycles.
In particular, skepticalscience has argued that the proposed model (by Loehle and Scafetta), based only on the 60-year and 20-year cycles plus a linear trend from 1850 to 1950 and extrapolated up to 2100 at most, must be wrong because, when the same model is extrapolated over 2000 years, it clearly diverges from reasonable patterns deduced from temperature proxy reconstructions. Their figure is here and reproduced below.
Every smart person would understand that this is another skepticalscience trick to fool the ignorant.
It is evident that if, as we clearly stated in our paper, we ignore the secular and millennial cycles and just approximate the natural millennial harmonic trend with a first-order linear approximation, which we assume can reasonably be extended up to 100 years and no more, then it is stupid, before being dishonest, to extrapolate it over 2000 years and claim that our result is contradicted by the data. See here for an extended comment by Loehle and Scafetta.
As said above, in those models the secular and millennial cycles were excluded on purpose. However, I already published in 2010 a preliminary reconstruction with those longer cycles included here (sorry, in Italian); see figure 6, reported below.
In that model, however, the cycles are not yet optimized, which will be done in the future. But this is sufficient to show how ideologically naïve (and false) the claim from skepticalscience is.
In any case, the secular trend and its association with solar modulation have been extensively addressed in my previous papers since 2005. The last published paper focusing on this topic is discussed here and more extensively here, where the relevant figure is shown below.
The black curves represent empirical reconstructions of the secular trend of the solar signature since 1600. The curve with the upward trend since 1970 is made using the ACRIM TSI composite (which would be compatible with the 60-year cycle), while the other uses the PMOD TSI composite, which is made by manipulating some of the satellite records with the excuse that they are wrong.
Thus, until the secular and millennial cycles are accurately identified and properly included in the harmonic models, the studies that use the TSI secular proxy reconstructions are the ones to be used for understanding the secular trend, like my other publications from 2005 to 2010. Their results are in perfect agreement with what can be deduced from the most recent papers focusing on the astronomical harmonics, and would imply that no more than 0.2-0.3 °C of the observed 0.8 °C warming since 1850 can be associated with anthropogenic activity. (Do not let yourself be fooled by the Benestad and Schmidt 2009 criticism, which is filled with embarrassing mathematical errors and whose GISS modelE performance is strongly questioned in my recent papers, together with that of the other IPCC models.)
I thank Anthony for the invitation, and I apologize for the English errors that my article above surely contains.
Relevant references:
[1] Nicola Scafetta, “Testing an astronomically based decadal-scale empirical harmonic climate model versus the IPCC (2007) general circulation climate models.” Journal of Atmospheric and Solar-Terrestrial Physics, (2012). DOI: 10.1016/j.jastp.2011.12.005
[2] Adriano Mazzarella and Nicola Scafetta, “Evidences for a quasi 60-year North Atlantic Oscillation since 1700 and its meaning for global climate change.” Theor. Appl. Climatol. (2011). DOI: 10.1007/s00704-011-0499-4
[3] Craig Loehle and Nicola Scafetta, “Climate Change Attribution Using Empirical Decomposition of Climatic Data.” The Open Atmospheric Science Journal 5, 74-86 (2011). DOI: 10.2174/1874282301105010074
[4] Nicola Scafetta, “A shared frequency set between the historical mid-latitude aurora records and the global surface temperature.” Journal of Atmospheric and Solar-Terrestrial Physics 74, 145-163 (2012). DOI: 10.1016/j.jastp.2011.10.013
[5] Nicola Scafetta, “Empirical evidence for a celestial origin of the climate oscillations and its implications.” Journal of Atmospheric and Solar-Terrestrial Physics 72, 951–970 (2010). DOI: 10.1016/j.jastp.2010.04.015
Additional News and Links of Interest:
Global Warming? No, Natural, Predictable Climate Change, Larry Bell
http://scienceandpublicpolicy.org/images/stories/papers/reprint/astronomical_harmonics.pd
Joachim Seifert says:
March 15, 2012 at 5:31 pm
LET’S SUM UP THE PRESENT STATE TO THIS HOUR:
—-Amazing is that Warmists and 60-year-Cycle Deniers have joined forces…but to no avail
….because
—-Astronomic cycles are PERPETUAL:
I’ve only seen a couple of feeble attempts to deny the ~60 year climate cycle in this thread. Any intelligent viewer can see the ~60 year climate cycle by inspection. One has to not want to see it to miss it.
But, where the cycle comes from, that is another thing altogether. As I have made clear, it can easily be the result of a resonance intrinsic to the Earth’s physical environment driven by random forcing, and such a phenomenon will occasionally be in phase with just about any process with a similar frequency. No appeal to weakly coupled astronomical phenomena is required to establish consistency of this hypothesis with all the observations. Consistency is not proof, and neither is Occam’s Razor. But, I personally think this resonance hypothesis has more solid theoretical basis and is more likely to be correct.
To Bart:
We are talking here not about an esoteric, climate-irrelevant 60-year cycle, but
about a significant astronomical cycle with a large effect of up to 0.4 °C, as the
staircase shape of the 19th/20th-century GMT measurements shows……the steps
(CYCLE EFFECTS) are present in SST, AMO, and more graphs you may name……
The Warmists, until now, were given billions of $ and 30 years to prove that the
CO2 cause or other atmospheric CAUSES produce STEPWISE (cycle)
EFFECTS…. and utterly failed….
Let me repeat: you find 60-year cyclic EFFECTS (so-called “quasi-proof”) within
the atmosphere, but they are not CAUSED by atmospheric action; there was enough
money and time spent on this idea….rubbish, as the English would say…..
This strong 60-year astronomic cycle is the real new knowledge and we should be
grateful to real climate pioneers such as Nick Scafetta….and better listen……
He is way above all Warmists, Cycle Deniers and (if you like): Cycle Minimizers…..
……and he/we are ahead of our times…..
we will see in our lifetime, I am 110% sure, that Warmists/Deniers/Minimizers will die
out in the foreseeable future………
Be happy…. Cheers mate….
JS
“…more solid theoretical basis…”
By that, I mean the gravity gradients are just too small to have much of an impact, and what other way can we expect the planets to affect solar dynamics or the Earth’s climate directly?
Bart says:
March 15, 2012 at 9:47 am
Well, that’s odd. Now, I’m seeing the same plot in both places. Just thought I’d mention it in case anyone gets confused. These plots are adequate to show what I wanted to show, so I’m just going to leave it.
Bart says: March 15, 2012 at 6:19 pm
Joachim Seifert says: March 15, 2012 at 5:31 pm
“I’ve only seen a couple of feeble attempts to deny the ~60 year climate cycle in this thread. Any intelligent viewer can see the ~60 year climate cycle by inspection. One has to not want to see it to miss it.”
It looks evident to me too 🙂
And its importance is great.
I believe that my papers are quite important: research done with great sacrifices on my side, I can assure you.
Despite what a charlatan and a vulgar buffoon have said above without being able to disprove anything, the facts are the ones that count in the end. I simply hope that everybody will understand the importance of this research for the good of humanity. And the facts speak loudly.
Thank you for the interest, but unless something truly requires my comments, I will stop here.
About the charlatan and the vulgar buffoon above, I would like to clarify that I have no bad feelings toward them; I just wish that they would change their inappropriate behavior.
Joachim Seifert says: March 15, 2012 at 5:31 pm
(1) 60-year cycles exist in the GISP2 Holocene power spectrum for over 10,000 years (see Davis & Bolling), are (2) accounted for by Nick Scafetta with observations since 1850, and (3) MUST therefore EXIST before 1850….
No. A cycle has no existence in general; existence belongs to a physical process where a geometry is involved.
Denial is nothing less than obstinacy, not wanting to learn…..
Not in general. If there are counterarguments stronger than the given arguments, that is practiced science.
….. clear is that cycles are NEW KNOWLEDGE NOW quantifying their profound effects onto climate….
No. What is correct is that a possible mechanism need not be shown if there is a strong correlation with a geometry of real nature. Wegener demonstrated continental drift without a mechanism, on the basis of the geometry of South America and Africa, and he was right.
What can be concluded is that every cycle with a prominent label in the science community has no existence in science, because no one has ever shown a geometry from real nature. Time [s, year] has no physical existence, because it is not an observable in physics; it is a social term of society.
N. Scafetta has neither shown a geometry in the solar system nor a mechanism that processes the geometric cycles in the solar system.
V.
Nicola Scafetta says:
March 15, 2012 at 8:44 am
“People like you have a very wrong understanding of science. They think that science starts with the “mechanism”; it does not. That is metaphysics, not physics.”
You have the wrong idea about me; I have spent years doing observational correlations in this very field. The problem here is not the “mechanisms” by which the physical forces impart their climatic impact over the said c. 60-yr cycle, but the validity of the correlation itself, i.e. the question: how do 5 orbits of Jupiter and 2 orbits of Saturn actually constitute a 60-yr cycle with its positive peaks at the synods in 1940 and 2000? i.e. what makes those particular synods stand out against the other two, at 1960 and 1980?
FWIW, it is all academic to me, as I am definitely seeing a signal in the temperature data that is at least 70 yrs long.
Geoff Sharp says:
March 15, 2012 at 2:50 pm
There is a cycle that can be repeated many times, the King-Hele cycle. It comprises 16 Ju/Sa synods, 7 Ur/Sa synods and 23 Ju/Ur synods (317.722 yrs).
7 K-H cycles is the 2224yr near Jovian return.
14 K-H plus an extra 179yrs is the 4627yrs Jovian cycle.
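As a quick consistency check of Geoff Sharp's figures, the synodic periods implied by standard sidereal orbital periods do combine to roughly 317.7 years for all three pairings:

```python
def synodic_period(p_inner, p_outer):
    """Synodic (conjunction-to-conjunction) period of two orbits,
    1 / (1/P1 - 1/P2), in years."""
    return 1.0 / (1.0 / p_inner - 1.0 / p_outer)

# Sidereal orbital periods in years (standard values).
JUPITER, SATURN, URANUS = 11.862, 29.457, 84.02

ju_sa = synodic_period(JUPITER, SATURN)   # ~19.86 yr
sa_ur = synodic_period(SATURN, URANUS)    # ~45.36 yr
ju_ur = synodic_period(JUPITER, URANUS)   # ~13.81 yr

# The counts quoted above all land near the ~317.7 yr King-Hele figure:
print(round(16 * ju_sa, 1), round(7 * sa_ur, 1), round(23 * ju_ur, 1))
```

The three products agree with the quoted 317.722 yr to within a fraction of a year, which is why the same span can be counted in Ju/Sa, Ur/Sa or Ju/Ur synods.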
Ulric Lyons says: March 16, 2012 at 8:34 am
“60yr cycle with its positive peaks at the synods in 1940 and 2000 ? i.e. what makes those particular synods stand out against the other two at 1960 and 1980 ?”
As explained in the paper, in 1940 and 2000 the planets were closer to the sun than during the other two conjunctions. This implies a stronger gravitational forcing on the sun, which responded with an increased activity that drove higher temperatures.
To Nick:
This quote, “As explained in the paper….ff.”, is your great fundamental insight into
the cause/base of the 60-year cyclic climate change….and your grand contribution
to climate science…..well done…..
What is now left to do (it has been done in the past weeks) is/was
to define clear base numbers/explanations for the part following “…the Sun, which
responded with…..”
I sent you some draft numbers some weeks ago, and these are the ones for the
cyclic increasing/decreasing activities…..all clear by now; you will recognize it once
you get the English text for this…..2012 will be the pivotal climate science year……
……the beginning of the end of AGW, coming in much faster than foreseen, just wait….
Saludos de JS
Nicola Scafetta says:
March 16, 2012 at 3:08 pm
“As explained in the paper, in 1940 and 2000 the planets were closer to the sun than during the other two conjuctions.”
So what went wrong when they were even closer in 1643 and 1702 ?
To Ulrich
Quote: “What went wrong in 1643 and 1702?”
Thanks for these valuable dates. The 59-year cycles would then be:
1643-1702-1761-1820-1879-1938-1997-2056…. and then, according
to their shape, each divided first into “FLAT 40 years”, followed by
“STEP INCREASE 19 years” …
great….
Concerning the LIA: nothing went wrong….the ship is steaming on course; the
explanation is that “navigation conditions” differ between the “bottoming-
out centuries” (17th Cty), regular navigation conditions (18th-20th Cty), and the “top
wave plateau” conditions of the 21st Cty….
You cannot throw everything into one-and-only 60-year-cycle bucket…..because
there is one more major cause to it, for which you need my mentioned booklet….
because, as you can also see in Nick Scafetta’s comments, one cannot derive
a complex theme purely on blog sites; the background needs literature support…
Thanks anyway…great hint…..
JS
Ulric Lyons says: March 16, 2012 at 3:42 pm
“So what went wrong when they were even closer in 1643 and 1702 ?”
I know what happened in that period, but because the published papers do not deal with that specific issue, I do not think it appropriate to comment on it here. Please wait until future research is published. Just be patient a little bit.
Joachim Seifert says:
March 16, 2012 at 5:03 pm
“You cannot throw everything into one&only 60-year-cycle bucket….”
Especially if no such bucket exists; try to fit it here, for example:
http://members.multimania.nl/ErrenWijlens/co2/ceurvsjghcnupd.gif
Ulrich:
It is important to match observations to 60-year cycles…..how to do it….
….. this is the question…..we have plenty of time; every month there are new
Delta-O-18 studies out from different parts of the world; there exist not only
those meteorological data of yours quoted… we have CETs, BESTs, from almost
every European capital, for 300 up to 400 years of measurements…
……. The European values have to be substantially discounted…the way they are, they
appear to be too horizontal (the approach of Luedecke, Link and others, doubting
temp increases…..and insisting on measurement “mistakes”….)
but we KNOW that 1. Northern Hemispheric values exceed the GMT, because
of the land MASS producing higher temps….. 2. The Southern Hemisphere has lower temp
values, due to solar energy entering the huge water mass and good-bye, until by
means of ocean flow it might/might not surface; see Jim HANSEN, NASA GISS: the
LIA solar energy HIDES at the ocean bottom)…
therefore: to compensate for this NH/land bias,
HadCRUT DEMONSTRATES the MIX for GMT NECESSARY to indicate proper
60-year temp cycles…….
3. In the LIA, temps had top summertime highs due to dry meteorological high-
pressure conditions, with weeks without clouds, such as the recent Moscow summer
heat (see the burning of London…..).
…. Winters were severe…but exceeding summer temps always prop the average temp
level upward…..
I believe you know this about biased LIA temps already….
….. the question stands: why do you suggest 1-3 distorted data sets to me…..?
What is the point of it?
The only answer making sense to me is that you are still tainted with
Warmist BS, sorry to say so….otherwise, on the astronomical side, you
have made some good contributions to the advance of science and deserve
recognition for what has to be recognized….
Free yourself from Warmism, join those who are ahead of their time, and let
Hansen’s dinosaurs suffer their natural destiny….It’s never too late….
Thanks anyway……
JS
Nicola, you haven’t shown enough of HadCRUT4. It looks like it’s turning cooling into warming: http://www.real-science.com/hadcrut4-policy-based-evidence-making
Dr Scafetta, please can you clarify exactly how you obtained the +/- 0.1 C uncertainty range for the IPCC projections? Did you estimate it from the SPM figure, or did you calculate it from the CMIP3 ensemble for SRES A1B itself (which IIRC you analyse in the supplementary material of your paper)? I have downloaded these model runs, and they give a standard deviation closer to 1.7, so I would be keen to find the source of the discrepancy (I don’t consider estimating from a rather cluttered graph a very accurate method).
Joachim Seifert says:
March 16, 2012 at 6:34 pm
“The only answer making sense to me is that you still go tainted with
Warmist BS, sorry to say so….”
No, it is going to cool again. It is just that my astronomical analogues indicate a return to a generally warmer period from 2025 to 2038, which is where the 60-yr club says it will bottom out.
To Ulrich:
It seems that you have some calculations at hand, instead of only
divinations…
Well, give some hints; any contribution to real science should always
be welcome…..
JS
dikranmarsupial says: March 17, 2012 at 9:01 am
“how you obtained the +/- 0.1 C uncertainty range for the IPCC projections, did you estimate it from the SPM figure?……I have downloaded these model runs, and they give a standard deviation closer to 1.7, so I would be keen to find the source of the discrepancy”
You should ask the IPCC. Their figure certainly does not show an SD close to 1.7 C, which is 17 (seventeen) times larger than what they have depicted in their figures. I limited myself to using what the IPCC says.
But you are right: if you use the absolute temperature values of the computer runs, that is what you may get.
Dr Scafetta, sorry that should have been 0.17C, rather than 1.7C, the variability in the models is high, but not that high! ;o)
I would, however, greatly appreciate a direct answer to my question: did you arrive at an estimate of 0.1 C by measuring the graph in the SPM, did you calculate it from the model runs (which you did download), or did you get it from some other source?
As far as I am aware, the IPCC do not “say” that the standard deviation is 0.1 C; that is *your* estimate of the standard deviation, based on a figure from the SPM which is rather too cluttered, in my opinion, to be a reliable source for accurate estimation. The enlarged portion of the figure you provide above suggests that 0.1 is a significant under-estimate, as there is clearly some grey area outside the lower of the two red horizontal lines. As the IPCC have made the model runs publicly available, it would seem more reasonable to take those as the definitive source of information on what the model projections do or do not say that is not explicitly stated in the report.
dikranmarsupial says: March 18, 2012 at 4:54 am
As I clearly said, my estimate of about +/- 0.1 C is based on what the IPCC has claimed and depicted in its figures. You should ask the IPCC your question about the discrepancy between your way of doing the calculations and theirs. Probably they used a statistical method more advanced than yours. In fact, you may need to use a formula that takes many things into account, such as the fact that the sd of an average needs to be divided by the square root of the number of samples used, and this should be weighted among the models.
In any case, the +/- 0.1 C is also compatible with the minimum RMS value that I calculated for each GCM model, as reported in my Table 2. So this minimum among the RMS values is the true estimate of the minimum error for the GCM comparison, because it is the best that they could guarantee. Your 0.17 value appears to be compatible with the average of all the calculated RMS values, which vary between 0.1 and 0.25, as I reported in Table 2.
Finally, the entire question is truly irrelevant. As I explained above, it is not by making the error bigger that you make the IPCC models more accurate. Bigger errors mean less accuracy, not greater accuracy. An error of +/- 0.17 C would mean that the model error is far larger than the largest pattern observed in the data (the 60-year cycle), which implies that the models cannot be validated and cannot be used to predict or project anything on a time scale of at least 50 years. In fact, with a 1-sd error of +/- 0.17 C a model could not tell you whether during the next 15-30 years there will be a significant warming or a significant cooling, so the utility of the model would be zero.
For example, if a weatherman tells you that according to his weather model tomorrow could be anything from a heavily rainy day to a strongly sunny day, do you think the weather model is very accurate because it “predicts” everything, or is it useless because its error bar is too large?
Moreover, as I said above, if you like big errors you may use the absolute temperature values of the computer runs (not their anomalies): with your method the 1-sd error window will be far larger, of the order of about 2.0 C (it is not a typo, I mean 2.0 (two) C). So, if you like an even larger error, you may use that one.
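The anomaly-versus-absolute-values point in the exchange above can be sketched with a toy ensemble. All numbers here are hypothetical for illustration, not actual CMIP3 output: models that disagree widely on the absolute global mean temperature can still agree closely once each run is expressed as an anomaly relative to its own baseline, so the ensemble spread depends strongly on which convention you adopt.

```python
import random
import statistics

random.seed(0)
n_models, n_years = 23, 30

# Hypothetical toy ensemble: each "model" shares the same warming trend but
# has its own absolute-temperature baseline, since models typically disagree
# far more on the absolute global mean temperature than on its change.
runs_absolute = []
for _ in range(n_models):
    offset = random.gauss(14.0, 0.7)  # absolute baseline, degC (made up)
    run = [offset + 0.02 * year + random.gauss(0.0, 0.05)
           for year in range(n_years)]
    runs_absolute.append(run)

# Anomalies: subtract each run's own mean over a reference period.
runs_anomaly = []
for run in runs_absolute:
    ref = statistics.mean(run[:10])
    runs_anomaly.append([t - ref for t in run])

# Ensemble spread (standard deviation across models) in the final year:
sd_abs = statistics.stdev(run[-1] for run in runs_absolute)
sd_anom = statistics.stdev(run[-1] for run in runs_anomaly)
print(f"spread of absolute values: {sd_abs:.2f} C")
print(f"spread of anomalies:       {sd_anom:.2f} C")
```

With these made-up parameters the absolute-value spread is dominated by the baseline offsets and is roughly an order of magnitude larger than the anomaly spread, which mirrors the contrast between the ~2.0 C and ~0.1-0.2 C figures debated above.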
dikranmarsupial says:
March 18, 2012 at 4:54 am
This is a ridiculous argument. If the error bars are large enough to encompass any eventuality, then the estimates are USELESS. You are running from the evidence, and your flight is doomed.
“The first principle is that you must not fool yourself and you are the easiest person to fool.”
-Richard P. Feynman
Dr Scafetta wrote: “As I clearly said, my about +/- 0.1 C estimate is based on what the IPCC has claimed and depicted in his figures.” The IPCC, as far as I know, has not made a specific claim here other than this figure. It is a pity that you can’t give a direct answer to a direct question when politely posed. Did you estimate the 0.1 C directly from that figure in the SPM or not? A yes-or-no answer would be appreciated. If the answer is “no”, then please specify the other source.
Dr Scafetta also wrote: “the sd of an average needs to be divided by the square root of the number of used samples and this should be weighted among the models. ”
No, that is the standard ERROR of the mean, not the standard deviation of the model runs. Assuming that the observations should lie within a region defined by the standard error of the mean was the error made by the Douglass et al. paper that you cited. To see the flaw in that reasoning, you only need to consider that the best model you could theoretically make (an infinite ensemble of models with perfect physics and infinite spatial and temporal resolution) would be essentially guaranteed to fail the Douglass test, even though it is perfect.
As to the size of the error bars, the point is not whether they are large or small; it is whether they are accurately represented by your widget. If they are the correct size, the observations do not lie outside the uncertainty of the projection, so there is no evidence (yet) that the observations are inconsistent with the projection. If you want to criticise the models for having high uncertainty, then that is fine, but that is a different criticism from suggesting that the observations lie outside the range of projections. They don’t.
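The standard deviation versus standard error distinction argued here can be sketched numerically. In this toy example (hypothetical numbers, not real climate data), each "run" is a draw with internal variability of 0.1 C about the forced response; the spread of individual runs stays near 0.1 C however many runs you have, while the standard error of the ensemble mean shrinks toward zero as the ensemble grows.

```python
import math
import random
import statistics

random.seed(1)

# Toy sketch: internal variability gives each run a spread of sd_internal
# about the (here zero) forced response; an ensemble of N "perfect" runs
# samples that same variability.
sd_internal = 0.1
for n_runs in (5, 20, 100):
    runs = [random.gauss(0.0, sd_internal) for _ in range(n_runs)]
    sd = statistics.stdev(runs)        # spread of the individual runs
    sem = sd / math.sqrt(n_runs)       # uncertainty of the ensemble MEAN
    print(f"N={n_runs:3d}  sd={sd:.3f}  sem={sem:.3f}")
```

A single observed realization is itself one draw with a spread of about 0.1 C, so it will routinely fall outside +/- sem even when every model is perfect, which is the flaw in the Douglass-style test described above: the observations should be compared against the ensemble spread (sd), not the standard error of the mean (sem).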
dikranmarsupial says: March 18, 2012 at 10:27 am
I think I responded to you quite clearly. If you think that the figures used by the IPCC have only an artistic but not a scientific value, what can I do? (I do agree that the IPCC is more art than science.)
My evaluations are based on what the IPCC has reported and presented as “scientific”, and on my evaluation of the RMS values as discussed in the paper.
As Bart said, “If the error bars are large enough to encompass any eventuality, then the estimates are USELESS. You are running from the evidence, and your flight is doomed.”
You should read my paper with an open mind. The real test is what I did in the paper: I checked each single model run against the data, and none of those model runs agrees with the data. Those model runs appear to be random noise with an upward trend.
No.
The IPCC relies not on error bars or standard deviations of experimental data that can be duplicated by other experiments, but on its CLAIMS of “increased assurance” that the model results are correct. The CAGW dogmatists cannot present ANY experimental results, ANY duplicated experiments producing repeatable results, that could yield something that might be called a “standard deviation”.
And, without at least 3x experiments/33x experiments/333x experiments or 3333x experiments, how do you get a “standard deviation” and error bars?
What is the “standard deviation” of a calculator that sometimes says 1+1 = 3.0, sometimes 1+1 = 4.0 and sometimes 1+1 = 0.5 but most of the time 1+1 = 2.00002?
Instead, much was made of the “increased assurance” and “greater confidence” that the IPCC wrote into their successive press releases for policymakers: “great confidence” (deliberately implying an accurate prediction within one standard deviation) became “greater confidence” (and a standard deviation within two!), and then that became, by the miracle of self-written press releases (er, IPCC reports), “greatest confidence” and somehow a model accurate to within 3 standard deviations.
And THAT implication of actual “scientific results” was not only allowed to stand, but was promoted by the CAGW dogma in the world’s political press.
But three standard deviations of what?
The BEST that they can claim is that some 23 different models, using different computers but the same assumptions, model logic and basic equations, come up with an “average”, after many thousands of runs, predicting increased temperature with increasing CO2 levels. But if every model uses different equations, how can anybody conclude that any ONE equation and baseline assumption is correct?
Rather, over ONE single 25-year period, from 1973 through 1998, most of the averages of the model runs are backfitted to match temperature and volcanic data. Over the previous 25 years, the models are wrong: CO2 increased and temperatures fell. Over the period from 1916 through 1950, CO2 was essentially steady, but temperatures increased. Over another 15-year period, from 1995 to 2012, CO2 increased, but temperatures remained constant, even declining a bit.
NO model result has predicted that result over that long a period. NONE. Ever.
Therefore, the models are wrong. The assumption that CO2 has a dramatic, catastrophic influence on global temperature is dead wrong, and has been proven wrong by the world’s measured temperatures.
dikranmarsupial says:
March 18, 2012 at 10:27 am
” If they are the correct size, the observations do not lie outside the uncertainty of the projection, so there is no evidence (yet) that the model projection is inconsistent with the projection.”
The question is moot. If you are defending projections which have no predictive power, then you are defending nothing. The “projections” are self-damning, without any aid needed from Dr. Scafetta.
“The first principle is that you must not fool yourself and you are the easiest person to fool.”
-Richard P. Feynman
RACookPE1978 says: March 18, 2012 at 10:52 am:
“Rather, over ONE single 25-year period, from 1973 through 1998, most of the averages of the model runs are backfitted to match temperature and volcanic data. Over the previous 25 years, the models are wrong: CO2 increased and temperatures fell. Over the period from 1916 through 1950, CO2 was essentially steady, but temperatures increased. Over another 15-year period, from 1995 to 2012, CO2 increased, but temperatures remained constant, even declining a bit. NO model result has predicted that result over that long a period. NONE. Ever.
Therefore, the models are wrong. The assumption that CO2 has a dramatic, catastrophic influence on global temperature is dead wrong, and has been proven wrong by the world’s measured temperatures.
”
Well said, and it is what I found in my paper by testing all models used by the IPCC.
However, I need to tell you that what you say would not be understood by any of the IPCC advocates. It is too “logical” for them.