By Dr. Nicola Scafetta
It is time to update my widget comparing the global surface temperature, HadCRUT3 (red and blue), the IPCC 2007 projection (green) and my empirical model (thick black curve and cyan area), which is based on a set of detected natural harmonics (periods of approximately 9.1, 10-11, 20 and 60 years) linked to astronomical cycles, plus a corrected anthropogenic warming projection of about 0.9 °C/century. The yellow curve represents the harmonic model alone, without the corrected anthropogenic warming projection, and represents an average lower limit.
The proposed astronomically-based empirical model represents an alternative methodology for reconstructing and forecasting climate changes (on a global scale, at the moment), distinct from the analytical methodology implemented in the IPCC general circulation models. In my paper I show that all IPCC models fail to reconstruct the decadal and multidecadal cycles observed in the temperature record since 1850. See the details in my publications below.
As the figure shows, the temperature for Jan/2012 was 0.218 °C, a cooling with respect to the Dec/2011 temperature and about 0.5 °C below the average IPCC projection value (the central thin curve in the middle of the green area). Note that this is a very significant discrepancy between the data and the IPCC projection.
By contrast, the data continue to be in reasonable agreement with my empirical model, which, I remind readers, has been constructed as a full forecast since Jan/2000.
In fact, the amplitudes and phases of the four cycles are essentially determined from the data from 1850 to 2000, and the phases are found to be in agreement with appropriate astronomical orbital dates and cycles, while the corrected anthropogenic warming projection is estimated by comparing the harmonic model, the temperature data and the IPCC models during the period 1970-2000. The latter finding implies that the IPCC general circulation models have overestimated the anthropogenic warming component by about 2.6 times on average, within a range of 2 to 4. See the original papers and the dedicated blog article, listed below, for details.
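For readers who want to see the structure of the model concretely, it is in essence a sum of four cosines plus a linear trend. The sketch below uses the cycle periods quoted in this post, but the amplitudes and phase epochs are placeholders for illustration only, not the calibrated values from my papers:

```python
import math

# Structure of the empirical model: four harmonics plus a linear
# anthropogenic trend. The amplitudes A and phase epochs t0 are
# PLACEHOLDERS, not the calibrated values from the published papers.
HARMONICS = [  # (period in years, amplitude in °C, phase epoch)
    (9.1, 0.04, 1997.8),
    (10.4, 0.03, 2002.9),
    (20.0, 0.03, 2001.4),
    (60.0, 0.11, 2001.4),
]
TREND = 0.009  # assumed corrected anthropogenic trend, °C/year (~0.9 °C/century)

def model(year, baseline_year=2000.0):
    """Harmonic component plus linear trend, as an anomaly relative to baseline_year."""
    h = sum(A * math.cos(2 * math.pi * (year - t0) / P) for P, A, t0 in HARMONICS)
    return h + TREND * (year - baseline_year)
```

Shifting the baseline year only moves the curve up or down by the linear trend; it does not change the harmonic modulation.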
The widget has also attracted some criticism from readers of the WUWT blog and from skepticalscience.
Anthony asked me to respond to the criticism, and I am happy to do so. I will respond to five points.
- Criticism from Leif Svalgaard.
As many readers of this blog have noted, Leif Svalgaard continuously criticizes my research and studies. In his opinion, nothing that I do is right or worthy of consideration.
Regarding my widget, Leif has claimed many times that the data already clearly contradict my model: see here 1, 2, 3, etc.
In any case, as I have already responded many times, Leif's criticism appears to be based on his confusing the time scales and the multiple patterns that the data show. The data show a decadal harmonic trend plus faster fluctuations due to El Nino/La Nina oscillations, which have a time scale of a few years. The ENSO-induced oscillations are quite large and evident in the data, with periods of strong warming followed by periods of strong cooling. For example, in the above widget figure the January/2012 temperature is outside my cyan area. This does not mean, as Leif misinterprets, that my model has failed. In fact, this pattern is just due to the present La Nina cooling event. In a few months the temperature will warm again as the El Nino warming phase returns.
My model is not supposed to reconstruct these fast ENSO-induced oscillations, but only the smooth decadal component obtained with a 4-year moving average, as shown in the figure of my original paper: see here for the full reconstruction since 1850, where my models (blue and black lines) reconstruct the 4-year smooth (grey line) well; the figure also clearly highlights the fast and large ENSO temperature oscillations (red) that my model is not supposed to reconstruct.
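As a concrete illustration, the 4-year smooth mentioned above is a centered moving average. A minimal sketch follows; the exact filter used in the paper may differ in its edge handling:

```python
def centered_moving_average(values, window):
    """Centered moving average; returns None where the window does not fit.

    'values' is a regularly sampled series (e.g. monthly anomalies);
    for a 4-year smooth of monthly data, window = 48.
    """
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = i - half, i + half + (window % 2)
        if lo < 0 or hi > len(values):
            out.append(None)  # window falls off the edge of the record
        else:
            out.append(sum(values[lo:hi]) / window)
    return out
```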
As the widget shows, my model predicts a slight warming trend from 2011 to 2016. This modulation is due to the 9.1-year (lunar/solar) and 10-11-year (solar/planetary) cycles, which have just entered their warming phase. This decadal pattern should be distinguished from the fast ENSO oscillations, which are expected to produce fast periods of warming and fast periods of cooling during these five years, as happened from 2000 to 2012. Thus, the fact that during a La Nina cooling phase, as right now, the temperature may actually be cooling does not constitute a "proof" that my model is "wrong," as Leif claimed.
Of course, in addition to twisting numerous facts, Leif has also never acknowledged in his comments the huge discrepancy between the data and the IPCC projection that is evident in the widget. In my published paper [1], I reported in figure 6 the appropriate statistical test comparing my model and the IPCC projection against the temperature. Figure 6 is reported below.
The figure reports a kind of chi-squared statistical test between the models and the 4-year smooth temperature component as time progresses. Values close to zero indicate that the model agrees very well with the temperature trend within its error range; values above 1 indicate a statistically significant divergence from the temperature trend. It is evident from the figure above that my model (blue curve) agrees very well with the 4-year smooth temperature component, while the IPCC projection is always worse and has diverged statistically from the temperature since 2006.
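A divergence statistic of the kind just described can be sketched as follows. This is an illustrative reduced chi-square-like measure, not necessarily the exact formula used in the paper: for each time step, square the model-data difference and normalize by the uncertainty.

```python
def divergence(model, data, sigma):
    """Chi-square-like divergence per point: ((model - data) / sigma)**2.

    Values near 0 mean the model tracks the data within its error range;
    values above 1 flag a statistically significant divergence.
    Illustrative only; not necessarily the exact test in the paper.
    """
    return [((m - d) / s) ** 2 for m, d, s in zip(model, data, sigma)]
```

For example, a model-data gap of 0.3 °C with a 0.1 °C uncertainty gives a divergence of 9, far above the significance threshold of 1.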
I do not expect that Leif will change his behavior toward me and my research any time soon. I would just advise the readers of this blog, in particular those with modest scientific knowledge, to take his unfair and unprofessional comments with the proper skepticism.
- Criticism about the baseline alignment between the data and the IPCC average projection model.
A reader dana1981 claimed that “I believe Scafetta’s plot is additionally flawed by using the incorrect baseline for HadCRUT3. The IPCC data uses a baseline of 1980-1999, so should HadCRUT.”
This reader also referred to a figure from skepticalscience, shown below for convenience, that shows a slightly lower baseline for the IPCC model projection relative to the temperature record, which gives the impression of a better agreement between the data and the IPCC model.
The baseline position is irrelevant because the IPCC models have projected a steady warming at a rate of 2.3 °C/century from 2000 to 2020; see IPCC figure SPM.5, shown here with my lines and comments added.
On the contrary, the temperature trend since 2000 has been almost flat, as the figure in the widget clearly shows. Evidently, changing the baseline does not change the slope of the decadal trend! So, moving the baseline of the IPCC projection down to give the impression of a better agreement with the data is just a trick.
In any case, the baseline used in my widget is the correct one, while the baseline used in the figure on skepticalscience is wrong. In fact, the IPCC models have been carefully calibrated to reconstruct the temperature trend from 1900 to 2000. Thus, the correct baseline to use is the 1900-2000 baseline, which is what I used.
To help the readers of this blog check the case for themselves, I sent Anthony the original HadCRUT3 data and the IPCC cmip3 multimodel mean reconstruction record from here. They are in the two files below:
itas_cmip3_ave_mean_sresa1b_0-360E_-90-90N_na-data
As everybody can calculate from the two data records, the 1900-2000 average of the temperature is -0.1402, while the 1900-2000 average of the IPCC model is -0.1341.
This means that to plot the two records on the common 1900-2000 baseline, the following gnuplot command is needed:
plot "HadCRUT3-month-global.dat", "itas_cmip3_ave_mean_sresa1b_0-360E_-90-90N_na.dat" using 1:($2 - 0.0061)
which over 1850-2040 produces the following graph:
The period since 2000 is exactly what is depicted in my widget.
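For readers who prefer a script to gnuplot, the baseline alignment can be sketched in Python. The (year, anomaly) pair layout below is an assumption about how the data files are parsed:

```python
def baseline_offset(temp_series, model_series, start=1900, end=2000):
    """Offset to subtract from the model record so that both records share
    the same mean over [start, end).

    Each series is a list of (year, anomaly) pairs; this column layout is
    an assumption about the data files, not a documented format.
    """
    def mean(series):
        vals = [v for (y, v) in series if start <= y < end]
        return sum(vals) / len(vals)
    return mean(model_series) - mean(temp_series)
```

With the two 1900-2000 averages quoted above (-0.1402 for the temperature, -0.1341 for the model), this returns the 0.0061 shift used in the gnuplot command.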
The figure above also highlights the strong divergences between the IPCC model and the temperature, which are explicitly studied in my papers, where I show that the IPCC models are not able to reconstruct any of the natural oscillations observed at multiple scales. For example, look at the 60-year cycle I extensively discuss in my papers. From 1910 to 1940 a strong warming trend is observed in the data, but the warming trend in the model is far lower. From 1940 to 1970 a cooling is observed in the data, while the IPCC model still shows a warming. From 1970 to 2000 the two records present a similar trend (this is the period originally used to calibrate the sensitivities of the models). The strong divergence observed in 1940-1970 repeats after 2000, with the IPCC model projecting a steady warming at 2.3 °C/century, while the temperature shows the nearly steady, harmonically modulated trend highlighted in my widget and reproduced by my model.
As explained in my paper, the failure of the IPCC models to reconstruct the 60-year cycle has large consequences for properly interpreting the anthropogenic warming effect on climate. In fact, the IPCC models assume that the 1970-2000 warming was 100% produced by anthropogenic forcing (compare figures 9.5a and 9.5b in the IPCC report), while the 60-year natural cycle (plus the other cycles) contributed at least 2/3 of the 1970-2000 warming, as proven in my papers.
In conclusion, the baseline of my widget is the correct one (1900-2000). My critics at skepticalscience are simply trying to hide the failure of the IPCC models to reconstruct the 60-year temperature modulation by plotting the IPCC average simulation only since 2000, and by lowering the baseline, apparently to the period 1960-1990, which is not where it should be, because the models are supposed to reconstruct the 1900-2000 period by assumption.
It is evident that lowering the baseline would produce a larger divergence from the temperature data before 1960! So skepticalscience employed the childish trick of pulling a too-small coversheet over a too-large bed. In any case, if we use the 1961-1990 baseline, the IPCC model should be shifted down by 0.0282, which is just 0.0221 °C below the position depicted in the figure above: not a big deal.
In any case, the position of the baseline is not the point; the issue is the decadal trend. But my 1900-2000 baseline is in the optimal position.
- Criticism about the chosen low-high boundary levels of the IPCC average projection model (the width of my green area in the widget).
Another criticism, in particular from skepticalscience, regards the width of the boundary (the green area in the widget) that I used. They have argued that
“Most readers would interpret the green area in Scafetta’s widget to be a region that the IPCC would confidently expect to contain observations, which isn’t really captured by a 1-sigma interval, which would only cover 68.2% of the data (assuming a Gaussian distribution). A 2-sigma envelope would cover about 95% of the observations, and if the observations lay outside that larger region it would be substantial cause for concern. Thus it would be a more appropriate choice for Scafetta’s green envelope.”
There are numerous problems with the above comment from skepticalscience.
First, the width of my green area (which has a starting range of about +/- 0.1 °C in 2000) coincides exactly with what the IPCC has plotted in its figure SPM.5. Below I show a zoom of the IPCC's figure SPM.5.
The two red lines added by me show the width at 2000 (black vertical line). The width between the two horizontal red lines in 2000 is about 0.2 °C, as used in the green area plotted in my widget. The two other black lines enclosing the IPCC error area represent the green-area enclosure reported in the widget. Thus, my green area accurately represents what the IPCC has depicted in its figure, as I explicitly state and show in my paper, by the way.
Second, skepticalscience claims that the correct comparison requires a 2-sigma envelope, and they added the following figure to support their case.
The argument advanced by skepticalscience is that because the temperature data are within their 2-sigma IPCC model envelope, the IPCC models are not disproved, as my widget would imply. Note that the green curve is not a faithful reconstruction of my model and is too low: compare it with my widget.
However, claiming that a model is validated by associating a huge error range with it is a trick to fool people with no statistical understanding.
By the way, contrary to the claim of skepticalscience, in statistics it is the 1-sigma envelope width that is normally used, not 2-sigma or 3-sigma. Moreover, the good model is the one with the smallest error, not the one with the largest error.
In fact, as proven in my paper, my proposed harmonic model has a statistical accuracy of +/- 0.05 °C, within which it reconstructs the decadal and multidecadal modulation of the temperature well: see here.
On the contrary, if we use the skepticalscience figure depicted above, we have in 2000 a 1-sigma error of +/- 0.15 °C and a 2-sigma error of +/- 0.30 °C. These fat error envelopes are between 3 and 6 times larger than that of my harmonic model. Thus, it is evident from skepticalscience's own claims that my model is far more accurate than what the IPCC models can guarantee.
Moreover, the skepticalscience claim that we need to use a 2-sigma error envelope indirectly proves that the IPCC models cannot be validated according to the scientific method and, therefore, do not belong to the realm of science. In fact, to be validated, a modeling strategy needs to guarantee a sufficiently small error to make it possible to test whether the model can identify and reconstruct the visible patterns in the data. These patterns are given by the detected decadal and multidecadal cycles, which have amplitudes below +/- 0.15 °C: see here. Thus, the amplitude of the detected cycles is well below the skepticalscience 2-sigma envelope amplitude of +/- 0.30 °C (they would even be below the skepticalscience 1-sigma envelope amplitude of +/- 0.15 °C).
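The Gaussian coverage fractions quoted in this exchange (about 68% within 1-sigma and about 95% within 2-sigma) can be checked directly from the error function:

```python
import math

def gaussian_coverage(k):
    """Probability that a Gaussian variable falls within k standard
    deviations of its mean: erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2))
```

gaussian_coverage(1) is about 0.683 and gaussian_coverage(2) about 0.954, which are the 1-sigma and 2-sigma figures being argued over.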
As I have also extensively proven in my paper, the envelope of the IPCC models is far larger than the amplitude of the temperature patterns that the models are supposed to reconstruct. Thus, those models cannot be properly validated and are useless for making any decadal or multidecadal forecast/projection for practical societal purposes, because their associated error is far too large, by the admission of skepticalscience itself.
Unless the IPCC models can guarantee a precision of at least +/- 0.05 °C and reconstruct the decadal patterns, as my model does, they cannot compete with it and are useless, all of them.
- Criticism about the upcoming HadCRUT4 record.
Skepticalscience has also claimed that
“Third, Scafetta has used HadCRUT3 data, which has a known cool bias and which will shortly be replaced by HadCRUT4.”
The HadCRUT4 record is not yet available. We will see what happens when it becomes available. From the figures reported here, it does not appear that it will change the picture drastically: the difference from HadCRUT3 since 2000 appears to be just 0.02 °C.
In any case, for an optimal match the amplitudes of the harmonics of my model may need to be slightly recalibrated, but HadCRUT4 already shows a clearer cooling from 1940 to 1970, which further supports the 60-year natural cycle of my model and further contradicts the IPCC models. See also my paper with Mazzarella, where the HadSST3 record is already studied.
- Criticism about the secular trend.
It has been argued that the important issue is the upward trend, which would confirm the IPCC models and their anthropogenic warming theory.
However, as explained in my paper, once 2/3 of the warming between 1970 and 2000 is attributed to a natural cycle of solar/astronomical origin (or even to an internal ocean cycle alone), the anthropogenic warming trend reproduced by the models is found to be spurious and strongly overestimated. This leaves most of the secular warming trend from 1850 to 2012 as due to secular and millennial natural cycles, which are also well known in the literature.
In my published papers, as clearly stated there, the secular and millennial cycles are not formally included in the harmonic model for the simple reason that they first need to be accurately identified: they cannot be placed just anywhere, and the global surface temperature is available only since 1850, which is too short a period to accurately locate and identify these longer cycles.
In particular, skepticalscience has argued that the model proposed by Loehle and Scafetta, based only on the 60-year and 20-year cycles plus a linear trend fitted from 1850 to 1950 and extrapolated to 2100 at most, must be wrong because, when the same model is extrapolated over 2000 years, it clearly diverges from reasonable patterns deduced from temperature proxy reconstructions. Their figure is here and is reproduced below.
Every smart person will understand that this is another skepticalscience trick to fool the ignorant.
It is evident that if, as we clearly stated in our paper, we ignore the secular and millennial cycles and approximate the natural millennial trend with a first-order linear approximation that we assume can reasonably be extended for 100 years and no more, then it is stupid, before being dishonest, to extrapolate it over 2000 years and claim that our result is contradicted by the data. See here for an extended comment by Loehle and Scafetta.
As said above, in those models the secular and millennial cycles were excluded on purpose. However, I already published in 2010 a preliminary reconstruction with those longer cycles included here (sorry, in Italian); see figure 6, reported below.
In the above model, however, the cycles are not optimized; that will be done in the future. But this is sufficient to show how ideologically naïve (and false) the claim from skepticalscience is.
In any case, the secular trend and its association with solar modulation have been extensively addressed in my previous papers since 2005. The last published paper focusing on this topic is discussed here and, more extensively, here; the relevant figure is below.
The black curves represent empirical reconstructions of the solar-signature secular trend since 1600. The curve trending upward since 1970 is made using the ACRIM TSI composite (which would be compatible with the 60-year cycle), while the other signature uses the PMOD TSI composite, which is made by manipulating some of the satellite records with the excuse that they are wrong.
Thus, until the secular and millennial cycles are accurately identified and properly included in the harmonic models, the studies that use the TSI secular proxy reconstructions, like my other publications from 2005 to 2010, need to be used for comparison to understand the secular trend. Their results are in perfect agreement with what can be deduced from the most recent papers focusing on the astronomical harmonics, and they would imply that no more than 0.2-0.3 °C of the observed 0.8 °C warming since 1850 can be attributed to anthropogenic activity. (Do not let yourselves be fooled by the criticism of Benestad and Schmidt 2009, which is filled with embarrassing mathematical errors and whose GISS modelE performance is strongly questioned in my recent papers, together with that of the other IPCC models.)
I thank Anthony for the invitation, and I apologize for the English errors that my article surely contains.
Relevant references:
[1] Nicola Scafetta, “Testing an astronomically based decadal-scale empirical harmonic climate model versus the IPCC (2007) general circulation climate models.” Journal of Atmospheric and Solar-Terrestrial Physics, (2012). DOI: 10.1016/j.jastp.2011.12.005
[2] Adriano Mazzarella and Nicola Scafetta, “Evidences for a quasi 60-year North Atlantic Oscillation since 1700 and its meaning for global climate change.” Theor. Appl. Climatol. (2011). DOI: 10.1007/s00704-011-0499-4
[3] Craig Loehle and Nicola Scafetta, “Climate Change Attribution Using Empirical Decomposition of Climatic Data.” The Open Atmospheric Science Journal 5, 74-86 (2011). DOI: 10.2174/1874282301105010074
[4] Nicola Scafetta, “A shared frequency set between the historical mid-latitude aurora records and the global surface temperature.” Journal of Atmospheric and Solar-Terrestrial Physics 74, 145-163 (2012). DOI: 10.1016/j.jastp.2011.10.013
[5] Nicola Scafetta, “Empirical evidence for a celestial origin of the climate oscillations and its implications.” Journal of Atmospheric and Solar-Terrestrial Physics 72, 951–970 (2010). DOI: 10.1016/j.jastp.2010.04.015
Additional News and Links of Interest:
Global Warming? No, Natural, Predictable Climate Change, Larry Bell
http://scienceandpublicpolicy.org/images/stories/papers/reprint/astronomical_harmonics.pd
dikranmarsupial says: March 18, 2012 at 10:27 am
” If they are the correct size, the observations do not lie outside the uncertainty of the projection, so there is no evidence (yet) that the model projection is inconsistent with the projection.”
Not really, look at the figure carefully. Right now the difference between the temperature and the IPCC projection mean is larger than 0.3 C (annual average), which is almost 2 times your 0.17 error.
Dr Scafetta, I'm sorry, but you have not given an unambiguous answer to my question; in each case the wording of your answer admitted the possibility that the estimate of 0.1C was obtained in some other way, or that there was some other justification in the IPCC report for that figure. Just answering "yes" or "no" would have been easier for you to have typed than the reply you gave. It wasn't an unreasonable question, and I don't understand why you could not simply give a direct and completely unambiguous answer.
Dr Scafetta wrote: "You should read my paper with an open mind." It is not a good idea to assume that someone who disagrees with you has anything other than an open mind. If I had a closed mind, I wouldn't take so much time clarifying exactly what was done, so that I could fully understand your position.
Further “Not really, look at the figure carefully. Right now the difference between the temperature and the IPCC projection mean is larger than 0.3 C (annual average), which is almost 2 times your 0.17 error.”
Well, as I think I explained above, twice the standard deviation is the appropriate test. Assuming the noise is Gaussian, there would be only a 5% chance of the observations lying outside a 2-sigma region (and hence it would suggest the observations are unlikely to be generated by the model), whereas there is a 30% probability of this happening with the 1-sigma region. The model runs themselves often lie outside the 1-sigma region; does that mean the model runs are inconsistent with the model that generated them? No, of course it doesn't.
dikranmarsupial says:
March 18, 2012 at 1:47 pm
"Assuming the noise is Gaussian, there would only be a 5% chance of the observations lying outside a 2-sigma region (and hence it would suggest the observations are unlikely to be generated by the model), whereas there is a 30% probability of this happening with the 1-sigma region."
This is very sloppy. First, you make the assumption that it is Gaussian when you could at least plot a simple histogram to see if it is anything like that – please do not bother making an appeal to the central limit theorem in defense, you have the data and can do an analysis.
More importantly, and assuredly falsely, you assume that the errors are uncorrelated. They are anything but. When your model is consistently off and diverging, you have got a serious problem. Your preference is to whistle past the graveyard. Fine, if you like. But, it is a most unimpressive display.
dikranmarsupial says: March 18, 2012 at 1:47 pm
I think that you do not truly understand how to validate a computer model from a statistical point of view.
You continue to say that the real test needs to use 2-sigma and that 1-sigma should be 0.17 C.
This would imply a 2-sigma of 0.34 C, that is, a +/- 0.34 C range, which spans 0.68 C from top to bottom.
Now, you need to realize that the upward warming trend of the temperature from 1850 to 2010 is about 0.8 C, while your desired model confidence window is about 0.68 C. You need to understand that a model confidence window of about 0.68 C is far too large compared to the temperature patterns that one would like to identify. For example, the temperature could have warmed by just 0.12 C from 1850 to 2010 and you would still conclude that the same models are consistent with the historical temperature!
That would of course be true because of the confidence window of about 0.68 C. But does that mean that the models are useful?
In fact, models that agree with almost any outcome are useless; one would like to have something more precise than that.
Perhaps, the reason why the IPCC did not depict the figures with your error bars was because everybody with a minimum of common sense would have laughed at them and never stopped!
Now you need to understand that the temperature signal is not huge random noise (sd 0.17 C) around an upward trend, as assumed by the IPCC models. The temperature signal is a complex dynamical signal with detectable patterns with amplitudes of the order of 0.1 C. This is, at most, the precision that you would like to have; it is what the IPCC has shown in its figures, and with that precision the models have failed the prediction since 2000.
Dr Scafetta, it seems that you do not understand how to evaluate climate models.
I agree that the 2-sigma range is broad (0.68C sounds about right). The thing that you do not appear to appreciate is that the IPCC probably would not claim that the models have great predictive skill on decadal projections. I understand there is discussion about including decadal projections in the next IPCC report, due to progress made in modelling that allows the prediction of chaotic events such as ENSO, which currently make decadal projections meaningless.
It is very well known that decadal-scale trends are not informative; see e.g. the paper by Easterling and Wehner http://www.agu.org/pubs/crossref/2009/2009GL037810.shtml which shows that the observations often include periods of little or no warming, even in the presence of a long-term warming trend, and that this is also replicated in the models. The models, however, can only predict that such periods happen, not when they happen, which is why they cannot make useful decadal projections.
However, the inability to make decadal predictions doesn't mean they can't make useful centennial-scale projections, as features such as ENSO are quasi-cyclic and cancel out over such long periods.
So if you are arguing that the AR4 models don't make useful decadal projections, then it is a straw man; I rather doubt anybody would claim they do.
Lying within the 2-sigma error bars is not an indication of skill, just that the models have not (yet) been falsified by the observations. The latter is no big deal, but it IS a big deal to claim that they are not consistent, as that suggests the models cannot even clear the smallest of hurdles.
“Perhaps, the reason why the IPCC did not depict the figures with your error bars was because everybody with a minimum of common sense would have laughed at them and never stopped! ”
Funny, climate modellers very often do, e.g. Gavin Schmidt. Do you think it is just possible that Gavin understands climate models rather better than you do, and that just perhaps there is something that you do not fully understand?
To Troll Dikran Marsupial:
Please refrain from Warmist trolling: you are not even capable of reading simple IPCC AR4 pages. AR4 EXPLICITLY states that the IPCC made world-astonishing computer modelling advances compared to the previous AR3, and that AR4 thus achieved the highest precision... with sky-high predictive skill within almost non-existent error bands...
...whereas your quotes shatter and denigrate those great IPCC models, with your quote "THE IPCC WOULD PROBABLY NOT CLAIM THAT THEIR MODELS HAVE PREDICTIVE SKILL" and your claim that the IPCC concedes an "INABILITY OF THEIR GREAT MODELS TO MAKE DECADAL PREDICTIONS"... Your opinion is just trolling BS. Check as well on Warmist blogs: they all claim incredible accuracy of IPCC forecasts... and NOT that the IPCC models "probably would not claim nothing..."??
Don't molest the great climate pioneer Scafetta doing his great work and stop trying to steal his limited time with BS trolling...
JS
Incidentally, if you look at figure SPM.4 of the SPM, you will find the 90% (5% to 95%) range for the climate models. Those for global surface temperatures are a bit less than 0.5C, those for global land a bit over 0.6C, according to my eyecrometer. This suggests that your reading of even just the SPM was insufficiently thorough, as the IPCC DO show figures with error bars very nearly as broad as mine (which cover roughly the 2.5%-97.5% range; and they may not have been the same set of model runs, so it isn't surprising they are slightly broader than mine). So your quote
“Perhaps, the reason why the IPCC did not depict the figures with your error bars was because everybody with a minimum of common sense would have laughed at them and never stopped! ”
Suggests that you are not very familiar with the findings of the IPCC.
dikranmarsupial says: March 19, 2012 at 3:05 am
“The thing that you do not appear to appreciate is that the IPCC probably would not claim that the models have great predictive skill on decadal projections.”
I am sorry, but I think that you are missing the point. The IPCC models need to be validated; you cannot trust models that cannot be validated. To do that, they need to predict and/or hindcast the decadal-multidecadal scales properly. What I show in my paper is that they do not, so they cannot be validated. Moreover, I propose another model which has far more accurate predictive skill, so why should we trust and/or use the IPCC models if we have something better?
For example, in my papers I am saying that climate may be interpreted and forecast the way the tides are. The method used is Kelvin's, based on tidal harmonic constituents. In theory, the IPCC models should be able to predict tides, but nobody uses them to predict tides for any practical purpose. Why should they, given that another method works far better because it has far higher predictive skill?
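Kelvin's harmonic method of tide prediction, mentioned above as an analogy, can be sketched as a sum of known constituents. The periods below are standard principal tidal constituents; the amplitudes and phases are placeholders, since real values depend on the location:

```python
import math

# Principal tidal constituents; the periods (hours) are standard values,
# but the amplitudes (m) and phases (rad) are illustrative PLACEHOLDERS.
CONSTITUENTS = [  # (name, period_hours, amplitude, phase)
    ("M2", 12.4206, 1.00, 0.0),   # principal lunar semidiurnal
    ("S2", 12.0000, 0.46, 0.3),   # principal solar semidiurnal
    ("K1", 23.9345, 0.30, 1.1),   # lunisolar diurnal
    ("O1", 25.8193, 0.21, 2.0),   # lunar diurnal
]

def tide_height(t_hours, mean_level=0.0):
    """Predicted tide height as a harmonic sum, Kelvin-style."""
    return mean_level + sum(
        A * math.cos(2 * math.pi * t_hours / P + phi)
        for _, P, A, phi in CONSTITUENTS
    )
```

Once the amplitudes and phases are fitted to a tide-gauge record, the same sum forecasts the tide indefinitely, which is the analogy being drawn with the harmonic climate model.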
Dr Scafetta, yes, of course the models need to be tested and evaluated (see chapter 8 of the IPCC AR4 WG1 report). However that evaluation should be performed on tasks that the modellers claim their models can do, rather than things that they openly acknowledge they cannot. For instance, the uncertainty on decadal projections being large is an indication that they don’t make useful projections on such short timescales.
Now, not being able to validate models based on decadal projections doesn't mean they cannot be validated, just that they cannot be validated in the way you suggest YET. Note that the observations currently lie pretty close to the lower error bar. Should the climate cool for the next few years, or perhaps remain constant for the next 5 years or so (according to my eyecrometer), then the models will have failed the test. But at the current time, they have not.
As I have already pointed out, the inability to predict on decadal timescales does not mean that they cannot make useful centennial-scale projections, because the chaotic "weather noise" tends to cancel out on a scale of 30 years or so.
At the end of the day, you need to compare the observations with the range of plausible outcomes according to the model ensemble. You have not done this. If you want to validate the models, you also need to validate them on some task where the modellers actually agree that their models have useful skill, not some task where they don’t.
Did you contact any climate modellers to check that your presentation of the IPCC models was reasonable? That is the sort of thing that most scientists do.
To Mr. Dikran Marsupial:
The AR4 accuracy claims can easily be quoted from the AR4 text… it contains nothing about your alleged IPCC “INABILITY TO MAKE DECADAL PREDICTIONS”. Where did you read this? Please quote AR4…
In AR4, every word has been meticulously examined by reviewers… please quote the IPCC’s self-admitted “we are INCAPABLE/UNABLE to make decadal predictions”…
By the way, they talk about “projections”, which is IPCC forecast terminology…
Quote page and line from AR4 and then you will get an answer… refrain from trolling once again…
JS
As I mentioned, I studied the question of solar barycentric variations at some length, including corresponding with Ted Landscheidt on the question. I finally concluded I couldn’t make sense of it.

I got to thinking today about WHY I had decided I couldn’t make sense of it, and dug out some of my old work. I remembered that the rock that I’d run my ship on was the lack of any connection between barycentric velocity and sunspots. At the time, I’d realized that if I couldn’t figure out how the barycentric velocity is correlated with sunspots on the sun itself, what hope was there for correlating it with the climate on the earth?
Here are the two variables:
[Figure: sunspot cycle vs. solar barycentric velocity — source]
Note that the barycentric velocity varies on a cycle somewhere around 20 years, which is kinda like the Hale sunspot cycle of 22 years (two ~ 11 year cycles with alternately reversed magnetic polarity). I have colored the sunspot cycles alternately red and green to indicate the polarity of the cycle.
The problem is that when we look at the actual data, we start out with the red polarity matching up with the peaks in the barycentric cycle in 1761. But by 1882, the green polarity matches up with the barycentric cycle … and by 2000, we’re back to the red cycle matching up.
I puzzled over that for a while, and could never make it work out … so my conclusion was if barycentric variations can’t explain sunspot cycles, they likely couldn’t explain the earth’s climate.
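For what it’s worth, the flip intervals are roughly what simple beating between two nearby periods would give (the 19.86-year figure below is an assumption, roughly the Jupiter-Saturn synodic period, standing in for the ~20-year barycentric-velocity cycle):

```python
# Two cycles with nearby periods slip in and out of phase with a
# beat period 1/|1/T1 - 1/T2|; the polarity match flips roughly
# every half beat period.
T_bary = 19.86   # years, assumed barycentric-velocity period
T_hale = 22.0    # years, Hale magnetic cycle

beat = 1.0 / abs(1.0 / T_bary - 1.0 / T_hale)   # ~204 years
half_beat = beat / 2.0                          # ~102 years between flips

# Compare with the flips noted above: 1761 -> 1882 -> 2000 are
# spaced ~121 and ~118 years apart, the same order as half_beat.
```

That the drift is exactly what two unrelated nearby periods would produce is, of course, the problem: a beat pattern is what you get when there is no phase locking at all.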
w.
dikranmarsupial says: March 19, 2012 at 7:25 am
I do not want to convince everybody. I do understand that the issue is difficult.
I would just suggest you consider the new findings that are coming out (very soon).
The climate system, as my research suggests to me, is mostly regulated by specific astronomical/solar cycles. In my above paper I have discussed some of them, but other cycles have already been identified; you just need to wait for my new papers to come out.
The IPCC models do not know anything about these cycles, so they cannot be correct.
Just wait a little bit and you may be surprised at how nice the big picture looks once the major solar cycles are included in the discussion. Let us wait and let us see, OK?
Willis Eschenbach says: March 19, 2012 at 9:51 am
Willis, sorry. You do not get the point.
My argument is that every function of the planetary orbits presents the same frequencies. What is causing climate change is a mysterious function X, which is a function of the planetary/solar cycles. Thus, you can use the speed of the Sun to get at least some of the frequencies of the function X, without knowing it.
Let us see if my next paper convinces you better, where at least a major component of the function X becomes more explicit.
You should try to read my papers before writing.
Dr Scafetta. I have explained how the error bars on your widget are incorrect and do not accurately represent the range of IPCC projections. The correct thing to do would be to amend your widget so that it accurately and fairly represents what the CMIP3 model ensemble actually says. The easiest way to do so would be to merely plot the range of model runs from the CMIP3 ensemble for SRES A1B. I doubt anyone would disagree with that. You have already downloaded this data, so changing the widget should be a trivial exercise.
Once you have done this, *then* I will be willing to discuss the cyclical model.
And here comes the greatest scam of Dikranmarsupial:
Your quote: “Dr. Scafetta, once ‘YOU’ have done this, then ‘I’ might be willing to discuss the cyclical model”…
Either you are Newton and Einstein combined, or just a little troll who demands that the great works of other authors fulfil your demands… and then (maybe) you might deign to have a “discussion”…
I am sure you do not boast or make understatements… What are your achievements in climate science… just one hint, if there are any…
JS
dikranmarsupial says:
March 19, 2012 at 7:25 am
“Now not being able to validate models based on decadal projections doesn’t mean they cannot be validated. ”
Any model that does not properly include convection is not a model at all. What we have at present is a political gambit that predicts nothing.
dikranmarsupial says: March 19, 2012 at 10:52 am
Unfortunately I cannot change the widget because it is linked to a published paper.
It is good enough to give the idea. As I told you, the range ±0.1 °C is what is shown in the IPCC figures. That is fine enough for me.
In the future I may consider your proposal for another paper.
It is shown here that the GISP2 Fourier analysis (1200 years of data) does not contain a ~60-year cycle.
But many peaks can be assigned to specific frequencies which are part of real astronomical (elliptical) functions. These sinusoidal frequencies need not exist as such in reality, but the frequencies at perihelion or aphelion positions have a relation to the analyzed sinusoidal frequencies. As explained ad nauseam, it makes no sense to argue about nn-year cycles in astronomy, because no moving object follows a simple sinusoidal function.
In a discussion of whether there is a relation between the moving planets in the solar system and terrestrial climate and/or global temperature, it is therefore necessary, as mentioned ad nauseam, to take the real astronomical data of the planets, not “nn-year cycles”. And because it is obvious that solar tide functions of synodic pattern have the greatest magnitude, it is easy to create a rough climate simulation at any time resolution down to a month.
One can compare the FFT spectra of GISP2 (1200 years) and the solar tide spectra (2400 years), and there are some similarities, especially in the magnitudes of the synodic tide functions, whose frequencies are always twice the synodic frequency. But it seems that single objects also create magnitudes from their difference frequencies, because of the ellipticity.
http://www.volker-doormann.org/images/gisp2_vs_solar_tide1.gif
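A minimal sketch of this kind of spectral comparison (the two series below are synthetic stand-ins, not GISP2 or solar-tide data): compute the FFT magnitude spectrum of each annually sampled record and check whether the dominant peaks fall in the same frequency bin.

```python
import numpy as np

# Two synthetic annually sampled series sharing one periodicity.
n = 1200                       # years of record
t = np.arange(n)
shared_period = 57.0           # a common periodicity, in years
a = np.sin(2 * np.pi * t / shared_period) + 0.3 * np.sin(2 * np.pi * t / 9.1)
b = np.sin(2 * np.pi * t / shared_period + 1.0) + 0.2 * np.sin(2 * np.pi * t / 200.0)

freqs = np.fft.rfftfreq(n, d=1.0)     # cycles per year
spec_a = np.abs(np.fft.rfft(a))
spec_b = np.abs(np.fft.rfft(b))

# Dominant peak of each spectrum, skipping the zero-frequency bin.
peak_a = freqs[np.argmax(spec_a[1:]) + 1]
peak_b = freqs[np.argmax(spec_b[1:]) + 1]
```

Matching peak positions alone is a weak test, of course; a phase-coherence check, as in the sea-level comparison linked below, is the stronger criterion.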
That this work has a serious basis was shown by the phase coherence of the sea level oscillations and the solar tide function of the synodic couple of Mercury and Earth.
http://www.volker-doormann.org/images/sea_level_vs_solar_tides_c.gif
http://www.volker-doormann.org/Sea_level_vs_solar_tides1.htm
http://www.volker-doormann.org/images/ghi_vs_hadcrut3_1980.gif
That trying to fit (sinusoidal) cycles in years is naïve, if one wants to make climate predictions, becomes clear once one knows that the highest magnitudes in the Holocene are related to a synodic tide function of trans-Neptune objects:
http://www.volker-doormann.org/images/echo_g_vs_ghi.gif
I do not know what the reason is that scientists who study the processes on the Sun for climate-relevant functions are practicing silence.
However, life goes on.
V.
Dr Scafetta, the link to the published paper is no reason not to change the representation of the IPCC models, as the main purpose of your paper is to present your model. Further promulgating incorrect material is damaging to public understanding of science and if you don’t wish to change the widget, you ought to withdraw it.
As to the 0.1C, I have already shown that (i) it under-represents the standard deviation shown in the figure from which it was estimated, as there is grey area clearly under the lower red line (ii) the standard deviation is not an indicator of the range of IPCC projections (iii) there is another figure in the IPCC SPM which does show the range of model projections and it is at least twice the size of the one in your widget.
I am greatly disturbed that you show so little concern over an error present not only on a blog article, but in one of your peer-reviewed publications.
Nicola Scafetta says:
March 19, 2012 at 10:48 am
Dr. Scafetta, that is certainly a very roundabout way to say that you can’t link up sunspots with barycentric motions either …
And since despite all of your studies and all of your papers you can’t figure out how to use the solar barycentric data to understand the actions of the sun itself, the idea that you can use them to understand the actions of earth’s climate is … well, let me call it “unlikely” in lieu of a more earthy Anglo-Saxon word.
w.
To Willis:
The solar barycenter motion (SIM) is important and influences the three-body GRAVITATION between SUN/EARTH/THIRD-BODY PLANETS… what we have to talk about is how the motions/gravitation (PULLING FORCES) affect/impact the TRUE TRAJECTORY of planetary orbits…
The Keplerian elements are only PENCIL+PAPER diagrams, two-dimensional… too coarse; this is what JPL Horizons expresses… The work NOW to do is to obtain the daily deviations from the Kepler line in order to assess the pull/push effects of the 60-year three-body cycle…
The idea of telling how many spots someone has on his face by judging his movements all over town… let’s forget this weird idea…
JS
Willis Eschenbach says: March 19, 2012 at 1:11 pm
Ok Willis, be happy with that.
dikranmarsupial says: March 19, 2012 at 12:27 pm
“I am greatly disturbed that you show so little concern over an error present not only on a blog article, but in one of your peer-reviewed publications.”
See, it is not an “error”. It is what the IPCC has shown in their publications. I show the same, for comparison.
Joachim Seifert says:
March 19, 2012 at 1:25 pm
I’m not clear what your point is here, Joachim. It sounds like you are saying that the barycentric movement of the sun has no effect on the magnetism, or the sunspots, or any of the other observable variations of the sun itself, but the movement does affect the climate of the planets.
If that is your claim, then I wish you the happiness of your beliefs. Me, I’m more logical than that … if the movement is to affect anything, it will first affect the sun. And if we can’t understand the action on the sun …
w.
Willis, you got the message… the Sun’s output is one case: with spots, magnetism, auroras, winds and all the like… if the combined output derived from all these variables were high enough, significant enough to send us on Earth additional energy, even in millennial or centennial rhythms, then wonderful… all natural causes which would bite into the share of CO2 (if there were one)… fine with me…
More important, however, are Solar System MECHANICS: gravitation (pulling/pushing force releasing) between Sun, outer planets and Earth, all in their orbits, which vary a little from year to year. We therefore have to measure the resulting distance changes between planets, Earth and Sun (depending on its actual location relative to the barycenter) and quantify them meticulously…
Svalgaard, for example, just points to JPL Horizons and cannot find any cycles in the ephemerides…
…but it is clear (though not to him) that the DE405 tables show a lot of numbers, and one cannot ask the system “show me the 60-year cycle…” expecting that km/mile changes of 60-year gravitation cycles would pop up…
What we need is (1) a heuristic approach, with (2) calculations for it, followed by (3) empirical data as comparison…
If SIM movements produce climate change effects, wonderful, fine…
But concerning gravitation in the solar system: it is the historical success of Nick Scafetta to have spotted it and to have described it beyond doubt in (preliminary, humble) steps… but, in the near future, those mechanics will be concretized and quantified, the final blow to AGW… just wait and see, there is depth in this approach… and I did my own humble share as well and have the numbers already on paper. They all add up well, and I suspect that you, by now, have gotten a first feeling that we are on the brink of a historical breakthrough which will end AGW in only a few more years…
Cheers
JS
A hint for Willis as he seems incapable of reading the science.
Sunspots are correlated with Angular Momentum (AM). But first you must learn the different properties of AM. A guide for the basics can be found at:
http://tinyurl.com/2dg9u22/?q=node/218
Also Nicola is not using solar output or sunspots as a baseline for his 60 year cycle.
Willis Eschenbach says:
March 19, 2012 at 9:51 am
As I mentioned, I studied the question of solar barycentric variations at some length, including corresponding with Ted Landsh… on the question. I finally concluded I couldn’t make sense of it.
Much has been learned since Theodor, he was a pioneer on the right track but missed the crucial component. Even so if you corresponded with him you should know what simple tool he used to predict solar grand minimum. Give us an elevator statement of your understanding of this tool.
[Moderator’s Note: Anthony has NOT indicated a willingness to have this topic discussed here. Please drop it. -REP]
Geoff Sharp says:
March 19, 2012 at 11:05 pm
[Moderator’s Note: Anthony has NOT indicated a willingness to have this topic discussed here. Please drop it. -REP]
I did not start the topic discussion but I am happy to educate Willis if he wishes to contact me via my website. He says he wants to understand the logic, I am happy to do so.
[REPLY: Geoff, thank you for your courtesy. I am sure Willis will be contacting you directly. At some point Anthony may want to revisit his decision about this topic, but I don’t think he wants his home on the internet to be the battleground. He has enough on his plate. You are, by the way, a valued contributor. -REP]
To Willis:
and Anthony, make sure Willis gets to read it; this is of the highest importance:
Post from Michele Casati: “Acapulco earthquake…..”
Just came out now:
Here it is proven that the GRAVITY of the planets exercises strong effects on EARTH: it not only (1) “grabs” the atmosphere and (2) the ocean tides, but additionally grabs (3) the Earth’s crust, so the crust will split, wobble and break up… PRODUCING earth/sea quakes…
The good Willis puts his eggs more on the Landscheidt stuff, such as auroras, magnetism, solar winds and what not, while still doubting that PLANETARY ORBIT peculiarities have the greatest atmospheric climate/crust movement/ocean tide effects…
The animated picture in that post helps one see how the planetary constellations/positions of Scafetta’s 60-year gravity cycle produce climate change…
Anthony, I hope you can convince him to exercise self-criticism, to some degree…
Cheers, and I am happy that this post came out with good timing…
A wink from higher up?
JS
Joachim Seifert says:
March 20, 2012 at 5:41 pm
Thanks for the heads-up, Joachim. Perhaps that impresses you. I’m more of a realist.
First, Casati did not give a specific day, he just said that this was a dangerous period. What does that mean? A magnitude 7 quake within ± 5 days? ± 10 days? ± 1 day? As it stands, his prediction is unfalsifiable, and thus it is not science.
Next, there were no fewer than 24 earthquakes greater than magnitude 7 in 2010, and 21 in 2011. If you do a Monte Carlo analysis, you’ll soon find out that if you had predicted an earthquake in 2010 purely at random, there is a 50/50 chance of a magnitude 7 earthquake within 5 days of that date, and a 25% chance that your prediction was within 2 days of a magnitude 7 quake. So that’s why I’m not impressed.
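That Monte Carlo is easy to reproduce; a quick sketch (uniform quake dates are an assumption of mine — real seismicity clusters, which only helps a random guess):

```python
import random

# Monte Carlo sketch: scatter 24 quake dates uniformly through a
# 365-day year, pick one random "prediction" date, and count how
# often the prediction lands within the given window of some quake.
random.seed(42)

def hit_rate(n_quakes=24, window=5.0, trials=20_000):
    hits = 0
    for _ in range(trials):
        quakes = [random.uniform(0, 365) for _ in range(n_quakes)]
        guess = random.uniform(0, 365)
        if any(abs(guess - q) <= window for q in quakes):
            hits += 1
    return hits / trials

p5 = hit_rate(window=5.0)   # close to 1/2, as stated above
p2 = hit_rate(window=2.0)   # close to 1/4
```

The closed form agrees: with 24 quakes and a ±5-day window, the miss probability is roughly (1 − 10/365)^24 ≈ 0.51, so a blind guess "hits" about half the time.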
Next, I find it difficult to believe that this is his first prediction … and if it is not, what were the results of his other predictions?
Finally, if Casati actually has a working system, it would be a trivial exercise to test it out against the historical earthquake records and report back to us how amazingly well his clearly specified alignments correlate with actual earthquakes. Not only that, but he would immediately be world-famous.
He has not done so … which may mean he’s just really humble and doesn’t want the notoriety, but certainly raises my suspicions.
Get back to me when Casati specifies 1) the details of exactly what he calls a “dangerous alignment”, 2) the days in the past when a “dangerous alignment” has happened, 3) whether that list of dates correlates BETTER THAN CHANCE with historical earthquakes, and 4) a list of the future dates on which we can expect earthquakes.
It’s called “science”, Joachim. You make a hypothesis and then you test it. Then you make falsifiable predictions, not vague handwaving about a “dangerous period” of unspecified duration and centre. And finally, you don’t jump up and down and crow if you happen to hit one, even a blind hog will find an acorn once in a while.
So post again when Casati actually subjects his theory to the normal scientific process of verification. As I said, it’s a trivial task for him to list the alignment conditions and show that when applied to historical earthquakes, his system does significantly better than chance. The fact that he hasn’t done so should raise your suspicions to the limit.
w.
To Willis: Thank you for having looked at the matter…
You are not impressed… In any case, you looked deeper than others, who just want to find a hair in every soup…
As you say: if this planetary-constellation method were valid, it must be repeatable and work again and again…
No problem: let’s wait until the stars are in a favourable position again (sounds somewhat like astrology), give the author another try, and see how he fares…
Volker Doormann pointed out that he (V.) is capable of finding such positions in hindsight… maybe he can…?
I remain impressed because none of the earthquake guys steps forward with a short-term forecast; they stay on the Warmist 100-year level: “We predict a major earthquake in San Francisco within 100 years…” with accuracy such as the global warming forecast…
JS