By Dr. Nicola Scafetta
It is time to update my widget comparing the global surface temperature, HadCRUT3 (red and blue), the IPCC 2007 projection (green) and my empirical model (black thick curve and cyan area) based on a set of detected natural harmonics (periods of approximately 9.1, 10-11, 20 and 60 years) associated with astronomical cycles, plus a corrected anthropogenic warming projection of about 0.9 °C/century. The yellow curve represents the harmonic model alone, without the corrected anthropogenic warming projection, and represents an average lower limit.
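For readers who want to experiment, the structure of such a harmonic-plus-trend model can be sketched in a few lines of Python. The amplitudes and phases below are hypothetical placeholders for illustration only, not the calibrated values from the paper:

```python
import numpy as np

# Hypothetical amplitudes (°C) and phase years, for illustration only:
# the published model calibrates these against HadCRUT3 data from 1850-2000.
CYCLES = [
    # (period_years, amplitude_degC, phase_year)
    (9.1,  0.04, 1997.8),
    (10.4, 0.03, 2002.9),
    (20.0, 0.03, 2001.4),
    (60.0, 0.11, 2001.3),
]
ANTHRO_RATE = 0.009  # corrected anthropogenic warming: 0.9 °C/century

def harmonic_model(t, baseline_year=2000.0, offset=0.0):
    """Sum of cosine cycles plus a linear anthropogenic trend at time t (years)."""
    natural = sum(A * np.cos(2 * np.pi * (t - phase) / P)
                  for P, A, phase in CYCLES)
    return offset + natural + ANTHRO_RATE * (t - baseline_year)

t = np.arange(2000, 2020, 1 / 12.0)  # monthly resolution
forecast = harmonic_model(t)
```

Fitting the actual amplitudes and phases would require a regression against the 1850-2000 HadCRUT3 record.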
The proposed astronomically-based empirical model represents an alternative methodology to reconstruct and forecast climate changes (on a global scale, at the moment), as opposed to the analytical methodology implemented in the IPCC general circulation models. In my paper I show that all IPCC models fail to reconstruct the decadal and multidecadal cycles observed in the temperature since 1850. See details in my publications below.
As the figure shows, the temperature for Jan/2012 was 0.218 °C, a cooling relative to the Dec/2011 temperature, and about 0.5 °C below the average IPCC projection value (the central thin curve in the middle of the green area). Note that this is a very significant discrepancy between the data and the IPCC projection.
On the contrary, the data continue to be in reasonable agreement with my empirical model, which, I remind the reader, has been run as a full forecast since Jan/2000.
In fact, the amplitudes and the phases of the four cycles are essentially determined on the basis of the data from 1850 to 2000, and the phases are found to be in agreement with appropriate astronomical orbital dates and cycles, while the corrected anthropogenic warming projection is estimated by comparing the harmonic model, the temperature data and the IPCC models during the period 1970-2000. The latter finding implies that the IPCC general circulation models have overestimated the anthropogenic warming component by about 2.6 times on average, within a range of 2 to 4. See the original papers and the dedicated blog article, linked below, for details.
The widget also attracted some criticisms from some readers of WUWT’s blog and from skepticalscience.
Anthony asked me to respond to the criticism, and I am happy to do so. I will respond to five points.
- Criticism from Leif Svalgaard.
As many readers of this blog have noted, Leif Svalgaard continuously criticizes my research and studies. In his opinion, nothing that I do is right or worthy of consideration.
About my widget, Leif claimed many times that the data already clearly contradict my model: see here 1, 2, 3, etc.
In any case, as I have already responded many times, Leif’s criticism appears to be based on his confusing the time scales and the multiple patterns that the data show. The data show a decadal harmonic trend plus faster fluctuations due to El Niño/La Niña oscillations that have a time scale of a few years. The ENSO-induced oscillations are quite large and evident in the data, with periods of strong warming followed by periods of strong cooling. For example, in the above widget figure the January/2012 temperature is outside my cyan area. This does not mean, as Leif misinterprets, that my model has failed. In fact, such a pattern is just due to the present La Niña cooling event. In a few months the temperature will warm again as the El Niño warming phase returns.
My model is not supposed to reconstruct these fast ENSO-induced oscillations, but only the smooth decadal component captured by a 4-year moving average, as shown in my original paper: see here for the full reconstruction since 1850, where my models (blue and black lines) reconstruct the 4-year smooth (grey line) well; the figure also clearly highlights the fast and large ENSO temperature oscillations (red) that my model is not supposed to reconstruct.
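A centered 4-year (48-month) moving average of the kind described above can be sketched as follows; the synthetic series is illustrative only:

```python
import numpy as np

def moving_average(y, window_months=48):
    """Centered moving average; returns an array of the same length,
    with NaN where the window does not fully fit."""
    y = np.asarray(y, dtype=float)
    out = np.full_like(y, np.nan)
    half = window_months // 2
    for i in range(half, len(y) - half):
        out[i] = y[i - half:i + half].mean()
    return out

# Example: smooth a noisy 9.1-year cycle sampled monthly over 30 years
t = np.arange(0, 30, 1 / 12.0)
noisy = 0.1 * np.cos(2 * np.pi * t / 9.1) + 0.05 * np.random.randn(len(t))
smooth = moving_average(noisy)
```

The NaN edges are why such a smooth cannot be extended to the most recent months of a record.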
As the widget shows, my model predicts for the imminent future a slight warming trend from 2011 to 2016. This modulation is due to the 9.1-year (lunar/solar) and the 10-11-year (solar/planetary) cycles, which have just entered their warming phase. This decadal pattern should be distinguished from the fast ENSO oscillations, which are expected to produce fast periods of warming and fast periods of cooling during these five years, as happened from 2000 to 2012. Thus, the fact that during a La Niña cooling phase, as right now, the temperature may actually be cooling does not constitute a “proof” that my model is “wrong”, as Leif claimed.
Of course, in addition to twisting numerous facts, Leif has also never acknowledged in his comments the huge discrepancy between the data and the IPCC projection which is evident in the widget. In my published paper [1], I reported in figure 6 the appropriate statistical test comparing my model and the IPCC projection against the temperature. Figure 6 is reproduced below.
The figure reports a kind of chi-squared statistical test between the models and the 4-year smooth temperature component as time progresses. Values close to zero indicate that the model agrees very well with the temperature trend within its error range; values above 1 indicate a statistically significant divergence from the temperature trend. It is evident from the figure above that my model (blue curve) agrees very well with the temperature 4-year smooth component, while the IPCC projection is always worse and has statistically diverged from the temperature since 2006.
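A divergence statistic of this kind can be sketched as follows (a hypothetical implementation for illustration; the paper’s exact definition may differ):

```python
import numpy as np

def divergence_statistic(model, data, sigma):
    """Mean squared deviation between model and smoothed data, in units of
    the model's stated error variance: values near 0 mean good agreement,
    values above 1 mean the model diverges beyond its error range."""
    model, data = np.asarray(model, float), np.asarray(data, float)
    return np.mean(((model - data) / sigma) ** 2)

# Toy example with a +/- 0.05 °C error band:
data = np.array([0.40, 0.42, 0.41, 0.43, 0.44])
good = data + 0.02   # stays inside the band
bad  = data + 0.15   # drifts well outside it
print(divergence_statistic(good, data, sigma=0.05))  # ≈ 0.16
print(divergence_statistic(bad,  data, sigma=0.05))  # ≈ 9.0
```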
I do not expect that Leif will change his behavior toward me and my research any time soon. I would just advise the readers of this blog, in particular those with modest scientific knowledge, to take his unfair and unprofessional comments with the proper skepticism.
- Criticism about the baseline alignment between the data and the IPCC average projection model.
A reader, dana1981, claimed that “I believe Scafetta’s plot is additionally flawed by using the incorrect baseline for HadCRUT3. The IPCC data uses a baseline of 1980-1999, so should HadCRUT.”
This reader also referred to a figure from skepticalscience, shown below for convenience,
that shows a slightly lower baseline for the IPCC model projection relative to the temperature record, which gives the impression of a better agreement between the data and the IPCC model.
The baseline position is irrelevant because the IPCC models have projected a steady warming at a rate of 2.3 °C/century from 2000 to 2020: see IPCC figure SPM.5, shown here with my lines and comments added.
On the contrary, the temperature trend since 2000 has been almost flat, as the figure in the widget clearly shows. Evidently, changing the baseline does not change the slope of the decadal trend! So, moving down the baseline of the IPCC projection to give the impression of a better agreement with the data is just a trick.
In any case, the baseline used in my widget is the correct one, while the baseline used in the figure on skepticalscience is wrong. In fact, the IPCC models have been carefully calibrated to reconstruct the temperature trend from 1900 to 2000. Thus, the correct baseline is the 1900-2000 one, which is what I used.
To help the readers of this blog check the case for themselves, I sent Anthony the original HadCRUT3 data and the IPCC cmip3 multimodel mean reconstruction record from here. They are in the two files below:
itas_cmip3_ave_mean_sresa1b_0-360E_-90-90N_na-data
As everybody can calculate from the two data records, the 1900-2000 average of the temperature is -0.1402, while the 1900-2000 average of the IPCC model is -0.1341.
This means that to plot the two records on the common 1900-2000 baseline, one needs the following gnuplot command:
plot "HadCRUT3-month-global.dat", "itas_cmip3_ave_mean_sresa1b_0-360E_-90-90N_na.dat" using 1:($2 - 0.0061)
which, over 1850-2040, produces the following graph:
The period since 2000 is exactly what is depicted in my widget.
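For readers without gnuplot, the same baseline alignment can be sketched in Python; the toy records below stand in for the actual data files:

```python
import numpy as np

def baseline_offset(t_obs, y_obs, t_mod, y_mod, t0=1900.0, t1=2000.0):
    """Offset to subtract from the model record so that both series share
    the same mean over [t0, t1) -- the common baseline."""
    obs_mean = y_obs[(t_obs >= t0) & (t_obs < t1)].mean()
    mod_mean = y_mod[(t_mod >= t0) & (t_mod < t1)].mean()
    return mod_mean - obs_mean

# Toy example with synthetic records: the model runs 0.1 °C warm on average
t = np.arange(1850, 2012, 1 / 12.0)
obs = np.zeros_like(t)
mod = np.full_like(t, 0.1)
print(baseline_offset(t, obs, t, mod))  # ≈ 0.1
```

With the real files, the two 1900-2000 means quoted above give an offset of 0.0061, matching the gnuplot command.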
The figure above also highlights the strong divergences between the IPCC model and the temperature, which are explicitly studied in my papers, proving that the IPCC models are not able to reconstruct any of the natural oscillations observed at multiple scales. For example, look at the 60-year cycle I extensively discuss in my papers: from 1910 to 1940 a strong warming trend is observed in the data, but the warming trend in the model is far weaker; from 1940 to 1970 a cooling is observed in the data while the IPCC model still shows a warming; from 1970 to 2000, the two records present a similar trend (this is the period originally used to calibrate the sensitivities of the models); and the strong divergence observed in 1940-1970 repeats after 2000, with the IPCC model projecting a steady warming at 2.3 °C/century while the temperature shows the nearly flat, harmonically modulated trend highlighted in my widget and reproduced by my model.
As explained in my paper, the failure of the IPCC models to reconstruct the 60-year cycle has large consequences for properly interpreting the anthropogenic warming effect on climate. In fact, the IPCC models assume that the 1970-2000 warming was 100% produced by anthropogenic forcing (compare figures 9.5a and 9.5b in the IPCC report), while the 60-year natural cycle (plus the other cycles) contributed at least 2/3 of the 1970-2000 warming, as proven in my papers.
In conclusion, the baseline of my widget is the correct one (1900-2000). My critics at skepticalscience are simply trying to hide the failure of the IPCC models in reconstructing the 60-year temperature modulation by plotting the IPCC average simulation only since 2000, and by lowering the baseline, apparently to the period 1960-1990, which is not where it should be, because the model is supposed to reconstruct the 1900-2000 period by assumption.
It is evident that lowering the baseline would produce a larger divergence from the temperature data before 1960! So, skepticalscience employed the childish trick of pulling a too-small coversheet over a too-large bed. In any case, if we use the 1961-1990 baseline, the original position of the IPCC model should be shifted down by 0.0282, which is just 0.0221 °C below the position depicted in the figure above: not a big deal.
In any case, the position of the baseline is not the point; the issue is the decadal trend. But my 1900-2000 baseline is in the optimal position.
- Criticism about the chosen low-high boundary levels of the IPCC average projection model (my width of the green area in the widget).
Another criticism, in particular by skepticalscience, regards the width of the boundary (the green area in the widget) that I used. They have argued that
“Most readers would interpret the green area in Scafetta’s widget to be a region that the IPCC would confidently expect to contain observations, which isn’t really captured by a 1-sigma interval, which would only cover 68.2% of the data (assuming a Gaussian distribution). A 2-sigma envelope would cover about 95% of the observations, and if the observations lay outside that larger region it would be substantial cause for concern. Thus it would be a more appropriate choice for Scafetta’s green envelope.”
There are numerous problems with the above skepticalscience’s comment.
First, the width of my green area (which has a starting range of about +/- 0.1 °C in 2000) coincides exactly with what the IPCC has plotted in its figure SPM.5. Below I show a zoom of IPCC’s figure SPM.5.
The two red lines added by me show the width at 2000 (black vertical line). The width between the two horizontal red lines in 2000 is about 0.2 °C, as used in my green area plotted in the widget. The two other black lines enclosing the IPCC error area represent the green-area boundaries reported in the widget. Thus, my green area accurately represents what the IPCC has depicted in its figure, as I explicitly state and show in my paper.
Second, skepticalscience claims that the correct comparison requires a 2-sigma envelope, and they added the following figure to support their case:
The argument advanced by skepticalscience is that because the temperature data are within their 2-sigma IPCC model envelope, then the IPCC models are not disproved, as my widget would imply. Note that the green curve is not a faithful reconstruction of my model and it is too low: compare with my widget.
However, claiming that a model is validated by attaching a huge error range to it is a trick to fool people with no statistical understanding.
By the way, contrary to the claim of skepticalscience, in statistics it is the 1-sigma envelope width that serves as the basic unit; not 2-sigma or 3-sigma. Moreover, the good model is the one with the smallest error, not the one with the largest error.
In fact, as proven in my paper, my proposed harmonic model has a statistical accuracy of +/- 0.05 °C, within which it reconstructs the decadal and multidecadal modulation of the temperature well: see here.
On the contrary, if we use the figure by skepticalscience depicted above, we have in 2000 a 1-sigma error of +/- 0.15 °C and a 2-sigma error of +/- 0.30 °C. These fat error envelopes are between 3 and 6 times larger than that of my harmonic model. Thus, it is evident from skepticalscience’s own claims that my model is far more accurate than what the IPCC models can guarantee.
Moreover, skepticalscience’s claim that we need to use a 2-sigma error envelope indirectly proves that the IPCC models cannot be validated according to the scientific method and, therefore, do not belong to the realm of science. In fact, to be validated a modeling strategy needs to guarantee a sufficiently small error to make it possible to test whether the model identifies and reconstructs the visible patterns in the data. These patterns are given by the detected decadal and multidecadal cycles, which have amplitudes below +/- 0.15 °C: see here. Thus, the amplitude of the detected cycles is well below skepticalscience’s 2-sigma envelope amplitude of +/- 0.30 °C (they would even be below skepticalscience’s 1-sigma envelope amplitude of +/- 0.15 °C).
As I have also extensively proven in my paper, the envelope of the IPCC models is far larger than the amplitude of the temperature patterns that the models are supposed to reconstruct. Thus, those models cannot be properly validated and are useless for making any decadal or multidecadal forecast/projection for practical societal purposes, because their associated error is far too large by the admission of skepticalscience itself.
Unless the IPCC models can guarantee a precision of at least +/- 0.05 °C and reconstruct the decadal patterns, as my model does, they cannot compete with it and are useless, all of them.
- Criticism about the upcoming HadCRUT4 record.
Skepticalscience has also claimed that
“Third, Scafetta has used HadCRUT3 data, which has a known cool bias and which will shortly be replaced by HadCRUT4.”
The HadCRUT4 record is not available yet. We will see what happens when it becomes available. From the figures reported here it does not appear that it will change the issue drastically: the difference with HadCRUT3 since 2000 appears to be just 0.02 °C.
In any case, for an optimal match the amplitudes of the harmonics of my model may need to be slightly recalibrated, but HadCRUT4 already shows a clearer cooling from 1940 to 1970, which further supports the 60-year natural cycle of my model and further contradicts the IPCC models. See also my paper with Mazzarella, where the HadSST3 record is already studied.
- Criticism about the secular trend.
It has been argued that the important issue is the upward trend, which would confirm the IPCC models and their anthropogenic warming theory.
However, as explained in my paper, once 2/3 of the warming between 1970 and 2000 is attributed to a natural cycle of solar/astronomical origin (or even to an internal ocean cycle alone), the anthropogenic warming trend reproduced by the models is found to be spurious and strongly overestimated. This leaves most of the secular warming trend from 1850 to 2012 as due to secular and millennial natural cycles, which are also well known in the literature.
In my published papers, as clearly stated there, the secular and millennial cycles are not formally included in the harmonic model for the simple reason that they need to be accurately identified first: they cannot be placed arbitrarily, and the global surface temperature is available only since 1850, which is too short a period to accurately locate and identify these longer cycles.
In particular, skepticalscience has argued that the model proposed by Loehle and Scafetta, based only on the 60-year and 20-year cycles plus a linear trend from 1850 to 1950 and extrapolated up to 2100 at most, must be wrong because, when the same model is extrapolated for 2000 years, it clearly diverges from reasonable patterns deduced from temperature proxy reconstructions. Their figure is here and reproduced below.
Every smart person would understand that this is another skepticalscience trick to fool the ignorant.
It is evident that if, as we clearly stated in our paper, we ignore the secular and millennial cycles and just approximate the natural millennial harmonic trend with a first-order linear approximation that we assume can reasonably be extended for up to 100 years and no more, then it is stupid, before being dishonest, to extrapolate it for 2000 years and claim that our result is contradicted by the data. See here for an extended comment by Loehle and Scafetta.
As said above, in those models the secular and millennial cycles were excluded on purpose. However, in 2010 I already published a preliminary reconstruction with those longer cycles included here (sorry, in Italian); see figure 6, reproduced below.
However, in the above model the cycles are not optimized; that will be done in the future. But this is sufficient to show how ideologically naïve (and false) the claim from skepticalscience is.
In any case, the secular trend and its association with solar modulation have been extensively addressed in my previous papers since 2005. The last published paper focusing on this topic is discussed here and more extensively here, where the relevant figure is shown below.
The black curves represent empirical reconstructions of the secular trend of the solar signature since 1600. The curve with the upward trend since 1970 is made using the ACRIM TSI composite (which would be compatible with the 60-year cycle), and the other signature uses the PMOD TSI composite, which is made by manipulating some of the satellite records on the excuse that they are wrong.
Thus, until the secular and millennial cycles are accurately identified and properly included in the harmonic models, it is the studies based on the TSI secular proxy reconstructions that need to be used to understand the secular trend, like my other publications from 2005 to 2010. Their results are in perfect agreement with what can be deduced from the most recent papers focusing on the astronomical harmonics, and they imply that no more than 0.2-0.3 °C of the observed 0.8 °C warming since 1850 can be attributed to anthropogenic activity. (Do not let yourself be fooled by the Benestad and Schmidt 2009 criticism, which is filled with embarrassing mathematical errors and whose GISS modelE performance is strongly questioned in my recent papers, together with that of the other IPCC models.)
I thank Anthony for the invitation and I apologize for my English errors, which my above article surely contains.
Relevant references:
[1] Nicola Scafetta, “Testing an astronomically based decadal-scale empirical harmonic climate model versus the IPCC (2007) general circulation climate models.” Journal of Atmospheric and Solar-Terrestrial Physics, (2012). DOI: 10.1016/j.jastp.2011.12.005
[2] Adriano Mazzarella and Nicola Scafetta, “Evidences for a quasi 60-year North Atlantic Oscillation since 1700 and its meaning for global climate change.” Theor. Appl. Climatol. (2011). DOI: 10.1007/s00704-011-0499-4
[3] Craig Loehle and Nicola Scafetta, “Climate Change Attribution Using Empirical Decomposition of Climatic Data.” The Open Atmospheric Science Journal 5, 74-86 (2011). DOI: 10.2174/1874282301105010074
[4] Nicola Scafetta, “A shared frequency set between the historical mid-latitude aurora records and the global surface temperature.” Journal of Atmospheric and Solar-Terrestrial Physics 74, 145-163 (2012). DOI: 10.1016/j.jastp.2011.10.013
[5] Nicola Scafetta, “Empirical evidence for a celestial origin of the climate oscillations and its implications.” Journal of Atmospheric and Solar-Terrestrial Physics 72, 951–970 (2010). DOI: 10.1016/j.jastp.2010.04.015
Additional News and Links of Interest:
Global Warming? No, Natural, Predictable Climate Change, Larry Bell
http://scienceandpublicpolicy.org/images/stories/papers/reprint/astronomical_harmonics.pd
Nicola
To what degree does your model use ocean cycles, if at all? AMO, PDO/ENSO, etc.
Nicola Scafetta, whom I had the pleasure of meeting when we both made presentations at the Los Alamos National Laboratory’s climate science conference in Santa Fe late last year, has been very patient in answering the baseless, impolitely-expressed, and to a large extent fabricated criticisms of those who have inexpertly and inappropriately attempted to dismantle his careful work.
He began running his forecast in the year 2000. Twelve years later, it is surely blindingly obvious that his projection has proven very considerably closer to observed reality than those of the IPCC, which – as almost always – is demonstrated to have erred monstrously in the direction of exaggerating the imagined effect of CO2 on global temperature.
I have long suspected – but have lacked the knowhow to demonstrate – that deducting the 60-year ocean-oscillation cycles would allow some estimate of the true warming component from CO2 to be derived. Dr. Scafetta puts this anthropogenic warming component at 0.9 C/century, or perhaps less, compared with the 2.8 C/century imagined by the IPCC.
On this ground alone, his work is valuable. It implies a climate sensitivity about one-third of the IPCC’s 3.3 C per CO2 doubling. If he continues to be correct for another decade, even the intolerant IPCC, which seems at present hell-bent on persisting with its extremist projections notwithstanding the mounting evidence that they are prodigiously overblown, will have to rethink its position fundamentally, if it has not been swept away by then. Congratulations and many thanks to Dr. Scafetta for so patiently, politely, and thoroughly exposing the grievous defects in his ill-intentioned critics’ arguments.
If you were punting on how much the planet would warm over the next decade, would you bet the farm on Dr. Scafetta’s forecast, or on that of the IPCC?
Monckton of Brenchley says:
March 12, 2012 at 8:24 am
If you were punting on how much the planet would warm over the next decade, would you bet the farm on Dr. Scafetta’s forecast, or on that of the IPCC?
Since his ‘forecast’ agrees with that based on my old shoe, I’ll tend to submit to confirmation bias and not bet the farm on IPCC.
@Leif: “but if there is no change at all, we cannot conclude that he is right as there could be many reasons for no change. My new and improved phenomenological model [that the climate mirrors the mass of my old shoes] also predicts no change at all.”
For heaven’s sake, you could at least conclude that he is not wrong. At least he is saying something concrete about what WILL happen. And you could say that about just about anything, including the recent warming.
The IPCC have projected continuing warming from assumptions made about the 1970-2000 warming, whereas Dr Scafetta proposes there is a 60-year oscillation that may account for some of that warming. Since the oscillation should be starting its cooling phase from around 2000 onwards based on the cycle, it follows that temperatures would stop increasing, start to decline, and then continue to decline very slightly. This appears to be happening, contradicting the IPCC version and supporting Dr Scafetta’s.
So we have 3 possibilities: a massive jump in temps and, over the next 4 or 5 years, a return to the IPCC’s version of how things should be; a sudden drop into an ice age; or continuation of a fairly flat trend. From the point of view of policy, I would be putting my money on the most accurate prediction so far, that of Dr Scafetta’s, but I would allow for the possibility he might be wrong.
From an objective point of view, the complete dismissal of Dr Scafetta’s work strains credulity. He may yet be wrong, and some of the apparent cycles mere coincidence, but it is more than worthy of very serious consideration, if only based on the principle of using the past as a guide to the future. I would like to see suggestions made to improve the model, make additions or consider things known to not be known, rather than simply deriding an easily understandable idea that is well supported by his research. It seems willfully obtuse.
“The first point is clear as a straight horizontal line with no trend falls within the cyan error band. The second point is Scafetta’s excuse for deviations.”
You’ve lost me. I understand perfectly what he is saying and it seems utterly reasonable and uncontroversial. It may yet be wrong (though I would be surprised), but I can’t see anything wrong with looking at a 4-year moving average to detect an overall short-term multi-year trend. It tells me something that is much more meaningful and useful when looking at longer-term trends.
Great!
Thanks Dr. Scafetta, I have updated your graph in my pages.
To all interested in cyclical phenomena… As a chartist, here’s what may seem a silly question but has been burning me for quite some time now: has anyone tried using well-established technical indicators and oscillators that we use in *finance*, such as momentum, relative strength indicator (RSI), moving average convergence/divergence (MACD), etc., with different triggers and time frames, in order to test-model the temperatures? Don’t laugh, those oscillators reflect human behaviour in the financial markets, which is also natural and also cyclical.
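As a sketch of the idea, a standard RSI-style momentum indicator could be applied to a temperature anomaly series. The implementation and the synthetic series below are illustrative only, not an endorsement of the approach:

```python
import numpy as np

def rsi(series, window=14):
    """Relative Strength Index: 100 * avg_gain / (avg_gain + avg_loss) over a
    trailing window. Values near 50 indicate no net momentum; near 100 or 0,
    sustained warming or cooling."""
    series = np.asarray(series, float)
    diffs = np.diff(series)
    gains = np.where(diffs > 0, diffs, 0.0)
    losses = np.where(diffs < 0, -diffs, 0.0)
    out = np.full(len(series), np.nan)
    for i in range(window, len(series)):
        g = gains[i - window:i].mean()
        l = losses[i - window:i].mean()
        out[i] = 100.0 * g / (g + l) if (g + l) > 0 else 50.0
    return out

# Toy example: a rising-then-flat anomaly series saturates the RSI, then
# relaxes to the no-momentum value of 50
anoms = np.concatenate([np.linspace(0.0, 0.5, 30), np.full(30, 0.5)])
vals = rsi(anoms)
```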
“Come on, Leif. It is not difficult to understand the point, just focus a little bit, OK?
Perhaps, can some readers help Leif?”
Why do you waste your time trying to get through to Leif? I think he and Mr. Gleick share the same worldview. No amount of empirical evidence will stop his sniping. Too much money at stake. Who would fund their enterprises if there is a simple (natural) explanation and the human-caused CO2 warming is minor (about 1 °C)?
I look forward to your periodic updates.
I would like to thank Lord Monckton of Brenchley for having helped Leif to recover a little bit of objectivity, nobody and nothing succeeded in the task up to now. Thank you very much.
Let us hope that it lasts.
About the baseline chosen by skepticalscience, shown above: it was based on 20 years, from 1980 to 2000. As I said above, the right baseline is the period 1900-2000, because the IPCC models are supposed to reconstruct the 20th-century warming trend. So, it is the 1900-2000 baseline that needs to be used, as I did.
In any case, by using the 1980-2000 baseline, the gnuplot command to plot the graph is
plot [1850:2040] 'HadCRUT3-month-global.dat', 'itas_cmip3_ave_mean_sresa1b_0-360E_-90-90N_na.dat' using 1:($2-0.0474)
which is just 0.041 °C below the optimal 1900-2000 baseline.
It is evident that, by choosing skepticalscience’s 1980-2000 baseline, the huge divergence between the IPCC model and the temperature around 1940, for example, would be even larger than what is shown in the figure above.
@ur momisugly matt v. says: March 12, 2012 at 8:22 am
I am not using explicitly any ocean cycles. I am using temperature cycles deduced from the global surface temperature and I am using frequencies and phases mostly taken from astronomical considerations.
@MAVukcevic says:
March 12, 2012 at 12:49 am
The old Socrates had a method of inquiry he called “maieutics” (from the Greek “μαιευτικός”, pertaining to midwifery), as it is similar to delivering a baby:
http://en.wikipedia.org/wiki/Maieutics
It involves asking, as for example in this case: what is the cause of temperature, of climate, etc.?
No one, with your exception, would survive Socrates’ maieutic method. Such a method would scare to death any “post-modern, new-age” and “cool” scientist.
For them a trivia, just to start with: Why does the earth spin?
@Nicola Scafetta: You said in your article: “By the way, contrary to the claim of sckepticalscience, in statistics it is 1-sigma envelope width that is used; not 2-sigma or 3-sigma.” This is incorrect and is very basic, so it calls into question everything you say.
In your reply to my pointing this out (along with others), you say, “To be validated a proposed model needs to have an error bar smaller than the amplitude of the detectable data patterns.” Which seems to be correct.
Why not eliminate the incorrect sentence in your article? It’s incorrect as stated, and it does not state what your actual (correct) point is. It’s lose-lose: those with any amount of statistical experience will immediately assume you don’t even know the basics, and it isn’t what you really meant to say anyhow.
Agnostic says:
March 12, 2012 at 9:05 am
For heaven’s sake, you could at least conclude that he is not wrong.
One can be right [like my old shoe and Scafetta] for the wrong reason. That the IPCC is wrong proves neither my old shoe nor Scafetta right. This is a good example of the False Dilemma Fallacy:
Either claim X is true or claim Y is true (when X and Y could both be false).
Claim X is false.
Therefore claim Y is true.
To Lord Monckton: your quote:
Nick…….”has been very patient in answering the baseless, impolitely-expressed, and to a large extent fabricated criticisms of those who have inexpertly and inappropriately attempted to dismantle his careful work…”
This patience is also shown in this latest widget update. But, him being a real climate science pioneer and miles ahead of our time, I do not see sufficient reason for being so patient and dealing with the straightforward BS/slander of Leif/Moscher/Physicist/Lack/Skepticalscience and the rest of the Warmist howling crowd. He does not have to be patient, or to apologize for whatever reason; he should be ABOVE low-quality attacks and should rather have lost patience, as I have for some time already. Thumbs-down sign.
The Lack confessions on the Lack page prove it: climate villains are motivated by a dogma/paradigm and twist and bend and lie, heaping one BS upon the other in order to cause damage, like the climate Gleicks.
I believe one has to learn to be arrogant for dealing with the likes……..
JS
Dear Prof. Scafetta, rather than trying to estimate the error bars on the IPCC projections from a magnified diagram from the IPCC report, which doesn’t have the resolution to give a reasonable estimate, why not do what I did and get the A1B model runs from the publicly available archives and plot them, along with the temperature data? If you do, you will get an image like this one:
http://www.skepticalscience.com/pics/sresA1B.png
which shows that the IPCC model runs project temperatures both warmer and colder than observed during the past decade. I note also that the error bars you have estimated from the IPCC diagram are for annual data, which has a substantially lower variance than the monthly data that you plot. I would be happy to discuss your criticisms in depth, one by one, over at Skeptical Science.
http://www.skepticalscience.com/scafetta-widget-problems.html
best regards
Dikran Marsupial
1) Could we please have a residual plot of your model? That is, plot ‘model − observed’ with the y-axis in units of sigma. I’ve always found this quite useful for examining empirical models for further patterns that might be discernible.
2) Wow, you would think no one had ever made an empirical model. Kudos for doing so.
3) I’d also appreciate a ‘running squared error’ for the main plot. This would provide a concrete number not just for any current deviation, but for the running accumulated deviation of both your model and the IPCC.
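For readers who want to try these two diagnostics themselves, here is a minimal sketch (not the author's actual code; `model`, `observed`, and `sigma` are placeholders for whatever series you supply, and the numbers below are made up for illustration):

```python
import numpy as np

def residuals_in_sigma(model, observed, sigma):
    """Standardized residuals: (model - observed) / sigma."""
    return (np.asarray(model) - np.asarray(observed)) / sigma

def running_squared_error(model, observed):
    """Cumulative sum of squared deviations up to each time step."""
    return np.cumsum((np.asarray(model) - np.asarray(observed)) ** 2)

# Toy illustration with invented numbers (not real temperature data):
obs = np.array([0.10, 0.20, 0.15, 0.30])
mod = np.array([0.12, 0.18, 0.20, 0.25])
print(residuals_in_sigma(mod, obs, sigma=0.1))
print(running_squared_error(mod, obs))
```

Plotting the first against time (y-axis in sigma) reveals leftover structure in the residuals; the second gives the single accumulated-deviation number requested above, computable identically for both models.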
@Wayne2 says: March 12, 2012 at 10:00 am
I do not think that it is incorrect. In statistics, people use 1-sigma as the base-unit error width; 2-sigma, 3-sigma, etc. are then used to expand the comparison.
See here
http://en.wikipedia.org/wiki/68-95-99.7_rule
However, the basic error width is 1-sigma, and I am using the same error area that was used in the IPCC figure SPM.5 that I depicted above. Why should I use something different from what the IPCC used in its own figure?
Moreover, if 2-sigma were the basic error width, statisticians would simply have redefined the Gaussian function in such a way that the new 1-sigma corresponded to the old 2-sigma. In fact, it is quite awkward to use a unit of measure whose basic width starts at 2 units!
In any case, as I said, that is not the point. The point is that the decadal-multidecadal data patterns have an amplitude of +/- 0.05 to +/- 0.12, so to validate a model it must have an accuracy of +/- 0.05 or better. The IPCC models, as acknowledged by skepticalscience's own figures, do not guarantee that accuracy because their error is above +/- 0.15, so they cannot even be validated according to the scientific method. This is a very simple and straightforward argument.
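For reference, the coverage fractions behind the 68-95-99.7 rule linked above can be computed directly from the error function; this is just the textbook Gaussian identity P(|Z| ≤ k) = erf(k/√2), nothing specific to the IPCC figure:

```python
import math

def gaussian_coverage(k):
    """Fraction of a normal distribution lying within +/- k standard deviations."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"{k}-sigma coverage: {gaussian_coverage(k):.4f}")
# 1-sigma ~ 0.6827, 2-sigma ~ 0.9545, 3-sigma ~ 0.9973
```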
Since Dr. Scafetta has not presented a comprehensive and direct step-by-step link from the sun (planets) to the projection of future temperature change, readers may consider these six steps as contained in the available data and graphically illustrated here:
http://www.vukcevic.talktalk.net/GTC.htm
On the trail of the global temperature change
1. Planets regulate solar oscillation cycles
2. Solar oscillations induce changes in the flow of the North Atlantic currents
3. Flow of the North Atlantic currents initiates two well known North Atlantic oscillations: the NAO & AMO
4. Flow of the North Atlantic currents also regulates the Central England Temperature – CET
5. Central England temperature – CET correlates well with the Global Temperatures – GT
6. Future CET (and the GT) projection based on the extrapolation of the existing components.
Spectral compositions of the SSN, NAP, AMO, CET & GT do not contain 60-year components, but all have components at 52-55, 68 and 90 years. The Fourier transform is unable to resolve the 52-68 year range in the relatively short global temperature data set for 1860-2011, so it misleadingly shows a single ~60(+) year peak. This can be easily demonstrated by analysing the CET for the 1860-2011 and 1660-2011 periods separately (a composite spectra graph will be added shortly).
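The resolution point above can be checked with a back-of-the-envelope calculation: for a record of length T years, the Rayleigh criterion gives a frequency resolution of roughly 1/T, so two periods P1 and P2 are separable only if |1/P1 − 1/P2| ≳ 1/T. A quick sketch (a rough criterion only, using the periods and record lengths quoted above):

```python
def separable(p1_years, p2_years, record_years):
    """Rayleigh criterion: two periodicities are resolvable only if their
    frequency separation exceeds ~1/T for a record of length T years."""
    return abs(1.0 / p1_years - 1.0 / p2_years) >= 1.0 / record_years

# 1860-2011 global record (~151 yr) vs 1660-2011 CET record (~351 yr):
print(separable(52, 68, 151))   # short record cannot split 52 from 68 years
print(separable(52, 68, 351))   # the longer CET record can
```

Consistent with the claim above, the ~151-year record merges the 52- and 68-year lines into one apparent ~60-year peak, while the ~351-year CET record separates them.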
I am expecting strong objection from Dr. Svalgaard, but if Lord Monckton of Brenchley or Dr. Scafetta wish to remark on any aspect of the above, they, like anyone else, are more than welcome.
@Leif:
“One can be right [like my old shoe and Scafetta] for the wrong reason. That IPCC is wrong does not prove my old shoe right, nor Scafetta.”
I absolutely agree with this reasoning. But that is just as applicable to the IPCC – or anything else for that matter. At the very least though, you can say that he has not been disproved, and until he has then it is as worthy as any other prediction. Since the IPCC’s predictions have not followed reality, are we going to apply the same logic to their assessment of 1970-2000 warming? Yes it warmed, but for the wrong reason?
Taking this a stage further, Dr Scafetta is saying something concrete about the future that you can hold him to. In fact he has been saying it for 12 years. In another 5, that makes Ben Santer’s 17 years of significant time needed to say something about a trend, if you accept that that is an appropriate time frame.
In a science as complex, immature and important as climate science, wouldn’t it make sense to propose different models to conceive broadly how the system works? You then compare how they run against observation over time, and how well they hindcast. This in effect is what happens, but without properly accounting for uncertainty and unknowns. What you need to do is state a physical reason why you think Dr Scafetta’s model is not plausible, otherwise it is no less valid than any other model. In fact it has 12 years of pretty reasonable validation by any measure, suggesting that it is at least not wrong, whether for the right reason or not.
You haven’t actually addressed my criticisms. For example: the fact that a 1-sigma envelope only covers 68% of model runs; that changing the baseline to 1900-2000 as you’ve done would also require changing the uncertainty envelope (which you have not done); that HadCRUT3 has a known cool bias and you could have used any number of other data sets; that it is you who is trying to fool the eye by using an inconsistent baseline; etc. All you’ve done is create a bunch of incorrect and straw-man arguments (e.g. saying that changing the baseline would not change the trend – of course it wouldn’t!) to defend your flawed widget instead of correcting your mistakes.
As a scientist, I just want to say that Leif Svalgaard’s summary is spot on, provided the null hypothesis is that of unchanging temperature. But the likes of Trenberth and Hansen would have it that warming should be the new null hypothesis, so Scafetta’s prediction is an effective rejoinder to that.
Prof. Scafetta, if I make a computer model of a fair six-sided die, and I roll it 1,000 times to predict the expected score whenever I roll a real die, I get a mean of 3.5810 with a standard deviation of 1.7171. Thus a one-sigma region covers the scores 2, 3, 4 and 5. So if I roll the real die and I get a 1 or a 6, does that mean I have falsified my computer simulation? No, of course not, because (for a Gaussian distribution) the +/- one-sigma region only contains 68.2% of the data, so we would expect the model to be “falsified” a bit over 30% of the time even if it were exactly correct. In other words, it wouldn’t be very surprising to see an observation outside a 1-sigma error bar, even if the model were right. That is why a two-sigma region is used more often, because then only approximately 5% of the observations would be expected to lie outside the error bars. In that case, it would be surprising to see observations lying outside the 2-sigma error bars. Note that 5% is also the common threshold used in hypothesis testing.
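The die thought-experiment above is easy to reproduce with the standard library (a minimal sketch; the exact mean and sigma will vary slightly with the random seed, but the 1-sigma band always captures only the scores 2-5, i.e. about two-thirds of rolls):

```python
import random
import statistics

random.seed(0)  # fixed seed so the run is repeatable
rolls = [random.randint(1, 6) for _ in range(1000)]

mu = statistics.mean(rolls)        # ~3.5 for a fair die
sigma = statistics.pstdev(rolls)   # ~1.71 (true value is sqrt(35/12))

# Fraction of rolls landing inside the +/- 1-sigma band:
inside = sum(mu - sigma <= r <= mu + sigma for r in rolls) / len(rolls)
print(f"mean={mu:.4f} sigma={sigma:.4f} inside 1-sigma: {inside:.1%}")
```

A roll of 1 or 6 falls outside the band roughly a third of the time, exactly the point being made: an excursion beyond 1-sigma does not falsify the model.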
To Dikran Marsupial.
The unit symbol of the kelvin is K, not °K.
@dana1981 says: March 12, 2012 at 10:36 am
Sorry dana1981, my arguments are correct.
I repeat the important point just for you:
In any case, as I said, that is not the point. The point is that the decadal-multidecadal data patterns have an amplitude of +/- 0.05 C to +/- 0.12 C, so to validate a climate model it must have an accuracy of +/- 0.05 C or better. The IPCC models, as acknowledged by skepticalscience's own figures, do not guarantee that accuracy because their error on average is above +/- 0.15 C, so they cannot even be validated according to the scientific method. This is a very simple and straightforward argument.
Did you get it? Or do I need to repeat it ad infinitum?
Read my last paper, where you will find that I have analyzed all of the IPCC models and proven that none of them reconstructs any of the decadal and multidecadal patterns that the temperature shows during the period 1850-2010.
Those models simply do not contain the right physics.
Nicola
I see where there is a difference between your forecast and that of Orssengo, both of which are based partly on historical GMTA records. In your model the data are modified by other factors; his are not. Your model has a very small difference between trough and peak [I eyeballed about 0.2 C], while Orssengo uses about 0.42 C. The observed cooling in the last two typical cooling cycles [due to ocean cycles?] was 0.42 C for 1880-1910 and again 0.42 C for 1940-1970. I can see now why you propose a steady climate to 2030/2040 while his shows a significant dip. I guess time will tell which model turns out to be more realistic.
Agnostic says:
March 12, 2012 at 10:36 am
What you need to do is state a physical reason why you think Dr Scafetta’s model is not plausible
No, the shoe is on the other foot. He needs to show a physical reason why it is plausible. He is committing yet another fallacy:
Description of Questionable Cause
This fallacy has the following general form:
A and B are associated on a regular basis.
Therefore A is the cause of B.
otherwise it is no less valid than any other model
but also no more valid than any other model. In fact it is just as valid as my old shoe model, I’ll have to concede that.
@weibel many thanks, I have fixed it in the MATLAB code for next time I replot it. I hope you agree that if you actually plot the model output, the recent observations are clearly still consistent with the models (although they are currently in the lower tail).