CHRISTOPHER MONCKTON of BRENCHLEY
DELEGATES at the 18th annual UN climate gabfest at the dismal, echoing Doha conference center – one of the least exotic locations chosen for these rebarbatively repetitive exercises in pointlessness – have an Oops! problem.
No, not the sand-flies. Not the questionable food. Not the near-record low attendance. The Oops! problem is this. For the past 16 of the 18-year series of annual hot-air sessions about hot air, the world’s hot air has not gotten hotter. There has been no global warming. At all. Zilch. Nada. Zip. Bupkis.

The equations of classical physics do not require the arrow of time to flow only forward. However, observation indicates this is what always happens. So tomorrow’s predicted warming that has not happened today cannot have caused yesterday’s superstorms, now, can it?
That means They can’t even get away with claiming that tropical storm Sandy and other recent extreme-weather happenings were All Our Fault. After more than a decade and a half without any global warming at all, one does not need to be a climate scientist to know that global warming cannot have been to blame.
Or, rather, one needs not to be a climate scientist. The wearisomely elaborate choreography of these yearly galah sessions has followed its usual course this time, with a spate of suspiciously-timed reports in the once-mainstream media solemnly recording that “Scientists Say” their predictions of doom are worse than ever. But the reports are no longer front-page news. The people have tuned out.
The Intergovernmental Panel on Climate Change (IPeCaC), the grim, supranational bureaucracy that makes up turgid, multi-thousand-page climate assessments every five years, has not even been invited to Doha. Oversight or calculated insult? It’s your call.
IPeCaC is about to churn out yet another futile tome. And how will its upcoming Fifth Assessment Report deal with the absence of global warming since a year after the Second Assessment report? Simple. The global-warming profiteers’ bible won’t mention it.
There will be absolutely nothing about the embarrassing 16-year global-warming stasis in the thousands of pages of the new report. Zilch. Nada. Zip. Bupkis.
Instead, the report will hilariously suggest that up to 1.4 Cº of the 0.6 Cº global warming observed in the past 60 years was manmade.
No, that is not a typesetting error. The new official meme will be that if it had not been for all those naughty emissions of carbon dioxide and other greenhouse gases the world would have gotten up to 0.8 Cº cooler since the 1950s. Yeah, right.
If you will believe that, as the Duke of Wellington used to say, you will believe anything.

The smarter minds at the conference (all two of us) are beginning to ask what it was that the much-trumpeted “consensus” got wrong. The answer is that two-thirds of the warming predicted by the models is uneducated guesswork. The computer models assume that any warming causes further warming, by various “temperature feedbacks”.
Trouble is, not one of the supposed feedbacks can be established reliably either by measurement or by theory. A growing body of scientists think feedbacks may even be net-negative, countervailing against the tiny direct warming from greenhouse gases rather than arbitrarily multiplying it by three to spin up a scare out of not a lot.
IPeCaC’s official prediction in its First Assessment Report in 1990 was that the world would warm at a rate equivalent to 0.3 Cº/decade, or more than 0.6 Cº by now.
But the real-world, measured outturn was 0.14 Cº/decade, and just 0.3 Cº in the more than two decades since 1990: less than half of what the “consensus” had over-predicted.
In 2008, the world’s “consensus” climate modelers wrote a paper saying ten years without global warming was to be expected (though their billion-dollar brains had somehow failed to predict it). They added that 15 years or more without global warming would establish a discrepancy between real-world observation and their X-boxes’ predictions. You will find their paper in NOAA’s State of the Climate Report for 2008.
By the modelers’ own criterion, then, HAL has failed its most basic test – trying to predict how much global warming will happen.
Yet Ms. Christina Figurehead, chief executive of the UN Framework Convention on Climate Change, says “centralization” of global governing power (in her hands, natch) is the solution. Solution to what?
And what solution? Even if the world were to warm by 2.2 Cº this century (for IPeCaC will implicitly cut its central estimate from 2.8 Cº in the previous Assessment Report six years ago), it would be at least ten times cheaper and more cost-effective to adapt to warming’s consequences the day after tomorrow than to try to prevent it today.
It is the do-nothing option that is scientifically sound and economically right. And nothing is precisely what 17 previous annual climate yatteramas have done. Zilch. Nada. Zip. Bupkis.
This year’s 18th yadayadathon will be no different. Perhaps it will be the last. In future, Ms. Figurehead, practice what you preach, cut out the carbon footprint from all those travel miles, go virtual, and hold your climate chatternooga chit-chats on FaceTwit.


Lord Monckton:
Those climate model “predictions” are non-predictions dressed up to look like predictions through applications of the equivocation fallacy. Predictions state claims about the relative frequencies of the outcomes of events. The relative frequencies are a property of the complete set of these events, the so-called “statistical population.” For global warming climatology, however, there is no such population and thus there are no relative frequencies.
@Werner Brozek
There is nothing magic about the figure 95%. It is perfectly permissible for me, given the quoted data, to say that warming has occurred over this period, just as long as the confidence with which that statement is made is specified (Monckton doesn’t let such mundane considerations bother him).
Now then, after much diligent searching through the many different available datasets, someone has found a 15- or 16-year interval where the statistical confidence of a temperature increase is less than 95%, perhaps 92%.
Does this mean that the foundations of climate science have come crashing down?
Sorry, no.
The relevant section of the 2008 NOAA report (which I suspect few here have read) specifies a confidence value for their quoted statement (something Monckton has conveniently ignored) : the occasional 15-year interval where no significant warming occurs is in no way inconsistent with the simulations and is indeed to be expected once in a while.
@people who think that because CO2 absorption bands in the lower atmosphere are nearly saturated further additions will have little or no effect.
Time to brush up on some mainstream climate science.
@people who enjoy fitting sine curves to climate data
This is a totally meaningless exercise unless you can identify a physical process affecting the climate that has that period.
You might just as well use epicycles.
spvincent,
You do understand that the real world is deconstructing your beliefs, don’t you?
OOPS!
spvincent says:
December 4, 2012 at 5:45 pm
Monckton doesn’t let such mundane considerations bother him
Monckton says:
“For the past 16 of the 18-year series of annual hot-air sessions about hot air, the world’s hot air has not gotten hotter. There has been no global warming.”
As I have shown at:
http://wattsupwiththat.com/2012/12/01/18-annual-climate-gabfests-16-years-without-warming/#comment-1161843
This statement is true to the nearest year, but not to the nearest month, at least on the three data sets I presented. I will assume you are familiar with significant digits. If I were to say that a table was 43 cm by 65 cm and if I were to ask for the area, well multiplying 43 by 65 gives 2795 cm^2. However the “correct” answer would be 2.8 x 10^3 cm^2 since the answer should only have 2 significant digits as in the question. Is this a “mundane consideration”? I am not going to go there.
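Werner’s significant-digits arithmetic can be sketched in a few lines of Python (an illustrative aside, not part of his comment; the `round_sig` helper is my own name, not a standard library function):

```python
# A sketch of rounding a computed result to the significant figures of
# its inputs, as in the 43 cm x 65 cm table example above.
from math import floor, log10

def round_sig(x, sig):
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - floor(log10(abs(x))))

area = 43 * 65             # raw product: 2795 cm^2
print(round_sig(area, 2))  # 2800, i.e. 2.8 x 10^3 cm^2
```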
But note that in the following blog on Skeptical Science, he was more precise and did say: “5. The fact that there has been no statistically-significant global warming for 16 years is described as a “myth”. Yet the least-squares linear-regression trend on the Hadley Centre/CRU dataset favoured by the IPCC indeed shows no statistically-significant warming for 16 years.”
Now as for your 92% comment, do you know if this is true even for 18 years on any data set? See your own post:
spvincent says:
December 2, 2012 at 9:08 pm
Taking the Hadcrut4 dataset, here are the trend values in degrees C/decade over five closely-related time periods.
1995-2012 +0.109 +/- 0.129
1996-2012 +0.107 +/- 0.129
1997-2012 +0.058 +/- 0.142
1998-2012 +0.052 +/- 0.153
1999-2012 +0.095 +/- 0.162
Let’s look at a satellite-derived dataset (UAH)
1995-2012 +0.139 +/- 0.203
1996-2012 +0.138 +/- 0.227
1997-2012 +0.106 +/- 0.252
1998-2012 +0.063 +/- 0.153
1999-2012 +0.179 +/- 0.262
It seems to me that Monckton would even have been correct to say that there was no statistically significant warming for 18 years.
(By the way, does this stop at December 31, 2011? If so, the slopes may be even lower when going to October, 2012. It is no big deal, but when going from 1995 to the present, for Hadcrut4, I get 0.097 +/-0.113 which is just very slightly lower than your number to the end of 2011.)
However we are to interpret their 95%, they are on thin ice and they know it. Are you familiar with Santer’s 17 years?
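For readers wondering how trend-and-uncertainty figures like the ones quoted above (e.g. +0.109 +/- 0.129 C/decade) are produced, here is a minimal sketch: an ordinary least-squares fit with a roughly 2-sigma slope interval, run on synthetic monthly data rather than the actual HadCRUT4 or UAH series. Published trend calculators typically also correct for autocorrelation, which this sketch omits.

```python
# Least-squares trend and ~2-sigma slope uncertainty for a monthly
# anomaly series; synthetic data stand in for a real dataset.
import numpy as np

rng = np.random.default_rng(0)
years = 1995 + np.arange(18 * 12) / 12.0      # monthly steps, 1995-2012
anoms = 0.01 * (years - 1995) + rng.normal(0.0, 0.1, years.size)

# polyfit returns [slope, intercept]; cov[0, 0] is the slope variance.
(slope, intercept), cov = np.polyfit(years, anoms, 1, cov=True)
two_sigma = 2.0 * np.sqrt(cov[0, 0])

print(f"trend = {10*slope:+.3f} +/- {10*two_sigma:.3f} C/decade")
```

Whether the interval straddles zero is exactly the “statistically significant at 95%” question being argued in this thread.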
Richard,
I think that the writer of the article is misleading the public by cherry-picking his start and end times to come up with a statement that, in isolation, is true but does not represent the true situation in general: that there has been no significant warming in the most recent 16 years. The public may therefore think that the world is not undergoing a warming phase. However, if he had increased his period by one year, the warming would have been significant, and that would have been a better representation of what is happening to the world. As you know, including the 1998 readings early in the sequence produced a flat graph for that period of time.
Pointing this out appears to have upset you. For that I am sorry, and I hope that we can continue our discussions maturely and that, if there is something I write that upsets you, you can refrain from personal abuse. You do a lot to help educate people about a very complex problem, and resorting to abuse demeans that contribution.
spvincent says:
This is a totally meaningless exercise unless you can identify a physical process affecting the climate that has that period.
Henry@LetsBeReasonable, spvincent
Well why be so unreasonable? If we look from 2002
http://www.woodfortrees.org/plot/hadcrut4gl/from:2002/to:2012/plot/hadcrut4gl/from:2002/to:2012/trend/plot/hadcrut3vgl/from:2002/to:2012/plot/hadcrut3vgl/from:2002/to:2012/trend
we are cooling!! Both on Hadcrut3 and Hadcrut4.
This is something I had expected as I had already calculated that there has been a regime change in 1995, from warming to cooling.
Earth stores energy in its waters, vegetation, chemicals, even in currents and wind and weather, etc. On top of that we have earth’s own volcanic action, which also provides heating or cooling. Ice, more or less of it, also becomes a factor. I also found that earth’s inner core of molten hot iron sometimes changes position, creating more heat in one place and less in another. So whatever comes out as an average temperature is bound to be confusing.
Maxima are a much better parameter to look at, as they give us a sense of energy in.
Eventually, after analysing all daily results from 47 weather stations (47x365x38) since 1974, I came up with this curve:
http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/
This is the “energy-in” curve.
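For what it’s worth, a fit of the kind HenryP describes can be sketched with scipy’s `curve_fit`; the data, the 88-year period, and the starting guesses below are all illustrative assumptions, not his actual dataset.

```python
# Fit T(t) = A*sin(2*pi*(t - t0)/P) + c to a yearly maximum-temperature
# series. The data here are synthetic; the 88-year period is assumed.
import numpy as np
from scipy.optimize import curve_fit

def model(t, A, P, t0, c):
    return A * np.sin(2 * np.pi * (t - t0) / P) + c

t = np.arange(1974, 2012)                       # 38 yearly values
truth = model(t, 0.3, 88.0, 1995.0, 15.0)
obs = truth + np.random.default_rng(1).normal(0.0, 0.05, t.size)

p0 = (0.3, 90.0, 1990.0, 15.0)                  # starting guesses matter
popt, pcov = curve_fit(model, t, obs, p0=p0)
print("fitted period (years):", round(popt[1], 1))
```

Note the caveat raised elsewhere in this thread: over a 38-year window less than half of an 88-year cycle is sampled, so the fitted period is poorly constrained and depends strongly on the starting guesses, and a good fit proves nothing unless a physical process with that period can be identified.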
There must be a lag between energy out and energy in, so instead of the 88 year curve I am more inclined to believe in a 100 year cycle consisting of a 50 year warming cycle and a 50 year cooling cycle (44 + ca.5 ; remember 7 x 7 + 1 jubilee year? Know your Bible?)
The reason that the ancients knew about these cycles is because they looked (and measured!) the height of the Nile at certain places.
To explain weather cycles, before they started with the carbon dioxide nonsense, scientists looked in the direction of the planets, rightly or wrongly. See here.
http://www.cyclesresearchinstitute.org/cycles-astronomy/arnold_theory_order.pdf
To quote from the above paper:
A Weather Cycle as observed in the Nile Flood cycle, Max rain followed by Min rain, appears discernible with maximums at 1750, 1860, 1950 and minimums at 1670, 1800, 1900 and a minimum at 1990 predicted.
(The 1990 turned out to be 1995 when cooling started!)
Please note: indeed one would expect more condensation (bigger flooding) at the end of a cooling period and minimum flooding at the end of a warm period. This is because when water vapor cools (more) it condenses (more) into water (i.e. more rain).
Now put my sine wave next to those dates? Not too bad?
1900- minimum flooding : end of warming
1950 – maximum flooding: end of cooling
1995 – minimum flooding: end of warming
So far, I do not exclude a gravitational or electromagnetic swing/switch that changes the UV coming into earth. In turn this seems to change the chemical reactions of certain chemicals reacting to the UV lying on top of the atmosphere. This change in concentration of chemicals lying on top of us, i.e. O3, HxOx and NxOx, in turn causes more back radiation (when there is more), hence we are now cooling whilst ozone & others are increasing.
I hope spvincent is satisfied now?
Hope this helps a few people.
LetsBeReasonable:
At December 5, 2012 at 1:27 am you say to me
NO!
I know that there is no “cherry pick”.
The start time is NOW, and there has been no discernible warming at 95% confidence over the 16 years before now.
Climate modellers said such a ‘stasis’ of more than 15 years would be problematic for their models. You are trying to pretend that the problem does not exist: it does.
And don’t claim I “know” what I refute. Similar ‘stasis’ is observed by considering trends for periods since 2000 (i.e. after the 1998 peak). But those shorter periods do not provide the ‘stasis’ of longer than 15 years which modellers said would be a “problem” for their models.
Clearly, lack of warming DOES mean “the world is not undergoing a warming phase”. I am strongly of the opinion that the public have a right to be informed that “the world is not undergoing a warming phase” when some people are trying to justify political policies by promoting the lie that the world is in a warming phase.
Of course, the world will again enter a warming or a cooling phase at some time, but it is not in either at present.
I am offended by your disingenuous posts.
Richard
LetsBeReasonable says:
December 5, 2012 at 1:27 am
However, if he had increased his period by one year, it would also have been significant
To the nearest year, there is NO warming for 16 years on at least 3 data sets and NO 95% SIGNIFICANT warming for 18 years on all data sets that I am aware of. For proof, see my post above your latest one.
Henry@Werner Brozek
thanks for all your comments on WUWT, Werner; I always find that they help give me some insight.
You probably know that I like stats and that I trust my own data set better than any other, simply because I know how I put it together. I measured the average difference from the average temp. over time periods and this is therefore less dependent on actual calibration and other sources of error.
According to my own set we fell about 0.2 K since 2000. It seems that Hadcrut3 and Hadcrut4 are now beginning to see this cooling too, but UAH is still not seeing things right:
http://www.woodfortrees.org/plot/hadcrut4gl/from:2002/to:2012/plot/hadcrut4gl/from:2002/to:2012/trend/plot/hadcrut3vgl/from:2002/to:2012/plot/hadcrut3vgl/from:2002/to:2012/trend/plot/uah/from:2002/to:2012/plot/uah/from:2002/to:2012/trend
I still suspect that UAH does not have its reference zero or calibration points right.
I am not sure if you have considered this, and what your thoughts are on that?
(I am vaguely suspecting that you might have access to the sources of those that put UAH together.)
Conclusions such as that “…there has been no significant warming in the most recent 16 years” are based upon a host of assumptions about the manner in which the data are distributed; for example, it is assumed that the population mean varies linearly with the time. These assumptions are, however, indefensible, and thus all conclusions that are a consequence of them must be regarded as either false or unproved.
Terry Oldberg:
At December 5, 2012 at 12:01 pm you say
No.
Warming is a rise in temperature and cooling is a fall in temperature.
If there has been no discernible rise in temperature at X% confidence then there has been no warming discernible with X% confidence.
There has been no global warming discernible with 95% confidence for 16 years.
However, a system may be varying in temperature in a systematic manner and, in that case, a change in temperature would be indicated over a complete series of variations. If the form of the variation were not known then apparent warming (or cooling) could be an artifact of the existing phase(s) of the variation. Hence, in that case, a determination of any true variation would require the form of the variation to be known so the phase effects could be removed.
(This possibility is analogous to the change of current during part of an AC electricity cycle: there seems to be a change of current, but that is an artifact of sampling a small part of a cycle.)
There may be such systematic variations of climate, but if there are then they are not known. And they are not relevant to the present issue.
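Richard’s AC analogy is easy to demonstrate numerically: a least-squares trend fitted over a fraction of a pure sine cycle shows a large apparent slope that shrinks toward zero as the window grows to cover many cycles (a sketch with made-up numbers):

```python
# A linear "trend" fitted over part of a sine cycle is a sampling
# artifact: large over a quarter cycle, tiny over many cycles.
import numpy as np

t_quarter = np.linspace(0.0, 0.25, 100)          # 1/4 of a 1 Hz cycle
slope_quarter = np.polyfit(t_quarter, np.sin(2 * np.pi * t_quarter), 1)[0]

t_many = np.linspace(0.0, 10.0, 4000)            # 10 full cycles
slope_many = np.polyfit(t_many, np.sin(2 * np.pi * t_many), 1)[0]

print(f"apparent slope, 1/4 cycle: {slope_quarter:+.2f}")
print(f"apparent slope, 10 cycles: {slope_many:+.2f}")
```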
Richard
richardscourtney:
Thanks for taking the time to reply! In the 16 year interval, the temperature rises and the temperature falls. Thus, the claim that there has been no warming in this interval is literally false.
My understanding of the process that produces the opposite conclusion is derived from the presentation by the physicist Lubos Motl at http://motls.blogspot.com/2009/12/no-statistically-significant-warming.html. Motl achieves this end by placement of constraints on the ways in which the temperature can vary with respect to the time. In particular: a) the mean value of the temperature in the underlying population is constrained to vary linearly with respect to the time and b) for fixed time, the data are constrained to vary per the normal distribution function. Additionally, Motl assumes the data to be statistically independent.
He assumes his data to be randomly drawn from the population. The process by which the data are produced results in sampling error and resulting uncertainty in the rate of change of the temperature with respect to time. To this uncertainty, Motl applies the notions of statistical confidence bounds and statistical significance in reaching his conclusion.
Terry Oldberg:
I am replying to your post addressed to me at December 5, 2012 at 1:36 pm.
Firstly, let me thank you for drawing my attention to the arguments of Lubos Motl at http://motls.blogspot.com/2009/12/no-statistically-significant-warming.html
I had not seen it and he makes some interesting observations. Thank you.
But Motl says
In other words, his arguments conclude not that the data indicate no warming, but that no warming is probable.
Also, he says
So, it seems to me that Motl is making an ‘Angels On A Pin’ argument.
Despite that I think his arguments are an important indication of how difficult it is to obtain meaningful information from the data.
Secondly, I agree with you when you say
Indeed, I had a flaming row with the egregious Perlw1tz on WUWT because I made the same point.
However, with respect, I think that pedantic point is an irrelevance. The warmunists set the target of no significant warming over 15 years at 95% confidence as indicated by a linear trend over the period. They set the ‘rules’ and I am accepting them.
According to those rules there has been no warming for the last 16 years.
I hope that clarifies my position.
Richard
richardscourtney:
It sounds as though you’ve won your debate with the warmunists. Though impressive, your victory provides an inadequate basis for policy decisions on CO2 emissions.
For the purpose of making policy, policy makers need information about the outcomes from their policy decisions in advance of the occurrence of these outcomes. The methodology of the investigation of global warming that is described by the IPCC in AR4 provides no such information. Thus, while the question of the confidence bounds on the rate of change of the global surface temperature in the past 16 years is titillating, the more pertinent questions are whether and how the methodology can be changed to produce this information.
HenryP says:
December 5, 2012 at 10:48 am
I still suspect that UAH does not have its reference zero or calibration points right.
I often comment on their site and I am aware of the discrepancy you mention. My understanding was that they admitted to a problem and some version 6 was to fix it. However they have not had time to do the fix yet so they implemented an interim fix to narrow the gap, so to speak, but I believe they themselves would admit they are still too high relative to RSS. At least their latest is not their final word.
Terry Oldberg:
Your post at December 5, 2012 at 3:33 pm says to me
OK. You raise two distinct issues.
My major concern is the science. Climate science has been corrupted to become a pure pseudoscience, and this is damaging the reputation of all science. The record shows this has been my major concern about AGW for decades.
I hope you are right when you say the debate with the warmunists is “won”, but I remain to be convinced. I will agree – or not – when I read the next IPCC Report. I will agree that the matter is “won” when an IPCC Report admits the AGW-scare is unfounded or, alternatively, IPCC so-called ‘science’ is publicly exposed for the pseudoscientific political propaganda which it is.
I do have a concern about the politics which feeds – and feeds on – the AGW-scare. That also requires public exposure of the IPCC so-called ‘science’. The political issue was lost by the warmunists at Copenhagen in 2009. The issue is dead but continues with the appearance of life because politicians fear the loss of ‘face’ (i.e. loss of votes) if they are seen to have been wrong. Therefore, they continue to pay lip-service to AGW, and they continue to use AGW as an excuse for e.g. taxation policies.
Exposure of the IPCC so-called ‘science’ would permit the politicians to overtly abandon AGW without themselves taking the blame. Indeed, it seems that the ‘Hockey Team’ sees the writing on the wall and is turning against Michael Mann as their scapegoat.
In terms of the AR4, there are two very, very important issues; viz. the disappearance of “committed warming” and the absence of the ‘hot spot’. Either would falsify IPCC so-called ‘science’ and together they are devastating. The lack of warming over the last 16 years is direct evidence that the prediction of “committed warming” was wrong.
The AR5 must address these two failures of the AR4 science or be publicly called to account for failing to address them. Hence, I think the issue cannot be “won” until the AR5 is published.
Simply, the politics and the IPCC so-called ‘science’ rely on each other and if one is defeated then the other folds, but only the IPCC so-called science can be defeated by the cold light of reality.
Anyway, that is how I see it.
Richard
richardscourtney:
Predictions have a one-to-one relationship to the events in a statistical population. As AR4 references no such population, it is clear that there are no predictions from the IPCC climate models. I cover this and related issues in the peer reviewed article at http://judithcurry.com/2011/02/15/the-principles-of-reasoning-part-iii-logic-and-climatology/ .
@HenryP. I look forward to seeing your remarkable work, incorporating as it does such diverse elements such as biblical numerology and unknown gravitational or electromagnetic switches, written up and published in Nature or Science: it would be a shame to see such groundbreaking research published in some other journal.
davidmhoffer says:
December 3, 2012 at 11:58 am
That is excellent, David. Beautifully written, and succinct. For anyone who missed it, I strongly advise them to “see above”.
That one goes in my “Climate” folder.
spvincent,
Henry P is always the gentleman, and he does his own research. What have you added to the conversation, besides mindless snark?
Terry Oldberg:
At December 5, 2012 at 4:48 pm you assert
Rubbish! Such sophistry up with which I will not put.
If somebody says, “The globe will warm” then that is a prediction (i.e. it is something foretold).
The IPCC AR4 predicted that the globe will warm at a rate averaged over the first two decades after 2000 of 0.2deg.C per decade (+/-20%) as a result of greenhouse gases already in the system. This was “committed warming” which would occur unless there were significant changes to volcanism and/or solar activity which have not happened.
Since 2000 there has been no such global warming.
Richard
richardscourtney:
If you’re in the UK then you’re up late!
It sounds as though you’ve not read my paper at http://judithcurry.com/2011/02/15/the-principles-of-reasoning-part-iii-logic-and-climatology/ . If so, please pause to read it and report back.
If you read the paper, I hope you’ll come away from this experience with the understanding that climatological arguments suffer in a major way from the presence in them of the fallacy that is known as “equivocation.” In a recorded lecture, a professor of logic states that “one cannot draw a proper inference from an equivocation” or words to that effect. This being the case, in logical discourse it is essential for all parties to avoid the ambiguities of reference that are common in the natural languages, including English. This can be accomplished through the use of a mutually agreed upon disambiguated language. If one of the parties to a conversation rejects the very idea of using a disambiguated language then logical discourse is not possible. That is where we presently are in this conversation.
Werner Brozek says:
December 3, 2012 at 8:41 am
Re pg 23 v 123 in the NOAA 2008 report.
Thanks. You’re right; it IS page 23. It seems I can’t tell the difference between “S” and “1”! Honest scholarship, like honest science, depends upon confirmation and acknowledging mistakes.
Terry Oldberg:
At December 5, 2012 at 6:54 pm you write
Yes, I was “up” until nearly 2 am. It is now 8.30 am and I have not yet had breakfast. But I did not want you to think I was ignoring your posts.
I read your paper some time ago when you then pointed me to it. That paper is not relevant. It attempts to determine what is – and what is not – a justifiable prediction by use of a computer model.
We are discussing a prediction made by the IPCC. The failure of that prediction is important. And how they made that prediction is of no importance.
As I said
There is no “equivocation” in that prediction.
1. It says the globe will warm.
2. It says the average rate of warming over a stated period.
3. It asserts a confidence in the accuracy of the predicted rate of warming.
And it is not relevant whether that clearly specified prediction was produced using a computer, or by astrology or was ‘seen’ in a dream, or was a deliberate falsehood, or …
It is a prediction and – so far – it is plain wrong.
Your excuses are letting the IPCC pseudoscience ‘off the hook’. I reject your excuses.
Richard
richardscourtney:
I’m relieved to hear that you got some sleep!
I assume that your reference is to the Web page at http://www.ipcc.ch/publications_and_data/ar4/wg1/en/spmsspm-projections-of.html. Under the title “Projections of Future Changes in Climate” the IPCC states that “For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected. {10.3, 10.7}”
Please note that the quoted text uses the term “projection” rather than the term “prediction.” “Projection” is a term in the IPCC’s disambiguation of the polysemic term “prediction”; see Vincent Gray’s paper entitled “Spinning the Climate” for his notes on the origin of this disambiguation. Under this disambiguation, “predictions” have a one-to-one relationship to events in a statistical population thus supporting statistical validation of models. Lacking a statistical population or predictions, the IPCC models do not support statistical validation of themselves.
A “projection” is nothing more nor less than a computed time series. Through its use of the polysemic word “about” as a modifier on “0.2°C per decade” the IPCC makes an equivocation of the statement that “For the next two decades, a warming of about 0.2°C per decade is projected…” thus ensuring that a projection does not state a falsifiable claim but sounds as though it states one.
spvincent says
it would be a shame to see such groundbreaking research published in some other journal.
(sic. perhaps this should read: …NOT to see)
Henry says
looks like somebody already beat me at it:
JOURNAL OF GEOPHYSICAL RESEARCH, VOL. 108, 1003, 15 PP., 2003
doi:10.1029/2002JA009390
Persistence of the Gleissberg 88-year solar cycle over the last ∼12,000years: Evidence from cosmogenic isotopes
Alexei N. Peristykh
Department of Geosciences, University of Arizona, Tucson, Arizona, USA
Paul E. Damon
Department of Geosciences, University of Arizona, Tucson, Arizona, USA
link: http://www.agu.org/pubs/crossref/2003/2002JA009390.shtml
Among other longer-than-22-year periods in Fourier spectra of various solar–terrestrial records, the 88-year cycle is unique, because it can be directly linked to the cyclic activity of sunspot formation. Variations of amplitude as well as of period of the Schwabe 11-year cycle of sunspot activity have actually been known for a long time, and a ca. 80-year cycle was detected in those variations. Manifestations of such secular periodic processes were reported in a broad variety of solar, solar–terrestrial, and terrestrial climatic phenomena. Confirmation of the existence of the Gleissberg cycle in long solar–terrestrial records, as well as the question of its stability, is of great significance for solar dynamo theories. For that perspective, we examined the longest detailed cosmogenic isotope record …
Maybe I could be the first one to have put some figures to the cycle?
OK. You can be my co-author.
May I insert, into the courteous and exemplary discussion so far, a reminder of the utility of ordinary statistics as well as its limitations?
There is this thing called “hypothesis testing” that can be used to assess whether or not a hypothesis has explanatory value, or for that matter comparative explanatory value given a specific alternative hypothesis. For example, for the data segment in question, one might compute Pearson’s $\chi^2$ for a null hypothesis of no slope, and invert the result into a p-value (the probability of getting the data if the null hypothesis is true). One might then compute the Pearson’s $\chi^2$ for an alternative null hypothesis of the best linear regression fit and turn it into a p-value. One might then compare the $\chi^2$’s or their associated p-values. Or, one might use a coefficient of determination argument (compute $R^2$) to accomplish much the same purpose, provided that one first learned about the serious limitations of the conclusions one can legitimately draw from it in a case like this, comparing the two hypotheses.
Or, one could look at the data, go “gee, it is pretty obvious that a linear trend with nonzero slope has at most a tiny amount of additional explanatory power compared to one with zero slope, given the variance in the data” and not bother (if one had any experience with statistics).
Or — and this is the interesting pair — one could assert as the null hypothesis a specific straight line (one with slope 0.02/year), and either a) fix the start point at 2000 and compute $\chi^2$; or b) obtain the best fit permitting an arbitrary intercept (basically moving the fixed-slope straight line to where it minimizes $\chi^2$), and so on.
The advantage of doing this with the actual data is that it eliminates the bullshit assertions of “95% confidence” in this discussion which, taken out of any specific context, mean absolutely nothing. In fact, the term “95% confidence” doesn’t mean what it says in English even in the context of hypothesis testing — the p-value just gives you some measure of the probability of getting the data, given the null hypothesis, which is not in any way interpretable as the probability that the null hypothesis is right or wrong. p-values really are only useful when they are absurdly low — not the 5% that is often used as a basis for rejecting the null hypothesis (one in twenty odds? Puh-leeze. Chances like that happen all the time). Show me a p-value of 0.001 (especially on the first or only test one can make) and we’re talking, maybe.
Here is the way I would interpret p-values in the current discussion. Personally I prefer the “gee, …” interpretation up above, because it is perfectly obvious that all alternative analyses will lead one to the same conclusion; they’re just more work. It’s not that the data doesn’t have some linear trend — any simulated data set generated with some noise around a fixed (non-trended) mean would show some linear trend that beats zero slope. The question is whether the linear trend of the actual data is “surprising” or “large” compared to precisely this sort of expected accidental linear trend due to noise/natural variation, a trend that doesn’t necessarily reflect any causal/explanatory feature of the underlying process that generates the series.
The first pair of p-values are a decent way of arriving at that conclusion a different way — they will both be the same, and will both be utterly unremarkable. The second pair are the ones associated with the 2008 report. Somebody wishing to convince the world that the 2008 hypothesis is incorrect will use the first method, because that will lead to the largest $\chi^2$ and the lowest p-value, let’s say (depending on what you call the number of degrees of freedom in a time series with significant time correlation, where adjacent years are hardly independent samples even if the series has zero trend) that it is as low as 0.03. Does that mean that it is 97% certain that the 0.02/year hypothesis is false? Only if you are smoking something other than tobacco at the time. It means precisely what it says — if the hypothesis, stated as “compare the annual data to a line with intercept of T(2000) and slope 0.02, with annual error/variance set to thus and such”, is true, then the probability of getting the data by random chance is 0.03; it would happen in roughly one in 33 Universes where the experiment was run from identical starting conditions.
Somebody wishing to defend the 2008 hypothesis would let the intercept float, moving the intercept towards the center of the time series to align with the most favorable linear segment that matches the data. This might drop the p-value to (say) 0.2. Does this mean that the null hypothesis is correct and the world is warming at this rate? Of course not. It means what it says: the chances of getting this result, given the null hypothesis and some assumptions about the way the data is (normally) distributed around the proposed linear trend with the best intercept, are one in five Universes; around 20% of the time, truly randomly generated simulated data sets would have at least this large a $\chi^2$.
Neither number has any other meaning. Neither number is sufficient for the rejection of any of these hypotheses. A zero trend provides a satisfactory explanation of the data. So does almost any trend with a small linear slope. The particular linear trend with slope 0.02 is not a particularly good fit to the data — terrible if you fix the start point, not that great (but perhaps not terrible) if you pick the best possible one.
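The anchored-versus-floating-intercept comparison in the last few paragraphs can be sketched as follows; again, the data and parameters are illustrative assumptions rather than the actual record.

```python
import random
import statistics

# Hedged illustration of options (a) and (b): test a fixed 0.02 deg C/yr line
# either anchored at the year-2000 value or with a free intercept chosen to
# minimize chi-squared. Synthetic, trendless data; all parameters assumed.
random.seed(7)
years = list(range(2000, 2016))
sigma = 0.1
temps = [0.45 + random.gauss(0.0, sigma) for _ in years]

slope = 0.02  # the hypothesized warming rate, deg C per year

def chi2(observed, model, sigma):
    return sum(((o - m) / sigma) ** 2 for o, m in zip(observed, model))

# (a) Intercept fixed: the line passes through the first data point, T(2000).
anchored = [temps[0] + slope * (y - 2000) for y in years]
chi2_anchored = chi2(temps, anchored, sigma)

# (b) Intercept free: the chi-squared-minimizing intercept for a fixed slope
# is simply the mean residual after removing that slope.
best_b = statistics.fmean(t - slope * (y - 2000) for y, t in zip(years, temps))
floated = [best_b + slope * (y - 2000) for y in years]
chi2_floated = chi2(temps, floated, sigma)

print(f"chi2, anchored at T(2000): {chi2_anchored:.1f}")
print(f"chi2, free intercept:      {chi2_floated:.1f}")
```

The free intercept is a one-parameter minimization, so option (b) always yields a chi-squared no worse than option (a); that is exactly why a defender of the hypothesis would prefer it.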
Valid conclusions to draw from the data? Not many, I’m afraid. It does not prove that the “warming has stopped” — that is a causal statement, and post hoc ergo propter hoc in spades. It does not prove that the “warming continues” (somehow hidden under this or that), ditto. To be able to do any better, one has to look at more of the relevant time series. But nobody wants to do this; everybody wants to cherrypick this interval, or that one, and make overblown and actively silly statistical claims.
It’s easy to lie with statistics. It is easy to be misled by statistics, blinded by the science. But if you use it with just a tiny bit of common sense, statistical analysis can be your friend.
Here’s an easy test. Give the time series to a friend who is a statistician, but don’t tell them what it is! Say that it is the annual production of widgets or something. Ask them to tell you whether it is safe to conclude that widget production is definitely increasing, and if so, by how much and how reliably. The answer is going to be “gee, …”, but they will be able to do ever so much better a job of demonstrating it.
rgb
Well said! It can also be said that computations of confidence bounds assume the statistical independence of those events whose count maps to the confidence bounds. However, this count is indeterminate pending the identification of these events by the climatological establishment.
Under a convention of climatology, an independent event has a duration of not less than 30 years; this is the averaging period over a meteorological variable in arriving at a climatological variable such as the spatially and temporally averaged global surface air temperature. Under this convention, in an interval of 16 years the count is nil and thus statistically significant conclusions about the alleged warming are impossible.
Henry@Werner & D.Boehm
Thanks for your comments. I appreciate them.
In hindsight, I realize now that I have been extremely lucky. For some odd reason I could only get complete, reliable daily data going back to 1974 from most stations. That is just after the tipping point of 1972, which is now apparent from my sine wave. So when I analyzed the data from 47 weather stations and combined them into a global result, I found a beautiful relationship: the speed of warming, in degrees C/year, curving down versus time, like as if somebody was throwing me a ball. Had I taken data from before 1972, everything would have been totally mixed up and I might never have picked up any relationship at all… no ball to catch…
Looking carefully at my graphs, you will note with me that over the next 8 years or so, we will be cooling down at the maximum rate, of around -0.04 degrees C globally per year. That is ca. -0.3 degrees C down on the maxima by 2020. And I think earth average temps. (means) will follow this trend because it has already used up most of its reserves. So the following two decades will be cold. Very cold. But if you count back 88 years you will always realize that we have been there before and we all came through…
So there is really nothing new under the sun. Everything is as it has always been. Natural global warming and natural global cooling have been with us, like, forever, or at least for as far back as I can see… All the graphs that I can think of follow more or less the 88-year sine wave from 1927.
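Taking the commenter’s figures at face value (an 88-year sinusoid and a maximum cooling rate of 0.04 degrees C/year — his assumptions, not established results), the implied amplitude and the quoted cooling by 2020 follow from simple arithmetic:

```python
import math

# Back-of-envelope check of the 88-year sine-wave claim above. All figures
# are the commenter's assumptions, not established results.
period = 88.0     # years, claimed cycle length
max_rate = 0.04   # deg C per year, claimed maximum cooling rate

# For T(t) = A * sin(2*pi*t/period), the steepest slope is 2*pi*A/period,
# so the claimed maximum rate implies an amplitude of roughly:
amplitude = max_rate * period / (2 * math.pi)
print(f"implied amplitude: {amplitude:.2f} deg C")   # ~0.56 deg C

# Cooling accumulated over 8 years at (near) the maximum rate:
drop = max_rate * 8
print(f"approximate drop by 2020: -{drop:.2f} deg C")  # ~ -0.32, close to the quoted -0.3
```

This only checks that the quoted numbers are internally consistent with a sinusoid of that period; it says nothing about whether the 88-year cycle actually governs the temperature record.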
The (global) record from before that time is murky. They could hardly build cars back then, let alone calibrate thermometers to high accuracy. But I am still waiting for someone to show me a calibration certificate for a thermometer from before 1920.
Although, lucky… As you know I don’t believe in luck. So let me say that I was extremely blessed.
The great Designer wanted to show me a little piece of His work. And all we can do is stand in awe….
I meant to address my most recent comment to rgbatduke but failed to do so.