The Intriguing Problem Of The Younger Dryas—What Does It Mean And What Caused It?

This is a follow-up posting to Younger Dryas – The Rest of the Story!

Guest post by Don J. Easterbrook

Dept. of Geology, Western Washington University.

The Younger Dryas was a period of rapid cooling in the late Pleistocene, 12,800 to 11,500 calendar years ago. It followed closely on the heels of a dramatically abrupt warming that brought the last Ice Age to a close (17,500 calendar years ago), lasted for about 1,300 years, then ended as abruptly as it started. The cause of these remarkably sudden climate changes has puzzled geologists and climatologists for decades and, despite much effort to find the answer, can still only be considered enigmatic.

The Younger Dryas interruption of the global warming that resulted in the abrupt, wholesale melting of the huge late Pleistocene ice sheets was first discovered in European pollen studies about 75 years ago. Terrestrial plant and pollen records indicate that arboreal forests were replaced by tundra vegetation during a cool climate. This cool period was named after Dryas octopetala, a wildflower typical of cold, open Arctic environments. The Younger Dryas return to a cold, glacial climate was at first considered a regional event restricted to Europe, but later studies have shown that it was a worldwide event. The problem became even more complicated when oxygen isotope data from ice cores in Antarctica and Greenland showed not only the Younger Dryas cooling but also several other, shorter cooling/warming events, now known as Dansgaard–Oeschger events.

The Younger Dryas is the longest and coldest of several very abrupt climatic changes that took place near the end of the late Pleistocene. Among these abrupt changes in climate were: (1) sudden global warming 14,500 years ago (Fig. 1) that sent the immense Pleistocene ice sheets into rapid retreat, (2) several episodes of climatic warming and cooling between ~14,400 and 12,800 years ago, (3) sudden cooling 12,800 years ago at the beginning of the Younger Dryas, and (4) abrupt climatic warming of up to 10°C in just a few decades about 11,500 years ago. Perhaps the most precise record of late Pleistocene climate changes is found in the ice core stratigraphy of the Greenland Ice Sheet Project (GISP) and the Greenland Ice Core Project (GRIP). The GRIP ice core is especially important because the ages of the ice at various levels in the core have been determined by counting annual layers downward in the ice, giving a very accurate chronology, and climatic fluctuations have been determined by measurement of oxygen isotope ratios. Isotope data from the GISP2 Greenland ice core suggest that Greenland was more than ~10°C colder during the Younger Dryas and that the sudden warming of 10° ±4°C that ended the Younger Dryas occurred in only about 40 to 50 years.
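
As a rough illustration of how an oxygen isotope shift in an ice core translates into a temperature change, here is a minimal sketch in Python. The linear 0.67 permil per °C slope (a commonly quoted modern Greenland spatial gradient) and the example δ18O values are assumptions made only for this sketch; published GISP2 reconstructions use borehole-constrained calibrations and give larger temperature changes for the same isotope shift.

```python
# Illustrative only: convert a Greenland ice-core delta-18O shift into a rough
# temperature change with a simple linear calibration.  The slope and the
# example isotope values are assumptions for this sketch, not the calibration
# actually used for the GISP2 reconstruction.
def implied_temperature_change(d18o_event, d18o_reference, slope_permil_per_degC=0.67):
    """Approximate temperature difference (degC) implied by a d18O shift (permil)."""
    return (d18o_event - d18o_reference) / slope_permil_per_degC

# Example: a drop from about -35 permil (pre-Younger Dryas) to about -40 permil.
print(round(implied_temperature_change(-40.0, -35.0), 1))  # about -7.5 degC with this slope
```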


Figure 1. Temperature fluctuations over the past 17,000 years showing the abrupt cooling during the Younger Dryas. The late Pleistocene cold glacial climate that built immense ice sheets terminated suddenly about 14,500 years ago (1), causing glaciers to melt dramatically. About 12,800 years ago, after about 2000 years of fluctuating climate (2-4), temperatures plunged suddenly (5) and remained cool for 1300 years (6). About 11,500 years ago, the climate again warmed suddenly and the Younger Dryas ended (7).

Radiocarbon and cosmogenic dating of glacial moraines in regions all over the world and abrupt changes in oxygen isotope ratios in ice cores indicate that the Younger Dryas cooling was globally synchronous. Evidence of Younger Dryas advance of continental ice sheets is reported from the Scandinavian ice sheet, the Laurentide ice sheet in eastern North America, the Cordilleran ice sheet in western North America, and the Siberian ice sheet in Russia. Alpine and ice cap glaciers also responded to the abrupt Younger Dryas cooling in both the Northern and Southern hemispheres, e.g., many places in the Rocky Mts. of the U.S. and Canada, the Cascade Mts. of Washington, the European Alps, the Southern Alps of New Zealand, and the Andes Mts. in Patagonia of South America.


Figure 2. Temperature fluctuations over the past 15,000 years showing the abrupt cooling during the Younger Dryas and other warming and cooling periods: the Oldest Dryas (cool), Bølling (warm), Older Dryas (cool), Allerød (warm), Inter-Allerød (cool), and Younger Dryas (cool).


Figure 3. Oxygen isotope record from the Greenland ice core showing an abrupt temperature drop 12,800 years ago, 1300 years of cool climate, and sudden warming 11,500 years ago.

The Younger Dryas had multiple glacial advances and retreats

The Younger Dryas was not just a single climatic event. Late Pleistocene climatic warming and cooling not only occurred before and after the YD, but also within it. All three major Pleistocene ice sheets, the Scandinavian, Laurentide, and Cordilleran, experienced double moraine-building episodes, as did a large number of alpine glaciers. Multiple YD moraines of the Scandinavian Ice Sheet have long been documented and a vast literature exists. The Scandinavian Ice Sheet readvanced during the YD and built two extensive end moraines across southern Finland, the central Swedish moraines, and the Ra moraines of southwestern Norway (Fig. 4). 14C dates indicate they were separated by about 500 years.


Figure 4. Double Younger Dryas moraines of the Scandinavian Ice Sheet.

Among the first multiple YD moraines to be recognized were the Loch Lomond moraines of the Scottish Highlands. Alpine glaciers and icefields in Britain readvanced or re-formed during the YD and built extensive moraines at the glacier margins. The largest YD icefield at this time was the Scottish Highland glacier complex, but smaller alpine glaciers occurred in the Hebrides and Cairngorms of Scotland, in the English Lake District, and in Ireland. The Loch Lomond deposits consist of multiple moraines. Radiocarbon dates constrain the age of the Loch Lomond moraines to between 12,900 and 11,500 calendar years ago.

Multiple Younger Dryas moraines of alpine glaciers also occur throughout the world, e.g., the European Alps, the Rocky Mts., Alaska, the Cascade Range, the Andes, the New Zealand Alps, and elsewhere.


Figure 5. Double Younger Dryas moraines at Titcomb Lakes in the Wind River Range of Wyoming.

Implications

The multiple nature of YD moraines in widely separated areas of the world and in both hemispheres indicates that the YD consisted of more than a single climatic event, and that these events occurred virtually simultaneously worldwide. Both ice sheets and alpine glaciers were sensitive to the multiple YD phases. The GISP2 ice core shows two peaks within the YD that match the glacial record. The absence of a time lag between Northern and Southern Hemisphere glacial fluctuations precludes an oceanic cause and is not consistent with the North Atlantic deep-water hypothesis for the cause of the Younger Dryas, nor with a cosmic impact or volcanic origin.

Both 14C and 10Be production rates in the upper atmosphere changed during the YD. 14C and 10Be are isotopes produced by collision of incoming radiation with atoms in the upper atmosphere. The change in their production rates means that the Younger Dryas was associated with changes in the amount of radiation entering the Earth’s atmosphere, leading to the intriguing possibility that the YD was caused by solar fluctuations.

Why the Younger Dryas is important

What can we learn from all this? The ice core isotope data were hugely significant because they showed that the Younger Dryas, as well as the other late Pleistocene warming and cooling events, could not possibly have been caused by slow Croll-Milankovitch orbital forcing, which operates over many tens of thousands of years. The ice core isotope data thus essentially killed the Croll-Milankovitch theory as the cause of the Ice Ages.

In an attempt to save the Croll-Milankovitch theory, Broecker and Denton (1990) published a paper postulating that large amounts of fresh water discharged into the North Atlantic about 12,800 years ago when retreat of the Laurentide ice sheet allowed drainage of glacial Lake Agassiz to spill eastward into the Atlantic Ocean. They proposed that this large influx of fresh water might have stopped the formation of descending, higher-density water in the North Atlantic, thereby interrupting the deep-water currents that distribute large amounts of heat globally and initiating a short-term return to glacial conditions. If that was indeed the case, then the Younger Dryas would have been initiated in the North Atlantic and propagated from there to the Southern Hemisphere and the rest of the world. Since that propagation would take time, the YD should be 400-1000 years younger in the Southern Hemisphere and Pacific areas than in the Northern Hemisphere. However, numerous radiocarbon and cosmogenic dates of the Younger Dryas all over the world indicate that the cooling was globally synchronous. Thus, the North Atlantic deep current theory is not consistent with the chronology of the Younger Dryas.

The climatic fluctuations before and after the Younger Dryas, as well as the fluctuations within it, and the duration of these changes are not consistent with a single-event cause of the YD. Neither a cosmic impact nor volcanic eruptions could produce the abrupt, multiple climatic changes that occurred during the late Pleistocene.

###


204 Comments
June 21, 2012 10:31 am

rgb says
I am (I would humbly claim) a serious expert in modeling and fitting
Henry says
the paper you quote actually confirms that my approach was correct.
initially, when I found missing data at a weather station for a month, I put in the long-term average, which is the generally accepted statistical principle for such a problem. However, when measuring temps. and climate versus time, that was the wrong way to go.
I explained this here
http://www.letterdash.com/henryp/global-cooling-is-here
Note that I determined not only the development of the means, but also those of the minima and maxima, which, for some reason, are ignored by all in climate science. All can be transformed into parabolic curves with r squared >0.95
on the maxima it is r2=0.995
So, the probability that I am wrong with my assertions and predictions is only 0.5%
You want to take a bet on that?

rgbatduke
June 21, 2012 10:51 am

OK, so that reply either went in partially or disappeared. To try again:
Of course I’d take a bet ten years out. It’s nearly a sure thing, because you are completely clueless about what R^2 measures as far as extrapolating a fit trend is concerned (basically, nothing).
Are you familiar with: Taylor series, the Stone-Weierstrass theorem, how both together guarantee that you can fit nearly any smooth function with a parabola in a sufficiently local neighborhood, how that fit will fail for almost all possible smooth functions (all the ones that aren't really parabolas) once you try to extrapolate outside of that neighborhood and cubic or higher order terms become important, and how even that doesn't serve to explain the problem with using this in climate science, where transitions appear to be smooth, nearly flat variations punctuated by sudden jumps on a decadal-plus timescale (Hurst-Kolmogorov transitions)?
Did you even glance at the Koutsoyiannis reference I linked? Of course not, or I wouldn't be explaining the problem to you, as his figure 1 shows how a parabola for a short sample interval becomes a linear trend for a longer one, becomes a sinusoid for a still longer one, and (not shown) how all of this could still be nearly irrelevant patterned noise on a still longer timescale.
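
A minimal sketch of the point being made here, not code from the thread: fit a parabola to a short window of a smooth, non-parabolic signal and compare R² inside and outside the window. The sine signal, window length, and noise level are arbitrary choices made only for illustration.

```python
# Over a short window almost any smooth curve is "well fit" by a parabola
# with a very high R^2, yet the same parabola fails badly outside the window.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 201)
truth = np.sin(0.5 * t)                          # the underlying, non-parabolic signal
data = truth + 0.02 * rng.normal(size=t.size)    # small observational noise

window = t <= 4.0                                # fit only a short local neighbourhood
coeffs = np.polyfit(t[window], data[window], 2)  # quadratic fit inside the window
fit = np.polyval(coeffs, t)

def r_squared(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

print("R^2 inside the fit window:", round(r_squared(data[window], fit[window]), 3))
print("R^2 over the full record :", round(r_squared(data, fit), 3))
# Typically ~0.99 inside the window and strongly negative outside it: a high
# R^2 on a short sample says nothing about how the fit extrapolates.
```
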
I could take a few thousand lines and explain statistics or modelling to you. Or you could take a course and learn what R^2 really means for extrapolation versus interpolation. Hmmm, I vote for the latter.
rgb

June 21, 2012 11:50 am

Steve Wilde and I have been in pretty good agreement about solar variability and how it in turn affects the climate through the solar changes themselves. I take it further by trying to show the secondary effects of those solar changes.
Those secondary effects, again, are as follows:
ATMOSPHERIC CHANGES – A MORE NEGATIVE AO/NAO, WHICH RESULTS IN MORE CLOUDS, SNOW COVER, AND PRECIPITATION, WHICH RESULTS IN A HIGHER ALBEDO / LOWER TEMPERATURES.
ATMOSPHERIC CHANGES DUE TO OZONE CONCENTRATION CHANGES IN THE ATMOSPHERE IN THE VERTICAL/HORIZONTAL, DUE TO UV LIGHT CHANGES FROM THE SUN.
VOLCANIC ACTIVITY INCREASE (ESPECIALLY HIGH LATITUDE); THIS IN TURN WILL HELP PROMOTE A MORE NEGATIVE AO, IN ADDITION TO THE LOW SOLAR ACTIVITY. SO2 ACTS AS A COOLING AGENT FOR EARTH'S SURFACE AND A WARMING AGENT FOR THE STRATOSPHERE.
HAVOC WITH EARTH'S MAGNETIC FIELD – MORE COSMIC RAYS, MORE CLOUDS, MORE VOLCANIC ACTIVITY. A sun that displays variability changes seems to affect earth's magnetic field by weakening/strengthening it in very fast, sharp ways.
SOLAR WIND DECREASE – ALLOWS MORE COSMIC RAYS, MORE CLOUDS.
PDO – EVIDENCE OF A 60-YEAR CYCLE ASSOCIATED WITH SOLAR ACTIVITY.
ENSO – GEARED MORE TOWARD LA NINAS RATHER THAN EL NINOS WHEN THE PDO IS IN ITS COLD PHASE, WHICH MIGHT BE TIED INTO SOLAR ACTIVITY.
SOLAR IRRADIANCE ITSELF DECREASING AT TIMES OF LOW SUNSPOT ACTIVITY.
I think what is not being appreciated or understood by the mainstream is the fact that solar variations are much more common and have a greater degree of magnitude/duration (change/staying power) than present thinking allows. In addition, the secondary effects from these solar changes are not being taken seriously enough and are much greater than mainstream thinking holds.
If one just studies the two most recent solar minimums, the DALTON and MAUNDER MINIMUMS, one can see that the same changes took place with earth's climate each time, both being associated with low solar activity.
I say those two periods of low solar activity can be exceeded in magnitude and duration, and have been in earth's past, creating a much greater change in the climate than even happened during the DALTON/MAUNDER MINIMUMS.
I say the variability of the sun, and how it affects the items that control the climate, is being greatly underestimated. The typical 11-year sunspot cycle is NONSENSE, and that is what the mainstream is fixated on. When the sun displays a typical 11-year sunspot pattern, of course nothing is going to happen, because the sun is changing in a regular pattern, which cancels itself out.
Here we have the mainstream trying to tell the public that that is the norm, which is flat out wrong. The mainstream, in addition, is also trying to downplay the EXTREME solar activity that took place prior to 2005, going back to 1850, which I say is 100% responsible for the temperature rise last century.
Here we are in 2012, and the sun has had a significant change which started in year 2005. Lag times have to be appreciated, but I say once this solar max of cycle 24 passes on by (which is very, very weak), that is when the impacts of the quiet sun on the climate will start to exert themselves.
If the study of LIVINGSTON and PENN should come to be, that could mean this GRAND MINIMUM can be much more severe than anticipated, and the climatic effects that much more.
TEMPERATURE CHANGES
Past history shows that temperatures do not change gradually; they go in jerks, both up and down. The only time temperatures change gradually is when the climate is in one particular climatic regime. I say, led by the sun and through the secondary effects, if the duration of time and magnitude of change are strong enough, thresholds will be met which will then cause the climate to shift into another climate regime, hence the sharp temperature changes during those times.
If it is not the sun, I say, what is it?

June 21, 2012 12:18 pm

EARTH'S MAGNETIC FIELD – A weak magnetic field is associated with cooling of the earth, while a strong magnetic field is associated with a warming earth. Somebody had this backwards. It has been shown that earth's geomagnetic field increases by a factor of 3 to 5 times in strength during interglacial periods.
The above all makes sense if one thinks an increase in cosmic rays/more clouds plays a big role in the climate. The weaker the solar magnetic field is, along with earth's magnetic field, the more cosmic rays will be allowed to enter earth's atmosphere. Hence a cooling during periods of weak magnetic fields both on the sun and the earth.
The SURPRISE, for lack of a better word, is that the sun most likely has SPURTS of activity within the minimum, and that is when charged particle bursts from the sun are greatest; with earth's magnetic field already in a weakened state, their effects are much greater than they would be otherwise. I think the charged particle bursts to earth's magnetosphere play a role in an increase in geological activity here on earth, if the bombardment of the particles occurs when the sun is mostly in a solar minimum state.
This is why in large part there is always an increase in geological activity here on earth in association with prolonged solar minimum periods. Look at the Dalton and Maunder minimums as an example.

June 21, 2012 12:19 pm

Henry@rgb
clearly, you must be able to understand that I calculated all three parameters, maxima, means and minima, finding they all best fit into polynomials of the 2nd order, at r2>0.95
which cannot possibly be coincidental….
That means I can use the highest value, 0.995, found for the maxima, to determine cut-off points for warming and cooling in time.
I suggest you go back to your stats classes…
please do not patronize me.
if you have some evidence to prove I am wrong, show me your (actual) data. Otherwise, please just go to bed, which is what I am going to do now.
you are just confusing everyone here and you really have no data to show……all you have is words and endless speculations….

El Gordo
June 21, 2012 12:55 pm

Okay,
I’ll bite.
The answer is… Drumroll….
Cosmic Ray incursion and deformation of the heliosphere resulting in varying levels of cosmic energy absorption throughout the solar system.
Why hasn’t it happened since?
Well, we don’t know enough about interstellar space to predict timing of these things.
But, if this theory were true, it could happen again tomorrow.

phlogiston
June 21, 2012 1:04 pm

agfosterjr says:
June 21, 2012 at 7:44 am
phlogiston says:
June 21, 2012 at 5:07 am
“The biggest NH megafauna extinctions (e.g. wooly rhino, N American wild horses) took place 30,000-40,000 years ago, well before the YD.”
========================================================================
Not sure where you get that–the horses lasted till about 12kya, and probably the rhinos too.
I was wrong; I mis-remembered an article I recently read in Science Illustrated. A series of megafauna extinctions began 30,000-40,000 years ago with the likes of sabre-toothed tigers and mastodons, but the mammoths (10,500 yrs), North American horses (10,500 yrs) and woolly rhinos (14,000 yrs) continued till the start of the Holocene.

phlogiston
June 21, 2012 1:22 pm

Eske Willerslev, a geogeneticist at Copenhagen University, studied the genetic diversity, geographic ranges and diet of several megafauna from 50,000 years ago till the Holocene. The picture that emerged is that the animals which became extinct at the start of the Holocene had already been in decline for at least 20,000 years. This decline was largely attributable to shrinking habitat and climate change (deepening glaciation). In the final centuries before extinction, competition caused by dietary overlap between different megafauna (as shown by gut and fecal material), and also overlap with humans and human hunting, may both have contributed to the final extinction. However, the species in question, e.g. mammoth, wild horses, woolly rhinos, had already been in decline for thousands of years.
http://www.nature.com/news/2011/021111/full/news.2011.626.html

rgbatduke
June 21, 2012 2:28 pm

EARTH'S MAGNETIC FIELD – A weak magnetic field is associated with cooling of the earth, while a strong magnetic field is associated with a warming earth. Somebody had this backwards. It has been shown that earth's geomagnetic field increases by a factor of 3 to 5 times in strength during interglacial periods.
Right, but if you read the context of my remarks (and followed the link to the NASA site that presents the evidence) you would see that the Earth’s magnetic field has grown 10% weaker during the recent warming trend (in addition to moving North, closer to the rotational pole and away from the equator). This was offered to support the observation that it is easy to find phenomena correlated with temperature change that may or may not be causal of that change. It is also direct, immediate, current evidence that your assertion that a weak field is associated with cooling is not, at the moment, true. Whether or not it is true “on average”, or whether the magnitude of the change is simply too small to be relevant — all of that requires a concrete model. I merely offer the locally confounding example.
Similarly, if one looks at solar maxima and minima, while there are some coincidences and some of them appear to be strong enough that one really, really wants to believe them causal, there are both maxima and minima that don’t seem to produce the predicted effect. It’s not a single variable system. It may well be a matter of solar state AND magnetic state AND greenhouse gas levels AND recent volcanism AND the state of the ocean AND… with all of them contributing, reinforcing or cancelling, any given local warming or cooling trend. And we don’t have any idea which of these parameters — if any — are in any sense dominant, the primary determinant(s) of global temperature (whatever that means) or climate (ditto). Aside from Mr. Sun being the source of basically all of the heat and radiation, in the end, being the source of all of the balancing cooling. That part is easy enough, or would be if it weren’t for the modulation by albedo, greenhouse gases, water vapor (so interesting it really has to be treated separately), and (again) a long line of …’s
Forgive me if I’m just as skeptical of skeptical models for the Earth’s climate as I am of the GCMs. Maybe more. GCMs, for better or worse, at least try to be quantitative and make falsifiable predictions. They turn out to be wrong, sure, but that doesn’t mean that any particular alternative formulation that makes CO_2 less important and solar state more important is righter, only that until you make it into an actual model, it can’t be falsified in turn.
The problem is that whether or not one “likes” modelling, for a problem this complex one either throws one’s hands into the air in disgust and pronounces it unsolvable (not a completely unreasonable thing to do, as I suspect that it still is) or one tries to capture what one thinks might be the most important science in a model and see how it does when compared to nature. Sadly, this sort of approach is isomorphic to nonlinear parametric function fitting, and in the end all too often turns into the moral equivalent of what I’m chiding Henry for — fitting a (short) range of the data only to learn that however well you can make the fit work on the training set it generally doesn’t extrapolate out of the training set to any sort of non-interpolatory trial set. It is, however, superior in that one can at least assign some meaning to the (possibly failing) set of model parameters, and hence learn something — perhaps — from even the model failures.
rgb

June 21, 2012 2:45 pm

phlogiston says:
June 21, 2012 at 1:22 pm
The link is typical of the silliness: the last glaciation or deglaciation presented the big beasts with unprecedented challenges and left the little critters unaffected. Never in the history of the world has extinction been almost exclusively size dependent, and no theory I’ve seen besides human hunting explains this unique occurrence. If heat did in the mammoths then pity the poor polar bears. –AGF

June 21, 2012 2:48 pm

High solar activity trumped the weakening magnetic field, which caused the temperatures to rise last century and cosmic ray levels to be low.
I cannot find any examples of temperatures rising on the earth when solar activity was low, or cooling when solar activity was high.

June 21, 2012 3:04 pm

This change in solar activity to very low levels, which started in 2005 and should last for some time to come, in my opinion will tell us very soon just how much influence, or lack of influence, the sun exerts on earth's climatic system.
Again I say the sun drives earth's climatic/oceanic systems; therefore, if something changes at the source (the sun) that drives the systems, it stands to reason those changes must affect the systems it drives.
The sun has gone from very active (prior to 2005) to very inactive after 2005. I can’t believe the forces the sun exerted on earth’s climatic system (along with all the secondary effects) will not change due to the significant change in the solar activity.
Again the Dalton and Maunder Minimums lend support to what I am saying.
Time will tell.
One thing I know for sure, and that is, it is not a trace gas with a trace increase that controls the climate. The CO2 hoax is just that, a hoax.

rgbatduke
June 21, 2012 3:09 pm

finding they all best fit into polynomials of the 2nd order, at r2>0.95
which cannot possibly be coincidental….

Until you understand why this statement would make any mathematician laugh out loud, you will continue to be ignored by mathematicians and physicists and statisticians. Until you look at the Koutsoyiannis paper — which you clearly still haven't done — you won't understand why even math novices who have done so will doubt your results. Nor will you understand why Roy Spencer (who puts a third order curve through his UAH global troposphere temperatures, and who I'm sure did so long before you did) carefully prefaces this with the remark that the curve is drawn only as a guide to the eye and is not intended to imply meaning.
I’m not being patronizing, I’m trying to teach you. If you are completely confident that your understanding of math and statistics is superior to everybody else in the Universe, by all means refuse to be taught. You might well ask if I have any justification for believing that I am qualified to teach you (given that we’ve never met and for all I know for certain you could be a double Ph.D. in math and statistics). So here are my credentials. I have a BS in physics, a BA in philosophy, have completed enough math courses for a major in that as well but all but two taken as an undergraduate were at the graduate level so I’m probably closer to a masters level than either a BS or a BA. I am the author of dieharder, a random number generator tester (and hence am pretty buff in the general realm of hypothesis testing). I’m on my second company selling very high end predictive modeling services, the second one largely based on a patent (pending) on how to make inferences in both directions across a privacy boundary (or other constraint on exporting records in a general database) without violating any individual’s privacy (exporting their record) across the boundary, using a clever application of Bayes’ Theorem. I’ve worked on Monte Carlo (Markov Chain) models in physics for well over a decade, modelled Langevin processes in quantum optics, and done any amount of nonlinear curve fitting in between. I teach selected independent study students advanced statistics and statistical mechanics, and am writing a book extending the work of Richard Cox and E. T. Jaynes on how (Bayesian) probability theory is effectively the basis of all human knowledge.
Now, listen up. The whole point of my referencing a Taylor Series is that it is a prescription for expanding any smooth function in the neighborhood of a point as:
f(t_0 + \Delta t) = f(t_0) + \frac{df}{dt}|_{t = t_0} \Delta t + \frac{1}{2!} \frac{d^2f}{dt^2}|_{t = t_0} \Delta t^2 + \frac{1}{3!} \frac{d^3f}{dt^3}|_{t = t_0} \Delta t^3 + ...
As you can see, any function f(t) whose third-order term \frac{1}{3!} \frac{d^3f}{dt^3}|_{t = t_0} \Delta t^3 is negligible over the range of \Delta t being fit is going to be well fit by a quadratic function, just as on a smaller range it is decently fit by a linear function, on a still smaller range it is well fit by a constant, and on a still broader range it is decently fit by a cubic. One doesn't get quite as elegant a result from Weierstrass:
f(t) = a_0 + a_1 t + a_2 t^2 + a_3 t^3 +... = \sum_{i = 0}^\infty a_i t^i
because one doesn’t get an elegant bound on the size of the a_i‘s but again it is pretty clear that if one puts one’s origin in the middle of any range of data and sets one’s scale so that all of the points in the range are less than one, the coefficients have to actually grow for the expansion not to be dominated by the first few terms.
You wanted proof — this is a mathematical proof that your assertion in the first paragraph of this response is incorrect. It can be coincidental. In fact, for a small enough range of data compared to the true variability of the unknown function you are trying to fit it isn’t really coincidental, it is almost certain that a quadratic function will provide an excellent fit.
Finally, as an assignment, you might try to learn something about R^2. I offer you the following quote from the wikipedia article to get you started:
Notes on interpreting R2
R² does not indicate whether:
* the independent variables are a true cause of the changes in the dependent variable;
* omitted-variable bias exists;
* the correct regression was used; (this is one place you get in trouble)
* the most appropriate set of independent variables has been chosen;
* there is collinearity present in the data on the explanatory variables;
* the model might be improved by using transformed versions of the existing set of independent variables.
It also does not indicate whether or not any model can extrapolate unless you have a concrete basis for the model. As noted (and proven) above, given any smooth objective function f that is the centroid for some random scatter, for a sufficiently small sampling one expects to get first R^2 \approx 0 for a linear model (the data are adequately fit by a constant), then R^2 \approx 1 for a linear model, then R^2 \approx 0 for a linear model but R^2 \approx 1 for a quadratic model, and so on as you open up the range being fit, unless the error bars on the points being fit are so large that you can't fit any linear or better model. And in all of these cases, one cannot be certain that the next higher neglected term doesn't become dominant as soon as one is outside of the fit range! If you had ever actually followed the link and looked at the Koutsoyiannis paper, you could see, and understand, that at a glance.
I won’t even go over the fact that fitting max, min, and mean (three parameters) with a three parameter quadratic model is also not surprising, because the discussion above is more elegant and apropos and besides, inevitable if you can fit the actual data nicely with a quadratic. It still doesn’t guarantee that the quadratic will extrapolate outside of the fit range.
You can see this without even waiting for the future. Just try to extrapolate the quadratic backwards in time. See how quickly it departs from the data?
rgb

June 21, 2012 3:21 pm

I cannot find any examples of temperatures rising on the earth when solar activity was low, or cooling when solar activity was high.

Leif Svalgaard is your man. He's got a zillion of them. Although it is really pretty easy to do — temperatures rose pretty steadily from the Dalton minimum up to the late 20th century right across significant variability in the solar cycle. Leif also offers evidence — take it or leave it, but he's pretty serious — that the sunspot count in particular has been seriously abused as a measure of solar activity for the last 200 or so years and that corrected measures show far less correlation between solar activity and temperature (a lot of the correlation that exists comes from changes in the counting algorithm that basically found more spots compared to the earlier measures as the 20th century advanced, in coincidence with generally rising temperatures). If I understand his papers correctly — perhaps he'll chime in (he usually does, if his name is invoked).
I will note that this isn't universally accepted — Usoskin and others use radiometric proxies to determine solar activity across the entire Holocene; their results do seem to support the hypothesis. But the main point is that finding or not finding (counter)examples depends in part on where (at which solar activity representation and at which global temperature representation, given that THAT isn't particularly reliable either) you look. It leaves the entire proposition at least somewhat dubious, although hardly disproven.
If you want more evidence, it is hardly plausible that solar activity remained low throughout the entire 80,000 years of the last ice age. It may be an important local modulator of temperature — I suspect that it is, without agreeing that it is quite proven as things stand — and still not be an important global modulator. As, for that matter, may CO_2 be.
rgb

June 21, 2012 3:41 pm

This change in solar activity to very low levels , which started in 2005 and should last for sometime to come ,in my opinion will tell us very soon just how much influence or lack of influence, the sun exerts on earth’s climatic system.
And with this I agree. And I believe that you are right, and that temperatures are likely to drop as a consequence of the low activity. I like the model where some aspect (magnetic or otherwise) of solar activity modulates the climate nonlinearly. It is because I like it that I’m rigorously careful not to claim that it is proven or certain, especially while the only semi-reliable evidence of correlation in the period where we have semi-reliable measures is the bobble in temperature and solar activity in the 60s. I’ll be a lot more convinced if solar activity remains low in 24 and temperatures refuse to dramatically rise and actively fall once we pass the (low) peak, and then aggressively fall if 25 is as low as at least some people project that it will be.
It will be equally interesting to see what happens to CO_2 during that interval. If global temperatures fall and CO_2 concentrations fall along with it, it will certainly strengthen the models of the Carbon Cycle that postulate that most of the growth in CO_2 observed over the last 35 years or so has come because of warming (which alters e.g. ocean and soil chemistry so that the equilibrium concentration is higher) rather than because of the anthropogenic contribution. After all, the ocean could easily take up all of the human contributed CO_2, as could the soil — both have one to two orders of magnitude more CO_2 bound up in them than the entire atmosphere and actively exchange enormous amounts of CO_2 with the atmosphere every year.
That’s part of the problem. Things have been “boringly monotonic” over much of the last century. As I’ve been pointing out to Henry, monotonic low order linear or nonlinear models are too easy to fit and the fits are too meaningless to be of use. What is useful is a model where major variations in both directions correlate well between cause and effect. With luck, we’ll see some real variation of important (potential) control parameters and we’ll see some sort of real response.
rgb

Gail Combs
June 21, 2012 3:58 pm

El Gordo says:
June 21, 2012 at 12:55 pm
Okay,
I’ll bite.
The answer is… Drumroll….
Cosmic Ray incursion and deformation of the heliosphere resulting in varying levels of cosmic energy absorption throughout the solar system.
Why hasn’t it happened since?…..
______________________________________
And then there are those who mention the Solar system bobbing in and out of the galactic plane.

…Every 60 million years or so, two things happen, roughly in synch: The solar system peeks its head to the north of the average plane of our galaxy’s disk, and the richness of life on Earth dips noticeably.
Researchers had hypothesized that the former process drives the latter, via an increased exposure to high-energy subatomic particles called cosmic rays coming from intergalactic space. That radiation might be helping to kill off large swaths of the creatures on Earth, scientists say.
The new study lends credence to that idea, putting some hard numbers on possible radiation exposures for the first time. When the solar system pops its head out, radiation doses at the Earth’s surface shoot up, perhaps by a factor of 24, researchers found….
http://www.space.com/10532-earth-biodiversity-pattern-trace-bobbing-solar-system-path.html

That could also explain the wiping out of the megafauna, but not the sudden freezing with buttercups still in the mouth and undigested plant matter in the stomach found in some specimens.

June 21, 2012 4:29 pm

I don't think you understand how the solar/climate connection works, or for that matter the climate system itself. If you had read carefully my previous post on why the climate might change, you would not have brought up the statement that temperatures went up after the DALTON MINIMUM, despite solar variability.
When the climate is in a particular climatic regime, all the items that control the climate, from the sun, to volcanoes, to ENSO, to the PDO, are going to result in random temperature fluctuations even if the same forcings are involved.
What is needed is for thresholds to be met by the various items that control the climate in order to change it in one direction or another. This is not easy to accomplish, but when it happens it can be very abrupt.
The sun's variation per se is not going to change the climate, or even correspond to temperature changes over the short run, due to the endless feedbacks in the system to begin with and due to what state the climate system is in at any given time.
How the climate changes is when THE POSITIVE FEEDBACKS that are created are so strong that they overtake the negative feedbacks. That is accomplished ONLY if the duration and degree of MAGNITUDE change in the items that control the climate (let's say the sun) are PERSISTENT enough and INTENSE enough to create a POSITIVE feedback in the climate system that can overwhelm the mostly negative feedbacks (which keep things in check most of the time) and cause the climate to shift into another range of temperatures or regime change.
In order for solar variation to accomplish that POSITIVE feedback, it does not mean it does it just by the fact that it varies. It has to vary by X amount, for X amount of time, in order to create a POSITIVE feedback through its variance that is strong enough to overcome the other negative feedbacks in the climate system and exert its effects on the climate system.
Also, lag times have to be considered, especially the ocean, which can put off solar effects in the short run.
LEIF may understand solar, but I think he is lacking in the climate aspect of things.
To put this another way, I should have said that I have never seen the temperatures rise when the sun was in a prolonged deep minimum period, or fall when the sun was in a very prolonged active period.

June 21, 2012 4:43 pm

this site is not letting me write in it the way I want to.
That aside, I say in order for the sun to have the climatic effects I refer to, the SOLAR FLUX, a measure of solar activity, has to stay at values of less than 90, and do so for many years. That only happens during GRAND MINIMUMS, which we are likely to have going forward.

June 22, 2012 12:17 am

rgb says
It still doesn’t guarantee that the quadratic will extrapolate outside of the fit range.
Henry@the Duke
You should perhaps take some time to actually look at all my data.
http://www.letterdash.com/henryp/global-cooling-is-here
I don't go outside the fit range. To calculate the change in signal from warming to cooling (e.g. on the maxima) I can go linear (r2=0.9645), I can go (nat) logarithmic (r2=0.9967) or I can go binomial (r2=0.9950). In all three cases, and staying within my measuring range, I find that when y=0 (when there was no warming or cooling) all 3 equations give me x=16.4. That means that at 16 years BP the warming period ended and we changed to a cooling period (1994-5). In my case, 2011 is the "present". When x=0 (that is 2011) we find the most reasonable result coming from the said binomial, namely -0.06 K per annum. On the linear it would be -0.04, on the (nat) logarithmic it would be -0.2 K per annum.
Similarly, it can be shown that the means are now dropping by -0.08 K per annum. Means are now dropping a bit faster than maxima because maxima have been dropping for a longer time and earth is only now busy starting to play catch-up.
Now, for getting these results, pray do tell, where did I go and extrapolate outside of my measuring range? I stayed within the 37 years where I measured. I only looked at the other side of the binomials for the means and minima, which suggests to me that another change at y=0 could have occurred at ca. 42 years BP, and I wanted to know from Stephen Wilde how he had figured that out.
I did not proclaim that yet as a final result.

Spector
June 22, 2012 2:55 am

As Dr. Henrik Svensmark appears to have shown a link between high cosmic radiation activity and cold weather periods on Earth, based on intervals when the Earth and the solar system were passing through the spiral arms of the galaxy, where such radiation is high, I think it might be worthwhile to see whether the Earth might have been transiting an unusual part of the galaxy during the Younger Dryas event.
If this were, for example, the expanded remains of an ancient supernova, then there may also have been many embedded solid fragments, greatly increasing the probability of impact events during the transit interval.
If true, this would really prove Svensmark's theory.
Svensmark: The Cloud Mystery
“Uploaded by rwesser1 on Jul 24, 2011”
100 likes, 9 dislikes; 8,433 Views; 62:46 min
“Henrik Svensmark’s documentary on climate change and cosmic rays.”

June 22, 2012 6:41 am

SOLAR VARIABILITY VERSUS CLIMATE CHANGE
One last time to explain where I am coming from in regards to this matter.
In order to obtain a POSITIVE FEEDBACK from a variance in solar activity that has a degree of MAGNITUDE CHANGE powerful enough to bring earth's climatic system to a THRESHOLD, I propose that solar activity as measured by the SOLAR FLUX (NOT SUNSPOTS PER SE) has to reach a SOLAR FLUX reading of 90 or less, for a DURATION of at least 10 years, with the solar flux at a reading of 90 or lower 95% of the time.
Once solar activity goes down to this level and stays at this level, then and only then can the secondary effects from the prolonged low solar activity, and the low solar activity itself, PHASE IN to exert enough of a POSITIVE FEEDBACK to overcome the inherent NEGATIVE FEEDBACKS of earth's climatic system, and perhaps bring about an ABRUPT climatic change, with temperatures, for instance, going to a NEW RANGE in a SHARP manner in response to this.
Right now we are in the same climate regime; all the small variations in temperature are more or less random events caused by the normal oscillations in the climate system, such as ENSO, the PDO, or VOLCANIC ACTIVITY, as examples. Nothing in the climate system, including the sun, has changed or oscillated enough up to this point to bring about the POSITIVE FEEDBACK needed to bring the climate to a THRESHOLD.
I am hopeful that this GRAND MINIMUM, which started in 2005, once it reaches its peak, will be INTENSE enough and LONG enough in duration to bring earth's climatic system to some sort of a THRESHOLD, which will bring the temperatures down to another range. How much different a range I don't know, for it will depend on how weak solar activity actually becomes, and how the secondary effects respond and phase in with this weakness.
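
The threshold stated above is concrete enough to write down as a check. A minimal sketch, assuming a series of yearly mean 10.7 cm solar flux values; the 90-unit level, the 10-year duration, and the 95% fraction are simply the commenter's numbers, not an accepted definition of a grand minimum.

```python
# Sketch of the stated criterion: solar flux at or below 90 for at least 95%
# of the most recent ten (or more) years.  The thresholds are the commenter's,
# used here only to show how such a check could be written.
def meets_stated_grand_minimum_criterion(flux_by_year, level=90.0, min_years=10, fraction=0.95):
    """flux_by_year: dict mapping year -> mean solar flux for that year."""
    years = sorted(flux_by_year)
    if len(years) < min_years:
        return False
    recent = years[-min_years:]
    low = sum(1 for y in recent if flux_by_year[y] <= level)
    return low / len(recent) >= fraction

quiet = {year: 85.0 for year in range(2006, 2016)}      # ten hypothetical quiet years
print(meets_stated_grand_minimum_criterion(quiet))      # True
quiet[2014] = 120.0                                     # one active year: 9/10 = 90% < 95%
print(meets_stated_grand_minimum_criterion(quiet))      # False
```
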
SOLAR ACTIVITY /COSMIC RAYS /GEOMAGNETIC FIELD
There is a school of thought, from what I can gather, that says if the difference in the charge between the sun and the earth is strong enough, it can cause a sharp weakening of earth's magnetic field, thus allowing cosmic rays to flood into our atmosphere. This, if it happens, happens when the sun becomes very active after being in a deep minimum state. The temperature response is sharply down at first, due to the influx of cosmic rays, only to reverse in time and rise sharply as the solar wind establishes itself once again and starts to deflect the cosmic rays away from earth, while at the same time earth's geomagnetic field recovers.
If one believes in the cosmic rays, more clouds created, lower temperature sequence, this is an interesting approach to help explain past abrupt temperature changes.

June 22, 2012 6:53 am

Now, for getting these results, pray do tell, where did I go and extrapolate outside of my measuring range? I stayed within the 37 years where I measured. I only looked at the other side of the binomials for the means and minima, which suggests to me that another change at y=0 could have occurred at ca. 42 years BP, and I wanted to know from Stephen Wilde how he had figured that out.
I did not proclaim that yet as a final result.

Good, because that is an example of what I’m talking about. And if you extrapolate your polynomial fit back 200 or 300 years, does it still fit the data? How about 1000 years, forward or backwards? Of course not, because your function becomes so negative that the Earth’s temperature would be negative, and negative temperatures don’t exist outside of lasers (and then, only for a special interpretation of the term “temperature”). So we agree that your fit has limited predictive value outside of the range in which it was fit. Now to find a reasonable value for that range.
I would argue that a reasonable value is at most four or five years, even more reasonably one or two. Of course, I can't really tell from your web page (which I'd looked at multiple times while generating my previous responses), but it looks like you're trying to fit only four data points — the temperature anomaly on a 37 year basis. Your method description leaves something to be desired, so let's go through it. You use 45 weather stations taken randomly "from all over the world" that weren't missing a lot of data. This is a bit self-contradictory — "randomly" means that you use a random selection method to choose weather stations out of a very large pool. However, you obviously preselect them by eliminating from the pool stations with inadequate or missing records (a non-random process). Finally, the "all over the world" part is perplexing — random selection from most pools would yield a sample — especially one with only 45 elements — that is highly biased towards populated areas, which are hardly uniform. Generally, to select stations from all over the world uniformly one has to have a highly NON-random selection process, as some parts of the world have only a handful of stations where others, e.g. the United States, are extremely dense in them.
And then there are the oceans, which cover 70% of the Earth’s surface and are completely unrepresented by “weather stations”.
So when you say random, do you mean:
a) Handle all weather stations with data problems — in particular infilling (a process I absolutely hate, BTW — there are right ways and wrong ways to handle missing data and this is a very wrong way, as it is basically making stuff up in such a way that your error estimates are all horribly wrong from that point on — if you make error estimates).
b) From what is left, select a large number of sites from geographical locations at different latitudes and longitudes that yield a uniform probability of covering the planet with a random selection process;
c) Roll dice (or use a random number generator) to select out of this pool.
or do you mean that you just did b), looked over a bunch of sites and grabbed one “at random” that didn’t have data problems. I ask because this latter process cannot be described as random. Unless you have an objectively random selection process, you should not use the term, and you should recognize that it is an opportunity for bias to creep in and/or for your work to be challenged. It has long been demonstrated that humans are almost incapable of generating random elements in any selection process. If you try to write down a random string of digits, it will almost certainly fail many tests of randomness. We make lousy random number generators!
Second point — why are you working so hard? Wood For Trees will let you slice and dice and fit the last 40 years of data a few thousand different ways, all online, with far, far more data (and hence much more statistically meaningful results).
Third point — I have no real idea of how you made your table, not from your description. In particular, how did you compute the number in each column labelled “Last X years”? You call them the “average change in temperature per year”. What does this mean? How do you compute it? In words it sounds like you are (per station):
a) Forming the 37 year mean (from all of the data over all 37 years?).
b) Subtracting the mean from the data to form the anomaly (for all 37 years) to form the temperature anomaly on a 37 year basis.
So far this is all good, or at least reasonable, and very similar to what Roy Spencer does with his truly global, properly selected and averaged UAH lower troposphere data which is far, far better than yours, and which he also sometimes presents with a fit (as I noted) with a smooth curve for fun. You can see his data here:
http://wattsupwiththat.com/widget/
and even put a widget on your personal webpage that automatically updates the graph every month (Anthony doesn’t re-plot this page quite often enough, sadly, so it gets a bit out of sync with the actual widget:-(.
Then you do not describe adequately how you form the average change per year during the period indicated. Do you:
c) Take the anomaly for 37 years ago, divide by 37, and make that your first column, anomaly for 32 years, divide by 32, and make that second column, etc?
c’) Take the anomaly for 37 years ago, subtract it from the anomaly for 36 years ago, divide by 1, make that the first column, etc?
c”) Something else?
The second of these is really “the slope” at 37 years ago, but what you are probably doing is akin to the first. I won’t criticize this at this point (although I could) except to note two things: Why 37, 32, 22 and 12 years? Why not 37, 36, 35…? Why vary the interval? Why not coarse grain the average so that the same number of samples contribute on both sides of a centroid, that is look at the change per year over a seven year interval and center the slope on the middle year? Both of these are serious flaws in your approach as year 37 MIGHT have been (and in fact was) anomalous in some way (in particular, it might have been sitting on or in the El Nino bobble clearly visible in the UAH data). Picking particular years in the set is yet another form of bias, intentional or unintentional.
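
A minimal sketch of the centered, fixed-window slope suggested in the paragraph above: fit a straight line to each seven-year window of annual anomalies and assign the slope to the middle year. The anomaly series below is invented purely for illustration.

```python
# Centered slopes: the same number of samples contribute on both sides of the
# year the slope is assigned to, and the window length never changes.
import numpy as np

def centered_slopes(years, anomalies, window=7):
    half = window // 2
    slopes = {}
    for i in range(half, len(years) - half):
        ys = years[i - half:i + half + 1]
        an = anomalies[i - half:i + half + 1]
        slopes[int(years[i])] = np.polyfit(ys, an, 1)[0]   # degrees per year over the window
    return slopes

years = np.arange(1975, 2012)
anoms = 0.015 * (years - 1975) + 0.1 * np.sin(0.5 * (years - 1975))   # toy anomaly series
for year, slope in list(centered_slopes(years, anoms).items())[:3]:
    print(year, round(slope, 4))
```
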
So then (AFAICT) you end up with the four columns of data, one entry per “randomly” selected, infilled station. Let’s focus on just one thing — the means. Your final product is:
Last X years:  37      32      22      12
Mean:          0.015   0.013   0.014   -0.019
You fit these four points — again, if I understand your method correctly, which I may not — to:
y(x) = -0.00011 x^2 + 0.0067 x - 0.08240
and claim an R^2 = 0.954.
Here I'm a bit dismayed. First of all, none of these points have an error estimate or a standard deviation (whatever that might mean, frankly, for the method you are using). Goodness-of-fit estimates are thus almost completely meaningless. Of course you have the data and might have generated the s.d. for the means for each column, in which case you could have fit the data and generated Pearson's \chi^2, which would have been at least moderately meaningful. Or perhaps you are fitting the entire table of data all at once in a routine that internally generates \chi^2 and R^2 (if so, what routine or toolset are you using?). How do you justify fitting the mean anomaly in this way, given that different stations have very different temperature anomaly ranges (so that you automatically give the greatest weight to the stations with the greatest anomaly)?
Here is where your using different intervals to compute the slope comes back to haunt you. When you divide by 37 you suppress the variation in the data by a factor of 3 compared to dividing by 12 (if you use method c) above). So the 12 year data is incorrectly weighted in computing error estimates.
In the end, your table of data means above is obviously not quadratic, it has two local maxima and two local minima. Indeed, it could be fit exactly with a cubic — so why not use a cubic? Perhaps because the cubic gets warmer in the past? But how do you reject that possibility, since you have no a priori reason to choose a constant, linear, quadratic or cubic or fourth, fifth or fiftieth order polynomial (except that with only four points cubic is already perfect).
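
For concreteness, a minimal sketch using the four tabulated means and low-order polynomial fits; treating the columns as x = 37, 32, 22 and 12 years is my reading of the table, made only for this illustration.

```python
# With only four points, a cubic interpolates them exactly, so goodness of fit
# alone cannot choose between models, and the models disagree once you leave
# the data.  The y-values are the four column means quoted above; the x-values
# are an assumption for this sketch.
import numpy as np

x = np.array([37.0, 32.0, 22.0, 12.0])
y = np.array([0.015, 0.013, 0.014, -0.019])

quad = np.polyfit(x, y, 2)    # three parameters for four points
cubic = np.polyfit(x, y, 3)   # four parameters: passes through every point

print("quadratic residuals:", np.round(y - np.polyval(quad, x), 4))
print("cubic residuals    :", np.round(y - np.polyval(cubic, x), 6))   # ~0 by construction
print("quadratic at x = 50:", round(np.polyval(quad, 50.0), 3))
print("cubic at x = 50    :", round(np.polyval(cubic, 50.0), 3))
# Both "fit" the table; nothing in the fit itself says which, if either,
# can be trusted outside the four points.
```
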
Then you do something really odd. You assert that since your fit has zeros — two of them — “cooling has already begun” at the year corresponding to a zero in the fit range. Cooling compared to what? The 37 year mean? You note that (since your quadratic is upside down) there is another zero 42 years ago. Was the earth “cooling” 47 years ago? 60? 100?
Of course not. And this is why your entire result is nonsense, or if not nonsense, nothing one cannot get faster, more accurately, and without the egregious claims by simply looking at the UAH data, well plotted, or looking at your own data for just the annual anomalies, well plotted, and without “selecting” particular years. Just plot the full dataset (scatter plot) or the means, per year, and please — compute the s.d. per year and include it as an error bar on the means for the year. Then fit your whatever to the full dataset — to the anomalies themselves, not to the “slopes” averaged over some variable interval. Compute \chi^2 for the fit, as it will be very revealing — if the error bars are as wide as the range of variation of the data from end to end of the plot, \chi^2 will be very small and so will R^2. If the error bars are small, \chi^2 will be much larger as a smooth low order polynomial fit will not interpolate the data particularly well, but a linear R^2 may be significantly different from 0, indicating a meaningful trend on the interval.
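
A minimal sketch of the procedure described here: per-year means with an error bar, a weighted straight-line fit, and Pearson's χ² for that fit. The 45-station anomaly array is randomly generated stand-in data, and using the standard error of the yearly mean as the error bar is one reasonable choice, not the only one.

```python
# Per-year means with error bars, a weighted linear fit, and chi-squared.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1975, 2012)
# Stand-in "per-station" anomalies: 45 stations, one value per year.
station_anoms = 0.01 * (years - years.mean()) + 0.3 * rng.normal(size=(45, years.size))

means = station_anoms.mean(axis=0)
sigma = station_anoms.std(axis=0, ddof=1) / np.sqrt(station_anoms.shape[0])  # s.e. of each yearly mean

coeffs = np.polyfit(years, means, 1, w=1.0 / sigma)    # weighted least-squares line
fit = np.polyval(coeffs, years)

chi2 = np.sum(((means - fit) / sigma) ** 2)
dof = years.size - 2                                   # two fitted parameters
print("trend (deg per year):", round(coeffs[0], 4))
print("chi^2 / dof         :", round(chi2 / dof, 2))   # near 1: the line is consistent with the error bars
```
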
Look, statistical analysis, and fits, aren't "magic". They can't squeeze information out of noise, and most of the content of any given graph of temperature or (especially) temperature anomalies is noise! It is also very much a case of GIGO — garbage in, garbage out. There's no reason to think that a quadratic fit to anomalies (even the strange ones you compute) should be more meaningful than a linear fit, or for that matter a constant over the interval. Every year the anomaly will, after all, be different from the mean, and if you generated a completely random distribution of anomalies for the years involved, with a given width around the mean, for a short enough interval you could most definitely fit a low order polynomial to the result, even when you know that the result has no meaning. It would have very, very small polynomial coefficients — just like yours does! Those small coefficients are a warning — you could be (and mostly, probably, are) fitting noise, not trend.
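
A minimal sketch of the "fitting noise" warning: quadratic fits to purely random anomalies, with no trend in them at all, return small, plausible-looking coefficients, and each realization returns a different set. The noise width and window length are arbitrary choices for illustration.

```python
# Quadratics fitted to trend-free noise: small coefficients are not evidence
# of a real signal, and a fresh realization gives a different "trend".
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(37)                                 # a 37-year window, as in the thread
for trial in range(3):
    anoms = 0.2 * rng.normal(size=years.size)         # pure noise around a constant mean
    a2, a1, a0 = np.polyfit(years, anoms, 2)
    print(f"trial {trial}: a2={a2:+.5f}  a1={a1:+.4f}  a0={a0:+.3f}")
```
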
That’s why most people stop with computing a linear trend over short baseline data. Even that is usually meaningless, but that’s your best chance for meaning. Quadratic curvature? Difficult to infer, especially when the means themselves are manifestly not quadratic over only 4 samples, and believe me, if you fit a cubic to the data you’d get completely different quadratic and linear and constant coefficients, a sign that the expansion is not systematically convergent. Getting a linear trend clearly distinguishable from no trend is usually considered — incorrectly — to tell you something about what might happen if one continues the trend to extrapolate the future outside of the interval.
It's incorrect because for short sample baselines, in climate science, the curves are never nice linear/smooth low order functions. Well, except for CO_2 concentration at the top of Mauna Loa — that's a major exception to many rules. They are noisy as all hell, fluctuate up and down and sideways, and have completely different "trends" as one makes small changes in the baseline of the averages. What you are doing is no different from what Hansen et al. do/did when they drew a simple exponentially diverging fit to the temperature anomaly and predicted — oh so confidently — that the temperature right now would be some 0.2 to 0.4 C warmer than it is, extrapolating the trend from the 80s and 90s into the 2000s. Oops.
Why commit the same, stupid, oops?
rgb

June 22, 2012 6:55 am

ROBERT BROWN MAKES MUCH SENSE IN HIS POSTINGS.

June 22, 2012 7:15 am

ROBERT, do you see where I am coming from? I am saying once we are in a particular regime, then relatively small, short-lived solar variability won't necessarily correspond to temperature changes, because of the stronger negative feedbacks inherent in the climate system once it is in a particular regime.
Only extreme changes in the items that control the climate (the sun for example) can bring about enough of a positive feedback to overcome those inherent negative feedbacks and switch the climate into another state with a different range in temperature.
So far nothing has come close to doing this, since we left the DALTON MINIMUM climate regime and went to what we presently have. Only minor random variations have taken place since, causing random up and down temperature noise due to the random small oscillations in earth's climatic system. Those (some examples) being convection rates in the tropics, minor solar variations, ENSO, volcanic activity, the arctic oscillation phase, PDO/AMO, etc. All strong enough to cause minor temperature changes, but far too weak to bring earth's climatic system to a threshold which could bring it to another climate regime.

Gail Combs
June 22, 2012 7:19 am

Salvatore Del Prete says:
June 21, 2012 at 4:43 pm
this site is not letting me write in it the way I want to…..
_________________________________
I normally write in a word processor like Gedit or LibreOffice Writer and then paste the comment. I make sure I have the html tags in place when I write and before I paste. Emacs is used to check html tags in long posts if I can catch Hubby to do it for me. (I am computer challenged)
Here is a guide to the HTML tags and lots of other stuff for WUWT: Ric Werme’s Guide to Watts Up With That
Hope that helps.
