Detecting the AGW Needle in the SST Haystack

Guest essay by Jeffery S. Patterson

My last post on this site examined the hypothesis that the climate is dominated by natural, harmonically related periodicities. As they say in the business, the critics were not kind. Some of the criticisms were due to a misunderstanding of the methodology and others stemmed from an underappreciation of the tentativeness of the conclusions, especially with respect to forecasting. With respect to the sparseness of the stochastic analysis, the critics were well founded. This lack of rigor is why it was submitted as a blog post and not a journal paper. Perhaps it will spark the interest of someone who can do the uncertainty analysis properly, but I have a day job.

One of the commentators suggested I repeat the exercise using a technique called Singular Spectrum Analysis (SSA), which I have done in a series of posts starting here. In this post, I turn my attention away from cycles and modeling and towards signal detection. Can we find a signature in the temperature data attributable to anthropogenic effects?

Detecting small signals in noisy data is something I am quite familiar with. I work as a design architect for a major manufacturer of test equipment, half of which is dedicated to the task of finding tiny signals in noisy data. These instruments can measure signals on the order of −100 dBm (10⁻¹³ W). Detecting the AGW signal should be a piece of cake (tongue now removed from cheek).

Information theory (and a moment’s reflection) tells us that in order to communicate information we must change something. When we speak, we modulate the air pressure around us and others within shouting distance detect the change in pressure and interpret it as sound. When we send a radio signal, we must modulate its amplitude and/or phase in order for those at the other end to receive any information. The formal way of saying this is that ergodic processes (i.e. processes whose statistics do not change with time) cannot communicate information. Small-signal detection in noise, then, is all about separating the non-ergodic sheep from the ergodic goats. Singular Spectrum Analysis excels at this task, especially when dealing with short time series.

Singular Spectrum Analysis is really a misnomer, as it operates in the time domain rather than the frequency domain to which the term “spectrum” normally applies. It allows a time series to be split into component parts (called reconstructions or modes) and sorts them in amplitude order, with the mode contributing most to the original time series first. If we use all of the modes, we get back the original data exactly. Or we can choose to use just some of the modes, rejecting the small, wiggly ones, for example, to recover the long-term trend. As you may have guessed by now, we’re going to keep the non-ergodic, information-bearing modes and relegate the ergodic, noisy modes to the dust bin.
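For readers who want to experiment, here is a minimal sketch of basic SSA in Python/NumPy. The post doesn’t say what software was used, so this is an illustration of the standard algorithm (embed, SVD, group, diagonally average), not the author’s code:

```python
import numpy as np

def ssa_reconstruct(x, L, modes):
    """Reconstruct the part of series x carried by the listed SSA eigenmodes.

    x     : 1-D time series
    L     : window length (at most half the record length)
    modes : iterable of 0-based eigenmode indices to keep
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    # Trajectory (Hankel) matrix: column j is the lagged window x[j:j+L]
    X = np.column_stack([x[j:j + L] for j in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Sum the rank-one matrices of the selected eigenmodes
    Xr = sum(s[m] * np.outer(U[:, m], Vt[m]) for m in modes)
    # Diagonal averaging (Hankelization) maps the matrix back to a series
    recon = np.zeros(N)
    counts = np.zeros(N)
    for i in range(L):
        recon[i:i + K] += Xr[i]
        counts[i:i + K] += 1
    return recon / counts
```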

SSA normally depends on two parameters: a window length L, which can’t be longer than ½ the record length, and a mode-selection parameter k (k is sometimes a multi-valued vector if the selected modes aren’t sequential, but here they are). This can make the analysis somewhat subjective and arbitrary. Here, however, we are deconstructing the temperature time series into only two buckets. Since the non-ergodic components contribute most to the signal characteristics, they will generally be in the first k modes, and the ergodic components will be sequential, starting from mode k+1 and including all remaining L-k-1 modes. Since in this method L only controls how much energy leaks from one of our buckets to the other, we set it to its maximum value to give the finest-grained resolution to the division between our two buckets. Thus our analysis depends only on a single parameter k, which is set to maximize the signal-to-noise ratio.
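In terms of the sketch above, the two-bucket split amounts to the following, where `sst` stands for the temperature series and `k` for a trial mode count (both placeholders; one would scan k and keep the value that maximizes the signal-to-noise ratio):

```python
L = len(sst) // 2                                 # maximum allowed window length
signal = ssa_reconstruct(sst, L, modes=range(k))  # non-ergodic "signal" bucket
residual = sst - signal                           # ergodic "noise" bucket; the two sum to the data
```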

That’s a long time to go without a picture, so here are the results of the above procedure applied to the Northern Hemisphere Sea Surface Temperature data.


Figure 1 – SST data (blue) vs. a reconstruction based on the first four eigenmodes (L=55, k=1–4)

The blue curve is the data and the red curve is our signal “bucket” reconstructed from the first four SSA modes. Now let’s look in the garbage pail.


Figure 2 – Residual after signal extraction

We see that the residual indeed looks like noise, with no discernible trend or other information. The distribution looks fairly uniform; the slight double peak is probably due to the early data being noisier than the more recent data. Remembering that the residual and the signal sum to the original data, and since there is no discernible AGW signal in the residual, we can state without fear of contradiction that any sign of AGW, if one is to be found, must be found in the reconstruction built from the non-ergodic modes, plotted in red in Figure 1.

What would an AGW signal look like? The AGW hypothesis is that the exponential rise in CO2 concentrations seen since the start of the last century should give rise to a linear temperature trend impressed on top of the climate’s natural variation. So we are looking for a ramp or, equivalently, a step change in the slope of the temperature record. Here’s an idea of what a trendless climate record (with natural variation and noise similar to the observed SST record) might look like, with (right) and without (left) a 4 °C/century AGW component. The four curves on the right represent four different points in time at which the AGW component first becomes detectable: 1950, 1960, 1970 and 1980.
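The post doesn’t give the recipe behind the simulation, but a toy record of this kind is easy to generate. Here is a sketch; every parameter value below is an illustrative guess of mine, not the author’s:

```python
import numpy as np

def toy_record(years, ramp_start=None, ramp_rate=0.04,
               period=65.0, amp=0.2, noise_sd=0.05, seed=0):
    """Trendless pseudo-climate: a ~65-year oscillation plus white noise,
    optionally with a linear AGW ramp switched on at ramp_start.
    ramp_rate = 0.04 degC/yr corresponds to 4 degC/century."""
    rng = np.random.default_rng(seed)
    t = np.asarray(years, dtype=float)
    x = amp * np.sin(2.0 * np.pi * (t - t[0]) / period)
    x += rng.normal(0.0, noise_sd, len(t))
    if ramp_start is not None:
        x += ramp_rate * np.clip(t - ramp_start, 0.0, None)
    return x

years = np.arange(1850, 2014)
left_panel = toy_record(years)                    # no AGW component
right_panel = [toy_record(years, ramp_start=y)    # AGW onset at four different dates
               for y in (1950, 1960, 1970, 1980)]
```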


Figure 3 – Simulated de-trended climate record without an AGW component (left) and with 4 °C/century AGW components (right)

Clearly, a linear AGW signal of the magnitude suggested by the IPCC should be easily detectable within the natural variation. Here’s the real de-trended SST data. Which plot above does it most resemble?


Figure 4 – De-trended SST data

Note that for the “natural variation is temporarily masking AGW” meme to hold water, the natural variation during the AGW observation window would have to be an order of magnitude higher than that which occurred previously. SSA shows that not to be the case. Here is the de-trended signal decomposed into its primary components. (Note: for reasons too technical to go into here, oscillatory SSA modes occur in pairs. The two signals plotted below comprise all four signal modes, grouped as pairs.)


Figure 5 – Reconstruction of SSA modes 1,2 and 3,4

Note that the peak-to-peak variation has remained remarkably constant across the entire data record.

OK, so if it’s not 4 °C/century, what is it? Remember, we are looking for a change in slope caused by the AGW component. The plot below shows the slope of our signal reconstruction (which contains the AGW component, if any) over time.


Figure 6 – Year-to-year difference of reconstructed signal

We see two peaks: one in 1920, well before the effects of AGW are thought to have been detectable, and one slightly higher around 1995. Let’s zoom in on the peaks.


Figure 7 – Difference in signal slope potentially attributable to AGW

The difference in slope is 0.00575 °C/year, or ~0.6 °C/century. No smoothing was applied to the first-difference plot above, as would normally be required, because we have already eliminated the noise component that makes smoothing necessary.
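In code terms this step is a plain first difference of the signal bucket (a sketch reusing `signal` from the earlier snippet):

```python
slope = np.diff(signal)   # year-to-year slope of the reconstruction, degC/yr
delta = 0.00575           # difference between the two peak slopes (from the post)
print(delta * 100.0)      # 0.575, i.e. ~0.6 degC per century
```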

Returning to our toy climate model of Figure 3, here’s what it looks like with a 0.6 °C/century slope (left), with the de-trended real SST data on the right for comparison.


Figure 8 – Simulated de-trended climate record with 0.6 °C/century linear AGW components (see Figure 3 above), left; de-trended SST (northern hemisphere) data, right

Fitting the SST data on the right to a sine-wave-plus-ramp model yields a period of ~65 years with the AGW corner at 1966, about where climatologists expect it. The slope of the AGW fit? 0.59 °C/century, arrived at completely independently of the SSA analysis above.
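The post doesn’t show the fitting code, but a sine-plus-ramp fit of this kind might look like the sketch below, using SciPy’s curve_fit. The starting guesses are mine (seeded near the quoted answers), and the toy series from the earlier sketch stands in for the de-trended SST data, which isn’t reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

def sine_plus_ramp(t, amp, period, phase, corner, rate, offset):
    """Sine wave plus a hinged linear ramp that switches on at t = corner."""
    ramp = rate * np.clip(t - corner, 0.0, None)
    return amp * np.sin(2.0 * np.pi * t / period + phase) + ramp + offset

years = np.arange(1850, 2014, dtype=float)
sst_detrended = toy_record(years, ramp_start=1966, ramp_rate=0.006)  # stand-in data

p0 = [0.2, 65.0, 0.0, 1966.0, 0.006, 0.0]        # rough starting guesses
popt, _ = curve_fit(sine_plus_ramp, years, sst_detrended, p0=p0)
print(popt[1], popt[3], popt[4] * 100.0)         # period (yr), corner (yr), slope (degC/century)
```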

Conclusion

As Monk would say, “Here’s what happened.” During the global warming scare of the 1980s and 1990s, the quasi-periodic modes comprising the natural temperature variation were both in their phase of maximum slope (see Figure 5). This naturally occurring phenomenon was mistaken for a rapid increase in the persistent warming trend and attributed to the greenhouse-gas effect. When these modes reached their peaks approximately 10 years ago, their slopes abated, resulting in the so-called “pause” we are currently enjoying. This analysis shows that the real AGW effect is benign and much more likely to be less than 1 °C/century than the 3+ °C/century given as the IPCC’s best guess for the business-as-usual scenario.



125 Comments
Editor
September 27, 2013 9:27 am

Dr Norman Page says:
September 27, 2013 at 6:10 am

Willis You are being deliberately obtuse or misleading or misunderstanding the numbers. The forecast temperatures clearly refer to the Fig 8 – SST Global Temperature anomaly in the last post at http://climatesensenorpag.blogspot.com

No, they don’t “clearly” refer to that or I would have seen it. In any case, they still lack specificity.

Thus the 2035 anomaly number is minus 0.15 and the 2100 number is minus 0.5
For some reason you didn’t recognize the minus sign.

Norman, as I said, your claims are far too vague to be falsified. For example, in this particular case you said:

4 Temperature Hadsst3 moving average anomaly 2035 – 0.15

If I said no, your forecast was wrong, then you can simply pick a different time period for your moving average. Or you can say that you meant a centered moving average, not a trailing moving average.
I say again: If it can’t be falsified, it’s not a forecast … and by and large what you call “forecasts” are totally and completely unfalsifiable.

The earlier 3 forecasts are general trends and events in the context of the forecast of the minus 0.15 anomaly in 2035. The 2650 comment follows logically from a repeat of the 1000 to 2000 cycle.
I say later that at this time this forecast is speculative but it is by no means meaningless.
You can’t replace something with nothing. What would your best shot at the global HadSST3 numbers for 2035, 2100 and 2650 be?

I say that any “forecast” for 2650 is speculative, meaningless, and a joke. I say that if you think a forecast for 2650 is a valid forecast in any sense of the word, you’ve lost the plot entirely.
Look, if you want to be the New Age Nostradamus and make wacky “forecasts” for events half a millennium from now, that’s your choice, I can’t stop you.
But claiming that a climate forecast for 2650 is science, especially a “forecast” with an unspecified “moving average” of unknown length? Don’t make me laugh.
IF IT IS NOT FALSIFIABLE IT IS NOT A FORECAST!! Again I say, think of it as a bet that you really, really don’t want the other guy to be able to weasel out of. If you say “it will be warm tomorrow”, can he get out of paying you? Sure … you didn’t say how warm. If you say “it will rain tomorrow” he can say “you didn’t say how much”, and if you said how much, he can say “you didn’t say where”.
And as a result, you need NUMBERS, NUMBERS, NUMBERS. Even if you are forecasting what you call “general trends”, you still need numbers—how big will the trends be, how will they be calculated, what data will you use, what are the starting and ending points. Saying a “moving average anomaly” says nothing, Norman. How long an average? Gaussian average or regular? Centered average or trailing?
As I said, the bad news is that what you’ve done to date in your 30-year so-called “forecasts” is mumble and wave your hands … so you’ll have to throw it all out. Standing around claiming your existing “forecasts” are good will only cause people to point and laugh. Throw them out and start over with new ones, ones with too many numbers if anything, forecasts that are solidly defined, and thus falsifiable.
Because asking people about what their forecasts are for 2650 … that goes nowhere.
w.

Dr Norman Page
September 27, 2013 9:36 am

Matthew You and Willis are perfectly entitled to believe that you can’t predict sufficiently accurately to matter. I beg to differ as far as I am concerned, and I made a prediction which I think is testable within a usefully short time frame. See my comment at 9/26 3:57 pm:
“You ask for falsifiability. My forecasts would be seriously in question if there is not 0.15 – 0.2 degrees of cooling in the global SSTs by 2018-20.”

RC Saumarez
September 27, 2013 9:37 am

Patterson.
Thanks, I’m not trying to be bloody-minded and I’ve never used SSA. The problem, as I see it, is that we are used to thinking of “orthogonal components” in a signal as being orthogonal to each other because this is so conditioned into our thinking (or at least mine!) from Fourier, Z transforms etc., and I made the intellectually lazy jump of assuming that PCA orthogonality necessarily implied component orthogonality in the strict inner-product sense.
The problem arises from the vector space into which you project the signal – usually this is an orthonormal space. However, the lagged covariance space is not orthonormal, and the basis of this space and projection into it doesn’t necessarily form a mapping into an orthonormal signal space. I’ve looked a little at the basis of SSA, but frankly I think I would have to do an MPhil on the subject to really get my head around it!
However, we can construct an argumentum ad absurdum, as Lord Monckton would probably phrase it. If we take a signal that is based on orthonormal components:
f(t)=cos(at)+cos(2at)
this will presumably separate into 2 cosine components that are orthonormal. If we then take a signal that is the sum of a pulse and a triangle, which are of different lengths, SSA will identify these, but they are not orthogonal signals. I.e., the results of SSA do not necessarily produce a set of orthogonal basis signals.
This is the problem with decomposition of a temperature signal. If the “natural” and “anthropogenic” signals are correlated, will they necessarily be identified by SSA?
I don’t know about you, but I went into quite a lot of maths when learning signal processing but then used the techniques and used the theory as “rules of thumb”. I’ve found that I’ve made a few clangers and then I’ve had to go and dig into the theory, which I hadn’t understood as well as I should. Unfortunately, as I get older, this gets more difficult!
Cheers,
Richard Saumarez
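Saumarez’s cosine example can be tried directly with the ssa_reconstruct sketch shown earlier in the post (my illustration, not his code; the amplitudes are made unequal here so the two eigenpairs separate cleanly):

```python
import numpy as np

t = np.arange(400, dtype=float)
a = 2.0 * np.pi / 100.0
f = np.cos(a * t) + 0.5 * np.cos(2.0 * a * t)

c1 = ssa_reconstruct(f, L=100, modes=[0, 1])   # first eigenpair: the larger cosine
c2 = ssa_reconstruct(f, L=100, modes=[2, 3])   # second eigenpair: the smaller one
cos_angle = np.dot(c1, c2) / (np.linalg.norm(c1) * np.linalg.norm(c2))
print(cos_angle)   # near zero -> these particular reconstructions are orthogonal
```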

Dr Norman Page
September 27, 2013 10:02 am

Willis It is not a great stretch to propose that the period from about 2000 – 3000 will see a quasi-repeat of the trends from 1000 – 2000; see Figs 6 and 7 of the post at http://climatesense.norpag.blogspot.com
That is a useful and reasonable working hypothesis. Much better than the IPCC CO2 driver claim
There are several ways of projecting the trends forward. The numbers I give are just one way of providing very reasonable ball park estimates.
I think your problem (and that of many of the other skeptics too) is that you can’t believe that forecasting the general trends of climate over the next several centuries can be so obvious, simple and commonsensical.
I agree we need to understand the mechanisms, and many of your posts are very illuminating in that regard. I especially like your post at 26/9:42.
Are you familiar with http://www.happs.com.au/images/stories/PDFarticles/TheCommonSenseOfClimateChange.pdf
I think there is a great deal there that you would find interesting re mechanisms and teleconnections.

Dr Norman Page
September 27, 2013 10:14 am

Willis One more thing If you showed me data that disagreed with my forecast I would have no problem admitting it. There is no arguing with a dry hole.

milodonharlani
September 27, 2013 10:24 am

Leo Smith says:
September 27, 2013 at 3:31 am
I did neglect to point out that it took almost 1500 years to falsify Ptolemy’s epicycles, both for institutional reasons (adherence to a ruling scientific paradigm & religious dogma) & for lack of the needed instrument (the telescope with which to observe the phases of Venus), but only because I didn’t think it necessary. Ptolemy had the right number of orbits for the then known planets, but made the mistake of placing the sun where the earth should be among them. The orbits however featured epicycles upon them in order to compensate for their circular [orbits] instead of [elliptical orbits]. The earth was also offset slightly from the center of the presumed concentric spheres.
I agree with your conclusion, correct me if I’m misstating it, that cycles can be observed in climate, even if humans haven’t yet figured out what causes most of them. It appears to me that they are observable on decadal, centennial, millennial, myriadal (if that’s a word) time scales, as well as the better-explained 100,000 year order of magnitude Milankovitch cycles. A case can also IMO be made for longer climatic cycles on the four orders of magnitude from a million to a billion years.
Since Milankovitch cycles are strongly supported & well explained, why not longer & shorter ones? Discussions & explorations of possible explanations for them have been suppressed by the presently prevailing ideological paradigm & politico-religious dogma of CACA.

davidmhoffer
September 27, 2013 10:24 am

Matthew R Marler;
davidmhoffer’s argument is that there is so much more water vapor than CO2, even after doubling CO2, that the increased downwelling of IR at the surface can’t be very much. Indeed, no one has argued that it is very much: the predicted equilibrium effect is only 1/2% (1.3K/288K), but a case that it is 0 is not complete, and not believable.
>>>>>>>>>>>>>>>>>>>
I never said it was 0, in fact I’ve been very active in this forum debunking the claims of those who claim it is. What I was trying to get at is that the commonly quoted CO2 doubling = 3.7 w/m2 ~ 1.2 deg C is not what you think it is. That calculation has nothing to do with either surface temperature or surface forcing. IPCC AR4 WG1 kinda glosses this over and refers you back to AR3, links to which I don’t have handy but here’s the basic physics.
Stefan Boltzmann Law is that P(w/m2)=5.67*10^-8*T^4
with T in degrees K.
So run the numbers. Average temperature of earth surface is commonly given as 15 C or 288 K. If you add 3.7 w/m2 to 288 K you would get an increase of 0.68 degrees…. not 1.2 degrees. So where does the 1.2 degrees come from? Glad you asked.
The “effective black body temperature” of earth is about -20 C or 253K. That’s the temperature of earth as seen from space. It isn’t the temperature of earth at the surface, nor is it the temperature of earth at Top of Atmosphere (TOA). It’s the temperature somewhere in between, roughly at the Mean Radiating Level (which is a lengthy discussion unto itself). So let’s add 3.7 w/m2 to 253K and run the numbers through SB Law and we get… 1.0 degrees.
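Both numbers can be reproduced by inverting the Stefan-Boltzmann law; a quick sketch (the function name is mine):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def warming_from_forcing(T, dF):
    """Temperature rise needed to radiate an extra dF W/m^2 from base temperature T (K)."""
    P = SIGMA * T**4                       # emission at the base temperature
    return ((P + dF) / SIGMA) ** 0.25 - T  # invert P = SIGMA * T^4 for the new temperature

print(warming_from_forcing(288.0, 3.7))    # ~0.68 K at the 288 K surface
print(warming_from_forcing(253.0, 3.7))    # ~1.0 K at the 253 K effective radiating level
```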
So we now have two numbers for sensitivity to CO2 doubling, one at surface and one at effective black body temperature of earth. Which one is correct? Answer: NEITHER.
Doubling of CO2 changes the effective black body temperature of earth by precisely 0. What it changes is the altitude at which the effective black body temperature of earth occurs. Now it gets messy from there. If the atmosphere was uniform in composition, we could probably extrapolate some linear function from the MRL to arrive at surface forcing in w/m2 and temp change, but the atmosphere ISN’T uniform. You’ve got thousands of ppm of water vapour at low altitude, and only dozens at high altitude.
Further, the 3.7 w/m2 number doesn’t exist at any given point in the atmosphere in the first place. It is calculated from the sum of all downward LW emissions that otherwise would not have existed from the surface up to the TOA. So, it doesn’t exist as an energy flux in the traditional sense in the first place, it is smeared across the atmospheric air column and it has to be put in the context of the energy flux that already existed before CO2 doubled. With low altitude water vapour running in the 30,000 ppm + range, an extra dose of CO2 is tiny in terms of effective surface forcing…which is what the graph I linked to in the first place shows.

milodonharlani
September 27, 2013 10:24 am

Elliptical instead of eccentric in the above. Sorry.

Matthew R Marler
September 27, 2013 12:05 pm

davidmhoffer: Further, the 3.7 w/m2 number doesn’t exist at any given point in the atmosphere in the first place. It is calculated from the sum of all downward LW emissions that otherwise would not have existed from the surface up to the TOA. So, it doesn’t exist as an energy flux in the traditional sense in the first place, it is smeared across the atmospheric air column and it has to be put in the context of the energy flux that already existed before CO2 doubled. With low altitude water vapour running in the 30,000 ppm + range, an extra dose of CO2 is tiny in terms of effective surface forcing…which is what the graph I linked to in the first place shows.
I agree it’s not a uniform value in space or time.
I agree the increased surface downwelling IR caused by doubling CO2 is tiny — the projected effect from equilibrium calculations is an approximately 0.5% increase in the hypothetical equilibrium temperature. Realistically, I don’t believe the equilibrium calculations tell us what we want to know, and the induced surface change is not uniform in space or time any more than the radiation change.
Back to my original question: Given that the increase in downwelling IR is non-uniform and tiny, does the effect differ among water, wet ground, and dry ground? Is it possible for a tiny increase in downwelling IR on the Equator (say) in the central Pacific, in summer (or winter, or always) to increase the water vapor without raising the surface temperature? With 70% of the earth surface being ocean, it would seem that knowing this is a requirement for even a first calculation of the transient effect of doubling CO2 concentration.

Editor
September 27, 2013 2:26 pm

Dr Norman Page says:
September 27, 2013 at 10:14 am

Willis One more thing If you showed me data that disagreed with my forecast I would have no problem admitting it. There is no arguing with a dry hole.

You truly don’t seem to get it. Your so-called “forecasts” are so vague that it is nearly impossible for the data to disagree with them.
I give up. Wave your hands and say “tomorrow will be kinda like today” … or in your words, “Willis It is not a great stretch to propose that the period from about 2000 – 3000 will see a quasi repeat of the trends from 1000 – 2000”
What is a “quasi-repeat”? There’s no “there” there in that statement. It’s pure handwaving, and is meaningless. What data could disagree with a forecast of a “quasi-repeat”?
I’ve told you what you need: numbers and specificity. Until you provide them, you’ll be just another crank Nostradamus wannabe. I can’t seem to dent your armor, so I give up. The field is yours, you’ve left me behind.
w.

Jeffery S. Patterson
September 27, 2013 2:30 pm

The recent post on climate sensitivity by Willis Eschenbach persuaded me that I should include the pre-1900 data in the analysis. I’ve done so here. It confirms the result above and adds a new wrinkle that lends credence to the comments here that even the 0.6 degC/century change in slope cannot be attributed to AGW.

davidmhoffer
September 27, 2013 5:04 pm

Matthew R Marler;
Given that the increase in downwelling IR is non-uniform and tiny, does the effect differ among water, wet ground, and dry ground?
>>>>>>>>>>>>>>>>>
Yes. Still water will absorb LW pretty much 100% in the first few microns, causing the water to be vapourized. Not much of the ocean is still, however; there are waves, flotsam, lotsa rain, etc., which complicates the matter, but my current understanding is that this would be the dominant process. On the ground, on the other hand, the dominant process would be absorbing the LW, raising the temperature of the ground surface. Either way you have additional energy at the surface/atmosphere interface, and where it goes from there gets pretty complicated.

Dr Norman Page
September 27, 2013 5:47 pm

Willis Not much point in carrying this further. I think forecasting an HadSST3 5 year moving average of minus 0.5 at 2035 and minus 0.5 at 2100 is pretty precise and is clearly distinguished and distinguishable from e.g. the IPCC warming trend; what more would you expect at this time? Also pointing out the possibility of continued cooling until 2650 is perfectly reasonable looking at the Christiansen data set and the current state of solar activity.
Enjoyed your English travel bit -I’m from Liverpool originally.

Editor
September 27, 2013 6:48 pm

davidmhoffer says:
September 27, 2013 at 5:04 pm

Matthew R Marler;

Given that the increase in downwelling IR is non-uniform and tiny, does the effect differ among water, wet ground, and dry ground?

>>>>>>>>>>>>>>>>>
Yes. Still water will absorb LW pretty much 100% in the first few microns, causing the water to be vapourized. Not much of the ocean is still, however; there are waves, flotsam, lotsa rain, etc., which complicates the matter, but my current understanding is that this would be the dominant process. On the ground, on the other hand, the dominant process would be absorbing the LW, raising the temperature of the ground surface. Either way you have additional energy at the surface/atmosphere interface, and where it goes from there gets pretty complicated.

Again, the numbers don’t work. Evaporation is about 80 W/m2. Downwelling IR averages 330 W/m2. Much evaporation is from the sun. That means that 90% of the DLR worldwide is not going to evaporation, but to warming the surface.
In both the land and the ocean, the IR is absorbed in the first few microns. There is an urban legend that the IR is transferred down into the land, but not into the ocean … I don’t believe that in the slightest. What’s stopping it in the ocean but not in the land? See the four questions in my post “Radiating the Ocean”. If you can’t answer all four of them, your radiation theory is in trouble.
w.

Editor
September 27, 2013 7:05 pm

Dr Norman Page says:
September 27, 2013 at 5:47 pm

… I think forecasting an HadSST3 5 year moving average of minus 0.5 at 2035 and minus 0.5 at 2100 is pretty precise and is clearly distinguished and distinguishable from e.g. the IPCC warming trend; what more would you expect at this time?

OK … suppose we get to 2038, and for the year 2035 the trailing 5 year average is -0.4, and because temperatures continued to drop, the centered 5 year average for 2035 is -0.6 … is your forecast right or wrong? And more importantly, what is the baseline for the temperature? 1951-1980? 1981-2010?

Also pointing out the possibility of continued cooling until 2650 is perfectly reasonable looking at the Christiansen data set and the current state of solar activity.

It is so far from falsifiable that it has no meaning at all. What’s the point of such far-fetched and unverifiable speculation? Stick to what we can check, even 2100 is way too far out.

Enjoyed your English travel bit -I’m from Liverpool originally.

Thanks kindly. I had a good time in Liverpool, I liked the feeling of the city.
w.

davidmhoffer
September 27, 2013 7:29 pm

In both the land and the ocean, the IR is absorbed in the first few microns.
>>>>>>>>>>>>>>>>
Agreed. But the land doesn’t evaporate. And I said “still water”, and that wave action and other factors change the equation. And I said that what happens afterward gets very complicated for both scenarios. You’ve now got water vapour being generated that is in thermal contact with the water surface. So yes, I would also argue that the net effect is warming of both, but that wasn’t the question. The question was: is there a difference between how LW interacts with water vs land? And the answer is yes.

Matthew R Marler
September 28, 2013 10:03 am

Willis: Again, the numbers don’t work. Evaporation is about 80 W/m2. Downwelling IR averages 330 W/m2. Much evaporation is from the sun. That means that 90% of the DLR worldwide is not going to evaporation, but to warming the surface.
Thanks for the comments, but my question is about what happens if DWIR is added to the DWIR already present. I appreciate that the mechanics are complicated. My wonder was prompted by boiling water: if you increase the flame before the water starts boiling, the temperature of the water increases; if you increase the flame after the water starts boiling, then you increase the vaporization rate. Obviously, the situation is different at ocean and lake surfaces where you have chaotic mixing of the surface, wind and spindrift and such, and no literal “boiling”.
“Equilibrium” calculations treat the surface of the Earth as flat, uniform in surface texture, and uniformly insolated. Equilibria are quite rare in high dimensional non-linear dissipative systems (even on flat, uniform surfaces with uniform input) so the equilibrium calculations are a priori suspect. With a round, non-uniform surface non-uniformly insolated, I think that it is impossible to predict on present knowledge what a doubling of CO2 concentration will actually produce; and if there is an equilibrium, it will only appear after the transient processes have been long underway. With water on 70% of the Earth surface and large wet land regions, that strikes me as a serious known unknown.
As I hinted, I suspect that in warm wet regions (N. Pacific summer), increases of CO2 may increase vaporization with much less increase in temperature than has been calculated; in cold dry regions, or hot dry regions, I expect the balance of temperature change/vaporization change to be different.
Thanks again for the interchange.

Bart
September 28, 2013 11:09 am

FTA:
‘As Monk would say, “Here’s what happened”.’
That Monk is a smart guy. It’s been pretty obvious, actually, for a long time that the climate modelers had conflated the natural cyclical upswing with a sudden anthropogenic rise. But, the fact that the rise from approximately 1970-2000 was almost precisely the same as the rise from 1910-1940 gave the game away.
“This analysis shows that the real AGW effect is benign and much more likely to be less than 1 °C/century than the 3+ °C/century given as the IPCC’s best guess for the business-as-usual scenario.”
It’s actually pretty obvious from other data that it must be even less than that, and effectively zero. If we look at the relationship between CO2 and temperatures, it is apparent, to a very high degree of fidelity, that
dCO2/dt = k*(T – Teq)
CO2 = atmospheric concentration
k = sensitivity factor
T = global temperature anomaly
Teq = equilibrium temperature
k and Teq are parameters for a 1st order fit. They may change over time, but are well represented by constants for the modern era since 1958 when precise measurements of CO2 became available.
This is a positive gain system – an increase in temperatures produces an increase in CO2 concentration. If we now presume that there is a positive feedback between CO2 and temperature, we get a positive feedback loop, which would be unstable.
There are other negative feedbacks, e.g., the T^4 radiation of heat. But, to maintain stability, these would have to be dominant, in which case the overall effect of CO2 on temperature would be negligible anyway. All roads lead to Rome – whatever the overall system response is, it must be such that the effect of CO2 on temperatures is effectively nil.
Now, a note on how the relationship above comes about. Atmospheric CO2 obeys a partial differential diffusion equation. The interface with the oceans sets boundary conditions. The boundary condition can be considered to obey something akin to Henry’s law (buffering processes complicate the actual relationship)
CO2(boundary) = Kh*CO2_Oceans(boundary)
The derivative of this is
dCO2(boundary)/dt = dKh/dt*CO2_Oceans(boundary) + Kh*dCO2_Oceans(boundary)/dt
Kh is a function of temperature, and thus can be expanded to first order as
Kh = Kh_eq + Kh_partial*(T – Teq)
where Kh_partial is the partial derivative of Kh to temperature. The oceans have been a net source of CO2 to the atmosphere. Assuming these are dominant, then
dCO2(boundary)/dt := (Kh_partial*dCO2_Oceans(boundary)/dt) * (T – Teq)
which is the form of the equation above with
k = Kh_partial*dCO2_Oceans(boundary)
In words, the influx of CO2 from the oceans produces a temperature dependent pumping action into the atmosphere.
The full dynamics are an atmospheric diffusion equation, with ocean boundary conditions as above, as well as a boundary condition with the land, which establishes a flow from the atmosphere into the minerals and biota of the land, and an outflow from anthropogenic release of latent CO2. This is vastly simplified, of course, as the oceans contain their own biota and other CO2-absorbing processes. So, rather than division strictly into oceans and land, there is some overlap between the two reservoirs. In any case, though I have not yet worked out the details, it is clear where all this is heading. A very simplified ODE system model is
dCO2/dt = (CO2eq – CO2)/tau + H
dCO2_eq/dt = k*(T – Teq)
CO2 = atmospheric CO2
CO2eq = equilibrium CO2 established by the oceanic boundary condition
H = human inputs
tau = a time “constant”
The equilibrium CO2 is established by the interface with the oceans, and is relentlessly driven upward by temperatures above the equilibrium level. These feed into the atmospheric diffusion equation, which is being driven by human inputs, but is also being depleted by natural sinks which react in proportion to the CO2 level above equilibrium.
If “tau” is short, then H will be dramatically attenuated, and have little overall effect, and CO2 will track CO2eq. The actual dynamics are undoubtedly much more complicated, and “tau” would be more precisely modeled as an operator theoretic value which smooths the CO2 differential, leading to a “long tail” response, though not too long in the most significant components, as the data show that human inputs are being fairly rapidly sequestered.
But, this is effectively what the data show is happening. There really is no doubt about it. And, because of the positive feedback effect noted above, CO2 concentration cannot have a significant effect on temperature, because otherwise we would already have maxed out at some enormous level of CO2 and exceedingly high temperatures eons ago.
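Bart’s simplified ODE pair is easy to step forward numerically. Here is a forward-Euler sketch; all parameter values are placeholders of mine, not fitted to anything:

```python
import numpy as np

def bart_model(T, H, k=2.0, tau=5.0, Teq=0.0, co2_0=315.0, dt=1.0):
    """Forward-Euler integration of the two-state model:
        dCO2eq/dt = k * (T - Teq)
        dCO2/dt   = (CO2eq - CO2) / tau + H
    T, H : yearly temperature-anomaly and human-emission series (same length)."""
    co2, co2eq = co2_0, co2_0
    out = []
    for Ti, Hi in zip(T, H):
        co2eq += k * (Ti - Teq) * dt
        co2 += ((co2eq - co2) / tau + Hi) * dt
        out.append(co2)
    return np.array(out)

# Toy usage: temperature pinned at equilibrium, constant human input.
# With a short tau, CO2 settles near co2_0 + tau*H: the human input is attenuated.
co2 = bart_model(T=np.zeros(60), H=np.full(60, 2.0))
```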

Bill from Nevada
September 29, 2013 7:49 pm

Amateurs attempting to make phase change refrigerant a heater are so hilarious.

Tim Folkerts
September 30, 2013 2:16 pm

Bart says:
“it is apparent that to a very high degree of fidelity that
dCO2/dt = k*(T – Teq)”

One big problem here is that the CO2 levels are averaged over 12 months. So what this relationship shows is that the current temperature anomaly is correlated to the combination of the LAST 6 months of CO2 and the NEXT 6 months of CO2. In other words, there is no way to know from this relationship whether it is CO2 driving temperature, or temperature driving CO2. So the conclusion that “an increase in temperatures produces an increase in CO2 concentration” is pure speculation with this data.
A better way to analyze this would be to first compare the correlation of temperature with the NEXT 12 months of CO2 and then with the PREVIOUS 12 months of CO2 and see which is better.
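Folkerts’ proposed test is straightforward to run. A sketch (my illustration; `temp` and `co2` stand for aligned monthly anomaly and CO2 series, which are not reproduced in the thread):

```python
import numpy as np

def lead_lag_corr(temp, co2, months=12):
    """Correlate temperature with the CO2 change over the NEXT `months` months
    versus the PREVIOUS `months` months."""
    dco2 = co2[months:] - co2[:-months]               # dco2[i] = CO2 change from month i to i+months
    r_next = np.corrcoef(temp[:-months], dco2)[0, 1]  # temp now vs. the CO2 change that follows
    r_prev = np.corrcoef(temp[months:], dco2)[0, 1]   # temp now vs. the CO2 change that preceded
    return r_next, r_prev
```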

Ferdinand Engelbeen
September 30, 2013 2:46 pm

Bart, you are essentially wrong on several points:
If we now presume that there is a positive feedback between CO2 and temperature, we get a positive feedback loop, which would be unstable.
If the positive feedback is modest, then the system is not unstable; it only gives an extra increase of temperature and CO2 levels, shown here with (fb) and without (nofb) feedback of CO2 on temperature:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/feedback.jpg
If we look at the relationship between CO2 and temperatures
The strong relationship is between the variability of (d)temperature(/dt) and dCO2/dt, not with the slope of dCO2/dt. By fitting the trends with an arbitrary factor and bias, you attribute the whole slope of dCO2/dt to temperature, but the slope is the result of all contributions to the increase, including human emissions.
The oceans have been a net source of CO2 to the atmosphere
Vegetation is a net sink for CO2 (~1 GtC/yr; humans emit ~9 GtC/yr), based on the oxygen balance. Besides vegetation and oceans, all other known natural sinks are either too small or too slow. The atmospheric increase is ~4 GtC/yr. Some 4 GtC/yr of human emissions (as mass) plus the extra release from the oceans goes where?
In words, the influx of CO2 from the oceans produces a temperature dependent pumping action into the atmosphere.
According to Henry’s Law a temperature increase gives an increase in equilibrium setpoint of ~16 µatm CO2 with the atmosphere. Thus an increase of ~16 ppmv in the atmosphere will bring the in- and outfluxes of the ocean-atmosphere system back to what they were previous to the temperature increase. Starting from a system in dynamic equilibrium:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/upwelling_temp.jpg
The increase of CO2 in the atmosphere both reduces the increased upwelling and increases the downwelling.
If the increase of CO2 was caused by a sudden extra upwelling of extra CO2 from the deep oceans (the “Coke effect”), that would have a similar effect on the balance, as in the case of a temperature increase:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/upwelling_incr.jpg
Temperature changes and upwelling changes act independent of each other and are simply additive:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/upwelling_incr_temp.jpg
Thus the dynamics of the ocean processes prove that a continuous release from the oceans due to a sustained difference in temperature is impossible, as the increased CO2 level in the atmosphere influences both the release and uptake of CO2 from/into the oceans.

Bart
September 30, 2013 6:45 pm

Tim Folkerts says:
September 30, 2013 at 2:16 pm
“So what this relationship shows is that the current temperature anomaly is correlated to the combination of the LAST 6 months of CO2 and the NEXT 6 months of CO2.”
The average is applied non-causally, as you say. As a result, it has zero phase. Its only effect is to attenuate higher frequencies, in particular, zeroing out the annual variation so that underlying trends can be observed.
“In other words, there is no way to know from this relationship whether it is CO2 driving temperature, or temperature driving CO2.”
No. The derivative relationship establishes it. It would be absurd to argue that the rate of change of CO2 drives temperature. If that were the case, we could boost CO2 up until it was the greater part of the atmosphere, but once we stopped pumping, the temperature would revert to its equilibrium level.
A derivative also provides leading phase. Thus, the overall CO2 concentration, which is the integral of the derivative, lags temperature by 90 deg in phase. This also establishes causality. The cause, temperature, always precedes the effect, CO2 concentration.
Ferdinand Engelbeen says:
September 30, 2013 at 2:46 pm
See my reply back at The Hockey Schtick.

Greg Goodman
October 5, 2013 1:38 am

“Fitting the SST data on the right to a sine wave-plus-ramp model yields a period of ~65 years with the AGW corner at 1966, about where expected by climatologists. ”
A more accurate description would be “where the climatologists PUT it”.
http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2
Much of the reason there are two ramps in the 20th c. is that Hadley inserted a -0.5K drop in 1945. The raw data has a lot more downward trend and more variability in the 19th c. The 20th c. was a more continuous rise from 1910.
Recent modification rounded off the step change but it’s still there and still as big.
This is what the overall adjustment is for hadSST3.
http://curryja.files.wordpress.com/2012/03/hadsst3-cosine-fit1.png

gordie
October 7, 2013 2:52 am

Marler says:
“…doubling CO2 will not change the (equilibrium) surface temperature.”
One of the early proponents of the idea of GHG warming, John Tyndall, would
probably have agreed with you.
Thus:
“It is evident that olefiant gas of 1 inch tension [1 / 15th of an atmosphere pressure] must
extinguish a large proportion of the rays which are capable of being absorbed by
the gas, and hence the succeeding measures having a less and less amount of heat
to act upon must produce a continually smaller effect.” (Bakerian Lecture, 1861)
This is acknowledgement of (progressive) saturation of the (primary) process.
