Detecting the AGW Needle in the SST Haystack

Guest essay by Jeffery S. Patterson

My last post on this site examined the hypothesis that the climate is dominated by natural, harmonically related periodicities. As they say in the business, the critics were not kind. Some of the criticisms were due to a misunderstanding of the methodology and others stemmed from an underappreciation of the tentativeness of the conclusions, especially with respect to forecasting. With respect to the sparseness of the stochastic analysis, the critics were well founded. This lack of rigor is why it was submitted as a blog post and not a journal paper. Perhaps it served to spark the interest of someone who can do the uncertainty analysis properly, but I have a day job.

One of the commentators suggested I repeat the exercise using a technique called Singular Spectrum Analysis, which I have done in a series of posts starting here. In this post, I turn my attention away from cycles and modeling and towards signal detection. Can we find a signature in the temperature data attributable to anthropogenic effects?

Detecting small signals in noisy data is something I am quite familiar with. I work as a design architect for a major manufacturer of test equipment, half of which is dedicated to the task of finding tiny signals in noisy data. These instruments can measure signals on the order of -100 dBm (1 part in 10^-13). Detecting the AGW signal should be a piece of cake (tongue now removed from cheek).

Information theory (and a moment's reflection) tells us that in order to communicate information we must change something. When we speak, we modulate the air pressure around us and others within shouting distance detect the change in pressure and interpret it as sound. When we send a radio signal, we must modulate its amplitude and/or phase in order for those at the other end to receive any information. The formal way of saying this is that ergodic processes (i.e. processes whose statistics do not change with time) cannot communicate information. Small signal detection in noise, then, is all about separating the non-ergodic sheep from the ergodic goats. Singular Spectrum Analysis excels at this task, especially when dealing with short time series.

Singular Spectrum Analysis is really a misnomer, as it operates in the time domain rather than the frequency domain to which the term spectrum normally applies. It allows a time-series to be split into component parts (called reconstructions or modes) and sorts them in amplitude order, with the mode contributing most to the original time series first. If we use all of the modes, we get back the original data exactly. Or we can choose to use just some of the modes, rejecting the small, wiggly ones, for example, to recover the long-term trend. As you may have guessed by now, we're going to use the non-ergodic, information-bearing modes, and relegate the ergodic, noisy modes to the dust bin.
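(For the curious, generic SSA is only a few lines of Python with numpy. The sketch below is a textbook implementation for illustration, not necessarily the code used to produce the figures in this post; the function name and layout are mine.)

import numpy as np

def ssa_decompose(x, L):
    # Embed the series in an L x K trajectory (Hankel) matrix of lagged copies
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + K] for i in range(L)]).T   # X[i, j] = x[i + j]
    # The SVD sorts the rank-1 components by their contribution to X
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = np.zeros((len(s), N))
    for m in range(len(s)):
        Xm = s[m] * np.outer(U[:, m], Vt[m])
        # Average each anti-diagonal to turn the matrix back into a series
        modes[m] = [Xm[::-1].diagonal(d).mean() for d in range(-L + 1, K)]
    return modes   # modes.sum(axis=0) reproduces x exactly

Summing the first k rows of the result gives the signal bucket; summing the rest gives the noise bucket.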

SSA normally depends on two parameters: a window length L, which can't be longer than half the record length, and a mode selection parameter k (k is sometimes a multi-valued vector if the selected modes aren't sequential, but here they are). This can make the analysis somewhat subjective and arbitrary. Here, however, we are deconstructing the temperature time-series into only two buckets. Since the non-ergodic components contribute most to the signal characteristics, they will generally be in the first k modes, and the ergodic components will be sequential, starting from mode k+1 and including all remaining L-k-1 modes. Since in this method L only controls how much energy leaks from one of our buckets to the other, we set L to its maximum value to give the finest-grained resolution to the division between our two buckets. Thus our analysis depends only on a single parameter k, which is set to maximize the signal-to-noise ratio.
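(The SNR criterion for picking k is not spelled out above; one plausible stand-in, again for illustration only and reusing ssa_decompose from the sketch above, is to pick the k whose leftover bucket looks most like white noise, e.g. by minimizing the residual's lag-1 autocorrelation.)

def best_k(x, L, k_max=10):
    modes = ssa_decompose(x, L)
    def residual_lag1(k):
        r = modes[k:].sum(axis=0)   # the noise bucket if we keep the first k modes
        r = r - r.mean()
        return abs(np.dot(r[:-1], r[1:]) / np.dot(r, r))
    # The whitest residual has the lag-1 autocorrelation closest to zero
    return min(range(1, k_max + 1), key=residual_lag1)

For the SST series below, L = 55 and k = 4 (Figure 1).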

That's a long time to go without a picture, so here are the results of the above applied to the Northern Hemisphere Sea Surface Temperature data.


Figure 1 – SST data (blue) vs. a reconstruction based on the first four eigenmodes (L=55, k=1-4)

The blue curve is the data and the red curve is our signal "bucket" reconstructed from the first four SSA modes. Now let's look in the garbage pail.


Figure 2 – Residual after signal extraction

We see that the residual indeed looks like noise, with no discernible trend or other information. The distribution looks fairly uniform, the slight double peak probably due to the early data being noisier than the more recent data. Remembering that the residual and the signal sum to the original data, and since there is no discernible AGW signal in the residual, we can state without fear of contradiction that any sign of AGW, if one is to be found, must be found in the reconstruction built from the non-ergodic modes plotted in red in figure 1.

What would an AGW signal look like? The AGW hypothesis is that the exponential rise in CO2 concentrations seen since the start of the last century should give rise to a linear temperature trend impressed on top of the climate's natural variation. So we are looking for a ramp, or equivalently a step change in the slope of the temperature record. Here's an idea of what a trendless climate record (with the natural variation and noise similar to the observed SST record) might look like, with (right) and without (left) a 4 °C/century AGW component. The four curves on the right represent four different points in time where the AGW component first becomes detectable: 1950, 1960, 1970 and 1980.


Figure 3 – Simulated de-trended climate record without an AGW component (left) and with 4 °C/century AGW components (right)
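(A toy record like Figure 3 can be generated from a couple of quasi-periodic modes plus white noise, with an optional ramp. The amplitudes, periods and noise level below are illustrative guesses, not the values behind the actual figure.)

years = np.arange(1880, 2014)
rng = np.random.default_rng(0)

def toy_record(ramp_start=None, slope_per_century=4.0):
    # Two quasi-periodic "natural" modes plus white noise
    t = (0.15 * np.sin(2 * np.pi * (years - 1880) / 65.0)
         + 0.05 * np.sin(2 * np.pi * (years - 1880) / 21.0)
         + 0.08 * rng.standard_normal(years.size))
    if ramp_start is not None:
        # Linear AGW ramp switched on at ramp_start
        t = t + (slope_per_century / 100.0) * np.clip(years - ramp_start, 0, None)
    return t

no_agw = toy_record()                     # like the left panel
agw_1960 = toy_record(ramp_start=1960)    # one of the right-panel curves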

Clearly a linear AGW signal of the magnitude suggested by the IPCC should be easily detectable within the natural variation. Here's the real de-trended SST data. Which plot above does it most resemble?


Figure 4 – De-trended SST data

Note that for the "natural variation is temporarily masking AGW" meme to hold water, the natural variation during the AGW observation window would have to be an order of magnitude higher than that which occurred previously. SSA shows that not to be the case. Here is the de-trended signal decomposed into its primary components. (Note: for reasons too technical to go into here, SSA modes occur in pairs. The two signals plotted below include all four signal modes, constituted as pairs.)


Figure 5 – Reconstruction of SSA modes 1,2 and 3,4

Note that the peak-to-peak variation has remained remarkably constant across the entire data record.

OK, so if it's not 4 °C/century, what is it? Remember, we are looking for a change in slope caused by the AGW component. The plot below shows the slope of our signal reconstruction (which contains the AGW component, if any) over time.


Figure 6 – Year-to-year difference of reconstructed signal

We see two peaks, one in 1920, well before the effects of AGW are thought to have been detectable, and one slightly higher in 1995 or so. Let's zoom in on the peaks.


Figure 7 – Difference in signal slope potentially attributable to AGW

The difference in slope is 0.00575 °C/year, or ~0.6 °C/century. No smoothing was done on the first-difference plot above, as would normally be required, because we have eliminated the noise component which makes smoothing necessary.

Returning to our toy climate model of figure 3, here's what it looks like with a 0.6 °C/century slope (left), with the de-trended real SST data on the right for comparison.


Figure 8 – Simulated de-trended climate record with 0.6 °C/century linear AGW component (see figure 3 above), left; de-trended SST (northern hemisphere) data, right

Fitting the SST data on the right to a sine-wave-plus-ramp model yields a period of ~65 years with the AGW corner at 1966, about where expected by climatologists. The slope of the AGW fit? 0.59 °C/century, arrived at completely independently of the SSA analysis above.
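(The exact fit is not shown; a sine-plus-hinge-ramp model of the kind described can be fit with scipy as below, where years and detrended_sst are placeholders for the series of Figure 4, and the starting guesses p0 are merely seeded near the reported values.)

from scipy.optimize import curve_fit

def sine_plus_ramp(t, amp, period, phase, offset, slope, corner):
    # Sine wave plus a hinge ramp that turns on at t = corner (the "AGW corner")
    return (amp * np.sin(2 * np.pi * t / period + phase) + offset
            + slope * np.clip(t - corner, 0.0, None))

p0 = [0.15, 65.0, 0.0, 0.0, 0.006, 1966.0]
params, _ = curve_fit(sine_plus_ramp, years, detrended_sst, p0=p0)
print("AGW slope: %.2f degC/century" % (params[4] * 100))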

Conclusion

As Monk would say, "Here's what happened". During the global warming scare of the 1980s and 1990s, the quasi-periodic modes comprising the natural temperature variation were both in their phase of maximum slope (see figure 5). This naturally occurring phenomenon was mistaken for a rapid increase in the persistent warming trend and attributed to the greenhouse gas effect. When these modes reached their peaks approximately 10 years ago, their slopes abated, resulting in the so-called "pause" we are currently enjoying. This analysis shows that the real AGW effect is benign and much more likely to be less than 1 °C/century than the 3+ °C/century given as the IPCC's best guess for the business-as-usual scenario.

Ron Scubadiver
September 26, 2013 7:09 am

You get bad reviews because global warming alarmists are a rowdy bunch who have experience dominating the discussion by shouting everyone else down. It's not like we have an alternative or minority position; we are branded skeptics and deniers.
Your analysis seems to me to be one of the best explanations of recent data, and a prediction I have been seeing of tiny temperature increases or even decreases for another decade.

RDG
September 26, 2013 7:21 am

Thank you … I do so love clarity and reason. It is a genuine shame that they are so enjoyable because of their rarity.

@njsnowfan
September 26, 2013 7:23 am

Good post Jeffery S. Patterson.
I do feel the main climate natural variation is caused by the sun.
Historical Total Solar Irradiance Chart
1900 to 2012 and it matches Temp charts.
http://lasp.colorado.edu/lisird/tsi/historical_tsi.html
Now negative PDO followed Solar Irradiance and sun spot cycle.
http://www.landscheidt.info/images/powerwave3.png

Editor
September 26, 2013 7:25 am

Ron Scubadiver says:
September 26, 2013 at 7:09 am

You get bad reviews because global warming alarmists are a rowdy bunch who have experience dominating the discussion by shouting everyone else down. It's not like we have an alternative or minority position; we are branded skeptics and deniers.

Actually, Ron, Jeffrey got bad reviews because he claimed that he had detected a 170.7 year signal in 110 years of data …
w.

Crispin in Waterloo but really in Yogyakarta
September 26, 2013 7:34 am

Excellent straightforward analysis using standard tools suited to the purpose. The confirmation of 0.6 Deg C/Century replicates the natural variation claimed by several other sources for years now. There are far too many to cite.
We are of course interested in what the signal frequency(ies) are and what the likely near-term change will be. Thanks.

Harold Oelofse
September 26, 2013 7:38 am

Some time ago a group I was working with used a similar approach to locate non-radial pulsations moving across the surface of a star, effectively small waves. Our graphs looked almost identical to your Figure 4; these tiny surface waves could barely be seen, very much like the temperature variations you found of less than 1 deg C per century… excellent analysis.

Editor
September 26, 2013 7:39 am

Willis Eschenbach says:
September 26, 2013 at 7:25 am

Actually, Ron, Jeffrey got bad reviews because he claimed that he had detected a 170.7 year signal in 110 years of data …

At least he had 110 points. It only takes three to describe a circle (if there's only a circle to be described).

Stephen Wilde
September 26, 2013 7:40 am

A neat variant on the previous observations that the upward slope in the late 20th century is little different from the upward slope in the early 20th century despite vastly increased CO2 emissions by humans.

JA
September 26, 2013 7:43 am

CO2 comprises 0.04 PERCENT of the atmosphere; it is an atmospheric trace gas.
Of this 0.04 PERCENT of CO2, about 5 PERCENT is produced as a result of human activity.
So, this tells us that the CO2 level in the atmosphere resulting from human activity is about one part in 50,000 or 0.002 PERCENT.
And we are supposed to believe that this is the cause of warming.
Sorry, but there is no mathematical tinkering with the (bogus?) temperature time series that will convince me humans are affecting the climate.
What will? When science can EXPLAIN (not just describe) the historical climate.
Of course, when this happens, climate models will be able to REPRODUCE, ACCURATELY, the GLOBAL historical climate of , say, the last 1000 years.
I am not holding my breath.

Jim G
September 26, 2013 7:51 am

A+ for clarity and explanation of statistical method.

Editor
September 26, 2013 7:55 am

Jeffrey, first, thanks for a clean understandable piece of analysis.
Unfortunately, I am always very suspicious of simply filtering out short-term variations, calling them “noise”, and focusing on the long-term variations. The problem is well known in climate science, which is that such long-term variations appear for a while … and then they disappear. For example, over the last ~ 50 years, the rate of change of sea level has had good correlation with the sunspots.
However, the fifty years before that show no such signal. Where did the signal come from? Where did it go?
So finding new methods to filter out the short-term variation doesn’t impress me much. There is very little difference, for example, between the red SSA line in your Figure 1, and either a Gaussian or a Loess filter applied to the same data.
I also am very suspicious when someone says things like:

“The AGW hypothesis is that the exponential rise in CO2 concentrations seen since the start of the last century should give rise to a linear temperature trend impressed on top of the climateā€™s natural variation.”

For me, “natural variation” is an un-scientific dodge to avoid saying “we don’t have a clue what makes it go up and down”. Given that, we also don’t know what the shape and nature of the natural variation is … except that it is chaotic.
So properly translated, your statement should refer to “a linear temperature trend impressed on top of a chaotic signal about which we are totally clueless.”
So that leaves us with a single equation with two unknowns: the anthropogenic signal and the unknown chaotic signal. You can't pull the short term variations out of the signal and say "voilà, what remains are natural variations". We don't know that.
Finally, you have a hundred years of data, and you’re highlighting a sixty-year cycle … me, I limit my findings to a third of the length of my data.
I’m sorry, but to me, filtering the data and then saying that simple transformation is enough to differentiate between the (presumably quasi-linear) anthropological signal and the totally unknown, chaotic natural signal doesn’t cut it.
w.

Mark Hirst
September 26, 2013 7:57 am

Very well reasoned. Thank you!
Thanks also for the flashback to my senior year electrical engineering “Communication/Information Theory” class as I started on the road to my MSEE work. This is perhaps the best analysis I’ve seen on trying to extract an AGW signal from the noisy temperature record.

richardscourtney
September 26, 2013 8:14 am

Jeffery S. Patterson:
Thank you for this analysis. Clearly, you have taken account of criticisms of your previous analysis.
However, I and some others asked you to conduct your previous analysis on each half of the time series and to observe if your analysis then predicts each half from the other.
I would appreciate your attempting that for this analysis, too. It would give confidence that your observed signals are real.
Richard

J. Bob
September 26, 2013 8:19 am

Here are some plots using Fourier Convolution (Spectral Ana.) to remove higher freq. component from temperature station records.
This analysis consisted of computing the anomaly of long-term station records (prior to 1800) so as to produce an average. The average anomaly was then converted to the freq. domain, passed through a "mask" to cut off higher freq. & converted back to the time domain. Analysis procedures to reduce "leakage" were included.
Three groups were evaluated:
Prior to 1700 A1 (actually only one, CEL - Central England)
Prior to 1750 A4 (CEL, Debuilt, Uppsalla, Berlin - later 2 from http://www.rimfrost.no/ )
Prior to 1800 A1 (14 stations)
Here are the results; all seem to predict a downward trend.
Ave1 – CEL 25 Yr. Cut Off:
http://dc456.4shared.com/img/AgE-cCa2/s7/13188a22140/Ave1_2010_FF_25yr.jpg?async&rand=0.8159342953716682
Ave1 – CEL 50 Yr. Cut Off:
http://dc385.4shared.com/img/7rxAWINH/s7/131889a5cf8/Ave1_2010_FF_50yr.jpg?async&rand=0.31054256968052096
Ave4 – CEL 50 Yr. Cut Off:
http://dc358.4shared.com/img/tGnWv886/s7/131889a2648/Ave4_2010_FF_50yr.jpg?async&rand=0.008796124754720136
Ave14 – CEL 50 Yr. Cut Off:
http://dc488.4shared.com/img/4FKXcwnw/s7/131889a9790/Ave14_2010_FF_50yr.jpg?async&rand=0.16009074298866643
When the results of a similar plot were presented on RC about 4 years ago, one "Tamino" noted it was "bungled", since it went against his analysis.
Guess time will tell who "bungled".
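A bare-bones sketch of the procedure described above (transform to the frequency domain, mask the high-frequency bins, transform back), in Python with numpy and assuming evenly sampled annual anomalies; the leakage-reduction steps mentioned are omitted:

import numpy as np

def fft_lowpass(anomaly, cutoff_years=50.0, dt_years=1.0):
    # Forward transform, zero every bin above the cutoff frequency, invert
    anomaly = np.asarray(anomaly, dtype=float)
    spec = np.fft.rfft(anomaly)
    freqs = np.fft.rfftfreq(len(anomaly), d=dt_years)   # cycles per year
    spec[freqs > 1.0 / cutoff_years] = 0.0
    return np.fft.irfft(spec, n=len(anomaly))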

Steven Mosher
September 26, 2013 8:26 am

“What would an AGW signal look like? The AGW hypothesis is that the exponential rise in CO2 concentrations seen since the start of the last century should give rise to a linear temperature trend impressed on top of the climateā€™s natural variation. ”
Wrong.
The AGW signal is the result of ALL forcings due to humans; slightly over half is CO2.
If you want to look for a signal, it helps to understand the actual theory and what it actually says.
Next. The theory states that the addition of GHGs will change the energy balance. There are two large unknowns here.
1. How much will the balance change ( sensitivity)
2. How will balance be restored
Question 1 can be answered only vaguely, the ECS will be between 1.5 and 4.5C per 3.7W of increased forcing.
Question 2 can only be answered with models. Which means it can't be answered very clearly. Where the excess energy will be stored and how it will be released is not known very well. Your analysis depends on your assumption that the restoration happens linearly.
That’s probably the least likely scenario.

Dr Norman Page
September 26, 2013 8:45 am

In the last year in a series of posts at
http://climatesense-norpag.blogspot.com
I have laid out a method of climate forecasting based on recognising quasi-repetitive, quasi-cyclic patterns in the temperature and other relevant climate-driver data time series. Patterson's post illustrates a useful approach to deconvolving possible patterns from the temperature time series. We should not expect mathematical precision in this type of forecast because of the changing resonances between the quasi cyclic rate processes which integrate into the temperature climate metric. It would however be very useful if Patterson could extend his analysis to the 2000 year proxy temperature record of Christiansen and Ljundqvist 2012
http://www.clim-past.net/8/765/2012/cp-8-765-2012.pdf
which is probably the best representation of the NH temperature for the last 2000 years and the basis for most of my cooling projections for the next several hundred years.
I would suggest that the underlying 0.6 degree/century trend which Patterson is happy to attribute to AGW is in fact part of the natural 1000 year solar cycle seen in the Christiansen and the ice core data shown in the first link above. The key question is whether the recent peak was a peak in both the 60- and 1000-year cycles. The decline in solar activity since about 2004 suggests that it may well be so.

Nancy C
September 26, 2013 8:50 am

Hmmm… nitpicking, but 10^-13 is -260dB; -100dB is 10^-5. I imagine your company's instruments could still be detecting a -100dBm signal (7.75 uV) on top of a +160dBm signal (77,500,000 volts), but it seems unlikely.

Cho_cacao
September 26, 2013 8:54 am

The analysis shows mostly that the underlying trend is linear. It would reject an AGW hypothesis only under the assumption that AGW creates a non-linear trend. Which cannot be the case over such a short period of time, so I think the analysis doesn't show anything at all…

Matthew R Marler
September 26, 2013 9:07 am

Can we find a signature in the temperature data attributable to anthropogenic effects?
Sure. Vaughan Pratt showed one way, with a trend explicitly related to CO2 concentration. But with time series like this that have been worked and reworked, we can "find" a signal that isn't there. He then had to find an "unusual" (shall we say?) model for the residual.
Nice post. The test will be on the next 20 years worth of out of sample data.
If the data are trend plus residual (i.e., everything else), and if you have a good model for the residual (derived as the computed residuals from a smoothing), then you have a good model for the trend. The “trend” includes all anthropogenic effects that have been reasonably monotonic over the interval: land use changes and UHI, aerosols, CO2. So the conclusion dependent on your model result is that CO2 is at worst benign.
Your model fits my mood better than Vaughan Pratt’s model, but I expect to have to wait 20 some years yet for a credible model.

Merrick
September 26, 2013 9:09 am

Nancy, I'm not sure I understand either of you. A dBm is a decibel measurement referenced to a milliwatt. So, for starters, the only thing it's 1 part in 10^13 of is 1 watt. Maybe that's what Jeffrey meant, but I doubt it. -130dB *is* 1 part in 10^13 referenced to whatever you are calling unity (or 0dB). Electrical Engineers, one of which you appear to be, often "fudge" by saying that power is proportional to volts^2, so change the decibel relationship from 10Log10 to 20Log10 and everything is peachy - but that's a shorthand that can lead to incorrect results if one isn't careful (though it's usually ok) because the relationship is only linear under controlled conditions (freely available current, added power through system doesn't change operating characteristics, etc.), so that's how you got -260dB, but that's probably not what Jeffrey was referring to, either. Or he's completely wrong to have referenced dBm, because that is specific to power (which is the only appropriate quantity to use in rating dB).
Sorry, now my nitpicking is done.

JA
September 26, 2013 9:11 am

So, everybody; what caused the Little Ice Age??
What caused the Medieval Warming Period?
Does ANYBODY KNOW?
If a Singular Spectrum Analysis had been carried out of the first 100 years of the Medieval Warming Period, what exactly would that tell us of THE CAUSE of that warming??
Would that analysis shed any light about the subsequent Little Ice Age?
Hello. !!!!!! Anybody there??

September 26, 2013 9:19 am

The Farmers’ Almanac seems to have the most accurate long range forecasting tool to date. Has anyone investigated the parameters that they use and tried to duplicate their model?

Matthew R Marler
September 26, 2013 9:19 am

Steven Mosher: Question 1 can be answered only vaguely, the ECS will be between 1.5 and 4.5C per 3.7W of increased forcing
A question for you, incidental to the leading post. The 3.7W/m^2 of forcing that results from CO2 doubling falls mostly on water and other wet surfaces: Amazon Basin, N. American forests and plains, Central Africa, S.E. Asia, etc.. How much of that extra energy will result in extra vaporization of water with no (or reduced, or little) increase in temperature? The ECS calculation assumes an equilibrium, but any such equilibrium will be the result of the events during the transition (or transient, if you prefer), not the cause of them. The existence of an equilibrium is not guaranteed, and the earliest effects of 3.7W/m^2 increased forcing need to be understood.
This leads to a guess about question 2: if (!) the likely effect of increased forcing is mostly increased vaporization, a reasonable consequence is that cloud cover increases faster than otherwise, slightly blocking incoming sunlight.

September 26, 2013 9:21 am

Nancy C says:
September 26, 2013 at 8:50 am
Hmmm… nitpicking, but 10^-13 is -260dB; -100dB is 10^-5. I imagine your company's instruments could still be detecting a -100dBm signal (7.75 uV) on top of a +160dBm signal (77,500,000 volts), but it seems unlikely.
You're confusing voltage with power. dBm is dB relative to a 1 mW reference.

Jeff Patterson
September 26, 2013 9:33 am

Dr Norman Page says:
September 26, 2013 at 8:45 am
I would suggest that the underlying 0.6 degree/century trend which Patterson is happy to attribute to AGW …
I'll respond more fully later but wanted to correct this. Note the key plot is captioned "Difference in signal slope potentially attributable to AGW". It could be, as you point out, less, but certainly the hypothesis that an AGW component of the magnitude depicted in fig 3 (or even half that) is undetectable in the data is not supportable.

Merrick
September 26, 2013 9:33 am

Sorry, again, I also don’t understand your numbers.
10,000,000V is 160 dB, so if you said dBm was referenced to mV then 160dBm is 3.1623 x 10^9 mV
10 microvolt is -100dB, so -100dBm is 3.162 x 10^-4 mV
I still want to know how -130dBm relates to the signal (what S/N does that represent?) as opposed to 1W.

Nik
September 26, 2013 9:38 am

What I fail to understand is the emphasis the warmists place on "Peer Reviewed Papers" and that having been Peer Reviewed the conclusions are 100% correct. "Peer Review" just means the whole paper is consistent, clear, free of ambiguities and won't make you look like a complete idiot if you published it. In no way does it imply that the conclusions of the paper are correct.

September 26, 2013 9:38 am

Jeffrey S. Patterson:
Thank you for the interesting essay. I'm aware of a barrier to what you are trying to accomplish that I would like to share.
The notion of an anthropogenic "signal" submerged in "noise" is of political importance as it appears in the IPCC's assessment reports as well as your essay. However, while control theory sets a requirement for information to travel from the future to the present if a system is to be controlled, relativity theory denies that a physical signal can do so, for to do so this "signal" would have to travel at superluminal speed. As "information" is only a measure, there is no bar in relativity theory for information to travel from the future to the present.
The information that is needed in order to control the climate cannot be carried by a signal. One of the consequences is that for a control system for the climate there is no such thing as a "signal-to-noise ratio."
Currently, there is a bar to controlling the climate but it is not relativity theory. Instead, it is the non-existence of the events in the statistical population underlying the climate models. Absent these events, "information" does not exist as a concept.

DesertYote
September 26, 2013 9:40 am

Willis Eschenbach says:
September 26, 2013 at 7:55 am
###
You need to learn something about Information Theory. Statistical methods are completely inadequate in dealing with chaotic signals. Information Theory, on the other hand, was developed specifically for this. The signal received by your cell phone is far more chaotic than a temperature record. BTW, many of the major errors with classical statistical methods were discovered by Information Theorists.
You know all those pretty pictures we get from Mars? Well, if Jeffery’s techniques did not work, we would not be getting them. Nothing he did is unusual or exotic. In fact they are pretty standard techniques.
A cheap book on the subject, very old but still a solid introduction that does not skimp.
http://www.amazon.com/Introduction-Information-Theory-Dover-Mathematics/dp/0486682102

September 26, 2013 9:41 am

JA says:
September 26, 2013 at 7:43 am
CO2 comprises 0.04 PERCENT of the atmosphere; it is an atmospheric trace gas.
Of this 0.04 PERCENT of CO2, about 5 PERCENT is produced as a result of human activity.

Of this trace gas, some 30% is produced as a result of human activity: from 0.03% to 0.04%.
The 5% per year human contribution is additional, the 95% natural CO2 is simply going in and out, with 2.5% more going out than going in…

Editor
September 26, 2013 9:42 am

Matthew R Marler says:
September 26, 2013 at 9:19 am

… This leads to a guess about question 2: if (!) the likely effect of increased forcing is mostly increased vaporization, a reasonable consequence is that cloud cover increases faster than otherwise, slightly blocking incoming sunlight.

Actually, the numbers indicate that ON AVERAGE the increased forcing can’t be “mostly increased vaporization”.
Global evaporation is estimated by the fact that on average the annual rainfall averaged over the surface of the planet (including snow water equivalent) is about a metre (39″) per year. It takes about 70 W/m2 to evaporate that much water. And from things like the steam you see rising off the pavement after a rain, much of that evaporation is from the sun. So maybe half of that 70 W/m2 is coming from longwave.
Downwelling longwave radiation, on the other hand, is measured around the globe, and averages about 330 W/m2.
This means that the majority (about 90%) of the longwave is NOT going into evaporation.
But of course, it’s not that simple. That’s just the average. In fact, you are correct on the local level in the tropics. There, because of the high temperatures, further increases in temperature have a greater effect on evaporation. There’s something called the “Bowen Ratio”, which is the ratio of heat loss through convection (sensible heat flux) to heat loss from evaporation (latent heat flux). Evaporation generally is much larger than sensible heat loss. Globally the Bowen Ratio is about 0.25, representing the ratio between global sensible (17 W/m2) and evaporative (80 W/m2) heat loss. In the tropics, the Bowen Ratio drops to .05 or so.
And the evaporation, as you point out, does lead to clouds which cut down the incoming energy. In fact, my analysis of the TAO buoy data shows that when the clouds form (typically around 11 AM) the change in incoming sunlight is so large that the surface actually cools.

SOURCE: The TAO That Can Be Spoken
See also:
Cloud Radiation Forcing in the TAO Dataset
TAO/TRITON TAKE TWO
All the best,
w.
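A quick back-of-envelope check of the ~70 W/m2 figure above, in Python, using the standard latent heat of vaporization (~2.26e6 J/kg):

latent_heat = 2.26e6          # J/kg to vaporize water
mass = 1000.0                 # kg of water in a 1 m deep, 1 m2 column
seconds_per_year = 3.156e7
print(mass * latent_heat / seconds_per_year)   # ~71.6 W/m2, i.e. "about 70"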

Dr. Deanster
September 26, 2013 9:50 am

Cool Stuff .. but I have a few questions.
Point 1). Your presentation is expressed as 0.6C change/century. And thus, you express that this is evidence that the increase in temp over the next 100 years would be closer to 1C than 3C. However, the basis of the 3C claim is on a doubling of CO2 in conjunction with time. I can see where this type of analysis can tease out a 0.6C change per century in a hindcast, but that does not say what the effect will be from a doubling of CO2 or an increase rate of CO2 concentration.
Point 2). Your analysis shows a 0.6C/C increase, but such analysis takes into account all trending changes. Is it not correct to say that the 0.6C increase could have been due to CO2 .. or changes in cloud cover .. or an increase in SWR reaching the ground? I don't see how this analysis serves as evidence for a particular "cause" outside of the noise. To assume that it is CO2 is to assume that ALL other influences are just noise, which is not a supportable argument.
Comments??

September 26, 2013 9:52 am

I’m working on a “paper” that will show the global trend is not due to a global warming signal at all, but the averaging of separate warming trends in different parts of the globe. Global warming is definitely not a global response, but regional warming driven by SST’s. I still have a few more regions to finish analyzing, I should be done soon.
Willis and Bob T, you’re going to love this……

milodonharlani
September 26, 2013 9:52 am

Matthew R Marler says:
September 26, 2013 at 9:19 am
Here is IMO an honest attempt to summarize & analyze what is known & not known about changes in cloudiness over the past 60 years:
http://meteora.ucsd.edu/~jnorris/presentations/Caltechweb.pdf
I recommend it.

David Ball
September 26, 2013 10:08 am

Isn’t a better analogy finding a specific needle in a needle stack?

Dr Norman Page
September 26, 2013 10:13 am

JA See my comment at 8:45 above and check the series of posts at
http://climatesense-norpag.blogspot.com
The changing surface temperature of the earth has been accepted as a convenient metric for climate change. It is obvious that there are quasi-cyclic, quasi-repetitive patterns in the data. Some of these periodicities relate to the orbital relationships between the earth and the sun. If you want to "understand" where we are in relation to climate trends, first figure out where you are relative to these Milankovitch cycles. At this time we are several thousand years past the peak of the current interglacial and are headed toward the next ice age. The temperature changes which would be caused by these orbital changes then resonate and convolve with quasi-cyclic changes in solar "activity". These include changes in solar magnetic field strength and solar wind speed which lead to changing GCR influx and probable associated changes in cloud cover and atmospheric chemistry; also there are changes in TSI and perhaps more importantly in the solar radiation spectrum, which changes the ozone layer. These drivers then work through the great ocean and air systems - ENSO, PDO, AMO, AO etc. - to produce the climate and weather.
The exact mechanisms are extremely complicated and hard to disentangle. However it is not necessary to understand these processes in order to make perfectly useful forecasts.
There are obvious patterns in the temperature data which can be reasonably projected forward for some fairly short time ahead. Here is the conclusion to my latest post.
“6 General Conclusion – by 2100 all the 20th century temperature rise will have been reversed,
7 By 2650 earth could possibly be back to the depths of the little ice age.
8 The effect of increasing CO2 emissions will be minor but beneficial - they may slightly ameliorate the forecast cooling and more CO2 would help maintain crop yields.
9 Warning !!
The Solar Cycles 2,3,4 correlation with cycles 21,22,23 would suggest that a Dalton minimum could be imminent. The Livingston and Penn solar data indicate that a faster drop to the Maunder Minimum Little Ice Age temperatures might even be on the horizon. If either of these actually occurs there would be a much more rapid and economically disruptive cooling than that forecast above, which may turn out to be a best case scenario.
How confident should one be in these above predictions? The pattern method doesn't lend itself easily to statistical measures. However statistical calculations only provide an apparent rigor for the uninitiated, and in relation to the IPCC climate models are entirely misleading because they make no allowance for the structural uncertainties in the model set-up. This is where scientific judgment comes in - some people are better at pattern recognition and meaningful correlation than others. A past record of successful forecasting such as indicated above is a useful but not infallible measure. In this case I am reasonably sure - say 65/35 for about 20 years ahead. Beyond that certainty drops rapidly. I am sure, however, that it will prove closer to reality than anything put out by the IPCC, Met Office or the NASA group. In any case this is a Bayesian type forecast, in that it can easily be amended on an ongoing basis as the temperature and solar data accumulate.

Editor
September 26, 2013 10:16 am

DesertYote says:
September 26, 2013 at 9:40 am

Willis Eschenbach says:
September 26, 2013 at 7:55 am
###
You need to learn something about Information Theory. Statistical methods are completely inadequate in dealing with chaotic signals. Information Theory, on the other hand, was developed specifically for this. The signal received by your cell phone is far more chaotic than a temperature record.

Yote, if you had a specific objection to what I said, you would have quoted the words I said, and raised your specific objection. That would have been helpful.
Instead, you wave your hands and say I need education … well, I do, and I always have. But saying something on the order of "Jeffrey is right because Information Theory, so there!!" is not an adequate teaching method. Nor does it address my objections in the slightest.
Finally, I think you may mean “noisy” when you claim that cell phone signals are chaotic, whereas I mean chaotic as in Mandelbrot.
w.

milodonharlani
September 26, 2013 10:28 am

JA says:
September 26, 2013 at 9:11 am
So, everybody; what caused the Little Ice Age??
What caused the Medieval Warming Period?
Does ANYBODY KNOW?
If a Singular Spectrum Analysis had been carried out of the first 100 years of the Medieval Warming Period, what exactly would that tell us of THE CAUSE of that warming??
Would that analysis shed any light about the subsequent Little Ice Age?
Hello. !!!!!! Anybody there??
—————————————
Please see:
Dr Norman Page says:
September 26, 2013 at 10:13 am

David L. Hagen
September 26, 2013 10:32 am

Thanks Jeffrey for showing the very clear 60 year cycle.
For another 60 year correlation, see: ENSO and PDO Explain Tropical Average SSTs during 1950-2013 September 26th, 2013 by Roy W. Spencer, Ph. D.

the last 60 years was comprised of 30 years of stronger La Ninas (cool conditions) followed by 30 years of stronger El Ninos (warm conditions). . . . You can use statistical linear regression . . . to explain 5-month running average tropical HadSST3 variations as a linear combination of the Multivariate ENSO Index (MEI), the cumulative MEI index since 1950, and the cumulative Pacific Decadal Oscillation (PDO) index since 1950. . . . the largest excursion in the model residuals (variations the model can't explain, 2nd panel) is the cooling caused by the Mt. Pinatubo eruption . . .
The third term in the model equation (time accumulated PDO index) has a smaller influence than the accumulated MEI term, by about a factor of 2.3. . . .
the rate of rise in ocean heat content since the 1950s corresponds to only a 1 in 1,000 imbalance in the radiative energy budget of the Earth (~0.25 W/m2 out of ~240 W/m2).

jeanparisot
September 26, 2013 10:32 am

I tend to agree. We need to concentrate on getting the measurements correct going forward; let's let our great-grandchildren concentrate on the short-term analysis. The last hundred years of data is compromised and the reconstructions before that have anecdotal value; let's ensure the next hundred years of data is good.

davidmhoffer
September 26, 2013 10:45 am

Matthew R Marler;
The 3.7W/m^2 of forcing that results from CO2 doubling falls mostly on water and other wet surfaces
>>>>>>>>>>>>>>>>>>
Actually it doesn't; it never even makes it to the surface for the most part. Radiative forcing from CO2 runs into a layer of water vapour close to the surface that absorbs and re-radiates it. Very little gets to the surface. You need not believe me, I'm just citing the IPCC explanation of the difference between radiative forcing and surface forcing:
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-2-23.html

Matthew R Marler
September 26, 2013 10:56 am

Willis: Actually, the numbers indicate that ON AVERAGE the increased forcing can't be "mostly increased vaporization".
I really am not addressing averages, but what happens at particular places and times. I don’t think the rest of your post rules out the possibility that I mentioned, or the possibility that there is an increase in the rate of the hydrological cycle without an increase in the area of cloud cover.
Downwelling longwave radiation, on the other hand, is measured around the globe, and averages about 330 W/m2.
This means that the majority (about 90%) of the longwave is NOT going into evaporation.

The question I posed is: What does the additional radiation do given the processes that happen every morning and evening, in that 70% and more of the surface that is wet and already experiencing daily evaporation and condensation?
It is related to another question: Given that a non-negligible fraction of surface heat is carried to the upper atmosphere by convection of wet and dry air, how does the transport of heat by convection of wet air get affected by increased downwelling IR? Surely (?) the answer can’t be “no effect at all.”

DesertYote
September 26, 2013 11:15 am

Willis Eschenbach says:
September 26, 2013 at 10:16 am
###
No. I mean chaotic, e.g. the RF signal your cell phone receives in the Santa Rosa Mall. You should see what a grid of metal beams does to an RF signal! Though to be sure, the signal is very noisy also, just like the temperature record.
You have been consistently objecting to Jeffery's methodology. Everything you have written is to this effect. No need to quote a specific. I am making a general observation, pointing out that his methods are successfully used to actually do stuff, in the real world. Don't be a Mosher, reading into my comment things I did not say, such as "Jeffery is right because of Information Theory". What I said was, Jeffery's methods work, and then I cite a few real examples of them working.
I do not write essays. I don't have the time. I have a demanding job and cannot afford the hours it takes to serialize out my thoughts (which is difficult for me). I keep my comments short and general. I expect people who are interested to go out and get their own knowledge. That's what I do. That's how I found out the things I now know about economics. I made a stupid comment, a regular here blasted me. So I did some research.

DesertYote
September 26, 2013 11:26 am

Willis Eschenbach says:
September 26, 2013 at 10:16 am
###
Forgot to add full disclosure:
10 years HP/Agilent, mostly Microwave Spectrum Analyzers. I also know who Jeffery Patterson is. So I might be a bit defensive.

Jeff Patterson
September 26, 2013 11:32 am

Merrick says:
September 26, 2013 at 9:33 am
Sorry, again, I also don't understand your numbers.
That's because I totally mucked them up (that's why they don't let us DSP guys talk about analog front-end specs). Our spectrum analyzers have about a 10 dB noise figure which puts the noise floor at about -165 dBm. With noise correction and averaging we can get an additional 6 dB for a stable signal. At that attenuation setting our front-end mixer can handle about +10 dBm (we'd get some harmonic distortion but we're talking about noise limited dynamic range). So the NFDR is about -181 dB, or about 1 part in 10^9, not 1 part in 10^13 as I erroneously stated in the article. I apologize for the error.

Jeff Patterson
September 26, 2013 12:00 pm

Willis Eschenbach says:
September 26, 2013 at 7:55 am
So finding new methods to filter out the short-term variation doesn't impress me much. There is very little difference, for example, between the red SSA line in your Figure 1, and either a Gaussian or a Loess filter applied to the same data.
This is not surprising - both the Loess filter and SSA are adaptive filters whose characteristic polynomial (i.e. the placement of the filter's poles and zeros) is determined by the data to optimize the signal-to-noise ratio. The similarity between the red curve in my figure 1 and the brown curve in An impartial look at global warming… by M.S.Hodgart a few days ago in my mind reinforces rather than detracts from their validity.

For me, "natural variation" is an un-scientific dodge to avoid saying "we don't have a clue what makes it go up and down". Given that, we also don't know what the shape and nature of the natural variation is … except that it is chaotic.
Hopefully we all agree that the high-pass residual shown in fig. 2 is incapable of masking any long-term persistent AGW signal.

On the scientific dodge part we agree. Kevin Trenberth in a refreshing moment of candor said natural variation is just another way of saying we don't know. I shouldn't have used such a loaded term when what I really meant was the natural, unaltered (by man) climate.
To me, the notion that an emergent, natural phenomenon showed up just at the right time and with the right slope to mask the AGW signal seems far-fetched.
Best regards,
Jeff

Jeff Patterson
September 26, 2013 12:23 pm

Dr. Deanster says:
September 26, 2013 at 9:50 am

Cool Stuff .. but I have a few questions.
Point 1). Your presentation is expressed as 0.6C change/century. And thus, you express that this is evidence that the increase in temp over the next 100 years would be closer to 1C than 3C. However, the basis of the 3C claim is on a doubling of CO2 in conjunction with time. I can see where this type of analysis can tease out a 0.6C change per century in a hindcast, but that does not say what the effect will be from a doubling of CO2 or an increase rate of CO2 concentration.

The projected rise in CO2 under the business as usual scenario to my understanding is more or less exponential. The logarithmic CO2 saturation curve gives rise to the expected linear trend.
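(Spelling out that algebra, with S the sensitivity per CO2 doubling and r the exponential growth rate, symbols mine rather than the post's: if C(t) = C0*exp(r*t), then ΔT(t) = S*log2(C(t)/C0) = S*(r/ln 2)*t, i.e. a ramp with a constant slope.)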

Point 2). Your analysis shows a 0.6C/C increase, but such analysis takes into account all trending changes. Is it not correct to say that the 0.6C increase could have been due to CO2 .. or changes in cloud cover .. or an increase in SWR reaching the ground. I donā€™t see how this analysis serves as evidence for a particular ā€œcauseā€ outside of the noise. To assume that it is CO2 is to assume that ALL other influences are just noise, which is not a supportable argument.

A fair point which I should have highlighted more in my post. The detected signal matches the expected AGW signal in characteristic (roughly linear trend) and detection corner (mid-1960s), but that is not sufficient for attribution. It does, I think, show the unlikelihood of the IPCC business-as-usual projection and places an upper bound on any AGW component.
More important in my mind than determining the exact number is the qualitative conclusion that there is no 3-6 degC/century crisis, and that spending untold billions on mitigations that will negatively impact the quality of life for billions of people in the developing world is unwarranted. This and many other analyses like it show we have time to gather the data we need to make a rational decision. Heck, even the warmists are saying the trend is likely down for at least another decade. In the meantime, we should be spending as much or more on improving the quality of data and on signal detection than on refining models whose usefulness for this task is, in my estimation, at least five decades away.

Jeff Patterson
September 26, 2013 12:32 pm

DesertYote says:
September 26, 2013 at 11:15 am

No. I mean chaotic, e.g. the RF signal your cell phone receives in the Santa Rosa Mall. You should see what a grid of metal beams does to an RF signal! Though to be sure, the signal is very noisy also, just like the temperature record.

Sorry DY, but I don't think it is correct to say that a Faraday shield (your metal grid) induces chaos in a cell phone signal. That would require a high-order non-linearity which I'm not seeing in your analogy. Your main point however is correct. A type of adaptive filter similar to the SSA used here is what achieves the high signal-to-noise ratio that makes cell phone communication (and high speed modems) possible. The difference is that the Kalman filter in your cell phone is dynamically adapting to the data continuously while the SSA filter is adapted to the entire dataset just once.
Regards,
Jeff

John West
September 26, 2013 12:36 pm

Jeff Patterson
Very interesting analysis, thank you.
The “AGW signal” could also be a longer term oscillation on the hundreds of years scale.
Have you thought of running a similar analysis on SSN?

j ferguson
September 26, 2013 1:02 pm

David Ball: "Isn't a better analogy finding a specific needle in a needle stack?"
That’s really good, David. thanks for thinking it and writing it. Bravo

Matthew R Marler
September 26, 2013 1:03 pm

davidmhoffer: Radiative forcing from CO2 runs into a layer of water vapour close to surface that absorbs and re-radiates it.
How close to the surface? Downwelling IR is measured by the TAO buoys.

Dr Norman Page
September 26, 2013 1:04 pm

Jeff you say
” Given that, we also donā€™t know what the shape and nature of the natural variation is ā€¦ except that it is chaotic.”
The climate system is not chaotic on any meaningful time scale. The global temperature has been within narrow limits for at least 500 million years. The latest ephemerides are good back to about 45 million years before the decimal points in the planetary positions diverge chaotically. If you want to see the shape or pattern of the natural variation see Figs 5,6,and 7 of the last post at http://climatesense-norpag.blogspot.com

DesertYote
September 26, 2013 1:12 pm

Jeff Patterson says:
September 26, 2013 at 12:32 pm
###
My example was unclear. I was trying to describe the environment within a typical shopping mall constructed using steel girders, not a Faraday cage.

Nancy C
September 26, 2013 1:23 pm

Sorry for off topic, but Merrick, dBm is usually used to reference power into a specific 600 ohm load. So it's usually understood that 0dBm = .001W but also corresponds to .775 volt (.775/600 x .775 = .001). You're right, technically 0dBm is not .775 volts. On the other hand, if you say you're measuring a -100dBm signal, most likely you're actually measuring its voltage (.775/100,000) and then calling it power because you know it's going into a "fixed" load. How accurately you can measure the power depends on how accurately you can measure the voltage, and since power is a function of the square of voltage, it would be silly to use the number of decimal places in the power to say what the resolution is. Like saying if I can measure length with a certain ruler to 1 part in 1000 I must be able to measure area with that same ruler to 1 part in 1000 x 1000. I think Jeff's updated response reflects resolution in terms of voltage not power, which seems correct to me assuming the other numbers are correct. But sorry again for even bringing it up in the first place.

John Trigge
September 26, 2013 1:28 pm

None of these analyses are of any value unless performed by a ‘climate scientist’. As has been shown with statistics, only their results are of value, no matter how skewed, incorrect, misguided (or wrong) they are.

John Trigge
September 26, 2013 1:28 pm

Please forgive my ‘end sarc’ not appearing.

RC Saumarez
September 26, 2013 1:35 pm

This is an interesting post.
I have never used SSA and have just read an account of its mathematics. I have two reservations about the conclusions presented here.
1) SSA is an arbitrary linear decomposition of the lagged covariance matrix. At first sight, a ramp due to anthropogenic influences might be expected to be represented as an eigenvector. However, if the pCO2 has been rising approximately exponentially, there would be expected to be a linear increase in forcing. Since the first few components of the decomposition account for much of the shape of the temperature curve, how can one be certain that an anthropogenic component is not represented in the eigenvalue of the most linear component rather than a separate orthogonal component? I realise that you have done simulations with a ramp but if this has a distinctly different time of onset, I am not certain that this is a sufficient test to eliminate the anthropogenic component.
2) The decomposition is linear and therefore it is not clear how multiplicative effects would be represented. For example, considering the effects of aerosols and/or dust, one could assume that these are linear, superimposable forcings. However, if they have a multiplicative effect on the total energy in the system, then trying to detect their contribution to the temperature signal is far more difficult and would involve a homomorphic deconvolution, which given the data would be extraordinarily difficult. This is a more general comment but one could argue that given the non-linear processes in climate, the anthropogenic component may not be detectable without non-linear modelling. (I am sure that there are those who would say this.)
I’m happy to be shown to be wrong on this

Jeff Patterson
September 26, 2013 1:36 pm

Dr Norman Page says:
September 26, 2013 at 1:04 pm
Jeff you say
ā€ Given that, we also donā€™t know what the shape and nature of the natural variation is ā€¦ except that it is chaotic.ā€
=======================================
That was not my quote but rather it was from Willis Eschenbach September 26, 2013 at 7:55 am

Dr Norman Page
September 26, 2013 1:43 pm

Jeff - sorry, consider the comment readdressed to Willis.

Jeff Patterson
September 26, 2013 1:51 pm

@RC Saumarez September 26, 2013 at 1:35 pm
SSA can be subjective, but as the article points out, this particular use of it is not. L is set to its max value to give the highest spectral resolution, k is set for best SNR.
One of the nice things about SSA is that any line can be transparently subtracted from the data and simply added back in, point by point, to the trend mode if desired. As pointed out, the analysis is done on the detrended data. Removing the line of regression doesn't affect the slope analysis, as it is a common mode.
Regards,
Jeff

Ursus Augustus
September 26, 2013 2:19 pm

So basically a bunch of desert dwellers finally made it to the coast at around low tide and completely freaked out as they saw the sea level rising and by mid tide became quite completely hysterical. They turned on those amongst them who looked at the signs on the beach that shouted out a cyclical phenomenon. Come high tide they are still chanting their chants, dancing their sacred dances and threatening to sacrifice the rationalists to the sea god/demon. They even blame the rationalists for causing the tide to come in by starting a cooking fire using driftwood they now believe to be sacred and cursed.

Editor
September 26, 2013 2:31 pm

davidmhoffer says:
September 26, 2013 at 10:45 am

Matthew R Marler;

The 3.7W/m^2 of forcing that results from CO2 doubling falls mostly on water and other wet surfaces

>>>>>>>>>>>>>>>>>>
Actually it doesn't; it never even makes it to the surface for the most part. Radiative forcing from CO2 runs into a layer of water vapour close to the surface that absorbs and re-radiates it. Very little gets to the surface. You need not believe me, I'm just citing the IPCC explanation of the difference between radiative forcing and surface forcing:
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-2-23.html

I see absolutely nothing in the citation about what you claim. Here’s what it says:

Figure 2.23. Globally and annually averaged temporal evolution of the instantaneous all-sky RF (bottom panel) and surface forcing (top panel) due to various agents, as simulated in the MIROC+SPRINTARS model (Nozawa et al., 2005; Takemura et al., 2005). This is an illustrative example of the forcings as implemented and computed in one of the climate models participating in the AR4. Note that there could be differences in the RFs among models. Most models simulate roughly similar evolution of the LLGHGs' RF.

Your citation says nothing, zero, about longwave radiation somehow being captured and re-radiated by a layer of water vapor just above the surface.
The downwelling longwave is indeed absorbed and re-radiated on its way downwards. It does reach the earth, however. Remember that after each absorption, it is eventually re-radiated half upwards and half downwards. Also, a similar process is happening with the upwelling longwave radiation.
w.

Editor
September 26, 2013 2:53 pm

Dr Norman Page says:
September 26, 2013 at 1:04 pm

Jeff you say
"Given that, we also don't know what the shape and nature of the natural variation is … except that it is chaotic."

The climate system is not chaotic on any meaningful time scale. The global temperature has been within narrow limits for at least 500 million years. The latest ephemerides are good back to about 45 million years before the decimal points in the planetary positions diverge chaotically. If you want to see the shape or pattern of the natural variation see Figs 5, 6 and 7 of the last post at http://climatesense-norpag.blogspot.com

Thanks for re-addressing this to me, Norman. I fear your claim about the climate is disputed by none other than Benoit Mandelbrot himself … see here for his discussion of the issues.
w.

September 26, 2013 3:01 pm

Thanks Jeff.
You may want to examine Central England Temperatures that date back to 1659.
Here is one data source - I am uncertain if it is the best one, and I cannot comment on the existence or absence of a warming bias in CETs.
The CET website says "The mean daily data series begins in 1772 and the mean monthly data in 1659. … Since 1974 the data have been adjusted to allow for urban warming."
http://www.metoffice.gov.uk/hadobs/hadcet/
Regards, Allan

Editor
September 26, 2013 3:14 pm

Dr Norman Page says:
September 26, 2013 at 1:04 pm

If you want to see the shape or pattern of the natural variation see Figs 5, 6, and 7 of the last post at http://climatesense-norpag.blogspot.com

I checked your "30 Year Forecast". It contains forecasts like this one:

At this time the sun has entered a quiet phase with a dramatic drop in solar magnetic field strength since 2004. This suggests the likelihood of a cooling phase on earth, with Solar Cycles 21, 22, 23 equivalent to Solar Cycles 2, 3, 4, and the delayed Cycle 24 comparable with Cycle 5, so that a Dalton type minimum is probable." …

"There will be a steeper temperature gradient from the tropics to the poles so that violent thunderstorms with associated flooding and tornadoes will be more frequent in the USA"

Last year, lots of tornadoes. You declared the prediction a grand success. This year, almost no tornadoes … is your prediction now a failure? And there has been a long-term, sustained dearth of hurricanes. Are thunderstorms more violent? I haven’t seen one study to say so … but none of that is the real problem.
The problem is that without numbers your forecasts are meaningless. Without value. Worthless. Why? Because they are not falsifiable. You have not specified the time period, the numbers, none of that. Since they cannot be falsified they are not forecasts at all.
So you might as well scrap your whole "30-Year Forecast" and start over. This time, make a real prediction. What's a real prediction? Here's one:

During two of the three years 2010-2013, the number of tornadoes of F2 strength or better in the US will exceed 127.

Do you see the difference? At the end of 2013, we can say with surety whether my forecast is right or wrong.
But your pseudo-cast, that “tornadoes will be more frequent in the USA”, that says nothing. During what time period? How strong a tornado? More frequent than what?
Scrap it and start over; I don't see a real forecast in the whole lot. Remember … if it can't be falsified, it's not a forecast, so you need numbers, numbers, numbers …
w.

milodonharlani
September 26, 2013 3:14 pm

Willis Eschenbach says:
September 26, 2013 at 2:53 pm
Mandelbrot and Wallis (1969) wrote before Hays, Imbrie & Shackleton (1976) confirmed Milankovitch cycles in what Mandelbrot would have classified as paleoclimatology. It could well be that orbital mechanics influence climatic & meteorological phenomena on time scales both longer & shorter than 10,000 to 100,000 years, but Earth's movements do seem to dominate on the order of 100,000 years.
It also may be that on the scale of decades to millennia climate is indeed chaotic, although that hypothesis is IMO by no means strongly supported, though not yet falsified. But neither has the hypothetical existence of quasi-periodic waves like D-O & Bond cycles been falsified. Observed multi-decadal climatic phenomena such as the PDO & AMO, & centennial to millennial events like the Holocene Climatic Optimum and the Minoan, Roman, Medieval & Modern Warm Periods, with intervening Cold Periods, IMO tend to support the reality of cycles rather than noise amid the chaos.
I agree with those who have written here that climatology would be better served by trying to discover possible causes for these observed quasi-cycles instead of constructing ever more Ptolemaic-epicycle-like GIGO models.

September 26, 2013 3:57 pm

Willis, I'm interested in forecasting climate. I don't think you read the post to the end; the last post was just the latest in a series on this subject going back about a year, to show where I'm at. Here below is the conclusion. You ask for falsifiability. My forecasts would be seriously in question if there is not 0.15–0.2 degrees of cooling in the global SSTs by 2018-20.
Re weather: obviously the short term weather events will vary considerably during decadal time spans and in different geographical regions. On a cooling world the jet stream moves more meridionally, with the formation of blocking highs which bring cold fronts and cold air further south in winter and can develop high-temperature highs in summer. Similarly, warm moist air streams can move further north around the highs. Thus on a cooling and generally dryer planet there is a much greater temperature gradient, particularly across the fronts, with all that that implies with regard to weather. This type of weather pattern has been more frequent over the last several years. Major class 4 and 5 hurricanes will be less frequent on a cooling world; Sandy, e.g., was barely a hurricane at all when it went ashore. Nothing in the U.S. since Katrina. Enough on the weather.
Re Mandelbrot: nowhere in the long quote you linked to does he say anything I would disagree with; in particular, in this quote he doesn't say the climate is chaotic. Here's the cooling forecast, which is reasonably specific.
"To summarize: using the 60 and 1000 year quasi-repetitive patterns in conjunction with the solar data leads straightforwardly to the following reasonable predictions for Global SSTs:
1 Continued modest cooling until a more significant temperature drop at about 2016-17
2 Possible unusual cold snap 2021-22
3 Built-in cooling trend until at least 2024
4 Temperature HadSST3 moving average anomaly 2035 – 0.15
5 Temperature HadSST3 moving average anomaly 2100 – 0.5
6 General conclusion: by 2100 all the 20th century temperature rise will have been reversed.
7 By 2650 earth could possibly be back to the depths of the Little Ice Age.
8 The effect of increasing CO2 emissions will be minor but beneficial: they may slightly ameliorate the forecast cooling, and more CO2 would help maintain crop yields.
9 Warning!!
The Solar Cycles 2, 3, 4 correlation with Cycles 21, 22, 23 would suggest that a Dalton minimum could be imminent. The Livingston and Penn solar data indicate that a faster drop to Maunder Minimum Little Ice Age temperatures might even be on the horizon. If either of these actually occurs there would be a much more rapid and economically disruptive cooling than that forecast above, which may turn out to be a best case scenario.
How confident should one be in the above predictions? The pattern method doesn't lend itself easily to statistical measures. However, statistical calculations only provide an apparent rigor for the uninitiated, and in relation to the IPCC climate models they are entirely misleading, because they make no allowance for the structural uncertainties in the model set-up. This is where scientific judgment comes in; some people are better at pattern recognition and meaningful correlation than others. A past record of successful forecasting such as indicated above is a useful but not infallible measure. In this case I am reasonably sure, say 65/35, for about 20 years ahead. Beyond that, certainty drops rapidly. I am sure, however, that it will prove closer to reality than anything put out by the IPCC, the Met Office or the NASA group. In any case this is a Bayesian type forecast, in that it can easily be amended on an ongoing basis as the temperature and solar data accumulate.

richardscourtney
September 26, 2013 4:00 pm

Dr Norman Page:
I read your post at September 26, 2013 at 3:57 pm.
Please say if you intend to sell your ‘forecasts’ commercially.
Richard

Henry Clark
September 26, 2013 4:23 pm

Unfortunately, any calculation, however much mathematical analysis may be performed, is limited by its input data and starting assumptions. What is the source of the SST data used in this article? It is probably HADCRUT3, as in the last article, which is published by CRU (infamous for Climategate). That may be moderately less fudged than the later publication HADCRUT4, making it somewhat the better choice, but that doesn't make it an accurate one.
A problem is that activists like them and Hansen's GISS have repeatedly rewritten decades-old thermometer readings towards cooling the early/mid 20th century and warming the late 20th century. So the 0.6 degrees/century is a spurious result (or, most favorably, something like an upper limit) compared to what history was before activists rewrote it, as illustrated in http://img176.imagevenue.com/img.php?image=81829_expanded_overview_122_424lo.jpg
I appreciate the work done by Mr. Patterson, but I believe that any time the likes of HADCRUT is used, even if merely because it was the most readily accessible digital source (since the alarmists have the most money for spreading such on the internet), there should be prominent disclaimers to readers.

Bill Illis
September 26, 2013 4:39 pm

Instead of using some artificial pseudo-cycle, why not use a real 60 year ocean oscillation, like the AMO for example.
Just see how close the raw, undetrended AMO is to HadCRUT4. There is some scary correlation here, and it also helps explain some of the large ENSO spikes that have occurred over time.
http://s18.postimg.org/9uar3ow0p/Hadcrut4_vs_Raw_AMO.png
And this is monthly data rather than annual. I think one must use monthly data, because the natural oscillation cycles such as ENSO and the AMO operate on monthly time-scales, not annual.
Furthermore, the "noise" in the climate system actually operates on a 2 week to 1 month basis. The large global temperature excursions seem to last for about 2 weeks at a time (while the highest resolution data is monthly, so I guess one has to use what is available).
See the daily UAH lower troposphere temps in 2012 and 2013 to see what I mean about that last statement.
http://s22.postimg.org/hd5dsb4gx/Daily_UAH_12_13_Aug13.png
Or Ryan Maue’s compilation of daily surface temperatures provided by the NCEP CFSv2 weather model.
http://models.weatherbell.com/climate/cfsr_t2m_2012.png

September 26, 2013 5:02 pm

Richard Courtney, I would be happy to consult on climate matters and forecasts on a commercial basis for anybody interested in my advice.

September 26, 2013 5:10 pm

Willis;
Your citation says nothing, zero, about longwave radiation somehow being captured and re-radiated by a layer of water vapor just above the surface.
>>>>>>>>>>>>>>>>>>>>>>
My bad. I meant the cite to show that the IPCC differentiates between radiative forcing and surface forcing, and that the surface forcing due to CO2, by their own numbers, is small by comparison. I should have left the comment about water vapour being the reason out, as the cite doesn't in fact address why that is, only that it is. AR4 WG1 2.2 also goes to some length to explain that radiative forcing is very different from surface forcing:
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch2s2-2.html
I've brought this up in discussion with RGB, but the best explanation that I got (from the warmist perspective) came from Joel Shore. His position is that, water vapour being much more plentiful at low altitudes, it in fact suppresses radiative forcing in both directions: it absorbs LW coming up and sends half back down, but it also absorbs LW coming down and sends half back up. Since CO2 concentrations relative to water vapour at low altitude are insignificant, while at high altitude, where water vapour concentration drops off, CO2's effects are more pronounced from a radiative forcing perspective, CO2 should in theory have a larger effect on temperature at altitude than at the surface. Surface temps then rise not because of surface forcing, but because the air column, being warmed at altitude, becomes more stable, and via the lapse rate surface temps rise.
That explanation is also unsatisfactory to me, and I will freely admit that I'm commenting from memory and that a couple of paragraphs doesn't do the topic justice. That said, my main point was to draw attention to the fact that the 3.7 W/m2 is radiative forcing, which is a different number from surface forcing.

richardscourtney
September 26, 2013 5:16 pm

Dr Norman Page:
In your post at September 26, 2013 at 5:02 pm you say

Richard Courtney, I would be happy to consult on climate matters and forecasts on a commercial basis for anybody interested in my advice.

Sincere thanks for your honesty and openness.
If you offer only clear and falsifiable forecasts then that would be useful information on WUWT.
We recently had a person posting vague forecasts on WUWT with the clear intent of using WUWT as a free advertising medium for his intended forecasting business. Given that you have a commercial interest, please only make specific and falsifiable forecasts on WUWT. Posting vague forecasts could be understood as a misuse of our host's generosity, and that could lead to loss of permission for the posting of forecasts on WUWT, which would be a loss.
Again, genuine thanks for your frankness, and I hope you accept the sincerity of my request and the reason for it.
Richard

September 26, 2013 5:17 pm

Matthew R Marler says:
September 26, 2013 at 1:03 pm
davidmhoffer: Radiative forcing from CO2 runs into a layer of water vapour close to the surface that absorbs and re-radiates it.
How close to the surface? Downwelling IR is measured by the TAO buoys.
>>>>>>>>>>>>>>>>>>
In part please see my response to Willis above. Yes, the buoys measure downwelling LW, but they don't know where any given photon came from. It could have originated 1 cm, 1 m, or 1 km above the buoy. It could have come from a CO2 molecule or a water vapour molecule or some other source, but the buoy doesn't know which ones are which. It only knows total energy flux, frequency and wavelength. The issue here is that water vapour at the surface in the tropics runs in the 30,000 ppm+ range to CO2's 400. But higher altitudes and higher latitudes have colder temperatures, water vapour plummets, and so CO2's effects are more pronounced.

DesertYote
September 26, 2013 5:38 pm

Nancy C says:
September 26, 2013 at 1:23 pm
###
Power meters are generally bolometric devices.

Mike Wryley
September 26, 2013 6:31 pm

Nancy
dBm: dB relative to a milliwatt, more likely in the context of 50 ohm RF environments.
The 600 ohm reference once common in balanced audio environments dates both of us.

Matthew R Marler
September 26, 2013 7:12 pm

Dr Norman Page: The global temperature has been within narrow limits for at least 500 million years.
That is compatible with a chaotic system that generates a bounded distribution of states.
davidmhoffer: In part please see my response to Willis above.
I read it. Thanks.

September 26, 2013 7:18 pm

Notwithstanding all the work done on facts to disprove the IPCC, Al Gore, Michael Mann, Joe Romm et al., best to keep in mind you're not in a haystack of hay; you're in a haystack of needles made of lies and fraud.
What they have is well-honed lies, protected by liars telling lies to keep you busy deconstructing the last pack of lies. It is just busy work.
It is about getting the low-information voters to vote for the redistribution of wealth.
If you prove up that it is a lie and you're 1000% correct, they will remain standing, telling new lies about another "ice age" or "wild winds of change" or "no water vapor left for clouds" or some such line of bull.
Pull their taxpayer-paid tickets with votes.

Editor
September 26, 2013 9:43 pm

Dr Norman Page says:
September 26, 2013 at 3:57 pm

Willis, I'm interested in forecasting climate. I don't think you read the post to the end; the last post was just the latest in a series on this subject going back about a year, to show where I'm at. Here below is the conclusion.

To summarize: using the 60 and 1000 year quasi-repetitive patterns in conjunction with the solar data leads straightforwardly to the following reasonable predictions for Global SSTs:
1 Continued modest cooling until a more significant temperature drop at about 2016-17

Norman, seriously … that is not a forecast. “Modest cooling” and “more significant cooling” are not falsifiable.

2 Possible unusual cold snap 2021-22

Again, no numbers, not falsifiable, useless.

3 Built-in cooling trend until at least 2024

Starting when? How much of a trend?

4 Temperature HadSST3 moving average anomaly 2035 – 0.15

0.15 or more? 0.15 or less? How long a moving average?

5 Temperature HadSST3 moving average anomaly 2100 – 0.5

How long an average? 0.5 or more? 0.5 or less?

6 General conclusion: by 2100 all the 20th century temperature rise will have been reversed.

Numbers! All of this needs numbers!

7 By 2650 earth could possibly be back to the depths of the Little Ice Age.

Oh, please. The forecast for 2100 was bad enough. This is a joke. Meaningless. Fluff.

8 The effect of increasing CO2 emissions will be minor but beneficial: they may slightly ameliorate the forecast cooling, and more CO2 would help maintain crop yields.

Not falsifiable.
Norman, think of it as a bet. If you were going to bet, would you bet on something as vague as whether there would be "modest cooling"? No way, you'd be arguing endlessly about whether -0.1°C per decade is "modest" or not.
As it stands, I wouldn’t bet on a single one of those “forecasts”, because I couldn’t tell if I’d won or lost.
Best regards,
w.

Editor
September 26, 2013 9:49 pm

davidmhoffer says:
September 26, 2013 at 5:17 pm

Matthew R Marler says:
September 26, 2013 at 1:03 pm

davidmhoffer:

Radiative forcing from CO2 runs into a layer of water vapour close to the surface that absorbs and re-radiates it.

How close to the surface? Downwelling IR is measured by the TAO buoys.

>>>>>>>>>>>>>>>>>>
In part please see my response to Willis above. Yes, the buoys measure downwelling LW, but they don't know where any given photon came from. It could have originated 1 cm, 1 m, or 1 km above the buoy.

In my Bible, "Climate Near the Ground," they say that 72% of the DLR is coming from the bottom 87 metres, 6.4% from the next 89 metres, 4% from the next 90 metres, and so on.
w.

September 26, 2013 10:55 pm

Willis;
In my Bible, "Climate Near the Ground," they say that 72% of the DLR is coming from the bottom 87 metres, 6.4% from the next 89 metres, 4% from the next 90 metres, and so on.
>>>>>>>>>>>>>>>>>>
Well that would leave precious little to come from CO2, the bulk of which is well above the first 300 meters or so, assuming it is reasonably well mixed? I tried to look this up in my own bible, which I think is different from yours because it just said something about going forth and multiplying upon the face of the earth, so I went outside and scratched 6×7=42 in the dirt. Then I just contemplated for a time on life, the universe, and everything.

Bill from Nevada
September 26, 2013 11:47 pm

I have long been interested in the metrics of the global warming claimants' refrains. Only a couple of days ago I saw Ars Technica mentioned as a bastion of censoring in the name of CO2, went over there, and saw for myself the zeal with which any reference to instrumental readings is treated as absolute anathema by warmers.
The absolute inability of the climate changers to predict which way a thermometer will move has always been the end of their claim. They can't, it's that simple, and t.h.e.y. d.e.s.p.i.s.e. instruments.
For this reason only hypotheticals and politics are allowed at warmer sites in general; at least that's the way it has been going, and how I saw it over there most recently.
It might have been here that I actually saw them mentioned as a warmer battalion bastion.

September 27, 2013 2:43 am

Surely this is recurrent periodic Scafettian cyclomania? Even Anthony Watts said so …
Of course any post-1950 step function or singularity COULD be simply the influence of a decadal, centennial, millennial and epochal cycle all coinciding (or not).
Sadly we don't have trustworthy data, especially data with high frequencies in it, going back very far.

September 27, 2013 3:31 am

"I agree with those who have written here that climatology would be better served by trying to discover possible causes for these observed quasi-cycles instead of constructing ever more Ptolemaic-epicycle-like GIGO models."
But you neglect to point out that it was only many centuries after the Ptolemaic epicycles had been thoroughly mapped that the simplification to 9-10 elliptical orbits was possible.
Look: I do understand signal analysis. I don't understand the mindset of the posters here.
All Fourier and related analyses do is transform the data from a representation based around a time axis to one based around a frequency axis, so that if the data does represent a series of superposed cycles, that should become fairly obvious by inspection. It is not even a curve FITTING exercise; it is simply the data represented in a different way. If Ptolemy had done Fourier analysis on planetary motions he would have seen orbital periods staring him in the face.
The value of doing this is to establish first of all whether there are cyclical variations, and to demonstrate their fit to the data by isolating the dominant ones and reconstructing a time series.
Not surprisingly, the fit is good. But all that demonstrates is that there is a cyclically modellable component in the data.
There may also be a residual error that relates to something like CO2. Or perhaps another, much longer period cycle. The technique can't, without extension into deep time, establish the difference.
But what it CAN do, and HAS done, is to say that once the data is presented as the sum of cyclic components, the case for at least some cyclical drivers of climate cannot be avoided.
Reconstruction of the most important 4 frequency 'bins' gives a good fit. That would not happen if those bins held only a small percentage of the whole spectral energy.
The fact that it's only 4 is encouraging. The more chaotic the signal, the less the energy is coherent with respect to specific frequencies. That is a point people do not seem to understand. That is, this technique will not properly reconstruct a chaotic signal using just a few bins out of the many available. Ergo the point made here: temperature is not wholly chaotic; it has a clearly discernible cyclicity.
Likewise the existence of a slow influence, such as that presumed for CO2, is in fact indistinguishable from a very long period cyclic variation over the period for which we have measured the (CO2) rise.
So sadly we can't say whether it's CO2 or not by this method. But what we can say is that IF the cyclic nature of the data, as evinced by the data itself, is considered to represent something beyond mere coincidence, then whatever the cycles are, they can account for the larger part of recent climate variation, and an upper bound is set on CO2's actual and potential effects. Which is important.
In short, cyclomania does have sensible implications. It is turning unknown unknowns into known unknowns.
Something is happening here, even if we don't know what it is, Mr Jones …
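(For anyone who wants to see the bin-reconstruction argument above in action, here is a minimal Python sketch on a synthetic series; the series, the noise level and the four-bin choice are illustrative assumptions, not the SST analysis from the post.)

    import numpy as np

    # Synthetic series: two clean cycles plus noise (illustrative only).
    rng = np.random.default_rng(0)
    t = np.arange(512)
    x = (np.sin(2 * np.pi * t / 64)
         + 0.5 * np.sin(2 * np.pi * t / 32)
         + 0.3 * rng.standard_normal(t.size))

    # Go to the frequency domain, keep only the 4 strongest bins, zero
    # the rest, and transform back.
    X = np.fft.rfft(x)
    keep = np.argsort(np.abs(X))[-4:]
    Xk = np.zeros_like(X)
    Xk[keep] = X[keep]
    x_hat = np.fft.irfft(Xk, n=t.size)

    # If the series really is a sum of a few cycles, a handful of bins
    # captures most of the variance; a chaotic series would not
    # compress this way.
    print("variance explained:", round(1 - np.var(x - x_hat) / np.var(x), 3))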

Paul Vaughan
September 27, 2013 4:29 am

Leo Smith (September 27, 2013 at 3:31 am) wrote:
"The more chaotic the signal, the less the energy is coherent with respect to specific frequencies. That is a point people do not seem to understand."
Sensible discussion of Jeffery S. Patterson’s contributions is impossible here due to the level of ignorance &/or deception.

RC Saumarez
September 27, 2013 5:32 am

@Jeffrey S Patterson
Thanks for your response. I think that there is a very interesting conceptual problem. I developed a somewhat bastardised version of a principal component method to extract signal from noise, and I think that one of the conclusions I reached applies here. The problem stems from how we think about orthogonality, particularly if you have gone through the classical DSP training in the mediaeval period (i.e. the 1970s) as I did.
In SSA one forms the covariance matrix of the lagged signal. Since this is real symmetric, the eigenvectors are real and orthogonal, and you then compose signal components on this basis. The problem, which took me ages to see, is that the extraction of orthogonal components is based on energy in the signal, not cross-spectral power. However, this does not imply that the components one extracts are orthogonal in the conventional sense that the integral of their product is zero.
One of the quoted properties of SSA is that given a signal:
f(t) = exp(-kt)·sin(at)
SSA can decompose the signal into the exponential decay and the sine wave. (I've just tried it, and it appears to.) However, exp(-kt) and sin(at) are NOT orthogonal functions, and the decomposition is not unique. For example, if one expands this by Maclaurin's theorem one could generate a family of functions that describe it, and if one expands the cross components to 7 terms one would get a pretty good representation of the signal, each with appropriate coefficients, but the even terms would not be part of the sin(at) component.
This is why I am rather sceptical about being able to distinguish an anthropogenic component by SSA: simply, I do not think one can assume that there MUST be separation between them.
In fact, unless the anthropogenic component were a very peculiar shape, such as a series of pulses, I am sceptical that one can separate it out using DSP methods at all; it is more likely that a model-based approach is sounder. I hasten to add that one would need a rigorously verified model before making any pronouncements, and the current crop of models have some pretty severe predictive limitations!
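(A toy Python run of the damped-sine case discussed above; the window length and the two-mode grouping are illustrative choices, not the configuration used in the post.)

    import numpy as np

    # Damped sine per Saumarez: f(t) = exp(-k t) * sin(a t).
    N, L = 200, 100
    t = np.arange(N)
    x = np.exp(-0.01 * t) * np.sin(0.3 * t)

    # Embed into the L x K trajectory (Hankel) matrix and take the SVD.
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)

    def reconstruct(modes):
        """Sum the chosen rank-1 terms and diagonal-average back to a series."""
        Xr = sum(s[m] * np.outer(U[:, m], Vt[m]) for m in modes)
        out, cnt = np.zeros(N), np.zeros(N)
        for i in range(L):
            for j in range(K):
                out[i + j] += Xr[i, j]
                cnt[i + j] += 1
        return out / cnt

    # A damped sinusoid satisfies a second-order recurrence, so the
    # leading pair of modes reproduces it almost exactly; note that the
    # reconstructed components themselves need not be mutually
    # orthogonal even though the eigenvectors are.
    r = reconstruct([0, 1])
    print("rms error of 2-mode reconstruction:", np.sqrt(np.mean((x - r) ** 2)))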

September 27, 2013 6:10 am

Willis, you are being deliberately obtuse, or misleading, or misunderstanding the numbers. The forecast temperatures clearly refer to the Fig 8 SST Global Temperature anomaly in the last post at http://climatesense-norpag.blogspot.com
Thus the 2035 anomaly number is minus 0.15 and the 2100 number is minus 0.5.
For some reason you didn't recognize the minus sign.
The earlier 3 forecasts are general trends and events in the context of the forecast of the minus 0.15 anomaly in 2035. The 2650 comment follows logically from a repeat of the 1000 to 2000 cycle.
I say later that at this time this forecast is speculative, but it is by no means meaningless.
You can't replace something with nothing. What would your best shot at the Global HadSST3 numbers for 2035, 2100 and 2650 be?

September 27, 2013 6:28 am

RC Saumarez says:
September 27, 2013 at 5:32 am
Thank you for your remarks. Unfortunately I can't quite parse a key paragraph.

In SSA one forms the covariance matrix of the lagged signal. Since this is real symmetric, the eigenvectors are real and orthogonal, and you then compose signal components on this basis. The problem, which took me ages to see, is that the extraction of orthogonal components is based on energy in the signal, not cross-spectral power. However, this does not imply that the components one extracts are orthogonal in the conventional sense that the integral of their product is zero.

I think you are saying that the extracted eigenvectors are orthogonal (which of course they are) but the reconstruction may not be. This is the leakage issue I wrote of in the article, which I don't think is an issue here. SSA is just a type of filter, and just as there is no unique way to set the characteristic polynomial of an FIR, and there is leakage between the pass band and stop bands, so too in SSA there are many adaptations possible, and the residuals along each eigenvector (which are minimized by the algorithm) are not necessarily independent. Those effects are mitigated here by fixing L to its max value and setting k to provide the best SNR. The ACF of the residual is very impulsive (implying minimum cross-mode correlation) and passes the Unit Root Test with p=0.
It is, I think, important that we be able to derive the SSA's impulse response to ensure it is not ringing. One can't do this in the normal way, because the filter re-adapts to whatever signal you feed it! I'm working out a technique to use the extracted eigensystem to calculate the transfer function directly.
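(A sketch of the two residual diagnostics mentioned, using statsmodels on stand-in white noise; `resid` here is not the actual SSA residual from the analysis.)

    import numpy as np
    from statsmodels.tsa.stattools import acf, adfuller

    rng = np.random.default_rng(1)
    resid = rng.standard_normal(500)  # stand-in for the SSA residual

    # An "impulsive" ACF: near 1 at lag 0, near 0 elsewhere.
    print("lag 1-3 autocorrelations:", np.round(acf(resid, nlags=3)[1:], 3))

    # Augmented Dickey-Fuller unit root test; p ~ 0 rejects a unit root,
    # i.e. the residual looks stationary.
    stat, pvalue = adfuller(resid)[:2]
    print("ADF p-value:", pvalue)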

September 27, 2013 6:44 am

In my last post I said "I think you are saying that the extracted eigenvectors are orthogonal (which of course they are) but the reconstruction may not be." I meant that the components comprising the reconstruction may not be orthogonal. I'm not sure this is correct, but I do know that the residuals of each component are not guaranteed to be independent. Perhaps that's two ways of saying the same thing; I'll have to cogitate on that a bit. Filters introduce sample-to-sample correlation; there's no way around that. That doesn't mean the passband signal doesn't represent the input signal with improved SNR.

September 27, 2013 6:49 am

Leo Smith says:
September 27, 2013 at 2:43 am
Surely this is recurrent periodic Scafettian cyclomania?
How so? We've said nothing about cycles and made no projections. We've only separated, to the greatest extent possible, those components which could have an AGW component from those which could not, and examined the resulting slope over time. I'm not seeing your analogy with Scafetta.

September 27, 2013 7:11 am

There is nothing wrong with Scafetta's cycles except that he hasn't gone to low enough frequencies, i.e. the 1000 year cycle. If he included that, his projections would, I think (using the useful eyeball method), be very similar to my own; see above. I think Jeff's approach is helpful, and I urge him again to use it on the 2000 year Christiansen proxy temperature reconstruction data, which I believe is archived on line. See Fig 3 and the accompanying link in the last post at http://climatesense-norpag.blogspot.com

September 27, 2013 7:42 am

Sorry y'all, the Fig number in my comment at 7:11 should be 7.

September 27, 2013 7:59 am

Dr Norman Page says:
September 27, 2013 at 7:11 am
… [I] urge him again to use it on the 2000 year Christiansen proxy temperature reconstruction data, which I believe is archived on line.
I’ve looked for the dataset but haven’t been able to locate it. Any pointers?

September 27, 2013 8:02 am

Reblogged this on The Montpelier Monologues and commented:
Here's a reblog of my article of 9/25/2013 posted on WUWT. I'll follow up here with discussions of the method and follow-on analysis.

September 27, 2013 8:22 am

Jeff, I'm pretty sure this is it. You need to read the Christiansen paper carefully and check the NOAA archive to make sure this is what was used in my Fig 7.
ftp://ftp.ncdc.noaa.gov/pub/data/paleo/contributions_by_author/ljungqvist2009/ljungqvist2009recons.txt

September 27, 2013 8:31 am

Looking more closely, I think the NOAA data is the original from which my Fig 7 was compiled, as described in the paper. You might consider emailing the authors and politely requesting the actual annual data file for my Fig 7, i.e. Fig 5 in the original paper.

Matthew R Marler
September 27, 2013 8:57 am

Willis Eschenbach: In my Bible, "Climate Near the Ground," they say that 72% of the DLR is coming from the bottom 87 metres, 6.4% from the next 89 metres, 4% from the next 90 metres, and so on.
I want to thank you and davidmhoffer for your comments, but I am stuck where I began. Doubling the atmospheric CO2 concentration will produce some increase in the net downwelling IR at the surface. If it doesn’t (and some people do indeed argue that it can’t, but I don’t believe them), then doubling CO2 will not change the (equilibrium) surface temperature. Now over dry land the effect is to increase surface temperature. So my question remains: over ocean and wet land, how much of the increased downwelling energy goes to sensible heating, and how much to vaporization of the water?
davidmhoffer’s argument is that there is so much more water vapor than CO2, even after doubling CO2, that the increased downwelling of IR at the surface can’t be very much. Indeed, no one has argued that it is very much: the predicted equilibrium effect is only 1/2% (1.3K/288K), but a case that it is 0 is not complete, and not believable.

Matthew R Marler
September 27, 2013 8:58 am

Jeffery S. Patterson, you are a good sport to put your work up here for examination and criticism, and to defend it.

Matthew R Marler
September 27, 2013 9:03 am

Dr. Norman Page: You can't replace something with nothing. What would your best shot at the Global HadSST3 numbers for 2035, 2100 and 2650 be?
How about the statement that, on present knowledge, the Global HadSST3 numbers for 2035, 2100 and 2650 can’t confidently be predicted with sufficient accuracy for the prediction to matter? A clear statement of what isn’t known is not “nothing”.

Editor
September 27, 2013 9:27 am

Dr Norman Page says:
September 27, 2013 at 6:10 am

Willis, you are being deliberately obtuse, or misleading, or misunderstanding the numbers. The forecast temperatures clearly refer to the Fig 8 SST Global Temperature anomaly in the last post at http://climatesense-norpag.blogspot.com

No, they don’t “clearly” refer to that or I would have seen it. In any case, they still lack specificity.

Thus the 2035 anomaly number is minus 0.15 and the 2100 number is minus 0.5.
For some reason you didn't recognize the minus sign.

Norman, as I said, your claims are far too vague to be falsified. For example, in this particular case you said:

4 Temperature HadSST3 moving average anomaly 2035 – 0.15

If I said, "No, your forecast was wrong," then you could simply pick a different time period for your moving average. Or you could say that you meant a centered moving average, not a trailing moving average.
I say again: if it can't be falsified, it's not a forecast … and by and large what you call "forecasts" are totally and completely unfalsifiable.

The earlier 3 forecasts are general trends and events in the context of the forecast of the minus 0.15 anomaly in 2035. The 2650 comment follows logically from a repeat of the 1000 to 2000 cycle.
I say later that at this time this forecast is speculative, but it is by no means meaningless.
You can't replace something with nothing. What would your best shot at the Global HadSST3 numbers for 2035, 2100 and 2650 be?

I say that any "forecast" for 2650 is speculative, meaningless, and a joke. I say that if you think a forecast for 2650 is a valid forecast in any sense of the word, you've lost the plot entirely.
Look, if you want to be the New Age Nostradamus and make wacky “forecasts” for events half a millennium from now, that’s your choice, I can’t stop you.
But claiming that a climate forecast for 2650 is science, especially a “forecast” with an unspecified “moving average” of unknown length? Don’t make me laugh.
IF IT IS NOT FALSIFIABLE IT IS NOT A FORECAST!! Again I say, think of it as a bet that you really, really don't want the other guy to be able to weasel out of. If you say "it will be warm tomorrow", can he get out of paying you? Sure … you didn't say how warm. If you say "it will rain tomorrow" he can say "you didn't say how much", and if you said how much, he can say "you didn't say where".
And as a result, you need NUMBERS, NUMBERS, NUMBERS. Even if you are forecasting what you call "general trends", you still need numbers: how big will the trends be, how will they be calculated, what data will you use, what are the starting and ending points? Saying a "moving average anomaly" says nothing, Norman. How long an average? Gaussian average or regular? Centered average or trailing?
As I said, the bad news is that what you’ve done to date in your 30-year so-called “forecasts” is mumble and wave your hands … so you’ll have to throw it all out. Standing around claiming your existing “forecasts” are good will only cause people to point and laugh. Throw them out and start over with new ones, ones with too many numbers if anything, forecasts that are solidly defined, and thus falsifiable.
Because asking people about what their forecasts are for 2650 … that goes nowhere.
w.

September 27, 2013 9:36 am

Matthew, you and Willis are perfectly entitled to believe that you can't predict sufficiently accurately to matter. I beg to differ as far as I am concerned, and I have made a prediction which I think is testable within a usefully short time frame. See my comment at 9/26 3:57 pm:
"You ask for falsifiability. My forecasts would be seriously in question if there is not 0.15–0.2 degrees of cooling in the global SSTs by 2018-20."

RC Saumarez
September 27, 2013 9:37 am

Patterson.
Thanks, I'm not trying to be bloody-minded, and I've never used SSA. The problem, as I see it, is that we are used to thinking of "orthogonal components" in a signal as being orthogonal to each other, because this is so conditioned into our thinking (or at least mine!) from Fourier, Z transforms etc., and I made the intellectually lazy jump of assuming that PCA orthogonality necessarily implied component orthogonality in the strict inner product sense.
The problem arises from the vector space into which you project the signal; usually this is an orthonormal space. However, the lagged covariance space is not orthonormal, and the basis of this space and projection into it doesn't necessarily form a mapping into an orthonormal signal space. I've looked a little at the basis of SSA, but frankly I think I would have to do an MPhil on the subject to really get my head around it!
However, we can construct an argumentum ad absurdum, as Lord Monckton would probably phrase it. If we take a signal that is based on orthonormal components:
f(t) = cos(at) + cos(2at)
this will presumably separate into 2 cosine components that are orthonormal. If we then take a signal that is the sum of a pulse and a triangle of different lengths, SSA will identify these, but they are not orthogonal signals. I.e., SSA does not necessarily produce a set of orthogonal basis signals.
This is the problem with decomposition of a temperature signal. If the "natural" and "anthropogenic" signals are correlated, will they necessarily be identified by SSA?
I don't know about you, but I went into quite a lot of maths when learning signal processing, but then used the techniques and treated the theory as "rules of thumb". I've found that I've made a few clangers and then had to go and dig into the theory, which I hadn't understood as well as I should. Unfortunately, as I get older, this gets more difficult!
Cheers,
Richard Saumarez

September 27, 2013 10:02 am

Willis, it is not a great stretch to propose that the period from about 2000–3000 will see a quasi-repeat of the trends from 1000–2000; see Figs 6 and 7 of the post at http://climatesense-norpag.blogspot.com
That is a useful and reasonable working hypothesis. Much better than the IPCC CO2-driver claim.
There are several ways of projecting the trends forward. The numbers I give are just one way of providing very reasonable ballpark estimates.
I think your problem (and that of many of the other skeptics too) is that you can't believe that forecasting the general trends of climate over the next several centuries can be so obvious, simple and commonsensical.
I agree we need to understand the mechanisms, and many of your posts are very illuminating in that regard. I especially like your post at 26/9:42.
Are you familiar with http://www.happs.com.au/images/stories/PDFarticles/TheCommonSenseOfClimateChange.pdf
I think there is a great deal there that you would find interesting re mechanisms and teleconnections.

September 27, 2013 10:14 am

Willis, one more thing: if you showed me data that disagreed with my forecast, I would have no problem admitting it. There is no arguing with a dry hole.

milodonharlani
September 27, 2013 10:24 am

Leo Smith says:
September 27, 2013 at 3:31 am
I did neglect to point out that it took almost 1500 years to falsify Ptolemy's epicycles, both for institutional reasons (adherence to a ruling scientific paradigm & religious dogma) & for lack of the needed instrument (the telescope with which to observe the phases of Venus), but only because I didn't think it necessary. Ptolemy had the right number of orbits for the then known planets, but made the mistake of placing the sun where the earth should be among them. The orbits however featured epicycles upon them in order to compensate for their being circular instead of elliptical. The earth was also offset slightly from the center of the presumed concentric spheres.
I agree with your conclusion, correct me if I’m misstating it, that cycles can be observed in climate, even if humans haven’t yet figured out what causes most of them. It appears to me that they are observable on decadal, centennial, millennial, myriadal (if that’s a word) time scales, as well as the better-explained 100,000 year order of magnitude Milankovitch cycles. A case can also IMO be made for longer climatic cycles on the four orders of magnitude from a million to a billion years.
Since Milankovitch cycles are strongly supported & well explained, why not longer & shorter ones? Discussions & explorations of possible explanations for them have been suppressed by the presently prevailing ideological paradigm & politico-religious dogma of CACA.

September 27, 2013 10:24 am

Matthew R Marler;
davidmhoffer's argument is that there is so much more water vapor than CO2, even after doubling CO2, that the increased downwelling of IR at the surface can't be very much. Indeed, no one has argued that it is very much: the predicted equilibrium effect is only 1/2% (1.3K/288K), but a case that it is 0 is not complete, and not believable.
>>>>>>>>>>>>>>>>>>>
I never said it was 0; in fact I've been very active in this forum debunking the claims of those who say it is. What I was trying to get at is that the commonly quoted CO2 doubling = 3.7 W/m2 ~ 1.2 deg C is not what you think it is. That calculation has nothing to do with either surface temperature or surface forcing. IPCC AR4 WG1 kinda glosses this over and refers you back to AR3 (links to which I don't have handy), but here's the basic physics.
The Stefan-Boltzmann Law is that P(W/m2) = 5.67×10^-8 × T^4
with T in degrees K.
So run the numbers. The average temperature of the earth's surface is commonly given as 15 C or 288 K. If you add 3.7 W/m2 to the emission at 288 K you get an increase of 0.68 degrees … not 1.2 degrees. So where does the 1.2 degrees come from? Glad you asked.
The "effective black body temperature" of earth is about -20 C or 253 K. That's the temperature of earth as seen from space. It isn't the temperature of earth at the surface, nor is it the temperature of earth at the Top of Atmosphere (TOA). It's the temperature somewhere in between, roughly at the Mean Radiating Level (which is a lengthy discussion unto itself). So let's add 3.7 W/m2 to the emission at 253 K, run the numbers through SB Law, and we get … 1.0 degrees.
So we now have two numbers for sensitivity to CO2 doubling, one at the surface and one at the effective black body temperature of earth. Which one is correct? Answer: NEITHER.
Doubling of CO2 changes the effective black body temperature of earth by precisely 0. What it changes is the altitude at which the effective black body temperature of earth occurs. Now it gets messy from there. If the atmosphere were uniform in composition, we could probably extrapolate some linear function from the MRL to arrive at surface forcing in W/m2 and temp change, but the atmosphere ISN'T uniform. You've got thousands of ppm of water vapour at low altitude, and only dozens at high altitude.
Further, the 3.7 W/m2 number doesn't exist at any given point in the atmosphere in the first place. It is calculated from the sum of all downward LW emissions that otherwise would not have existed, from the surface up to the TOA. So it doesn't exist as an energy flux in the traditional sense; it is smeared across the atmospheric air column, and it has to be put in the context of the energy flux that already existed before CO2 doubled. With low altitude water vapour running in the 30,000 ppm+ range, an extra dose of CO2 is tiny in terms of effective surface forcing … which is what the graph I linked to in the first place shows.
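(A quick numerical check of the two sensitivity figures above; this is just the Stefan-Boltzmann arithmetic from the comment, nothing more.)

    # Stefan-Boltzmann: P = sigma * T^4
    SIGMA = 5.67e-8  # W m^-2 K^-4

    def delta_T(T, dF=3.7):
        """Warming needed to raise blackbody emission at T by dF W/m^2."""
        return ((SIGMA * T ** 4 + dF) / SIGMA) ** 0.25 - T

    print(delta_T(288.0))  # ~0.68 K at the 288 K surface
    print(delta_T(253.0))  # ~1.0 K at the 253 K effective radiating level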

Matthew R Marler
September 27, 2013 12:05 pm

davidmhoffer: Further, the 3.7 W/m2 number doesn't exist at any given point in the atmosphere in the first place. It is calculated from the sum of all downward LW emissions that otherwise would not have existed, from the surface up to the TOA. So it doesn't exist as an energy flux in the traditional sense; it is smeared across the atmospheric air column, and it has to be put in the context of the energy flux that already existed before CO2 doubled. With low altitude water vapour running in the 30,000 ppm+ range, an extra dose of CO2 is tiny in terms of effective surface forcing … which is what the graph I linked to in the first place shows.
I agree it’s not a uniform value in space or time.
I agree the increased surface downwelling IR caused by doubling CO2 is tiny: the projected effect from equilibrium calculations is an increase of approximately 0.5% in the hypothetical equilibrium temperature. Realistically, I don't believe the equilibrium calculations tell us what we want to know, and the induced surface change is no more uniform in space or time than the radiation change.
Back to my original question: given that the increase in downwelling IR is non-uniform and tiny, does the effect differ among water, wet ground, and dry ground? Is it possible for a tiny increase in downwelling IR on the Equator (say) in the central Pacific, in summer (or winter, or always), to increase the water vapor without raising the surface temperature? With 70% of the earth's surface being ocean, it would seem that knowing this is a requirement for even a first calculation of the transient effect of doubling CO2 concentration.

Editor
September 27, 2013 2:26 pm

Dr Norman Page says:
September 27, 2013 at 10:14 am

Willis, one more thing: if you showed me data that disagreed with my forecast, I would have no problem admitting it. There is no arguing with a dry hole.

You truly don’t seem to get it. Your so-called “forecasts” are so vague that it is nearly impossible for the data to disagree with them.
I give up. Wave your hands and say "tomorrow will be kinda like today" … or in your words, "Willis, it is not a great stretch to propose that the period from about 2000–3000 will see a quasi-repeat of the trends from 1000–2000".
What is a “quasi-repeat”? There’s no “there” there in that statement. It’s pure handwaving, and is meaningless. What data could disagree with a forecast of a “quasi-repeat”?
I've told you what you need: numbers and specificity. Until you provide them, you'll be just another crank Nostradamus wannabe. I can't seem to dent your armor, so I give up. The field is yours; you've left me behind.
w.

September 27, 2013 2:30 pm

The recent post on climate sensitivity by Willis Eschenbach persuaded me that I should include the pre-1900 data in the analysis. I've done so here. It confirms the result above, and adds a new wrinkle that lends credence to the comments here that even the 0.6 degC/century change in slope cannot be attributed to AGW.

September 27, 2013 5:04 pm

Matthew R Marler;
Given that the increase in downwelling IR is non-uniform and tiny, does the effect differ among water, wet ground, and dry ground?
>>>>>>>>>>>>>>>>>
Yes. Still water will absorb LW pretty much 100% in the first few microns, causing the water to be vapourized. Not much of the ocean is still, however; there are waves, flotsam, lotsa rain, etc., which complicates the matter, but my current understanding is that this would be the dominant process. For ground, on the other hand, the dominant process would be to absorb the LW, raising the temperature of the ground surface. Either way you have additional energy at the surface/atmosphere interface, and where it goes from there gets pretty complicated.

September 27, 2013 5:47 pm

Willis, not much point in carrying this further. I think forecasting a HadSST3 5 year moving average of minus 0.5 at 2035 and minus 0.5 at 2100 is pretty precise, and is clearly distinguished and distinguishable from e.g. the IPCC warming trend; what more would you expect at this time? Also pointing out the possibility of continued cooling until 2650 is perfectly reasonable looking at the Christiansen data set and the current state of solar activity.
Enjoyed your English travel bit; I'm from Liverpool originally.

Editor
September 27, 2013 6:48 pm

davidmhoffer says:
September 27, 2013 at 5:04 pm

Matthew R Marler;

Given that the increase in downwelling IR is non-uniform and tiny, does the effect differ among water, wet ground, and dry ground?

>>>>>>>>>>>>>>>>>
Yes. Still water will absorb LW pretty much 100% in the first few microns, causing the water to be vapourized. Not much of the ocean is still, however; there are waves, flotsam, lotsa rain, etc., which complicates the matter, but my current understanding is that this would be the dominant process. For ground, on the other hand, the dominant process would be to absorb the LW, raising the temperature of the ground surface. Either way you have additional energy at the surface/atmosphere interface, and where it goes from there gets pretty complicated.

Again, the numbers don't work. Evaporation is about 80 W/m2. Downwelling IR averages 330 W/m2. Much evaporation is driven by the sun. That means that 90% of the DLR worldwide is not going to evaporation, but to warming the surface.
In both the land and the ocean, the IR is absorbed in the first few microns. There is an urban legend that the IR is transferred down into the land, but not into the ocean … I don't believe that in the slightest. What's stopping it in the ocean but not in the land? See the four questions in my post "Radiating the Ocean". If you can't answer all four of them, your radiation theory is in trouble.
w.

Editor
September 27, 2013 7:05 pm

Dr Norman Page says:
September 27, 2013 at 5:47 pm

… I think forecasting a HadSST3 5 year moving average of minus 0.5 at 2035 and minus 0.5 at 2100 is pretty precise, and is clearly distinguished and distinguishable from e.g. the IPCC warming trend; what more would you expect at this time?

OK … suppose we get to 2038, and for the year 2035 the trailing 5 year average is -0.4, and, because temperatures continued to drop, the centered 5 year average for 2035 is -0.6 … is your forecast right or wrong? And more importantly, what is the baseline for the temperature? 1951-1980? 1981-2010?
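(To make the ambiguity concrete, a tiny Python illustration on made-up anomalies, not HadSST3.)

    import numpy as np

    # Hypothetical, steadily cooling anomalies for 2031-2040.
    years = np.arange(2031, 2041)
    anom = np.linspace(-0.30, -0.70, years.size)

    i = int(np.where(years == 2035)[0][0])
    trailing = anom[i - 4:i + 1].mean()  # mean of 2031-2035
    centered = anom[i - 2:i + 3].mean()  # mean of 2033-2037
    # Two different "2035 anomalies" from the same data:
    print(round(trailing, 2), round(centered, 2))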

Also pointing out the possibility of continued cooling until 2650 is perfectly reasonable looking at the Christiansen data set and the current state of solar activity.

It is so far from falsifiable that it has no meaning at all. What's the point of such far-fetched and unverifiable speculation? Stick to what we can check; even 2100 is way too far out.

Enjoyed your English travel bit; I'm from Liverpool originally.

Thanks kindly. I had a good time in Liverpool, I liked the feeling of the city.
w.

September 27, 2013 7:29 pm

In both the land and the ocean, the IR is absorbed in the first few microns.
>>>>>>>>>>>>>>>>
Agreed. But the land doesn't evaporate. And I said "still water", and that wave action and other factors change the equation. And I said that what happens afterward gets very complicated in both scenarios. You've now got water vapour being generated that is in thermal contact with the water surface. So yes, I would also argue that the net effect is warming of both, but that wasn't the question. The question was: is there a difference between how LW interacts with water vs land? And the answer is yes.

Matthew R Marler
September 28, 2013 10:03 am

Willis: Again, the numbers don't work. Evaporation is about 80 W/m2. Downwelling IR averages 330 W/m2. Much evaporation is driven by the sun. That means that 90% of the DLR worldwide is not going to evaporation, but to warming the surface.
Thanks for the comments, but my question is about what happens if DWIR is added to the DWIR already present. I appreciate that the mechanics are complicated. My wonder was prompted by boiling water: if you increase the flame before the water starts boiling, the temperature of the water increases; if you increase the flame after the water starts boiling, you increase the vaporization rate. Obviously the situation is different at ocean and lake surfaces, where you have chaotic mixing of the surface, wind and spindrift and such, and no literal "boiling".
"Equilibrium" calculations treat the surface of the Earth as flat, uniform in surface texture, and uniformly insolated. Equilibria are quite rare in high dimensional non-linear dissipative systems (even on flat, uniform surfaces with uniform input), so the equilibrium calculations are a priori suspect. With a round, non-uniform surface non-uniformly insolated, I think that it is impossible to predict on present knowledge what a doubling of CO2 concentration will actually produce; and if there is an equilibrium, it will only appear after the transient processes have been long underway. With water on 70% of the Earth's surface and large wet land regions, that strikes me as a serious known unknown.
As I hinted, I suspect that in warm wet regions (N. Pacific summer), increases of CO2 may increase vaporization with much less increase in temperature than has been calculated; in cold dry regions, or hot dry regions, I expect the balance of temperature change to vaporization change to be different.
Thanks again for the interchange.

Bart
September 28, 2013 11:09 am

FTA:
'As Monk would say, "Here's what happened."'
That Monk is a smart guy. It's been pretty obvious, actually, for a long time that the climate modelers conflated the natural cyclical upswing with a sudden anthropogenic rise. But the fact that the rise from approximately 1970-2000 was almost precisely the same as the rise from 1910-1940 gave the game away.
"This analysis shows that the real AGW effect is benign and much more likely to be less than 1 °C/century than the 3+ °C/century given as the IPCC's best guess for the business-as-usual scenario."
It's actually pretty obvious from other data that it must be even less than that, and effectively zero. If we look at the relationship between CO2 and temperatures, it is apparent, to a very high degree of fidelity, that
dCO2/dt = k*(T – Teq)
CO2 = atmospheric concentration
k = sensitivity factor
T = global temperature anomaly
Teq = equilibrium temperature
k and Teq are parameters for a 1st order fit. They may change over time, but are well represented by constants for the modern era since 1958 when precise measurements of CO2 became available.
This is a positive gain system: an increase in temperatures produces an increase in CO2 concentration. If we now presume that there is also a positive feedback from CO2 back to temperature, we get a positive feedback loop, which would be unstable.
There are other negative feedbacks, e.g. the T^4 radiation of heat. But to maintain stability, these would have to be dominant, in which case the overall effect of CO2 on temperature would be negligible anyway. All roads lead to Rome: whatever the overall system response is, it must be such that the effect of CO2 on temperatures is effectively nil.
Now, a note on how the relationship above comes about. Atmospheric CO2 obeys a partial differential diffusion equation. The interface with the oceans sets boundary conditions. The boundary condition can be considered to obey something akin to Henry’s law (buffering processes complicate the actual relationship)
CO2(boundary) = Kh*CO2_Oceans(boundary)
The derivative of this is
dCO2(boundary)/dt = dKh/dt*CO2_Oceans(boundary) + Kh*dCO2_Oceans(boundary)/dt
Kh is a function of temperature, and thus can be expanded to first order as
Kh = Kh_eq + Kh_partial*(T – Teq)
where Kh_partial is the partial derivative of Kh with respect to temperature. The oceans have been a net source of CO2 to the atmosphere. Assuming the ocean source terms are dominant, then
dCO2(boundary)/dt := (Kh_partial*dCO2_Oceans(boundary)/dt) * (T – Teq)
which is the form of the equation above with
k = Kh_partial*dCO2_Oceans(boundary)/dt
In words, the influx of CO2 from the oceans produces a temperature dependent pumping action into the atmosphere.
The full dynamics are an atmospheric diffusion equation, with ocean boundary conditions as above, as well as a boundary condition with the land, which establishes a flow from the atmosphere into the minerals and biota of the land, and an outflow from anthropogenic release of latent CO2. This is vastly simplified, of course, as the oceans contain their own biota and other CO2 absorbing processes. So, rather than a strict division into oceans and land, there is some overlap between the two reservoirs. In any case, though I have not yet worked out the details, it is clear where all this is heading. A very simplified ODE system model is
dCO2/dt = (CO2eq – CO2)/tau + H
dCO2eq/dt = k*(T – Teq)
CO2 = atmospheric CO2
CO2eq = equilibrium CO2 established by the oceanic boundary condition
H = human inputs
tau = a time “constant”
The equilibrium CO2 is established by the interface with the oceans, and is relentlessly driven upward by temperatures above the equilibrium level. These feed into the atmospheric diffusion equation, which is being driven by human inputs, but is also being depleted by natural sinks which react in proportion to the CO2 level above equilibrium.
If “tau” is short, then H will be dramatically attenuated and have little overall effect, and CO2 will track CO2eq. The actual dynamics are undoubtedly much more complicated, and “tau” would be more precisely modeled as an operator-theoretic value which smooths the CO2 differential, leading to a “long tail” response, though not too long in the most significant components, as the data show that human inputs are being fairly rapidly sequestered.
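A minimal sketch of this two-state model (Euler-stepped, with made-up values for k, tau, and Teq; only the structure, not the numbers, follows the argument):

import numpy as np

def simulate(temp, h, k=2.0, tau=5.0, teq=0.0, dt=1.0/12.0, c0=315.0):
    # temp: temperature anomaly at each step; h: human input in ppmv/yr
    co2 = np.full(len(temp), c0)
    co2eq = np.full(len(temp), c0)
    for i in range(1, len(temp)):
        # dCO2eq/dt = k*(T - Teq)
        co2eq[i] = co2eq[i-1] + k * (temp[i-1] - teq) * dt
        # dCO2/dt = (CO2eq - CO2)/tau + H
        co2[i] = co2[i-1] + ((co2eq[i-1] - co2[i-1]) / tau + h[i-1]) * dt
    return co2, co2eq

With a short tau, co2 hugs co2eq and the contribution of h is strongly attenuated, as described above.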
But, this is effectively what the data show is happening. There really is no doubt about it. And, because of the positive feedback effect noted above, CO2 concentration cannot have a significant effect on temperature, because otherwise we would already have maxed out at some enormous level of CO2 and exceedingly high temperatures eons ago.

Bill from Nevada
September 29, 2013 7:49 pm

Amateurs attempting to make a phase-change refrigerant into a heater are so hilarious.

Tim Folkerts
September 30, 2013 2:16 pm

Bart says:
"it is apparent to a very high degree of fidelity that
dCO2/dt = k*(T – Teq)"

One big problem here is that the CO2 levels are averaged over 12 months. So what this relationship shows is that the current temperature anomaly is correlated to the combination of the LAST 6 months of CO2 and the NEXT 6 months of CO2. In other words, there is no way to know from this relationship whether it is CO2 driving temperature or temperature driving CO2. So the conclusion that "an increase in temperatures produces an increase in CO2 concentration" is pure speculation with this data.
A better way to analyze this would be to first compare the correlation of temperature with the NEXT 12 months of CO2 and then with the PREVIOUS 12 months of CO2 and see which is better.
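A sketch of that test (hypothetical pre-aligned monthly arrays; correlating temp[i] with the following year's CO2 change probes temperature-drives-CO2, correlating temp[i+12] with the same change probes CO2-drives-temperature):

import numpy as np

def lead_lag(co2, temp):
    d12 = co2[12:] - co2[:-12]   # CO2 change over each 12-month span
    r_t_first = np.corrcoef(temp[:-12], d12)[0, 1]   # temperature precedes the change
    r_co2_first = np.corrcoef(temp[12:], d12)[0, 1]  # change precedes the temperature
    return r_t_first, r_co2_first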

Ferdinand Engelbeen
September 30, 2013 2:46 pm

Bart, you are essentially wrong on several points:
If we now presume that there is a positive feedback between CO2 and temperature, we get a positive feedback loop, which would be unstable.
If the positive feedback is modest, then the system is not unstable; it only gives an extra increase of temperature and CO2. Compare the levels with (fb) and without (nofb) feedback of CO2 on temperature:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/feedback.jpg
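A toy iteration of the loop-gain point (made-up gains, nothing fitted): summing successive passes around a presumed T -> CO2 -> T loop converges for gain below 1 and runs away only at or above 1.

def loop_response(g, steps=200):
    total, pulse = 0.0, 1.0
    for _ in range(steps):
        total += pulse   # contribution of this pass around the loop
        pulse *= g       # next pass is scaled by the loop gain
    return total

print(loop_response(0.5))   # ~2.0: modest feedback, finite extra increase
print(loop_response(1.1))   # enormous: runaway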
If we look at the relationship between CO2 and temperatures
The strong relationship is between the variability of (d)temperature(/dt) and the variability of dCO2/dt, not the slope of dCO2/dt. By fitting the trends with an arbitrary factor and bias, you attribute the whole slope of dCO2/dt to temperature, but the slope is the result of all contributions to the increase, including human emissions.
The oceans have been a net source of CO2 to the atmosphere
Vegetation is a net sink for CO2 (~1 GtC/yr, while humans emit ~9 GtC/yr), based on the oxygen balance. Besides vegetation and oceans, all other known natural sinks are either too small or too slow. The atmospheric increase is ~4 GtC/yr. Some 4 GtC/yr of human emissions (as mass), plus the extra release from the oceans, go where?
In words, the influx of CO2 from the oceans produces a temperature dependent pumping action into the atmosphere.
According to Henry's Law, a temperature increase raises the equilibrium setpoint with the atmosphere by ~16 µatm CO2. Thus an increase of ~16 ppmv in the atmosphere will bring the in- and outfluxes of the ocean-atmosphere system back to what they were before the temperature increase. Starting from a system in dynamic equilibrium:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/upwelling_temp.jpg
The increase of CO2 in the atmosphere both reduces the increased upwelling and increases the downwelling.
If the increase of CO2 was caused by a sudden extra upwelling of extra CO2 from the deep oceans (the “Coke effect”), that would have a similar effect on the balance, as in the case of a temperature increase:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/upwelling_incr.jpg
Temperature changes and upwelling changes act independently of each other and are simply additive:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/upwelling_incr_temp.jpg
Thus the dynamics of the ocean processes prove that a continuous release from the oceans driven by a sustained temperature difference is impossible, as the increased CO2 level in the atmosphere influences both the release and the uptake of CO2 from/into the oceans.
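A minimal sketch of that rebalancing (the ~16 ppmv equilibrium shift is the Henry's-law figure above; the starting level and exchange rate are assumed, illustrative values): a sustained temperature step gives a one-time rise to the new equilibrium rather than continuous pumping.

ppmv_step = 16.0       # equilibrium shift from the sustained warming
exchange_rate = 0.2    # assumed fraction of the imbalance closed per year
co2 = 300.0
co2_eq = 300.0 + ppmv_step   # step the equilibrium once

for year in range(50):
    co2 += exchange_rate * (co2_eq - co2)   # net sea-air flux toward balance

print(round(co2, 1))   # ~316.0: fluxes rebalanced, no further rise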

Bart
September 30, 2013 6:45 pm

Tim Folkerts says:
September 30, 2013 at 2:16 pm
“So what this relationship shows is that the current temperature anomaly is correlated to the combination of the LAST 6 months of CO2 and the NEXT 6 months of CO2.”
The average is applied non-causally, as you say. As a result, it has zero phase. Its only effect is to attenuate higher frequencies, in particular, zeroing out the annual variation so that underlying trends can be observed.
“In other words, there is no way to know from this relationship whether it is CO2 driving temperature or temperature driving CO2.”
No. The derivative relationship establishes it. It would be absurd to argue that the rate of change of CO2 drives temperature. If that were the case, we could boost CO2 up until it was the greater part of the atmosphere, but once we stopped pumping, the temperature would revert to its equilibrium level.
A derivative also provides leading phase. Thus, the overall CO2 concentration, which is the integral of the derivative, lags temperature by 90 deg in phase. This also establishes causality. The cause, temperature, always precedes the effect, CO2 concentration.
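A toy check of that phase claim (purely synthetic data): integrating dCO2/dt = k*T for a sinusoidal T gives a CO2 series peaking a quarter cycle (90 deg) after temperature.

import numpy as np

t = np.linspace(0.0, 4.0, 4000)        # four cycles of unit period
temp = np.sin(2.0 * np.pi * t)         # temperature anomaly
co2 = np.cumsum(temp) * (t[1] - t[0])  # integrate dCO2/dt = T (k = 1)
lag = (np.argmax(co2) - np.argmax(temp)) * (t[1] - t[0])
print(lag)   # ~0.25 of a period, i.e. a 90 deg lag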
Ferdinand Engelbeen says:
September 30, 2013 at 2:46 pm
See my reply back at The Hockey Schtick.

Greg Goodman
October 5, 2013 1:38 am

“Fitting the SST data on the right to a sine wave-plus-ramp model yields a period of ~65 years with the AGW corner at 1966, about where expected by climatologists.”
A more accurate description would be “where the climatologists PUT it”.
http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2
Much of the reason there are two ramps in the 20th century is that Hadley inserted a -0.5 K drop in 1945. The raw data have a lot more downward trend and more variability in the 19th century; the 20th century was a more continuous rise from 1910.
A recent modification rounded off the step change, but it's still there and still as big.
This is what the overall adjustment for hadSST3 looks like:
http://curryja.files.wordpress.com/2012/03/hadsst3-cosine-fit1.png

gordie
October 7, 2013 2:52 am

Marler says:
“…doubling CO2 will not change the (equilibrium) surface temperature.”
One of the early proponents of the idea of GHG warming, John Tyndall, would probably have agreed with you.
Thus:
“It is evident that olefiant gas of 1 inch tension [1/15th of an atmosphere pressure] must extinguish a large proportion of the rays which are capable of being absorbed by the gas, and hence the succeeding measures having a less and less amount of heat to act upon must produce a continually smaller effect.” (Bakerian Lecture, 1861)
This is an acknowledgement of (progressive) saturation of the (primary) process.