Detecting the AGW Needle in the SST Haystack

Guest essay by Jeffery S. Patterson

My last post on this site examined the hypothesis that the climate is dominated by natural, harmonically related periodicities. As they say in the business, the critics were not kind. Some of the criticism stemmed from misunderstandings of the methodology, and some from an underappreciation of the tentativeness of the conclusions, especially with respect to forecasting. With respect to the sparseness of the stochastic analysis, the critics were well founded. That lack of rigor is why this was submitted as a blog post and not a journal paper. Perhaps it served to spark the interest of someone who can do the uncertainty analysis properly, but I have a day job.

One of the commenters suggested I repeat the exercise using a technique called Singular Spectrum Analysis (SSA), which I have done in a series of posts starting here. In this post I turn my attention away from cycles and modeling and toward signal detection: can we find a signature in the temperature data attributable to anthropogenic effects?

Detecting small signals in noisy data is something I am quite familiar with. I work as a design architect for a major manufacturer of test equipment, half of which is dedicated to the task of finding tiny signals in noisy data. These instruments can measure signals on the order of -100 dBm (about 10^-13 W). Detecting the AGW signal should be a piece of cake (tongue now removed from cheek).

Information theory (and a moment’s reflection) tells us that in order to communicate information we must change something. When we speak, we modulate the air pressure around us, and others within shouting distance detect the change in pressure and interpret it as sound. When we send a radio signal, we must modulate its amplitude and/or phase in order for those at the other end to receive any information. The formal way of saying this is that ergodic processes (i.e., processes whose statistics do not change with time) cannot communicate information. Small-signal detection in noise, then, is all about separating the non-ergodic sheep from the ergodic goats. Singular Spectrum Analysis excels at this task, especially when dealing with short time series.

Singular Spectrum Analysis is really a misnomer, as it operates in the time domain rather than the frequency domain, to which the term spectrum normally applies. It allows a time-series to be split into component parts (called reconstructions or modes), sorted in amplitude order, with the mode contributing most to the original time series first. If we use all of the modes, we get back the original data exactly. Or we can choose to use just some of the modes, rejecting the small, wiggly ones, for example, to recover the long-term trend. As you may have guessed by now, we’re going to keep the non-ergodic, information-bearing modes and relegate the ergodic, noisy modes to the dust bin.

SSA normally depends on two parameters: a window length L, which can’t be longer than half the record length, and a mode-selection parameter k (k is sometimes a multi-valued vector if the selected modes aren’t sequential, but here they are). This can make the analysis somewhat subjective and arbitrary. Here, however, we are deconstructing the temperature time-series into only two buckets. Since the non-ergodic components contribute most to the signal characteristics, they will generally be in the first k modes, and the ergodic components will be sequential, starting from mode k+1 and including all L-k remaining modes. Since in this method L only controls how much energy leaks from one of our buckets to the other, we set it to its maximum value to give the finest-grained resolution to the division between our two buckets. Thus our analysis depends only on a single parameter, k, which is set to maximize the signal-to-noise ratio.
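For readers who want to experiment, the two-bucket decomposition described above can be sketched in a few lines of Python. This is a minimal, illustrative SSA (trajectory matrix, SVD, diagonal averaging); the function names and toy usage are mine, not the author's actual code.

```python
import numpy as np

def ssa(x, L=None):
    """Decompose series x into elementary SSA modes (largest-amplitude first).

    L defaults to its maximum value, half the record length, as the post suggests.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    if L is None:
        L = N // 2
    K = N - L + 1
    # Trajectory (Hankel) matrix: lagged windows of the series as columns
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = []
    for i in range(len(s)):
        Xi = s[i] * np.outer(U[:, i], Vt[i])          # rank-1 component
        # Diagonal (anti-diagonal) averaging maps Xi back to a length-N series
        comp = np.array([Xi[::-1].diagonal(k).mean()
                         for k in range(-L + 1, K)])
        modes.append(comp)
    return np.array(modes)

def split_signal_noise(x, k):
    """Signal bucket = sum of first k modes; noise bucket = everything else."""
    m = ssa(x)
    return m[:k].sum(axis=0), m[k:].sum(axis=0)
```

Summing all modes recovers the input exactly; summing only the first k gives the "signal bucket" of Figure 1 and the remainder gives the residual of Figure 2.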

That’s a long time to go without a picture, so here are the results of the above applied to the Northern Hemisphere Sea Surface Temperature data.


Figure 1 – SST data (blue) vs. a reconstruction based on the first four eigenmodes (L=55, k=1-4)

The blue curve is the data and the red curve is our signal “bucket” reconstructed from the first four SSA modes. Now let’s look in the garbage pail.


Figure 2 – Residual after signal extraction

We see that the residual indeed looks like noise, with no discernible trend or other information. The distribution looks fairly uniform, the slight double peak probably due to the fact that the early data is noisier than the more recent data. Remembering that the residual and the signal sum to the original data, and since there is no discernible AGW signal in the residual, we can state without fear of contradiction that any sign of AGW, if one is to be found, must be in the reconstruction built from the non-ergodic modes, plotted in red in Figure 1.
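A simple sanity check that the noise bucket carries no trend is to fit a line to it and confirm the slope is negligible. A sketch on synthetic stand-in noise (the real check would use the actual Figure 2 residual, which I don't have in digital form here):

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1880, 2014)
# Hypothetical stand-in for the Figure 2 residual: zero-mean noise
residual = rng.normal(0.0, 0.08, size=years.shape)

# Least-squares linear trend of the residual
slope, intercept = np.polyfit(years, residual, 1)
# For trend-free noise the fitted slope is tiny compared with the
# ~0.006 deg C/yr slopes discussed later in the post
```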

What would an AGW signal look like? The AGW hypothesis is that the exponential rise in CO2 concentrations seen since the start of the last century should give rise to a linear temperature trend impressed on top of the climate’s natural variation. So we are looking for a ramp, or equivalently a step change in the slope of the temperature record. Here’s an idea of what a trendless climate record (with the natural variation and noise similar to the observed SST record) might look like, with (right) and without (left) a 4 °C/century AGW component. The four curves on the right represent four different points in time where the AGW component first becomes detectable: 1950, 1960, 1970 and 1980.
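The toy record of Figure 3 can be sketched as follows. The amplitudes, the 65-year period, and the noise level are my assumptions, loosely matched to the post's description, not the author's exact simulation; the ramp is hinged at a chosen onset year, zero before it and linear after.

```python
import numpy as np

def toy_climate(years, ramp_per_century=0.0, onset=1950, seed=0):
    """Quasi-periodic 'natural variation' plus noise, plus an optional AGW ramp."""
    rng = np.random.default_rng(seed)
    t = np.asarray(years, dtype=float)
    natural = 0.2 * np.sin(2 * np.pi * (t - t[0]) / 65.0)   # ~65-yr cycle
    noise = rng.normal(0.0, 0.08, size=t.shape)
    # Hinged ramp: zero before the onset year, linear afterwards
    agw = (ramp_per_century / 100.0) * np.clip(t - onset, 0.0, None)
    return natural + noise + agw

years = np.arange(1880, 2014)
no_agw = toy_climate(years)                      # Figure 3, left panel
with_agw = toy_climate(years, 4.0, onset=1970)   # one of the right-hand curves
```

With the same seed, the two series differ by exactly the hinged 4 °C/century ramp, which is what makes the visual comparison in Figure 3 meaningful.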


Figure 3 – Simulated de-trended climate record without an AGW component (left) and with 4 °C/century AGW components (right)

Clearly a linear AGW signal of the magnitude suggested by the IPCC should be easily detectable within the natural variation. Here’s the real de-trended SST data. Which plot above does it most resemble?


Figure 4 – De-trended SST data

Note that for the “natural variation is temporarily masking AGW” meme to hold water, the natural variation during the AGW observation window would have to be an order of magnitude higher than that which occurred previously. SSA shows that not to be the case. Here is the de-trended signal decomposed into its primary components (note: for reasons too technical to go into here, SSA modes occur in pairs; the two signals plotted below include all four signal modes, combined as pairs).


Figure 5 – Reconstruction of SSA modes 1,2 and 3,4

Note the peak-to-peak variation has remained remarkably constant across the entire data record.

OK, so if it’s not 4 °C/century, what is it? Remember we are looking for a change in slope caused by the AGW component. The plot below shows the slope of our signal reconstruction (which contains the AGW component, if any), over time.


Figure 6 – Year-to-year difference of reconstructed signal

We see two peaks, one in 1920 well before the effects of AGW are thought to have been detectable and one slightly higher in 1995 or so. Let’s zoom in on the peaks.


Figure 7 – Difference in signal slope potentially attributable to AGW

The difference in slope is 0.00575 °C/year, or ~0.6 °C/century. No smoothing was applied to the first-difference plot above, as would normally be required, because the noise component that makes smoothing necessary has already been removed.
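The quantity plotted in Figure 6 is just the year-to-year first difference of the reconstruction, and the quoted number is a unit conversion. A sketch (the series below is a hypothetical stand-in, and the 0.00575 figure is taken from the text):

```python
import numpy as np

# Year-to-year slope (first difference) of a smooth reconstructed signal;
# this series is an illustrative stand-in for the Figure 6 input.
years = np.arange(1880, 2014)
recon = 0.2 * np.sin(2 * np.pi * (years - 1880) / 65.0) + 0.003 * (years - 1880)
slope = np.diff(recon)              # deg C per year; no smoothing needed

# The unit conversion behind the quoted number:
delta_slope = 0.00575               # deg C/year, read off Figure 7
per_century = delta_slope * 100.0   # ~0.575, i.e. roughly 0.6 deg C/century
```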

Returning to our toy climate model of figure 3, here’s what it looks like with a 0.6 °C/century slope (left), with the de-trended real SST data on the right for comparison.


Figure 8 – Simulated de-trended climate record with 0.6 °C/century linear AGW components (see Figure 3 above), left; de-trended SST (northern hemisphere) data, right

Fitting the SST data on the right to a sine-wave-plus-ramp model yields a period of ~65 years with the AGW corner at 1966, about where climatologists expect it. The slope of the AGW fit? 0.59 °C/century, arrived at completely independently of the SSA analysis above.
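A sine-plus-ramp fit of this kind can be sketched with SciPy's nonlinear least squares. This is my illustration of the model class, not the author's fitting code; the synthetic data below stands in for the de-trended SST series, and the ramp is hinged at the corner year t0.

```python
import numpy as np
from scipy.optimize import curve_fit

def sine_plus_ramp(t, amp, period, phase, slope, t0, offset):
    """Sine plus a linear ramp that turns on at corner time t0 (the 'AGW corner')."""
    return (amp * np.sin(2 * np.pi * t / period + phase)
            + slope * np.clip(t - t0, 0.0, None) + offset)

# Synthetic stand-in for the de-trended record, in years since 1880
t = np.arange(0.0, 134.0)
true_params = (0.2, 65.0, 0.3, 0.0059, 86.0, 0.0)   # corner at t0=86, i.e. 1966
y = sine_plus_ramp(t, *true_params)

# Fit with reasonable starting guesses for the six parameters
p0 = [0.1, 60.0, 0.0, 0.005, 80.0, 0.0]
popt, _ = curve_fit(sine_plus_ramp, t, y, p0=p0, maxfev=5000)
```

On the real data the fitted period, corner year, and ramp slope are the quantities compared against the SSA result above.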

Conclusion

As Monk would say, “Here’s what happened”. During the global warming scare of the 1980s and 1990s, the quasi-periodic modes comprising the natural temperature variation were both in their phase of maximum slope (see Figure 5). This naturally occurring phenomenon was mistaken for a rapid increase in the persistent warming trend and attributed to the greenhouse gas effect. When these modes reached their peaks approximately 10 years ago, their slopes abated, resulting in the so-called “pause” we are currently enjoying. This analysis shows that the real AGW effect is benign and much more likely to be less than 1 °C/century than the 3+ °C/century given as the IPCC’s best guess for the business-as-usual scenario.

125 Comments
Matthew R Marler
September 26, 2013 1:03 pm

davidmhoffer: Radiative forcing from CO2 runs into a layer of water vapour close to surface that absorbs and re-radiates it.
How close to the surface? Downwelling IR is measured by the TAO buoys.

September 26, 2013 1:04 pm

Jeff you say
” Given that, we also don’t know what the shape and nature of the natural variation is … except that it is chaotic.”
The climate system is not chaotic on any meaningful time scale. The global temperature has been within narrow limits for at least 500 million years. The latest ephemerides are good back to about 45 million years before the decimal points in the planetary positions diverge chaotically. If you want to see the shape or pattern of the natural variation see Figs. 5, 6, and 7 of the last post at http://climatesense-norpag.blogspot.com

DesertYote
September 26, 2013 1:12 pm

Jeff Patterson says:
September 26, 2013 at 12:32 pm
###
My example was unclear. I was trying to describe the environment within a typical shopping mall constructed using steel girders, not a Faraday cage.

Nancy C
September 26, 2013 1:23 pm

Sorry for off topic, but Merrick, dBm is usually used to reference power into a specific 600 ohm load. So it’s usually understood that 0 dBm = 0.001 W, which also corresponds to 0.775 volts (0.775 × 0.775/600 = 0.001). You’re right, technically 0 dBm is not 0.775 volts. On the other hand, if you say you’re measuring a -100 dBm signal, most likely you’re actually measuring its voltage (0.775/100,000) and then calling it power because you know it’s going into a “fixed” load. How accurately you can measure the power depends on how accurately you can measure the voltage, and since power is a function of the square of voltage, it would be silly to use the number of decimal places in the power to say what the resolution is. Like saying if I can measure length with a certain ruler to 1 part in 1000, I must be able to measure area with that same ruler to 1 part in 1000 × 1000. I think Jeff’s updated response reflects resolution in terms of voltage not power, which seems correct to me assuming the other numbers are correct. But sorry again for even bringing it up in the first place.
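For reference, the dBm-to-voltage arithmetic Nancy describes works out as below (a quick sketch; the 600 ohm load is the audio-industry convention she mentions):

```python
import math

def dbm_to_volts(dbm, load_ohms=600.0):
    """RMS voltage across a given load for a power level in dBm (referenced to 1 mW)."""
    watts = 1e-3 * 10 ** (dbm / 10.0)   # dBm -> watts
    return math.sqrt(watts * load_ohms) # P = V^2/R  =>  V = sqrt(P*R)

v_ref = dbm_to_volts(0)       # ~0.7746 V: the classic 0 dBm / 600 ohm level
v_tiny = dbm_to_volts(-100)   # ~7.75e-6 V, i.e. 0.775 V / 100,000
```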

John Trigge
September 26, 2013 1:28 pm

None of these analyses are of any value unless performed by a ‘climate scientist’. As has been shown with statistics, only their results are of value, no matter how skewed, incorrect, misguided (or wrong) they are.

John Trigge
September 26, 2013 1:28 pm

Please forgive my ‘end sarc’ not appearing.

RC Saumarez
September 26, 2013 1:35 pm

This is an interesting post.
I have never used SSA and have just read an account of its mathematics. I have two reservations about the conclusions presented here.
1) SSA is an arbitrary linear decomposition of the lagged covariance matrix. At first sight, a ramp due to anthropogenic influences might be expected to be represented as an eigenvector. However, if the pCO2 has been rising approximately exponentially, there would be expected to be a linear increase in forcing. Since the first few components of the decomposition account for much of the shape of the temperature curve, how can one be certain that an anthropogenic component is not represented in the eigenvalue of the most linear component rather than a separate orthogonal component? I realise that you have done simulations with a ramp, but if this has a distinctly different time of onset, I am not certain that this is a sufficient test to eliminate the anthropogenic component.
2) The decomposition is linear and therefore it is not clear how multiplicative effects would be represented. For example, considering the effects of aerosols and/or dust, one could assume that these are linear, superimposable forcings. However, if they have a multiplicative effect on the total energy in the system, then trying to detect their contribution to the temperature signal is far more difficult and would involve a homomorphic deconvolution, which given the data would be extraordinarily difficult. This is a more general comment, but one could argue that given the non-linear processes in climate, the anthropogenic component may not be detectable without non-linear modelling. (I am sure that there are those who would say this.)
I’m happy to be shown to be wrong on this

September 26, 2013 1:36 pm

Dr Norman Page says:
September 26, 2013 at 1:04 pm
Jeff you say
” Given that, we also don’t know what the shape and nature of the natural variation is … except that it is chaotic.”
=======================================
That was not my quote but rather it was from Willis Eschenbach September 26, 2013 at 7:55 am

September 26, 2013 1:43 pm

Jeff Sorry- consider the comment readdressed to Willis .

September 26, 2013 1:51 pm

@RC Saumarez September 26, 2013 at 1:35 pm
SSA can be subjective, but as the article points out, this particular use of it is not. L is set to its max value to give the highest spectral resolution; k is set for best SNR.
One of the nice things about SSA is that any line can be transparently subtracted from the data and simply be added back in point-by-point to the trend mode if desired. As pointed out, the analysis is done on the de-trended data. Removing the line of regression doesn’t affect the slope analysis, as it is a common mode.
Regards,
Jeff

Ursus Augustus
September 26, 2013 2:19 pm

So basically a bunch of desert dwellers finally made it to the coast at around low tide and completely freaked out as they saw the sea level rising and by mid tide became quite completely hysterical. They turned on those amongst them who looked at the signs on the beach that shouted out a cyclical phenomenon. Come high tide they are still chanting their chants, dancing their sacred dances and threatening to sacrifice the rationalists to the sea god/demon. They even blame the rationalists for causing the tide to come in by starting a cooking fire using driftwood they now believe to be sacred and cursed.

Editor
September 26, 2013 2:31 pm

davidmhoffer says:
September 26, 2013 at 10:45 am

Matthew R Marler;

The 3.7W/m^2 of forcing that results from CO2 doubling falls mostly on water and other wet surfaces

>>>>>>>>>>>>>>>>>>
Actually it doesn’t; it never even makes it to the surface for the most part. Radiative forcing from CO2 runs into a layer of water vapour close to the surface that absorbs and re-radiates it. Very little gets to the surface. You need not believe me, I’m just citing the IPCC explanation of the difference between radiative forcing and surface forcing:
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-2-23.html

I see absolutely nothing in the citation about what you claim. Here’s what it says:

Figure 2.23. Globally and annually averaged temporal evolution of the instantaneous all-sky RF (bottom panel) and surface forcing (top panel) due to various agents, as simulated in the MIROC+SPRINTARS model (Nozawa et al., 2005; Takemura et al., 2005). This is an illustrative example of the forcings as implemented and computed in one of the climate models participating in the AR4. Note that there could be differences in the RFs among models. Most models simulate roughly similar evolution of the LLGHGs’ RF.

Your citation says nothing, zero, about longwave radiation somehow being captured and re-radiated by a layer of water vapor just above the surface.
The downwelling longwave is indeed absorbed and re-radiated on its way downwards. It does reach the earth, however. Remember that after each absorption, it is eventually re-radiated half upwards and half downwards. Also, a similar process is happening with the upwelling longwave radiation.
w.

Editor
September 26, 2013 2:53 pm

Dr Norman Page says:
September 26, 2013 at 1:04 pm

Jeff you say

Given that, we also don’t know what the shape and nature of the natural variation is … except that it is chaotic.”

The climate system is not chaotic on any meaningful time scale. The global temperature has been within narrow limits for at least 500 million years. The latest ephemerides are good back to about 45 million years before the decimal points in the planetary positions diverge chaotically. If you want to see the shape or pattern of the natural variation see Figs 5,6,and 7 of the last post at http://climatesense-norpag.blogspot.com

Thanks for re-addressing this to me, Norman. I fear your claim about the climate is disputed by none other than Benoit Mandelbrot himself … see here for his discussion of the issues.
w.

September 26, 2013 3:01 pm

Thanks Jeff.
You may want to examine Central England Temperatures that date back to 1659.
Here is one data source – I am uncertain if it is the best one, and I cannot comment on the existence or absence of a warming bias in CET’s.
The CET website says “The mean daily data series begins in 1772 and the mean monthly data in 1659. … Since 1974 the data have been adjusted to allow for urban warming.”
http://www.metoffice.gov.uk/hadobs/hadcet/
Regards, Allan

Editor
September 26, 2013 3:14 pm

Dr Norman Page says:
September 26, 2013 at 1:04 pm

If you want to see the shape or pattern of the natural variation see Figs 5,6,and 7 of the last post at http://climatesense-norpag.blogspot.com

I checked your “30 Year Forecast”. It contains forecasts like this one:

At this time the sun has entered a quiet phase with a dramatic drop in solar magnetic field strength since 2004. This suggests the likelihood of a cooling phase on earth with Solar Cycles 21, 22 ,23 equivalent to Solar Cycles 2,3,4, and the delayed Cycle 24 comparable with Cycle 5 so that a Dalton type minimum is probable “. …………………………

“There will be a steeper temperature gradient from the tropics to the poles so that violent thunderstorms with associated flooding and tornadoes will be more frequent in the USA

Last year, lots of tornadoes. You declared the prediction a grand success. This year, almost no tornadoes … is your prediction now a failure? And there has been a long-term, sustained dearth of hurricanes. Are thunderstorms more violent? I haven’t seen one study to say so … but none of that is the real problem.
The problem is that without numbers your forecasts are meaningless. Without value. Worthless. Why? Because they are not falsifiable. You have not specified the time period, the numbers, none of that. Since they cannot be falsified they are not forecasts at all.
So you might as well scrap your whole “30-Year Forecast” and start over. This time, make a real prediction. What’s a real prediction? Here’s one:

During two of the three years 2010-2013, the number of tornadoes of F2 strength or better in the US will exceed 127.

Do you see the difference? At the end of 2013, we can say with surety whether my forecast is right or wrong.
But your pseudo-cast, that “tornadoes will be more frequent in the USA”, that says nothing. During what time period? How strong a tornado? More frequent than what?
Scrap it and start over, I don’t see a real forecast in the whole lot. Remember … if it can’t be falsified, it’s not a forecast, so you need numbers, numbers, numbers …
w.

milodonharlani
September 26, 2013 3:14 pm

Willis Eschenbach says:
September 26, 2013 at 2:53 pm
Mandelbrot and Wallis (1969) wrote before Hays, Imbrie & Shackleton (1976) confirmed Milankovitch cycles in what Mandelbrot would have classified as paleoclimatology. It could well be that orbital mechanics influence climatic & meteorological phenomena on time scales both longer & shorter than 10,000 to 100,000 years, but Earth’s movements do seem to dominate on the order of 100,000 years.
It also may be that on the scale of decades to millennia climate is indeed chaotic, although that hypothesis is IMO by no means strongly supported, while not yet falsified. But neither has the hypothetical existence of quasi-periodic waves like D-O & Bond cycles been falsified. Observed multi-decadal climatic phenomena such as the PDO & AMO & centennial to millennial events like the Holocene Climatic Optimum, the Minoan, Roman, Medieval & Modern Warm Periods, with intervening Cold Periods IMO tend to support the reality of cycles rather than noise amid the chaos.
I agree with those who have written here that climatology would be better served by trying to discover possible causes for these observed quasi-cycles instead of constructing ever more Ptolemaic epicycles-like GIGO models.

September 26, 2013 3:57 pm

Willis, I’m interested in forecasting climate. I don’t think you read the post to the end, and the last post was just the latest in a series on this subject going back about a year, to show where I’m at. Here below is the conclusion. You ask for falsifiability: my forecasts would be seriously in question if there is not 0.15 – 0.2 degrees of cooling in the global SSTs by 2018-20.
Re weather – obviously short-term weather events will vary considerably during decadal time spans and in different geographical regions. On a cooling world the jet stream moves more meridionally, with the formation of blocking highs which bring cold fronts and cold further south in winter and can develop high-temperature highs in summer. Similarly, warm moist air streams can move further north around the highs. Thus on a cooling and generally dryer planet there is a much greater temperature gradient, particularly across the fronts, with all that that implies with regard to weather. This type of weather pattern has been more frequent over the last several years. Major class 4 and 5 hurricanes will be less frequent on a cooling world; Sandy, e.g., was barely a hurricane at all when it went ashore. Nothing in the U.S. since Katrina. Enough on the weather.
Re Mandelbrot – nowhere in the long quote you linked to does he say anything I would disagree with; in particular, in this quote he doesn’t say the climate is chaotic. Here’s the cooling forecast, which is reasonably specific.
“To summarize- Using the 60 and 1000 year quasi repetitive patterns in conjunction with the solar data leads straightforwardly to the following reasonable predictions for Global SSTs
1 Continued modest cooling until a more significant temperature drop at about 2016-17
2 Possible unusual cold snap 2021-22
3 Built in cooling trend until at least 2024
4 Temperature Hadsst3 moving average anomaly 2035 – 0.15
5 Temperature Hadsst3 moving average anomaly 2100 – 0.5
6 General Conclusion – by 2100 all the 20th century temperature rise will have been reversed.
7 By 2650 earth could possibly be back to the depths of the little ice age.
8 The effect of increasing CO2 emissions will be minor but beneficial – they may slightly ameliorate the forecast cooling and more CO2 would help maintain crop yields .
9 Warning !!
The Solar Cycles 2,3,4 correlation with cycles 21,22,23 would suggest that a Dalton minimum could be imminent. The Livingston and Penn Solar data indicate that a faster drop to the Maunder Minimum Little Ice Age temperatures might even be on the horizon. If either of these actually occur there would be a much more rapid and economically disruptive cooling than that forecast above which may turn out to be a best case scenario.
How confident should one be in these above predictions? The pattern method doesn’t lend itself easily to statistical measures. However, statistical calculations only provide an apparent rigor for the uninitiated, and in relation to the IPCC climate models are entirely misleading because they make no allowance for the structural uncertainties in the model set-up. This is where scientific judgment comes in – some people are better at pattern recognition and meaningful correlation than others. A past record of successful forecasting such as indicated above is a useful but not infallible measure. In this case I am reasonably sure – say 65/35 for about 20 years ahead. Beyond that certainty drops rapidly. I am sure, however, that it will prove closer to reality than anything put out by the IPCC, Met Office or the NASA group. In any case this is a Bayesian-type forecast, in that it can easily be amended on an ongoing basis as the temperature and solar data accumulate.

richardscourtney
September 26, 2013 4:00 pm

Dr Norman Page:
I read your post at September 26, 2013 at 3:57 pm.
Please say if you intend to sell your ‘forecasts’ commercially.
Richard

Henry Clark
September 26, 2013 4:23 pm

Unfortunately, any calculation, however much mathematical analysis may be performed, is limited by its input data and starting assumptions. What is the source of the SST data used in this article? It is probably HADCRUT3 as in the last article, which is published by CRU (infamous for Climategate). That may be moderately less fudged than the later publication HADCRUT4, somewhat a better choice, but that doesn’t make it an accurate one.
A problem is that activists like CRU and Hansen’s GISS have repeatedly rewritten decades-old thermometer readings towards cooling the early/mid 20th century and warming the late 20th century. So the 0.6 degrees/century is a spurious result (or, most favorably, an upper limit) compared to what history was before activists rewrote it, as illustrated in http://img176.imagevenue.com/img.php?image=81829_expanded_overview_122_424lo.jpg
I appreciate the work done by Mr. Patterson but just believe that anytime the likes of HADCRUT is used, even if done so merely because it was the most readily accessible digital source (since the alarmists have the most money for spreading such on the internet), there should be prominent disclaimers to readers.

Bill Illis
September 26, 2013 4:39 pm

Instead of using some artificial pseudo-cycle, why not use a real 60 year ocean oscillation like the AMO for example.
Just see how close the Raw, undetrended AMO is to Hadcrut4. There is some scary correlation here and it also helps explain some of the large ENSO spikes that have occurred over time.
http://s18.postimg.org/9uar3ow0p/Hadcrut4_vs_Raw_AMO.png
And this is monthly data, rather than annual. I think one must use monthly data because the natural oscillation cycles such as the ENSO and the AMO operate on monthly time-scales, not annual.
Furthermore, the “noise” in the climate system actually operates on a 2 week to 1 month time basis. The large global temperature excursions seem to last for about 2 weeks at a time (while the highest resolution data is monthly so I guess one has to use what is available).
Daily UAH lower troposphere temps in 2012 and 2013 to see what I mean about that last statement.
http://s22.postimg.org/hd5dsb4gx/Daily_UAH_12_13_Aug13.png
Or Ryan Maue’s compilation of daily surface temperatures provided by the NCEP CFSv2 weather model.
http://models.weatherbell.com/climate/cfsr_t2m_2012.png

September 26, 2013 5:02 pm

Richard Courtney I would be happy to consult on climate matters and forecasts on a commercial basis for anybody interested in my advice.

September 26, 2013 5:10 pm

Willis;
Your citation says nothing, zero, about longwave radiation somehow being captured and re-radiated by a layer of water vapor just above the surface.
>>>>>>>>>>>>>>>>>>>>>>
My bad. I meant the cite to show that the IPCC differentiates between radiative forcing and surface forcing, and that the surface forcing due to CO2 by their own numbers is small by comparison. I should have left out the comment about water vapour being the reason why, as the cite doesn’t in fact address the reason why that is, only that it is. AR4 WG1 2.2 also goes to some length to explain that radiative forcing is very different from surface forcing:
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch2s2-2.html
I’ve brought this up in discussion with RGB, but the best explanation that I got (from the warmist perspective) came from Joel Shore. His position is that water vapour, being much more concentrated at low altitudes, does in fact suppress radiative forcing in both directions: it absorbs LW coming up and sends half back down, but it also absorbs LW coming down and sends half back up. Since CO2 concentrations relative to water vapour at low altitude are insignificant, but at high altitude where water vapour concentration drops off CO2’s effects are more pronounced from a radiative-forcing perspective, CO2 should in theory have a larger effect on temperature at altitude than at the surface. Surface temps then rise not because of surface forcing, but because the air column being warmed at altitude becomes more stable, and via the lapse rate surface temps rise.
That explanation is also unsatisfactory to me, and I will freely admit that I’m commenting from memory, and that a couple of paragraphs doesn’t do the topic justice. That said, my main point was to draw attention to the fact that the 3.7 w/m2 is radiative forcing and is a different number from surface forcing.

richardscourtney
September 26, 2013 5:16 pm

Dr Norman Page:
In your post at September 26, 2013 at 5:02 pm you say

Richard Courtney I would be happy to consult on climate matters and forecasts on a commercial basis for anybody interested in my advice.

Sincere thanks for your honesty and openness.
If you offer only clear and falsifiable forecasts then that would be useful information on WUWT.
We recently had a person posting vague forecasts on WUWT with the clear intent of using WUWT as a free advertising medium for his intended forecasting business. Given that you have a commercial interest, please only make specific and falsifiable forecasts on WUWT. Your posting vague forecasts could be understood as being a misuse of our host’s generosity. And that could lead to loss of permission for the posting of forecasts on WUWT, which would be a loss.
Again, genuine thanks for your frankness, and I hope you accept the sincerity of my request and the reason for it.
Richard

September 26, 2013 5:17 pm

Matthew R Marler says:
September 26, 2013 at 1:03 pm
davidmhoffer: Radiative forcing from CO2 runs into a layer of water vapour close to surface that absorbs and re-radiates it.
How close to the surface? Downwelling IR is measured by the TAO buoys.
>>>>>>>>>>>>>>>>>>
In part please see my response to Willis above. Yes, the buoys measure downwelling LW, but they don’t know where any given photon came from. It could have originated 1 cm, 1 m, or 1 km above the buoy. It could have come from a CO2 molecule or a water vapour molecule or some other source, but the buoy doesn’t know which ones are which. It only knows total energy flux, frequency and wavelength. The issue here is that water vapour at the surface in the tropics runs in the 30,000 ppm+ range to CO2’s 400. But higher altitudes and higher latitudes have colder temperatures, water vapour plummets, and so CO2’s effects are more pronounced.

DesertYote
September 26, 2013 5:38 pm

Nancy C says:
September 26, 2013 at 1:23 pm
###
Power meters are generally bolometric devices.