
Source: Mantua, 2000
The essay below has been part of a back-and-forth email exchange for about a week. Bill has done some yeoman’s work here at coaxing new information from existing data. Both HadCRUT and GISS data were used for the comparisons to a doubling of CO2, and what I find most interesting is that both the Hadley and GISS data come out higher for a doubling of CO2 than the NCDC data, implying that the adjustments to the data used in GISS and HadCRUT add something that really isn’t there.
The logarithmic plots of CO2 doubling help demonstrate why CO2 won’t cause a runaway greenhouse effect, due to the diminishing IR returns as CO2 concentrations (ppm) increase. This is something many people don’t get to see visualized.
One of the other interesting items in the essay is the 1877-78 El Nino event. Bill writes:
The 1877-78 El Nino was the biggest event on record. The anomaly peaked at +3.4C in Nov, 1877 and by Feb, 1878, global temperatures had spiked to +0.364C or nearly 0.7C above the background temperature trend of the time.
Clearly the oceans ruled the climate, and it appears they still do.
Let’s all give this a good examination, point out weaknesses, and give encouragement for Bill’s work. This is a must read. – Anthony
Adjusting Temperatures for the ENSO and the AMO
A guest post by: Bill Illis
People have noted for a long time that the effect of the El Nino Southern Oscillation (ENSO) should be accounted for and adjusted for in analyzing temperature trends. The same point has been raised for the Atlantic Multidecadal Oscillation (AMO). Until now, there has not been a robust method of doing so.
This post will outline a simple least squares regression solution to adjusting monthly temperatures for the impact of the ENSO and the AMO. There is no smoothing of the data, no plugging of the data; this is a simple mathematical calculation.
Some basic points before we continue.
– The ENSO and the AMO both affect temperatures and, hence, any reconstruction needs to use both ocean temperature indices. The AMO actually provides a greater impact on temperatures than the ENSO.
– The ENSO and the AMO impact temperatures directly and continuously on a monthly basis. Any smoothing of the data or even using annual temperature data just reduces the information which can be extracted.
– The ENSO’s impact on temperatures is lagged by 3 months while the AMO seems to be more immediate. This model uses the Nino 3.4 region anomaly since it seems to be the most indicative of the underlying El Nino and La Nina trends.
– When the ENSO and the AMO impacts are adjusted for, all that is left is the global warming signal and a white noise error.
– The ENSO and the AMO are capable of explaining almost all of the natural variation in the climate.
– We can finally answer the question of how much global warming there has been to date and how much has occurred since 1979, for example. And, yes, there has been global warming, but the amount is much less than the global warming models predict, and the effect even seems to be slowing down since 1979.
– Unfortunately, there is not currently a good forecast model for the ENSO or AMO so this method will have to focus on current and past temperatures versus providing forecasts for the future.
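Schematically, the reconstruction described in these points amounts to a regression of the form

Temp(t) = a * Nino3.4(t - 3 months) + b * AMO(t) + c * ln(CO2(t)) + d + white-noise error

where a, b, c and d are the fitted coefficients; the two ocean terms are fitted first and the remaining divergence is then modeled against ln(CO2). (This equation is only a summary sketch of the approach, not a formula taken from the spreadsheets.)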
And now to the good part, here is what the reconstruction looks like for the Hadley Centre’s HadCRUT3 global monthly temperature series going back to 1871 – 1,652 data points.

I will walk you through how this method was developed since it will help with understanding some of its components.
Let’s first look at the Nino 3.4 region anomaly going back to 1871 as developed by Trenberth (actually this index is smoothed but it is the least smoothed data available).
– The 1877-78 El Nino was the biggest event on record. The anomaly peaked at +3.4C in Nov, 1877 and by Feb, 1878, global temperatures had spiked to +0.364C or nearly 0.7C above the background temperature trend of the time.
– The 1997-98 El Nino produced similar results and still holds the record for the highest monthly temperature of +0.749C in Feb, 1998.
– There is a lag of about 3 months in the impact of ENSO on temperatures. Sometimes it is only 2 months, sometimes 4 months and this reconstruction uses the 3 month lag.
– Going back to 1871, there is no real trend in the Nino 3.4 anomaly which indicates it is a natural climate cycle and is not related to global warming in the sense that more El Ninos are occurring as a result of warming. This point becomes important because we need to separate the natural variation in the climate from the global warming influence.

The AMO anomaly has longer cycles than the ENSO.
– While the Nino 3.4 region can spike up to +3.4C, the AMO index rarely gets above +0.6C anomaly.
– The long cycles of the AMO match the major climate shifts which have occurred over the last 130 years: the downswing in temperatures from 1890 to 1915, the upswing from 1915 to 1945, the decline from 1946 to 1975 and the upswing from 1975 to 2005.
– The AMO also has spikes during the major El Nino events of 1877-78 and 1997-98 and other spikes at different times.
– It is apparent that the major increase in temperatures during the 1997-98 El Nino was also caused by the AMO anomaly. I think this has led some to believe the impact of ENSO is bigger than it really is and has caused people to focus too much on the ENSO.
– There is some correlation between the ENSO and the AMO, given these simultaneous spikes, but the longer cycles of the AMO versus the short, sharp swings in the ENSO mean they are relatively independent.
– As well, the AMO appears to be a natural climate cycle unrelated to global warming.

When these two ocean indices are regressed against the monthly temperature record, we have a very good match.
– The coefficient for the Nino 3.4 region at 0.058 means it is capable of explaining changes in temps of as much as +/- 0.2C.
– The coefficient for the AMO index at 0.51 to 0.75 indicates it is capable of explaining changes in temps of as much as +/- 0.3C to +/- 0.4C.
– The F-statistic for this regression at 222.5 means it passes a 99.9% confidence interval.
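As an aside, here is a minimal Python sketch of this two-index least squares step (an illustration only, not Bill’s spreadsheet; the input file and the column names 'temp', 'nino34' and 'amo' are hypothetical):

import pandas as pd
import statsmodels.api as sm

# Hypothetical input: aligned monthly series with columns 'date', 'temp'
# (e.g. HadCRUT3 anomaly), 'nino34' (Nino 3.4 anomaly) and 'amo' (AMO index).
df = pd.read_csv("monthly_series.csv", parse_dates=["date"], index_col="date")

# Apply the 3-month lag to the Nino 3.4 index, as described above.
df["nino34_lag3"] = df["nino34"].shift(3)
df = df.dropna()

X = sm.add_constant(df[["nino34_lag3", "amo"]])
fit = sm.OLS(df["temp"], X).fit()

print(fit.params)     # compare with the ~0.058 (Nino 3.4) and ~0.5-0.75 (AMO) coefficients
print(fit.fvalue)     # F-statistic, compare with the 222.5 quoted above
print(fit.rsquared)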
But there is a divergence between the actual temperature record and the regression model based solely on the Nino and the AMO. This is the real global warming signal.

The global warming signal (which also includes an error term, UHI, poor siting and adjustments in the temperature record, as demonstrated by Anthony Watts) can now be modeled against the rise in CO2 over the period.
– Warming occurs in a logarithmic relationship to CO2 and, consequently, any model of warming should be done on the natural log of CO2.
– CO2 in this case is just a proxy for all the GHGs but since it is the biggest one and nitrous oxide is rising at the same rate, it can be used as the basis for the warming model.
This regression produces a global warming signal which is about half of that predicted by the global warming models. The F statistic at 4,308 passes a 99.9% confidence interval.
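For reference, the ‘per doubling’ numbers below follow directly from the fitted coefficient on ln(CO2): if the regression gives ∆T = C*ln(CO2) + constant, then a doubling of CO2 adds ∆T = C*ln(2), or about 0.693*C. A coefficient of roughly 2.7, for example, corresponds to the ~1.85C per doubling quoted below.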

– Using the HadCRUT3 temperature series, warming works out to only 1.85C per doubling of CO2.
– The GISS reconstruction also produces 1.85C per doubling while the NCDC temperature record only produces 1.6C per doubling.
– Global warming theorists are now explaining that the lack of warming to date is due to the deep oceans absorbing some of the increase (not the surface, since this is already included in the temperature data). This means the global warming model prediction line should be pushed out 35 years, 75 years or even hundreds of years.
Here is a depiction of how logarithmic warming works. I’ve included these log charts because it is fundamental to how to regress for CO2 and it is a view of global warming which I believe many have not seen before.
I constructed the formula for the global warming models myself (I’m not even sure the modelers have this perspective on the issue), but it is the only formula which goes through the temperature figures at the start of the record (285 ppm or 280 ppm) and the 3.25C increase in temperatures for a doubling of CO2. It is curious that the global warming models are also based on CO2 or GHGs being responsible for nearly all of the 33C greenhouse effect, through its impact on water vapour as well.

The divergence, however, is going to be harder to explain in just a few years since the ENSO and AMO-adjusted warming observations are tracking farther and farther away from the global warming model’s track. As the RSS satellite log warming chart will show later, temperatures have in fact moved even farther away from the models since 1979.

The global warming models formula produces temperatures which would be +10C in geologic time periods when CO2 was 3,000 ppm, for example, while this model’s log warming would result in temperatures about +5C at 3,000 ppm. This is much closer to the estimated temperature history of the planet.
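As a rough check of that comparison (my arithmetic, ignoring baseline offsets): with a sensitivity S per doubling and a 285 ppm reference, the warming at 3,000 ppm is S*ln(3000/285)/ln(2), or about 3.4*S. A sensitivity of 3.25C gives roughly +11C, while 1.6C gives about +5.4C and 1.85C about +6.3C, which is the order of the +10C versus +5C contrast described above.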
This method is not perfect. The overall reconstruction produces a resulting error which is higher than one would want. The error term is roughly +/- 0.2C, but it does appear to be strictly white noise. It would be better if the resulting error were less than +/- 0.2C, but it appears this is unavoidable in something as complicated as the climate and with the measurement errors which exist for temperature, the ENSO and the AMO.
This is the error for the reconstruction of GISS monthly data going back to 1880.

There does not appear to be a signal remaining in the errors for another natural climate variable to impact the reconstruction. In reviewing this model, I have also reviewed the impact of the major volcanoes. All of them appear to have been caught by the ENSO and AMO indices which I imagine are influenced by volcanoes. There appears to be some room to look at a solar influence but this would be quite small. Everyone is welcome to improve on this reconstruction method by examining other variables, other indices.
Overall, this reconstruction produces an r^2 of 0.783 which is pretty good for a monthly climate model based on just three simple variables. Here is the scatterplot of the HadCRUT3 reconstruction.

This method works for all the major monthly temperature series I have tried it on.
Here is the model for the RSS satellite-based temperature series.

Since 1979, warming appears to be slowing down (after it is adjusted for the ENSO and the AMO influence).
The model produces warming for the RSS data of just 0.046C per decade which would also imply an increase in temperature of just 0.7C for a doubling of CO2 (and there is only 0.4C more to go to that doubling level.)
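A rough way to see how those two numbers hang together (using approximate Mauna Loa values of about 337 ppm in 1979 and 385 ppm in 2008, which are my round figures rather than values from the post): 0.046C per decade over roughly 29 years is about 0.13C of warming, ln(2)/ln(385/337) is about 5.2, so the implied sensitivity is about 0.13 x 5.2 ≈ 0.7C per doubling; and the remaining rise from 385 ppm to 570 ppm (double 285 ppm) would then add about 0.7 x ln(570/385)/ln(2) ≈ 0.4C.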

How far off this warming trend is from the models can be seen in this zoom-in of the log warming chart. If you apply the same method to GISS data since 1979, it falls in the same circle as the satellite observations, so the different agencies do not produce much different results.

There may be some explanations for this even wider divergence since 1979.
– The regression coefficient for the AMO increases from about 0.51 in the reconstructions starting in 1880 to about 0.75 when the reconstruction starts in 1979. This is not an expected result in regression modelling.
– Since the AMO has been cycling upward since 1975, the increased coefficient might just be catching a ride on that increasing trend.
– I believe a regression is a regression and we should just accept this coefficient. The F statistic for this model is 267 which would pass a 99.9% confidence interval.
– On the other hand, the warming for RSS is really at the very lowest possible end for temperatures which might be expected from increased GHGs. I would not use a formula which is lower than this for example.
– The other explanation would be that the adjustments of old temperature records by GISS and the Hadley Centre and others have artificially increased the temperature trend prior to 1979 when the satellites became available to keep them honest. The post-1979 warming formulae (not just RSS but all of them) indicate old records might have been increased by 0.3C above where they really were.
– I think these explanations are both partially correct.
This temperature reconstruction method works for all of the major temperature series over any time period chosen and for the smaller zonal components as well. There is a really nice fit to the RSS Tropics zone, for example, where the Nino coefficient increases to 0.21 as would be expected.

Unfortunately, the method does not work for smaller regional temperature series such as the US lower 48 and the Arctic where there is too much variation to produce a reasonable result.
I have included my spreadsheets which have been set up so that anyone can use them. All of the data for HadCRUT3, GISS, UAH, RSS and NCDC is included if you want to try out other series. All of the base data on a monthly basis including CO2 back to 1850, the AMO back to 1856 and the Nino 3.4 region going back to 1871 is included in the spreadsheet.
The model for monthly temperatures is “here” and for annual temperatures is “here” (note the annual reconstruction is a little less accurate than the monthly reconstruction but still works).
I have set-up a photobucket site where anyone can review these charts and others that I have constructed.
http://s463.photobucket.com/albums/qq360/Bill-illis/
So, we can now adjust temperatures for the natural variation in the climate caused by the ENSO and the AMO and this has provided a better insight into global warming. The method is not perfect, however, as the remaining error term is higher than one would want to see but it might be unavoidable in something as complicated as the climate.
I encourage everyone to try to improve on this method and/or find any errors. I expect this will have to be taken into account from now on in global warming research. It is a simple regression.
UPDATED: Zip files should download OK now.
SUPPLEMENTAL INFO NOTE: Bill has made the Excel spreadsheets with data and graphs used for this essay available to me, and for those interested in replication and further investigation, I’m making them available here on my office webserver as a single ZIP file
Downloads:
Annual Temp Anomaly Model 171K Zip file
Monthly Temp Anomaly Model 1.1M Zip file
Just click the download link above, save as zip file, then unzip to your local drive work folder.
Here is the AMO data which is updated monthly a few days after month end.
http://www.cdc.noaa.gov/Correlation/amon.us.long.data
Here is the Nino 3.4 anomaly from Trenberth from 1871 to 2007.
ftp://ftp.cgd.ucar.edu/pub/CAS/TNI_N34/Nino34.1871.2007.txt
And here is Nino 3.4 data updated from 2007 on.
http://www.cpc.ncep.noaa.gov:80/data/indices/sstoi.indices
– Anthony
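For anyone who wants to pull the AMO or Nino 3.4 files linked above into Python rather than Excel, here is a hypothetical loader sketch. It assumes the usual NOAA PSD layout of one row per year (the year followed by 12 monthly values); the header/footer line counts and the missing-value flag may need adjusting to the actual files.

import pandas as pd

cols = ["year"] + [f"m{m:02d}" for m in range(1, 13)]
amo = pd.read_csv("amon.us.long.data", sep=r"\s+", names=cols,
                  skiprows=1, skipfooter=4, engine="python")
amo = amo.apply(pd.to_numeric, errors="coerce").dropna(subset=["year"])
amo["year"] = amo["year"].astype(int)

# Reshape to a single monthly series indexed by date.
amo = amo.melt(id_vars="year", var_name="month", value_name="amo")
amo["month"] = amo["month"].str[1:].astype(int)
amo["day"] = 1
amo["date"] = pd.to_datetime(amo[["year", "month", "day"]])
amo = amo.set_index("date").sort_index()["amo"]
amo = amo.where(amo > -90)   # treat large negative values as missing-value flags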
Nice job Bill but your treatment of the logs needs attention.
Physically the argument of the ln() should be non-dimensional i.e. the expression should have the form:
∆T=C*ln([CO2]/[CO2]o)
which can be expanded to: C*ln([CO2])-C*ln([CO2]o)
so rather than treat the constant term as a free variable in your fit it should be a constant with [CO2]o=285.
In this case at the start when [CO2]=285, ln(1)=0 therefore ∆T=0
In your case the two curve fits give ~306 and ~326 and you can see this by looking at the graph and seeing where the two lines cross 0. I would suggest that you try the fit with this model instead.
Secondly, you attach significance to the ‘intercept’ of the graphs; this is in error mathematically, since the ln(x) function approaches -∞ asymptotically as x -> 0.
Physically this is in error because at small values of [CO2] the dependence becomes linear.
It’s interesting to see that a simple lumped parameter model using basically the variation of the two ocean basin SST anomalies (detrended) and a greenhouse term gives such a good agreement.
I hope WordPress can handle the math symbols, apologies if it can’t.
In 2005 I did similar research, but using yearly data and including a wide variety of parameters. The results were published at http://www.junkscience.com/MSU_Temps/J_Janssens.htm , where one can also find an early model to play with. In early 2007, I updated the data and improved the model using statistical tools. The methodology and results were published on my website at http://users.telenet.be/j.janssens/Climate2007/Climatereconstruction.html .
Though these are just statistical approximations, they do provide insight into the factors influencing the temperature data, and into the weaknesses of the professional climate models. I for one learned that the influence of the oceans was much bigger than expected, and because these variations (AMO, …) existed long before any anthropogenic “pollution”, the current global warming, to me, seems much more due to these factors than to e.g. manmade CO2.
To Phil,
Thanks, but isn’t the formula really (I can’t do the symbols).
— Change in T x–>y = C Ln(CO2x) – 26.9 – [C ln(CO2y) – 26.9]
— the constants cancel each other out when you are talking about a change in T, so they are not represented, but the proper formula would still include them.
The essay says there is no GW trend in the AMO, yet most other studies have found a positive trend of around 0.5C over 120 years. The statement seems to be based on a graphing of this dataset … http://www.cdc.noaa.gov/Correlation/amon.us.long.data
But if we look at the NOAA description of the dataset, it tells us that the index is derived by detrending the SST data. So is the essay bringing us the startling conclusion that a detrended dataset contains no trend?
If the detrended AMO dataset was used in the regression analysis then this will tend to alias any non-linear forcings, so any conclusions based on the residuals from that regression may be simple artifacts of the detrending.
It does not seem legitimate to simply assign any residual to CO2 warming and not to any other factors. Also, as alluded to in the text but not the analysis, the GHG forcing does not act instantaneously or even on an annual timescale; there is a lag in the climate response which means that there is estimated to be around 0.6C of warming ‘in the pipeline’. Any extrapolation needs to include this, not to mention the additional forcing from non-GHG feedbacks, which tend to be exponential …
For a peer-reviewed analysis along the same lines see http://holocene.meteo.psu.edu/shared/articles/KnightetalGRL05.pdf
One interesting thing about Jan Janssens’ analysis compared to this one is that this one uses only 3 variables to create the fit, which I think is pretty much as good as Jan’s. Jan … do you agree? Jan, for example, modelled volcanoes directly, whereas this analysis suggests the AMO/ENSO is directly impacted by volcanoes, so “we don’t need it”.
If this analysis is right, CO2 is overplayed as a climate influence, and I for one agree.
We are left with the open question of what mechanism drives the AMO and ENSO. Well, perhaps volcanoes are part of the answer, but CO2 driving global warming isn’t. I haven’t found anything in the literature which suggests a plausible cause of AMO/ENSO variation – it’s a natural cycle, i.e. we don’t know.
Anybody seen a plausible explanation?
This and Jan’s model do make predictions on temperature, but we will have to wait a long time to test them.
However, Bill, you could do a hindcast by using data up to, say, 1978 and seeing how well it forecasts the last 30 years to 2008 (30 years being everybody’s favourite minimum climate interval).
I notice you say the correlation looks different since 1979 (RSS keeping GISS/HadCRUT honest), so perhaps fit 1900 to 1980 and see how well it predicts 1870 to 1900 and the last 30 years?
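A minimal sketch of that kind of hindcast, reusing the hypothetical DataFrame from the regression sketch earlier (with an added 'co2' column in ppm): fit on data through 1978, then compare predictions with observations from 1979 on.

import numpy as np
import statsmodels.api as sm

df["ln_co2"] = np.log(df["co2"])
train = df[df.index.year <= 1978]
test = df[df.index.year >= 1979]

X_train = sm.add_constant(train[["nino34_lag3", "amo", "ln_co2"]])
fit = sm.OLS(train["temp"], X_train).fit()

X_test = sm.add_constant(test[["nino34_lag3", "amo", "ln_co2"]])
residuals = test["temp"] - fit.predict(X_test)
print(residuals.describe())   # how far the hindcast drifts from observations after 1979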
Bill Illis, a very interesting piece of work. However, what is missing to my eye is a comparison of your use of CO2 to represent the trend, and just using a straight line to represent the trend.
To make the case that CO2 is involved, you need to show that the fit using CO2 plus ENSO 3.4 plus AMO is significantly better than the corresponding fit using a linear trend plus ENSO 3.4 plus AMO.
My best to you,
w.
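A minimal sketch of the comparison Willis suggests, using the same hypothetical DataFrame as in the earlier sketches: fit (a) ENSO + AMO + ln(CO2) and (b) ENSO + AMO + a straight-line trend, then compare the fits.

import numpy as np
import statsmodels.api as sm

df["ln_co2"] = np.log(df["co2"])
df["trend"] = np.arange(len(df))

fit_a = sm.OLS(df["temp"], sm.add_constant(df[["nino34_lag3", "amo", "ln_co2"]])).fit()
fit_b = sm.OLS(df["temp"], sm.add_constant(df[["nino34_lag3", "amo", "trend"]])).fit()

# Both models have the same number of parameters, so adjusted R^2 and AIC can be
# compared directly as a first test of whether ln(CO2) beats a plain linear trend.
print(fit_a.rsquared_adj, fit_a.aic)
print(fit_b.rsquared_adj, fit_b.aic)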
Is there a place to read about the current assumptions about the oceans’ impact on climate? How does it relate to the above article? How are underwater volcanoes and rifts factored in, or are the effects too small to matter?
I couldn’t find anything at realClimate but maybe I didn’t know where to look.
There’s a lot of interesting stuff here. I am glad that Norm Kalmanovitch dropped in with some information on CO2 IR activity.
One thing I see constantly in papers, without anybody ever justifying the mechanism, is the claim that the warming due to CO2 is a logarithmic function. This then leads to talk of a “climate sensitivity” parameter being the mean global temperature rise due to a doubling of CO2.
But if you get into the details of infrared absorption by CO2, following the notes that Norm left us up above, I don’t see any justification for the assumption of a logarithmic relationship between CO2 concentration in the atmosphere and the temperature rise (mean globally).
Now I don’t doubt that you can take some sets of data and curve fit them to a logarithmic curve. Everybody knows you can hide all kinds of pestilence by simply plotting data on a log-log plot. The ability to curve fit data, and to calculate correlation coefficients between sets of data, doesn’t prove that there is any cause and effect relationship whatsoever.
You wouldn’t believe the total mayhem that scientists have wreaked by simply messing around with numbers, in the belief that you couldn’t possibly closely match real data to the results of just messing around with numbers.
Well, if you believe that, you need to review the history of the “Fine Structure Constant”, which has the value e^2 / 2 h c e0, where e is the electron charge, h is Planck’s constant, c is the velocity of light, and e0 (epsilon zero) is the permittivity of free space; it is approximately 1/137. The 1/137 form is intimately linked to that sordid history. The first chapter of the great fine structure constant scandal involved Sir Arthur Eddington, who once proved that alpha (TFSC) was EXACTLY 1/136; but when nature didn’t comply with his thesis and the measured value became much closer to 1/137, the good Professor Eddington thereupon proved that alpha was indeed EXACTLY 1/137. Well it isn’t; it’s about 1/137.0359895 and has been measured so accurately that it was used as a method for measuring the velocity of light (which IS now specified as an EXACT number (2.99792458E8 m/s)).
Dear deluded Professor Eddington became known as Professor Adding One!
Well, that was only the first episode of the FSC scandal. In the mid 60s someone derived the 1/FSC number as the fourth root of (pi to some low integer power times the product of about four other low integer numbers raised to low integer powers). I’ll let you university types search the literature for the paper. It computed 1/FSC to within 65% of the standard deviation of the very best experimental measured value of 1/FSC, which is 8 significant digits. And the paper included ZERO input from the physical real world universe; it was a purely mathematical calculation. But of course it had to be correct, because everybody knows you can’t get the right answer just by mucking around with numbers. The lack of observational data input fazed nobody in the science community, who embraced this nonsense; well, for about a month. That’s how long it took some computer geek to do a search for all numbers that were of the same form: 4th root of the products of low integers to low integer powers and pi to a low integer power.
The geek turned up about a dozen numbers that were equal to 1/FSC within the standard deviation of the best experimental measurements, and one of those numbers was actually within about 30% of the standard deviation: twice as accurate as the original paper. A more sophisticated mathematician developed a multidimensional sphere thesis where the radius of the sphere was the 1/FSC number and a thin shell that was +/- one standard deviation from that radius contained a number of lattice points that were solutions to the set of integers in the puzzle. So he computed the complete set of answers that fit the prescription; the result of doing nothing more than mucking around with numbers that was accepted as correct because it so accurately fitted the observed data.
So watch out what you fall for just because some fancy manipulations fit your data, particularly noisy data that can hide real errors from the predictions.
One can model the optical transmission of absorptive materials as a logarithmic function. If a certain thickness transmits 10% of a given spectrum, twice the thickness will transmit 1%, and so on; BUT such materials absorb the radiation and convert it entirely to thermal energy.
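(In symbols, that transmission example is just exponential attenuation, the Beer-Lambert law: for thickness d, T(d) = e^(-k*d), so T(2d) = T(d)^2, and a layer that passes 10% passes 1% when doubled; the absorbance -log10(T) is what grows linearly with the amount of absorber.)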
Not so with water vapor or CO2 or any other GHG. Some absorption processes may convert some of the energy to heat energy; but mostly the absorbed IR photon is simply re-emitted, perhaps with a frequency shift due to Doppler effects, or even Heisenberg uncertainty. Subsequent re-absorption by other GHG molecules may face totally different results due to temperature and pressure changes in between successive absorption-re-emission events. The likelihood that such processes follow any simple logarithmic function is rather remote, and the possibility that the global mean surface temperature change due to such processes also follows a logarithmic function, even more so.
I know all the books and papers say it’s logarithmic; how many of them derive the specific logarithmic function based on the molecular spectroscopy physics ?
How (if at all) does this dovetail with Spencer’s hypothesis regarding the PDO?
DaveE
John Philip says … “the GHG forcing does not act instantaneously or even on an annual timescale -”
What is the physical/physics basis for saying the GHG forcing does not act instantaneously or certainly within a year? We are talking about photons of light here. Where is the energy going?
and “… there is a lag in the climate response which means that there is estimated to be around 0.6C warming in the pipeline”
I noted the theory is now that the deep oceans are absorbing some of the increase and it might then take us longer to reach the doubling temperature. How long then? Because I think global warming researchers have a duty to tell us that now. Does the temperature reach the doubling level 35 years, 75 years or hundreds of years after CO2 reaches the doubling plateau?
The points about the construction of the indices are well-taken. Where can we find the raw data before it is detrended?
Bill Illis (10:46:40) :
Thanks, but isn’t the formula really (I can’t do the symbols).
– Change in T x–>y = C Ln(CO2x) – 26.9 – [C ln(CO2y) – 26.9]
— the constants cancel each other out when you are talking about a change in T, so they are not represented, but the proper formula would still include them.
No the formula is: ∆T=C*ln([CO2]/[CO2]o)= C*ln([CO2])-C*ln([CO2]o)
So if you fit a function of the form ∆T=C*ln([CO2])-B
B=C*ln([CO2]o) in your one case C=2.73 and B=15.8
so in your fit ln([CO2]o)=B/C=5.79, therefore [CO2]o=326 ppm. (Which if you look at your zoom-in graph is exactly where the red line crosses zero)
My suggestion is that you should do the regression on ∆T=C*ln([CO2]/285)
I expect that would give you a slightly lower C with the line crossing zero at 285 ppm.
REPLY: Unfortunately I can’t install LaTeX for symbol translation here on this blog, but if you want to display the formula, you could spell it out with appropriate symbols, do a screen cap, and post it up to a picture website like flickr etc. and link to it here. Just trying to help. – Anthony
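A minimal sketch of the constrained fit Phil suggests, with 'co2' in ppm and 'gw_signal' standing for the ENSO/AMO-adjusted anomaly (both column names hypothetical):

import numpy as np
import statsmodels.api as sm

# Fit dT = C * ln(CO2/285) with no free intercept, per the suggestion above.
x = np.log(df["co2"] / 285.0)
fit = sm.OLS(df["gw_signal"], x).fit()

C = float(fit.params.iloc[0])
print(C, C * np.log(2))   # coefficient and the implied warming per doubling of CO2

A fixed anomaly-baseline offset (such as the -0.4C Bill mentions below) could be subtracted from the adjusted series beforehand rather than estimated as a free intercept.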
“That is why I made the Linux analogy – built collaboratively by a community of intensely interested individuals – with nothing more than sweat equity. I assume Bill Illis did not get paid for his little research work – yet he has put together a potentially very interesting piece of science.”
Things like that generally get started by someone who is curious and wanting to see if they can make something themselves. That was how Linux got started, when Linus Torvalds sat down to play with his 386 and decided out of simple curiosity to see if he could make a unix-like OS. Others got interested and began adding pieces, so initially it was sort of a “stone soup” effort.
But then things changed and in a very important way. To give an example, we used Linux at work. We made some changes to some programs to better support our particular environment. Over time as the “upstream” versions of these programs were released, we would have to fit our changes into the new code release. Sometimes it was easy, other times it was more difficult depending on what changed in that upstream release. One day I saw someone asking about a feature on a mailing list for one of these programs and it was a feature we had actually implemented in our environment. I made a decision to provide our changes to the software developer as a “contribution” and they were adopted and incorporated into the standard package. We never had to experience that pain of patching our changes into the code after that. The “upstream” maintainer adopted the maintenance of those changes and a lot of other people benefited from the new features we added. But overall the motivation was self-interest. It was to our benefit to have someone else maintain that code and offload the job of having to merge our changes with every new code release.
So while things often get started out of curiosity, and people will often take an interest in almost a “hobby” sense, what really gets something rolling is when it actually becomes useful in a “real world” sense. And while people will sometimes gift some work out of the goodness of their hearts, often the biggest returns are from people in whose interest it is to get their changes into the broader code base than to have to hack at it every time something changes. It becomes more efficient to open the source than to keep it closed.
Same with projects here. People whose livelihood depends on accurate weather data might find it in their interest to help with the surface stations project or to share what information they have more generally. A firm in the agricultural industry, for example, might make better long term decisions if they knew that growing seasons were actually shortening or flat and not lengthening. If there is no warming, then making economic decisions based on the assumption that growing seasons will get longer in the future can cost someone a fortune. And if I am selling something and I have the right information, nobody is going to buy it unless they also have the right information, so it pays both parties (it is in their self-interest) to get accurate information out there, so that the producer provides the right thing and the market demands the right thing.
What is going on now with our government data is practically criminal in the economic sense. Because of politically-based bias, real economic damage is potentially being done. I am all for “open source” science models.
But if you get into the details of infrared absorption by CO2, following the notes that Norm left us up above, I don’t see any justification for the assumption of a logarithmic relationship between CO2 concentration in the atmosphere and the temperature rise (mean globally).
That’s because Norm’s exposition falls way short of what really happens!
In our atmosphere the absorption spectrum of CO2 is a very closely packed series of absorption lines (so many and so closely packed together that they look like a broad band unless viewed at high resolution). At very low pressures and temperatures (like on Mars) the individual lines are very sharp and separate; as pressure and temperature are increased to the values seen in our atmosphere, the individual lines are broadened by collisional and Doppler effects and eventually overlap each other. As a consequence the absorbance dependence changes: at very low pressures and temperatures it will be linear, at very high pressures it will be √[CO2], and in between there is a transition; for CO2 in our atmosphere it’s in the intermediate region and is best described by ln (and this can be measured).
Astronomers have used this for a long time, they term it the ‘curve of growth’, usually applied to atomic spectra in interstellar space.
BUT such materials absorb the radiation and convert it entirely to thermal energy.
Not so with water vapor or CO2 or any other GHG. Some absorption processes may convert some of the energy to heat energy; but mostly the absorbed IR photon is simply re-emitted,
No, in our atmosphere up to the tropopause or so virtually all of the energy absorbed by CO2 is converted to the thermal motion of colliding molecules, primarily N2 & O2. The emission lifetime of the excited CO2 is much longer than the mean time between collisions and so is rapidly quenched. As you get up into the stratosphere the situation changes and the CO2 has time to emit.
To Phil,
Okay, now I see what you are saying.
I was modeling the ln(280 ppm or 285 ppm) to be at -0.4C rather than zero.
This whole model is based on the anomaly (from the baseline), so it crosses zero when the particular temperature series baseline anomaly passes zero. Each series has a slightly different baseline, and when you are comparing series you have to match up the baselines, but I wanted ln(280 ppm) to be at -0.4C.
Phil,
I’m curious about your statement that the CO2 spectrum consists of a whole bunch of closely spaced lines (in the IR?). Do you know of any link to a high resolution spectrum for CO2? I have looked and never been able to find any good spectra for the common GHG culprits. Yet I would have thought that with all the climate interest in those gases, the spectra would have been studied to death. The only data of much use I’ve been able to find comes from The Infrared Handbook, from the Infrared Information Analysis Center (ERIM), and I presume that is somewhat dated.
What is the physical basis for the many fine lines in the IR region?
It would seem to me that in the earth’s atmosphere, at least at ground level, you must have a pretty continuous absorption from around 13-17 microns; but I’m puzzled as to why a molecular spectrum has many fine lines (I am not a chemist).
George
Bill – Thermal inertia of the climate system is pretty uncontroversial; see for example this write-up of a paper by Meehl et al which quantifies the relative rates of sea level rise and global temperature increase that we are already committed to in the 21st century. Even if no more greenhouse gases were added to the atmosphere, globally averaged surface air temperatures would rise about a half degree Celsius (one degree Fahrenheit) and global sea levels would rise another 11 centimeters (4 inches) from thermal expansion alone by 2100.
“Many people don’t realize we are committed right now to a significant amount of global warming and sea level rise because of the greenhouse gases we have already put into the atmosphere,” says lead author Gerald Meehl. “Even if we stabilize greenhouse gas concentrations, the climate will continue to warm, and there will be proportionately even more sea level rise. The longer we wait, the more climate change we are committed to in the future.”
So the ‘physics’ explanation is that the heat largely goes into the oceans, which take years to decades to warm in response; 70% of the surface is ocean and it takes around a decade for the surface layer to mix with the deep ocean …
The paper concludes with a cogent statement by Meehl: “With the ongoing increase in concentrations of GHGs [greenhouse gases], every day we commit to more climate change in the future. When and how we stabilize concentrations will dictate, on the time scale of a century or so, how much more warming we will experience. But we are already committed to ongoing large sea level rise, even if concentrations of GHGs could be stabilized.”
…
The inevitability of the climate changes described in the study is the result of thermal inertia, mainly from the oceans, and the long lifetime of carbon dioxide and other greenhouse gases in the atmosphere. Thermal inertia refers to the process by which water heats and cools more slowly than air because it is denser than air.
There is a discussion of the length of time to equilibrium in this paper [may need a free subscription to access].
The AMO data before the linear detrending is here … http://www.cdc.noaa.gov/Correlation/amon.us.long.mean.data
but see the caveats on the NOAA page linked earlier. Hope this helps.
The logarithmic response to CO2 is straightforward physics and has nothing to do with any climate theory.
Put simply, CO2 absorbs light at specific wavelengths. As CO2 levels increase, more and more of the light at these wavelengths is absorbed. However, a law of diminishing returns applies: once CO2 has absorbed some, there is less left to absorb, so an increase from, say, 380 to 400 ppm has less effect than an increase from 280 to 300 ppm. It turns out this “law of diminishing returns” follows a log function.
This was covered on this website on 4/9/2008 in an article on this topic
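For reference (this is not from the article referred to above, but it is the standard simplified fit), the diminishing-returns behaviour for CO2 forcing is commonly written as ∆F ≈ 5.35 * ln(C/C0) W/m^2 (Myhre et al., 1998), which gives about 5.35 * ln(2) ≈ 3.7 W/m^2 per doubling; how much surface warming that produces is then a separate question of climate sensitivity.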
Fine lines and CO2 absorption spectra
As I recollect, a photon interacts with a CO2 molecule by changing its excitation state, either by changing the vibration mode of the atoms or by raising an electron going round one of the atoms to a higher energy level. I think IR absorption is in the latter category. Anyway, there are a very large number of possible vibration modes and it requires different amounts of energy to move from one to the other. Each possible mode change creates an absorption line, so there are lots of them.
John Philip, can you just tell us what Hansen said in 1985 in the Science article? Most of us do not have a subscription.
I note he published a temperature forecast just a few years later which did not include any ocean absorption that we can tell of, since his Scenario B forecast temps are about twice as high as temperatures are currently.
The temperature trend since 1979 indicates we can never reach the 3.25C doubling level no matter how much the oceans absorb or how much lag time there is. It would take a thousand years.
Great analysis, Bill. Just one question: given that the temperature response to increased CO2 is logarithmic, why do we only see a warming signal post-1970 or so? One would expect a greater warming signal early in the rise of CO2, rather than later. Or am I missing something?
Well, according to the official NOAA global energy budget, of the 390 W/m^2 emitted from the earth’s surface only 40 W/m^2 escapes to space, so that means GHGs are already absorbing about 90% of the total available IR, and only 10% is left to capture no matter how much GHG gets up there.
By the way, if CO2 has such a long lifetime in the atmosphere (200 years, they say), how come NOAA has plots showing that at the north pole the CO2 in the atmosphere drops 18 ppm in just five months? That doesn’t sound like it would take 200 years, or even 10 years, to remove all of it.
One other quick question: if we are already committed to gross sea level rise and temperature rise because of GHGs already emitted, and the ML data certainly shows that CO2 keeps on going up unabated despite everybody’s Kyoto commitments, then why has the earth been cooling for the last ten years? We should have had about 1/10 of the ten degrees or so predicted for the year 2100 in temperature rise; instead we have had a very sizeable temperature fall; so much for the effect of thermal lag times.
Some people may buy Meehl’s thesis (I do agree there are thermal lags), but why would the temperature go the wrong way when the “forcing” continues to climb in the same direction?
As to the multiple fine lines in the CO2 IR spectrum: I’m familiar with the so-called symmetrical stretch mode, which is not IR active, the asymmetrical stretch mode, which is IR active around 4 microns or so, and the degenerate bending mode, which I believe is the 14.77 or 15 micron mode that everyone talks about (I haven’t been able to get a definitive value for what wavelength that is).
But anything involving electron levels in the atoms, as distinct from molecular vibrations, would seem to involve much higher photon energies than required for the molecular effects, so one would expect them to be at visible light or shorter wavelengths.
Since CO2 is a linear molecule with no dipole moment, it would not be too active in rotation about the molecular axis, and other rotation modes, say about the carbon atom and other axes, would seem to be at much longer wavelengths.
But I’m eager to learn, so if someone can explain the energy-level foundation for the many fine lines in the IR spectrum of CO2, I’m all ears.
To Don Keiller,
Really good question, I’m going to have to look at the rate of change here too, something I missed. I’ll post back when I can go through it all. Going by experience with this model, I’ll need to double-check everything before responding.
Bill Illis: The following link is to a google spreadsheet with the ERSST.v2 version of NINO3.4 SST and SST anomaly data. It’s the monthly data fresh out of NOAA’s NOMADS system from January 1854 to October 2008. I tried to replace the Trenberth data with it, but ran into the following problem.
I entered annualized ERSST.v2 NINO3.4 data into the Annual Temperature Anomaly Model, starting at 1871, at column C, row 44. It created a host of #VALUE errors. I thought the difference in climatology (the ERSST.v2 base years are 1971 to 2000) was putting the data out of a working range for the model, so I recalculated the anomalies based on the same base years as the Trenberth NINO3.4 data (1950-1979). That didn’t help.
http://spreadsheets.google.com/pub?key=p4p8emYTQFThxsDz4aQ05NA
Please try the ERSST.v2 data and see if it works for you.
Regards