by Roy W. Spencer, Ph.D.
What can we learn from the IPCC climate models based upon their ability to reconstruct the global average surface temperature variations during the 20th Century?
While the title of this article suggests I’ve found evidence of natural climate cycles in the IPCC models, it’s actually the temperature variability the models CANNOT explain that ends up being related to known climate cycles. After an empirical adjustment for that unexplained temperature variability, it is shown that the models are producing too much global warming since 1970, the period of most rapid growth in atmospheric carbon dioxide. This suggests that the models are too sensitive, in which case they are forecasting too much future warming, too.
Climate Models’ 20th Century Runs
We begin with the IPCC’s best estimate of observed global average surface temperature variations over the 20th Century, from the “HadCRUT3” dataset. (Monthly running 3-year averages are shown throughout.) Of course, there are some serious concerns over the validity of this observed temperature record, especially over the strength of the long-term warming trend, but for the time being let’s assume it is correct.
Also shown in the above graph is the climate model temperature reconstruction for the 20th Century, averaged across 17 of the 21 climate models tracked by the IPCC. To produce the 20th Century temperature reconstructions included in the PCMDI archive of climate model experiments, each modeling group was asked to use whatever forcings they believed were involved in producing the observed temperature record. Those forcings generally include increasing carbon dioxide, various estimates of aerosol (particulate) pollution, and, for some of the models, volcanoes. (Also shown are polynomial fits to the curves, to allow a better visualization of the decadal time scale variations.)
There are a couple of notable features in the above chart. First, the average warming trend across all 17 climate models (+0.64 deg C per century) exactly matches the observed trend…I didn’t plot the trend lines, which lie on top of each other. This agreement might be expected since the models have been adjusted by the various modeling groups to best explain the 20th Century climate.
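As a rough illustration of the data handling described above, here is a minimal Python sketch of the 3-year smoothing and the per-century trend computation. It uses synthetic stand-in data; the actual HadCRUT3 series is not reproduced here, and the numbers are only set up to land near the +0.64 deg C per century figure quoted in the text.

```python
# Minimal sketch (synthetic stand-in data): monthly running 3-year mean
# and linear trend in deg C per century for a temperature anomaly series.
import numpy as np
import pandas as pd

def running_3yr_mean(monthly: pd.Series) -> pd.Series:
    # 36-month centered window, i.e. "monthly running 3-year averages"
    return monthly.rolling(window=36, center=True).mean()

def trend_per_century(series: pd.Series) -> float:
    y = series.dropna()
    t_years = y.index.to_numpy() / 12.0           # month index -> years
    slope_per_year = np.polyfit(t_years, y.to_numpy(), 1)[0]
    return slope_per_year * 100.0                 # deg C per century

# Synthetic series with a built-in 0.64 deg C/century trend plus noise:
months = np.arange(1200)                          # 100 years of monthly data
anoms = pd.Series(0.0064 * (months / 12.0) + 0.1 * np.random.randn(1200),
                  index=months)
print(trend_per_century(running_3yr_mean(anoms))) # ~0.64 deg C per century
```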
The more interesting feature, though, is the inability of the models to mimic the rapid warming before 1940, and the lack of warming from the 1940s to the 1970s. These two periods of inconvenient temperature variability are well known: (1) the pre-1940 warming was before atmospheric CO2 had increased very much; and (2) the lack of warming from the 1940s to the 1970s was during a time of rapid growth in CO2. In other words, the stronger warming period should have been after 1940, not before, based upon the CO2 warming effect alone.
Natural Climate Variability as an Explanation for What the Models Cannot Mimic
The next chart shows the difference between the two curves in the previous chart, that is, the 20th Century temperature variability the models have not, in an average sense, been able to explain. Also shown are three known modes of natural variability: the Pacific Decadal Oscillation (PDO, in blue); the Atlantic Multidecadal Oscillation (AMO, in green); and the negative of the Southern Oscillation Index (SOI, in red). The SOI is a measure of El Niño and La Niña activity. All three climate indices have been scaled so that their net amount of variability (standard deviation) matches that of the “unexplained temperature” curve.
As can be seen, the three climate indices all bear some level of resemblance to the unexplained temperature variability in the 20th Century.
An optimum linear combination of the PDO, AMO, and SOI that best matches the models’ “unexplained temperature variability” is shown as the dashed magenta line in the next graph. There are some time lags included in this combination, with the PDO preceding temperature by 8 months, the SOI preceding temperature by 4 months, and the AMO having no time lag.
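For the curious, here is a sketch of how such an optimal lagged combination can be computed with ordinary least squares. The lag values follow the text; the data loading is assumed, the series are taken to be monthly arrays, and the circular edge handling of the lag shift is a simplification.

```python
# Sketch: least-squares fit of lagged PDO, AMO, and -SOI indices to the
# models' "unexplained temperature variability" residual. All inputs are
# assumed to be aligned monthly numpy arrays (data loading not shown).
import numpy as np

def lagged(x: np.ndarray, lag_months: int) -> np.ndarray:
    # Shift the index forward so it "precedes" temperature by lag_months.
    # np.roll wraps at the ends; a real analysis would trim those points.
    return np.roll(x, lag_months)

def fit_combination(resid, pdo, amo, soi):
    # Lags from the text: PDO leads by 8 months, SOI by 4, AMO by 0.
    X = np.column_stack([lagged(pdo, 8), amo, lagged(-soi, 4)])
    coefs, *_ = np.linalg.lstsq(X, resid, rcond=None)
    return X @ coefs, coefs   # fitted natural component and its weights
```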
This demonstrates that, at least from an empirical standpoint, there are known natural modes of climate variability that might explain at least some portion of the temperature variability seen during the 20th Century. If we exclude the post-1970 data from the above analysis, the best combination of the PDO, AMO, and SOI results in the solid magenta curve. Note that it does a somewhat better job of capturing the warmth around 1940.
Now, let’s add this natural component in with the original model curve we saw in the first graph, first based upon the full 100 years of overlap:
We now find a much better match with the observed temperature record. But we see that the post-1970 warming produced by the combined physical-statistical model tends to be overstated, by about 40%. If we use the 1900 to 1970 overlap to come up with a natural variability component, the following graph shows that the post-1970 warming is overstated by even more: 74%.
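The overstatement figures are simply the excess of the adjusted model’s post-1970 trend over the observed post-1970 trend, expressed as a percentage of the latter. A two-line sketch (the trend values below are placeholders, not the actual numbers):

```python
# Sketch: "overstated by X%" compares post-1970 trends of the adjusted
# model curve and the observations (placeholder values shown).
def overstatement_pct(model_trend: float, observed_trend: float) -> float:
    return 100.0 * (model_trend - observed_trend) / observed_trend

# e.g. a model trend 1.4x the observed trend is overstated by 40%:
print(overstatement_pct(1.4, 1.0))   # 40.0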
Interpretation
What I believe this demonstrates is that after known, natural modes of climate variability are taken into account, the primary period of supposed CO2-induced warming during the 20th Century – that from about 1970 onward – does not need as strong a CO2-warming effect as is programmed into the average IPCC climate model. This is because the natural variability seen BEFORE 1970 suggests that part of the warming AFTER 1970 is natural! Note that I have deduced this from the IPCC’s inherent admission that they cannot explain all of the temperature variability seen during the 20th Century.
The Logical Absurdity of Some Climate Sensitivity Arguments
This demonstrates one of the absurdities (Dick Lindzen’s term, as I recall) in the way current climate change theory works: For a given observed temperature change, the smaller the forcing that caused it, the greater the inferred sensitivity of the climate system. This is why Jim Hansen believes in catastrophic global warming: since he thinks he knows for sure that a relatively tiny forcing caused the Ice Ages, then the greater forcing produced by our CO2 emissions will result in even more dramatic climate change!
But taken to its logical conclusion, this relationship between the strength of the forcing, and the inferred sensitivity of the climate system, leads to the absurd notion that an infinitesimally small forcing causes nearly infinite climate sensitivity(!) As I have mentioned before, this is analogous to an ancient tribe of people thinking their moral shortcomings were responsible for lightning, storms, and other whims of nature.
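In equation form, the inferred sensitivity is the observed temperature response divided by the assumed forcing, so holding the response fixed while shrinking the assumed forcing sends the inferred sensitivity toward infinity:

```latex
% Inferred sensitivity: fixed observed response, shrinking assumed forcing
\lambda = \frac{\Delta T}{\Delta F},
\qquad \lim_{\Delta F \to 0^{+}} \lambda = \infty
\quad \text{for fixed } \Delta T > 0
```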
This absurdity is avoided if we simply admit that we do not know all of the natural forcings involved in climate change. And the greater the number of natural forcings involved, the less we have to worry about human-caused global warming.
The IPCC, though, never points out this inherent source of bias in its reports. But the IPCC cannot admit to scientific uncertainty…that would reduce the chance of getting the energy policy changes they so desire.
ET (22:05:00) :
If you have a paper that describes the current belief for internally driven cycles, I’d appreciate a link to read.
http://solarphysics.livingreviews.org/Articles/lrsp-2005-2/
So far, I have never received a correct answer to the question: “What is Ohm’s Law?”
In which case I would really like to know how it relates to the climate>
Really? Because Ohm’s law isn’t that hard to explain, and the relationship to climate seems pretty obvious. Now I’m not an electrical engineer. Of course I’m not a physicist either, and I shoot my mouth off about physics all the time, so….
Ohm’s law is that the current between two points in a circuit is directly proportional to the electrical potential between them and inversely proportional to the resistance between them. With I being current, E being electrical potential (volts), and R being resistance, the equation is I=E/R, more commonly expressed as E=IR.

This applies to any energy relationship. In a mechanical system, substitute velocity for current, force for electrical potential, and friction for resistance. In a thermal system, substitute energy transfer for current, temperature differential for electrical potential, and thermal insulation for resistance.

This has what to do with climate? The energy the planet radiates to space is directly proportional to the temperature of the planet versus the temperature of space (absolute zero) and inversely proportional to the amount of insulation, or R value, which CO2 increases. If the R value increases, then the rate of energy radiated out (cooling) must decrease. Aha! scream the warmists, less cooling means higher temperatures! Positive feedback! Positive feedback! Aha! scream the sceptics, higher temperatures mean higher energy transfer! Negative feedback! Negative feedback!
You want I should explain capacitors and oceans too?
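A minimal numerical sketch of the analogy above, treating outgoing energy flow as “current”, the surface-to-space temperature difference as “voltage”, and insulation as R. All numbers are illustrative, not physical constants.

```python
# Thermal Ohm's-law analogy: flow = (T_surface - T_space) / R.
# Raising R at a fixed temperature immediately reduces the outgoing
# flow -- the "less cooling" step in the comment above.
T_surface = 288.0   # K, illustrative surface temperature
T_space = 0.0       # K, "absolute zero" per the analogy
R_before = 1.0      # arbitrary insulation units
R_after = 1.1       # R increased by added CO2 (assumed, illustrative)

flow_before = (T_surface - T_space) / R_before
flow_after = (T_surface - T_space) / R_after
print(flow_before, flow_after)   # outgoing flow drops when R rises
```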
davidmhoffer, you started off well but finished incorrectly. If R increases, the climate-science analog of Ohm’s Law would dictate that surface temperature increases so that the same amount of power is radiated (i.e., the current is the same; only the resistance changed, as the atmosphere expanded adiabatically with a nearly constant lapse rate, resulting in a higher potential difference developing). You need that for equilibrium conditions, i.e., energy is conserved, or charge is conserved as the case may be (1st law of thermodynamics). And the positive/negative feedback that is debated in climate science hasn’t entered the picture yet. To do so, the resistor can be simply modeled as R/(1-f) for illustrative purposes, where the debate is over whether f < 0 (negative feedback) or 0 < f < 1 (bounded positive feedback), based on the other physical processes involving greenhouse gases and water vapor.
Actually, I take that back, for debate purposes, in my illustrative model, negative feedback would reduce the incoming current (reflecting clouds) while positive feedback would increase the resistance (a higher tropopause due to added CO2).
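A few lines make RB’s illustrative R/(1-f) form concrete, showing how the effective resistance behaves for negative and bounded positive f. The function and values here are purely illustrative of that form, nothing more.

```python
# RB's illustrative feedback form: effective resistance R/(1 - f).
# Negative f damps the response; 0 < f < 1 amplifies it but stays
# bounded, blowing up only as f -> 1.
def effective_R(R: float, f: float) -> float:
    assert f < 1.0, "bounded positive feedback requires f < 1"
    return R / (1.0 - f)

for f in (-0.5, 0.0, 0.5, 0.9):
    print(f, effective_R(1.0, f))   # 0.67, 1.0, 2.0, 10.0
```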
RB
Actually, if you go through what I wrote, you will see that it is pretty accurate. Now, my example was ONLY for outgoing radiation, the temperature differential between earth and space being the “voltage”. You would need a separate model for the energy coming in, given that it has a much higher voltage (Sun temperature vs. earth’s, highly variable, and so on). But if we proceed on the assumption (as the warmists do) that energy input is stable, we only need to look at energy output.
If the system were steady state and the R value increased, then I would decrease, because at the point in time when R increases, the voltage hasn’t changed. A decrease in I (energy going out) with energy input staying stable can only have one result, which is an increase in temperature. Since an increase in temperature relates to an increase in voltage, current (energy transfer) would rise until equilibrium (in = out) is established. However, if you are going to model it that way, then this would also result in energy transfer between Sun and earth declining as the temperature differential between them is reduced.
Neither model is 100% accurate, but that is not the point. The point is that steady state requires that energy in = energy out. The climate models predict ever-increasing energy retained, which can only be accomplished if R adds energy to the system. Which it doesn’t.
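A toy time-stepped version of the sequence davidmhoffer walks through: R rises, outgoing flow drops below the (assumed constant) input, and temperature rises until in = out again at a higher level. The numbers are purely illustrative, not a climate calculation.

```python
# Transient after R increases: outgoing flow (T/R) falls below the
# constant input, so the imbalance warms the system until
# energy in = energy out at a new, higher equilibrium temperature.
T = 288.0        # illustrative starting temperature
R = 1.1          # insulation after the assumed increase (was 1.0)
inflow = 288.0   # constant input, equal to the old outflow at R = 1.0
C = 10.0         # heat capacity, arbitrary units

for step in range(200):
    outflow = T / R               # "current" through the thermal resistor
    T += (inflow - outflow) / C   # imbalance warms the system

print(T)   # approaches inflow * R = 316.8, where in = out again
```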
davidmhoffer, you are incorrect. The analog is that current I = dq/dt of charge corresponds to power P = dQ/dt of heat. Therefore, ‘I’ does not change for a power balance. The potential V, or surface temperature T, has to change, i.e., go higher. That is why the greenhouse effect works. I’m sorry, but you completely did not get it in your last paragraph, and there is no further involvement in the debate from my end.
Nice discussion. Personally, I imagine the feedbacks as in operational amplifier circuits… but if you have only resistors, you have no time lag, so it’s often more useful to imagine it as IIR filters (infinite impulse response filters, from digital signal processing). Easier to imagine for me than adding capacitors to the imagined OpAmp circuitry.
RB,
I hereby anoint you with honorary gazelle status. Throw some semi-related formulas around, draw a conclusion, and then refuse to discuss it further. BTW, the formula for power is Power = Current × Current × Resistance. Since power is related to the current squared, and linearly to the resistance, your argument falls apart so badly that your best bet is in fact to run away with your tail between your legs and sulk. Oh wait, you already did.
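For reference, the standard Ohm’s-law power identities both commenters are invoking (which quantity maps to radiated power is exactly what is in dispute here):

```latex
% Equivalent power identities under Ohm's law (E = IR):
P = IE = I^{2}R = \frac{E^{2}}{R}
```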
DirkH – yes on the lag time issue. A feedback model in an op amp works too. I use the example of a capacitor in parallel with the resistor evening out fluctuations in power input, like the ocean does temperature. But the capacitor has a slight resistance associated with it too, so there is lag time both in charging up during a warming cycle and discharging during a cooling cycle. No matter what model you choose, energy in = energy out. Perpetual motion doesn’t exist.
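A minimal sketch tying DirkH’s IIR-filter picture to the RC analogy: discretizing an RC low-pass gives a one-pole IIR filter whose time constant plays the role of the ocean’s thermal inertia. All values are illustrative.

```python
# Capacitor/ocean lag as a one-pole IIR low-pass filter.
# Discretizing dy/dt = (x - y)/(R*C) gives y[n] = a*x[n] + (1-a)*y[n-1].
def lowpass(x, dt=1.0, RC=10.0):
    a = dt / (RC + dt)            # smoothing factor from the RC time constant
    y, out = 0.0, []
    for xi in x:
        y = a * xi + (1.0 - a) * y
        out.append(y)
    return out

# A step input responds with a lag, like an ocean warming slowly:
step = [0.0] * 5 + [1.0] * 45
print(lowpass(step)[-1])          # approaches 1.0 with time constant ~RC
```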
Good grief, davidmhoffer, when you take the analogy between Fourier’s law for heat conduction and Ohm’s law for electrical conduction, you are not equating radiated power to electrical power. The analogy, in case you missed it, is that radiated power is equivalent to electrical current. Not power in one domain to power in another domain. You just offered one more statement as to why this discussion will not go anywhere.
The analogy, in case you missed it, is radiated power is equivalent to electrical current>
You gave me your word that you would no longer debate the matter. Lie number one.
Lie number two: your statement above is not only wrong, it is not what I said. The analogy is that energy transfer is equivalent to current. Power is a measure of the rate of transfer and had nothing to do with the original analogy or my discussion of it. Power in and power out need never match, because they can only be measured at an instant in time. Energy in and energy out can be measured as totals over a given time period.
Just found this thread, buried in the politics avalanche.
I think that GCMs, when used for predicting weather, tell us from the horse’s mouth what happens when the time stepping gets outside the bounds imposed by the multiple linear assumptions in the underlying solutions of the equations.
Weather predictions are good for a day or so with high probability, fairly good for about five days, and progressively lousier from then on. Why?
Because of the methods and assumptions in the numerical calculations: the sphere is gridded with boxes, solutions are assumed/fitted with linear approximations, and then the fitted system is time-stepped into the future. Every average value entered is a linear approximation too. We see that the predictions hold for a good number of time steps (I think they take 20-minute steps) until the nonlinearity of something steps in: turbulence, high and low propagations, cloud cover, etc. get out of the range of the linear approximation from the fitted state, and predictions fail.
Thus when this same type of model is morphed into climate use, we only have to ask “when will the projections fail”, not if. It will depend on the time steps and how far in time the multiplicity of linear approximations gets out of step. The climate analogue tells us that this is not really fixed (1 to 5 days). If we look at the deviation of the IPCC “projections” from the data, one to five years seems to be the limit given by the data.
Clarification of “The climate analogue tells us that this is not really fixed (1 to 5 days). If we look at the deviation of the IPCC ‘projections’ from the data, one to five years seems to be the limit given by the data”:
Weather modeling is the analogue of climate modeling.
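A toy illustration of the time-step point, using the chaotic Lorenz system as a stand-in for a weather model (it is not an actual GCM): Euler-stepping it at two nearby step sizes, the trajectories agree at first and then diverge after a finite horizon, just as the comment describes.

```python
# Step the chaotic Lorenz system with forward Euler at two nearby
# timesteps and watch the trajectories diverge after a finite horizon.
import numpy as np

def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step: a linear-in-dt approximation, standing in
    # for a model's time stepping.
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

def run(dt, t_end=15.0, sample_every=1.0):
    # Integrate from a fixed start and record the state once per time unit.
    state = np.array([1.0, 1.0, 1.0])
    out, t, next_sample = [], 0.0, sample_every
    while t < t_end:
        state = lorenz_step(state, dt)
        t += dt
        if t >= next_sample:
            out.append(state.copy())
            next_sample += sample_every
    return np.array(out)

coarse, fine = run(0.01), run(0.002)
n = min(len(coarse), len(fine))
print(np.linalg.norm(coarse[:n] - fine[:n], axis=1))
# small errors at first, then attractor-sized divergence within a few units
```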
What they haven’t noticed:
http://www.sfu.ca/~plv/100204.PNG