Guest Post by Willis Eschenbach
I haven’t commented much on my most recent posts, for the usual reasons: a day job, and the unending lure of doing more research, my true passion. To be precise, recently I’ve been frying my synapses trying to twist my head around the implications of the finding that the global temperature forecasts of the climate models are mechanically and accurately predictable by a one-line equation. It’s a salutary warning: kids, don’t try climate science at home.
Figure 1. What happens when I twist my head too hard around climate models.
Three years ago, inspired by Lucia Liljegren’s ultra-simple climate model that she called “Lumpy”, and with the indispensable assistance of the math-fu of commenters Paul_K and Joe Born, I made what to me was a very surprising discovery. The GISSE climate model could be accurately replicated by a one-line equation. In other words, the global temperature output of the GISSE model is described almost exactly by a lagged linear transformation of the input to the models (the “forcings” in climatespeak, from the sun, volcanoes, CO2 and the like). The correlation between the actual GISSE model results and my emulation of those results is 0.98 … doesn’t get much better than that. Well, actually, you can do better than that, I found you can get 99+% correlation by noting that they’ve somehow decreased the effects of forcing due to volcanoes. But either way, it was to me a very surprising result. I never guessed that the output of the incredibly complex climate models would follow their inputs that slavishly.
Since then, Isaac Held has replicated the result using a third model, the CM2.1 climate model. I have gotten the CM2.1 forcings and data, and replicated his results. The same analysis has also been done on the GFDL model, with the same outcome. And I did the same analysis on the Forster data, which is an average of 19 model forcings and temperature outputs. That makes four individual models plus the average of 19 climate models, and all of the results have been the same, so the surprising conclusion is inescapable—the climate model global average surface temperature results, individually or en masse, can be replicated with over 99% fidelity by a simple, one-line equation.
However, the result of my most recent “black box” type analysis of the climate models was even more surprising to me, and more far-reaching.
Here’s what happened. I built a spreadsheet, in order to make it simple to pull up various forcing and temperature datasets and calculate their properties. It uses “Solver” to iteratively select the values of tau (the time constant) and lambda (the sensitivity constant) to best fit the predicted outcome. After looking at a number of results, with widely varying sensitivities, I wondered what it was about the two datasets (model forcings, and model predicted temperatures) that determined the resulting sensitivity. I wondered if there were some simple relationship between the climate sensitivity, and the basic statistical properties of the two datasets (trends, standard deviations, ranges, and the like). I looked at the five forcing datasets that I have (GISSE, CCSM3, CM2.1, Forster, and Otto) along with the associated temperature results. To my total surprise, the correlation between the trend ratio (temperature dataset trend divided by forcing dataset trend) and the climate sensitivity (lambda) was 1.00. My jaw dropped. Perfect correlation? Say what? So I graphed the scatterplot.
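For anyone who wants to reproduce this kind of fit without Excel, here is a minimal Python sketch of the same procedure. It is not the spreadsheet itself: the forcing series is made up, and a crude grid search stands in for Excel’s Solver, but it shows the mechanics of choosing tau and lambda so that the emulation best matches a temperature series.

```python
import math

def emulate(forcings, lam, tau):
    """One-line equation: T1 = T0 + lam*dF1*(1-a) + dT0*a, with a = exp(-1/tau)."""
    a = math.exp(-1.0 / tau)
    temps, dT = [0.0], 0.0
    for i in range(1, len(forcings)):
        dF = forcings[i] - forcings[i - 1]
        dT = lam * dF * (1 - a) + dT * a
        temps.append(temps[-1] + dT)
    return temps

def fit(forcings, target):
    """Grid-search stand-in for Excel Solver: pick (lam, tau) minimising squared error."""
    best = None
    for lam10 in range(1, 41):            # lambda from 0.1 to 4.0 in steps of 0.1
        for tau10 in range(5, 201, 5):    # tau from 0.5 to 20 years in steps of 0.5
            lam, tau = lam10 / 10.0, tau10 / 10.0
            pred = emulate(forcings, lam, tau)
            err = sum((p - t) ** 2 for p, t in zip(pred, target))
            if best is None or err < best[0]:
                best = (err, lam, tau)
    return best[1], best[2]

# Synthetic "model output": a ramp plus a forcing pulse, emulated with known parameters
forcings = [0.02 * t + (1.0 if 30 <= t <= 33 else 0.0) for t in range(100)]
target = emulate(forcings, 0.5, 4.0)      # known lambda = 0.5, tau = 4 years
lam, tau = fit(forcings, target)
```

Because the target series here was generated with known parameters, the grid search recovers lambda = 0.5 and tau = 4.0 exactly; with real model output the fit is a least-squares compromise instead.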
Figure 2. Scatterplot showing the relationship of lambda and the ratio of the output trend over the input trend. Forster is the Forster 19-model average. Otto is the Forster input data as modified by Otto, including the addition of a 0.3 W/m2 trend over the length of the dataset. Because this analysis only uses radiative forcings and not ocean forcings, lambda is the transient climate response (TCR). If the data included ocean forcings, lambda would be the equilibrium climate sensitivity (ECS). Lambda is in degrees per W/m2 of forcing. To convert to degrees per doubling of CO2, multiply lambda by 3.7.
Dang, you don’t see that kind of correlation very often, R^2 = 1.00 to two decimal places … works for me.
Let me repeat the caveat that this is not talking about real world temperatures. This is another “black box” comparison of the model inputs (presumably sort-of-real-world “forcings” from the sun and volcanoes and aerosols and black carbon and the rest) and the model results. I’m trying to understand what the models do, not how they do it.
Now, I don’t have the ocean forcing data that was used by the models. But I do have Levitus ocean heat content data since 1950, poor as it might be. So I added that to each of the forcing datasets, to make new datasets that do include ocean data. As you might imagine, when some of the recent forcing goes into heating the ocean, the trend of the forcing dataset drops … and as we would expect, the trend ratio (and thus the climate sensitivity) increases. This effect is most pronounced where the forcing dataset has a smaller trend (CM2.1) and less visible at the other end of the scale (CCSM3). Figure 3 shows the same five datasets as in Figure 2, plus the same five datasets with the ocean forcings added. Note that when the forcing dataset contains the heat into/out of the ocean, lambda is the equilibrium climate sensitivity (ECS), and when the dataset is just radiative forcing alone, lambda is transient climate response. So the blue dots in Figure 3 are ECS, and the red dots are TCR. The average change (ECS/TCR) is 1.25, which fits with the estimate given in the Otto paper of ~ 1.3.
Figure 3. Red dots show the models as in Figure 2. Blue dots show the same models, with the addition of the Levitus heat content data to each forcing dataset. Resulting sensitivities are higher for the equilibrium condition than for the transient condition, as would be expected. Blue dots show equilibrium climate sensitivity (ECS), while red dots (as in Fig. 2) show the corresponding transient climate response (TCR).
Finally, I ran the five different forcing datasets, with and without ocean forcing, against three actual temperature datasets—HadCRUT4, BEST, and GISS LOTI. I took the data from all of those, and here are the results from the analysis of those 29 individual runs:
Figure 4. Large red and blue dots are as in Figure 3. The light blue dots are the result of running the forcings and subsets of the forcings, with and without ocean forcing, and with and without volcano forcing, against actual datasets. Error shown is one sigma.
So … my new finding is that the climate sensitivity of the models, both individually and on average, is equal to the ratio of the trend of the resulting temperatures to the trend of the forcing. This is true whether or not the changes in ocean heat content are included in the calculation. It is true both for forcings run against model temperature results and for forcings run against actual temperature datasets. It is also true for subsets of the forcing, such as volcanoes alone, or just greenhouse gases.
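This claim is easy to check numerically. The following Python sketch (all values illustrative, not taken from any model) generates a temperature series from a steady ramp forcing using the one-line equation with a known lambda, then compares that lambda to the ratio of the ordinary least-squares trends:

```python
import math

def emulate(forcings, lam, tau):
    """One-line equation: T1 = T0 + lam*dF1*(1-a) + dT0*a, with a = exp(-1/tau)."""
    a = math.exp(-1.0 / tau)
    temps, dT = [0.0], 0.0
    for i in range(1, len(forcings)):
        dT = lam * (forcings[i] - forcings[i - 1]) * (1 - a) + dT * a
        temps.append(temps[-1] + dT)
    return temps

def trend(y):
    """Ordinary least-squares slope of y against 0, 1, 2, ..."""
    n = len(y)
    xbar, ybar = (n - 1) / 2.0, sum(y) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

forcing = [0.03 * t for t in range(150)]  # steady ramp in W/m2, made up for illustration
temp = emulate(forcing, 0.6, 4.0)         # known lambda = 0.6, tau = 4 years
ratio = trend(temp) / trend(forcing)      # trend ratio
```

The ratio comes out just below the lambda used to generate the data, because the spin-up transient slightly depresses the measured temperature trend; that is the same small effect discussed in Appendix D.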
And not only did I find this relationship experimentally, by applying the one-line equation to model forcings and model results. I then found that I can derive this relationship mathematically from the one-line equation (see Appendix D for details).
This is a clear confirmation of an observation first made by Kiehl in 2007, when he suggested an inverse relationship between forcing and sensitivity. Kiehl wrote:
“The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy? Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change – too rosy a picture?, available [here]) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work, and the current paper provides the “widely circulated analysis” referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.”
However, Kiehl ascribed the variation in sensitivity to a difference in total forcing, rather than to the trend ratio, and as a result his graph of the results is much more scattered.
Figure 5. Kiehl results, comparing climate sensitivity (ECS) and total forcing. Note that unlike Kiehl, my results cover both equilibrium climate sensitivity (ECS) and transient climate response (TCR).
Anyhow, there’s a bunch more I could write about this finding, but I gotta just get this off my head and get back to my day job. A final comment.
Since I began this investigation, the commenter Paul_K has written two outstanding posts on the subject over at Lucia’s marvelous blog, The Blackboard (Part 1, Part 2). In those posts, he proves mathematically that, given what we know about the equation that replicates the climate models, we cannot … well, I’ll let him tell it in his own words:
The Question: Can you or can you not estimate Equilibrium Climate Sensitivity (ECS) from 120 years of temperature and OHC data (even) if the forcings are known?
The Answer is: No. You cannot. Not unless other information is used to constrain the estimate.
An important corollary to this is: The fact that a GCM can match temperature and heat data tells us nothing about the validity of that GCM’s estimate of Equilibrium Climate Sensitivity.
Note that this is not an opinion of Paul_K’s. It is a mathematical result of the fact that even if we use a more complex “two-box” model, we can’t constrain the sensitivity estimates. This is a stunning and largely unappreciated conclusion. The essential problem is that for any given climate model, we have more unknowns than we have fundamental equations to constrain them.
CONCLUSIONS
Well, it was obvious from my earlier work that the models were useless for either hindcasting or forecasting the climate. Their output is indistinguishable from that of a simple one-line equation.
On top of that, Paul_K has shown that they can’t tell us anything about the sensitivity, because the equation itself is poorly constrained.
Finally, in this work I’ve shown that the climate sensitivity “lambda” that the models do exhibit, whether it represents equilibrium climate sensitivity (ECS) or transient climate response (TCR), is nothing but the ratio of the trends of the input and the output. The choice of forcings, models and datasets is quite immaterial. All the models give the same result for lambda, and that result is the ratio of the trends of the forcing and the response. This most recent finding completely explains the inability of the modelers to narrow the range of possible climate sensitivities despite thirty years of modeling.
You can draw your own conclusions from that, I’m sure …
My regards to all,
w.
Appendix A : The One-Line Equation
The equation that Paul_K, Isaac Held, and I have used to replicate the climate models is as follows:

T1 = T0 + λ ∆F1 (1-a) + ∆T0 a     (Equation 1)
Let me break this into four chunks, separated by the equals sign and the plus signs, and translate each chunk from math into English. Equation 1 means:
This year’s temperature (T1) is equal to
Last year’s temperature (T0) plus
Climate sensitivity (λ) times this year’s forcing change (∆F1) times (one minus the lag factor) (1-a) plus
Last year’s temperature change (∆T0) times the same lag factor (a)
Or to put it another way, it looks like this:
T1 = <— This year’s temperature [ T1 ] equals
T0 + <— Last year’s temperature [ T0 ] plus
λ ∆F1 (1-a) + <— How much radiative forcing is applied this year [ ∆F1 (1-a) ], times climate sensitivity lambda ( λ ), plus
∆T0 a <— The remainder of the forcing, lagged out over time as specified by the lag factor “a”
The lag factor “a” is a function of the time constant “tau” ( τ ), and is given by

a = exp( -1/τ )
This factor “a” is just a constant number for a given calculation. For example, when the time constant “tau” is four years, the constant “a” is 0.78. Since 1 – a = 0.22, when tau is four years, about 22% of the incoming forcing is added immediately to last year’s temperature, and the rest of the input pulse is expressed over time.
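Those numbers are easy to verify. Here is a small Python snippet (a sketch, not part of the spreadsheet) that computes the lag factor and shows that lagging a one-unit forcing pulse spreads its effect over time without changing the total:

```python
import math

tau = 4.0                        # time constant, years
a = math.exp(-1.0 / tau)         # lag factor, about 0.78
immediate = 1.0 - a              # about 0.22 of the pulse appears at once

# A single 1 W/m2 forcing pulse with lambda = 1: the first year's temperature
# increment is lam*(1-a), and each later year's increment decays by a factor of a.
lam = 1.0
increments = [lam * immediate * a ** k for k in range(300)]
total = sum(increments)          # approaches lam: the lag conserves the pulse
```

The geometric decay means the pulse is never lost, only spread out, which is the point made in Appendix B about the lagging not changing the energy in the forcing.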
Appendix B: Physical Meaning
So what does all of that mean in the real world? The equation merely reflects that when you apply heat to something big, it takes a while for it to come up to temperature. For example, suppose we have a big block of steel in a domestic oven at say 200°C. Suppose further that we suddenly turn the oven up to 400°C for an hour, and then turn the oven back down to 200°C. What happens to the temperature of the big block of steel?
If we plot temperature against time, we see that initially the block of steel starts to heat fairly rapidly. However, as time goes on it heats less and less per unit of time, until eventually it reaches 400°C. Figure B2 shows this change of temperature with time, as simulated in my spreadsheet for a climate forcing of plus/minus one watt/square metre. Now, how big is the lag? Well, in part that depends on how big the block is. The larger the block, the longer the time lag will be. In the real planet, of course, the ocean plays the part of the block, soaking up heat and releasing it slowly over time.
The basic idea of the one-line equation is the same tired claim of the modelers. This is the claim that the changing temperature of the surface of the planet is linearly dependent on the size of the change in the forcing. I happen to think that this is only generally the rule, and that the temperature is actually set by the exceptions to the rule. The exceptions to this rule are the emergent phenomena of the climate—thunderstorms, El Niño/La Niña effects and the like. But I digress, let’s follow their claim for the sake of argument and see what their models have to say. It turns out that the results of the climate models can be described to 99% accuracy by the setting of two parameters—”tau”, or the time constant, and “lambda”, or the climate sensitivity. Lambda can represent either transient sensitivity, called TCR for “transient climate response”, or equilibrium sensitivity, called ECS for “equilibrium climate sensitivity”.
Figure B2. One-line equation applied to a square-wave pulse of forcing. In this example, the sensitivity “lambda” is set to unity (output amplitude equals the input amplitude), and the time constant “tau” is set at five years.
Note that the lagging does not change the amount of energy in the forcing pulse. It merely lags it, so that it doesn’t appear until a later date.
So that is all the one-line equation is doing. It simply applies the given forcing, using the climate sensitivity to determine the amount of the temperature change, and using the time constant “tau” to determine the lag of the temperature change. That’s it. That’s all.
The difference between ECS (climate sensitivity) and TCR (transient response) is whether slow heating and cooling of the ocean is taken into account in the calculations. If the slow heating and cooling of the ocean is taken into account, then lambda is equilibrium climate sensitivity. If the ocean doesn’t enter into the calculations, if the forcing is only the radiative forcing, then lambda is transient climate response.
Appendix C. The Spreadsheet
In order to be able to easily compare the various forcings and responses, I made myself up an Excel spreadsheet. It has a couple drop-down lists that let me select from various forcing datasets and various response datasets. Then I use the built-in Excel function “Solver” to iteratively calculate the best combination of the two parameters, sensitivity and time constant, so that the result matches the response. This makes it quite simple to experiment with various combinations of forcing and responses. You can see the difference, for example, between the GISS E model with and without volcanoes. It also has a button which automatically stores the current set of results in a dataset which is slowly expanding as I do more experiments.
In a previous post called Retroactive Volcanoes (link), I discussed the fact that Otto et al. had smoothed the Forster forcings dataset using a centered three-point average. In addition, they had added a trend from the beginning to the end of the dataset of 0.3 W per square metre. In that post I had said that the effect of that was unknown, although it might be large. My new spreadsheet allows me to determine what the effect of that actually is.
It turns out that the effect of those two small changes is to take the indicated climate sensitivity from 2.8 degrees/doubling to 2.3° per doubling.
One of the strangest findings to come out of this spreadsheet was that when the climate models are compared each to their own results, the climate sensitivity is a simple linear function of the ratio of the trends of the forcing and the response. This was true both of the individual models and of the average of the 19 models studied by Forster. The relationship is extremely simple. The climate sensitivity lambda is 1.07 times the trend ratio when the models are compared to their own temperature outputs, and equal to the trend ratio when the forcings are run against the actual temperature datasets. This is true for all of the models without adding in the ocean heat content data, and also for all of the models including the ocean heat content data.
In any case I’m going to have to convert all this to the computer language R. Thanks to Stephen McIntyre, I learned the computer language R and have never regretted it. However, I still do much of my initial exploratory forays in Excel. I can make Excel do just about anything, so for quick and dirty analyses like the results above I use Excel.
So as an invitation to people to continue and expand this analysis, my spreadsheet is available here. Note that it contains a macro to record the data from a given analysis. At present it contains the following data sets:
IMPULSES
Pinatubo in 1900
Step Change
Pulse
FORCINGS
Forster No Volcano
Forster N/V-Ocean
Otto Forcing
Otto-Ocean ∆
Levitus watts Ocean Heat Content ∆
GISS Forcing
GISS-Ocean ∆
Forster Forcing
Forster-Ocean ∆
DVIS
CM2.1 Forcing
CM2.1-Ocean ∆
GISS No Volcano
GISS GHGs
GISS Ozone
GISS Strat_H20
GISS Solar
GISS Landuse
GISS Snow Albedo
GISS Volcano
GISS Black Carb
GISS Refl Aer
GISS Aer Indir Eff
RESPONSES
CCSM3 Model Temp
CM2.1 Model Temp
GISSE ModelE Temp
BEST Temp
Forster Model Temps
Forster Model Temps No Volc
Flat
GISS Temp
HadCRUT4
You can insert your own data as well, or make up combinations of any of the forcings. I’ve included a variety of forcings and responses. This one-line equation model has forcing datasets, subsets of those such as volcanoes only or aerosols only, and simple impulses such as a square step.
Now, while this spreadsheet is by no means user-friendly, I’ve tried to make it at least not user-aggressive.
Appendix D: The Mathematical Derivation of the Relationship between Climate Sensitivity and the Trend Ratio.
I have stated that the climate sensitivity is equal to the ratio between the trends of the forcing and response datasets. Here is the derivation of that result.
We start with the one-line equation:

T1 = T0 + λ ∆F1 (1-a) + ∆T0 a
Let us consider the situation of a linear trend in the forcing, where the forcing is ramped up by a certain amount every year. Here are lagged results from that kind of forcing.
Figure B1. A steady increase in forcing over time (red line), along with the situation with the time constant (tau) equal to zero, and also a time constant of 20 years. The residual is offset -0.6 degrees for clarity.
Note that the only difference that tau (the lag time constant) makes is how long it takes to come to equilibrium. After that the results stabilize, with the same change each year in both the forcing and the temperature (∆F and ∆T). So let’s consider that equilibrium situation.
Subtracting T0 from both sides gives

T1 – T0 = λ ∆F1 (1-a) + ∆T0 a     (Equation 2)
Now, T1 minus T0 is simply ∆T1. But since at equilibrium all the annual temperature changes are the same, ∆T1 = ∆T0 = ∆T, and the same is true for the forcing. So equation 2 simplifies to

∆T = λ ∆F (1-a) + ∆T a
Dividing by ∆F gives us

∆T/∆F = λ (1-a) + (∆T/∆F) a
Collecting terms, we get

(∆T/∆F) (1-a) = λ (1-a)
And dividing through by (1-a) yields

∆T/∆F = λ
Now, out in the equilibrium area on the right side of Figure B1, ∆T/∆F is the actual trend ratio. So we have shown that at equilibrium

λ = ∆T/∆F = the trend ratio
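The algebra can also be checked symbolically. This short sketch uses Python’s sympy library (assuming it is installed) to solve the equilibrium form of the one-line equation for the ratio ∆T/∆F:

```python
import sympy as sp

lam, a, dT, dF = sp.symbols('lambda a DeltaT DeltaF', positive=True)

# Equilibrium form of the one-line equation: every year's temperature step
# equals lam*dF*(1-a) plus a times the (identical) previous year's step.
equilibrium = sp.Eq(dT, lam * dF * (1 - a) + dT * a)

ratio = sp.solve(equilibrium, dT)[0] / dF
```

Simplifying the solution gives exactly lambda, confirming the hand derivation above.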
But if we include the entire dataset, you’ll see from Figure B1 that the measured trend will be slightly less than the trend at equilibrium.
And as a result, we would expect to find that lambda is slightly larger than the actual trend ratio. And indeed, this is what we found for the models when compared to their own results, lambda = 1.07 times the trend ratio.
When the forcings are run against real datasets, however, it appears that the greater variability of the actual temperature datasets averages out the small effect of tau on the results, and on average we end up with the situation shown in Figure 4 above, where lambda is experimentally determined to be equal to the trend ratio.
Appendix E: The Underlying Math
The best explanation of the derivation of the math used in the spreadsheet is an appendix to Paul_K’s post here. Paul has contributed hugely to my analysis by correcting my mistakes as I revealed them, and has my great thanks.
Climate Modeling – Abstracting the Input Signal by Paul_K
I will start with the (linear) feedback equation applied to a single capacity system—essentially the mixed layer plus fast-connected capacity:
C dT/dt = F(t) – λ *T Equ. A1
Where:-
C is the heat capacity of the mixed layer plus fast-connected capacity (Watt-years.m-2.degK-1)
T is the change in temperature from time zero (degrees K)
T(k) is the change in temperature from time zero to the end of the kth year
t is time (years)
F(t) is the cumulative radiative and non-radiative flux “forcing” applied to the single capacity system (Watts.m-2)
λ is the first order feedback parameter (Watts.m-2.deg K-1)
We can solve Equ A1 using superposition. I am going to use timesteps of one year.
Let the forcing increment applicable to the jth year be defined as fj. We can therefore write
F(t=k ) = Fk = Σ fj for j = 1 to k Equ. A2
The temperature contribution from the forcing increment fj at the end of the kth
year is given by
ΔTj(t=k) = fj(1 – exp(-(k+1-j)/τ))/λ Equ.A3
where τ is set equal to C/λ .
By superposition, the total temperature change at time t=k is given by the summation of all such forcing increments. Thus
T(t=k) = Σ fj * (1 – exp(-(k+1-j)/τ))/ λ for j = 1 to k Equ.A4
Similarly, the total temperature change at time t= k-1 is given by
T(t=k-1) = Σ fj (1 – exp(-(k-j)/τ))/ λ for j = 1 to k-1 Equ.A5
Subtracting Equ. A5 from Equ. A4 we obtain:
T(k) – T(k-1) = fk*[1-exp(-1/τ)]/λ + ( [1 – exp(-1/τ)]/λ ) (Σfj*exp(-(k-j)/τ) for j = 1 to k-1) …Equ.A6
We note from Equ.A5 that
(Σfj*exp(-(k-j)/τ)/λ for j = 1 to k-1) = ( Σ(fj/λ ) for j = 1 to k-1) – T(k-1)
Making this substitution, Equ.A6 then becomes:
T(k) – T(k-1) = fk*[1-exp(-1/τ)]/λ + [1 – exp(-1/τ)]*[( Σ(fj/λ ) for j = 1 to k-1) – T(k-1)] …Equ.A7
If we now set α = 1-exp(-1/τ) and make use of Equ.A2, we can rewrite Equ A7 in the following simple form:
T(k) – T(k-1) = Fkα /λ – α * T(k-1) Equ.A8
Equ.A8 can be used for prediction of temperature from a known cumulative forcing series, or can be readily used to determine the cumulative forcing series from a known temperature dataset. From the cumulative forcing series, it is a trivial step to abstract the annual incremental forcing data by difference.
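Paul_K’s Equ. A8 can indeed be run in either direction. The sketch below (Python, with a made-up cumulative forcing series purely for illustration) generates a temperature series from forcings and then recovers those forcings exactly by the rearrangement he describes:

```python
import math

alpha, lam = 0.279563, 2.94775   # Paul_K's GISS-ER-conditioned values

def forward(F, alpha, lam):
    """Equ. A8: temperature change series from a cumulative forcing series F(k)."""
    T = [0.0]
    for k in range(1, len(F)):
        T.append(T[-1] + F[k] * alpha / lam - alpha * T[-1])
    return T

def invert(T, alpha, lam):
    """Rearranged A8: recover the cumulative forcing series from temperatures."""
    F = [0.0]
    for k in range(1, len(T)):
        F.append((lam / alpha) * (T[k] - T[k - 1] + alpha * T[k - 1]))
    return F

# Illustrative cumulative forcing: a ramp with a superimposed oscillation
F = [0.05 * k + math.sin(k / 3.0) for k in range(120)]
T = forward(F, alpha, lam)
F_back = invert(T, alpha, lam)
```

The inversion is exact because A8 is linear in F(k); noise in a real temperature dataset would of course propagate into the abstracted forcing series.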
For the values of α and λ, I am going to use values which are conditioned to the same response sensitivity of temperature to flux changes as the GISS-ER Global Circulation Model (GCM).
These values are:-
α = 0.279563
λ = 2.94775
Shown below is a plot confirming that Equ. A8 with these values of alpha and lambda can reproduce the GISS-ER model results with good accuracy. The correlation is >0.99.

This same governing equation has been applied to at least two other GCMs (CCSM3 and GFDL) and, with similar parameter values, works equally well to emulate those model results. While changing the parameter values slightly modifies the values of the fluxes calculated from temperature, it does not significantly change the structural form of the input signal, nor can it change the primary conclusion of this article, which is that the AGW signal cannot be reliably extracted from the temperature series.
Equally, substituting a more generalised non-linear form for Equ A1 does not change the results at all, provided that the parameters chosen for the non-linear form are selected to show the same sensitivity over the actual observed temperature range. (See here for proof.)

David Riser says: “Others add in things like ocean temps from various layers, gravitational effects …”
What gravitational effects does that involve? Which models?
Adding in the 23% variation of the lunar tidal attraction with its 8.85 year cycle modulated by the 18.6 year variation in declination may produce some interesting patterns 😉
However, AFAIK tides are put in directly because computer models don’t work too well at predicting tides either.
Could you give more detail about these “gravitational effects” in models?
@Matthew R Marler
Looking through the threads I see that you are correct. Nobody said the result had already been… oh wait, that’s not true. Looking through the threads, we see multiple people claiming that it is an old result or words to that effect. Hence my comment.
What’s your problem here, bucko? You reply on behalf of other people to say that what I am asking them for is incorrect because you claim we have not been discussing it – even though much of the thread is about it?
Are you saying that Willis has done new and original work here, or are you saying he has not? If the latter, then show citations to somebody doing it before him. Either way, try to be a little less cryptic, because you are getting right on my tits.
Adam, take a breath. No point in getting annoyed about blog posts. Sturgeon’s law: 90% of everything is crap. Sturgeon’s second law: 99% of blog posts are crap.
Chill out.
Barry,
Weather is chaotic, so producing variable output in a model isn’t hard; controlling it is more of a problem. For the numerical weather forecasting aspect that David Riser is emphasising, they commonly produce an ensemble, deliberately varying the initial conditions. That’s where they get numbers when they tell you there is x% chance of rain. ECMWF formalises this as their Ensemble Prediction System. Here is their 5.5Mb user guide which explains a lot about their system, including EPS.
For climate simulation it is a bit different. They can be the same programs, like GFDL or UKMO, and they often use ensembles. For example, the GISS-ER result that Willis uses here is an ensemble of five. That of course reduces the variability. But because they aren’t claiming to get the weather right on any particular day (or month), but rather to get the dynamics right for the long term, they are happy to go back further to get a start. For the future they use different scenarios for forcing, and programs like CMIP3 and CMIP5 will prescribe particular ones that the programs should follow. There’s a table here in the AR4 which describes the various models at that time and their internal differences. And here is their discussion of the start-up processes.
Willis’ observation demonstrates that the models themselves prove: climate is NOT well represented by constantly increasing CO2 forcing + noise.
http://wattsupwiththat.com/2013/06/03/climate-sensitivity-deconstructed/#comment-1326425
My plots of volcano response show that linear models and the implicit concept of climate sensitivity are irrelevant.
http://climategrog.wordpress.com/?attachment_id=286
Anyone who does not agree with that please raise a hand (and provide a coherent reason for not agreeing).
Willis’ epiphany explained:
X=Y
For all things that represent X there are things that represent Y.
Willis has found by experiment that the ratio of trends (X) explains Y, which follows, as Nick Stokes has described, from the higher-level analysis of what X and Y represent. It falls out of the analysis – and in fact that is how Willis stumbled onto it. It was always there, obviously, among myriad other equivalencies. I don’t think this is a particularly big deal, as it hasn’t a thing to do with climate.
This is not the first example where Willis has discovered X=Y for all valid examples of X and Y. It is why I cringe when Willis dives deep into the math. There are limitations to being self-taught. Still, Willis is a brilliant man, more akin to Edison than Tesla, but brilliant. I’m envious of the skill set he brings to the table and his ability to present complex ideas and reductions to the lay audience. And he doesn’t suffer valid and non-valid criticisms gracefully as will be seen shortly.
Greg Goodman says: June 4, 2013 at 9:04 pm
“This is the modellers’ preconceived understanding that they have built into the models themselves and adjusted with the “parameterised” inputs : that climate is nothing but a constantly increasing CO2 forcing + noise.”
I’ve no idea where you get all that from. It’s nonsense. Willis in this post works on total forcing; there’s no breakdown into components. And modellers have no such preconceived understanding; even if they did it would be irrelevant. They are solving the Navier-Stokes equations.
No-one claims that climate is increasing CO2 forcing plus noise. There are simple models (Willis’s is one) which treat it as increasing (mostly) total forcing, with some decay function, plus noise.
Forcing is important – it’s right up in the first section of the AR4 SPM. But no-one says it is just CO2.
Nick writes: “Forcing is important – it’s right up in the first section of the AR4 SPM. But no-one says it is just CO2.”
So supposing CO2 stays constant – does the climate change ?
Can the models explain the Little Ice Age ?
@Greg Goodman yeah, you are right. Apologies to Matthew.
“I’ve no idea where you get all that from. ….Forcing is important – it’s right up in the first section of the AR4 SPM. But no-one says it is just CO2.”
If you don’t know where I get it from , I suggest you read the linked post again.
No-one _says_ it is just CO2, but the models do. That is what Willis’ observation means, as I explained in some detail. The fact that the models can be approximated in their global average output by a linear model means that the dominant features are linear. There’s a whole lot more going on in there, much of which is probably not linear, and they produce a lot more than a global average temperature. However, they are predominantly linear.
Furthermore, Willis’ observation is not a trivial result for all linear models in all circumstances, it is specific to applying an additional condition on the linear equation, that of constant deltaF
Now if the models all line up bang on a slope equal to lambda that means not only that they are linear in their global average but they too are conforming to that additional condition. And we know where the constantly increasing “forcing” comes from we’ve been talking about for the last 20 years.
This means that all the variation in forcing in the models is averaging out to give the same behaviour as the linear model under constant dF once the transients have settled.
ie all the variations are equivalent to symmetrical random ‘noise’ and the dominant feature is the linearly increasing forcing.
In fact the linearly increasing forcing is the calculated CO2 radiative forcing plus the hypothesised water vapour amplification. The latter is greater than the former and has no foundation in observational data.
THAT is the preconceived understanding; and it is irrelevant. THAT is the model which has failed thus providing us with the NEGATIVE result which will be useful from now on:
climate is NOT well represented by constantly increasing CO2 forcing + noise.
Clive,
Yes, if CO2 stayed constant and other forcings changed, the models would show a climate response. I don’t know of any LIA runs. You can only usefully run the models forward from a reasonably well-known starting point, with a lot of spatial detail; I doubt if they could find one.
Still no credible reply to the lack of cooling due to volcanism:
http://climategrog.wordpress.com/?attachment_id=286
If you take out volcano forcing from the models to better reflect this, they will go sky high from 1963 onwards.
I can understand why Nick is not “enthusiastic” but that does not erase what happens in the data.
@Nick Stokes:
And I can fly a helicopter, but my ability to keep it in the air is indeterminate 🙂
Nick Stokes – What Willis has managed to prove is that after transient effects have died out, the relationship of changes in forcing to changes in temperature is:
λΔF = ΔT
Which is the very definition of λ as the equilibrium climate sensitivity (ECS). Whether the equilibrium is at zero ΔF or at a constant one, a constant forcing pattern leads, by definition, to the ECS response. Somehow I find the (re)discovery of the definition of ECS to be something less than earthshaking…
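That definition can be checked numerically in a few lines. This is a hedged sketch: λ, τ and the forcing step below are invented, not taken from any model. Hold ΔF constant after a step and a lagged linear model must settle to ΔT = λΔF:

```python
# Step-forcing equilibrium check for a lagged linear model; values assumed.
lam = 0.8    # assumed equilibrium sensitivity, K per (W/m^2)
tau = 5.0    # assumed lag time constant, years
dF = 3.7     # assumed step forcing, W/m^2 (roughly a CO2 doubling)

T = 0.0
for year in range(300):          # run long enough to reach equilibrium
    T += (lam * dF - T) / tau    # relax toward lam * dF

print(round(T / dF, 3))          # -> 0.8, i.e. dT/dF = lam by construction
```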
Theo Goodwin says:
June 4, 2013 at 3:20 pm
I’m not sure I understand your critique, so let me say what I was trying to say in a different way.
GCMs (depending on which one, how coarse the resolution of the run is, and the size of the time step) can calculate more than 5 million surface temperature samples per year. All of these are then averaged to a single annual value. What this hides is that one area can be 30 C high and another area 30 C low, and they still average out to a reasonable value.
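A toy illustration of that point, with entirely invented numbers: regional errors of opposite sign vanish into the global mean.

```python
# Five invented regional anomalies in K; two are wildly wrong but opposite.
regions = [30.0, -30.0, 0.5, -0.3, 0.2]
global_mean = sum(regions) / len(regions)
print(round(global_mean, 2))     # -> 0.08, a perfectly "reasonable" value
```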
“but the phase of the cycle is indeterminate”
because they have not yet worked out that it’s driven by the lunar perigee cycle. Then they’ll get the phase and the period in sync with the wobbles the models are able to make.
http://climategrog.wordpress.com/?attachment_id=281
The “3-to-5 year” ENSO cycle is the 8.85 / 2.0 year peak being split by something longer, circa 28 years.
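Taking that splitting claim purely as the commenter’s hypothesis, the arithmetic is standard sideband arithmetic: amplitude-modulating a carrier period P by a longer period M produces spectral peaks at 1/(1/P ± 1/M). With the figures cited:

```python
P = 8.85 / 2.0                        # cited carrier period, years
M = 28.0                              # cited longer modulating period, years
low = 1.0 / (1.0 / P + 1.0 / M)       # lower sideband period
high = 1.0 / (1.0 / P - 1.0 / M)      # upper sideband period
print(round(low, 2), round(high, 2))  # -> 3.82 5.26, a "3-to-5 year" band
```

The arithmetic only shows the cited periods are mutually consistent; it says nothing about whether the lunar attribution itself is correct.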
MiCro says:
June 4, 2013 at 12:12 pm
I found a reference to what I was trying to remember.
I’ve extended my data-mining code for the NCDC data set to extract both relative humidity and surface pressure, and will write something up on measured trends. Once I’ve finished, I’ll ask Anthony if he’ll be so kind as to publish it here.
Greg Goodman says:
June 5, 2013 at 6:44 am
Could be orbital (Moon, Jupiter, Saturn), or it could be the time constant for enough heat to get stored in the surface waters of one (or more) oceans, which then alters trade winds, surface pressure, or the bulge of warmer water that can then get a stronger tidal push/pull?
Adam: Are you saying that Willis has done new and original work here, or are you saying he has not?
MatthewRMarler, to Willis Eschenbach: I have at least 3 times written that you have discovered something interesting.
@Matthew R Marler
I asked:
“Are you saying that Willis has done new and original work here, or are you saying he has not.”
You answered:
“I have at least 3 times written that you (Willis) have discovered something interesting.”
How is that answer relevant to the question of “new and original work”? You use the word “interesting”; that is not an answer to the question about “new and original work”.
So I will ask you (and others) again. It really is a simple Yes, No, or Don’t Know situation.
Has Willis presented here new and original work? Please answer either Yes, No, or Don’t Know.
If the answer is No, then please provide the sources where the result(s) has (have) been previously made available. I don’t think this is an unreasonable request. Do you?
PS, my position is that I Don’t Know. Which is why I am keen to find out what the experts think.
It’s too bad “god” chose to ignore prayers for those who died, especially the children.
Adam,
Don’t know? A most excellent and underused position. We who don’t know….well, at least we know for sure that we don’t know. What about y’all poseurs? I say that because of all the posturing.
Even a decent scientist may reject the implication that they spent a decade or two in a circular argument. Not a great one, though.
So even if Willis is confirmed to have found a sophistry fallacy in the minds of modelers, they may need to dance around their nostalgia for a few months. Or decades.
The oddity here, to me, is the lack of any (direct) refutation (of Willis’ proposition), only a “you are an outie, we are innies” kind of argument, beneath my expectation of serious thinkers. “You are smart but not allowed in the club” is the tone I heard a few times. If you don’t like what Willis said, the least you can do is explain why. Is this a cult?
So Adam, I was thinking about pressing for clarity myself. Now that you did it, I can just say “what Adam said.”
Hansen’s 1984 Climate Sensitivity paper:
Nick Stokes
A climate model solves differential equations. It can tell you how things will change, providing you tell it the starting point.
This is so badly wrong that it is not even funny, and you should know it. That you can write such obvious nonsense, well knowing that it is nonsense, certainly raises questions about your motivations.
No, the climate models do anything but solve differential equations.
What the climate models do is take huge chunks of atmosphere and ocean (about 100 km × 100 km) and try to conserve energy, mass and momentum. I say try because they don’t succeed very well, for obvious reasons: too low a resolution and poor understanding of the interfaces.
Of course, in real physics the conservation laws translate into the Navier-Stokes equations for the system of fluids we are contemplating here.
But it would be an insult to every physicist to even suggest that N numbers computed on N 100 km × 100 km cells might be anywhere near a solution of Navier-Stokes!
They are not, can’t be and will never be.
This is, btw, the fundamental reason why the models get the spatial variability and biphasic processes (precipitation, clouds, snow and ice) hopelessly wrong. It is also why they will never be able to produce the right oceanic currents or the right oceanic oscillations, which are the defining features of climate and are indeed solutions of differential equations that Mother Nature is solving every second.
So let us be very clear, climate models are just primitive heaps of big boxes where the interfaces are added by hand and each box attempts to obey conservation laws. They solve no differential equations, converge to no solutions and approximate no exact local law of physics.
The only thing they can do, and here Willis has a point, is to get completely trivial and tautological relations right.
Indeed dT/dF = (dT/dt)/(dF/dt), and when one destroys the whole spatial variability by taking only global averages (which, btw, removes any physical relevance from the variables), then every model that even half-assedly respects energy conservation simply MUST get this tautology right.
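That chain-rule identity is easy to verify numerically for any smooth pair T(t), F(t); the two functions below are invented purely for the check and carry no climate meaning.

```python
import math

def F(t):                            # invented smooth "forcing" history
    return 2.0 * t + 0.3 * math.sin(t)

def T(t):                            # invented smooth "temperature" history
    return 1.5 * F(t) + 0.1 * t * t

t, h = 7.0, 1e-6                     # evaluation point and half-step
dT = T(t + h) - T(t - h)             # centred difference in T
dF = F(t + h) - F(t - h)             # centred difference in F
# dT/dF versus (dT/dt)/(dF/dt): identical, because the 2h factors cancel
print(abs(dT / dF - (dT / (2 * h)) / (dF / (2 * h))) < 1e-9)   # -> True
```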
If it didn’t, then I think even Jones or Hansen would have noticed 😉