The Fatal Lure of Assumed Linearity

Guest Post by Willis Eschenbach [note new Update at the end, and new Figs. 4-6]

In climate science, linearity is the order of the day. The global climate models are all built around the idea that in the long run, when we calculate the global temperature, everything else averages out, and we’re left with the claim that the change in temperature is equal to the climate sensitivity times the change in forcing. Mathematically, this is:

∆T = lambda ∆F

where T is global average surface temperature, F is the net top-of-atmosphere (TOA) forcing (radiation imbalance), and lambda is called the “climate sensitivity”.

In other words, the idea is that the change in temperature is a linear function of the change in TOA forcing. I doubt it greatly myself; I don’t think the world is that simple. But assuming linearity makes the calculations so easy that people can’t seem to break away from it.

Now, of course people know it’s not really linear … but when I point that out, often the claim is made that it’s close enough to linear over the range of interest that we can assume linearity with little error.

So to see if the relationships really are linear, I thought I’d use the CERES satellite data to compare the surface temperature T and the TOA forcing F. Figure 1 shows that graph:

Figure 1. Land only, forcing F (TOA radiation imbalance) versus temperature T, on a 1° x 1° grid. Colors indicate latitude. Note that there is little land from 50S to 65S. Net TOA radiation is calculated as downwelling solar less reflected solar less upwelling longwave radiation. Click graphics to enlarge.

As you can see, far from being linear, the relationship between TOA forcing and surface temperature is all over the place. At the lowest temperatures, they are inversely correlated. In the middle there’s a clear trend … but then at the highest temperatures, they decouple from each other, and there is little correlation of any kind.
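For those who want to play along at home, here is a minimal R sketch of how a graph like Figure 1 can be built from gridded annual means. The array names (solar_down, solar_reflected, lw_up, t_surf, land_mask) are placeholders for illustration, not the names in my actual code, which is linked in the Notes below.

# Net TOA radiation per the Figure 1 caption: downwelling solar less
# reflected solar less upwelling longwave. Arrays are 360 x 180 (lon x lat)
# annual means; land_mask is a logical array of the same shape.
net_toa <- solar_down - solar_reflected - lw_up                     # W/m2
lat  <- matrix(rep(seq(-89.5, 89.5, by = 1), each = 360), 360, 180) # gridcell latitudes
cols <- rainbow(180, end = 0.8)[cut(lat, 180)]                      # color by latitude
plot(t_surf[land_mask], net_toa[land_mask], col = cols[land_mask],
     pch = 16, cex = 0.3,
     xlab = "Surface temperature (degrees C)",
     ylab = "Net TOA radiation (W/m2)")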

The situation is somewhat simpler over the ocean, although even there we find large variations:

Figure 2. Ocean only, net forcing F (TOA radiation imbalance) versus temperature T, on a 1° x 1° grid. Colors indicate latitude.

While the changes are not as extreme as those on land, the relationship is still far from linear. In particular, note how the top part of the data slopes further and further to the right with increasing forcing. This is a clear indication that as the temperature rises, the climate sensitivity decreases. It takes more and more energy to gain another degree of temperature, and so at the upper right the curve levels off.

At the warmest end, there is a pretty hard limit to the surface temperature of the ocean at just over 30°C. (In passing, I note that there also appears to be a pretty hard limit on the land surface air temperature, at about the same level, around 30°C. Curiously, this land temperature is achieved at annual average TOA radiation imbalances ranging from -50 W/m2 up to 50 W/m2.)

Now, what I’ve shown above are the annual average values. In addition to those, however, we are interested in lambda, the climate sensitivity, which those figures don’t show. According to the IPCC, the equilibrium climate sensitivity (ECS) is somewhere in the range of 1.5 to 4.5°C for each doubling of CO2. Now, there are several kinds of sensitivities, among them monthly, decadal, and equilibrium climate sensitivities.

Monthly Sensitivity

Monthly climate sensitivity is what happens when the TOA forcing imbalance in a given 1° x 1° gridcell goes from, say, plus fifty W/m2 one month (adding energy) to minus fifty W/m2 the next month (losing energy). Of course this causes a corresponding difference in the temperature of the two months. The monthly climate sensitivity is how much the temperature changes for a given change in the TOA forcing.
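For a single gridcell, this is just the slope of the monthly temperatures against the monthly forcings. Here is a minimal R sketch; the simple linear regression used as the estimator is my illustration here, not necessarily the exact method in the posted code.

# Monthly climate sensitivity for one gridcell, sketched as the regression
# slope of monthly temperature on monthly net TOA forcing.
monthly_sensitivity <- function(temp_monthly, toa_monthly) {
  slope <- coef(lm(temp_monthly ~ toa_monthly))[[2]]  # degrees C per W/m2
  slope * 3.7                                         # degrees C per doubling of CO2
}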

But the land and the oceans can’t change temperature immediately. There is a lag in the process. So monthly climate sensitivity is the smallest of the three, because the temperatures haven’t had time to change. Figure 3 shows the monthly climate sensitivities based on the CERES monthly data.

Figure 3. The monthly climate sensitivity.

As you might expect, the ocean temperatures change less from a given change in forcing than do the land temperatures. This is because of the ocean’s greater thermal mass, which is in play at all timescales, along with the higher specific heat of water versus soil and the greater evaporation over the ocean.

Decadal Sensitivity

Decadal sensitivity, also called transient climate response (TCR), is the response we see on the scale of decades. Of course, it is larger than the monthly sensitivity. If the system could respond instantaneously to forcing changes, the decadal sensitivity would be the same as the monthly sensitivity; but because of the lag, the monthly sensitivity is smaller. Since the larger the lag, the smaller the temperature change, we can use the amount of the lag to calculate the TCR from the monthly climate response. The lag over the land averages 0.85 months, and over the ocean it is longer, at 2.0 months. For the land, the TCR averages about 1.6 times the monthly climate sensitivity. The ocean adjustment for TCR is larger, of course, since the lag is longer; ocean TCR averages about 2.8 times the monthly ocean climate sensitivity. See the Notes below for the calculation method.
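To make that arithmetic concrete, here is the lag-to-multiplier conversion in R, using the formula given in the Notes below. Per the Update at the end, that formula has since been replaced, so treat this as reproducing the multipliers quoted above rather than the final method.

# Convert an average lag (in months, on the 12-month cycle) into the
# multiplier that scales the monthly sensitivity up to the TCR.
lag_multiplier <- function(lag_months, cycle_months = 12) {
  phase <- lag_months / cycle_months   # lag as a fraction of the cycle
  1 / exp(phase / -0.159)
}
lag_multiplier(0.85)   # land average lag:  ~1.6
lag_multiplier(2.0)    # ocean average lag: ~2.8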

Figure 4 shows what happens when we put the lag information together with the monthly climate sensitivity. It shows, for each gridcell, the decadal climate sensitivity, or transient climate response (TCR). It is expressed in degrees C per doubling of CO2 (which is the same as degrees per 3.7 W/m2 increase in forcing). The TCR shown in Figure 4 includes the adjustment for the lag, on a gridcell-by-gridcell basis.

Figure 4. Transient climate response (TCR). This is calculated by taking the monthly climate sensitivity for each gridcell and multiplying it by the lag factor calculated for that gridcell. [NOTE: This Figure, and the values derived from it, are now updated from the original post. The effect of the change is to reduce the estimated transient and equilibrium sensitivity. See the Update at the end of the post for details.]

Now, there are a variety of interesting things about Figure 4. One is that once the lag is taken into account, some of the difference between the climate sensitivity of the ocean and the land disappears, and some is changed. This is particularly evident in the southern hemisphere; compare Southern Africa or Australia in Figures 3 and 4.

Also, as you can see, water once again rules. Once we remove the effect of the lags, the drier areas are clearly defined, and they are the places with the greatest sensitivity to changes in TOA radiative forcing. This makes sense because there is little water to evaporate, so most of the energy goes into heating the system. Wetter tropical areas, on the other hand, respond much more like the ocean, with less sensitivity to a given change in TOA forcing.

Equilibrium Sensitivity

Equilibrium sensitivity (ECS), the longest-term kind of sensitivity, is what would theoretically happen once all of the various heat reservoirs reach their equilibrium temperature. According to the study by Otto using actual observations, for the past 50 years the ECS has stayed steady at about 130% of the TCR. The study by Forster, on the other hand, showed that the 19 climate models studied gave an ECS which ranged from 110% to 240% of the TCR, with an average of 180% … go figure.

This lets us calculate the global average equilibrium sensitivity. If we use the model percentages to estimate the equilibrium climate sensitivity (ECS) from the global average TCR of 0.14°C per doubling, that gives an ECS of from 0.14 * 1.1 to 0.14 * 2.4. This implies an equilibrium climate sensitivity in the range of 0.2°C to 0.3°C per doubling of CO2, with a most likely value (per the models) of 0.25°C per doubling. If we use the 130% estimate from the Otto study, we get a very similar result, 0.14 * 1.3 = 0.2°C per doubling. [NOTE: these values are reduced from the original calculations. See the Update at the end of the post for details.]
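Spelled out, the arithmetic is simply:

# ECS estimates from the global average TCR (0.14 degrees C per doubling)
# times the ECS/TCR ratios from the Forster model range and the Otto study.
tcr_mean <- 0.14
tcr_mean * c(Forster_low = 1.1, Forster_high = 2.4, Otto = 1.3)
# roughly 0.15, 0.34, and 0.18 degrees C per doubling, rounded in the text above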

This is small enough to be lost in the noise of our particularly noisy climate system.

A final comment on linearity. Remember that we started out with the following claim: that the change in temperature is equal to the change in forcing times a constant called the “climate sensitivity”. Mathematically, that is

∆T = lambda ∆F

I have long held that this is a totally inadequate representation, in part because, as I say, lambda itself, the climate sensitivity, is not a constant. Instead, it is a function of T. However, as usual, we cannot assume linearity in any form. Figure 5 shows a scatterplot of the TCR (the decadal climate sensitivity) versus surface temperature.

Figure 5. Transient climate response versus the average annual temperature, land only. Note that the TCR only rarely goes below zero. The greatest response is in Antarctica (dark red).

Here, we see the decoupling of the temperature and the TCR at the highest temperatures. Note also how few gridcells are warmer than 30°C. As you can see, while there is clearly a drop in the TCR (sensitivity) with increasing temperature, the relationship is far from linear. And the ocean data is even more curious. Figure 6 shows the same relationship as Figure 5; note the different scales in both the X and Y directions.

Figure 6. As in Figure 5, except for the ocean instead of the land. Note the scales differ from those of Figure 5.

Gotta love the climate system, endlessly complex. The ocean shows a totally different pattern than that of the land. First, by and large the transient climate response of the global ocean is less than a tenth of a degree C per doubling of CO2 (global mean = 0.08°C/2xCO2). And contrary to my expectations, below about 20°C there is very little sign of the drop in TCR with temperature that we see over the land in Figure 5. And above about 25°C there is a clear and fast dropoff, with a number of areas (including the “Pacific Warm Pool”) showing negative climate responses.

I also see in passing that the 30°C limit on the temperatures observed in the open ocean occurs at the point where the TCR=0 …

What do I conclude from all of this? Well, I’m not sure what it all means. A few things are clear. My first conclusion is that the idea that the temperature is a linear function of the forcing is not supported by the observations. The relationship is far from linear, and cannot be simply approximated.

Next, the estimates of the ECS arising from this observational study range from 0.2°C to 0.5°C per doubling of CO2. This is well below the estimate of the Intergovernmental Panel on Climate Change … but then what do you expect from government work?

Finally, the decoupling of the variables at the warm end of the spectrum of gridcells is a clear sign of the active temperature regulation system at work.

Bottom line? The climate isn’t linear, never was … and succumbing to the fatal lure of assumed linearity has set the field of climate science back by decades.

Anyhow, I’ve been looking at this stuff for too long. I’m gonna post it, my eyeballs are glazing over. My best regards to everyone,

w.

NOTES

LAG CALCULATIONS

I used the Lissajous figures of the interaction between the monthly averages of the TOA forcing and the surface temperature response to determine the lag.

Figure N1. Formula for calculating the phase angle from the Lissajous figure.

This lets me calculate the phase angle between forcing and temperature. I always work in degrees, old habit. I then calculate the multiplier, which is:

Multiplier = 1 / exp(phase_angle° / 360° / -0.159)

The derivation of this formula is given in my post here. [NOTE: per the update at the end, I’m no longer using this formula.]

To investigate the shape of the response of the surface temperature to the TOA forcing imbalance, I use what I call “scribble plots”. I use random colors, and I draw the Lissajous figures for each gridcell along a given line of latitude. For example, here are the scribble plots for the land for every ten degrees from eighty north down to the equator.


Figure N2. Scribble plots for the northern hemisphere, TOA forcing vs surface temperature.

And here are the scribble plots from 20°N to 20°S:


Figure N3. Scribble plots for the tropics, TOA forcing vs surface temperature.

As you can see, the areas near the equator have a much smaller response to a given change in forcing than do the extratropical and polar areas.
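For reference, here is a minimal R sketch of how one scribble-plot panel can be drawn. The arrays toa_monthly and temp_monthly are placeholder lon x lat x 12 monthly climatologies, not the names in my actual code.

# One scribble-plot panel: for every gridcell along a given latitude line,
# draw the closed 12-month loop (Lissajous figure) of TOA forcing versus
# surface temperature in a random color.
scribble_panel <- function(toa_monthly, temp_monthly, lat_index) {
  plot(NULL, xlim = range(toa_monthly, na.rm = TRUE),
       ylim = range(temp_monthly, na.rm = TRUE),
       xlab = "Net TOA radiation (W/m2)",
       ylab = "Surface temperature (degrees C)")
  for (lon in seq_len(dim(toa_monthly)[1])) {
    x <- toa_monthly[lon, lat_index, ]
    y <- temp_monthly[lon, lat_index, ]
    if (all(is.finite(c(x, y))))
      lines(c(x, x[1]), c(y, y[1]), col = hsv(runif(1)))  # close the loop
  }
}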

DATA AND CODE

Land temperatures from here.

CERES datafile requisition site

CERES datafile (zip, 58 MByte)

Sea temperatures from here.

R code is here … you may need eyebeach, it’s not pretty.

All data in one 156 Mb file here, in R format (saved using the R instruction “save()”)

[UPDATE] Part of the beauty of writing for the web is that my errors don’t last long. From the comments, Joe Born identifies a problem:

Joe Born says:

December 19, 2013 at 5:37 am

My last question may have been a little obscure. I guess what I’m really asking is what model you’re using to obtain your multiplier.

Joe, you always ask the best questions. Upon investigation, I see that my previous analysis of the effect of the lags was incorrect.

What I did to check my previous results was what I should have done in the first place: drive a standard lagging incremental formula with a sinusoidal forcing:

R[t] = R[t-1] + (F[t] - F[t-1]) * (1 - timefactor) + (R[t-1] - R[t-2]) * timefactor

where t is time, F is some sinusoidal forcing, R is response, timefactor = e ^ (-1/tau), and tau is the time constant.

Then I measured the actual drop in amplitude and plotted it against the phase angle of the lag. By examination, this was found to be an extremely good fit to

Amplitude as % of original = 1 - e^(-0.189/phi)

where phi is the phase angle of the lag, from 0 to 1. (The phase angle is the lag divided by the cycle length.)
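For anyone who wants to replicate the check without the spreadsheet, here is a minimal R version; tau = 2 months is an arbitrary choice for illustration.

# Drive the lagging recurrence with a sinusoidal forcing, then measure the
# amplitude drop and phase lag of the response after the transient dies out.
# (F shadows R's shorthand for FALSE; fine in a throwaway script.)
tau <- 2                                  # time constant in months (arbitrary)
timefactor <- exp(-1 / tau)
n <- 600
F <- sin(2 * pi * (1:n) / 12)             # sinusoidal forcing, 12-month cycle
R <- numeric(n)
for (t in 3:n) {
  R[t] <- R[t - 1] + (F[t] - F[t - 1]) * (1 - timefactor) +
          (R[t - 1] - R[t - 2]) * timefactor
}
keep <- (n - 119):n                       # last ten years only
amplitude <- diff(range(R[keep])) / diff(range(F[keep]))
lag_months <- (which.max(R[keep]) - which.max(F[keep])) %% 12  # whole-month resolution
c(amplitude = amplitude, lag = lag_months)
# For tau = 2 this gives an amplitude ratio of about 0.7 and a lag of about one month.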

The spreadsheet showing my calculations is here.

My thanks to Joe for the identification of the error. I’ve replaced the erroneous figures, Figures 4-6. For Figs. 5 and 6 the changes were not very visible. They were a bit more visible in Figure 4, so I’ve retained the original version of Figure 4 below.

NOTE: THE FIGURE BELOW CONTAINS AN ERROR AND IS RETAINED FOR COMPARISON VALUE ONLY!!

NOTE: THE FIGURE ABOVE CONTAINS AN ERROR AND IS RETAINED FOR COMPARISON VALUE ONLY!!


I suspect that Willis is a lot closer to understanding the climate than a lot of these so called climate scientists are.

Next, the estimates of the ECS arising from this observational study range from 0.2°C to 0.5°C per doubling of CO2.
==========
Wow! Great work. Wondered where you’d been hiding. The color graphics are a fantastic aid to visualizing the complexity. This lower ECS estimate makes much more sense and explains the “pause” much better than the climate models.

Another excellent analysis, Willis. We pay gobs of bucks to support supercomputer number crunching to get results I could get with a slipstick and y = mx + b? Have Decitrig, will compute.

Bottom line? The climate isn’t linear, never was … and succumbing to the fatal lure of assumed linearity has set the field of climate science back by decades.
=============
It has the advantage of allowing a whole generation of climate scientists to tackle really hard science questions armed with nothing more than liberal arts degrees.

David Riser

Excellent post Willis! Of course anyone who spends a little time thinking about it would realize that trying to model complex systems with a linear model is not going to work. It explains perfectly why they fail so monstrously.
v/r,
David Riser

When I saw

∆T = lambda ∆F

I was reminded of my Engineering studies, where I learnt (a little) computer programming. That was in the 1970s, when one still had to be “clever” to use computers and get reasonable answers to complicated questions within a limited time.
Anyways, I’d observed that almost all the “constants” in the formulae I was dealing with were only “constant” over a short range of conditions.
An example of this is the treatment of Young’s Modulus (E) as a linear approximation of the elasticity of a material over a limited range. For most engineering purposes, a constant is good enough, as the design is well within the elastic range in most applications, and dimensional variability (the ability to make stuff to the exact dimensions of the design) and material variability tend, in most cases, to swamp any “error” of using the constant for E. Using a constant is also quicker for checking stuff by hand.
But if you need to be really accurate in predicting deflection under stress, or your design approaches plasticity, then you must also consider stress vectors in magnitude, the material’s bulk modulus and other obvious factors such as temperature.
It puzzled a few of my tutors as to why I made E a function of stress in my programming assignments. I had to draw a stress-strain curve to explain. The difference in results is small, and the effort to “completely” define the function is huge.
I was discouraged from doing it the right way after a while because they thought it a waste of time, even when my E() function, as a place-holder, returned a constant regardless of the value of the input parameters. The “waste” of programming time was a minute or two. And the documentation value was immense.
When I looked into some (a lot) of the GCM and other modelling code (mis)used by the catastrophists, it was clear that the models would never be able to produce realistic results, as they had fundamentally variable parameters as constants. There was no possibility to introduce coupling and confounding factors. In the systems being modelled, it does matter that a parameter varies “only” a few percent, because an iterative finite element analysis will accumulate those errors.

The climate-sensitivity parameter lambda does not take a fixed value either by place or by time. The IPCC takes its value as beginning at 0.31 Kelvin per Watt per square meter, as 0.44 after 100 years, as 0.5 over 200 years, and 0.88 over 3500 years. The curve is similar to the top left quadrant of an ellipse. It is anything but linear.
The key question is whether any value as high as 0.88 for lambda is justifiable. To obtain so high a value, the models assume that temperature feedbacks are so strongly net-positive as to triple the small direct warming from CO2, which is not much more than 1 K per doubling. Suddenly that becomes 3.26 K thanks to imagined net-positive temperature feedbacks. However, the mutual-amplification function that all of the models use is defective in that it is based on an equation from process engineering that has a physical meaning in an electronic circuit but has no physical meaning in the real climate unless one introduces a damping term. But any plausible value for a damping term would divide the IPCC’s central estimate of climate sensitivity by at least 4.
So don’t worry too much about linearity, for lambda is a Humpty-Dumpty variable: it can take any value that is likely to keep the climate scare going.

Joe Prins

Willis,
As usual, I understood at least some of it. However, how anyone can claim that almost anything in nature, including its fauna, is linear is beyond me. From quarks to dark matter, the universe is a wave. Representing a wave with a linear “anything” is almost willful misrepresentation. Like you said, it is easier to make things linear. What I mostly noticed, though, was the actual amount of data that disappears when using a linear function. Being a bit of a cynic, could that be the reason?

S. Geiger

Do climate models “assume” the linear equation, or is that equation used (ex post facto) to back out a bulk lambda value based on the model’s derived forcings and its temperature output? My understanding would be that lambda is NOT a fixed parameter, rather something that can be derived and compared afterwards.

I’ve been pondering a “degrees of surface T per W/m^2” over ocean by month by latitude as a way to show something similar: that the degrees per W/m^2 are all over the place. I think this may make that unnecessary. Nicely done.

Scott

I think you will find that Willis has shown in a previous post that the models can be reduced to a linear approximation, as they all have different forcings matched with different lambdas that, when graphed, effectively reduce all their effort to a simple linear equation, hence the starting point of this post.

Tea Jay

If Present Trends Continue … you can predict anything by selecting the measurement period.

IIRC Idso got an ECS of 0.4 deg C for doubling of CO2 years ago in his “natural experiments”.

Pathway

I like the scribble plots. I think they may be a new art form.

thingadonta

Yeah, I’ve always had a problem with ‘assumed linearity’. I’m entirely with you on this; assumed linearity is one of the major flaws in current climate research.
One of the best examples I can think of illustrating non-linearity, which I came across in my university days and which has intrigued me ever since, is the way iron behaves in stars. From memory, as a star burns up elements during its lifetime, all the elements prior to iron behave in a more or less linear fashion during nuclear fusion, until the star starts to use up the iron in its mass. Iron just doesn’t want to behave like the others. The linear trend breaks down spectacularly, and causes a supernova, which totally destroys the star (depending on its mass). So much for a linear trend within stellar processes.
Most scientists assume linearity out of a combination of laziness and ignorance about the real world. Non-linearity is much harder to predict because the exact position of inflection points and changes in trend/rates may be unknown, meaning non-linear discoveries in science usually come well after assumed linearity. (There is also a political element which I won’t go into here, as a major part of social planning assumes the human benefits of ‘evenness’ over units of space and time, which is the same sort of assumption as ‘linearity’.)
In my field of mineral exploration, non-linearity underpins much of the research. The distribution of mineral concentration in the earth’s crust is very non-linear, both in terms of spatial extent and in terms of time. Often one simply cannot use linear models to analyse it, yet I find that many academics and public-service scientists often assume linearity in the way they conduct their research and make public policy, and this is also partly the reason that mineral exploration is largely left to the private sector; there are simply too many public policy analysts who want to ‘arrange’ the minerals evenly across the landscape, when in most cases this is simply not how they occur in nature. They are unevenly distributed, their position is often unknown or uncertain, and this also creates unevenness in social wealth and ultimately society. This sort of unevenness likewise permeates other fields, right through to the stockmarket, and ultimately to wealth between individuals, cultures, and nations. Non-linearity is built into the economic system, from the foundation of the raw materials that underpin economies, and is at least part of the reason that central and social planning so often fails. It is difficult to tell this to various politicians and ideologues, but it is often the first mistake they make when making major policy decisions.

jorgekafkazar

Bernd Felsche says: “…When I looked into some (a lot) of the GCM and other modelling code (mis)used by the catastrophists, It was clear that the models would never be able to produce realistic results as they had fundamentally variable parameters as constants. There was no possibility to introduce coupling and confounding factors. In the systems being modelled, it does matter that that parameter varies “only” a few percent because an iterative finite element analysis will accumulate those errors.”
I was added to a project back in the 60’s where they were deriving material physical properties at temperatures of about 5,000°F from measurements on very small specimens. Certain materials gave negative thermal conductivities. Something was clearly wrong. It remained a mystery until, over dinner one night, I had an “AHA moment.” The early research team, to save apparatus costs, had assumed linearity over the shortest specimen dimension, so short that any deviation from linearity was assumed similarly “small.” During finite element analysis, however, that erroneous profile smeared itself throughout the relaxation network, inverting the temperature distribution at the very center of those specimens. After running the program, the hot side became the cold side and vice versa.
This shows the Tower of Babel principle: Any computer program whose complexity exceeds the ability of any human to grasp the calculations in their entirety is doomed to mysterious failures, as is any program which has been dumbed down via simple-minded assumptions to match the ability of its creators.

Steve Keohane

Elegant, Willis. Thanks for your work. Your scatter-grams are great. The flattening of some of the colors’ curves at 30°C is reminiscent of a log curve. But the impact of combinations(?) of effects is driving things in different directions. For example, the cyan (0-30°N latitude) in Figure 2 has something akin to a refraction pattern in a negative response at 12°-28°C SSTs, but is scattered in a positive response with a lower limit at 0.1°C per W/m^2.

phlogiston

In climate as in the study of many natural systems, nonlinearity = enlightenment.
Those “scribble plots” look like Lorenz attractors.
Willis has nicely demonstrated the essence of nonlinearity, which is that as a system (e.g. an atmospheric or ocean system) evolves or changes, it changes the controlling parameters of that system in real time. Those parameters, e.g. lambda, are not fixed or linear.
Or as James Gleick put it in his book “Chaos”: “Nonlinearity means that the act of playing the game has a way of changing the rules”.

These graphs seem to indicate, on a quick look, that the climate has a thermostat-type effect, such that far from being linear, there are limits to how hot or how cold the climate system will go.
For instance, on the hot side of things, does heat create more clouds, limiting the increase of heat as the clouds reflect more radiation? I’m not sure what the effect is on the cold side of things. That’s the direction that can get a bit “runaway” in my mind. Get cold, get ice, reflect more radiation, get colder, etc.
We have a history of the cold side running away with things in the Earth’s long history. But we seem to have some hard limits to how hot things can get.

bobl

Willis,
Thank you. I have tried and tried to bring out the point many times that treating climate sensitivity as a constant is wrong, wrong, wrong, and that as temperature rises, climate sensitivity falls, to the point at which, at about 30 degrees C over water, it becomes 0. The gain of the system is inversely related to temperature. Taking up Monckton’s point, this is analogous in engineering to an amplifier approaching saturation. At some point increasing the drive or feedback can’t increase the output, because the energy available has been exhausted. It has been my contention for some time that climate is being held in such a narrow range by saturation effects, that is, lambda being an inverse function of temperature. If this is the case (and it almost certainly is) then climate sensitivity is irrelevant.
It’s important, however, to consider physical mechanisms. In the climate, the energy isn’t limited by the power source exactly; it’s limited by the instability of evaporating too much water in one place, and the dew point in the atmosphere causing condensation. So as our “drive” goes up, we reach a threshold – dare I say a tipping point – where there is too much water to be held in solution at a given temperature and it precipitates as clouds. That then causes the input power from the sun to fall until the gain reaches zero (the system is saturated). This is very non-linear behaviour; it onsets rapidly at particular humidities, temperatures and air pressures. As average temperature rises, the region near the equator where this happens will expand, reducing the average gain across the planet. Ultimately, Venus earth is more like Miami earth: cosy and warm, with maxes around 30 and mins around 20, just like in the tropics.
Willis,
The other problem is that lambda is also a complex number. The idea that feedbacks in the climate can be treated as a single lumped SCALAR is just naive in the extreme. I wanted to write a paper on that but need to find someone to help with the math (I’m not sure how to deal with the non-linearities). The only solution that allows any sort of positive feedback but is as stable as our climate is one that is approaching saturation and inherently non-linear.
On the other hand, such high transient sensitivity in the presence of obvious saturation means that in past ice ages the weather should have been very unstable; high gain should have led to greater instability, but it didn’t. This probably means the gain is nothing like they say either, and that we have saturation occurring AND lower gain.

Bob Roberts

Totally off topic, but I’ve been told David Appell, after being banned here for his nonsense, brags of sneaking back under several aliases to continue his nonsensical harassment of the climate realists who post here. He apparently admitted to doing this on the following thread: http://dailycaller.com/2013/12/18/reddit-bans-comments-from-global-warming-skeptics/#comment-1169709832

Mario Lento

I swear I’m not a fan boy. Just a big fan. I love the way you find some aspect of science, look at it, turn it upside down, study it and then come up with a plan to crunch some numbers to support what you figured out. Good to see you back.
PS – /sarc on/ With oceans as energy buffers that vary how they take up or give off energy, plus the feedbacks, the time constants, convection, radiation, humidity, latent heat of vaporization and condensation, clouds, wind, and pressure, they all add up to a neat linear system that won’t ever change unless we add CO2.

Willis Eschenbach

Mario Lento says:
December 18, 2013 at 9:48 pm

I swear I’m not a fan boy. Just a big fan. I love the way you find some aspect of science, look at it, turn it upside down, study it and then come up with a plan to crunch some numbers to support what you figured out. Good to see you back.

Thanks, Mario, but actually the process is much less directed than you imagine. What I do is more like play. I just graph up various combinations of variables, and think about them, and then graph up some more combinations, and consider how I might change my graphs to give more insight into some aspect or other …
After looking at lots and lots of graphs, if I’m lucky I can start to get some idea of what might be happening … but that’s never guaranteed.
In other words, it’s amateur science, which means science done for fun and the love of the game rather than for money. I just mess around with the variables until something interesting pops up.
Regards,
w.

Espen

0.2°C to 0.5°C per doubling of CO2? Uh-oh, I’m not sure that’s enough to hold back the next glaciation 🙁

Leonard Lane

Willis: Great to see you back; I pray your health is improving daily. Great post; it illustrates the non-linearity much better than I have seen before, for both land and oceans.
I often chuckle when I remember the old saying I heard somewhere in the 60’s but don’t remember where. It is: “Constants aren’t and variables won’t”.
Good health to you.

Martin A

” Net TOA radiation is calculated as downwelling solar less reflected solar less upwelling longwave radiation. ”
My understanding is that the error in satellite measurement of these two quantities is too great to permit their difference to be calculated meaningfully. So, presumably some Climate Science “forcing” results were used here, rather than genuine data. Is that a fair assumption?

Peter Miller

This once again goes to demonstrate the time-proven adage that “nature always abhors a straight line”.
I was so impressed by the logic of this article that I was really hoping for some alarmist criticism to see if anyone could try and undermine Willis’ logic. There has been none so far.
As it stands, and assuming the conclusions are correct, then it is one of the most important articles ever written on climate science, totally destroying the foundations of climate alarmism.
The temperature sensitivity of 0.2°C to 0.5°C for a doubling of CO2 levels lies, as you say, within the boundaries of statistical noise.
So if we could return to the year 1900 and could somehow strip out the effects of natural climate cycles, UHI, agriculture, irrigation, plus cherry picking and homogenisation of historical data, by how much would the Earth’s temperature have risen today?
The answer: Not a lot and much less than the usually quoted figure of 0.7°C.
The cost of climate alarmism is apparently approaching $1.0 billion per day, so you can rely on the fact that the contents of Willis’ article will be ignored and/or condemned and/or ridiculed by the Climate Establishment. Nothing can be allowed to derail the Global Warming Gravy Train.

Greg

Beautiful pics !
A few thoughts. Neg slope tails in the land record may not be too important in total energy if area is considered.
I’m surprised there’s not more change in slope in the oceans near the tropics, but by eye it’s at least a factor of 3 or 4, which is huge. Fitting the central area, poles and tropics to get relative figures may be useful.
If I wanted Lissajous figures I’d be interested in the NH oceans. There is a whole ‘tube’ of loops that look very regular there, like a tunnel of breaking surf; it would be interesting to isolate them and see what the story is.
Not too convinced by the lag formula approach based on a pure harmonic. I’d suggest lag regression plots, or just plotting different lags and seeing how flat you can get the loops. They are not round, but you can compromise.
See examples I posted on Euan Mearns’ site:
http://euanmearns.com/uk-temperatures-since-1956-physical-models-and-interpretation-of-temperature-change/#comment-266
http://climategrog.wordpress.com/?attachment_id=638
regards.

Ryan

I agree with Lord Monckton. The IPCC has never claimed linearity. They have always said that the “forcing” (hate that term – there’s no “force” involved) is itself dependent on the concentration of CO2 in the atmosphere. At some point the levels of CO2 in the atmosphere reach a saturation point and have no further impact. Some scientists have claimed that the 17-year pause might indicate we have already reached the effective saturation point of CO2 in the atmosphere.
The models are effectively “piecewise linear”, with the linear equation only being relevant to the concentration we have at any given time (normally the present).
As you know, I dispute the models entirely. I claim that the theory of CO2 based warming supposes that an IR emitter can add energy to another IR emitter and normally we would never model a system that way since it would allow a system to “pull itself up by its own bootstraps” to a higher level of overall energy in direct contradiction of the law of conservation of energy. That is to say, in its simplest form the greenhouse gas theory implies a greenhouse gas would make a planet warmer, causing it to emit more energy, making the greenhouse gas more energetic, causing it to emit more energy, making the planet more energetic so on ad infinitum. This is not possible. We avoid the same difficulty in radio frequency calculations by assuming the radio transmitter does not receive any energy from nearby radio transmitters, even though in principle you would expect that a radio transmitter would indeed receive energy from other radio transmitters.
We always have to ignore the possibility that two emitters of energy can absorb energy from each other, no matter how tempting it might be to speculate that this is what must be happening, because it will always lead us to a model that contradicts the conservation of energy law.

RMB

A fair bit of this is above my pay grade, but if you fire heated gas from a paint-stripping gun at the surface of water, the water temperature will not rise, indicating that surface tension blocks physical heat. Water accepts radiation but not “heat”. Would that have anything to do with it?

AlecM

The problem is that the only bit of surface IR energy comprising OLR is in the ‘atmospheric window’. The H2O IR (defined as its spectral temperature) comes from -1.5 deg C, about 2.6 km in temperate zones, and the CO2 IR mainly comes from the lower stratosphere.
This is because for equal surface and local air temperature, there is zero net surface IR emission in self-absorbed GHG bands, standard radiative physics.
The concept of surface forcing is unscientific and irrelevant: for Climate Alchemy to become a Science, it has to junk forcing.

I like this work by Willis and in the end I think we are all going to find that the limiting factor for ocean heat content at a given level of insolation (after accounting for internal ocean circulations) is atmospheric pressure on the ocean surface.
And ocean heat content controls air heat content on a watery world.
Of course, that brings us full circle back to atmospheric mass and gravity leaving the radiative characteristics of GHGs nowhere in comparison.

AlecM

In my comment above change the first sentence to ‘The problem is that the only bit of surfaceIR energy’
[Fixed – w.]

phlogiston

stephen wilde says:
December 19, 2013 at 2:23 am
It’s feedbacks like that which give rise to the nonlinearity.

lgl

“The lag over the land averages 0.85 months, and over the ocean it is longer at 2.0 months.”
Yes, so you are still dealing with annual sensitivity, not decadal.

johnmarshall

First, temperature can only be accurately taken when a body is in thermal equilibrium. The earth never is!!!!!!!!! So how can we accurately find an average temperature???
Secondly, the TOA solar input is assumed to be 340 W/m2 by the IPCC. With this poor energy input the water cycle would not work!!! Actual solar input is 1370 W/m2, which averages to 500 W/m2 onto the SUNLIT hemisphere, which is enough to have a water cycle. This is reality, not some model built to suit a crappy theory.

jhborn

As always I stand in awe of your ability to visualize the mountains of data.
There’s a detail I don’t understand, though. The formula for the multiplier comes from diffusion through a depth. In particular, it uses the relative amplitude at the depth that gives the phase lag you observed. I’m having trouble understanding why that quantity is relevant in this context. A particular difficulty is that in diffusion the phase lag can exceed 2 pi, whereas I don’t see how that would happen for the radiation / surface-temperature relationship.
Could you elaborate a little on how the one relationship is relevant to the other?

dearieme

Aw come on. Take that first diagram, invert it a la Mickey Mann, twist it around in the best tradition of the Global Warmmongering twisters and behold! A hockey stick.

Robert of Ottawa

Very interesting presentation, looking at the numbers in different ways. I certainly like how clearly the oceans limit at 32C. I actually dove in water that warm in Darwin Bay, Australia. Now I’m off to shovel the snow off the driveway.

Bill Illis

I find this very convincing.
Why? Because it is based on real data, reflects more accurately what we are actually seeing, is much closer to what basics physics says should happen, and Willis doesn’t get another grant or academic posting based on slanting the results and hiding the data.

“We always have to ignore the possibility that two emitters of energy can absorb energy from each other, not matter how tempting it might be to speculate that this is what must be happening, because it will always lead us to a model that contradcits the conservation of energy law.”
Not quite. Two RF emitters near each other will indeed absorb energy from each other and output energy on a new frequency. This is called inter-modulation and is a big problem at some locations. Now, is the total RF energy changed? I strongly doubt it, but I don’t know for sure.

nikki

Figure 1 reminds me of the HR diagram, see:
http://en.wikipedia.org/wiki/H-R_Diagram
Can you please plot a log T vs log F graph?
You can then call it the Eschenbach-Štritof (too much sch or Š) or the Willis-Nikki diagram. 🙂

tjfolkerts

Willis says: So to see if the relationships really are linear, I thought I’d use the CERES satellite data …
Unfortunately, that first graph doesn’t tell you anything about the linearity of sensitivity. What it tells you is that convection carries lots of energy from the equator to the poles.
Consider Antarctica – the red dots. There is a net negative radiation imbalance, so more radiative energy is leaving than arriving each year. On an annual basis, this means that roughly equal amounts of OTHER energy must be arriving, which would be air and water currents. Similarly, the areas near the equator have a positive value, meaning large amounts of energy are being carried away by convection.
The “hook” in the red data does NOT tell us that as the forcing increases, the temperature will decrease in that region. It simply tells us that convection carries a lot of energy to the coasts of Antarctica but not so much to the interior.
Without digging into the details of your calculations, I wonder if convection may be confounding some of your other calculations.

jhborn

My last question may have been a little obscure. I guess what I’m really asking is what model you’re using to obtain your multiplier.
Suppose, for example, that according to your model the response y is related to the stimulus x as follows:
\frac{dy}{dt}+\frac{1}{\tau}y=\frac{\lambda}{\tau}x
and the stimulus is sinusoidal:
x = cos(\omega t)=\Re\{\exp(i\omega t)\} ,
then, since the system is (gasp) linear, we know that the response is given by
y=|A|\cos(\omega t+\theta)=\Re\{A\exp(i\omega t)\}.
Plugging that into the system equation gives:
i\omega A\exp(i\omega t)+\frac{1}{\tau}A\exp(i\omega t)=\frac{\lambda}{\tau}\exp(i\omega t)
A = \frac{\lambda}{1+i\omega\tau}\rightarrow\theta=\arctan(-\omega\tau)
|A|=\frac{\lambda}{\sqrt{1+\tan^2\theta}}=\lambda\cos\theta
So your multiplier would have been \sec\theta. It isn’t, of course. You seem instead to have chosen a model in which the response is that of a semi-infinite slab at some depth. Since I don’t see why you tie responses at the surface to responses at various depths, though, I suspect that not all is as it seems.
I know I’ve just gotten hung up on a detail, but I’ll appreciate any answer you have time for.

Greg

Willis: “But the land and the oceans can’t change temperature immediately. There is a lag in the process. So monthly climate sensitivity is the smallest of the three, because the temperatures haven’t had time to change.”
No, it would be more appropriate to compare delta_d/dt(SST) to delta_Rad; the fast response is mostly orthogonal, i.e. rate of change.
The response to the ‘lambda’ relaxation equation is neither purely in-phase nor orthogonal but a sliding mix of the two which varies with frequency, so you really can’t just plot SST against Rad and start drawing simplistic conclusions and defining “monthly” sensitivities.
http://climategrog.wordpress.com/?attachment_id=399
However, if you can estimate the lag for a particular frequency range, or the ratio of in-phase and orthogonal components that make up the temperature response, that could give an estimate of tau and hence lambda.
Do that separately for the tropics and temperate zones, and it would put numbers on the degree of regulation provided by your governor.
Just an idea.

Greg

Joe, you seem familiar with this stuff. Do you see any flaws in what I linked there?
http://climategrog.wordpress.com/?attachment_id=399

Thanks Willis, for this inspiring approach – I wish I had your graphing capabilities!
And to reply to Martin A – yes, you are right to finger the TOA imbalance ‘data’ … its resolution is about 5 watts per square metre, with a consistent excess of that value over zero … and the modellers are looking for a 0.5 to 1 watt excess as their expectation. So the ‘data’ is ‘constrained’, to use NASA’s phrase, by the ocean heat content data. As Bob Tisdale will tell you, the OHC data is not accurate either and – guess what? – it is adjusted in ‘re-analysis’ to reflect the expected excess from the TOA!!!! I think this is what is called a circular argument! It doesn’t seem to worry the modellers at all.

bobl

If the IPCC don’t claim linearity, how do they justify a simple scalar number (3) as their feedback multiplier, when this multiplier is clearly inversely related to temperature? Surely the question must then be: how quickly does the gain fall as temperature rises? This will have a critical effect on the ultimate equilibrium temperature rise for a doubling of CO2.

Thanks Willis; a superb post!
Your illustrations make it clear that lambda is a chimera at best. Again, proof that it is not possible to model Earth’s climate with the little physical knowledge that we have now.
And then to conduct “experiments” where these models supply the “data” is criminal.

DocMartyn

My guess is that if you looked at two slabs of ocean at plus and minus 60 degrees latitude over the course of years you would see a pair of race track ‘8’s, where you have to pump more energy into the ocean in spring and less in the fall, to achieve the same temperature.