Nature hates straight lines

Guest Post by Willis Eschenbach

Yeah, I know Nature doesn’t have human emotions, give me a break. I’m aware it is unscientific and dare I call it atavistic and perhaps even socially unseemly to say Nature “hates” straight lines, but hey, it’s a headline, cut me some poetic slack.

My point is, everyone is aware that nature doesn’t deal in straight lines. Natural things move in fits and starts along complex paths, not straight from point to point. Phenomena have thresholds and edges, not slow linear changes at the perimeter. Tree branches and coastlines are jagged and bent. Things move in arcs and circles, relationships are complex and cyclical. Very little in nature is linear, particularly in complex systems.

Forcing is generally taken to mean downward radiation measured at the TOA (top of atmosphere). The IPCC says that when TOA forcing changes, the surface temperature changes linearly with that TOA forcing change. If there is twice the forcing change (twice the change in solar radiation, for example), the IPCC says we’ll see twice the temperature change. The proportionality constant (not a variable but a constant) that the IPCC says linearly relates temperature and TOA forcing is called the “climate sensitivity”.
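In symbols, the IPCC relationship is simply delta-T = lambda × delta-F, with lambda fixed. A minimal sketch of what that assumption means (the lambda value here is purely illustrative, not an IPCC figure):

```python
# The IPCC linearity assumption: surface temperature change is a fixed
# constant (the "climate sensitivity") times the change in TOA forcing.
LAMBDA = 0.8  # illustrative sensitivity in K per W/m^2 -- an invented number


def delta_temperature(delta_forcing_wm2: float) -> float:
    """Linear response: double the forcing change, double the warming."""
    return LAMBDA * delta_forcing_wm2


print(delta_temperature(1.0))  # 0.8 K
print(delta_temperature(2.0))  # 1.6 K -- exactly twice, by construction
```

Note that the doubling-gives-double behaviour is built into the formula itself; whether the real climate behaves this way is exactly what is in question below.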

Figure 1. Photo of impending change in climate sensitivity.

Today I stumbled across the IPCC justification of this linearity assumption. This is the basis of their claim of the existence of a constant called “climate sensitivity”. I quote it below.

I’ve removed the references and broken it into paragraphs for easy reading. The references are in the original cited above. I reproduce all of the text on the web page; having disposed of linearity in a few sentences, they then proceed to other matters. Here is their entire scientific justification for the assumption of linearity between forcing and temperature change (emphasis mine):

Linearity of the Forcing-Response Relationship

Reporting findings from several studies, the TAR [IPCC Third Assessment Report] concluded that responses to individual RFs [Radiative Forcings] could be linearly added to gauge the global mean response, but not necessarily the regional response.

Since then, studies with several equilibrium and/or transient integrations of several different GCMs [Global Climate Models] have found no evidence of any nonlinearity for changes in greenhouse gases and sulphate aerosol. Two of these studies also examined realistic changes in many other forcing agents without finding evidence of a nonlinear response.

In all four studies, even the regional changes typically added linearly. However, Meehl et al observed that neither precipitation changes nor all regional temperature changes were linearly additive. This linear relationship also breaks down for global mean temperatures when aerosol-cloud interactions beyond the cloud albedo RF are included in GCMs. Studies that include these effects modify clouds in their models, producing an additional radiative imbalance.

Rotstayn and Penner (2001) found that if these aerosol-cloud effects are accounted for as additional forcing terms, the inference of linearity can be restored. Studies also find nonlinearities for large negative RFs, where static stability changes in the upper troposphere affect the climate feedback (e.g., Hansen et al., 2005).

For the magnitude and range of realistic RFs discussed in this chapter, and excluding cloud-aerosol interaction effects, there is high confidence in a linear relationship between global mean RF [radiative forcing] and global mean surface temperature response.

Now, what strikes you as odd about that explanation of the scientific basis for their claim of linearity?

Before I discuss the oddity of that IPCC explanation, a short recap regarding climate sensitivity. I have held elsewhere that climate sensitivity changes with temperature. I will repeat the example I used to show how climate sensitivity goes down as temperature rises. This can be seen clearly in the tropics.

In the morning the tropical ocean and land are cool, and the skies are clear. As a result, the surface warms rapidly with increasing solar radiation. Climate sensitivity (which is the amount of temperature change for a given change in forcing) is high. High sensitivity, in other words, means that small changes in solar forcing make large changes in surface temperature.

By late morning, the surface has warmed significantly. As a result of the rising temperature, cumulus clouds start to form. They block some of the sun. After that, despite increasing solar forcing, the surface does not warm as fast as before. In other words, climate sensitivity is lower.

In the afternoon, with continued surface warming, thunderstorms start to form. These bring cool air and cool rain from aloft, and move warm air from the surface aloft. They cool the surface in those and a number of other ways. Since thunderstorms are generated in response to rising temperatures, further temperature increases are quickly countered by increasing numbers of thunderstorms. This brings climate sensitivity near to zero.

Finally, thunderstorms have a unique ability. They can drive the surface temperature underneath them below the temperature at which the thunderstorm formed. In this case, we have local areas of negative climate sensitivity – the solar forcing can be increasing while the surface is cooling.

As you can see, in the real world the temperature cannot be calculated as some mythical constant “climate sensitivity” times the forcing change. Sensitivity goes down as temperature goes up in the tropics, the area where the majority of solar energy enters our climate system.
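The progression of that tropical day can be sketched as a toy model. Every number here is invented purely to show the shape of the argument; the real thresholds and sensitivities are not these:

```python
def tropical_sensitivity(surface_temp_c: float) -> float:
    """Illustrative climate sensitivity (K per W/m^2) as a function of
    surface temperature. Thresholds and values are made up for the sketch."""
    if surface_temp_c < 26.0:
        return 0.8   # clear morning skies: high sensitivity
    elif surface_temp_c < 30.0:
        return 0.3   # cumulus forming: sensitivity drops
    else:
        return 0.05  # thunderstorm regime: sensitivity near zero


# Sensitivity falls as the surface warms -- the opposite of a constant.
for temp in (24.0, 28.0, 32.0):
    print(temp, tropical_sensitivity(temp))
```

The point of the sketch is the shape, not the numbers: sensitivity is a step-changing function of temperature, not a constant, and it steps down at the cumulus and thunderstorm thresholds.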

So the IPCC claim of linearity, of the imagined slavish response of surface temperature to a given change in TOA forcing, goes against our daily experience.

Let me now return to the question I posed earlier. I asked above what struck you as odd about the IPCC explanation of their claim of linearity regarding forcing and temperature. It’s not the fact that they think it is linear and I disagree. That is not noteworthy.

Here’s what made me stand back and genuflect in awe of their claims. Perhaps I missed it, but I didn’t see a single word about real world observations in that entire (and most important) justification for one of their core positions.

I didn’t see anyone referenced who said something like ‘We measured solar radiation and downwelling longwave radiation and temperature at this location, and guess what? Temperatures changed linearly with the changes in radiation.’ I didn’t see anything at all like that, you know, actual scientific observations that support linearity.

Instead, their claim seems to rest on the studies showing that scientists looked at four different climate models, and in each and every one of the models the temperature change was linearly related to forcing changes. And in addition, another model found the same thing, so the issue is settled to a “high confidence” …

I gotta confess, that wasn’t the first time I’ve walked away from the IPCC Report shaking my head, but that one deserves some kind of prize or award for the sheer audacity of their logic. Not a prize for the fact that they think the relationship is linear when Nature hates straight lines, that’s understandable, it’s the IPCC after all.

It is the logic of their argument that left me stammering.

Of course the model results are linear. The models are linear. They don’t contain non-linear mechanisms. And of course, if you look at the results of linear models, you will conclude with “high confidence” that there is a linear relationship between forcing and temperature. They looked into five of them, and case closed.

I mean, you really gotta admire these guys. They are so far into their models that they actually are using the linearity of the model results to justify the assumption of linearity embodied in those same models … breathtaking.

I mean, I approve of people pulling themselves up by their own bootstraps, but that was too twisted for me. The circularity of their logic made my neck ache. I kept looking over my shoulder to see if the other end of their syllogism was circling behind to strike me again. That’s why I genuflected in awe. I was overcome by the sheer beauty of using a circular argument to claim that Nature moves in straight lines … those guys are artists.

Meanwhile, back in the real world, almost no such linear relationships exist. Nature constantly runs at the edge of turbulence, with no linearity in sight. As my example above shows, the climate sensitivity changes with the temperature.

And even that change in tropical climate sensitivity with temperature is not linear. It has two distinct thresholds. One is at the temperature where the cumulus start to form. The other is at the slightly higher temperature where the thunderstorms start to form. At each of these thresholds there is an abrupt change in the climate sensitivity. It is nowhere near linear.

Like other natural flow systems, the climate is constantly restructuring to run “as fast as it can.” In other words, it runs at the edge of turbulence, “up against the stops” for any given combination of conditions. In the case of the tropics, the “stop” that prevents overheating is the rapid proliferation of thunderstorms. These form rapidly in response to only a slight temperature rise above the threshold where the first thunderstorm forms. Above that threshold, most of any increase in the incoming energy is evaporated and used to pump massive amounts of warm air through protected tubes to the upper troposphere, cooling the surface. Above the thunderstorm threshold temperature, little additional radiation energy goes into warming the surface; it goes into evaporation and vertical movement. This means that the climate sensitivity is near zero.

Now it is tempting to argue that the IPCC linearity claim is true because it only applies to a global average temperature. The IPCC only formally says that there is “a linear relationship between global mean RF [radiative forcing] and global mean surface temperature response.” So it might be argued that the relationship is linear for the global average situation.

But the average of non-linear data is almost always non-linear. Since daily forcing and temperature vary non-linearly, there is no reason to think that real-world global averages vary linearly. The real-world global average is an average of days during which climate sensitivity varies with temperature. And the average of such temperature-sensitive records is perforce temperature sensitive itself. No way around it.
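That point can be illustrated numerically. Suppose, purely for illustration, that the local response to forcing saturates above some threshold, crudely mimicking the thunderstorm “governor” described above. Averaging such responses over several locations does not recover a straight line:

```python
# Illustrative only: a saturating (non-linear) local response to forcing.
# Above an invented threshold, extra forcing produces no extra response.
def local_response(forcing: float) -> float:
    return min(forcing, 10.0)


def mean(xs):
    return sum(xs) / len(xs)


low = [4.0, 6.0, 8.0]      # three locations under modest forcing
high = [8.0, 12.0, 16.0]   # the same locations with forcing doubled

r_low = mean([local_response(f) for f in low])    # 6.0
r_high = mean([local_response(f) for f in high])  # (8 + 10 + 10) / 3 = 9.33

# The mean forcing doubled, but the mean response rose by only ~56%:
# the average of a non-linear response is itself non-linear.
print(r_low, r_high)
```

The numbers are invented, but the conclusion holds for any saturating local response: averaging does not launder non-linearity into linearity.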

The IPCC argument, that temperature is linearly related to forcing, is at the heart of their claims and their models. I have shown elsewhere that in other complex systems, such an assumed linearity of forcing and response does not exist.

Given the centrality of the claim to their results and to the very models themselves, I think that something more than ‘we found linearity in every model we examined’ is necessary to substantiate this most important claim of linearity. And given the general lack of linearity in complex natural systems, I would say that their claim of linearity is an extraordinary claim that requires extraordinary evidence.

At a minimum, I think we can say with “high confidence” that it is a claim that requires something more weighty than ‘the models told me so’ ...

HR
October 25, 2010 2:19 am

In your tropical example, isn’t it the radiative forcing that’s changing, not the climate sensitivity? Aren’t clouds and the like just blocking solar radiation to the surface?

Dusty
October 25, 2010 2:20 am

Brilliant yet again Willis.
This phrase struck me from their justification: ‘… have found no evidence of any nonlinearity …’ Now that double negative is spin in action; it is not the same as saying: ‘… have found evidence of linearity …’. Just thought I’d mention it.

Eric (skeptic)
October 25, 2010 2:34 am

Here’s the doom-monger argument in two sentences. It’s obvious that the sun warms the earth and there’s a nonlinear relationship of sensitivity to solar input over the course of a day (same over seasons in temperate areas). But the linear relationship is from “forcing”, which we define as the delta in radiative power (solar, CO2 or other) from some baseline to the new regime with greater “forcing”.
They justify this crap (mostly the ridiculous definition of “forcing”) using models which contain constant parameterized responses (e.g. parameterized convection) which produce linear responses. It also requires the mind boggling assumption that the world and its weather is somehow affected by this hypothetical world-wide average delta. Ask yourself if the tropical daytime nonlinear response described by Willis is affected by it being 0.5 degrees warmer in that location (unlikely since it is dwarfed by the daily forcing change) or 0.5 degrees warmer elsewhere (impossible).

Andrew30
October 25, 2010 2:35 am

Someone comes up with an idea for a computer program, what it should do and what the input and output should look like.
The computer programmers create a program that creates the required output.
The computer operator runs the program and generates the required output.
The climate scientist writes a paper about the output from the computer program.
The scientist’s employer writes a press release about the scientist’s paper.
The media publishes the press release as a news article.
The people read the article and think it is based on actual data.
It is not based on actual observations in nature; it is the output of a computer program.
The climate scientist is just a ghost writer for the computer programmer.
Perhaps the media should just talk directly to the computer programmers, since they are the ones that actually know why and how the program generated the press release.
No one even needs to go outside or actually measure anything.

Eric (skeptic)
October 25, 2010 2:36 am

HR: clouds are sensitivity.

Jerry
October 25, 2010 2:45 am

Are you comparing apples to oranges?
Is it possible the models are working on much longer time scales than say hourly at a location?
Perhaps you are both right (Willis and ‘the models’)? Willis is correct on very short time scales? The models are correct on very long time scales?

Tom
October 25, 2010 2:49 am

I don’t like the assumption of linearity. But I don’t find the reasoning in this post very persuasive. I’m not arguing for climate linearity here; I’m arguing that sceptics need to present a well-reasoned position and that this post isn’t it. In two parts:
There are a lot of scientific laws that claim linearity, where if you look closer things don’t behave anything like what the law describes. Ohm’s law says V=IR; if you look closely, you find that electrons are jumping all over the place. Some will go backwards, some will go sideways, the majority will go in the right direction but with a wide spread of velocities. But when you zoom out a bit, the average law works quite well. Same with the kinetic theory of gasses; PV=NkT at a wide scale, but at a smaller scale you can see that actually atoms are crashing around in entirely random ways and locally concepts like “pressure,” “volume” and “temperature” don’t make a lot of sense. So you can’t point to local weather effects and claim that they disprove the aggregate average theory of linear climate sensitivity; the randomness of small-scale weather processes doesn’t disprove the overall theory of linearity any more than observing individual electron tunnelling in silicon disproves Ohm’s law.
Secondly, whether proving linearity from a model is justified depends on the details of the models and whether they do actually assume linear sensitivity. You would need to follow all those references you deleted to find out. Perhaps the model is based on some low-level empirically-derived theory that allows you to infer whether the aggregate behaviour is linear or not; we can’t tell from the text you cite. Perhaps they actually model all the thunderstorm creation processes in the tropics, and snowfall at the poles, based on observed data, and find that when you add it all up it comes out linear. The kinetic theory of gasses is again a great example, though in reverse; the aggregate gas law was developed empirically and the model of an ideal gas was developed by reasoning about what underlying processes might produce that aggregate behaviour. There was no direct evidence that gasses were made up of tiny particles when the model was developed; that didn’t make it wrong. The model agreed with the empirically-derived law, and was successful in explaining it. It turned out to in fact be pretty much right. If you followed your reasoning in the mid-19th century, you would decry the KTOG as fairy-tales that no sane man would believe and you would later look very foolish.

tallbloke
October 25, 2010 2:56 am

“I mean, you really gotta admire these guys. They are so far into their models that they actually are using the linearity of the model results to justify the assumption of linearity embodied in those same models … breathtaking.”
Straightline thinkers like Gavin are never going to see the circularity of their own arguments.

CodeTech
October 25, 2010 3:07 am

Here’s a classic control system with feedback.
1. Increased solar radiation heats area of planet, causing increased evaporation of ocean, lake, river, soil, whatever is down there.
2. Increased evaporation results in higher percentage of H2O in higher pressure lower layer of atmosphere. H2O is LIGHTER than the mixture of N2/O2, and tends to “want” to rise, but is prevented from doing so by boundary layer.
3. Eventually, enough H2O pools that it starts pushing through boundary layer and rising. As it cools by entering lower pressure areas, some of it condenses into droplets or crystals which cause visible clouds. These clouds block incoming sunlight, reducing incoming energy.
4. At this point, the system can achieve a level of equilibrium, where clouds control incoming radiation and manage to hold the temperature at a level appropriate for an area’s current pressure.
5. In some cases the amount of H2O is so great in a concentrated area that it bursts through the boundary layer and large amounts from the area go with it, creating massive upwellings that can exceed 100mph and carry millions of tons of moisture to areas of low enough pressure that they are in the -40C range. Welcome to a thunderstorm, or hailstorm. These types of storms haul HUGE amounts of heat high enough in the atmosphere that excess energy can radiate directly into space, eventually dragging supercooled H2O down to ground level in the form of rain or hail.
6. No matter how much cooling or heating happens during the day, then there is night. At night, all energy radiation is exactly one direction: outward. At any given moment, slightly more than 1/2 of the planet is experiencing night. At any given moment as much energy is radiated outward from the back of the planet as is being absorbed in the front.
7. Thus, it is physically impossible to achieve dramatic temperature changes from minor trace gas changes, since equilibrium is based on total mass of atmosphere and its major gas balance. Now, if the percentage of O2 and N2 were to significantly change, so would the equilibrium temperature of the planet.

John Marshall
October 25, 2010 3:10 am

It’s those dam models again!

Roger Longstaff
October 25, 2010 3:16 am

So linear computer models find no evidence of non-linearity? Makes you want to weep!
The “Sunday Times” (normally a respected newspaper in the UK) yesterday had an article “2010 the warmest year on record”, but now buried on page 15 rather than the normal spot on the front page. Perhaps the MSM are now taking the hint, along with the Royal Society and all the other clowns who were taken in by this rubbish. But I notice that last week’s draconian (and necessary) spending cuts by the UK government still left billions for wind farms and carbon capture demonstration plants. Why are governments the last ones to “catch on”?

old construction worker
October 25, 2010 3:18 am

“Rotstayn and Penner (2001) found that if these aerosol-cloud effects are accounted for as additional forcing terms, the inference of linearity can be restored.”
Sounds like a fudge factor to me. Take dust, an aerosol. If the concentration of dust is greater from the ground up to, let’s say, 100 meters, then dust would have a warming effect on surface temperature. But if the concentration of dust was greater from 100 meters to 200 meters than from 0 to 100 meters, then that would have a cooling effect on ground temperature, would it not? After all, dust is a solid and is absorbing shortwave radiation.

Adam Gallon
October 25, 2010 3:19 am

As an aside, Connolley may be gone from Wiki, but his spirit lives on!
http://motls.blogspot.com/2010/10/wikipedia-william-connolley-was-just.html
An edit by Lubos of Briffa’s page and it’s quickly erased and threats of sanctions are made!

Chris Wright
October 25, 2010 3:21 am

It’s quite likely that the IPCC is wrong and the forcing is not linear. But that’s not the most important point.
What really matters is the IPCC’s almost obsessive dependence on computer programs (calling them GCMs sounds much grander, but basically they’re just computer programs written by people who are desperate to prove AGW).
There was a recent example from NASA, in which they use a computer program to ‘prove’ that CO2 is the driver of climate.
In reality, they create computer programs which have a series of assumptions that fit their own beliefs. They then examine the outputs of the computer programs to see if their assumptions are correct. And – who would have guessed it! – the computer programs do indeed confirm that those assumptions were correct.
To any outside observer, this process is so circular as to almost defy belief. It’s not science. It’s propaganda.
So why do they do it? I think the answer is pretty obvious. Their glorified Playstation programs are brilliant at forecasting climate that has already happened; any fool can forecast what has already happened. But, because they have little connection with the real world, they are useless at predicting future climate. Therefore, studies based on real observations of the real world will tend to contradict the Playstation models.
It’s much safer to check their assumptions against the Playstation programs. By doing this they can be sure they will get the right answer every time.
I’d better stop now as I’m beginning to feel a little sick.
Chris

Bob of Castlemaine
October 25, 2010 3:24 am

An interesting post Willis. I guess when you live in the virtual world of the IPCC conflicting observational evidence is of little importance.

Orkneygal
October 25, 2010 3:25 am

Does this mean its worse than we thought?

Archonix
October 25, 2010 3:40 am

Jerry says:
October 25, 2010 at 2:45 am

Are you comparing apples to oranges?
Is it possible the models are working on much longer time scales than say hourly at a location?
Perhaps you are both right (Wills and ‘the models’) Willis is correct on very short time scales? The models are correct on very long time scales?

Nature doesn’t work like that. You can look at the fluid motion of milk in a cup of tea and the motion of a massive gas cloud nebula in space and describe both using the same basic rules, yet the cup of tea is an event measured in seconds and the motion of that nebula is measured in millions of years. Possibly an extreme example, and I’m certain someone will find a “yes, but” that they think counters the argument, but both are governed by the same rules of fluid motion and will both behave in the same chaotic and apparently random way as a result, just over different time scales.
We live in a fractal universe. The same rules apply at the top and the bottom, the only difference appears to be the amount of time needed to observe them. Things don’t suddenly become linear after some ill-defined threshold of convenience.

John Day
October 25, 2010 3:42 am

Playing devil’s advocate here, I think you’re trying to refute a climate-theoretic concept, which plays out over decades, with weather-theoretic concepts spanning a few hours. Climate sensitivity (‘lambda’) is also one of those ‘first-order’ approximations, which no one expects to hold exactly at all geospatial scales.
delta-Temp = lambda × delta-Forcing
You’re correct, Willis, the relationship between temperature and forcing is not exactly linear. But all _observed_ processes tend to be non-linear because of process and measurement ‘noise’.
But having said that, most non-linear processes can be approximated by integrating enough tiny linear steps.
Also, the issue here is CO2 forcing, not solar forcing, no one will dispute the general idea that the Sun warms the Earth. But if you believe that there is a positive contribution from CO2 to the net imbalance between incoming and outgoing irradiation (‘radiative forcing’), then lambda merely expresses a first order approximation, over climatic epochs, of the temperature change that would be ‘observed’, at the climate level, due to that forcing.
I don’t see how proving this relationship doesn’t exactly hold on an hourly basis necessarily proves or disproves any climate theory. And, yes, I’m a CAGW skeptic.

TomVonk
October 25, 2010 3:47 am

While this issue is indeed a major issue, one has to be careful with the arguments.
I have to mention that any parameter Y related to a variable X will answer in a linear way (e.g. Y = a·X) provided that the variation of X is small.
This is a trivial consequence of the Taylor expansion.
Sure, it is necessary that the relationship between Y and X be sufficiently smooth, but this is just a technical detail.
So if one considers a very small variation of a radiative flux (a couple of W/m²) compared to the overall flux (hundreds of W/m²), then any variable, not just temperature, will answer in a linear way.
Especially if you choose a variable like global temperature that has been so averaged that it almost stays constant whatever happens.
Let’s imagine that there exists an extremely highly non-linear relationship F for the global average of temperature Tg:
Tg = F(global radiative flux, albedo, sun activity, wind speed, polar ice surface, cloudiness, humidity, O3 concentration, microalgae mass and time).
Now if you consider a small variation of the variable “radiative flux” during a short time, all other variables being considered constant, the Tg response will be almost exactly linear.
One can even easily give the value of the proportionality constant (e.g. climate “sensitivity”): it is the partial derivative of F with regard to the global radiative flux.
Of course there are at least 2 problems.
One is lethal: you don’t know F in the real world and it is not available to experimentation. So you have no clue, and never will have, about what the partial derivatives of F are.
The second is almost lethal too. The linearity holds only for a short time, as long as the system hasn’t moved far from the point where it was before.
As soon as any or all variables acting on F can no more be considered constant, the linearity no longer holds. In other words, you can make predictions only for a very short time.
Of course in your example with thunderstorms there are no “small” variations, no “equilibria” and no “everything else being constant”.
That’s why the Taylor development of your G (note that your G is NOT the same as the above F!) is useless and you are exposed to the full non-linearity.
However, this local result does not necessarily prove something about the averages.
In other words, the uselessness of a Taylor development of G (e.g. your thunderstorms) does not necessarily imply the uselessness of a Taylor development of F (e.g. global averages).
You will have noticed that I said “not necessarily”.
Actually the implication is often correct, but the point is that it is not automatic; one has to go a bit more in depth to prove it.
Giving just the example of G is not enough.

Policyguy
October 25, 2010 3:49 am

Tom says:
October 25, 2010 at 2:49 am
——-
Wonderful, physics acknowledges linear relationships when they are found. You reference V=IR. How about VT=D? There are plenty more. Your mention of dancing electrons is ridiculous in this context. At the quantum level, nothing is certain.
The point you are missing is that in every case of a settled physical relationship there is direct measurement and observation that supports it. Here, in the case of the behavior of climate, there is no data to support these claims, nothing but models that perform as they were written to perform. How much money was wasted paying for people to write computer programs to demonstrate a preconceived result without supporting data??
The sad fact here is that there are individuals so invested in this shell game that they write supposedly intellectual arguments to obfuscate the obvious and attempt to continue the charade. Wake up.

DEEBEE
October 25, 2010 3:49 am

Willis — you made your case that nature is not linear. IMO IPCC claim for linearity is juvenile. But having “proved” that, what is missing from this post is addressing the obvious SO WHAT? Without that it just looks like a hit and run.

RockyRoad
October 25, 2010 3:53 am

Straightline thinkers using circular logic. How much more convoluted can that be? (Maybe they’re completely dedicated disciples of M. C. Escher? http://motls.blogspot.com — no wait — http://www.art.com/gallery/id–a51898/m-escher-posters.htm?RFID=274465&CTID=999901394 )


Alan the Brit
October 25, 2010 4:07 am

When talking to a member of the Wet Office back in Feb/March, when he drew his famous little Wet Office graph of the slow rise in temps, followed by a slight cooling, then a warming up to the end of the 20th Century, he explained all about how they programme these models to obey the laws of physics, etc. I then challenged him on the validity of the models and their output; he responded by asking “how else did we get that result to match surface temperatures?” He couldn’t see the flaw in his argument: a graph had been produced by computer/hand in earlier attempts, but now models replicated it, therefore the models must be right! I reminded him that somebody has to tell the computer to show warming for given inputs of CO2, it cannot do it all by itself; this seemed to leave him dumbfounded and he couldn’t counter my point! They are all so up their own models, as Willis says, that they cannot contemplate that they could be wrong; it’s inconceivable to them, it would appear. All I did extract from him was the admission that there are huge uncertainties, and that these are played down. He denied the Wet Office did this, but I suggested he was not looking at the articles in the msm etc., where no doubt exists, and Dr Vicky Pope et al never mention the word “uncertainty”, well not in public anyway!

steven
October 25, 2010 4:11 am

“Why are governments the last ones to “catch on”?”
An easy question to answer.
The world warms and they did nothing despite being warned: they take the blame.
The world warms and they wasted billions because they were warned: they blame the scientists.

steven
October 25, 2010 4:12 am

ooops, the world doesn’t warm should be the second one. More coffee please.

Jerry
October 25, 2010 4:19 am

Archonix says
“Nature doesn’t work like that. You can look at the fluid motion of milk in a cup of tea and the motion of a massive gas cloud nebula in space and describe both using the same basic rules, yet the cup of tea is an event measured in seconds and the motion of that nebula is measured in millions of years.”
Nature has a lot of non-linear responses. Non-Newtonian fluids, for example. The response of the system in that case is very dependent on the speed of application.
I suggest that daily responses to external forcings may partially mask long term changes to forcings.
I restate my question to Willis. Does your example match parameter for parameter with the models? Especially the temporal impulse and response?

Duncan
October 25, 2010 4:20 am

From the first paragraph, I was sure you were writing about the magazine.

Gaylon
October 25, 2010 4:23 am

Tom says:
October 25, 2010 at 2:49 am
The laws you cite have all made spectacular predictions with astounding accuracy over an extended period of time.
The GCMs have also made spectacular predictions, none of which have come to pass and some of which turned out to be diametrically opposed to actual observation over the past 30 years:
increasing Hurricanes,
sea-level rise,
rising temps,
etc,
etc,
etc…

Roger Clague
October 25, 2010 4:26 am

Tom says:
October 25, 2010 at 2:49 am
“The average law works quite well. Same with the kinetic theory of gasses; PV=NkT”
The gas law works well because it is only used for gases.
The substance which most affects the heat and temperature of the atmosphere is H2O. H2O changes phase in the troposphere. It does not respond in a linear way. Vapor, clouds and storms have different physics. Clouds reflect; H2O vapor doesn't. Change of phase involves latent heat. There is no average law for H2O in the atmosphere.
The IPCC theory of linear radiative forcing is wrong.

rbateman
October 25, 2010 4:28 am

The classic example of non-linearity:
The sun takes a typical path relative to the observer over a year's time, and generates a sine.
The temperature lags the actual path and creates a different sine from the path.
The actual storm fronts and pressure cells that pass on by the observer create infinitely complex patterns superimposed upon the 2 sines.
The result of this non-linearity is that it is virtually impossible for the observer to live long enough to see a carbon-copy ( : of any given year.
And we have not included any other superimpositions that nature provides to the weather.
So, what have Global Climate modelers actually achieved?
Global Climate modelers have proven that AI is impossible without non-linear equations.
They have also proven that supercomputer clusters are fundamentally linear and without AI.

John Day
October 25, 2010 4:35 am

> It’s those dam models again!
Let me play devil’s advocate one more time and correct a common misconception that I see over and over again:
There are no ‘model free’ _measurements_ of Nature!!!
What ‘model’ do you ‘consult’ to get your local time and temperature?
“I don’t use a model, I just look at my watch and the thermometer in my patio!”
Ha! You’ve just consulted two ‘models’!
Your watch doesn't show you 'Time'; rather, it is a model based on the integration of tiny 'ticks' or escapements, which model the flow of time. Its accuracy depends on a number of factors: escapement rate, calibration and your visual acuity.
Likewise, your thermometer doesn't show you 'Temperature'. It's another model, which exploits the thermal coefficient of expansion of alcohol or mercury, or the resistive behavior of a thermistor, all non-linear approximations to this phantom concept of Temperature, which ideally exists only in your mind.
So, climatologists have to use models to measure time and temperature, just like we all do, all of the time. The only difference is the uncertainty associated with the measurements. Our clocks and thermometers are generally pretty good models, so we forget they’re models and pretend they represent Something Real.
Climate models have greater uncertainty, so that’s when all the fun begins in these climate blogs, skeptic or otherwise.
Personally, I think a lot of these AGW discussions are like trying to predict heads or tails while the coin is still in the air, but I’m sure someone has a ‘model’ for that too!
😐

Bill Marsh
October 25, 2010 4:43 am

“and excluding cloud-aerosol interaction effects”
Actually I found that phrase to be the most striking when I read the explanation. What they seem to be doing is making the statement, “we want to say that the relationship is linear”, then they proceed to eliminate whatever process does not provide linearity and voila, they have a linear relationship.
Would we accept an accounting statement that a company was ‘profitable’ if the accounting firm stated up front that they ‘excluded any negative transactions’ therefore the company is profitable?
It’s one reason I abandoned feedback analysis. Every ‘model’ I studied started out with the statement, “we eliminated the following processes to make the model mathematically tractable”. I could see no value in producing a model of a physical system that didn’t actually model the system.

Gaylon
October 25, 2010 4:44 am

Jerry says:
October 25, 2010 at 4:19 am
C’mon Jerry,
You're not seriously going to take issue by insisting that the real-world example Willis gives match 'parameter for parameter' with the models, are you?
Don't you think that's putting the 'cart before the horse' just a little bit?
If the model predictions have not come to pass (enlighten us with any of the numerous ones that have, if you can), then they are summarily falsified as having no predictive value. The linear assumption made by the IPCC and others may well be the reason, as Willis points out.
You can't refute circular logic by using circular logic, regardless of what 'the temporal impulse and response' is. We just don't know enough about this "non-linear" and chaotic system to make meaningful predictions about the climate process outside of 72-hour timeframes. I think this is what Willis was implying when he said, "…those guys are artists."

Robinson
October 25, 2010 4:44 am

The world warms and they did nothing despite being warned: they take the blame.
The world warms and they wasted billions because they were warned: they blame the scientists.

Haha Steven, very good. May I add that apart from politicians who majored in Philosophy (and hence have some familiarity with the argumentum ad verecundiam), most politicians are completely clueless about how science works and what its limitations are. Even when they are familiar, they tend to be in awe of it. Richard Dawkins has done much to promote this kind of awe. I'm not criticising him for promoting reason over superstition, but his kind of idealism assumes each scientist has no human frailty and the ethics of a saint. Reading The Hockey Stick Illusion as I am at present, nothing could be further from the truth!

LazyTeenager
October 25, 2010 4:48 am

I could spend all day picking apart the dubious assumptions in this post. There are just so many of them.
Just for starters:
1. It is assumed that the linearity result is based purely on models. The excerpt provided does not say that. That should be checked.
2. The entire argument is based on verbal hand-waving. A lesson that was learnt a very long time ago is that language is not powerful enough to describe complex processes. That's why science and math trump philosophy and gurus contemplating their navels.
3. There is an obvious mismatch between the idealised process described for a day in the tropics and the process of global climate.
As usual I have good reason for being extremely skeptical.

Nick
October 25, 2010 4:59 am

In the first quoted paragraph: "…could be linearly added to gauge the global mean response, but not necessarily the regional response." Does that help, Willis?

Ian W
October 25, 2010 5:03 am

Tom says:
October 25, 2010 at 2:49 am
I don’t like the assumption of linearity. But I don’t find the reasoning in this post very persuasive. I’m not arguing for climate linearity here; I’m arguing that sceptics need to present a well-reasoned position and that this post isn’t it.

No – the argument from the article is simple: the IPCC should base its understanding of the atmosphere on OBSERVATIONS, and not on the IPCC assumptions faithfully implemented in software.
The climate does not have ONE sensitivity. As Willis says, there are huge step jumps in sensitivity dependent on LOCAL conditions. There is no such thing as an average global weather which can then be summed into an average climate. As temperatures at the equator are forced higher, the step jump to negative forcing from severe storms will occur earlier and over more degrees of latitude. This is why hurricane season is in summer time.
But this is all shown by recent satellite metrics that are providing this information and were paid for by the same government agencies that fund the IPCC. As the real-world information is there and paid for, one wonders why the IPCC stays in the computer room and studies its own software. Perhaps because, unlike the satellites, their software programs provide the expected result?

thingadonta
October 25, 2010 5:03 am

It's more non-linear than we thought.
An obsession with linearity is essentially political.
I have also noticed that the same modellers who use linear assumptions, as you mention, also tend to "average out" various datasets, such as proxy temperature reconstructions, and assume the averages of these proxies reveal some kind of meaningful trend. All these averages do is dampen and hide uncertainty in the proxies. Such is the case in Mann et al's more recent paper about the MWP and LIA going back to about 1200 AD. They are using the same linear assumptions projected back into the past as they do for the future, which is also part of the reason behind the hockey stick's flatness back to 1000 AD. 'Averaging different uncertainties' just doesn't work.
There is also another 'linear average' that they assume. They state that if climate sensitivity is high to one variable, then it must be high for all variables, i.e. if climate is sensitive to, e.g., solar variation, then it must also be just as sensitive to CO2. From this they deduce that the common skeptical argument that the MWP was greater than reconstructions such as Mann 1998 only makes things worse, i.e. it creates more projected warming from human CO2. So they have: little warming in the MWP and the CO2 effect is high, or high warming in the MWP and the CO2 effect is high. This way they get to have it both ways. A perverse example of their extreme 'linear' type of thinking.

R T Barker
October 25, 2010 5:07 am

Uh Oh! I am linearly extrapolated in a non-linear world.

Blade
October 25, 2010 5:07 am

Willis Eschenbach [Top Post] says:
“… I would say that their claim of linearity is an extraordinary claim that requires extraordinary evidence.”

Bingo. I would say that anything that shows up as linear is immediately suspect and will likely be quickly debunked (well, we’ll need that raw data first). That is my gut instinct and is based on a complex but abstract reason. I still cannot correctly put it into words, but I’ll try anyway …
I can vaguely remember 9th grade algebra, where you have plotting paper and pencils and rulers (there were no calculators yet and no computers). We would write some basic function and draw the graph. You would begin with x = y or x = y + 1 and so on. At this stage the results were of course nice straight lines. Later, you add exponents and degrees. (And it was no fun drawing something like x³ + 4x, of course many years later you just drop whatever function you want into a TI-xx graphing calculator and a nice graph pops on the screen, but I digress).
The point is even then I suspected something very strange and rigid and synthetic about the concept of a straight line linear graph. Studying electricity and electronics provided me the analog perspective. I believe that what has happened is that many become locked into the artificial, neat and pretty, but synthetic world that early math classes guide you into. In a sense it is the purest digital view of our purely analog world. But, I am still not explaining this right …
It is kind of like what the alphabet, vocabulary and language is versus the complete human voice, they are little pieces of the whole. Math, arbitrary numerical representations and algorithms are like the alphabet making words and languages out of pieces of the infinite number space. Linear functions can get us to little pieces of a sine wave without ever seeing it. Trees meet forest. I get a real Matrix (movie) feeling pondering this.
I have the distinct impression that most statisticians (those in the AGW cabal) are locked into this Matrix (a very simple one) where everything can and must be described with the simplicity of a 9th grade math problem. These statisticians are usually decimated once people show up with experience outside of the classroom (and their parents’ basement). I think this is why we see Meteorologists, Geologists, Physicists, Electricians, Programmers, etc, chewing them up for breakfast. (I am also starting to think I was wrong back in the early 80’s when I first used Lotus and Excel. I figured these were just fantastic steps forward, but now wonder if they are making us lazier and dumber. But I digress again, last time :-).
But my point is really a question, Is there anything that is actually linear? Can a linear graph ever represent something real? This post, particularly the title, Nature hates straight lines, really makes sense to me.

LazyTeenager
October 25, 2010 5:09 am

PolicyGuy says
——————-
nothing but models that perform as they were written to perform
——————-
You know nothing about the models and you are pretending insight where you have none. You have been fooled by your own sophistry.
The models simulate the climate based on the laws of physics, and a number of empirical relationships. No one knows what is going to come out ahead of time, and occasionally the results are surprising. The initial conditions to the models often have random components to see how that affects the range of outcomes.
No one writes thousands of lines of code and spends millions of dollars just to produce predetermined answers. If that was the intent, they could do it in an afternoon with a bunch of print statements.
Did you fail the most important skeptic's test: are the words coming out of my mouth utter rubbish?

stephen richards
October 25, 2010 5:16 am

Tom says
Tom, you miss the point of these laws/models. As with Newtonian physics, these laws only work within the specified limits of classical physics. They are fine for everyday calculations, for ensuring that you have the right fuse in your consumer unit, etc. They cease to mimic reality at the atomic/electron level. This is where the quantum mechanics model takes over. Climate models are so far from this level of modelling that they are no better than a PlayStation game.
Physical models, for that's what the formulae you have quoted are, are tested by physical experiments in order to find their limits, and are then not used to describe 'observed' events beyond those limits.
Climate models have not gone through this final process to my knowledge.

stephen richards
October 25, 2010 5:18 am

LazyTeenager says:
October 25, 2010 at 4:48 am
You are arm waving.

stephen richards
October 25, 2010 5:20 am

LazyTeenager says:
October 25, 2010 at 4:48 am
3. There is an obvious mismatch between the idealised process described for a day in the tropics and the process of global climate.
OK. This is your opportunity to stop waving your arms and explain this phrase in detail.

jessie
October 25, 2010 5:24 am

Decide whether ‘discrete or continuous’
then application of [instruments of] specificity or sensitivity?
Politics can use all the above for effect, they are elected for a fraction of the time.
Science should state up front, they are here, to provide their genius for the benefit of humanity, for which without, we would not be enlightened.
Interesting post, Willis Eschenbach, thank you.

Jimbo
October 25, 2010 5:25 am

“Amplification of Global Warming by Carbon-Cycle Feedback Significantly Less Than Thought, Study Suggests”
http://www.sciencedaily.com/releases/2010/01/100127134721.htm
http://www.nature.com/nature/journal/v463/n7280/full/nature08769.html

cal
October 25, 2010 5:31 am

Although I think Willis is correct that climate sensitivity is unlikely to be linear and independent of temperature, there is a flaw in his argument. The question is whether clearly non-linear, local and short-term negative feedbacks can be used to challenge the linearity claim. Tom raises Ohm's law, V=RI, but his argument about random electron motion misses the point. The real challenge is that posed by capacitance and impedance, which create non-linear behaviour when I is not constant. Such transient behaviour does not disprove Ohm's law. In the climate system the oceans are a huge capacitor, and water generally (through evaporation and cloud formation) a huge impedance. The problem is that clouds are also part of the climate's "resistance". The people who create the climate models cannot model deep-sea currents or clouds, so it is unlikely that any model they generate will contain a relationship which predicts that the overall average level of cloud will change permanently given a new average temperature. Thus a quasi-linear constant radiative forcing is built into the model. One cannot be surprised if it is one of the outputs!
Thus the strongest argument (as Willis and others have pointed out) is that you cannot postulate any relationship (linear or otherwise) without testing whether it holds true in real life. This has not been done.
Finally, I am bemused by the idea that a linear relationship should be postulated at all. The energy radiated by the earth depends on its absolute temperature to the fourth power. So if you increase the radiation received by 4%, the absolute temperature has to increase by only about 1% to compensate. As Tom Vonk points out, this might look linear (particularly measured in degrees C) for very small changes, but it is highly non-linear in reality. Can anyone who knows the models explain why linearity should be considered even remotely likely?
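cal's 4%-to-1% figure can be checked with one line of fourth-root arithmetic (a quick sketch of the Stefan-Boltzmann scaling only; no climate model is implied):

```python
# Stefan-Boltzmann: radiated power P = sigma * T**4, so the equilibrium
# temperature scales as T = (P / sigma) ** 0.25. A 4% rise in absorbed
# radiation therefore raises T by a factor of 1.04 ** 0.25.
ratio = 1.04 ** 0.25
percent_rise = (ratio - 1.0) * 100.0
print(round(percent_rise, 2))  # ~0.99, i.e. roughly a 1% temperature rise
```

The non-linearity is mild for small changes, which is exactly why it can masquerade as linear over a narrow range.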

October 25, 2010 5:34 am

This goes back to Logic 101. The IPCC writes: "GCMs have found no evidence of any nonlinearity for changes in greenhouse gases." They then treat that absence of evidence as if it were evidence of linearity. In effect they argue that because A implies B, not-A implies not-B, which is the classic fallacy of denying the antecedent. This example may help: "If I have the flu, I have a virus; I do not have the flu, therefore I do not have a virus." The form is the same, yet simple logic and real-world sense say the conclusion does not follow.
I think the IPCC statements are either incompetence in action or purposeful obfuscation. Either way, nothing they state can be trusted. Who is really going to examine every sentence for logical fallacies like this?
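The fallacy described above can be made concrete by brute-force enumeration of truth values (a small illustrative sketch, not anything taken from the IPCC text):

```python
from itertools import product

# Denying the antecedent: from "A implies B" it does NOT follow
# that "not A implies not B". Enumerate all truth assignments and
# collect the counterexamples.
implies = lambda p, q: (not p) or q

counterexamples = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if implies(a, b) and not implies(not a, not b)
]

# A=False, B=True satisfies "A implies B" yet violates "not A implies not B".
print(counterexamples)  # [(False, True)]
```

In flu terms, that counterexample is the person with a cold: no flu (A false), but a virus all the same (B true).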

MrPete
October 25, 2010 5:47 am

LazyTeenager – good name. Yes, you are lazy. If you’d actually read the OP carefully, you’d find that the quoted IPCC material does claim the linearity assumption has been tested purely by analysis of models:

Since then, studies with several equilibrium and/or transient integrations of several different GCMs [Global Climate Models] have found no evidence of any nonlinearity…

The IPCC assertion is based on studies of models. Not studies of the real world.
Your skepticism is well founded but misplaced.

NS
October 25, 2010 6:00 am

I guess there are Newtonian assumptions behind the claim, i.e. that increasing the total energy in the system will cause a linear change in the system where the system is uniform. This would assume CO2 is not well-mixed, which is incorrect.

Tom
October 25, 2010 6:05 am

Gosh, that got some backs up…
@Gaylon – I do not dispute the inaccuracy of the GCMs. I dispute the logic employed in this article, specifically the non sequitur summed up as, “Local day-to-day weather does not display linear sensitivity therefore global climate does not display linear sensitivity.”
@Policyguy – Calm down a bit. I am not defending the models – they may be every bit as bad as you say. What I am saying is that Willis doesn’t look into how the models were constructed, or whether an inference of linearity from them is valid – he just removes the citations and assumes the inference is invalid. This is just as much assuming your conclusion as he accuses the IPCC of. Comparing electrons to large-scale current is indeed ridiculous – it was meant to be – just as ridiculous as comparing hour-to-hour tropical weather behaviour with long-term global climate averages. You see the point – why don’t you apply it?
@Clague – I'm not sure why this is relevant. My point was not that there is any connection between gas models and how the climate works, and I'm not sure how you've managed to read it that way.
And W – I don’t disagree. My point is simply that Willis hasn’t shown either that the IPCC used observations or that they didn’t. He has assumed that the models used are completely disconnected from reality, removed the references to make it harder for you to check, and then concluded that the models are disconnected from reality. He has also shown that local, short-term weather effects are not linear or time invariant and jumped to the conclusion that therefore large-scale, long-term weather effects are not linear or time invariant. He is not necessarily wrong (I think he is probably right), but he hasn’t proven his point, either.

tallbloke
October 25, 2010 6:08 am

From: Tom Wigley
To: Kevin Trenberth
Subject: Re: BBC U-turn on climate
Date: Wed, 14 Oct 2009 16:09:35 -0600
Cc: Michael Mann , Stephen H Schneider , Myles Allen , peter stott , “Philip D. Jones”, Benjamin Santer , Thomas R Karl , Gavin Schmidt , James Hansen , Michael Oppenheimer
Kevin,
I didn’t mean to offend you. But what you said was “we can’t account
for the lack of warming at the moment”. Now you say “we are no where
close to knowing where energy is going”. In my eyes these are two
different things — the second relates to our level of understanding,
and I agree that this is still lacking.
Tom.

Jason Calley
October 25, 2010 6:17 am

Tom Vonk says at Oct 25, 3:47 am: "While this issue is indeed a major issue, one has to be careful with the arguments. I have to mention that any parameter Y related to a variable X will answer in a linear way (e.g. Y=a.X) provided that the variation of X is small."
I certainly may be misunderstanding things, but I believe that the linear relationship for small variations of x does not hold in chaotic systems. This is not to say that NO values of x have a linear relationship to y, but only to point out that not ALL values of x have that linear relationship. After all, even chaotic systems have attractors and islands of stability. It is just that interspersed among those regions of linear relationships we have a fine scattering of chaotic areas. Once x crosses into a chaotic region, we no longer have a linear relationship, and the predictive ability of the model breaks down.
Please correct me if I am mistaken on that.
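The point about chaotic regions can be illustrated with the logistic map, a textbook chaotic system (an illustrative sketch only; the parameter r=4 and the starting values are arbitrary choices): a tiny change in x is amplified far beyond any linear proportionality.

```python
def max_divergence(x0, dx, r=4.0, steps=60):
    # Iterate two nearby starting points through the logistic map
    # x -> r*x*(1-x) and record the largest separation reached.
    x, y = x0, x0 + dx
    sep = 0.0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        y = r * y * (1.0 - y)
        sep = max(sep, abs(x - y))
    return sep

# In the chaotic regime (r=4) a perturbation of one part per million
# is stretched by orders of magnitude -- no constant "sensitivity"
# relates the size of the input change to the size of the output change.
sep = max_divergence(0.3, 1e-6)
print(sep > 1000 * 1e-6)
```

For small r (the non-chaotic regime) the same function returns a separation comparable to the original perturbation, which is the regime where Tom Vonk's linearization argument does hold.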

Martin Lewitt
October 25, 2010 6:18 am

Willis,
In addition to the assumption of linearity, there is an assumption of equivalence between forcing types, which is uncritically accepted as shown by the mathematical formula for converting W/m^2 sensitivity to its CO2 doubling equivalent. Even a basic understanding of nonlinear dynamic systems makes it clear that it is unsafe to assume that forcings with different distributions or different couplings to the climate are equivalent. I found a refreshing acknowledgement of this in Hegerl and Knutti's review article:
"The concept of radiative forcing is of rather limited use for forcings with strongly varying vertical or spatial distributions."
and this:
"There is a difference in the sensitivity to radiative forcing for different forcing mechanisms, which has been phrased as their 'efficacy'"
http://www.iac.ethz.ch/people/knuttir/papers/knutti08natgeo.pdf
Unfortunately the "efficacy" is from the work of Hansen, which threw away part of the difference between solar and GHG forcing by using models with simplified oceans and no stratosphere, and coupling GHG to the whole mixing layer of the oceans, when CO2 wavelengths only penetrate microns and solar can penetrate tens of meters.
One consequence of this assumed equivalence is that they can try to calculate model-independent estimates of climate sensitivity using solar and aerosols, and then just translate them to a climate sensitivity to CO2 doubling. Their estimates for solar variability are low, even for the Maunder and Dalton minimums, so their sensitivities to solar forcing end up high, and by equivalence CO2 sensitivity is high. This is why the light that the current solar minimum is shedding on solar variability, especially in the UV range, is so important. Those "model independent" estimates (which often use models, BTW) assume all the solar coupling was purely radiative, while UV couples non-linearly (chemically) through the creation of stratospheric and, to a lesser extent, tropospheric ozone, a greenhouse gas.
The equivalence assumption extends to the models: all the models have to do is match the global average temperature and look like a climate. Models with twice the sensitivity and twice the aerosols are equivalent and "match" each other and the climate. So what if the climate produced twice the increase in precipitation (per Wentz)? The climate is just one 30-year instance. The models have it outnumbered.

October 25, 2010 6:18 am

The rate of energy lost to space with no atmosphere would definitely be non-linear following W=kT^4 where T is the absolute temperature at the surface. With an atmosphere containing clouds and water vapor, T can be for the surface, cloud tops, or water vapor molecules. So what satellites see is a temperature mixture. The rates of evaporation, condensation, freezing, and thawing are factors that tend to control the coefficient k. CO2 has no measurable effect. http://www.kidswincom.net/CO2OLR.pdf

Robinson
October 25, 2010 6:19 am

John Day said:

Climate models have greater uncertainty, so that’s when all the fun begins in these climate blogs, sceptic or otherwise.

Come now. To say there are no “model free” measurements of nature is entirely correct. But if I make a clock from a dandelion and tell you that it measures time – one hour passes for every petal I pick off – you would be justified in questioning the efficacy of my method, notwithstanding the accuracy of my petal picking. Uncertainty isn’t the only problem with climate models, as you well know.

Bill Illis
October 25, 2010 6:24 am

The Stefan-Boltzmann equations, which are the fundamental equations governing energy and temperature in the universe and on Earth, are strongly non-linear (temperature enters to the fourth power), not linear.
The IPCC top-of-the-atmosphere forcing of 3.7 Watts/m2 has to translate into an extra 16.0 Watts/m2 at the surface to meet the 3.0C-per-doubling proposition.
Depicted here.
http://img685.imageshack.us/img685/6840/sbearthsurfacetemp.png
http://img213.imageshack.us/img213/2608/sbtempcperwatt.png
The theory starts out by using this formula and then abandons it half-way through and starts to adopt linearity.
I think they are assuming doubling CO2/GHGs results in an extra +3.7 W/m2 at the tropopause; feedbacks then add another +7.8 W/m2 and the temperature at the tropopause rises +3.0C (using the Stefan-Boltzmann formula). The IPCC then switches to linearity and assumes the lapse rate stays intact at 6.5C/km, so the surface warms by the same +3.0C. If they continued to use the Stefan-Boltzmann formula, the surface would warm much less than the tropopause.
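The 16.0 W/m2 surface figure cited above is easy to verify from the Stefan-Boltzmann law itself (a sketch assuming a 288 K mean surface temperature and ideal black-body emission; emissivity and feedbacks are ignored):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_flux(t_kelvin):
    # Stefan-Boltzmann law: F = sigma * T**4
    return SIGMA * t_kelvin ** 4

# Extra surface flux needed to sustain a 3.0 C warming of a 288 K surface
delta_f = blackbody_flux(291.0) - blackbody_flux(288.0)
print(round(delta_f, 1))  # ~16.5 W/m2, in line with the 16.0 W/m2 above
```

Note that the same +3.7 W/m2 applied at a colder effective emission temperature yields a smaller temperature change, which is the fourth-power non-linearity at work.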

DirkH
October 25, 2010 6:26 am

“I gotta confess, that wasn’t the first time I’ve walked away from the IPCC Report shaking my head[…]”
Oh come on Willis. They’re kids. They’ll grow up and learn a thing, maybe.
http://nofrakkingconsensus.wordpress.com/2010/10/22/an-even-younger-senior-author/

Francisco
October 25, 2010 6:34 am

Tom says:
“the randomness of small-scale weather processes doesn’t disprove the overall theory of linearity any more than observing individual electron tunnelling in silicon disproves Ohm’s law.”
==========
Ohm's law holds only for so-called ohmic materials (metals), which exhibit roughly constant resistance at roughly constant temperature, and even then it is only a good approximation.
Most materials in the real world thumb their nose entirely at Ohm’s law by showing all kinds of non-constant (i.e. voltage-dependent) resistance/conductance.
I’ve heard biologists aptly refer to Ohm’s law as “Ohm’s dream”.

simpleseekeraftertruth
October 25, 2010 6:44 am

It could be that the straight line is only a tangent: but they would have thought of that wouldn’t they?
🙂

Neil Jones
October 25, 2010 6:48 am

WYSIWYG Science – What You Seek Is What You Get.

October 25, 2010 6:59 am

The IPCC wrote “…excluding cloud-aerosol interaction effects…”
I read “… ignoring the dog we know where this tail is going…”

October 25, 2010 7:00 am

Forcing is generally taken to mean downward radiation measured at the TOA (top of atmosphere). The IPCC says that when TOA forcing changes, the surface temperature changes linearly with that TOA forcing change.

Willis: Thanks for another illuminating posting… slowly, slowly the seven veils are being stripped away… only to reveal an IPCC emperor with no clothes…
The thing that makes me smile about the concept of climate sensitivity is that it means the SUN must have been responsible for driving the surface temperature up during the Medieval Warm Period and down during the Little Ice Age.
The Intergovernmental Panel of Climate Clairvoyants just make up the rules as they go along… nothing hangs together… circular logic… conflicting rules… assumptions… fudged data… bogus models… propaganda… evasion… conspiracy… deception… misdirection… and lies… my oh my… what wonders they can see in their crystal balls…. Climate Science has become an oxymoron.
The typical tropical day clearly illustrates the non-linear climate sensitivity for a specific spot on the globe… and let’s hope there is not a cold northerly wind or a hot southerly wind that will affect their observations… and when they get to the night they see temperatures fall over time… surely they know the basic science behind the non-linear Cooling Curve http://en.wikipedia.org/wiki/Cooling_curve
Even if you managed to establish the climate sensitivity for a specific spot on the globe, it would still vary up the air column (say, up to tree height or beyond) and by season for that spot… and how about the local area… it can be different 100 meters away, let alone 100 kilometers… and trying to say that somehow they have established a global climate sensitivity is simply beyond comprehension… let alone that it is linear… so now they have morphed into the Intergovernmental Panel of Climate Crazies.

Bruckner8
October 25, 2010 7:01 am

I think the responses here are missing the point entirely. (Understandable, considering the effort Willis went through to show why he thinks these ideas are non-linear.)
To me, the important takeaway is: THEY DID NOT USE ANY REAL OBSERVATIONS.
Once I realized that, I saw that their entire use of models was completely futile. Models are meant to be tested against reality to increase their validity. Therefore, a model is practically ASSUMED to be programmed using reality-based observations as its #1 tenet. Willis showed this assumption to be false.
I couldn’t care less whether this or that forcing is PROGRAMMED to be linear or not… but I am deeply concerned that their processes led them to think it was OK to build such a program without a correct scientific process: starting with observation and testing against observation!

Doug in Seattle
October 25, 2010 7:02 am

Willis, I applaud you on finding this particular pea in the IPCC shell game. The next step is to find an alternative explanation that better describes climate sensitivity.
I think Tom’s argument above regarding averaging of complex systems has some validity. In hydrogeology we use similar empirical rules to characterize and calculate groundwater flow when we know that at smaller scales groundwater acts quite differently.
I am not a big fan of invoking chaos as an explanation. I suspect that chaos is too often used as an explanation when something is complex and therefore too troublesome to figure out. Natural phenomena appear to me to exhibit a form of symmetry at all scales. This seems to argue against chaos.

Alex the skeptic
October 25, 2010 7:18 am

Two centuries ago, the river Thames used to freeze over. Then it didn’t any more. What went wrong? Was it linearity? Then the warmth came, and these last winters England suffered cold snaps such as this one: http://news.bbc.co.uk/2/hi/in_depth/8447023.stm when the UK mainland was 100% covered in snow. Also, this 2009/10 winter the northern hemisphere had the largest land snow cover ever recorded. Is this linearity? Of course not. It’s called climate change. The climate change deniers are not us but them: those who are playing around with multimillion-dollar computer games funded by our taxes. The IPCC are the climate deniers, because it’s they who deny that climate is continuously changing. They say it did not change in the past, but does now, due to my breathing out CO2.
Now the AGWers want to control the climate by reducing or adding CO2. They want to software-design our climate, but Anthropogenic Climate Design will never work.
Great post Willis.

Kevin_S
October 25, 2010 7:25 am

I love it. This paper would be akin to me building a model of a tugboat and then calling it the U.S.S. Missouri, and when someone points out that it isn’t, well, I have a model saying it is, therefore there is a problem with the actual real-world observations.

Rhys Jaggar
October 25, 2010 7:27 am

Tell the IPCC to go look again at Hooke’s Law.
What you’ll see is that there are CERTAIN CONDITIONS WHERE LINEARITY IS APPROPRIATE, but equally, OTHERS WHERE IT IS NOT.
This is the reality of nature.
As is the possibility, ubiquitous in biology, that the same forcing agent can elicit completely different types of responses depending on concentration.
As an example ‘tumour necrosis factor alpha’, named thus for its ability to cause tumours to die and necrose (become dead tissue) is actually a stimulant of capillary growth at much lower concentrations than those which cause tumour death.
Similarly, the effect of one forcing agent AT PARTICULAR CONCENTRATIONS OF OTHER FORCING AGENTS may be different at the same concentration IF OTHER FORCING AGENTS’ CONCENTRATIONS ARE SIGNIFICANTLY DIFFERENT.
I think the mathematicians call it ‘partial differential equations’…….

RC Saumarez
October 25, 2010 7:28 am

I agree with Tom Vonk. The processes may be non-linear, but for small variations the relationship between variables may be linearisable, as in the Galerkin method. This is a process that pervades engineering mathematics, and it is fine as long as you understand what is meant by a small variation in terms of the non-linearity.
If forcings change by 2%, one may not be able to distinguish non-linearity; if they change by 30%, 50% or 100%, deviation from linearity may become apparent. Another property of linear systems is superposition, so that the temperature response to the sum of forcings equals the sum of the responses to the individual forcings.
Given that the overall effects in climate may be due to relatively small changes in forcings, and that non-linearities such as those described in this post are local, linearity may not be a bad assumption, given the lack of experimental data demonstrating overall non-linear behaviour.

October 25, 2010 7:28 am

Oh dear. All differentiable functions are linear to first order in small quantities. It doesn’t matter how convoluted and nonlinear, or even chaotic, the curve might be, or how strong the feedbacks, it is still linear for small changes. What we are talking about here are small changes in the energy balance, of a few parts per thousand (~1W/m2 in ~1kW/m2) . So the predictable or average component of the response of the global temperature to such changes cannot help but be close to linear. The only way that could fail is if the response is pathological – either about to run into the buffers or to fall off a cliff in a (mathematical, if not physical) catastrophe – which only the most absurdly alarmist would claim.
By the way, the term “forcing” is wrong; a forcing is an oscillation imposed upon a system other than at its resonant frequency.
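The first-order point above is easy to check numerically (a sketch I am adding for illustration, using the Stefan-Boltzmann flux σT⁴ as the nonlinear curve; the 288 K base temperature is just a round number for the global mean): the linear approximation is excellent for perturbations of a fraction of a percent, and degrades as the perturbation grows.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def flux(t):
    """Blackbody flux F = sigma * T^4 -- a decidedly nonlinear function."""
    return SIGMA * t**4

def flux_linear(t0, dt):
    """First-order Taylor expansion around t0: F(t0) + (dF/dT)|t0 * dt."""
    return flux(t0) + 4 * SIGMA * t0**3 * dt

t0 = 288.0  # K, rough global mean surface temperature
for dt in (0.3, 3.0, 30.0):
    exact = flux(t0 + dt) - flux(t0)
    linear = flux_linear(t0, dt) - flux(t0)
    print(f"dT={dt:5.1f} K  relative error of linear approximation: "
          f"{abs(exact - linear) / exact:.2%}")
```

For a 0.3 K perturbation the linear approximation is off by well under one percent; for 30 K it is off by more than ten percent, which is the sense in which "small changes" matters.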

October 25, 2010 7:35 am

Under certain conditions, the “global mean response” might scale approximately linearly with the “global mean forcing”, but clearly this assumption cannot possibly hold under all conditions. Imagine, hypothetically, a “Snowball Earth” completely covered in ice. Obviously one can make it colder by reducing solar luminosity or something. But that cooling cannot be greatly augmented by ice albedo feedback, as there is nowhere for additional ice to form to reflect more sunlight. If, however, we warm the Earth a little bit when it is extremely cold, the ice melting at the equator would create a strong positive feedback. Likewise, a totally ice-free Earth would not have an ice albedo feedback toward warming, but it would toward cooling. So that feedback depends not only on the initial conditions, but also on the direction of temperature change.

An even more interesting case is water vapor in extreme cold conditions (indeed, at sufficiently cold temperatures, all atmospheric GHGs): if the Earth is cooled down by a very large amount, these gases have to condense out of the atmosphere, eliminating their warming effect and acting as a positive feedback toward more cooling. However, while the temperature limits the amount of water vapor the atmosphere can hold, a warmer atmosphere does not HAVE to hold more water vapor, while a much colder one must lose water vapor. So this feedback must be strongly positive at very cold temperatures, but it need not be at warmer ones.

The examples I can think of, much like Willis’s, suggest that the “sensitivity” will tend to be higher at lower temperatures. My examples, however, would make large differences mainly in extreme climate states, and for them the linearity assumption may be okay at conditions similar to the present. But Willis’s example is intriguing because it is not merely a hypothetical curiosity; it may make a big difference in the temperature range associated with the present, real case.

Steve Keohane
October 25, 2010 7:37 am

LazyTeenager says: October 25, 2010 at 5:09 am
PolicyGuy says
——————-
nothing but models that perform as they were written to perform
——————-
You know nothing about the models and you are pretending insight where you have none. You have been fooled by your own sophistry.

Pot, meet kettle. You don’t know much about computers, do you? I assumed those who grew up with them might understand computers’ limitations; I guess I was wrong. Or maybe the teenager part of your moniker is a fib.

October 25, 2010 7:39 am

Apologies. I see that Tom Vonk has already said something similar to my above comment. That’ll teach me to jump in without reading to the end of the comments first.[yup]

anna v
October 25, 2010 7:41 am

Tom says:
October 25, 2010 at 2:49 am
There are a lot of scientific laws that claim linearity, where if you look closer things don’t behave anything like what the law describes. Ohm’s law says V=IR; if you look closely, you find that electrons are jumping all over the place. Some will go backwards, some will go sideways, the majority will go in the right direction but with a wide spread of velocities. But when you zoom out a bit, the average law works quite well. Same with the kinetic theory of gasses; PV=NkT at a wide scale, but at a smaller scale you can see that actually atoms are crashing around in entirely random ways and locally concepts like “pressure,” “volume” and “temperature” don’t make a lot of sense. So you can’t point to local weather effects and claim that they disprove the aggregate average theory of linear climate sensitivity; the randomness of small-scale weather processes doesn’t disprove the overall theory of linearity any more than observing individual electron tunnelling in silicon disproves Ohm’s law.
I have been discussing this often, more recently in the Spencer thread.
Physics formulations have regimes where they hold.
Meta levels. An example from everyday life is
a) the alphabet level ( or sounds if you prefer)
b)words
c) sentences with structure
b is a meta level of a and c is a meta level of b. Each is founded on the previous, but the rules are completely different.
You will not argue that a poem has too many letter “e” or needs many “a” to carry meaning.
In the same way, the current frameworks of physics, which are developed with rigorous mathematics, start with quantum mechanics, go into quantum statistical mechanics, and from there into thermodynamics. A parallel line, before the 20th century, started with atoms, went into statistical mechanics, and then into thermodynamics.
Each level has its own rigorous mathematics. There are theorems that connect the quantities from one level to the metalevel, and there are conservation laws that are strict at all levels. The mathematics is different at each level, so one could claim or show that linearity holds at a higher level while not at the lower one, but it is not really for the same quantities. It is like mixing the alphabet with words.
The difference with climate modeling is that climate has not been shown to be a different framework than weather. In particular, in the infamous GCMs the weather models, which cannot predict the weather reliably for more than a few days, are taken over and used for climate, with averages replacing a lot of the variables that weather modeling uses. There is no different mathematics; the same equations are solved.
Secondly, whether proving linearity from a model is justified depends on the details of the models and whether they do actually assume linear sensitivity. You would need to follow all those references you deleted to find out. Perhaps the model is based on some low-level empirically-derived theory that allows you to infer whether the aggregate behaviour is linear or not; we can’t tell from the text you cite. Perhaps they actually model all the thunderstorm creation processes in the tropics, and snowfall at the poles, based on observed data, and find that when you add it all up it comes out linear. The kinetic theory of gasses is again a great example, though in reverse; the aggregate gas law was developed empirically and the model of an ideal gas was developed by reasoning about what underlying processes might produce that aggregate behaviour. There was no direct evidence that gasses were made up of tiny particles when the model was developed; that didn’t make it wrong. The model agreed with the empirically-derived law, and was successful in explaining it. It turned out to in fact be pretty much right. If you followed your reasoning in the mid-19th century, you would decry the KTOG as fairy-tales that no sane man would believe and you would later look very foolish.

When one replaces the variables of an equation by an average value, which is what the climate models are doing, one is really expanding in a perturbation series, constant + a*X + b*X^2 + …, and keeping only the first terms.
a is related to the average of the value of the function.
The weather predictions are no good after N time iterations because non-linearities kick in, since solutions of coupled non-linear differential equations are highly non-linear; this is even more true for climate projections, which have more assumed linearities.
One does not need to go to the references to know GIGO.
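The averaging point can be shown in a few lines (my sketch, using Stefan-Boltzmann T⁴ emission; the two temperature sets are made-up values chosen to have the same mean): for a nonlinear function, the average of the function is not the function of the average, which is exactly the information the first term of the perturbation series throws away.

```python
import statistics

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

# Two hypothetical pairs of locations with the SAME average temperature (288 K)
temps_uniform = [288.0, 288.0]
temps_spread = [268.0, 308.0]  # same mean, larger spread

def mean_flux(temps):
    """Average of the nonlinear function: mean of sigma*T^4 over locations."""
    return statistics.mean(SIGMA * t**4 for t in temps)

# The nonlinear function of the average: sigma * (mean T)^4
flux_of_mean = SIGMA * statistics.mean(temps_spread) ** 4

print(f"flux of mean T     : {flux_of_mean:.1f} W/m^2")
print(f"mean flux (uniform): {mean_flux(temps_uniform):.1f} W/m^2")
print(f"mean flux (spread) : {mean_flux(temps_spread):.1f} W/m^2")
```

Because T⁴ is convex, the spread case emits measurably more than the uniform case despite the identical average temperature, so a model run on the average alone systematically misses flux.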

chris y
October 25, 2010 7:46 am

Assuming that climate sensitivity is a constant is an assumption of stationarity. That is, the climate sensitivity does not change with time, provided the changes over time are ‘small’, whatever small means. This is the same assumption inherent in all of paleoclimatology. For example, tree rings in special trees are assumed to reflect changes in local temperature, and that the linear relationship between tree ring (width or density) and temperature is the same over time.
Willis has provided an example of why climate sensitivity is not stationary. The very existence of the tree ring ‘divergence problem’ proves that the relationship between tree rings and local temperature is not stationary, and that paleoclimatology based on tree rings needs to come to grips with its own ‘UV catastrophe’ moment.

John Day
October 25, 2010 7:58 am

Robinson said:
“… if I make a clock from a dandelion and tell you that it measures time – one hour passes for every petal I pick off – you would be justified in questioning the efficacy of my method, notwithstanding the accuracy of my petal picking. Uncertainty isn’t the only problem with climate models, as you well know.”
Yes, the other major facet of a model is its _correctness_, i.e. can it consistently predict the future (or explain the past)? [We need _consistency_ because a stopped clock is accurate twice a day etc]. A third facet may be _efficiency_, with respect to resources consumed by the modeling.
So, assuming that your dandelion-clock model is correct, i.e. it can consistently measure the passage of time, my only other major concern would be the _uncertainty_ (‘error’) of each time measurement.
So, applying this to AGW: if we agree that under certain conditions CO2 can exert some positive radiative forcing, then our concern should be calculating the _certainty_ that this forcing will (or will not) lead to catastrophic global warming.
For the record, I have not seen any convincing proofs of CAGW. If it turns out that some non-catastrophic warming can be traced to man-made activities, we will be enlightened (but not alarmed).
😐

son of mulder
October 25, 2010 8:02 am

Any linearity will be the result of the physics. Any physical system will obey a least-action principle, which may lead to a linear or a curved trajectory for the system in question, so no assumption of linearity can be allowed without experimental verification. As it is clearly all too complex to integrate lots of partially understood physics, with much missing data, over the surface of the earth and through its atmosphere and oceans, the following is the needed experiment.
Use satellites to measure average incoming and outgoing radiation energy flux, just outside the top of atmosphere, to and from the earth over a contiguous 11, then 22, then 33 years, and calculate the balance. That will give reasonable and improving first-order estimates of warming or cooling of the planet over the period, after excluding the geothermal contribution of the planet. Do we even know the geothermal contribution?
Is anyone doing such measurements? Are there any results available?

HankHenry
October 25, 2010 8:04 am

Very interesting. Of course the answer for the IPCC team would be that they admit in the IPCC that clouds are poorly understood. I see no reason to dispute that point with them because it’s an admission of such a large lack. Unhappily few journalists are savvy enough to recognize what a large uncertainty it means for all the sensational warnings. Clouds are poorly understood and it’s not a trivial thing that they are. Stefan-Boltzmann works well in controlled laboratory conditions, but why would anyone think that stepping back from earth a few thousand paces and squinting your eyes makes something like global cloud cover a stable average that can be thought of as constant. I believe it’s most likely that global albedo, because of the changeability of cloud cover, is not anything so stable as the level of the sea.

According to what I read, observations of earthshine on the moon from Big Bear Solar Observatory have already established that “Earth’s average albedo is not constant from one year to the next; it also changes over decadal timescales. The computer models currently used to study the climate system do not show such large decadal-scale variability of the albedo.” http://www.universetoday.com/9611/decreasing-earthshine-could-be-tied-to-global-warming/
This result was reported in a May 28, 2004, Science article titled, “Changes in Earth’s Reflectance Over the Past Two Decades.” Interestingly the article I found reporting on this had recast things to fit into the sensational kind of global warming headline we are more familiar with: becoming – “Decreasing Earthshine Could Be Tied to Global Warming.”

Dr T G Watkins
October 25, 2010 8:20 am

Another telling post from Willis.
Assume linearity by ignoring the unhelpful bits that they know to be non-linear then spend millions showing the linear models produce linear results. No empirical data to back-up results. Brilliant!
Even ignore data that shows they are wrong – Lindzen/ Choi etc.
Please can our politicians be bombarded with e-mails imploring them to explore the ‘evidence’ for AGW.

Charlie A
October 25, 2010 8:27 am

This article reads like someone who is trying to use debating tricks to win an argument rather than someone who is trying to discuss the science.
I admire your Thermostat Hypothesis and predict it will turn out to be an important effect.
On the other hand, random sniping and potshots at the IPCC don’t move along the discussion very much. Although everything at some point ends up being non-linear, the simplifying assumption of linearity is a very useful tool when used over limited spans. Assuming a nearly linear relationship between forcing (over some period) and the resulting temperature change is not incompatible with feedback effects like your Thermostat Hypothesis.
Your argument isn’t that much different than trying to reject the global average temperature time series as being meaningless because it doesn’t reflect either day to day variations or the spatial variations across the globe. The global average temp indeed is not an accurate depiction of the temperature anywhere on the globe, but it is still a useful metric.

October 25, 2010 8:28 am

Willis,
I think you misunderstood the linearity assumption.
“Reporting findings from several studies, the TAR concluded that responses to individual RFs could be linearly added to gauge the global mean response,”
They are talking about the process of combining various forcings, NOT the dynamics of a particular forcing.
For example, the forcing of CO2 would be X, the forcing from volcanoes would be Y.
Question: can you add these forcings LINEARLY? That is the question they are addressing. And they are not looking at the transient response but the equilibrium response. So, for example, if you have a positive forcing increase from, say, TSI of 32,
and you have a negative forcing from, say, volcanoes of -16, can you estimate the combined effect of these individual RFs by merely summing them?
Yes. There is no reason to think that the forcing from volcanoes will drive the forcing from the sun down. You can merely add them to get the net forcing: 16. And there is no reason to think that a volcanic eruption will impact the core of the sun and drive its forcing higher. You can merely sum them. That is what they mean by summing the contributions of individual RFs to get the total net forcing. Now, if an increase in TSI DROVE more volcanoes, or if increased TSI changed the way particles reflect light, then adding those two forcings would not give you the correct answer.
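The additivity being described can be written down in a couple of lines (a sketch of the assumption itself, not of any actual model; the forcing values are the hypothetical ones from the example above, and the interaction term in the second function is purely hypothetical, to show what would break the linear sum):

```python
def net_forcing_linear(forcings):
    # The assumption under discussion: individual radiative forcings
    # (W/m^2) simply sum to a net forcing.
    return sum(forcings)

tsi_increase = 32.0  # hypothetical positive forcing, W/m^2
volcanoes = -16.0    # hypothetical negative forcing, W/m^2

print(net_forcing_linear([tsi_increase, volcanoes]))  # 16.0

def net_forcing_coupled(f1, f2, interaction=0.1):
    # Hypothetical failure mode: if one forcing modified the effect of
    # another (e.g. aerosols changing how extra sunlight is reflected),
    # a cross term would appear and simple addition would be wrong.
    return f1 + f2 + interaction * f1 * f2
```

The linear version is superposition; the coupled version is what you would need if, as in the counterexamples above, one forcing drove or modulated another.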

Tom Davidson
October 25, 2010 8:49 am

As an analytical chemist, I have built a career around relating mathematical models to real-world measurements, and the only place I have found the assumption of linearity to be reliable is as an approximation during interpolation between two actual measurements of knowns (standards), and even there one often finds a deviation from linearity that is revealed by careful measurements. The Beer-Lambert Law, the physical law that relates the amount of energy absorbed by a medium to the concentration of the absorbing species (think CO2 in air) is a *logarithmic* law.

John Day
October 25, 2010 8:57 am

Steven Mosher said:
> I think you misunderstood the linearity assumption.
> The are talking about the process of combining various forcings
> NOT the dynamics of a particular forcing.
I agree. The units of forcing are watts per square meter, i.e. energy normalized over time and space. So it must be additive in the sense that energy must always be conserved, i.e. First Law of Thermodynamics.
Similar confusion arises when students are first told that the Fourier Transform is a linear operator. “But it can’t be linear because it’s summing cosine and sine functions which are highly non-linear”.
Yes, sin() and cos() are (highly) non-linear mappings, but their contributions to the Fourier Transform amount to the summations of little pieces of data ‘energy’ at different frequencies.
“So you’re saying the FT is equivalent to performing a single linear matrix multiply on some time-domain signal?”
Yes, and if you’re still in doubt about this, check out MATLAB’s fftmatrix.m command:
http://www.mathworks.com/moler/ncm/fftmatrix.m
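For anyone without MATLAB, the same two facts are easy to verify in Python with NumPy (a minimal sketch; `np.fft.fft` is NumPy's FFT, and `F` below is the standard unnormalized DFT matrix with entries exp(-2πi·jk/n)):

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
a, b = rng.standard_normal(n), rng.standard_normal(n)

# Linearity: the transform of a sum is the sum of the transforms.
assert np.allclose(np.fft.fft(a + b), np.fft.fft(a) + np.fft.fft(b))

# The DFT is literally one matrix multiply: F[j,k] = exp(-2*pi*i*j*k/n).
jj, kk = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(-2j * np.pi * jj * kk / n)
assert np.allclose(F @ a, np.fft.fft(a))

print("DFT matrix multiply matches np.fft.fft")
```

So even though the basis functions are sines and cosines, the transform itself is a single linear operator applied to the data.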

October 25, 2010 8:58 am

LazyTeenager says:
October 25, 2010 at 5:09 am
“No one writes thousands of lines of code and spends millions of dollars just to produce predetermined answers… are the words coming out of my mouth utter rubbish.”
Yes.

Steve Oregon
October 25, 2010 9:00 am

“They are so far into their models that they actually are using the linearity of the model results to justify the assumption of linearity embodied in those same models … breathtaking”
Couple that with the endless & baseless attributions of observations to AGW and we have a thoroughly defective climate science arena.
With more concocting than scientific measuring occurring in climate science we are facing utter chaos for years to come.
So much fabrication has been and is being established as science that it will take many years to undo if at all possible.
How does science purge itself of wrongdoing that is this massive?
Are there any models showing that it can even be done?

Tom
October 25, 2010 9:08 am

@Francisco – another one who says the same as me but draws utterly the wrong conclusion. Ohm’s law works nicely at the scales encountered for general circuit design. Or does no piece of electronic equipment actually work and it’s only me who hasn’t noticed? Biologists can say what they like, I suspect referring to very small scale (intra-cell etc) processes; engineers will keep on using it for circuit design and succeeding.
@anna v – nice to have someone interact seriously with ideas. You refer to “the climate models” as though every climate model was the same, but the IPCC report refers to only two or three papers specifically to make this point. Those papers might describe models that are extremely intricate interactions of well-known, well-understood physical processes, that show that in the aggregate, simulated over a model of the earth’s surface, those processes work out to a linear (or roughly linear, or linear over the range we are likely to care about) climate sensitivity. My point is not that they do or don’t, (I am too busy/lazy to go and check) but that just saying, “OMG, Models!” doesn’t prove it either way.
@Charlie A – yes, what I was trying to say, I think.

Richard M
October 25, 2010 9:14 am

LazyTeenager says:
October 25, 2010 at 5:09 am
No one writes thousands of lines of code and spends millions of dollars just to produce predetermined answers. If that was the intent they could do that in an afternoon with a bunch of print statements.

A bunch of print statements would be easy to falsify by skeptics. Is this not too obvious for you to understand? A million or more lines of code is considerably more difficult.
are the words coming out of my mouth utter rubbish.
At least your use of “my” was perceptive.

October 25, 2010 9:17 am

It seems to me, Willis, you have illustrated one of the great flaws in our mathematical efforts to forecast reality: in reality nothing is linear, but computer models want to predict linear results. I have been forecasting weather for over 50 years. Hour to hour, day to day, week to week, season to season, year to year, nothing is linear. I have been playing poker for 30 years. The card odds are not linear. I have been aging for 76 years. It has not been a linear process. Everything is non-linear.

Milwaukee Bob
October 25, 2010 9:18 am

John Day said at 3:42 am
You’re correct, Willis, the relationship between temperature and forcing is not exactly linear. But all _observed_ processes tend to be non-linear because of process and measurement ‘noise’.
“not exactly”? “tend to be”? Would that be like: you’re not exactly pregnant, but you possibly could tend to be?
But having said that, most non-linear processes can be approximated by integrating enough tiny linear steps.
“approximated”? You mean modeled… don’t you? And I agree – if the non-linear (analogue) process is simple and controlled with fixed and precisely known I/O values – which is why global weather is impossible (at this time) to “approximate” (digitally model).
I don’t see where proving this relationship doesn’t exactly hold on a hourly basis necessarily proves or disproves any climate theory.
Absolute # 1. – Climate does not exist in reality. It is a human-conceived condition that exists only in the collective human consciousness, as a human-AVERAGED abstraction of specific weather conditions over a relatively longer period of time than any specific weather condition exists, and it is impossible to describe without the use of specific weather terms.
Absolute #2. – Climate does NOT drive weather. (See Absolute # 1 for the reason why.)
Absolute #3. – When considering all “climate” (globally averaged weather) theories, see Absolutes 1 and 2.
And proof of the above was indirectly provided in your second comment at 4:35 am: all human “measurements” are “models”. And all “models” are concepts, interpretations of what we perceive to be reality. Although real to our minds, THEY are NOT reality themselves.
The term “Climate model” is a misnomer. You cannot model climate! And it’s not that you cannot model a concept, it’s because modeling an AVERAGE of anything that is constantly changing (much less a massive and highly dynamic analogue system that is 99% unmeasured) is logically impossible. So, I can “model” weather or a part thereof, say with a thermometer (per your example), and with “real” human experience I can predict, to a limited degree of accuracy, that the “model” will show a change over a specific period of time. We can also say with absolute certainty, based on that same “real” human experience, that there is no instantaneous effect on my “model”, by your “model” of exact same type, half way around the globe. Yes, delayed effect may occur. But not by the “model” – by reality. And therein lies your modeling problem. Linking a singular effect of a virtually unmeasured system through time and space. Let me know when you think you have accomplished that task.
In the mean time I give you one more absolute –
Absolute #4. – Human summarization of global weather into the mythical thing called “climate” is totally meaningless to the individual. (See Absolute #2 for the reason why)

EthicallyCivil
October 25, 2010 9:27 am

Nature abhors a vacuum… that’s why it’s so dusty!

October 25, 2010 9:28 am

Charlie A says:
October 25, 2010 at 8:27 am
The global average temp indeed is not an accurate depiction of the temperature anywhere on the globe, but it is still a useful metric.

A useful metric for what? Manipulating? Fudging? Make believe?
Or perhaps all these global averages have had to be created and sanctified because that is all the simplistic GIGO Computer Models can use.

Paddy
October 25, 2010 9:39 am

Willis: A straight line may be the shortest distance between two points in 2-dimensional geometry. However, in our (at least) 3-dimensional world, the great circle route is always shorter than the rhumb line when navigating routes longer than the distance to the horizon. Straight lines are shorter only when measuring distances inside the circumference of our planet. Curves are beautiful and reflect perfection in nature.

Policyguy
October 25, 2010 9:42 am

LazyTeenager says:
October 25, 2010 at 5:09 am
Quite a mouthful. Hardly warrants attention, except that you try to give an impression that is very off-base.
The development and appropriate use of models is well known. Before reliance, there is testing and tuning to determine their limits. These have never been properly tested against the correct empirical relationships and the correct databases. The purpose of the original post was to demonstrate that IPCC relied upon a circular correlation between models, not data, to confirm an important linear relationship that has yet to be demonstrated.
My characterization of these models was brief, but so was the comment. I stand by it.

David L.
October 25, 2010 9:45 am

Tom says:
October 25, 2010 at 2:49 am
“I don’t like the assumption of linearity. But I don’t find the reasoning in this post very persuasive. I’m not arguing for climate linearity here; I’m arguing that sceptics need to present a well-reasoned position and that this post isn’t it. In two parts:
There are a lot of scientific laws that claim linearity, where if you look closer things don’t behave anything like what the law describes. Ohm’s law says V=IR; if you look closely, you find that electrons are jumping all over the place. Some will go backwards, some will go sideways, the majority will go in the right direction but with a wide spread of velocities. But when you zoom out a bit, the average law works quite well. Same with the kinetic theory of gases; PV=NkT at a wide scale, but at a smaller scale you can see that actually atoms are crashing around in entirely random ways and locally concepts like “pressure,” “volume” and “temperature” don’t make a lot of sense. So you can’t point to local weather effects and claim that they disprove the aggregate average theory of linear climate sensitivity; the randomness of small-scale weather processes doesn’t disprove the overall theory of linearity any more than observing individual electron tunnelling in silicon disproves Ohm’s law.”
Actually you have it backwards. Ohm’s law and the ideal gas law are linear in small ranges, not as the entire picture. It’s when you go to extremes that the models break down. PV=nRT works around STP (standard temperature and pressure) but does a poor job at very low or very high pressures. The reason it doesn’t work is because of significant interaction terms which exist in the real world but are almost always disregarded when scientists create models. Others have tried to come up with gas laws that take into account compressibility factors or interactions of molecules, but these mostly fall back on empirically derived factors.
The climate models are sorely lacking in any understanding of the important interactions between factors which I believe is a main point of Willis’ excellent article.
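David L.’s point that PV=nRT only holds over a limited range is easy to check numerically. Here is a minimal Python sketch comparing the ideal gas law with the van der Waals equation for CO2; the van der Waals constants are standard textbook values, and the particular volumes and temperature are my own choices for illustration, not anything from the thread:

```python
# Ideal gas law vs. van der Waals for CO2: a "linear" law that only
# works in a limited range, breaking down as interactions matter.
R = 0.083145           # gas constant, L*bar/(mol*K)
A_CO2 = 3.640          # van der Waals 'a' for CO2, L^2*bar/mol^2
B_CO2 = 0.04267        # van der Waals 'b' for CO2, L/mol

def p_ideal(n, v_litres, t_kelvin):
    """Ideal gas pressure in bar: P = nRT/V."""
    return n * R * t_kelvin / v_litres

def p_vdw(n, v_litres, t_kelvin):
    """van der Waals pressure in bar, with volume-exclusion and
    attraction corrections: P = nRT/(V - nb) - a*(n/V)^2."""
    return n * R * t_kelvin / (v_litres - n * B_CO2) - A_CO2 * (n / v_litres) ** 2

for v in (25.0, 1.0, 0.25):   # one mole, from dilute to strongly compressed
    pi, pv = p_ideal(1.0, v, 300.0), p_vdw(1.0, v, 300.0)
    print(f"V = {v:5.2f} L: ideal {pi:7.2f} bar, vdW {pv:7.2f} bar, "
          f"deviation {100 * (pv - pi) / pi:+6.1f}%")
```

At 25 L/mol the ideal law is off by well under one percent; squeezed to 0.25 L/mol the interaction terms shift the pressure by tens of percent, which is exactly the “breaks down at extremes” behavior described above.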

mRE
October 25, 2010 9:47 am

Um, crystals have perfectly straight lines, and absolutely precise angles.

Bob Layson
October 25, 2010 9:47 am

Are not thunderstorms in the tropics a continual occurrence, as local afternoon is always happening somewhere? Where it’s hot, sweaty, and time for a cool tall one. And the yellow sun forever gazes down.

George E. Smith
October 25, 2010 10:03 am

So how does the IPCC protestation of a LINEAR relationship jibe with the climatists claim of a LOGARITHMIC relationship ?
The simple fact is they can’t prove either one is true; but whenever you have small inconsequential changes in any noisy chaotic system, simple linear fits can seem to be appropriate; but it takes a lot of gall to claim a logarithmic relationship when a scatter plot of the actual real observed data fitted to such a curve results in a 3:1 range of error in the SLOPE of that relational curve; and exactly the same would happen if you made a straight line fit; the gradient would still exhibit a 3:1 range of plausible values.
Remember we have about 52 data points for actual annual averaged values for CO2 from Mauna Loa, and about the same number of annual mean values for mean global surface temperature from GISS and similar studies; and those two sets can’t be fitted any better to a logarithmic curve than they can to a linear one, or to any arbitrary function with a 3:1 range of its most important parameter.
And good luck on deriving a theoretical basic physics model to support either one.
And they should make up their minds whether climate sensitivity includes H2O or just CO2. How can you talk about the doubling of CO2 without discussing the effect of doubling the H2O instead?
We know that more warming causes more H2O (evaporation) and we also know that more warming causes more CO2 (Henry’s law and ocean take up/outgassing); and notice I said CAUSES, and not “IS CONSISTENT WITH”
So CO2 could cause ocean warming and more evaporation, hence more H2O; but also more CO2. Then H2O causes more ocean warming, and hence more evaporation; so more H2O; but also more outgassing so more CO2.
So both CO2 and H2O can cause ocean warming and cause more H2O and more CO2.
So how do you justify calling one a greenhouse gas; but calling the other a feedback factor. ? Both are PERMANENT RESIDENTS of the atmosphere; and in the case of H2O permanent residence in ALL THREE PHASES.
Also both are injected into the atmosphere in copious quantities by human activities; but far more H2O than CO2; so human activities have a role (maybe small) in putting both of those greenhouse warming/feedback causing gases into the atmosphere; and processes like rain etc. result in the continuous removal of both H2O and CO2 (which is soluble in H2O) from the atmosphere.

Paul Loock
October 25, 2010 10:03 am

It’s unfair to use observations, measurements and causation in modern climatology. You have made the same mistake as Ferenc Miskolczi. We do not live in a world of physics, chemistry, etc. We are parts of superlinear computer programs and nature is what you see on TV.

John Day
October 25, 2010 10:12 am

Milwaukee Bob said:
> “not exactly”? “tend to be”? Would that be like –
> you’re not exactly pregnant but you possibly could tend to be?
Yep, those quoted words are all well-known “weasel words”, often used to dress up dodgy beliefs as science ….
… except when used to describe finding “ideal models” in Nature. They don’t exist. The best you can do is an N-th order approximation. Always. No exceptions. (Those are _not_ weasel words)
As proof of this, Milwaukee Bob, will now attempt to provide me with any of these solid examples (i.e. real observables) of “ideals” that can be found in Nature.
1. Two points on a sheet of paper that describe a perfect line. (IOW, how sharp is your pencil? Darn that pesky ‘noise of observation’!)
2. A perfect circle anywhere in the Universe. (Orbits of satellites? Surface of mercury blob suspended in deep space? Nope, sorry, they’re all perturbed by nearby and distant objects, i.e. by everything. Darn that pesky ‘process noise’!)
3. A ray of light as a “straight line”? Nope, it wobbles, perturbed by gravity etc.
No matter how ‘ideal’ you think your object is, _if_ it can be observed, I will find a microscope with enough magnification (down to atoms, and beyond) to find non-ideal flaws (‘noise’) in it.
We’re not talking about ‘close enough for government work’. We’re talking ‘Perfect’, with a capital pee.
:-]

Smokey
October 25, 2010 10:24 am

mRE says:
“Um, crystals have perfectly straight lines, and absolutely precise angles.”
As Willis constantly reminds everyone: don’t assume; quote what he wrote.
Willis wrote: “Very little in nature is linear, particularly in complex systems.” [my emphasis]
Crystals are not complex systems.

George E. Smith
October 25, 2010 10:25 am

“”” mRE says:
October 25, 2010 at 9:47 am
Um, crystals have perfectly straight lines, and absolutely precise angles. “””
No they don’t !! you’ve obviously never grown a crystal.
Every crystal ever grown (even including hypothetically perfect crystals, none of which have ever been grown) has had a finite size; there simply isn’t enough matter in the universe to grow an infinite size crystal. So all crystals have a boundary where each and every crystal “plane” or “line” must terminate.
Beyond that boundary there are no atoms or molecules of the “crystal”; just a vacuum or air or whatever.
So the interatomic forces between adjacent atoms/molecules cease to exist beyond the boundaries of the crystal; so the interatomic distances will be different near the boundary than they are internally in the crystal; so the lines cannot be straight nor the planes flat, even everywhere they exist. Oh, I nearly forgot: most crystals are not at zero kelvin temperatures; so they all have random atomic motions all the time; so they never all actually lie on a plane or a straight line.
Save your breath; EVERY single concept that we introduce or discuss in MATHEMATICS, whether POINTS, LINES, PLANES, CIRCLES, or SPHERES, is a complete fiction that we made up in our heads. Absolutely nothing in mathematics actually exists anywhere in the universe as a real object; they are all fake.
Points for example have a position only and no dimensions of any kind; Heisenberg’s uncertainty principle says that if such a thing existed, it could have any momentum up to (but not including) infinity; so it likely would never be in our ken for long enough to ever find it. I excluded infinite momentum just to cover my A***; maybe infinite is possible in that case of a real point existence.
x^2 + y^2 + z^2 = a^2 is supposed to be a sphere IN MATHEMATICS. That equation makes no provision for 8,000 metre high mountain ranges on its surface. We may have approximations to some of the fictional items of mathematics in the real universe, which after all is why we invented the fictional ones; but our mathematical prestidigitations with those fictional elements can only approximate the actual behavior of any part of the real universe; although they could exactly explain the behavior of one of our equally fictitious models of some portion of the universe.
The trick is to construct better models that replicate the real behavior of the universe more accurately. But we can never describe the real universe in mathematics; only a model of it.

Colin from Mission B.C.
October 25, 2010 10:42 am

Charlie A says:
October 25, 2010 at 8:27 am
The global average temp indeed is not an accurate depiction of the temperature anywhere on the globe, but it is still a useful metric.
~~~
Why? How?
Proponents of CAGW assume this statement to be true, a priori. But, I’m not convinced it is. To me, a global average temperature is like a global average phone number. Interesting…maybe…but useful? Hardly.
And, I’m not even going to get into the issues of accuracy (or lack thereof) of measuring average temperature (see surfacestations.org).

James Evans
October 25, 2010 10:50 am

Willis,
If you’re going to spend the entire first paragraph of your post defending your headline, then why not just use a different headline?

tryfan
October 25, 2010 10:55 am

Hmmm… This seems to be more of a semantic problem. Isn’t the whole concept of linearity a metaphor, except in pure mathematics? It’s usable to explain stuff, but doesn’t exist in the real world.
You don’t convince me this time, Willis.

Bob Layson
October 25, 2010 10:57 am

The chief purpose in seeking ”the” global average temperature is to have something that needs fixing and to have someone or something to blame it on.
Would the global average temperature be experienced at the average latitude and longitude?

Steven Mosher
October 25, 2010 10:59 am

George E. Smith says:
October 25, 2010 at 10:03 am (Edit)
So how does the IPCC protestation of a LINEAR relationship jibe with the climatists claim of a LOGARITHMIC relationship ?
George,
they do NOT claim a linear relationship. They argue that if you have 2 forcings (for example)
1. Solar
2. Aerosol
And you want to estimate the response to both you can combine them by summing.
Read what they wrote. They wrote that you can combine independent RF (radiative forcings) by adding them.

Tom
October 25, 2010 11:21 am

So how does the IPCC protestation of a LINEAR relationship jibe with the climatists claim of a LOGARITHMIC relationship ?

Well, if you scroll up to the next post on the blog, you’ll see it:
dF = 5.35 ln (C/C0)
dT = S dF
Radiative forcing is a logarithmic function of CO2 concentration. Temperature change is a linear function of radiative forcing.
But since you take both sides of this debate here, it’s tempting to think you’re just trying to stir up trouble.
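The two relations Tom quotes take only a few lines of Python to combine. The 5.35 coefficient is from the simplified forcing expression he cites; the sensitivity value S used here is purely an assumed number for illustration, not anything claimed in the thread:

```python
import math

def radiative_forcing(c, c0):
    """Simplified CO2 forcing expression quoted above:
    dF = 5.35 * ln(C/C0), in W/m^2."""
    return 5.35 * math.log(c / c0)

def temp_response(dF, sensitivity=0.8):
    """The linear response dT = S * dF.
    S = 0.8 K per W/m^2 is an illustrative value only."""
    return sensitivity * dF

dF = radiative_forcing(560.0, 280.0)   # a doubling of CO2
print(f"dF = {dF:.2f} W/m^2, dT = {temp_response(dF):.1f} K")
```

Note the logarithm makes the forcing additive in doublings: each doubling of concentration contributes the same 5.35·ln(2) ≈ 3.7 W/m², and the claimed linearity is then only in the second step, from forcing to temperature.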

Editor
October 25, 2010 11:27 am

Slightly O/T, but as an oddball fact, New Zealand sits astride what is regarded as the longest natural straight line in the world, clearly visible from space – the Great Fault spanning both the North and South Islands
Nature ain’t all bad
Andy

Steven Mosher
October 25, 2010 11:28 am

John Day.
yes, Willis missed this one badly.
First, The IPCC in this section is discussing the following:
“The TAR and other assessments have concluded that RF is a useful tool for estimating, to a first order, the relative global climate impacts of differing climate change mechanisms (Ramaswamy et al., 2001; Jacob et al., 2005). In particular, RF can be used to estimate the relative equilibrium globally averaged surface temperature change due to different forcing agents. However, RF is not a measure of other aspects of climate change or the role of emissions (see Sections 2.2 and 2.10). Previous GCM studies have indicated that the climate sensitivity parameter was more or less constant (varying by less than 25%) between mechanisms (Ramaswamy et al., 2001; Chipperfield et al., 2003). However, this level of agreement was found not to hold for certain mechanisms such as ozone changes at some altitudes and changes in absorbing aerosol. Because the climate responses, and in particular the equilibrium climate sensitivities, exhibited by GCMs vary by much more than 25% (see Section 9.6), Ramaswamy et al. (2001) and Jacob et al. (2005) concluded that RF is the most simple and straightforward measure for the quantitative assessment of climate change mechanisms, especially for the LLGHGs. This section discusses the several studies since the TAR that have examined the relationship between RF and climate response. Note that this assessment is entirely based on climate model simulations.”
So you have a variety of forcings. Nothing is said about a linear relationship.
Continuing:
“Each RF agent has a unique spatial pattern (see, e.g., Figure 6.7 in Ramaswamy et al., 2001). When combining RF agents it is not just the global mean RF that needs to be considered. For example, even with a net global mean RF of zero, significant regional RFs can be present and these can affect the global mean temperature response (see Section 2.8.5). Spatial patterns of RF also affect the pattern of climate response. However, note that, to first order, very different RF patterns can have similar patterns of surface temperature response and the location of maximum RF is rarely coincident with the location of maximum response (Boer and Yu, 2003b). Identification of different patterns of response is particularly important for attributing past climate change to particular mechanisms, and is also important for the prediction of regional patterns of future climate change. This chapter employs RF as the method for ranking the effect of a forcing agent on the equilibrium global temperature change, and only this aspect of the forcing-response relationship is discussed. However, patterns of RF are presented as a diagnostic in Section 2.9.5.”
THEN they discuss how the various forcings can be “combined”
“Reporting findings from several studies, the TAR concluded that responses to individual RFs could be linearly added to gauge the global mean response, but not necessarily the regional response (Ramaswamy et al., 2001). Since then, studies with several equilibrium and/or transient integrations of several different GCMs have found no evidence of any nonlinearity for changes in greenhouse gases and sulphate aerosol (Boer and Yu, 2003b; Gillett et al., 2004; Matthews et al., 2004; Meehl et al., 2004). Two of these studies also examined realistic changes in many other forcing agents without finding evidence of a nonlinear response (Meehl et al., 2004;
Matthews et al., 2004).”
And if you take the time to actually READ some of the papers cited you will see what they did in the experiments
http://www.cccma.ec.gc.ca/papers/gboer/PDF/CliDyn_sens_response.pdf
section 9, additivity and sensitivity.
It’s also good for people to get a sense of the time constants involved.
Further, folks need to read on from the section that Willis quoted to understand more of the problem
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch2s2-8-5.html
the biggest thing he failed to point out was the efficacy, and you will note that efficacies are important because of the non linearities
“Since the TAR, several GCM studies have calculated efficacies and a general understanding is beginning to emerge as to how and why efficacies vary between mechanisms. The initial climate state, and the sign and magnitude of the RF have less importance but can still affect efficacy (Boer and Yu, 2003a; Joshi et al., 2003; Hansen et al., 2005). These studies have also developed useful conceptual models to help explain variations in efficacy with forcing mechanism. The efficacy primarily depends on the spatial structure of the forcings and the way they project onto the various different feedback mechanisms (Boer and Yu, 2003b). Therefore, different patterns of RF and any nonlinearities in the forcing response relationship affects the efficacy (Boer and Yu, 2003b; Joshi et al., 2003; Hansen et al., 2005; Stuber et al., 2005; Sokolov, 2006). Many of the studies presented in Figure 2.19 find that both the geographical and vertical distribution of the forcing can have the most significant effect on efficacy (in particular see Boer and Yu, 2003b; Joshi et al., 2003; Stuber et al., 2005; Sokolov, 2006). Nearly all studies that examine it find that high-latitude forcings have higher efficacies than tropical forcings. Efficacy has also been shown to vary with the vertical distribution of an applied forcing (Hansen et al., 1997; Christiansen, 1999; Joshi et al., 2003; Cook and Highwood, 2004; Roberts and Jones, 2004; Forster and Joshi, 2005; Stuber et al., 2005; Sokolov, 2006). Forcings that predominately affect the upper troposphere are often found to have smaller efficacies compared to those that affect the surface. However, this is not ubiquitous as climate feedbacks (such as cloud and water vapour) will depend on the static stability of the troposphere and hence the sign of the temperature change in the upper troposphere (Govindasamy et al., 2001b; Joshi et al., 2003; Sokolov, 2006).
In short, when the IPCC talks about combining the forcings LINEARLY, they are not saying that the responses to forcing are linear. They are saying that the forcings can be SUMMED; linearly combined.
AND it’s important to understand why they say this is a “useful” approach.
basically, if you have a list of forcings ( we’ve all seen the charts) you want to know if you can simply add them together.

stumpy
October 25, 2010 11:37 am

What the IPCC should have simply said is:
“We have a high level of confidence that our models assume a linear response”
and their justification:
“we have no idea what happens in the real world (reality); it’s full of tricky things we don’t quite understand (like clouds, for example) which confuse everything, and to understand it all would be too hard and time-consuming given the current timescale, so we prefer to use an assumption (also known as a model) so we can press on, otherwise it will be too late for our children as we need to act now (assuming our assumption is correct)”
Cue footage of dead polar bears, images of calving ice, etc.

October 25, 2010 11:41 am

Smokey says: “mRE says:
“Um, crystals have perfectly straight lines, and absolutely precise angles.”
As Willis constantly reminds everyone: don’t assume; quote what he wrote.
Willis wrote: “Very little in nature is linear, particularly in complex systems.” [my emphasis]
Crystals are not complex systems.”
And, actually, crystals are not so perfect as this guy seems to think. It is in fact typical for crystals in the real world to have irregularities and defects. And beyond that, if the positions of the constituent particles of a crystal were all so predictably, perfectly arranged, then they would have to have very poorly defined momenta, by the uncertainty principle, so their velocities could be just about anything you can imagine; clearly such uncertainty in the amount of particle motion would not make much sense for a solid object. If we conversely say that it’s a solid, so the velocities of the particles must be all very low, then we are saying that we can’t define the position of the particles precisely; how can one say that the particles are arranged in a specific, precise way with respect to one another, if the particles don’t even have well defined positions?
On the quantum level, all matter is “fuzzy” and not neat and exact in shape.

GaryW
October 25, 2010 11:45 am

John Day, (October 25, 2010 at 4:35 am)
There are no ‘model free’ _measurements_ of Nature!!!
What ‘model’ do you ‘consult’ to get your local time and temperature?
“I don’t use a model, I just look at my watch and the thermometer in my patio!”
Ha! You’ve just consulted two ‘models’!

I was going to let this one slide but it just kept bugging me. You have presented an interesting inclusion to the definition of a model. To an engineer, the watch and the thermometer are calibrated measurement instruments. In both cases, a single physical quantity is measured in real time and read out directly. In the case of the clock, it counts the vibrations of a mechanical element such as a quartz or ceramic crystal. The thermometer uses the change in volume of a material, calibrated to known temperature values, to display temperature.
A model is generally expected to be some sort of representation of an object, event, or series of events. Model and measurement are not what I, and I believe most people, would consider to be synonymous concepts.

John Day
October 25, 2010 11:56 am

> yes, Willis missed this one badly.
Nothing enhances the credibility of an argument better than realizing and admitting an occasional error.
I must confess that there _is_ something that is more rare than a Perfect Circle or Line in Nature.
It is the “One-Sided, Science-is-Settled Argument” that our AGW friends have been peddling for many years. It doesn’t exist, not even close.
😐

Bill Illis
October 25, 2010 12:04 pm

Tom says:
October 25, 2010 at 11:21 am
dF = 5.35 ln (C/C0)
dT = S dF
Radiative forcing is a logarithmic function of CO2 concentration. Temperature change is a linear function of radiative forcing.
————————
How many Watts/m2 is required on Sun’s surface to increase its temperature by 1.0C?
= +50,000 W/m2
How many Watts/m2 is required on Earth’s surface to increase its temperature by 1.0C?
= +5 W/m2
I don’t know how much more non-linear a phenomenon can get.
The temperature does not change linearly with the forcing.
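Bill Illis’s round figures follow from the Stefan-Boltzmann law: the extra flux needed per degree of warming is d(σT⁴)/dT = 4σT³, which grows steeply with temperature. A quick sketch, treating both surfaces as ideal black bodies (an assumption; the quoted +50,000 and +5 are round numbers):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def watts_per_kelvin(t_kelvin):
    """d(sigma*T^4)/dT = 4*sigma*T^3: the extra W/m^2 a black body
    already at temperature T must radiate to warm by 1 K."""
    return 4.0 * SIGMA * t_kelvin ** 3

print(f"Sun's surface (~5778 K):  {watts_per_kelvin(5778):,.0f} W/m^2 per K")
print(f"Earth's surface (~288 K): {watts_per_kelvin(288):.1f} W/m^2 per K")
```

This gives roughly 44,000 and 5.4 W/m² per K, in line with the round numbers quoted; the same W/m² buys very different temperature changes depending on the starting temperature.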

John Day
October 25, 2010 12:17 pm

GaryW:
>calibrated measurement instruments … single physical quantity is measured in real time and read out directly.
In the mathematics of modeling, an ‘oracle’ is such an ‘ideal’ device. An oracle can read out true values without error or interpretation. Unfortunately, oracles are fictitious entities, invented solely for the sake of argument or elaborating a proof. They don’t exist in the real world.
Every measurement tool employs a model, subject to errors of implementation and interpretation.
For example, a yardstick measures extension in one dimension.
What if the markings are incorrectly applied? What if you mistook a ‘5’ for a ‘2’? Then you’ve introduced some errors into your measurements.
What if you haven’t noticed these errors? Then there is uncertainty in all the measurements made, past and future, caused by a faulty ‘model’.
The good news is that most of these instruments (note weaseling) are so reliable that we don’t worry about them. It’s the ‘clocks’ and ‘thermometers’ used for climate modeling that we should be concerned about.
😐

JAE
October 25, 2010 12:22 pm

A warmer-guy might counter that: “there is an approximate linear relationship between forcing and sensitivity for small changes in forcing.” No?

davidmhoffer
October 25, 2010 1:00 pm

I skipped half the comments, sorry if this has been said.
Folks, this isn’t a science document; it is a document designed to make a business case. Read it like a business case, and as with any business case, keep careful track of what they SAY and what they IMPLY, and also keep careful track of what specifically they are referring to when they say and/or imply it.
If you read carefully the quote in Willis’ post, it doesn’t SAY the response is linear. It says there’s no reason not to believe it is. Two different things. Further, referring to what? In this case they seem to be referring to a response at a given point to a forcing at that point being linear. For changes on the order of a degree, that is a reasonable approximation. BUT:
Read the details. In 2.8.1 of the same section of AR4 WG1 you will find this quote:
” It should be noted that a perturbation to the surface energy budget involves sensible and latent heat fluxes besides solar and longwave irradiance; therefore, it can quantitatively be very different from the RF, which is calculated at the tropopause, and thus is not representative of the energy balance perturbation to the surface-troposphere (climate) system. While the surface forcing adds to the overall description of the total perturbation brought about by an agent, the RF and surface forcing should not be directly compared…”
That statement pretty much says that response to RF is NOT linear at surface. But pay attention to what they are referring to.
In 2.8.1 they are explaining that they measure forcing and (I presume) calculate sensitivity AT THE TROPOPAUSE. The temperature of the tropopause being the “effective” black body temperature of the earth, some 35 degrees colder than the surface, there is not a linear relationship in regard to forcing as it affects the tropopause versus the surface. And they said so. Let’s now put both statements in context.
The IPCC states that climate sensitivity at the tropopause to the doubling of CO2 is 3.7 w/m2 = 1 degree increase in temperature on a global scale. They IMPLY that surface response is linear, but to what? By their own statement in 2.8.1, it is NOT directly correlated to RF, even though it is IMPLIED (no reason not to, in their words) that surface response is linear. Which, to a given forcing AT THE SURFACE, it approximately is, but NOT to the climate sensitivity of the system as a whole, which they measure at the tropopause.
So, the CO2 doubling = 3.7 w/m2 = 1 degree is IMPLIED to mean “at surface” but in fact means “at tropopause” and will yield a DIFFERENT number at surface. Based on Stefan-Boltzmann, an increase of 1 degree at the tropopause, where it is about -20C, would require an RF of exactly 3.7 w/m2. At the surface, at an average of +15C, it would require 5.5 w/m2. The 3.7 w/m2 translates to a “system” change as seen from space of 1 degree, but a surface change of only (mental math approximation mine) 0.6 degrees.
By extension, with feedbacks included, the sensitivity at the tropopause is 2 to 4.5 degrees, which at the surface would be 1.2 to 3 degrees. FURTHER, there is no such thing as a freakin’ average at the earth’s surface. The higher latitudes are colder and so have a HIGHER sensitivity to a given forcing than the warmer latitudes (Stefan-Boltzmann). The colder seasons have a higher sensitivity to a given forcing than the warmer seasons (Stefan-Boltzmann). Night time (cooler) temps have a higher sensitivity to a given forcing than day time (warmer) temps (Stefan-Boltzmann).
Any attempt to interpret a forcing at the tropopause as a linear response at the surface is, therefore, ludicrous. As per what they SAY in 2.8.1, it is not. As per what they IMPLY, we should see temps increase by 2 to 4.5 degrees for CO2 doubling. But they only IMPLY this, and the complexity of the report combined with the vagueness in the manner it is written regarding what statement refers to what issue (the hallmark of a business document designed to mislead) makes it difficult to discern.
But the surface response they claim is, according to them, less than the sensitivity they claim.
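davidmhoffer’s tropopause-versus-surface arithmetic can be checked with the same 4σT³ derivative, again treating both levels as black bodies at the temperatures he assumes (about -20C aloft and +15C at the surface):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def forcing_for_one_kelvin(t_kelvin):
    """4*sigma*T^3: W/m^2 required per 1 K of warming for a
    black body at temperature T."""
    return 4.0 * SIGMA * t_kelvin ** 3

trop = forcing_for_one_kelvin(253.15)   # ~ -20 C, tropopause
surf = forcing_for_one_kelvin(288.15)   # ~ +15 C, surface
print(f"tropopause: {trop:.2f} W/m^2 per K")
print(f"surface:    {surf:.2f} W/m^2 per K")
print(f"3.7 W/m^2 applied at the surface -> {3.7 / surf:.2f} K")
```

The sketch gives about 3.7 and 5.4 W/m² per K, and about 0.7 K at the surface for a 3.7 W/m² forcing, close to the mental-math figure in the comment; whether this black-body accounting is the right way to translate a tropopause forcing to the surface is of course the point under dispute.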

It's always Marcia, Marcia
October 25, 2010 1:14 pm

When you are sitting at a desk doing math you can have things be linear. When you get up and walk away from the desk into the real world you can forget hopes of things being linear.
Everything is complex. We barely know anything about everything.

Vince Causey
October 25, 2010 1:15 pm

If the response is linear then there can’t be any tipping points, so linear is good.

Vince Causey
October 25, 2010 1:18 pm

Steven Mosher,
Yes, a nice effort at changing what they said to what you want them to have said (are you a lawyer by any chance? You should consider it.) Just to be sure that they said what Willis said they said:
“there is high confidence in a linear relationship between global mean RF [radiative forcing] and global mean surface temperature response.”
But perhaps they didn’t mean what they said, but meant what you said they meant. I guess anything’s possible in climate science.

Ken Harvey
October 25, 2010 1:24 pm

LazyTeenager says:
October 25, 2010 at 4:48 am
3. There is an obvious mismatch between the idealised process described for a day in the tropics and the process of global climate.
That is not an idealised process, laddie. At this moment, at any moment, it is around 4 p.m. somewhere in the tropics and that storm you see described is actually happening. For all practical purposes it never ever stops. There are some small variables due to air pressure etc., but the basic situation never ceases for the twenty-four hours of the globe’s day.

Dave F
October 25, 2010 1:24 pm

Actually, Willis, you can see the greater sensitivity at lower temperatures in the DMI graphs for the Arctic circle. During the Arctic winters, the temperature varies greatly, but during the Arctic summers, the temperature is pretty consistent year in and year out.

George E. Smith
October 25, 2010 1:36 pm

“”” Bill Illis says:
October 25, 2010 at 12:04 pm
Tom says:
October 25, 2010 at 11:21 am
dF = 5.35 ln (C/C0)
dT = S dF
Radiative forcing is a logarithmic function of CO2 concentration. Temperature change is a linear function of radiative forcing. “””
Well, temperature usually is related to the 4th root of the W/m^2 irradiance, or the reverse for an emitter; so temperature is not a linear function of “forcing” in W/m^2.
And the relationship between the CO2 and the “forcing” is itself a fourth power function of the surface temperature; so that too is non-linear and also non-logarithmic; and it is also circular reasoning, since we are trying to get a surface temperature change from a forcing that is itself a strong function of surface temperature.

October 25, 2010 1:58 pm

davidmhoffer says: “In 2.8.1 they are explaining that they measure forcing and (I presume) calculate sensitivity AT THE TROPOPAUSE.”
Sorry, but even though it might make more sense to you, and make more sense physically, sensitivity is always defined with respect to the SURFACE temperature not at the tropopause. Yes, the forcing is not defined at the surface, but typically higher up (actually usually referenced to the “top of the atmosphere”) and while this may seem to imply that the sensitivity is referenced to the same layer, it never is. The sensitivity figures they talk about are ALWAYS regarding the SURFACE TEMPERATURE. So your presumption is wrong.

George E. Smith
October 25, 2010 1:59 pm

“”” Steven Mosher says:
October 25, 2010 at 10:59 am
George E. Smith says:
October 25, 2010 at 10:03 am (Edit)
So how does the IPCC protestation of a LINEAR relationship jibe with the climatists claim of a LOGARITHMIC relationship ?
George,
they do NOT claim a linear relationship. They argue that if you have 2 forcings (for example)
1. Solar
2. Aerosol
And you want to estimate the response to both you can combine them by summing.
read what they wrote. they wrote that you can combine indpendent RF (radiative forcings) by adding them. “””
Well Steve, I don’t even believe that either.
Let’s say I have a 1% increase in atmospheric water vapor; and I’ll ignore cloud changes for now. That 1% increase in atmospheric H2O vapor will result in about a 1% increase in the amount of solar energy absorbed by the water vapor, and so reduce the ground-level solar energy by that amount (any increase in cloud would cause an additional albedo loss and cloud absorption loss, but let’s skip that for now). So I have a negative forcing from my increased H2O vapor due to solar energy loss.
Now the energy lost to the increased H2O vapor is of course energy gained by the atmosphere as a result of collision thermalization, so that would result in an atmospheric temperature increase. That in turn would result in an increase in the isotropic LWIR thermal emission from the atmosphere, about half of which would go down to the surface, and the other half up towards space. (You can see that about half of the extra intercepted sunlight energy returns to the surface; so it is still a net energy loss to the surface.)
But now we have a problem. That downward LWIR red-shifted solar energy from the warmer atmosphere, which comprises a new positive “forcing”, DOES NOT react with the planet in the same way as does the original solar energy from whence it came.
The solar energy passed almost uninterrupted into the highly transparent deep oceans, where it will be stored for a long time as far as immediate effects go. The LWIR forcing, on the other hand, is totally absorbed in about the top 50 microns of water, and instead of being conducted slowly to the deeper waters where the solar energy went, thereby adding to it, a set of totally different and non-linear thermal processes is activated: a localised heating of the very ocean surface layer, a consequent enhanced evaporation of H2O carrying latent heat of evaporation into the atmosphere, and convective transport to the upper atmosphere. The Clausius-Clapeyron equation (non-linear) comes into play in that transaction, and you now have apples and oranges which you cannot directly add. The two forcings are not directly additive.
But I tend to agree that small perturbations tend to produce linear effects; which is why the concept of a logarithmic “climate sensitivity” is so silly; not to mention the absence of any experimental evidence to support it (compared to a linear fit to the data).

John Day
October 25, 2010 2:14 pm

George said:
> and also a circular reasoning since we are trying to get a surface Temeprature
> change from a forcing that is itself a strong function of Surface Temperature.
The “dT = lambda x dF” equation is a rubric, not a law of nature. There is no conservation law for temperatures, only for energy (and momentum).
Thus the ‘flaw’ in this sensitivity rubric (as Willis correctly pointed out in his article) is that the system can absorb massive amounts of energy, without any change in temperature, so dT=0.
Article:
“Above the thunderstorm threshold temperature, little additional radiation energy goes into warming the surface. It goes into evaporation and vertical movement. This means that the climate sensitivity is near zero.”
My quibble was the apparent conflation between non-linear mappings and their cumulative effect on energy conservation, which is always linear because energy is always conserved. Linearity, in physics, translates as the Superposition Principle. (If we could amplify energy then we could add some scalar laws too, but that would violate conservation.)
http://en.wikipedia.org/wiki/Superposition_principle
I think we may be in violent agreement with Willis.
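The superposition point can be tested numerically. A hypothetical sketch: for an idealized blackbody with T = (F/sigma)^0.25, the temperature response to two forcings applied together is not quite the sum of the two individual responses (the gap is second-order small for small perturbations, which is why the linear rubric looks workable over short ranges):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def bb_temp(F):
    """Equilibrium blackbody temperature (kelvin) for absorbed flux F (W/m^2)."""
    return (F / SIGMA) ** 0.25

F0 = 240.0           # baseline absorbed flux
dF1, dF2 = 2.0, 2.0  # two separate "forcings"

# Temperature responses taken one at a time...
r1 = bb_temp(F0 + dF1) - bb_temp(F0)
r2 = bb_temp(F0 + dF2) - bb_temp(F0)
# ...versus both forcings applied together.
combined = bb_temp(F0 + dF1 + dF2) - bb_temp(F0)

print(r1 + r2, combined)  # close, but not equal: T(F) is concave in F
```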

GaryW
October 25, 2010 2:16 pm

John Day says: October 25, 2010 at 12:17 pm
GaryW:
>calibrated measurement instruments … single physical quantity is measured in real time and read out directly.
In the mathematics of modeling, an ‘oracle’ is such an ‘ideal’ device. An oracle can read out true values without error or interpretation. Unfortunately, oracles are fictitious entities, invented solely for the sake of argument or elaborating a proof. They don’t exist in the real world.

John,
I am not at all sure what you are responding to here. Are you suggesting that since instrument measurements are never perfect, the values obtained from them are really just models of reality? Again, this is semantics. While your slant may be interesting philosophically, it is using a common term in a way that distorts its technical meaning.
Let’s spin this back towards the subject at hand and that is about straight line approximations. Computer models are only validated when their outputs match values measured with instruments. Claiming that all measurements are merely different forms of models does not result in GCMs automatically becoming validated.
In engineering there is something called the “Engineering Fallacy.” That is a situation where a commonly used concept or calculation is assumed to represent reality instead of merely being a handy shortcut. My impression is that the folks running GCM’s are forgetting that Guesses In produce Guesses Out (GIGO). Calling measurements Models does not place GCM’s on the same level.

Milwaukee Bob
October 25, 2010 2:31 pm

John Day at 10:12 am….
I think we are agreeing…? ☺ The best you can do is an N-th order approximation. Always. No exceptions. And “an N-th order approximation” can be woefully insufficient for a given purpose and, as relates to predicting what the global weather system will be doing in _________ days, the current crop of GCMs are – – – well, “woefully deficient” for a range of reasons, not the least of which is a lack of historically accurate data AND the computers to run them on are wholly inadequate in non-linear computational muscle even if we did have sufficient and precise data.
And let’s not forget, “Perfect” (note the capital pee) is a human “model” also that nature knows or cares naught of or for and while it may be difficult to observe – measure – perfection in nature, as WE idealize it, you can not say it does not exist, even if for only a moment in time, simply because in that moment in time no human is in the right place or has the capability to observe it. Which is THE point – our past and current ability to accurately “measure” weather is insufficient. Period. (Yes, again WITH a capital pee) ☺And all the “tricks” in the world can not make it so….

John Day
October 25, 2010 2:49 pm

GaryW:
> Claiming that all measurements are merely different forms of models
> does not result in GCMs automatically becoming validated.
Not what I said. I said that all measurements require some kind of model to be applied, sometimes very trivial (like a yardstick). I certainly did not want to imply that all GCM’s are automatically validated. (what a nightmare!)
> Computer models are only validated when their outputs match
> values measured with instruments.
True. I would re-state that differently: models are validated when their predictions or explanations match “reality”. Of course, “reality” really means using a trusted and validated model to validate a newer, hypothetical model, which may or may not be correct.
Just like the term ‘Prophet’, in the Jewish Bible, could only be applied to the “modelers” whose predictions actually came true.
😐

Peter Miller
October 25, 2010 3:11 pm

I suspect there are places in the Atacama Desert (not near the coast!) where it rains only once a century and small clouds appear only a few times a year; there, some form of linear relationship probably does exist.
I will never forget driving through Arkansas one summer around 15 years ago. The temperature outside was about 85 degrees F. We then drove through a huge thunderstorm and the temperature fell (at about 4 in the afternoon) to below 50 degrees F – and stayed there for almost 100 miles. Turning the heater on to keep warm in the southern US in mid-summer was truly a bizarre experience.
That sort of thing explains why climate ‘scientists’ prefer the world of models over the facts/data of the real world. Why not? Models are so much easier to manipulate.

Francisco
October 25, 2010 4:21 pm

Tom says:
October 25, 2010 at 9:08 am
Ohm’s law works nicely at the scales encountered for general circuit design. Or does no piece of electronic equipment actually work and it’s only me who hasn’t noticed? Biologists can say what they like, I suspect referring to very small scale (intra-cell etc) processes; engineers will keep on using it for circuit design and succeeding.
============
One would expect circuit designers to choose the right materials for their needs, ohmic or non-ohmic, yes. Who’s arguing they don’t?
Biologists say what they say for good reasons. Run a current through a piece of meat, see if your V/I ratio is anywhere near constant.
My point was that most materials found in nature are strongly non-ohmic, and so Ohm’s “law” is very, very restricted — by no means universal: restricted to some (few) materials under a certain range of conditions. Another way of saying this would be that “ohmicity” is a very rare property in nature. Which illustrates the point made by the title of this post: nature does hate straight lines.

davidmhoffer
October 25, 2010 4:36 pm

Andrew says;
The sensitivity figures they talk about are ALWAYS regarding the SURFACE TEMPERATURE. So your presumption is wrong.>>
As stated this is just an opinion. I propose that you cite the sections of AR4 and post links to the places where they SAY this. Not vague references, or cites of cites of papers behind paywalls, actually SAY it.
Then I propose you follow up by explaining how they justify their sensitivity calculations. Per their documentation, doubling CO2 = +3.7 W/m2 = +1 degree of warming. Based on the Stefan-Boltzmann equation, a body at -20C requires +3.7 W/m2 to increase its equilibrium temperature by one degree. A body at +15C, the “average” temperature at the earth’s surface, requires +5.5 W/m2 to increase its equilibrium temperature by one degree.
So which is it? CO2 doubling = +5.5w/m2, or sensitivity is calculated against effective black body temperature of -20C at the tropopause?
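The two figures above can be checked directly from the Stefan-Boltzmann relation F = sigma*T^4, whose slope is dF/dT = 4*sigma*T^3. A quick sketch (idealized blackbody, emissivity 1 assumed):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def flux_per_degree(T_celsius):
    """Extra radiated flux needed to raise a blackbody's equilibrium
    temperature by 1 K, from dF/dT = 4*sigma*T^3."""
    T = T_celsius + 273.15
    return 4 * SIGMA * T**3

print(flux_per_degree(-20))  # roughly 3.7 W/m^2 per degree
print(flux_per_degree(15))   # roughly 5.4 W/m^2 per degree
```

This lands at about 3.7 and 5.4 W/m^2, close to the 3.7 and 5.5 quoted in the comment.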

Francisco
October 25, 2010 4:57 pm

“I mean, you really gotta admire these guys. They are so far into their models that they actually are using the linearity of the model results to justify the assumption of linearity embodied in those same models … breathtaking.”
Reminds me of drawings like this:
http://visualfunhouse.com/wp-content/uploads/2007/09/escher640.jpg
Elementary Logic courses have been warning students against “petitio principii” for many centuries. To no avail. There are some entertaining examples at the link below.
http://philosophy.lander.edu/logic/circular.html

davidmhoffer
October 25, 2010 5:02 pm

Willis (and Andrew)
Here is a rather in depth analysis of the IPCC’s claims in regard to sensitivity and feedback. Despite being part of Princeton’s dept of Mechanical Engineering and Aerospace, they couldn’t find the place in the IPCC report where it SAYS how sensitivity is calculated.
So they deduced it from various numbers and then show the math to arrive at a sensitivity of 0.7 degrees per CO2 doubling rather than the IPCC’s 1 degree at the earth’s surface.
Andrew – I suppose this gives you a third option other than the two I already proposed. You could take the position that they plain and simple got the calculations wrong. I’ll just chalk up to complete coincidence the fact that if you take the equations in this paper and apply them to -20C at the tropopause, you’re gonna get 3.7 watts and a sensitivity of 1 degree.
http://www.princeton.edu/~lam/documents/RadPhys08.pdf

October 25, 2010 5:40 pm

How long does a line have to be before you consider it “straight”? You can see plenty of straight lines in geologic formations…

Joe Lalonde
October 25, 2010 6:01 pm

Willis,
This is exactly what I’m trying to show. Due to rotation, the majority of things move in arcs or circles.
But most of our science does not feel it, so it ignores it as non-existent!
Winds at different levels of our atmosphere do some pretty funky things depending on what factors are given them. The slow lining up of our low-level winds with the planet’s rotation makes heat dissipate more slowly.

eadler
October 25, 2010 7:20 pm

It seems from the arguments in this article that Willis Eschenbach doesn’t understand what climate sensitivity is.
Climate sensitivity relates to the change in temperature required to restore radiative equilibrium, driven by a given radiative imbalance at the initial condition, caused by a change in greenhouse gases.
Eschenbach objects to the use of climate models as the tool to determine whether the final equilibrium temperature is linear in the radiative forcing which drive the system to a new equilibrium temperature.
http://en.wikipedia.org/wiki/Climate_sensitivity
Let us take a look at the absurdity of his counter argument:

Before I discuss the oddity of that IPCC explanation, a short recap regarding climate sensitivity. I have held elsewhere that climate sensitivity changes with temperature. I will repeat the example I used to show how climate sensitivity goes down as temperature rises. This can be seen clearly in the tropics.
In the morning the tropical ocean and land is cool, and the skies are clear. As a result, the surface warms rapidly with increasing solar radiation. Climate sensitivity (which is the amount of temperature change for a given change in forcing) is high. High sensitivity, in other words, means that small changes in solar forcing make large changes in surface temperature.
By late morning, the surface has warmed significantly. As a result of the rising temperature, cumulus clouds start to form. They block some of the sun. After that, despite increasing solar forcing, the surface does not warm as fast as before. In other words, climate sensitivity is lower.
In the afternoon, with continued surface warming, thunderstorms start to form. These bring cool air and cool rain from aloft, and move warm air from the surface aloft. They cool the surface in those and a number of other ways. Since thunderstorms are generated in response to rising temperatures, further temperature increases are quickly countered by increasing numbers of thunderstorms. This brings climate sensitivity near to zero.

This argument has nothing to do with climate sensitivity at all. It doesn’t relate to any global average annual equilibrium temperature at all. It is a description of weather.
There is no way that observation of daily weather in a given locality can be used to infer anything about climate sensitivity.
The fact is that we have no way to do experiments to determine whether or not climate sensitivity as defined above, is a linear function of radiative forcing. The only way to do this is by modelling.
The whole premise of this post is nonsensical.

timetochooseagain
October 25, 2010 8:04 pm

davidmhoffer-It may well be that they erred in their calculations. At the moment I am hard pressed to find where they say that they mean the sensitivity of the surface temperature. However, I know that this is what everyone seems to think they are saying, including most of the “mainstream” scientists who talk about the sensitivity.
Maybe this is a huge fluke nobody has caught. It does seem to be rather odd that they are creating an opportunity for conflation of the surface and TOA responses.
This will require further examination.
~Andrew

davidmhoffer
October 25, 2010 8:10 pm

Eadler;
The fact is that we have no way to do experiments to determine whether or not climate sensitivity as defined above, is a linear function of radiative forcing. The only way to do this is by modelling. >>
You are kidding, right? For centuries science has been:
theory => model => predicted results
experiment => compare to predicted results => supports model/theory or doesn’t.
You now propose:
theory => model => done. The results of the model reflect the theory and should be considered proof of the theory. One can only shudder at the thought of the “new math” having morphed into the “new science”.

R. Craigen
October 25, 2010 8:39 pm

I’ve no problem with the anthropomorphism. My cats run for the hills whenever we get out the hoover to clean the carpet. The reason for this should be obvious to any physicist: nature abhors a vacuum!

George E. Smith
October 25, 2010 8:46 pm

“”””” Jeff Alberts says:
October 25, 2010 at 5:40 pm
How long does a line have to be before you consider it “straight”? You can see plenty of straight lines in geologic formations… “””””
Well, in mathematics a straight line is infinitely long and has no other dimensions. There is no such thing anywhere in the universe.

davidmhoffer
October 25, 2010 8:48 pm

timetochooseagain;
At the moment I am hard pressed to find where they say that they mean the sensitivity of the surface temperature… (snip) Maybe this is a huge fluke nobody has caught. It does seem to be rather odd that they are creating an opportunity for conflation of the surface and TOA responses.>>
I could point you at a couple of the places where it sorta almost not quite says it, but why should you be exempted from the hours of fruitless searching and cross-referencing that were such sheer joy for me. There’s one place where it makes a clear statement about forcing and surface temps, but the surface temps referred to are contained within the six economic scenarios, which each have ranges of CO2 production rates changing over time. Good luck correlating that back to degrees C versus forcing. Why the obscure references? Why not clear, concise definitions? Why not produce simple graphs depicting tropopause forcing (note – they say “at the tropopause” in almost all cases, not TOA. Is there a difference? I dunno. THEY DON’T SAY) versus surface temps? You’d think there would be ONE chart or diagram showing that. Just ONE.
I shall answer that question two ways.
1. When you wish to write a report not supported by fact, be selective of the information you use, write of it in vast quantities, and in excruciating detail. No one will notice.
2. The “climate sensitivity” discussion distracts from a discussion of how it would exhibit itself. Suppose for a moment that the effective black body temp of earth increased by 3 degrees. What would that mean at surface? 2 degrees. But that’s an average. What would that mean at say the arctic circle. (guestimates from here to illustrate but supportable guestimates) arctic circle – 6 degrees. All year? No, plus 10 in winter plus 2 in summer. Lows of -30 instead of -40 and highs of +4 instead of +2. Equator? Plus 1. All year? Pretty much, pretty stable temps there. Oh, let’s not forget night versus day. Nights +20 instead of +19 and days +30.2 instead of 30.0.
THAT is the discussion the warmists want to avoid. Let’s do a survey. Hey all you polar bears out there, listen up. No seriously, put down your Coke and pay attention. By a show of paws, how many of you think that living through winters at -30 instead of -40 is a bad thing? None? I thought not. How about summers? Can you handle +4 instead of +2? Paws up if that’s gonna kill you guys because you might not know it but you’re the climate canaries. Those clown fish at the equator are useless, I asked them about day time highs going up 0.1 degrees and they just blew bubbles at me like the answer was obvious. No, you can’t eat the clown fish. No, not the canaries either. Why? Never mind why, just vote. What? No, the seals won’t change color, they’ll still be black. No! There aren’t any half black seals either, now would you just vote- OK, I get it. Funny polar bears.
I could go with the IPCC upper estimate of 4.5 degrees, or hey, why not 8? You still come back to the same analysis. Most of the warming happens in the coldest latitudes, in the coldest seasons, and at the coldest part of the day. The high temps don’t change all that much. Not even the IPCC has argued that Stefan-Boltzmann doesn’t apply.
Oh great, the polar bears all shook their Cokes and are spraying me.

AusieDan
October 25, 2010 8:49 pm

Willis – it’s all very simple.
As you say, they use linear models and get linear outcomes.
QEQ rather than QED I would say.
Climate is a chaotic (non linear) system.
Modelling climate with linear models gives – gasp – linear outcomes.
QEQ not QED
There – that’s proved it
– belief in disastrous global disruption is quite disastrous.
(Disastrous to clear thinking)
That’s all.

AusieDan
October 25, 2010 8:55 pm

Jerry says:
October 25, 2010 at 2:45 am
Are you comparing apples to oranges?
Is it possible the models are working on much longer time scales than say hourly at a location?
Perhaps you are both right (Wills and ‘the models’) Willis is correct on very short time scales? The models are correct on very long time scales?
UNQUOTE
Well Jerry
– we’ll just have to wait 100 years or so to find out who’s right and who’s wrong –
Willis with his data or the experts with their programs.
In the meanwhile let’s just destroy industrialisation, based on a hunch.

George E. Smith
October 25, 2010 8:56 pm

“”””” Francisco says:
October 25, 2010 at 4:21 pm
Tom says:
October 25, 2010 at 9:08 am
Ohm’s law works nicely at the scales encountered for general circuit design. “””””
Congratulations Francisco; you are without exception the very first person I have encountered in 50 years who actually knows what Ohm’s law is. That is a question I have asked every candidate for a circuit-designer job I have ever interviewed, from technicians to Stanford PhDs. And I always get E = I.R or some variant.
You are the first to note that what Georg Simon Ohm actually discovered was that for a certain class of materials (mostly metallic conductors, with all other physical conditions held constant) the relationship between the current flow and the applied voltage is a LINEAR one.
And as you say, very damn few materials are linear in the sense Ohm discovered.
So Ohm’s law says: R = constant.

tokyoboy
October 25, 2010 9:00 pm

The place where I live has witnessed a 3.5-degC rise in temperature, no doubt due to UHI in this megalopolis:
http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=210476620003&data_set=1&num_neighbors=1
Apparently nobody is troubled by this. No cataclysmic catastrophe has been noted over the past decades.
What are you worrying about, Mr. Eadler????

savethesharks
October 25, 2010 9:03 pm

The IPCC argument, that temperature is linearly related to forcing, is at the heart of their claims and their models. I have shown elsewhere that in other complex systems, such an assumed linearity of forcing and response does not exist.
=============================================
Well said, Willis.
Chris
Norfolk, VA, USA

October 25, 2010 10:19 pm

Willis, as usual another understandable post with a point. I really prefer the ones that contain lots of side thoughts and interesting life experiences but I do understand you write to inform and educate. You brought on some controversy but I did not discern where it was justified. Reading through all those comments has tired both my eyes and my brain. Goodnight

Joe Lalonde
October 26, 2010 1:41 am

George E. Smith says:
Well in mathematics a straight line is infinitely long and has no other dimensions. There is no such thng anywhere in the universe.
Actually you are mistaken.
Quantum physics does this in a lab using light and lasers.
And second, every picture showing heat and air movement over this planet’s surface uses straight lines, when rotation actually makes them curve.

Tom
October 26, 2010 1:49 am

@ George E. Smith – Getting off topic, I suspect, but an interesting discussion all the same.
What Ohm discovered was that the ratio of voltage to current is constant. What we describe and use as Ohm’s law today is something very different. In a sense it is not a law but a definition – we define a quantity called impedance which is the ratio between voltage and current and we describe various easy ways of finding the impedance of materials and devices, often in terms derived from complex analysis. What Ohm described is not very useful in actually designing a useful circuit – what is today described as Ohm’s law is much more useful.
Of course you know this, and I am teaching you to suck eggs, and I’m sorry for that. But stating Ohm’s law as ‘R is a constant in certain materials’ is not a very good representation of the state of the field today, and judging job candidates on whether they happen to describe it the same way you do, ignoring the past 150 years of development in the field, might be a trifle unfair.

Espen
October 26, 2010 1:57 am

Dave F says:
October 25, 2010 at 1:24 pm
Actually, Willis, you can see the greater sensitivity at lower temperatures in the DMI graphs for the Arctic circle.
It is also clearly visible in individual Arctic temperature records, for instance Cambridge Bay in Arctic Canada:
http://www.wunderground.com/cgi-bin/histGraphAll?day=26&year=2009&month=10&dayend=26&yearend=2010&monthend=10&ID=CYCB&type=6&width=500
Also, just look at the temperature reconstructions from ice cores for the last glacial compared to this interglacial: During the glacial, temperatures jumped wildly.
Warm climate is stable, cold climate is unstable. There are very good reasons to believe that global warming would only be benign, while global cooling can turn into a glacial catastrophe within a very short time.

TomVonk
October 26, 2010 2:34 am

Jason Caley
I certainly may be misunderstanding things, but I believe that the linear relationship for small variations of x does not hold in chaotic systems. This is not to say that NO values of x have a linear relationship to y, but only to point out that not ALL values of x have that linear relationship. After all, even chaotic systems have attractors and islands of stability.
No, Jason Caley.
The linearity is a very general mathematical result valid for any function (provided some regularity technicalities).
For small variations of some variable, all systems answer almost exactly in a linear way, because the function is then simply represented by only the first (linear) term of its Taylor expansion.
This is true for chaotic and non-chaotic systems alike.
Of course one must not confuse the necessarily linear answer to small perturbations of systems in equilibrium with a full-scale answer over long times.
As chaotic systems are out of equilibrium and strongly non-linear, their “linear lifetime” is extremely short. But it exists.
AnnaV
You are perfectly right!
It is precisely this argument that made me interested in the climate “science” for the first time, 12 years ago.
There is no, absolutely NO, fundamental difference between weather and climate.
Or, using your analogy with letters and words: climate is not a word meta-level of weather, which would be the letter level.
Climate science is just about taking the honest-to-God fluid dynamics, thermodynamics, and quantum mechanics equations that govern the weather and then atrociously mutilating them so that nobody recognizes them anymore.
Pretending that a new meta-level of the dynamics (!) can be found by simply averaging solutions of the original equations (which they don’t know anyway) is such an extraordinary claim that it would need more than extraordinary proofs, with EXTREMELY rigorous mathematics.
It is more than clear that naive computer simulations don’t fall into that category at all.
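TomVonk's Taylor-expansion claim can be demonstrated with any smooth nonlinear function. A minimal sketch (using the idealized blackbody response T(F) = (F/sigma)^0.25 as the example, my choice rather than his): the relative error of the first-order term shrinks in proportion to the size of the perturbation.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def T(F):
    """Nonlinear response: equilibrium blackbody temperature for flux F."""
    return (F / SIGMA) ** 0.25

def dT_linear(F, dF):
    """First (linear) term of the Taylor expansion: T'(F) * dF,
    where T'(F) = T(F) / (4F)."""
    return T(F) / (4 * F) * dF

F0 = 240.0
for dF in (10.0, 1.0, 0.1):
    exact = T(F0 + dF) - T(F0)
    approx = dT_linear(F0, dF)
    print(dF, abs(exact - approx) / exact)  # relative error shrinks with dF
```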

Eric (skeptic)
October 26, 2010 2:49 am

eadler said on October 25, 2010 at 7:20 pm: “Climate sensitivity relates to the change in temperature required to restore radiative equilibrium, driven by a given radiative imbalance at the initial condition, caused by a change in greenhouse gases.”
How does the weather in the locality described by Willis know that there is a worldwide radiative imbalance and that radiative equilibrium needs to be restored? Ans: it doesn’t. The weather is always local; there is no way that weather can restore a global radiative equilibrium. So eadler is right in one respect: the earth will get warmer due to CO2 radiative imbalance. But he is wrong in supporting the modeled sensitivity that includes globally positive feedback from increased (globally averaged) water vapor. That is merely a model artifact based on parameterized weather. The real-world changes in water vapor due to slight CO2 warming will cause both warming and cooling depending on local weather. The global climate models are hopelessly too coarse to predict that warming and cooling. Someday, maybe, when climate models have the resolution of weather models. But not right now.
Perhaps eadler would like to address the tropics and show a fine grained tropical weather model with positive feedback?

jessie
October 26, 2010 3:45 am

John Day says:
October 25, 2010 at 12:17 pm
The good news is that most of these instruments (note weaseling) are so reliable that we don’t worry about them. It’s the ‘clocks’ and ‘thermometers’ used for climate modeling that we should be concerned about.
😐
______________________________
Isn’t this the heart of the argument?
And if not, the instruments; that were recalibrated, to provide specificity? To predict and model ‘sensitivity’?
Anyone that has had the honour of observing and using [as decried]:-
● blood letting
● leeches
● mercury sphygmomanometer
● aneroid sphygmomanometer
[● pressure bandages as an interim to direct infusion]
in observation, also the use of analogue then digital visuals [read-out] would question the basis of the argument.
Instruments used for eg Haemoglobin [Hb] presented the same dilemma, when used with practiced observation. Pathology, tax funded, titrated the goal posts.

Francisco
October 26, 2010 6:04 am

Tom,
The amount of conceptual confusion generated by something as straightforward as Ohm’s law is perplexing. The most common source of confusion seems to consist in conflating Ohm’s law with a possible definition of resistance (or even impedance or conductance). There are actually raging discussions over what this “law” might mean. The conversation below is representative of these various forms of fog, and the final paragraph sums up the matter rather well.
http://www.archimedesplutonium.com/File1994_07-08.html
>
>We have two contradictory claims regarding Ohm’s Law.
>
>vbv@… claims that the R in Ohm’s Law (V=IR) is constant, that the “law” is only an approximation for certain materials, and that this “law” does not actually apply in all situations.
>
>kheidens@… says that Ohm’s Law does indeed apply in all situations, even though Ohm never actually intended it that way.
>
>Is there some physicist out there (preferably a Ph.D.) who can help settle this issue once and for all, in a level-headed manner?
I would hope that any undergraduate physics major could answer this question. It does not require a PhD. Ohm’s law DOES NOT apply in all situations. It doesn’t come close to describing the I-V curve of a diode, for example. Even for uniform materials, it is only a phenomenological approximation of a very complex theory of electron transport, and fails all over the place to one extent or another.
Of course, if you are willing to define R(x,t,T,…) as a variable quantity such that V/I = R, then you can declare that Ohm’s “Law” works all the time. But then you have turned it into a useless definition of a time-dependent, voltage-dependent, etc., resistance — no longer a law at all.
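Francisco's diode example is easy to make concrete. A hypothetical sketch comparing the V/I ratio of an ohmic resistor with that of a diode described by the Shockley equation (idealized parameter values assumed):

```python
import math

def resistor_current(V, R=100.0):
    """Ohmic conductor: I = V/R, so V/I is the constant R."""
    return V / R

def diode_current(V, I_s=1e-12, n=1.0, Vt=0.02585):
    """Shockley diode equation: I = I_s * (exp(V/(n*Vt)) - 1)."""
    return I_s * math.expm1(V / (n * Vt))

for V in (0.5, 0.6, 0.7):
    print(f"V={V}: resistor V/I = {V / resistor_current(V):.1f} ohm, "
          f"diode V/I = {V / diode_current(V):.3g} ohm")
```

The resistor's V/I stays at 100 ohms; the diode's falls by orders of magnitude over a couple of tenths of a volt, so "V/I = constant" simply never held for it.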

MarkR
October 26, 2010 6:06 am

Willis, you’ve made a critical mistake here:
“‘We measured solar radiation and downwelling longwave radiation and temperature at this location, and guess what? Temperatures changed linearly with the changes in radiation.’ I didn’t see anything at all like that, you know, actual scientific observations that support linearity.”
Climate sensitivity is the _global_ response to a _global_ change. You can’t do it at a single location because heat can move to and from that location from elsewhere. The simplest demonstration of this is a thought experiment with a sphere in a vacuum. Apply radiative forcing RF1 at one point of the sphere and measure the temperature response dT1 there.
Consider an insulating sphere of surface area 100,000 sq km. Heat up 1 sq km with RF1 sufficient to raise the temperature by 1 C. This is the actual sensitivity of the sphere to a global change in heating.
Next, consider the situation where the sphere conducts well. The heat will be conducted around the sphere until it’s isothermal and using your logic the sensitivity is therefore 0.00001 C. But it’s not, you’re wrong by a factor of a hundred thousand in this case.
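The arithmetic of that thought experiment is simple enough to write out, using MarkR’s own hypothetical numbers:

```python
area_total = 100_000.0   # sq km, the whole sphere (MarkR's figure)
area_heated = 1.0        # sq km patch where the forcing is applied
dT_local = 1.0           # deg C rise of the patch on the insulating sphere

# On a perfectly conducting sphere the same energy spreads over the
# whole surface, so the isothermal rise scales by the area ratio.
dT_global = dT_local * area_heated / area_total
print(dT_global)                    # 1e-05 deg C
print(round(dT_local / dT_global))  # factor of 100,000 between the two readings
```

The two “sensitivities” differ by the area ratio, which is MarkR’s point: a single-location response tells you nothing by itself about the global response.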
This is only one of the physical mistakes in your assumption, but it’s enough to demonstrate a serious flaw.
We have estimates of climate sensitivity to global forcings from palaeoclimate, see e.g. Knutti & Hegerl, 2008.

Eric (skeptic)
October 26, 2010 7:59 am

MarkR (October 26, 2010 at 6:06 am) “We have estimates of climate sensitivity to global forcings from palaeoclimate, see e.g. Knutti & Hegerl, 2008.”
Consider the case of a forcing such as solar magnetic. Suppose solar magnetic changes caused a temperature increase some time in the past. Some hundreds of years later, CO2 followed with an increase (which fed back to the temperature increase). Since your two measurements are CO2 and temperature, you cannot determine the “sensitivity” of temperature to CO2, because part of the temperature increase came from the third factor. The real world has several more forcings beyond the one in my example.
Reading Knutti & Hegerl, 2008, I see no estimates of paleo forcings. Maybe I missed it? They prove a “water vapor feedback” in a linear model based on an ocean current response to a volcanic forcing, but the rest of the paper ignores oceans and only considers CO2->warming->CO2 feedback, which as I point out, is impossible to distinguish from other forcings.

George E. Smith
October 26, 2010 8:10 am

“”” Joe Lalonde says:
October 26, 2010 at 1:41 am
George E. Smith says:
Well in mathematics a straight line is infinitely long and has no other dimensions. There is no such thing anywhere in the universe.
Actually you are mistaken.
Quantum physics does this in a lab using light and lasers. “””
No I’m not. There is no such thing as a laser beam that has zero thickness. Everybody knows that a (TE00) laser beam has a minimum beam waist dimension, which along with the wavelength determines its angular divergence; so a laser beam is a whole lot closer to a cone (which also does not exist) than it is to a line. And that divergence goes inversely as the size of the beam waist, so if even the beam waist of the “beam” were zero, the straight line beam would actually be a hemisphere (another fiction) of light extending over a 90 degree (1/2) cone angle; you would literally have a point source of light. There’s no such thing as a point source of light either. In classical physics you cannot satisfy the boundary conditions of Maxwell’s equations for a point radiator of electromagnetic waves; and in quantum physics, Heisenberg’s principle of uncertainty would require that source to have an infinite momentum uncertainty, so it would be spread infinitely in wavelength and there would be no detectable power at any wavelength you tried to observe, unless the total radiated power was also infinite. Too many zeros and infinities to satisfy any reality constraints.
I believe that one of Stephen Hawking’s discomforts with the big bang theory is the presumed singularity that exists at the instant of its creation. For guys like him everything interesting in the field of archeo-physics happened in the first 10^-43 seconds after the big bang.

Francisco
October 26, 2010 8:10 am

George E. Smith says
EVERY single concept that we introduce or discuss in MATHEMATICS whether POINTS, LINES, PLANES, CIRCLES, SPHERES, is a complete fiction that we made up in our heads. Absolutely nothing in mathematics actually exists anywhere in the universe as a real object; they are all fake.
====================
Well, whether these mental entities are real or not, and in what sense, is a matter of philosophical discussion.
In any case, even basic idealized geometry does *not* suggest that nature should be linear. If anything, it suggests the exact opposite. Consider any two points in a Euclidean space. You observe that:
a) The number of curved lines between them is infinite,
b) The number of straight lines between them is exactly 1.
So the final score in that match is really, really an amazing rout. The one goal scored by the Straights may save some pride, but not much. The Curves rule.
So even if you adopt a neo-Platonic perspective, where such pure geometrical entities not only have real existence but are the actual building blocks of a “meta-reality”, we see that straight lines are no match for curves. Why should it be surprising that they are so awfully hard to find in phenomenal reality as well? In fact, some may tell you they cannot be found at all. I tend to agree. Nature is overwhelmingly nonlinear.

George E. Smith
October 26, 2010 9:00 am

“”” Tom says:
October 26, 2010 at 1:49 am
@ George E. Smith – Getting off topic, I suspect, but an interesting discussion all the same.
What Ohm discovered was that the ratio of voltage to current is constant. What we describe and use as Ohm’s law today is something very different. In a sense it is not a law but a definition – we define a quantity called impedance which is the ratio between voltage and current and we describe various easy ways of finding the impedance of materials and devices, often in terms derived from complex analysis. What Ohm described is not very useful in actually designing a useful circuit – what is today described as Ohm’s law is much more useful.
Of course you know this, and I am teaching you to suck eggs, and I’m sorry for that. But stating Ohm’s law as ‘R is a constant in certain materials’ is not a very good representation of the state of the field today, and judging job candidates on whether they happen to regard the past 150 years of development in the field the same way you do might be a trifle unfair. “””
Well E = I.R is simply a definition of resistance; but ONLY for materials that DO obey Ohm’s Law (R = c).
For any other case you would have to use dE/dI = R’.
For example: if you built a feedback amplifier of, say, the transimpedance variety, and the feedback element was a highly non-linear “resistor”, and you used E = I.R to get a value for that “resistor”, you would most certainly end up with the wrong gain; and your stability analysis could also be off, and the thing might oscillate instead of being stable, based on your calculation using the wrong values. As I use the term “resistor”, it is by definition (to me) both linear and frequency independent. A classical derivation of the input behavior of a vacuum tube amplifier with inductance in the cathode lead (the same thing works for transistors) arrives at an input circuit with a shunt “conductance” that varies as 1/f^2. That loss mechanism loads a typical input tuned circuit in, say, an IF amplifier, kills the Q, and limits amplifier performance.
Well anything that varies as 1/f^2 is NOT a resistor or conductor, both of which are frequency independent.
A correct circuit analysis of the problem shows that the correct input equivalent circuit in fact contains a SERIES resistor having a FIXED value of gm.L/C, where L is the cathode lead inductance, C is the grid-cathode capacitance, and gm is of course the transconductance of the tube. The exact same parameters apply in transistor circuits; and even though L and C may be very much smaller, gm is often very much higher, so it is still an important problem in high speed amplifiers. The input equivalent circuit is in fact that resistor in series with both C and L.
The result of not being pedantic is that invariably mis-communication results.
People think “climate sensitivity” is a log relationship, when maybe it is just a non-linear relationship (who knows).
But a log function is a precisely defined mathematical function, and you can’t just slap the title on any non-linear curve that doesn’t satisfy those strict criteria.
Mauna Loa and GISSTemp together demonstrate that climate sensitivity most certainly is NOT a log function.
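For reference, the “log relationship” being disputed here is usually traced to the simplified forcing expression of Myhre et al. (1998), ΔF = 5.35 ln(C/C0) W/m², which the IPCC adopted. A quick sketch of what that expression predicts:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    # Myhre et al. (1998) simplified expression: dF = 5.35 * ln(C/C0) W/m^2
    return 5.35 * math.log(c_ppm / c0_ppm)

print(round(co2_forcing(560.0), 2))   # one doubling: ~3.71 W/m^2
print(round(co2_forcing(1120.0), 2))  # two doublings: ~7.42 W/m^2, equal increments per doubling
```

Note that this logarithm relates forcing to concentration; the linearity questioned in the head post is the separate step from forcing to temperature.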
And I have in fact built integrated feedback amplifiers that actually did use non-linear feedback “resistors” (made from MOSFETs); and not only was the element non-linear, but it was also highly unpredictable in value in production. The final design was a very high gain integrated photo-amplifier (including photo-detector); and despite the use of non-linear elements of unpredictable value, the amplifier was quite linear in its response to photo-currents, and was also of quite predictable gain. The trick was NOT to use the transimpedance feedback amplifier architecture that whole generations of analog circuit designers (if there are any left) can’t see beyond, but to use a current gain feedback architecture where the input and output signals are BOTH currents, and the current gain is determined by the ratio of two unpredictable non-linear resistors which, by virtue of the layout architecture, had exactly (well, closely) the same non-linearity and value uncertainties. It produced a 500:1 closed loop current gain from a femtoamp input photo-current, with a 3 dB bandwidth of half a megahertz. Try doing that with your standard op-amp transimpedance photo-amplifier. So far as I know it is still the highest gain-bandwidth, highest sensitivity silicon photo-detector.
One of our engineers, who is a ham, actually used one as a detector in a telescope, with an AA-cell-powered LED indicator lamp as the “transmitter”, to operate a voice signal transmission across the whole width of Silicon Valley on a ham field day. He and his mate were the only ones in the whole world to record a communication on that “band” that day; but that is getting a bit off subject.
As for judging a candidate: the idea is to identify the exceptional from the ordinary before applying OTHER CRITERIA; and I think there is a lot of merit in knowing what scientists actually said or did, rather than what umpteen third-party rehashes like to claim they did.
In fact a big part of this whole climate debate revolves around what researchers said or didn’t say or research.
For example I still haven’t been able to get from anybody who frequents this or any other site just EXACTLY WHO it was who FIRST invented the concept of CLIMATE SENSITIVITY and defined it as being a logarithmic relationship; which it evidently is by definition only, since no data nor basic physical theory supports such a relationship.
And I notice that Dr Lacis dismisses climate change “skeptics” as being incompetent in understanding radiative transfer Physics.
I assume that he includes Spencer, Christy, and Lindzen in that list of incompetents, and a whole host of others. Well that enables him to dismiss them with a “find a radiation physics for dummies” advice. To give him credit, he did give some suggestions as to some books to look at. It’s always good to know what they teach in schools these days.

George E. Smith
October 26, 2010 9:15 am

Well the concepts of mathematics and all the elements thereof, can somewhat precisely describe the behavior of our MODELS of reality; where in fact they do exist; which is why we invented them all along with the models. But the whole gamut of them are a fiction; and we can only hope to approximate with our MODELS the observed behavior of the REAL universe; which itself is far too complex to fully describe.
The distinction is important; because if we cannot separate models from reality; then we are doomed to live in a world of exclusivity. The wave-particle duality would have to be abandoned.
The purpose of models and theories is to explain, to the extent we can, or predict the outcome of experiments, both those we have conducted and those that have never been conducted. The model (theory) only has value to the degree that its explanations are consistent with reality. To that end, they do not even have to be unique. The wave-particle duality is actually two different models of the same phenomena; and sometimes one of them is easier to use, while in other circumstances the other might be better. They also don’t have to “jibe” with “common sense”.
String theory does not jibe with common sense. How can something be fundamental; a primitive building block; and yet it wriggles. Things that wriggle, common sense tells us must be built up of things that are even more basic.
But even though string theory (which I am not a fan of) may seem quite beyond common sense, it will succeed or fail only on whether or not it correctly explains or predicts the outcomes of experiments, including ones yet to be performed.

eadler
October 26, 2010 9:45 am

davidmhoffer says:
October 25, 2010 at 8:10 pm
Eadler;
The fact is that we have no way to do experiments to determine whether or not climate sensitivity as defined above, is a linear function of radiative forcing. The only way to do this is by modelling. >>
You are kidding, right? For centuries science has been:
theory => model => predicted results
experiment => compare to predicted results => supports model/theory or doesn’t.
You now propose:
theory => proof of the theory. One can only shudder at the thought of the “new math” having morphed into the “new science”.

What you have said is that science is done by comparing model predictions to experiment. This is certainly true. That doesn’t mean that the question of whether climate sensitivity is a linear function of the forcings can be determined by the data we have available from observations of climate. The data on forcings and the resulting evolution of global average temperature are too sparse to carry out such a verification.
What has been shown using this procedure is that the climate sensitivity calculated from models is in the same range as the climate sensitivity deduced from paleo data.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.172.3264&rep=rep1&type=pdf
Climate sensitivity estimated from ensemble simulations of glacial climate

………
Our study shows that the LGM climate is likely to constrain the upper end of the ΔT2x range. At least within the range of processes captured by our model—and provided that the simulated relationship between LGM cooling and CO2 warming covered in our ensemble does not strongly differ from that simulated by different GCMs (see Section 5)—a high ΔT2x (larger than 5.3°C) cannot be reconciled with most recent paleo-proxy estimates of cooling between the LGM and pre-industrial climate. ….

Eric (skeptic)
October 26, 2010 10:33 am

eadler, you are comparing models of climate to models of climate which of course match. The only way you can validate models with paleo data is to have measurements of the forcings that caused the warming in that paleo era. Unfortunately the authors in the paper you linked do not have measurements, they just make up some forcings.
The main reason that these models cannot calculate sensitivity is that they do not model weather at sufficient resolution to determine the distribution (the evenness, especially in the tropical troposphere) of water vapor which is their postulated positive feedback. In fact small scale convection is a parameter in the model (an input) so what they are really doing is inputting a major aspect of sensitivity itself.

Ben of Houston
October 26, 2010 1:23 pm

I have read some very well-reasoned points, but most forget one thing.
The burden of proof does not lie with Willis, but with the IPCC. Willis’s claims are not proof that the relationship is non-linear, but they are sufficient to conclude that the linearity claim needs strong supporting evidence from natural observations. This data is not provided by the IPCC, and if it is provided in their supporting documentation, they made no mention of it in their own text.

Joe Lalonde
October 26, 2010 1:37 pm

George E. Smith says:
October 26, 2010 at 8:10 am
I stand corrected and admit this mistake.
Light does dissipate and expand with distance.

George E. Smith
October 26, 2010 3:12 pm

“”” Joe Lalonde says:
October 26, 2010 at 1:37 pm
George E. Smith says:
October 26, 2010 at 8:10 am
I stand corrected and admit this mistake.
Light does dissipate and expand with distance. “””
Joe; my point is not to be argumentative; that’s not a useful game; it’s not even a game.
For a start; let’s agree that “engineering mathematics” and “pure mathematics” are not quite the same thing. Well Engineering mathematics might also be described as “Applied Mathematics”.
The engineer’s aim is to solve the problem; drain the swamp or whatever; and his attitude is that “if I can get an answer it is the right answer.”
The pure mathematician will spend a lot of time on “existence theorems” to establish that ANY answer even exists; or, if you get an answer, that it is indeed the right answer.
If you ask the engineer to prove the conjecture “An absolutely convergent infinite series converges,” you are going to get a ROFLMAO response. He’s going to laugh his head off and then say: “well if it is absolutely convergent, then of course it converges; by the way, what the hell is absolutely convergent, and why do I care?”
Well the pure mathematician cares; so he develops a rigorous proof of that theorem. So why does it matter ? Well the engineer figures if he adds up a number of things, it doesn’t really matter what order he sums them in; even for an infinite number of things; so he might rearrange the order to make it easier to sum them.
The Pure Mathematician will tell him; Hey you can’t do that; unless the infinite series is absolutely convergent; which means that the sequence of the absolute values of the terms in the series; is also a convergent series.
An infinite series of alternating sign terms that IS NOT absolutely convergent can be summed to ANY POSSIBLE ANSWER YOU WANT; depending on the order in which you sum the terms.
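That claim is the Riemann rearrangement theorem, and it is easy to watch it happen numerically. A minimal sketch (the greedy rearrangement below is a standard textbook construction, not anything from the thread):

```python
import math

def rearranged_sum(target, n_terms=200_000):
    """Greedily reorder the alternating harmonic series 1 - 1/2 + 1/3 - ...
    (convergent but NOT absolutely convergent) to approach any target."""
    total = 0.0
    pos, neg = 1, 2   # next odd (positive) and even (negative) denominators
    for _ in range(n_terms):
        if total <= target:
            total += 1.0 / pos   # take the next positive term
            pos += 2
        else:
            total -= 1.0 / neg   # take the next negative term
            neg += 2
    return total

# In its natural order the series sums to ln 2 ~ 0.693; the very same
# terms, reordered, can be made to converge to 2.0 (or anything else).
print(round(rearranged_sum(math.log(2)), 3))
print(round(rearranged_sum(2.0), 3))
```

Every term of the original series is used exactly once; only the order changes, and with it the sum, which is exactly why the pure mathematician insists on absolute convergence before you rearrange.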
Now the engineer with his applied maths is NOT incompetent, just not complete; and he may be on very solid ground, because real-life physical systems almost never yield equations or mathematical descriptions that are not well behaved; so if the engineer gets his answer, it is unlikely that it is not the correct answer (assuming he did the math properly).
The pure mathematician is likely more interested in the circumstances under which the first answer you get might in fact not be the correct answer; but those will be pure mathematics problems rather than engineering math problems.
Same with the Ohm’s law example. Ohm’s law as in R = c is a physics problem. But as E = I.R it is a simple problem in circuit design theory; which itself is actually a model of a real system; and not the real system itself.
The circuit designer is going to use the latter; and maybe not care too much about the former unless he really does have a peculiar interest in linearity; and he may even pay attention to Ohm’s caveat; that all other physical parameters must be kept constant; like the Temperature for example.
But if we let ourselves get too sloppy in interpreting what we thought somebody did or said; as distinct from what they actually did or said; then sooner or later it will come back to bite us.
When I went to school, a BSc in Mathematics required majors in both Pure and Applied Mathematics. Today I suppose there might be 27 majors in Mathematics. A BSc in Physics on the other hand required only one major; but there were three of those to choose from:- Physics (HLSE&M etc), Radio-Physics (electronics, propagation EM theory Ionospheric physics etc), and Mathematical Physics which was all that Vector, Field theory, and Green’s Theorem kind of stuff; and you had to choose one of those in your third year.
So I did all five of those majors; in two years (3rd and 4th). And I do use more engineering math than pure; but the other keeps me honest.
Well that was then; don’t know what they teach now; my son is doing engineering now; and I’m damned if I know what they are teaching him; maybe green is cool !

davidmhoffer
October 26, 2010 3:27 pm

eadler;
What has been shown using this procedure is that climate sensitivity calculated from models are in the same range as climate sensitivity deduced from paleo data.>>
OK, so you are suggesting that if a climate model and a climate simulation are in the same range, they are proof of each other. Wait. Isn’t a simulation a model? Yes it is. Two different models, same answer, they must be right. Tell me please, which models should we use? The ones that show the MWP and the LIA or the ones that don’t? Which simulations shall we use? The ones that show the MWP and the LIA or the ones that don’t? Oh I get it, choose the ones that get the same answers and VOILA! Proof! All the others must be wrong.
That being the case, I guess warming has continued unabated for the last 15 years, sea levels have gone up a meter or so, severe weather events have increased, people are dying in droves from heat but not from cold, there are way more hurricanes, crops have failed world wide due to drought and the polar bears have increased their population fourfold as a consequence of going extinct.
Yes, I see clearly now: models and simulations in agreement are proof of sensitivity in the past. Trying to correlate their predictions with actual outcomes is an experiment just not worth carrying out, and we don’t need to, since we already know they are proof of each other.
new math => new science => new reality.

Joe Lalonde
October 26, 2010 7:05 pm

George E. Smith says:
October 26, 2010 at 3:12 pm
You are very much correct.
Science has been very sloppy and complacent with just generalizing and not looking for absolute accuracy.
I have a design for power generation that engineers have no problem saying will run. Now add a physics analysis of efficiency, and the engineer admitted it was beyond his area to even speculate; it would have to go to a university, and would probably take years to understand the motion physics I had discovered and the designs showing how this new physics can very easily be recreated.
I have yet to find that University.

Francisco
October 26, 2010 7:32 pm

The Modellers
http://www.climateaudit.org/?p=2565#comments
Chapter 1 of AR4 has some surprisingly interesting comments about models that, to the extent that the points are disclosed in the body chapters, are disclosed so opaquely that they would be undecipherable to anyone other than a few. Here are some interesting comments about flux adjustment – an issue that must surely raise civilian eyebrows. A “flux adjustment” in a GCM is defined below as an “empirical correction that could not be justified on physical principles” i.e. a fudge factor, and one of the accomplishments of recent GCMs has been to apparently get past that. AR4:
[…]
Demesure says: January 7th, 2008 at 1:17 am
For the French non flux-ajusted model LMD/ISPL (team of Hervé le Treut, lead Author of Chapter 1 of WG1 AR4), here is the archive http://web.lmd.jussieu.fr/~lmdz/LMDZ-info/ of internal correspondances between modellers.
It seems they have divergence problems of their own, for example some quite “funny” and illustrative translations in this letter: http://web.lmd.jussieu.fr/~lmdz/CRs/LMDZ/CR17012002
– “Olivier has mentionned the problem of snow accumulation reaching several km must be resolved”
– “Flux comparisons between top and bottom atmosphere show a discrepancy of about a dozen W/m2, it’s too much”
– “Zonal means show a big cold biais (5 to 15°C) at the tropopause”

Bill Illis
October 26, 2010 7:38 pm

eadler says:
October 26, 2010 at 9:45 am
What has been shown using this procedure is that climate sensitivity calculated from models are in the same range as climate sensitivity deduced from paleo data.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.172.3264&rep=rep1&type=pdf
Climate sensitivity estimated from ensemble simulations
of glacial climate
————————–
I imagine you realize this paper is just like all the other climate model simulations of the ice ages. Not a single one outlines what albedo they used for all that extra glacier and sea ice. In fact, it is very telling that they do not outline the single most important factor in why temperatures fell so much in the ice ages.
This paper is just as useless as all the other ones using a climate model when the biggest key assumptions used are not outlined. In fact, I am tired of reading them looking for the key assumptions to be outlined properly – they never are. Throw this one into the pile marked “another climate paper that does not use objective data – it says whatever the author wanted it to say.” That pile will be the biggest one in your office so you don’t have to search for it.

Spector
October 27, 2010 12:35 am

As far as I can tell, it is possible to replace any curve with a straight tangent-line section, as long as you make sure to limit the scope of your results to the narrow range of the curve where tangency applies with reasonable accuracy.
The problem I see here is that this technical limitation tends to get lost in translation, especially as written up by the popular press, and a linear approximation that may be valid over a temperature range of five or ten degrees may be promulgated as a universal relationship.
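Spector’s point can be illustrated with a nonlinear curve everyone in this thread accepts, the Stefan-Boltzmann relation T = (F/σ)^(1/4). A sketch (the flux values below are illustrative round numbers, not a climate calculation):

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def temp(f):
    # Blackbody equilibrium temperature for absorbed flux f (W/m^2)
    return (f / SIGMA) ** 0.25

f0 = 390.0                  # expansion point, W/m^2 (illustrative)
t0 = temp(f0)
slope = t0 / (4.0 * f0)     # dT/dF = T/(4F): the tangent-line "sensitivity"

def tangent(f):
    # Linear (tangent-line) approximation about f0
    return t0 + slope * (f - f0)

for df in (5.0, 50.0, 200.0):
    err = temp(f0 + df) - tangent(f0 + df)
    print(f"dF = {df:5.1f} W/m^2 -> linearization error {err:+.3f} K")
```

Near f0 the tangent is excellent; push the excursion out to 200 W/m² and it overstates the response by several kelvin, which is exactly the “scope of applicability” problem Spector describes.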

Jessie
October 27, 2010 4:42 am

Willis Eschenbach says:
October 26, 2010 at 5:35 pm
I think I am falling in love with your specificity on sensitivity, large cognitive bias though ….. tropics gal.
Looking forward to next post.
Bill Illis says:
October 26, 2010 at 7:38 pm
Please endnote that pile for future use
Spector says:
October 27, 2010 at 12:35 am
Check out what software the journalists have had supplied in the past few years- maybe they couldn’t publish curves and went for the headers with the quickest graphics?

Frank
October 27, 2010 11:13 am

Willis: Let’s think more deeply about your tropical thunderstorms. Do they always begin to limit warming when surface temperature (2 m) rises to a fixed (or roughly fixed) temperature? No. Rapid convection begins to limit warming only when radiative cooling can no longer keep the vertical temperature gradient from exceeding the adiabatic lapse rate. At this point, the cooling that occurs from expansion in a region of rising air leaves that air less dense than the air above it, and the region of air can continue to rise to the tropopause. So your tropical thunderstorms limit surface warming ONLY when air higher in the tropics is cool enough to make the temperature gradient from the surface to the upper atmosphere steep enough. Unfortunately, thunderstorms themselves warm the upper atmosphere and reduce this temperature gradient. So you have told us only part of the story: thunderstorms limit surface temperature rise, but they also warm the upper atmosphere and turn themselves off! By this mechanism, convection produces a lapse rate that is roughly the same in most locations in the troposphere. (One major exception is that at night the ground cools faster than the air, and this cools air near the surface faster than air higher in the atmosphere.)
If anthropogenic GHGs and water vapor warm the upper atmosphere, tropical thunderstorms aren’t going to limit surface temperature rise until that surface temperature has risen to a higher temperature than before.

Willis Eschenbach
October 28, 2010 2:19 am

Frank says:
October 27, 2010 at 11:13 am

Willis: Let’s think more deeply about your tropical thunderstorms. Do they always begin to limit warming when surface temperature (2 m) rises to a fixed (or roughly fixed) temperature? No. Rapid convection begins to limit warming only when radiative cooling can no longer keep the vertical temperature gradient from exceeding the adiabatic lapse rate. At this point, the cooling that occurs from expansion in a region of rising air leaves that air less dense than the air above it, and the region of air can continue to rise to the tropopause. So your tropical thunderstorms limit surface warming ONLY when air higher in the tropics is cool enough to make the temperature gradient from the surface to the upper atmosphere steep enough. …

Well, yeah, kinda. Thunderstorms are first dependent on the formation of cumulus clouds. These depend upon the temperature differential between the surface and the level where condensation occurs (the “LCL”, or lifting condensation level).
However, where condensation and freezing are occurring in the clouds, energy is being released. It is this secondary energy, released by condensation and freezing, that drives the vertical circulation from the LCL to the upper atmosphere.
From there, the energy is free to radiate to space, or to be moved polewards. Because the tropical upper atmosphere is constantly radiating and moving polewards, the upper atmosphere does not warm.
Thunderstorms are huge heat engines that turn heat into mechanical work. The work that they do is to pump the working fluid (an air-water mixture) vertically and thus (eventually) poleward.
In such a situation, the speed at which the pump turns is the major variable. The energy is passing through the situation. Warm moist air starts at the tropical surface and is pumped vertically and hence polewards. There is no static “upper atmosphere” to be warmed. Instead, air in the upper atmosphere is constantly being added to and replaced. Thunderstorms are bringing air directly from the surface. The temperature can remain nearly the same, while larger masses of air are moved and a huge amount of work is done.

Brian H
October 30, 2010 6:03 am

MFer! I just did about ½ hr. of typing (while “logged in” as usual) and got an error screen saying name and email are required, with the loss of the entire entry (no, the back page didn’t help). The name and email fields were not displayed. Had to log out and back in to get them. >:( WUWT??
[Reply: It’s a terrible WordPress glitch. Please let them know about it. ~dbs]

Brian H
October 30, 2010 6:32 am

Abbreviated version:
Willis;
IIRC, it was recently a surprise (re-)discovery that the stratosphere is cooling, and its H2O rising.

This work highlights the importance of using observations to evaluate the effect of stratospheric water vapor on decadal rates of warming, and it also illuminates the need for further observations and a closer examination of the representation of stratospheric water vapor changes in climate models aimed at interpreting decadal changes and for future projections.

Given that the modeled changes add up to 70% on top of CO2 radiative forcing in an earlier period and then reduce CO2 radiative forcing by 40% in a later period, this is a very significant effect.

There is also the notorious Un-Hot Spot in the tropical troposphere which was hand-waved (-swatted?) away with the remarkable logic that since one radiosonde thermometer could err, so could hundreds (in unity, in the same direction), so therefore since the Hot Spot might not be missing we’ll just assume it isn’t. Oof!
_______
Reading the above comments inspires me to suggest a Granularity Gauge Number, to be (rule-)assigned to every formula and data set, specifying the range of precision and measurement and the scope of applicability. “It wad frae monie a blunder free us,
An’ foolish notion.”
The “uncertainty” measures now in use don’t seem to be up to the job, judging from the results and such comments as Jones’ expert assessment of 90% confidence he was entirely correct — good enough for government work.

Brian H
October 30, 2010 6:39 am

[Reply: It’s a terrible WordPress glitch. Please let them know about it. ~dbs]
Not to be too rude and crude, but WP is your blogware of choice. You let them know.