Insufficient Forcing Uncertainty

It seems that, depending on whom you talk to, climate sensitivity is either underestimated or overestimated. In this case, a model study suggests forcing uncertainty has been underestimated. One thing is clear: science does not yet know for certain what the true climate sensitivity to CO2 forcing is.

There is a new paper from Tanaka et al. (download the PDF here) that describes how forcing uncertainty may be underestimated. Like the story of Sisyphus, an atmospheric system with negative feedbacks will roll heat back down the hill; with positive feedbacks, it gets easier to heat up the further uphill you go. The question is: which is it?

Insufficient Forcing Uncertainty Underestimates the Risk of High Climate Sensitivity

ABSTRACT

Uncertainty in climate sensitivity is a fundamental problem for projections of the future climate. Equilibrium climate sensitivity is defined as the asymptotic response of global-mean surface air temperature to a doubling of the atmospheric CO2 concentration from the preindustrial level (≈ 280 ppm). In spite of various efforts to estimate its value, climate sensitivity is still not well constrained. Here we show that the probability of high climate sensitivity is higher than previously thought because uncertainty in historical radiative forcing has not been sufficiently considered. The greater the uncertainty that is considered for radiative forcing, the more difficult it is to rule out high climate sensitivity, although low climate sensitivity (< 2°C) remains unlikely. We call for further research on how best to represent forcing uncertainty.
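
The sensitivity defined above connects to CO2 through the standard logarithmic forcing approximation. As a rough guide, here is a minimal sketch using the widely cited 5.35 W/m^2 coefficient of Myhre et al. (1998); this is not anything specified in the paper excerpt:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Approximate radiative forcing (W/m^2) for a CO2 change,
    using the widely cited logarithmic fit F = 5.35 * ln(C/C0)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def equilibrium_warming(c_ppm, ecs, c0_ppm=280.0):
    """Equilibrium warming (degC) implied by a climate sensitivity
    'ecs' expressed as degC per doubling of CO2."""
    f_2x = co2_forcing(2 * c0_ppm, c0_ppm)   # ~3.7 W/m^2 per doubling
    return ecs * co2_forcing(c_ppm, c0_ppm) / f_2x

# A doubling from the preindustrial ~280 ppm with a 3 degC sensitivity
# returns 3 degC by construction; intermediate concentrations scale
# logarithmically, not linearly:
print(equilibrium_warming(560, 3.0))   # 3.0
print(equilibrium_warming(386, 3.0))   # ~1.4 (roughly 2009 levels)
```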

CONCLUDING REMARKS

Our ACC2 inversion approach has indicated that by including more uncertainty in radiative forcing, the probability of high climate sensitivity becomes higher, although low climate sensitivity (< 2°C) remains very unlikely. Thus in order to quantify the uncertainty in high climate sensitivity, it is of paramount importance to represent forcing uncertainty correctly, neither as restrictive as in the forcing scaling approach (as in previous studies) nor as free as in the missing forcing approach. Estimating the autocorrelation structure of missing forcing is still an issue in the missing forcing approach. We qualitatively demonstrate the importance of forcing uncertainty in estimating climate sensitivity – however, the question is still open as to how to appropriately represent the forcing uncertainty.
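
Why would wider forcing uncertainty fatten only the high-sensitivity tail? In a simple inversion, sensitivity is inferred roughly as S = ΔT·F_2x/F, so the forcing enters through a reciprocal: the low side of the forcing error bar maps to very high S, while the high side stays bounded. A toy Monte Carlo makes the asymmetry visible (this is an illustrative sketch, not the paper's ACC2 inversion; all numbers are assumed for demonstration):

```python
import random

random.seed(0)
DT_OBS = 0.7   # assumed observed warming, degC
F_2X   = 3.7   # assumed forcing per CO2 doubling, W/m^2

def sensitivity_samples(f_mean, f_sd, n=100_000):
    """Infer S = dT * F_2x / F with the net forcing F drawn from a
    normal distribution; unphysically small forcings are discarded."""
    samples = []
    for _ in range(n):
        f = random.gauss(f_mean, f_sd)
        if f > 0.1:
            samples.append(DT_OBS * F_2X / f)
    return sorted(samples)

for sd in (0.2, 0.6):   # narrow vs. wide forcing uncertainty
    s = sensitivity_samples(1.6, sd)
    lo, hi = s[int(0.05 * len(s))], s[int(0.95 * len(s))]
    print(f"forcing sd = {sd}: 5th-95th percentile S = {lo:.1f} to {hi:.1f} degC")
```

In this toy setup, tripling the assumed forcing uncertainty roughly doubles the 95th percentile of the inferred sensitivity (from about 2 to about 4 °C) while barely moving the 5th percentile, which is the qualitative behavior the abstract describes.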

h/t and thanks to Leif Svalgaard

AnonyMoose
July 19, 2009 9:24 pm

I’ll have to read all of that. It will be interesting to see how greater uncertainty does not make it harder to rule out lower sensitivity.

Mick
July 19, 2009 9:35 pm

“there are things we know we know. We also know there are known unknowns”
how ironic…

ohioholic
July 19, 2009 9:49 pm

"The greater the uncertainty that is considered for radiative forcing, the more difficult it is to rule out high climate sensitivity, although low climate sensitivity (< 2°C) remains unlikely."
This statement sounds like an assumption is being made. I will have to read it to find out, but not now.

July 19, 2009 10:08 pm

Although much of what was summarized here pings my [snip] meter, I parse this sentence, "The greater the uncertainty that is considered for radiative forcing, the more difficult it is to rule out high climate sensitivity, although low climate sensitivity (< 2°C) remains unlikely," as:
"The less we know about something, the harder it is to rule things out, even the results we aren't looking for."
Am I parsing this wrong? Because this seems like a very obvious statement worded ambiguously.
In my experience, the more the writer knows about a subject, the easier it is to read what they have written. Foggy language often obscures foggy ideas. Just an observation.

John F. Hultquist
July 19, 2009 10:20 pm

From the PDF report:
page 7, lines 6, 7, & 8
“. . . subject to uncertainties (CO2, CH4, and N2O forcing), prescribed/parameterized radiative forcing without uncertainties (other greenhouse gas, aerosol, volcanic (Ammann et al., 2003), and solar (Krivova et al., 2007) forcing), and “missing forcing.”
Overall this is too technical for me but I do wonder whether water vapor is included in “other greenhouse gas” or not at all? Stratospheric H2O is mentioned at the end of Sec. 2.1, p. 5. There has been considerable discussion about clouds in the context of ‘forcing’ and they seem not to be included here.

neill
July 19, 2009 10:23 pm

The Unwashed Speaketh:
H2O is around 90% or more of GHG. CO2 is way less than 5%.
H2O, above freezing, absorbs heat. H2O, below freezing, allows heat to increase.
So, either, the CLIMATE is the egg IN the bowl, maybe rolling, but not splatting.
Or, the CLIMATE is on top of the bowl, rolling off at some point, and going SPLAT.
Right??

rbateman
July 19, 2009 10:38 pm

Yes, I will read it too.
I am going to assume, for now, that there is too much noise, too many signals, and no way to distinguish Signal from Noise.
We need a detangler.
Until then, don’t monkey around with all those dials.
Leverite.

Dave Wendt
July 19, 2009 10:39 pm

First impression: Our models all do a lousy job of estimating climate sensitivity, but we’re pretty sure it’s worse than we thought anyway.

p.g.sharrow "PG"
July 19, 2009 10:39 pm

I must be really dumb; please explain how greater ignorance about CO2 forcing results in greater sensitivity to forcing. It would appear to me that the very tiny amount of CO2 in the atmosphere would preclude any appreciable effect.
Temperature of the hydrosphere sets the percentage of the total CO2 in the environment that is in solution and the percentage that is in the atmosphere. Therefore temperature leads and CO2 follows, as most research seems to indicate.

AlexB
July 19, 2009 10:55 pm

Sounds to me like the greater the uncertainty, the more certain you can be that the climate has high sensitivity rather than low sensitivity. Can anyone help me out with an alternative explanation for this?

Ray
July 19, 2009 11:05 pm

What is certain is that regardless of how much CO2 was in the atmosphere, every time it has been warmer, civilizations have thrived, and the more CO2 in the atmosphere the better it is for life on this planet. It has NEVER been warm enough on Earth, nor has there ever been TOO MUCH CO2, to be toxic for life and civilizations. The debate is not over in this case; it's nonsense to have such a debate.

neill
July 19, 2009 11:16 pm

One would naturally assume that the preponderance and the properties of H2O would speak heavily for the egg in the bowl, n'est-ce pas?

Dave Wendt
July 19, 2009 11:18 pm

Second impression: Our models all do a lousy job of estimating climate sensitivity and we don’t have any real idea how to make them better, but at least we have fixed one problem. Now there is virtually no possibility that one of them will kick out a prediction of low climate sensitivity accidentally.

Antonio San
July 19, 2009 11:25 pm

Wordy way of requesting more research grants…

tallbloke
July 19, 2009 11:34 pm

Does Leif Svalgaard agree with the characterisation of solar forcing displayed in the graph shown, and please would he explain what the red solar curve is representing in case I misunderstand it.
Thanks

July 19, 2009 11:43 pm

Already, the uncertainty in aerosols is sufficient to make a low climate sensitivity for CO2 more likely. http://www.ferdinand-engelbeen.be/klimaat/oxford.html
Claiming an uncertainty that is only valid for the high end of the results is unscientific.

Francis
July 19, 2009 11:46 pm

And I have to read the paper, to understand…

Roddy Baird
July 19, 2009 11:47 pm

Surely this can be experimentally tested in a more direct fashion? Like testing how well parcels of “atmosphere” with slightly different levels of CO2 retain heat? It would need a little elegance in design to ensure all other factors remain the same, sure, but it would seem not too difficult a thing to do. Why would you need to “model” something like this when it would appear to be an issue that lends itself to simple measurement? Another thing that bothers me, if the sensitivities were high, surely we’d have had a run-away greenhouse effect sometime in the last 600 million years? It seems fairly well understood that a warming ocean outgases CO2, so once CO2 levels rise, they’d increase temps, which would warm the oceans which would outgas CO2, which would increase temps, which would warm the oceans, which would outgas CO2…
Why is the heat retention of different combinations of gases not tested directly? Or has that been done? A positive result, i.e. CO2 being shown to considerably affect the heat retention of a parcel of gas representative of the earth's atmosphere, would not, on its own, prove the AGW hypothesis, as you'd still need to examine the various feedback processes; but a negative result would falsify it. Has this been done?

bugs
July 20, 2009 12:09 am

I think you don’t understand the difference between positive feedback and negative feedback.
Negative feedback will not send the ball rolling back down the hill, it will only set a limit on how far up the hill it will go. Positive feedback will push it further up the hill. The tussle between the two will still see the ball going up the hill, the only question will be how far up the hill it goes.

Boudu
July 20, 2009 12:26 am

Well, that’s settled then.

stumpy
July 20, 2009 12:49 am

One would think that increasing uncertainty would indicate not only greater sensitivity but also less sensitivity, unless they only add uncertainty in one direction: up. Though they are correct that it's a key issue that needs to be addressed, and greater uncertainty needs to be conveyed to the public.

Philip_B
July 20, 2009 1:12 am

I'll decode the rather opaque language.
We can make the models work by changing the past solar radiative forcing and increasing the CO2 forcing sensitivity.
Firstly, it’s an implicit admission the models don’t work.
Secondly, Leif is adamant that past variations in solar radiation couldn’t have caused more than 10%(?) of the last century’s observed warming. His is the best scientific assessment we have, and I have no problem accepting it.
Thirdly, they also admit to a ‘missing forcing’. Something the models don’t include because the scientists don’t know about it.
Of course all of this is predicated on the surface record being correct and we know there are a host of issues with it. My assessment is that they are clutching at straws in order to explain something that doesn’t exist in the first place – a large part of the rise in global temperatures determined from the surface stations (and the average of Tmin and Tmax).

Disputin
July 20, 2009 1:12 am

Interesting stuff, but is there not a whiff of Thomas Aquinas here? Much discussion about the number of angels capable of dancing on the head of a pin but ignoring the possibility that there are no angels. You can tweak models of a chaotic system all you like but it still has nil predictive power.
The only real question here is, Is climate a chaotic system? If the answer is yes, then don’t waste your time trying to model it. If no, please explain to this poor moron how climate (weather writ large) can be linear while weather has been shown not to be. Also, I’d like a critique of my thought that using an ensemble of model runs simply averages out the inputs and leaves a trend that is entirely due to the assumptions built in to the model.

masonmart
July 20, 2009 1:14 am

Again it is all model based and calibrated against past events by using missing forcing parameters. It adds nothing to the discussions and has no more credibility than any other model based prediction.

michel
July 20, 2009 1:16 am

If I’m understanding this, the paper is deriving climate sensitivity from the combination of historical surface temperatures and historical forcings? Is that right? I may be missing the point.
The question would then be, if so, how certain these quantities are.

Philip_B
July 20, 2009 1:17 am

And I’d add,
The irony here is that if they admitted that a large part of the 20th century warming wasn’t real and plugged in a much lower CO2 sensitivity, the models would work a lot better.
But then 90% of climate scientists could look forward to a career of driving taxis.

Mac
July 20, 2009 1:20 am

"The greater the uncertainty that is considered for radiative forcing, the more difficult it is to rule out high climate sensitivity, although low climate sensitivity (< 2°C) remains unlikely."
That is the flaw in this argument. Uncertainty increases ALL possibilities; it does NOT skew certainty towards a few.
To date all empirical studies show that low climate sensitivity is more likely than high climate sensitivity.
Back to the drawing board for the authors of this report.

David Wells
July 20, 2009 1:29 am

It's worse than that, he's dead Jim! I feel the life force is draining away. I just wish someone would pay me shed loads of cash to write incoherent gobbledegook all day long and into infinity. How did we survive before we became deluded enough to believe we could ever really understand what is not meant to be understood? There really is no rhyme or reason why our climate behaves as it does; it just does, and as long as it does we might survive a little bit longer. I thought King Canute served as an object lesson to dissuade deluded psychotic muppets from looking into dark corners where no sense exists, and to do something positive that might bear fruit. Obviously not!

Alan from Australia
July 20, 2009 1:40 am

Tanaka et al. refer to various papers by Mann and Jones to obtain air temperatures for their forcing, especially Mann and Jones (2003), i.e. 'The Hockey Stick' temperature record. They include a temperature 'trace' back to 1750, below the figure cited here, which includes a large dip around the Dalton Minimum, but they ascribe this to volcanic activity. Given that the Hockey Stick curve has to be one of the most discredited analyses ever published, why didn't they use a non-tree-ring analysis such as Loehle (2007), shown in a recent blog by Roy Spencer?
http://www.drroyspencer.com/2009/07/how-do-climate-models-work/
Interesting that they cut the date line off at 1750, just after an apparent huge rise in temperatures following the Maunder Minimum, as shown by proxies in Loehle's analysis.

Robert Wood
July 20, 2009 1:50 am

It could be much worse than previously thought. How can these people consider running a computer model an "experiment"?
I'm glad to see they account for "missing forcing", which is treated as an independent parameter in each year.

July 20, 2009 2:05 am

The less we know, the higher the chance things are "MUCH worse than we thought."
I thought strong positive feedbacks could be ruled out empirically, based on observations of the real world.

cohenite
July 20, 2009 2:11 am

This is rather a quaint conclusion, that ECS is more likely to be greater than less. Over the 20thC and up to 2009 CO2 has increased ~38%; depending on what metric you choose, temperature has increased 0.4C-0.7C; let's say 0.5C. ECS from the IPCC is reckoned to be ~3C for 2xCO2; if a 38% increase has caused a 0.5C increase [excluding all other possible causes] then the remaining 62% increase in CO2 will have to cause a 2.5C increase, which effectively increases the ECS to just over 4C; still just within the official IPCC range of 2-4.5C.
There is a certain irony here. In Australia, coinciding with Gore's platitudinous visit, government pronouncements were 2-fold; firstly, atmospheric temperature was no longer important, the real indicators of AGW were OHC and sea levels; secondly, in respect of OHC and sea level, the rate of increase was accelerating. Of course OHC is declining as shown by NOAA and the gold-star, pro-AGW Levitus paper, let alone looking at Loehle's or Di Puccio's calculations. And the sea levels are also flat to declining as shown by Jason-1. The irony is that while the pro-AGW announcements are all ratcheting up the doom and gloom trends, the only thing that is increasing is the catch-up in ECS which AGW has to achieve to reach its target of ~3C.
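
A quick check of cohenite's back-of-envelope using the logarithmic relation instead of linear scaling (his inputs taken as given: a ~38% CO2 rise and 0.5 °C of warming; the lag between transient and equilibrium response is ignored here, as it is in his estimate):

```python
import math

rise_so_far = 0.5    # degC, cohenite's assumed warming to date
frac = 1.38          # CO2 now relative to preindustrial

doublings_so_far = math.log(frac) / math.log(2)    # ~0.46 of a doubling
implied_ecs = rise_so_far / doublings_so_far       # ~1.1 degC per doubling

doublings_left = 1.0 - doublings_so_far            # ~0.54 still to come
needed = (3.0 - rise_so_far) / doublings_left      # rate needed to hit 3 degC
print(f"implied sensitivity so far: {implied_ecs:.1f} degC/doubling")
print(f"rate needed for the rest:   {needed:.1f} degC/doubling")
```

On these numbers the warming to date implies roughly 1.1 °C per doubling, and the remaining CO2 would have to deliver almost 4.7 °C per doubling for the IPCC's central 3 °C to be reached, which is the catch-up cohenite describes.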

Fraizer
July 20, 2009 2:21 am

Just a quick glance at the paper, but their “Experimental” work involves running the same computer model with inputs tweaked. From this they obtain “Experimental” data.
The results are “Worse than we thought”.

J.Hansford
July 20, 2009 2:28 am

LOL… I gather that th’ kitty is Sisyphus. Or maybe rising sea levels and a Puss with no boots, p’haps? 😉

Trevor
July 20, 2009 2:31 am

This seems like a clever ploy by the alarmists to exaggerate the potential risk. To my simple mind, it is as if it was originally stated that the temp rise by 2100 would be between, say, 0 deg C and 2 deg C. By now implying that their forcings are underestimated, and that the temp rise therefore has a greater range, the alarmists will say the possible temp rise might now run from 0 deg C to 4 deg C.
Hey presto, minimum potential temperature increase is unchanged but maximum temp has had a boost. It’s all sleight of hand, no empirical evidence, exaggerate the uncertainty. Greater the uncertainty, greater the potential max, greater the fear.
As Scotty would say “It’s worse than that, he’s dead Jim”.

tallbloke
July 20, 2009 2:42 am

By the way, I have a new theory of climate change.
After about 1650, when the last of the major Sun worshipping cults was biting the dust, Helios became displeased with the lack of respect and devotion. He decided to turn up the heat to let us know he wasn’t to be ignored, and to put a sweat on our brow.
But this resulted in influential humans blaming the increasing heat on the thinly veiled and deceitful nymph Carbonara, in order to place a tax on the laity for their role in making her get fatter by ordering too many takeaways from Italian charcoal pizza ovens.
Helios has decided to play these false prophets at their own game, and has ordered his Sunlings to stop shoveling the coal into the furnace of Phaeton.
Prove me wrong.

H.R.
July 20, 2009 2:42 am

But, but… I thought natural variation is overwhelming the CO2 warming signal right now, according to AGW apologists. So how can the climate be all that sensitive to CO2?
Anyhow, I’m 100% certain that once the big meteor hits, it won’t matter how sensitive the climate is to CO2.

Leif Svalgaard
July 20, 2009 3:45 am

tallbloke (23:34:55) :
Does Leif Svalgaard agree with the characterisation of solar forcing displayed in the graph shown, and please would he explain what the red solar curve is representing in case I misunderstand it.
The red curve shows some representation of the solar cycle and does not seem too much out of whack. It has the smallest amplitudes of all the forcings, so is in line with what I would expect.

Louis Hissink
July 20, 2009 3:45 am

The simple answer is that CO2 has nothing to do with temperature.

Trevor
July 20, 2009 3:49 am

David Wells (01:29:44) :
Live long and prosper.

pyromancer76
July 20, 2009 3:50 am

This topic is way beyond my (non)ability. I hope someone can explain how "inverse calculations" create a more usable model. There seems to be quite an assumption in the abstract statement: "Equilibrium climate sensitivity is defined as the asymptotic response of global-mean surface air temperature to a doubling of the atmospheric CO2 concentration from the preindustrial level (~280ppm)."
At the International Institute for Applied Systems Analysis (IIASA), Katsumasa Tanaka is the scientific coordinator of “Geoengineering to Avoid Overshoot: An Uncertainty Analysis”. “This project is motivated by the question as to how much geoengineering intervention would be required if it were to be used to avoid large global warming? How serious would be the environmental side effects caused by such an intervention?” This paper must be some of the intellectual/scientific foundations for this project.
I found something from 2008, I think, that might have more details.
http://www.uni-leipzig.de/energiemanagement/wien06_poster.pdf
In this piece Tanaka et al. ask: "How can global-mean information on the carbon cycle, atmospheric chemistry, and climate system be synthesized to produce the post-industrial evolution of the Earth system?" Seems rather god-like.
A few interesting items from the IIASA homepage. It has as one of its projects a "Greenhouse Gas Initiative" which in 2008 opened towards "climate mitigation/adaptation and human development related issues". Under "policy pathways", the Institute addresses: "Which policy interventions (education, sanitation, pollution control, health care, etc.) would lead to equal progress in the Human Development Index (HDI) compared to a "conventional" increase in per-capita income, while causing less GHG emissions?…We recognize that the HDI is now a pervasive index and one that resonates well with policy makers."

Ed H
July 20, 2009 4:11 am

Seems quite simple what is going on here. The team needs larger error bars to avoid current climate behavior proving them wrong. Higher uncertainty keeps them from being provably ruled out, while increasing how high/how much catastrophic warming they can predict even in the face of cooler temperatures.

Urederra
July 20, 2009 4:17 am

…a model study suggests forcing uncertainty has been underestimated…

Fortunately, models, computational or not, are not a source of empirical data. So, if models cannot estimate forcing properly, that is because the models are wrong. Simple as that. The sad thing, upon reading the abstract and conclusion, is that modellers seem to forget that simple Science 101 lesson. Do they give away PhDs at county fairs these days?

July 20, 2009 4:25 am

CO2 continues to rise, lagging behind falling world temperature, but now we have an interesting test.
The warmistas have been praying for El Nino to start up and get temperatures back to 1998 levels.
Gaia must have been listening. Looks like it's on again, but bound to be weak because of a negative PDO. Can't honestly say.

Alex Harvey
July 20, 2009 4:29 am

Is there anything novel here?
I think it is well known in the small-model community that if you feel free to scale the forcing you get a range of CS values that provide a best fit, and if you assume a lightweight ocean (and I think that a diffusivity of 0.55 cm^2/s gives such an ocean), there is little to discriminate between high-end values of CS.
Assuming other values for diffusivity would move the low point, but the range will always be open to very high values while constrained for very low values of CS.
I have some other concerns, regarding the temperature sets used. They say:
“Data include atmospheric concentrations of CO2, CH4, and N2O and global-mean surface air temperature change each year (from year 1750 to 2000).”
I can not find a reference to the supporting material so I do not know where the temp record comes from but to model the heat uptake by the ocean you need separate land and ocean temps, perhaps someone can point me at the supporting materials.
This “could” invalidate:
"Figure 2.1 shows that low climate sensitivity is not supported even with the missing forcing approach because the missing forcing goes beyond its 2σ uncertainty range to explain the warming in the late 20th century."
Low-end values of CS are constrained by getting the ocean uptake right. Values for their model's uptake should be constrained by measured OHC values. It would be nice to see their values.
I find the following comment a little bizarre:
“Figure 2.2 demonstrates that high climate sensitivity is not acceptable with the forcing scaling approach, which results in excessively strong cooling after large volcanic eruptions in the 19th century.”
The abstract does not give that impression. It seems that they are actually ruling high values out but I do not know what they can mean by “acceptable”.
Also they say:
“The forcing scaling factor is estimated to be 0.045, 0.999, 1.214, and 1.398 in the forcing scaling-based inversions with climate sensitivity of 1, 3, 5, and 10°C, respectively.”
Well this seems upside down to me and I suspect that 0.045 is a typo for 0.45. But who knows.
Alexander Harvey

Mac
July 20, 2009 4:33 am

To evoke 'uncertainty' in order to realise the 'possibility' of a 'predicted' outcome is utterly shameless… but that is what climate science has been reduced to.

Mother Nature
July 20, 2009 4:37 am

ACHTUNG!
Alle touristen und non-technischen lookenpeepers! Das machine is nicht fur fingerpoken und mittengrabben. Is easy schnappen der springenwerk, blowenfusen und poppencorken mit spitzen sparken.
Das machine is diggen by experten only. Is nicht fur gerwerken by das dummkopfen. Das rubbernecken sightseeren keepen das cottenpicken hands in das pockets. Relaxen und watchen das blinkenlights.

Jack Hughes
July 20, 2009 4:38 am

Who is this turkey ?
I’m calling BS on this incoherent babble.
Each sentence is worse than the previous sentence. Totally devoid of meaning.

tallbloke
July 20, 2009 4:40 am

H.R. (02:42:26) :
Anyhow, I’m 100% certain that once the big meteor hits, it won’t matter how sensitive the climate is to CO2.

Ah, but NASA will see it coming, and Bruce Willis, Big Al and Arnie will fly up and save the world by wedging Al in the hole they’ve drilled with the nuke at the bottom to make sure the energy is directed into the frozen heart of the meteor.
And if that fails Jim Hansen will adjust the trajectory data and it’ll miss us.
Oh. Wait a minute.

Ron de Haan
July 20, 2009 4:40 am

Insufficient Forcing Uncertainty Underestimates the Risk of High Climate Sensitivity
Typical alarmist BS.
If you write a serious scientific report, please use language that can be understood by intelligent people. This is written for civil servants, who also write endlessly long sentences with an "empty" message.
I like the picture of the cat though!
Never thought they liked to play with melons as I do.

July 20, 2009 4:40 am

If I have understood the climate models, the radiative forcing of CO2 already builds in a threefold feedback from water vapour, and it is generally admitted that CO2 alone has a low impact, with a sensitivity between 0.5 and 1.5 C for the doubling. So everything depends upon how this water vapour behaves, in particular whether it creates more cloud. Cloud behaviour is the least understood part of climate science and the greatest uncertainty in the models. Taking the satellite record of cloud from 1983, which also coincides with the major global warming signal: cloud thinning of 5% between 1983 and 2000, and the resultant additional short-wave flux to the ocean surface, can account for at least 80% of the signal. Clouds thickened by 2% in 2001 (since maintained), and ocean heat storage flatlined, along with sea-level rise, from about 2004.
Thus, whether natural or anthropogenic, it is cloud changes that have driven the signal. Cloud changes resultant on shifts in atmospheric pressure and winds are the consequences of several ocean-basin cycles – all of which peaked positive (warm) between 1995-2005 and are now turning to their cool cycle.
The fact that these natural cycles can now over-ride the model’s projections (and undoubtedly contributed to the rising signal in the first place) shows clearly that the sensitivity of the models has been over-estimated. These new estimates of uncertainty only point in the opposite direction because there are no revisions of the models used – if the researchers had updated their models to include natural cycles (as a number of modelling centres are now doing) they might have come to an opposite conclusion.
In the UK, at Hadley, for example, revisions are under way – but within what is called the mid-range modelling group (2030), whilst the long-range group (2050 and beyond) still takes no account of cycles and present nice straight lines for the upcoming Copenhagen meeting as well as UK policy makers. The mid-range group expects cooling over the next decade – largely due to the shift in the Atlantic Multidecadal Oscillation – but this does not get out to the British press.
In this week’s Sunday Times, we thus have government agriculturalists recommending that our farmers consider planting olive trees in advance of 4 degrees Celsius by 2050. If planted now, I guarantee they’d be killed by frost as the second hard winter strikes in what may be two or three decades of hard winters.
This is the current level of idiocy operating in relation to the uncertain science – and it is compounded by the scientists themselves colluding with the political agenda in advance of Copenhagen.

Richard M
July 20, 2009 4:46 am

Ahhhh, probability. Brings out the gambler in me. Even if you accept more of this modelled nonsense based on WAGs it doesn’t change reality.
You are more likely to get two pair than three of a kind playing poker but our universe is more like a single hand of poker. It’s already been dealt. If the physical laws lead to a three of a kind, then the probabilities that two pair is more likely are meaningless.

Dave in Delaware
July 20, 2009 4:50 am

Did you notice that the simulations all cut off at year 2000?
The authors include the 1997 super El Nino temperature spike, but make no attempt to assess “missing forcing” in the relatively flat post 2002 period.
—-
“Figure 2.1 shows that low climate sensitivity is not supported even with the missing forcing approach because the missing forcing goes beyond its 2σ uncertainty range to explain the warming in the late 20th century.”
Not clear from the paper how they determined the "2σ uncertainty range". A cynical thought on that statement might be: if we believed that the missing forcing was that large, we would be out of the AGW/GCM business.

Eric Naegle
July 20, 2009 4:56 am

One thing is certain. Cats, at times, are known to roll watermelons at the water’s edge. As for the rest… I think ‘disputin’ makes the most sense.

Trev
July 20, 2009 5:13 am

I note that this comment on the Vinland map claims “If the map is genuine it provides evidence of a relatively ice-free Arctic in the Medieval Warm Period that allowed the Vikings to sail unimpeded to North America.”
http://climateresearchnews.com/

Gary
July 20, 2009 5:38 am

The really interesting question is: how do you get a cat to roll a watermelon in a lake?

Alex Harvey
July 20, 2009 5:49 am

I think that they should have included a statement as to why they stop at 2000. It could be that they are constrained by the available data, which they could state; otherwise I am sure someone will accuse them of cherry-picking.
Also, it is pretty clear that they are using annual values throughout (the 251 degrees of freedom in Fig 1 is one per year, 1750-2000 inclusive). It may be a small point, but diffusion models are sensitive to granularity. The heat uptake calculation involves a second-order differential "d^2Temp/dt^2" term, making it more sensitive to granularity than other terms like the outbound flux calculation, which only has to consider Temp(t). If this is important it would most likely show up with transient responses like those produced by volcanoes. They could be using monthly values of SST where available and annual values for the forcings, but it does not seem to be stated.
Finally, if anyone can track down the data they used I would be grateful. BTW, I did not think we had individual forcing data for all the gases and aerosols used in their model for even the 20th century, let alone going back to 1750.
Alexander Harvey

Richard M
July 20, 2009 5:54 am

To take the gambling analogy one step further. A model of 5 card draw poker could be exactly right except for one small error. If deuces are wild then it turns out 3 of a kind is more likely than two pair.
This shows how missing one simple component of the problem can throw off the results completely.

Jim
July 20, 2009 5:55 am

I hope to get time to read the paper, but so far it appears to be a peer-reviewed statement that the authors don’t know enough to draw a real conclusion.

Bill Illis
July 20, 2009 5:56 am

If there is so much uncertainty, why does the IPCC force all the models to play from the same playbook?
Here is a quote from the selection criteria for a climate model to be included in the IPCC’s 2007 Fourth Assessment Report.
“Criterion 1: Consistency with global projections. They should be consistent with a broad range of global warming projections based on increased concentrations of greenhouse gases. This range is variously cited as 1.4°C to 5.8°C by 2100, or 1.5°C to 4.5°C for a doubling of atmospheric CO2 concentration (otherwise known as the “equilibrium climate sensitivity”). ”
http://www.ipcc-data.org/ddc_scen_selection.html
Normally, when a field of science faces uncertainty, they try hard to limit or nail down that uncertainty through experimental measurements and/or just accepting that the basic data probably indicates the truth.
In climate science, it seems that they'd rather just leave the uncertainty in place and/or rewrite the basic data so that it matches "the selection criteria".

tallbloke
July 20, 2009 6:02 am

Leif Svalgaard (03:45:18) :
tallbloke (23:34:55) :
Does Leif Svalgaard agree with the characterisation of solar forcing displayed in the graph shown, and please would he explain what the red solar curve is representing in case I misunderstand it.
The red curve shows some representation of the solar cycle and does not seem too much out of whack. It has the smallest amplitudes of all the forcings, so is in line with what I would expect.

Leif, I know that the TSI from the sun at ~1365.2 to ~1366.6 W/m^2 is divided by four to get the incident insolation on the curved sun-facing side of Earth sorted out, but isn't the tropical area right under the sun going to feel the full effect of the ~1.4 (PMOD) to ~2.2 W/m^2 (Neptune/ACRIM) swing between solar max and solar min, rather than a quarter of it? Plus of course the 8 W/m^2 swing induced by Earth's eccentric orbit.
I know the magnitude of the swing doesn't necessarily relate to the 'forcing' calculated for modelling purposes from poorly understood data, but as a physical fact?

nogw
July 20, 2009 6:19 am

Solar forcing has been underestimated by the models, as indicated by Scafetta in his talk at the EPA, where he noted that the FAO, a UN agency, in following LOD to predict climate and future fish catches, openly contradicts the IPCC's affirmations.
You can download the complete paper (12 PDF documents) from:
ftp://ftp.fao.org/docrep/fao/005/y2787e/
There you will find a completely reasonable temperature forecast.

Garacka
July 20, 2009 6:28 am

I have a hard time getting beyond “Insufficient … Uncertainty Underestimates the Risk of … Sensitivity “

Kum Dollison
July 20, 2009 6:28 am

Roddy Baird is right. Billions of Dollars spent, and nobody does a simple experiment.
A Pox on all the Houses. ALL the houses.

July 20, 2009 6:36 am

“We call for further research on how best to represent forcing uncertainty.”
Or:
"Give us more money so that we can continue our cushy academic jobs during the present economic crisis, and so that we can continue to learn a little bit more about something which is proving to be more and more insignificant."

July 20, 2009 6:40 am

"The greater the uncertainty that is considered for radiative forcing, the more difficult it is to rule out high climate sensitivity, although low climate sensitivity (< 2°C) remains unlikely."
The reason is that the lower bound is zero while the upper end is unbounded.

Highlander
July 20, 2009 6:43 am

Gary: Very carefully!
:o)

Patrick Davis
July 20, 2009 6:45 am

I tell you one thing is certain, AGW is horse crap (Actually, no. Horse crap is useful). So I say AGW is…a lot of hot political air!
How do you know a politician is telling lies? Their mouth is moving and sounds come out.
How do you know "human induced climate change" is a scam? It is used in political campaigns: Rudd in Australia and Obama in the US. What surprises me is the number of people who support it… until recently, of course. Those who "voted" for KRudd747 and Obamassiah are slow to learn, if at all.

July 20, 2009 6:56 am

Perhaps the title should be ‘Certain Uncertainty’.

deadwood
July 20, 2009 7:06 am

More from the “We have to treat uncertainty as evidence of certainty” school of thought.
Take out the precautionary principle and AGW falls to pieces. It's all just speculation.

July 20, 2009 7:15 am

Gary (05:38:06) :
The really interesting question is: how do you get a cat to roll a watermelon in a lake?
Good question. Maybe it’s just a model?

Steve in SC
July 20, 2009 7:17 am

I believe these boys have about got themselves treed.

Bruce Cobb
July 20, 2009 7:18 am

Louis Hissink (03:45:59) :
The simple answer is that CO2 has nothing to do with temperature.
Exactly. They are simply assuming CO2 "forces" climate. Climate can't be sensitive to a phantom, so the paper's discussion about the climate's sensitivity to CO2 is moot and, frankly, ridiculous.

George Tobin
July 20, 2009 7:23 am

I dutifully read the linked Tanka et al paper. This is my summary impression:
First, take particular models with their working assumptions about forcings intact: (a) a sum of greenhouse forcings with uncertainties; (b) other forcings (like aerosols) parameterized with no uncertainties, and (c) “missing” forcings which comprise that which the models assume they don’t know.
Next, plug in actual data and assume no uncertainties. This is bad because the result is a low climate sensitivity. Because the models are really designed to have a high sensitivity, probabilistic analysis based on the models reports that this low sensitivity is improbable–imagine my surprise!! Therefore, because the working assumptions of the models can’t be wrong, and there is no obvious missing component, there must be more … uncertainty.
Last, we discover by formalizing this uncertainty in an inversion approach, there is the two-fold benefit of (a) saving appearances such that even when existing available data fail to confirm the preferred higher sensitivity, the result still falls within the new uncertainty range and (b) it also adds a really scary higher-than-we-thought upper range even though there is little or no indication that it’s actually happening now.
Shorter version: It is nature’s fault that the models suck and it really is worse than we thought.

IanM
July 20, 2009 7:43 am

With all the serious comments on this paper I am almost embarrassed to comment on Gary's posting: "The really interesting question is: how do you get a cat to roll a watermelon in a lake?"
The photo is obviously staged. You can see that the cat is standing on a stone, and its scared expression hints to me that it was released a small fraction of a second before the photo was taken. My wife, who is Japanese, tells me that inflatable balls are sold in Japan that look like melons. They are made as pool toys, and I suspect that this is one. Note how shallow the melon sits in the water; it is not on the bottom, which is visible in the photo.

H.R.
July 20, 2009 7:46 am

tallbloke (04:40:08) :
“H.R. (02:42:26) :
Anyhow, I’m 100% certain that once the big meteor hits, it won’t matter how sensitive the climate is to CO2.
Ah, but NASA will see it coming, and Bruce Willis, Big Al and Arnie will fly up and save the world by wedging Al in the hole they’ve drilled with the nuke at the bottom to make sure the energy is directed into the frozen heart of the meteor.
And if that fails Jim Hansen will adjust the trajectory data and it’ll miss us.
Oh. Wait a minute.”

Nah. Everybody knows Jim Hansen only does temperature.
However, the US Senate will leap into action and pass a law banning the meteor from hitting us. Oh, and raise taxes to pay for the miss ;o)

Leland Palmer
July 20, 2009 8:10 am

The missing forcing that they are referring to is likely methane forcing.
Methane is a greenhouse gas much more potent than CO2, but it is oxidized in the atmosphere into CO2 by the hydroxyl radical, with a half-life of about 12 years.
They talk about a source of 3 or so trillion metric tons of C13 depleted carbon.
A source that fits this description is the five or so trillion tons of methane hydrates currently existing on the floors of the oceans, and in the Arctic.
What they appear to be saying is that CO2 forcing alone does not explain the PETM.
To explain the PETM, you have to also include methane forcing.

John F. Pittman
July 20, 2009 8:11 am

M. Simon (06:40:03) said:
"'The greater the uncertainty that is considered for radiative forcing, the more difficult it is to rule out high climate sensitivity, although low climate sensitivity (< 2°C) remains unlikely.' The reason is that the lower bound is zero while the upper end is unbounded."
This is incorrect, since forcing is never expected to be zero when one expects a logarithmic relationship of CO2 to temperature. So, as one gets close to zero, it is actually harder and harder to get closer. The bound "0.0" is actually undefined, not a boundary condition.
The "unlikely" stands once again for "we are unlikely to have our funding continued if it is this low."

D. King
July 20, 2009 8:31 am

“Insufficient Forcing Uncertainty Underestimates the Risk of High Climate Sensitivity”
“…a willing suspension of disbelief.”
Tanaka is the Hillary Clinton of climate science.

pochas
July 20, 2009 8:36 am

Ah… the more uncertainty, the more we have to fear. But never mind the possibility that there is nothing to fear, which is not included in our model.

Pamela Gray
July 20, 2009 8:44 am

Wow. And wow backwards. These brilliant people have come up with a cough-cough "scientific" justification for increasing the magnitude of the threat (since the current level has lost its steam), thus hoping that the populace will be fooled into once again believing in Chicken Little. At first it was "the sky is falling." Now it's "the universe is collapsing." What next?

Ron de Haan
July 20, 2009 8:52 am

It has all become clear to me.
They are fishing for a new research budget.
In the meantime we have scientists who keep the threat of a possible runaway climate alive.
Earth has experienced several extinctions, probably caused by really big volcanic eruptions and/or meteorites/comets.
Each time the climate stabilized and life made a come back.
Local destruction caused by atom bomb tests took less than 40 years to recover.
The biggest risk humanity runs today is the onset of a new ice age and short term the accomplishments of our really disturbed political establishment.
Can I please have a budget for studying that?

David
July 20, 2009 8:56 am

Here is something I would like to understand about GHGs. If there is a feedback, shouldn’t we have already witnessed it? If CO2 is going to excite other gases in the atmosphere, like water vapor, why don’t other gases in the atmosphere set off the same effect? If water vapor is a feedback, why doesn’t it feedback off of itself?
Take the example of UHI. I live in a city that has a fair amount of this effect, so I would expect that the extra radiating would excite any feedback mechanisms that were present in the atmosphere around the city. There is also a fair amount of CO2 radiating from the city. Wouldn’t the feedback be noticed in this situation?

Bob Cormack
July 20, 2009 9:48 am

Given that:
1) The forcing due to changes in the Earth’s albedo (on decadal time scales) is multiple times greater than the forcing calculated for doubling CO2 ( http://wattsupwiththat.com/2007/10/17/earths-albedo-tells-a-interesting-story/ )
and
2) Climate sensitivity, including feedbacks, is not dependent on what causes the forcing;
Therefore, since the Earth’s temperature hasn’t changed much in response, it appears that there is already enough data to conclude that sensitivity is quite low.
Any model that doesn’t take this data into account is seriously incomplete and can’t tell us anything about the real world.

Joel Shore
July 20, 2009 9:55 am

Roddy Baird says:

Why is the heat retention of different combinations of gases not tested directly? Or has that been done? A positive result, i.e. CO2 being shown to considerably affect the heat retention of a parcel of gas representative of the earth's atmosphere, would not, on its own, prove the AGW hypothesis, as you'd still need to examine the various feedback processes; but a negative result would falsify it. Has this been done?

The absorption spectrum has indeed been carefully measured and, from that, the radiative forcing due to doubling CO2 has been determined: a value of somewhere around 4 W/m^2 (±10%), which is agreed to by basically all serious scientists (including "AGW skeptics" Roy Spencer and Richard Lindzen). The big question, as you allude to, is how the various feedback processes in the atmosphere operate to produce the value for the climate sensitivity.

Another thing that bothers me, if the sensitivities were high, surely we’d have had a run-away greenhouse effect sometime in the last 600 million years? It seems fairly well understood that a warming ocean outgases CO2, so once CO2 levels rise, they’d increase temps, which would warm the oceans which would outgas CO2, which would increase temps, which would warm the oceans, which would outgas CO2…

A positive feedback does not necessarily lead to a run-away effect. It has to be sufficiently strong to do so. If it is not, it simply leads to amplification of the warming. (What you describe here is essentially mathematically expressed as an infinite series but, for example, the infinite geometric series 1 + 1/2 + 1/4 + 1/8 + 1/16 + … does not diverge but rather converges to the value 2.)
And, scientists who study paleoclimate have generally concluded that the climate is indeed quite sensitive to small perturbations, see e.g., http://www.sciencemag.org/cgi/content/summary/sci;306/5697/821
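
Joel's geometric-series point, as arithmetic (toy numbers, not from any paper): a feedback that returns a fraction f of each temperature increment amplifies the direct warming by 1/(1-f), and only runs away as f approaches 1.

```python
def amplified(direct_warming, f, terms=200):
    """Sum the feedback series direct * (1 + f + f^2 + ...), which
    converges to direct / (1 - f) for |f| < 1."""
    return sum(direct_warming * f**n for n in range(terms))

for f in (0.0, 0.5, 0.9):
    print(f"f = {f}: 1.2 degC of direct warming becomes {amplified(1.2, f):.1f} degC")
# f = 0.5 doubles the response (Joel's 1 + 1/2 + 1/4 + ... = 2);
# only as f -> 1 does the series diverge into a runaway.
```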

John F. Hultquist
July 20, 2009 10:02 am

M. Simon (06:40:03) : “The reason is that the lower bound is zero while the upper end is unbounded.”
Simple and direct. I like that. Had the same thought just after hitting the submit button earlier. Went away and when I came back decided to read all the comments before adding anything. For once. Cheers!

George E. Smith
July 20, 2009 10:11 am

“”” Insufficient Forcing Uncertainty Underestimates the Risk of High Climate Sensitivity “””
What kind of gobbledegook is that? Are they trying to tell us that more uncertainty would improve the accuracy of their wild A*** guesses?
These people are nutz.
George

Editor
July 20, 2009 10:21 am

The paper notes how a failure to fully account for uncertainty causes the probability of high climate sensitivities to be underestimated, but it neglects to mention how failure to fully account for uncertainty also causes the probability of LOW climate sensitivities to be underestimated.
If low climate sensitivity proves out, it means that 20th century warming was not caused by CO2, and it bolsters the competing hypothesis: that it was caused by “grand maximum” levels of solar activity from the 1930’s through 2003. With solar activity now falling off, that means the real danger is global cooling. Funny how the authors fail to even mention the low sensitivity case.

George E. Smith
July 20, 2009 10:22 am

As to “forcing uncertainty”; we know for certain that from place to place on earth the “radiative forcing” ranges over more than a decade; something like a 12:1 range simply due to the surface temperature, and the Stefan Boltzmann law.
And since there isn't any global network to actually measure the radiative forcing all over the globe in accordance with the usual rules for sampled data systems, they can't have the foggiest idea what the overall radiative forcing is for the earth as a whole (the earth knows). And that is before they even consider how the atmosphere reacts with that 12:1 radiative forcing source.
So it is pretty hard to imagine how they could actually have more uncertainty than they already have; I would say they have total uncertainty, so that must presumably give their predictions the greatest accuracy.
And I pay for this rubbish with my tax dollars?

George E. Smith
July 20, 2009 10:27 am

“”” Joel Shore (09:55:43) :
Roddy Baird says:
Why is the heat retention of different combinations of gases not tested directly? Or has that been done? A positive result, i.e. CO2 being shown to considerably affect the heat retention of a parcel of gas representative of the earth's atmosphere, would not, on its own, prove the AGW hypothesis, as you'd still need to examine the various feedback processes; but a negative result would falsify it. Has this been done?
The absorption spectrum has indeed been carefully measured and, from that, the radiative forcing due to doubling CO2 has been determined: a value of somewhere around 4 W/m^2 (±10%), which is agreed to by basically all serious scientists (including "AGW skeptics" Roy Spencer and Richard Lindzen). The big question, as you allude to, is how the various feedback processes in the atmosphere operate to produce the value for the climate sensitivity. “””
So just what "absorption spectrum" has been measured, and under what conditions has it been measured?
But more importantly, whatever GHG absorption spectra have been measured, no matter how carefully, they are pretty much useless if you haven't also measured just as carefully the RADIATION SPECTRUM that interacts with the

July 20, 2009 10:28 am

Hmmm, using greater uncertainty to narrow a conclusion. Is there any other endeavor in which this works? I mean, other than in the mind of the "intelligentsia"?
In debugging code or electronics, I usually remove uncertainty (check and eliminate possible causes one at a time) and thereby narrow my focus until I find the error (conclusion). Apparently, although I've been successful at doing this for 30 years now, I've been doing it all wrong!

George E. Smith
July 20, 2009 10:31 am

That's not a cat rolling a watermelon into a lake; it's a cat rolling a watermelon out of harm's way, and into the house, to escape from the rapidly rising sea level coming up behind it as a result of too much uncertainty in the radiative forcing.
George

Reed Coray
July 20, 2009 10:39 am

I thought the “science was settled”. Now I find out it’s not settled, it’s “worse than settled.” Would somebody please stop the merry-go-round?

ClimateFanBoy
July 20, 2009 10:57 am

Why is the kitty pushing the watermelon by the shoreline?

Leif Svalgaard
July 20, 2009 11:01 am

tallbloke (06:02:05) :
TSI from the sun at ~1365.2 to ~1366.6 W/m^2 is divided by four to get the incident insolation on the curved sun-facing side of Earth sorted out, but isn't the tropical area right under the sun going to feel the full effect of the ~1.4 (PMOD) to ~2.2 W/m^2 (Neptune/ACRIM) swing between solar max and solar min, rather than a quarter of it? Plus of course the 8 W/m^2 swing induced by Earth's eccentric orbit.
For me, it is simpler to correct for the curved Earth [and albedo and night and day] at the end [or not at all, if you are using percentage changes]. Then the ‘swing’ is more like 90 W/m2.
So in percentages you have a swing of 7% and a solar cycle effect of 0.1% [or almost a hundred times smaller]. That translates into a quarter [S-B law, not round vs. flat] temperature change, so 1.7% of 285K = 5K annual swing and 0.025% of 285K = 0.07K for the solar cycle effect [also almost a hundred times smaller].
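
Leif's arithmetic spelled out (the quarter-power step is just the Stefan-Boltzmann law, T ∝ F^(1/4), so dT/T ≈ (1/4)·dF/F; the 285 K and the percentage swings are his figures):

```python
T = 285.0   # K, rough global-mean surface temperature

swings = {
    "annual orbital swing": 0.07,   # ~7% flux variation over the year
    "solar cycle swing":    0.001,  # ~0.1% TSI variation over a cycle
}
for name, df_over_f in swings.items():
    # Stefan-Boltzmann: a fractional flux change maps to a quarter of
    # that fraction in absolute temperature.
    print(f"{name}: dT ~ {0.25 * df_over_f * T:.2f} K")
# -> ~5 K for the annual swing and ~0.07 K for the solar cycle,
#    matching Leif's numbers.
```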

page48
July 20, 2009 11:08 am

OK, greater uncertainty means "we" can't rule out higher sensitivity, but for some unknown reason "can" rule out lower sensitivity.
Haven’t people started to laugh at this kind of “grant money propaganda” doubletalk by now?

Robert Austin
July 20, 2009 11:23 am

David (08:56:20) :
The role of water in earth's climate is certainly the most central and least understood factor. Water exists on earth in three phases; it has a complex absorption spectrum; it stores and releases vast quantities of heat energy; and it has great effects on albedo in the form of clouds, snow and ice. Without the assumption of a positive feedback from water, the vaunted computer models show no frightening scenarios. CO2 is a simple gas and a climatological open book by comparison. Conclusion: nobody is close to pinning down the role of water in climate regulation.

Darell C. Phillips
July 20, 2009 11:27 am

J.Hansford (02:28:04) :
My hopeful take on that was a skeptical tide (the truth) coming in on both Cap and Trade (the CaT) and the watermelons (those who are green on the outside and red on the inside). Notice that the CaT has its back to the sea. Not a good idea. 8^)
Then, I found this (and please no one link the OTHER one on YT)-

David Walton
July 20, 2009 11:30 am

A cat pushing a watermelon on a lake. Sheer genius.

JFD
July 20, 2009 11:30 am

The “missing forcing” is produced groundwater from slow to recharge aquifers that is not in equilibrium with the atmosphere for the first cycle. This “new” water amounts to about 800 cubic kilometers per year, about 70% of which is used for food and forage irrigation. The potential energy in the water is changed into kinetic energy at constant temperature by evapotranspiration. The rising water vapor is condensed due to lower temperatures. The condensation process converts the kinetic energy back into potential energy, giving up the absorbed latent heat as specific heat thereby increasing the temperature of the atmosphere.
After the first 10 day cycle, the produced groundwater becomes part of the hydrological cycle and thus adds no more heat; however the production of groundwater from slow to recharge aquifers continues and slowly increases each year. The rate of increase in groundwater production has slowed somewhat in recent years.
The condensed new water adds about 2.5 mm to the level of the oceans each year.
I understand why the water vapor created by the hydrological cycle is not accounted for in the AGW models, but leaving out the new water (about 7% of which comes from burning fossil fuels) is a serious oversight in my view. The new water vapor adds much more heat to the atmosphere than is reflected in global warming; thus there is a "temperature relief valve" in the atmosphere, perhaps in the Tropopause, where the relative humidity is decreasing due to increasing carbon dioxide lowering the partial pressure of the mixture.
Also not considered by the AGW models is water vapor from power and industrial plant mechanical draft (and natural draft in the case of nuclear plants) cooling towers. These cooling towers accelerate evaporation and create aerosols inside the towers, which may create different cloud structures. Unless the water supply was from groundwater produced from slow to recharge aquifers, cooling towers would not increase the level of the oceans.

Bill Illis
July 20, 2009 11:42 am

Joel Shore, the highest CO2 sensitivity one can get from the PaleoClimate record is 1.5C per doubling of CO2.
There are a few periods which indicate a higher sensitivity number but most periods indicate a much lower than 1.5C per doubling figure.
Let’s look at 450 million years ago. CO2 4,500 ppm, 4 doublings – Solar forcing 5% lower than today or 4.5 watts/m^2 or 1.2C – Estimated Temp at the time +2.0C from today or just 0.8C per doubling netting out the CO2 and solar changes.
Let’s look at 35 million years ago. CO2 1,400 ppm, 2.5 doublings – Estimated temp at the time +2.0C from today or 0.8C per doubling again.
I can keep going with about 500 million other years if you want.
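
Bill's 450-million-year case as a worked check (all inputs are his estimates, not independently sourced):

```python
import math

co2_then, co2_pre = 4500.0, 280.0    # ppm, Bill's figures
temp_anomaly = 2.0                   # degC above today, his estimate
solar_deficit = 1.2                  # degC of cooling from a ~5% dimmer sun

doublings = math.log(co2_then / co2_pre) / math.log(2)   # ~4.0
per_doubling = (temp_anomaly + solar_deficit) / doublings
print(f"{doublings:.1f} doublings -> {per_doubling:.1f} degC per doubling")
# -> ~0.8 degC per doubling, matching his figure.
```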

rbateman
July 20, 2009 11:44 am

Here we show that the probability of high climate sensitivity is higher than previously thought because uncertainty in historical radiative forcing has not been sufficiently considered.

As for the Solar Forcing, visible activity on the Sun has been counted from the start, and then changed in midstream. Actual measurements of visible activity on the sun began somewhat later, has suffered from changing of what is measured and how it is measured, as well a dropped effort leaving large gaps.
Let me make clear that there is a whole quantitative difference between counting the number of objects to fit into a scaling system, and the measuring of the volume of those objects.
When it comes to active regions, sunspots, flares, etc. on the Sun, it makes no difference whether counting or measuring: zero is zero.
So, when activity is moving upwards, counting takes a back seat to measuring. Since the ratio of area to count is in a constant state of change,
and the types of phenomena on the Sun also change in relation to each other, there are no magic ratios for going backwards from measurements to counts.
The only way to do that is to dig up all drawings/photos and measure them in the same way.
For a picture of the record and its gaps/uncertainties:
http://www.robertb.darkhorizons.org/SC24/GrDebSfoSnUPF.PNG
I’m still at it trying to link together the most consistent measurements w/overlaps. It’s not pretty.

David
July 20, 2009 11:50 am

Darell C. Phillips (11:27:20) :
I had to go look when you brought it up. Thanks, I am not going to sleep tonight.

Retired Engineer
July 20, 2009 12:04 pm

The watermelon isn’t floating, just sitting on a hard surface right below the waterline. Ditto cat. Very few cats like water, but a rare one does. Has the SPCA seen this?
If CO2 really has a great effect, why are we here? In times long past there was lots of CO2. How did any life survive? If CO2 is the only thing driving warming, it should have been really hot back then. Unless, there are other factors? Unknown factors? Things that the models don’t consider. Perish the thought!
To claim that because we don’t know, the real value must be worse than what we previously thought we knew is almost as dumb as someone saying “We had to spend a lot of money to keep from going bankrupt.”
Oh, wait …

tallbloke
July 20, 2009 12:12 pm

Leif Svalgaard (11:01:25) :
So in percentages you have a swing of 7% and a solar cycle effect of 0.1% [or almost a hundred times smaller]. That translates into a quarter [S-B law, not round vs. flat] temperature change, so 1.7% of 285K = 5K annual swing and 0.025% of 285K = 0.07K for the solar cycle effect [also almost a hundred times smaller].

Ok, but do you agree that the mid day tropical sea surface sees the full ~1.4W/m^2(PMOD) or ~2.2W/m^2(ACRIM) variation over the solar cycle?

July 20, 2009 12:49 pm

tallbloke (12:12:02) :
Ok, but do you agree that the mid day tropical sea surface sees the full ~1.4W/m^2(PMOD) or ~2.2W/m^2(ACRIM) variation over the solar cycle?
No, there is this little thing called the albedo, which takes a 31% [or so] bite out of the radiation, and the solar cycle variation is more like 1.2 W/m2, so we are talking about a tiny 0.69*1.2 = 0.8 W/m2 variation at midday. Integrated over the day, that falls to 0.3 W/m2.

Julian Flood
July 20, 2009 12:54 pm

There’s a post on Real Climate about forcing and unknowns — Swanson’s
“Warming, interrupted: Much ado about natural variability”.
I tried to post there but for some reason my tries vanish — they must have a very fierce spam filter. Or something. This is what I thought about that paper — it’s also relevant to the subject here — together with the curious way these papers make assumptions which are not spelled out fully but which influence the logic.
“Somewhere buried in Google are my thoughts on this — no doubt Professor Swanson will be delighted to find that I came to much the same conclusion as he did. My preferred metric is Hadcrut: there is a linear increase from 1910 to 1939 and a similar line from 1970 to the 90’s el Nino, about .17 deg/decade.
I cannot share his confidence that this fits neatly into the current forcing hypothesis. Forcing in the first linear period must have been very different from that during the second, not least because, as I have been told, the real CO2 effect only kicked in with full force during the ’60s. Also, his logic that the sensitivity remains the same depends on the assumption that any other effect is small and short-lived compared to that from CO2. If there is an as yet undefined forcing which operates more-or-less continuously during both periods then all bets are off. Something, not CO2, was warming the planet from 1910 onwards. The similarity with the post-war warming may be coincidence, of course, but it doesn’t seem necessary to multiply the causes when assuming the same effect is in operation is so much more economical.
The Folland and Parker SST correction, in my opinion, is ill-judged. Without it, the two linear warming spells stand out clearly, as does the miniature PETM that is the period 1939 to ‘46. On the clean graph it looks as if the planet were controlling itself into a steady .17 deg/decade, yielding to extra-large warming hits but, rolling with the punches, resuming from where it left off when the punching stops.
Now all we have to do is explain the WWII temperature excursion and the huge 97/98 el Nino spike. It’s a pity the latter did not occur just after the Gulf War. If it had, I could have done that too.”
JF

George E. Smith
July 20, 2009 1:08 pm

“”” Robert Austin (11:23:07) :
David (08:56:20) :
The role of water in earths climate is certainly the most central and least understood factor. Water exists on earth in three phases, it has a complex absorption spectrum, it stores and releases vast quantities of heat energy and it has great effects on albedo in the form of clouds , snow and ice. Without the assumption of a positive feedback from water, the vaunted computer models show no frightening scenarios. CO2 is a simple gas and an climatological open book by comparison. Conclusion: nobody is close to pinning down the role of water in climate regulation. “””
“Least Understood factor ? ” By whom ?
I would say the water factor is well understood; so I suggest you reduce the number of people you choose to include in your nobodies who aren’t close to pinning down the role of water; well, you need to reduce it by at least one. And I’m not responsible for anybody else’s lack of understanding.
George

July 20, 2009 1:24 pm

This is incorrect since forcing is never expected to be zero when one expects a logarithmic relationship of CO2 to temperature.
You have a log curve (most likely) in this case. However, you have to multiply your log curve by some sensitivity value. In strictly mathematical considerations it could be zero.
But give me ANY monotonically rising curve multiplied by a sensitivity and the lower bound is zero.
Now of course for a log curve the effect is undefined at zero. But for engineering work (and that is as close as measured climate gets) it could be considered zero. But if it makes you happy I will just say at concentrations of 1E-666 ppmv the value is very close to zero. At any reasonable sensitivity (say up to 20 kW/m^2). Not counting quantum effects. (Where in the universe is that CO2 molecule anyway? Does it even exist?)
So really, below a certain concentration the CO2 molecule doesn’t even exist anywhere in the universe. Mathematics has its limitations. It is however a thing of great beauty and sometimes it even has practical applications in bounded ranges.
Yeah. I know. Spoken like an engineer.

Allan M
July 20, 2009 2:09 pm

Joel Shore:
“A positive feedback does not necessarily lead to a run-away effect. It has to be sufficiently strong to do so. If it is not, it simply leads to amplification of the warming.”
No. This is just a gross reversal of cause and effect. Try adding the effect of positive feedback to the result of each iteration. This is elsewhere known as “Compound interest.”

Nogw
July 20, 2009 2:29 pm

Why is it so that LOD (length of day) forecasts of temperatures and fish catches work nicely for the UN’s FAO and not for the UN’s IPCC?
I keep insisting on this issue to call your attention to it because it is a case of UN vs. UN.
The paper is divided into 12 PDFs at:
ftp://ftp.fao.org/docrep/fao/005/y2787e/

July 20, 2009 2:43 pm

Boudu (00:26:46) :
Well, that’s settled then.

My thoughts exactly.

DaveE
July 20, 2009 2:45 pm

Allan M (14:09:24) :
You’re wasting your time with that one. Warmists don’t appreciate that positive feedback also acts on the signal fed back, I’ve had the argument too many times now.
DaveE.

July 20, 2009 2:46 pm

Allan M,
Quite right. There is a lot of confusion between amplification and positive feedback. Climate “science” is full of it. (and you can take that in several ways and for most of those currently involved true).

tallbloke
July 20, 2009 2:48 pm

Leif Svalgaard (12:49:19) :
tallbloke (12:12:02) :
Ok, but do you agree that the mid day tropical sea surface sees the full ~1.4W/m^2(PMOD) or ~2.2W/m^2(ACRIM) variation over the solar cycle?
No, there is this little thing called the albedo, which takes a 31% [or so] bite out of the radiation, and the solar cycle variation is more like 1.2 W/m2, so we are talking about a tiny 0.69*1.2 = 0.8 W/m2 variation at midday. Integrated over the day, that falls to 0.3 W/m2.

Or about 1.4W/m^2 according to the ACRIM/Neptune data. Which isn’t far off the 1.7W/m^2 the IPCC claim for co2, which is of course overinflated by their failure to account for positive phases of oceanic cycles 1975-2005.
And there is hardly any cloud over the Pacific warm pool most of the time when it’s in heat-absorbing mode during the top of solar cycles, so that makes it a bigger forcing than just about anything else, on an ~11 year cyclic basis.
But if TSI follows the sunspot number fairly reliably, the increase in sunspot numbers during the C20th is going to have quite an effect on longer term climate trends too, especially if ACRIM’s measurements of absolute solar output are nearer the truth than PMOD’s.
What independent means of calibration are there? Hoyt’s survey of measurements on aerosols must span several solar cycles. Can they be used to estimate the difference in solar radiation between solar max and min? How were the satellites’ sensors calibrated?

Chris V.
July 20, 2009 2:51 pm

Bill Illis (11:42:22) :
Joel Shore, the highest CO2 sensitivity one can get from the PaleoClimate record is 1.5C per doubling of CO2.
Let’s look at 450 million years ago….

Bill- calculating CO2 sensitivity based on paleoclimate MILLIONS of years ago is not as straight forward as you seem to think.
The layout of the continents was much different, and the uncertainties in the proxies get bigger the further back you go.
Conditions during the last glacial maximum (CO2 levels, layout of the continents, extent of ice sheets, dust levels, etc) are much better known. Using the LGM, CO2 sensitivities are calculated to be about 3C +/- 1.5C, basically the same sensitivities the models give.

John F. Pittman
July 20, 2009 3:00 pm

But not a chemical engineer, M. Simon. The point is that CO2 in the atmosphere never gets to zero no matter how hard you try to make it go there. 0.0 stays undefined. Also, such graphs should be put on semi-log paper so one doesn’t make the assumption it should be treated like zero. Old ChemE trick.

John S.
July 20, 2009 3:03 pm

The wide range of climate “sensitivity” to doubling of CO2 will not be narrowed by any study that conflates a purely internal capacitive effect–which is all that is involved in the Earth’s “greenhouse”–with external forcing. Capacitors produce no power on their own; they can only store and discharge energy.

Steve in SC
July 20, 2009 4:00 pm

Somebody help me out here.
We hear a lot about absorption spectra, radiative spectra, ad nauseam.
Could it just possibly be that the primary mode of heat transfer from our lonely little CO2 molecule is not radiation?
Since CO2 is such a well mixed gas, methinks convection may have a tiny bit to say about the result. That is seemingly ignored.

tallbloke
July 20, 2009 4:07 pm

Chris V. (14:51:26) :
Using the LGM, CO2 sensitivities are calculated to be about 3C +/- 1.5C, basically the same sensitivities the models give.

That’s an error range big enough to make both you and Bill right.
And the models don’t give that as an output in any meaningful sense of the word; they close the radiation budget with that figure because everything else is fitted to it.

David
July 20, 2009 5:05 pm

“Conditions during the last glacial maximum (CO2 levels, layout of the continents, extent of ice sheets, dust levels, etc) are much better known. Using the LGM, CO2 sensitivities are calculated to be about 3C +/- 1.5C, basically the same sensitivities the models give.”
Would not a cold, dry atmosphere necessitate a higher forcing value for CO2?

Bill Illis
July 20, 2009 5:30 pm

Chris V. (14:51:26) :
Using the LGM, CO2 sensitivities are calculated to be about 3C +/- 1.5C, basically the same sensitivities the models give.

Those numbers indicate that CO2 was responsible for 1.9C of the 5.0C change in temperatures during the ice ages.
So the majority of the temperature change, 3.1C, is due to other factors.
How do we know that the other factors are not in fact responsible for 4.0C and CO2 was only responsible for 1.0C (or the same 1.5C per doubling).
The climate models are programmed to give 3.0C per doubling. The fact that a climate model result indicated the sensitivity is 3.0C per doubling is not an unexpected result it seems. A climate model is not proof when it is programmed to give that result.

Chris V.
July 20, 2009 6:28 pm

Bill Illis (17:30:19) :
The CO2 sensitivity numbers include the effects of feedbacks.
The climate models are not programmed to give 3C per doubling. In fact, some have higher sensitivities, some lower, but they cluster around 3.

J.Hansford
July 20, 2009 6:39 pm

The trick with AGW is to distract onlookers from the fact that the empirical evidence doesn’t support the hypothesis…. So get a cute picture of a cat rolling a watermelon and speak only of assumptions based on computer models based on statistics derived from suspect sources….
It’s the shiny things in life that amaze the confused.

AlexB
July 20, 2009 6:55 pm

Just finished reading the paper. Can I say from the outset, please consider the environment before printing those 20 pages… ooops! Basically, by freeing up the constraints on forcing in the preindustrial era, before CO2 started changing, you allow CO2 to take a more dominant role in explaining the industrial era, when it started to increase along with temperature. By adding all that uncertainty to the pre-industrial era you heavily weight the model toward the industrial period. In a nutshell it is ‘correlation is causation’ dressed in fancy maths. Another distraction from actual science.

Joel Shore
July 20, 2009 7:02 pm

Bill Illis says:

Let’s look at 450 million years ago. CO2 4,500 ppm, 4 doublings – Solar forcing 5% lower than today or 4.5 watts/m^2 or 1.2C – Estimated Temp at the time +2.0C from today or just 0.8C per doubling netting out the CO2 and solar changes.
Let’s look at 35 million years ago. CO2 1,400 ppm, 2.5 doublings – Estimated temp at the time +2.0C from today or 0.8C per doubling again.

Bill, this is ludicrous. You can’t just do a simple comparison with today when you are talking about geological times like that. There are all sorts of factors that vary over those timescales, including the location of the continents and mountain ranges and the output of the sun!! Perhaps you should study the actual serious science that has been done on paleoclimate.

Joel Shore
July 20, 2009 7:11 pm

Sorry, I realized after posting that Chris V. had already made the point in my last post.
Bill Illis says:

Those numbers indicate that CO2 was responsible for 1.9C of the 5.0C change in temperatures during the ice ages.
So the majority of the temperature change, 3.1C, is due to other factors.
How do we know that the other factors are not in fact responsible for 4.0C and CO2 was only responsible for 1.0C (or the same 1.5C per doubling).

We know that because we have estimates for the radiative forcing of each component. Don’t think of the LGM as showing us directly that doubling CO2 causes ~3 C rise. The more fundamental point is that it shows us that the climate sensitivity is ~0.75 C per (W/m^2) of forcing. Then, the known value of the forcing due to doubling CO2 implies this translates into 3 C per doubling.
M. Simon says:

Quite right. There is a lot of confusion between amplification and positive feedback. Climate “science” is full of it. (and you can take that in several ways and for most of those currently involved true).

There is no confusion except to the extent that climate science apparently uses the term “feedback” in a somewhat different sense than some engineering disciplines do. This is a not uncommon occurrence in science (that different fields use terminology somewhat differently). The confusion arises only when these engineers think that they can use their understanding from how the term is used in their field.

Philip_B
July 20, 2009 7:29 pm

JFD, I just checked your calculation and got 2.2 mm per annum, but you probably used a different value for the oceans.
If your groundwater estimate is correct then you have just accounted for 2/3rds of the ocean rise trend.
It makes me wonder what effect dam filling has on sea level rise. Several large dams are being filled at present, such as the Three Gorges. Maybe that accounts for the recent levelling off.
I always found the sea level rise the most persuasive argument for a warming climate. Now you have provided an alternative explanation.
BTW, I think the residence time and GH effect of water vapour from irrigation are seriously underestimated, because they use values from the humid tropics, whereas irrigation is in arid areas. So it is much less likely to form clouds and then rain.
Where I live in Western Australia, we get days when the humidity is high(er) near the ground, but not higher up in the atmosphere, so no clouds form. These days are 5C to 10C hotter than low(er) humidity days. The effect of humidity near the ground is very large.

Joel Shore
July 20, 2009 7:30 pm

Without having yet read the Tanaka et al. paper in any detail, I think the reason why their approach constrains the lower bound on the climate sensitivity better than the upper bound is as follows. The climate sensitivity (or really a transient climate response) is estimated from the modern temperature record as
(delta T) / (F_GHG + F_aerosols_etc)
where (delta T) is the temperature change, F_GHG is the radiative forcing due to greenhouse gases (which is known quite accurately…+-10%) and F_aerosols_etc is the forcing due to aerosols and other things (which is not known very accurately). F_aerosols_etc is believed to be negative (or at worst very weakly positive), so this gives a pretty good lower bound on the sensitivity; however, it could be so negative that it essentially cancels out the forcing due to GHGs so that the net forcing is very small, which would then imply that the climate sensitivity is very large.
However, I think scientists like James Annan, who believe that the climate sensitivity is close to 3 C and that we can rule out very much higher climate sensitivities (and very much lower ones), would argue that the historical temperature record is not all we have…and in fact is not the best constraint we have on climate sensitivity. He would say that combining other constraints from paleoclimate (like the Last Glacial Maximum) and the climate response to the Mt. Pinatubo eruption allows one to put significantly tighter constraints on the climate sensitivity…and, in particular, rule out values much larger than 3 C.

H.R.
July 20, 2009 7:44 pm

Joel Shore(19:11:05)
“There is no confusion except to the extent that climate science apparently uses the term “feedback” in a somewhat different sense than some engineering disciplines do.”
Then what’s all the fuss about a tipping point if it’s just amplification and not feedback?
The only evidence of climate tipping points and feedback I see, given the locations of the continents for the past couple of million years, is that the earth has a nasty tendency to plunge into 100,000-year periods of glaciation and jump back out of them for a stretch of 10k or so years.

David
July 20, 2009 7:46 pm

Joel Shore (19:11:05) :
“The more fundamental point is that it shows us that the climate sensitivity is ~0.75 C per (W/m^2) of forcing.”
OK, but how is that shown?

Bill Illis
July 20, 2009 9:11 pm

The 0.75C per watt/m^2 is a mythical guess based on the guess-estimated long-term equilibrium CO2 sensitivity. Nobody knows what this number should be.
All the climate models actually use a figure of about 0.3C per watt/m^2 (and 0.15C for volcanoes) rather than the 0.75C commonly quoted.
I meant to post on this too in terms of the uncertainty. There are actually two (even three) components to the uncertainty.
The “forcing” of GHGs and other drivers like aerosols (quoted in Watts/m^2) and then the Temp C response per Watt/m^2 change (and the lag effect of how long this takes before the full temp response occurs).
Temp C = Forcing (watts/m^2) * TempC response/watt/m^2 * Time
The estimates for the Temp response per Watt/m^2 range from 0.1C to 1.0C (a factor of 10), and no one can really figure out the length of time for the lag element yet (it used to be several years, more recently 30 years, and now Hansen is pushing 100 to 1500 years for the lag).
Generally, the climate models over-estimate both the forcing and the TempC response and underestimate the lag effect going by the ocean heat content data which indicates there is really little lag at all.
All of this (including the Paleoclimate data) points to a lower CO2 sensitivity figure.

F. Ross
July 20, 2009 9:28 pm

Peer reviewed no doubt!
/snide off

tallbloke
July 20, 2009 11:28 pm

Philip_B (19:29:00) :
Several large dams are being filled at present such as the 3 Gorges. Maybe that accounts for the recent levelling off.

The Three Gorges dam built up an extra 30 cubic kilometers to 2003
Rise due to expansion alone was 5400 km^3 over 1993-2003

Darell C. Phillips
July 21, 2009 12:40 am

David (11:50:20) :
Well, I did warn you…

Sandy
July 21, 2009 1:29 am

Can an estimate be made of the ‘wind chill’ factor of the trade winds? Following Willis’s idea, it seems that the main cooling effect of tropical thunderstorms is the evaporation of the sun-warmed ocean into the trade winds which feed the monster cu-nims, and continue well into the night.
Surely a 40 knot wind over sun-warmed ocean will strip away a significant proportion of the incident energy before it has a chance to raise the water temperature?
Since the majority of the evaporated water will fall as rain from the cu-nim, there may be a detectable salinity difference between the cu-nim zone and 20 deg North and South?
Could this give us a macro-estimate of W/m^2 of the ocean cooling effect?

Allan M
July 21, 2009 2:23 am

Joel Shore
“There is no confusion except to the extent that climate science apparently uses the term “feedback” in a somewhat different sense than some engineering disciplines do. This is a not uncommon occurrence in science (that different fields use terminology somewhat differently). The confusion arises only when these engineers think that they can use their understanding from how the term is used in their field.”
The misunderstanding comes from the fact that engineers and their colleagues have to work with the real world. Some of us do not qualify as climate pseudo-scientists because we lack the correct jewellery. We have to rely on a long working knowledge of amplification and feedback, not on GIGO computer models with the sort of dubious maths of the likes of M. Mann & co.

Geoff Sherrington
July 21, 2009 3:00 am

Sisypuss. As revealed at the source blog for the image,
http://pictureisunrelated.com/2009/03/03/this-cat-defies-logic/#comments
Here, we call greenies watermelons because they are green on the outside and red underneath. Like you NH folk we also have other names for wet cats.
Sisyphus was a Greek mythical man doomed forever to roll a large stone up a hill, but he never made it; it always rolled down and he had to start again. Bad dude; insulted the gods and thought he was better than Zeus. The symbolism in the picture is that one cannot go lower than the base of the hill, but one can, with various degrees of ingenuity, go further up the hill, i.e. greater sensitivity is possible at upper levels but one cannot go lower. To further the simile, Sisyphus was doomed to roll forever, the analogy being that one cannot cherry-pick time dates. Like these authors did.

Highlander
July 21, 2009 3:07 am

The question was:
—————-
Sandy (01:29:49) :
Can an estimate be made of the ‘wind chill’ factor of the trade winds? Following Willis’s idea, it seems that the main cooling effect of tropical thunderstorms is the evaporation of the sun-warmed ocean into the trade winds which feed the monster cu-nims, and continue well into the night.
—————-
I do not believe that one could rightly refer to the ‘wind chill factor’, for the simple reason that it is nought but a reference to human physiology.
.
In the main, it refers to the amount of perceived cooling as a result of a wind blowing upon exposed flesh, in direct relation to the cooling caused by insensible perspiration.
.
From the American Heritage English Dictionary:
——-
wind-chill factor
n.
The temperature of windless air that would have the same effect on exposed human skin as a given combination of wind speed and air temperature.
——-

Steve Keohane
July 21, 2009 6:55 am

Thirty years of inexorable GHG increase, UAH anomaly for the past two years is .13°C, a .43°C/century trend. No need to fantasize about increasing the sensitivity in the climate video games.

Robert Austin
July 21, 2009 7:03 am

George E. Smith (13:08:22) :
“”” Robert Austin (11:23:07) :
David (08:56:20) :
The role of water in earths climate is certainly the most central and least understood factor. Water exists on earth in three phases, it has a complex absorption spectrum, it stores and releases vast quantities of heat energy and it has great effects on albedo in the form of clouds , snow and ice. Without the assumption of a positive feedback from water, the vaunted computer models show no frightening scenarios. CO2 is a simple gas and an climatological open book by comparison. Conclusion: nobody is close to pinning down the role of water in climate regulation. “””
“Least Understood factor ? ” By whom ?
I would say the water factor is well understood; so I suggest you reduce the number of people you choose to include in your nobodies who aren’t close to pinning down the role of water; well, you need to reduce it by at least one. And I’m not responsible for anybody else’s lack of understanding.
George
________________________________________________________________
George, thanks for pointing out my gross ignorance in stating that the role of water in climate regulation is poorly understood. I did not realize that great strides have been made in this area of climate science and that at least one person (you) has this understanding. I will follow your future posts assiduously so that I may learn from the master.

Joel Shore
July 21, 2009 7:38 am

David says:

“The more fundamental point is that it shows us that the climate sensitivity is ~0.75 C per (W/m^2) of forcing.”
OK, but how is that shown?

The estimate obtained from the last glacial maximum was outlined by Hansen in this Scientific American piece: http://www.met.tamu.edu/class/old_atmo629/hansen.pdf (see Fig. 3).
Bill Illis says:

The 0.75C per watt/m^2 is a mythical guess based on the guess-estimated long-term equilibrium CO2 sensitivity. Nobody knows what this number should be.

It is not a “mythical guess” and it is not just based on response to CO2. It is an estimate based on the climate response to different events in the past.

All the climate models actually use a figure of about 0.3C per watt/m^2 (and 0.15C for volcanoes) rather than the 0.75C commonly quoted.

You are confusing the sensitivity before feedbacks are included (which is about 0.3 C per W/m^2) and the one after feedbacks (which is estimated to be about 0.75 C). I am not sure where your volcanoes number comes from.

Temp C = Forcing (watts/m^2) * TempC response/watt/m^2 * Time

This equation makes no sense as even the units don’t work out. What you call “Time” should probably be a dimensionless function that ranges from 0 in the limit of very short time to 1 in the limit of long time. Note that for comparing the Last Glacial Maximum to now, the time lag issue isn’t really relevant because the timescales are long enough. The time lag issue is more relevant when estimating the sensitivity from the instrumental temperature record but this doesn’t give us very strong constraints anyway, mainly because of the uncertainty in the aerosol forcing.

Joel Shore
July 21, 2009 7:44 am

Steve Keohane says:

Thirty years of inexorable GHG increase, UAH anomaly for the past two years is .13°C, a .43°C/century trend.

There are several errors here:
(1) The base-point for the anomaly isn’t 1979. I think it is the mean over 1979 to 1998 or something like that.
(2) When you look at the past 2 years, you are cherry-picking a particularly cold period because of the La Nina.
(3) Furthermore, the La Nina / El Nino oscillations tend to show up more dramatically in the satellite temperatures than the surface record further exaggerating the effect of this cherry-pick. (By choosing the UAH analysis, you are also cherry-picking the data set with the lowest trend for the satellite record.)

July 21, 2009 10:06 am

M. Simon says:
Quite right. There is a lot of confusion between amplification and positive feedback. Climate “science” is full of it. (and you can take that in several ways and for most of those currently involved true).
There is no confusion except to the extent that climate science apparently uses the term “feedback” in a somewhat different sense than some engineering disciplines do. This is a not uncommon occurrence in science (that different fields use terminology somewhat differently). The confusion arises only when these engineers think that they can use their understanding from how the term is used in their field.
CO2 cause the atmosphere to get warmer – this causes a higher concentration of water vapor in the atmosphere – which causes the atmosphere to get warmer – which causes more evolution of CO2 from the oceans – this causes a higher concentration of water vapor in the atmosphere – which causes the atmosphere to get warmer – this causes a higher concentration of water vapor in the atmosphere – which causes the atmosphere to get warmer – this causes a higher concentration of water vapor in the atmosphere – which causes the atmosphere to get warmer – which causes more evolution of CO2 from the oceans – this causes a higher concentration of water vapor in the atmosphere –
Since WV is more effective than CO2 it looks like the atmosphere will saturate with WV from all that positive feedback.
OTOH if WV (due to clouds) is a negative feedback (with lags) the whole series settles nicely – after a lag.
But perhaps you would explain to me the difference between electrical positive and negative feedback and the use of the same terms in climate.
BTW if the climate model folks are not using electrical analogs for their models they are way behind the times.

David
July 21, 2009 10:09 am

Why would climate sensitivity remain static?

July 21, 2009 10:16 am

Joel Shore(19:11:05),
Let me put it to you direct. Explain the difference between positive and negative feedback in climate science and electronic design.

George E. Smith
July 21, 2009 10:17 am

“”” Robert Austin (07:03:32) :
George E. Smith (13:08:22) :
“”” Robert Austin (11:23:07) :
David (08:56:20) :
The role of water in earths climate is certainly the most central and least understood factor. Water exists on earth in three phases, it has a complex absorption spectrum, it stores and releases vast quantities of heat energy and it has great effects on albedo in the form of clouds , snow and ice. Without the assumption of a positive feedback from water, the vaunted computer models show no frightening scenarios. CO2 is a simple gas and an climatological open book by comparison. Conclusion: nobody is close to pinning down the role of water in climate regulation. “””
“Least Understood factor ? ” By whom ?
I would say the water factor is well understood; so I suggest you reduce the number of people you choose to include in your nobodies who aren’t close to pinning down the role of water; well, you need to reduce it by at least one. And I’m not responsible for anybody else’s lack of understanding.
George
________________________________________________________________
George, thanks for pointing out my gross ignorance in stating that the role of water in climate regulation is poorly understood. I did not realize that great strides have been made in this area of climate science and that at least one person (you) has this understanding. I will follow your future posts assiduously so that I may learn from the master. “””
I don’t recall saying anything about any ignorance on your part. I agree with your assertion that a lot of people don’t understand the water connection; that’s the whole problem. But I believe if you read “How Much more Rain will Global Warming Bring?” by Wentz et al., in Science for July 7, 2007, you will find the clues to all that anyone needs to know about the water linkage.
And the paper makes it clear that at least the modellers don’t understand it, since their models are off by a factor of as much as seven. Couple that with John Christy’s paper in the January 2001 Geophysical Research Letters, showing that oceanic water and air temperatures are not correlated, and therefore the true lower-troposphere history cannot be recovered from the meaningless ocean water measurements, where it was blindly assumed that the air and water temperatures were the same. Then it becomes clear that before the Argo buoys and the polar-orbit satellites, much of the global “climate data” was worthless junk; so our real knowledge of Earth’s climate doesn’t go back much before about 1980.
And following me would not be the right thing to do; after all, I have to depend on others to gather and present the data. Unfortunately, I spend most of the time trying to find out under what sort of controlled conditions that data was gathered.
As you point out, water exists in the atmosphere in three phases; you’ll probably find at least 12 locations in Anthony’s archives where I have stressed that simple fact and pointed out that in the vapor form, water produces positive feedback warming, to keep us from being a frozen ball; but in the other two phases of liquid and solid, where water forms clouds, water generates negative feedback cooling. And the Wentz paper shows the extent of that effect, although they didn’t specifically cite that; but it is inescapable if you believe their data.
I am sure that there is a lot of fill in detail to obtain; but you don’t have to resort to Quantum Chromodynamics to see how water controls the earth’s mean surface temperatures.
But those who are dead set on convincing people that carbon is the devil incarnate, since it leads to their political agenda, will never see the water role.
George

July 21, 2009 11:02 am

(3) Furthermore, the La Nina / El Nino oscillations tend to show up more dramatically in the satellite temperatures than the surface record further exaggerating the effect of this cherry-pick. (By choosing the UAH analysis, you are also cherry-picking the data set with the lowest trend for the satellite record.)
But the surface record is not spatially or chronologically (min max) well distributed.
I’m told models use 15 minute time chunks. Fine. The temp records only give you min-max without an actual time when those temps were recorded closer than 24 hours. Now if you are going to initialize your model properly with the temp record you need simultaneous records (i.e. all the starting temps should be at say 0000 GMT)
Otherwise Lorenz chaos diverges your model from reality.
==
http://powerandcontrol.blogspot.com/2009/01/strange-attractors.html
Complex deterministic systems suffer not only from sensitive dependence on initial conditions but also from possible sensitive dependence on the differences between Nature and the models employed in representing it. The apparent linear response of the current generation of climate models to radiative forcing is likely caused by inadvertent shortcomings in the parameterization schemes employed. Karl Popper wrote (see my essay on his views):
“The method of science depends on our attempts to describe the world with simple models. Theories that are complex may become untestable, even if they happen to be true. Science may be described as the art of systematic oversimplification, the art of discerning what we may with advantage omit.”
If Popper had known of the predictability problems caused by the Lorenz paradigm, he could easily have expanded on this statement. He might have added that simple models are unlikely to represent adequately the nonlinear details of the response of the system, and are therefore unlikely to show a realistic response to threshold crossings hidden in its microstructure. Popper knew, of course, that complex models (such as General Circulation Models) face another dilemma.
I quote him again: “The question arises: how good does the model have to be in order to allow us to calculate the approximation required by accountability? (…) The complexity of the system can be assessed only if an approximate model is at hand.”
From this perspective, those that advocate the idea that the response of the real climate to radiative forcing is adequately represented in climate models have an obligation to prove that they have not overlooked a single nonlinear, possibly chaotic feedback mechanism that Nature itself employs.

Clouds are non-linear and the modelers admit they don’t do those well. And how do you handle an upwelling cell 1 km on a side in a model with 100 km grid squares? Well, you parameterize them. Are the parameters correct to sufficient accuracy? We do know that the models did not predict the cooling from 2000 until 2020. Why 2020? That is the latest IPCC estimate. So some parameter in the models is wrong. Maybe many.
So at least for now the models can’t be trusted because their output has not matched the data – i.e. CO2 sensitivity is too high. Because some forcing is now overriding CO2. Or to put it another way. Some other forcing (internal variability?) has been aliased for CO2 forcing.

JFD
July 21, 2009 11:58 am

Philip_B (19:29:00) :
Here is the complete background and calculations that I put in my comments to EPA’s carbon dioxide endangerment finding;
There are two basic ways to add new water to the atmosphere:
1. Production of ground water from slow to recharge aquifers. This water is not in equilibrium with the atmosphere so it will take one evaporation-condensation cycle, about 10 days, to come into near equilibrium with the hydrological cycle. The amount of new water added will be the production volume minus the recharge volume (which will be very small). The added new water will slightly increase the depth and surface area of the oceans each day. Due to the impact of tides, winds and gravity the increase in sea level will not be observable on a daily basis, but over time the average sea level will rise measurably. Let’s calculate the increase in ocean levels due to new groundwater added to the atmosphere.
a. Global ground water production estimates from technical literature sources range from 670 up to 830 cubic kilometers per year in 1995; for one example go to: http://www.iwmi.cgiar.org/EWMA/files/papers/Energyprice_GW.pdf
b. Available data indicates that ground water production basically follows the increase in world population times a factor of about 1.20 to account for a higher rate of increase in industrial use. At some point the cost of energy and consequent loss of productivity will limit the higher growth in industrial use. Use of a groundwater production growth rate of 2.5% per year to take into account population and industrial growth since 1995 results in a current groundwater production of 1090 cubic kilometers per year based on the average of the range. I will use 900 cubic kilometers per year of groundwater production, so as to not overstate the conclusion.
c. Technical literature aquifer recharge rate estimates range from 5 to 11% of groundwater production. As noted above, recharge rates are highly variable depending on things such as geologic history, rainfall, soils and average temperature. I will use 8% as the average recharge rate.
d. Groundwater is used 9% for domestic, 21% industry and 70% agriculture, see, “Ecosystems and human well being: current state & trends” by Millennium Ecosystem Assessment.
e. The surface area of the world’s oceans is 335,258,000 square kilometers. Using one year as the measuring time, the 900 cubic kilometers per year of new water would increase the level of the oceans on average by: 900 x 10 to the 9th power m3/year divided by 335.258 x 10 to the 12th power m2 = 0.0026845 m/year rise on a 100% return basis (see below for recharge discount). Converting to mm/year: 1000 mm/m x 0.0026845 m/year = 2.6845 mm per year rise in average ocean level on a 100% return basis.
f. The calculated value of 2.7 mm per year for ocean rise on a 100% return basis must be discounted as follows: 70% of the condensed new water falls into the oceans as rainfall after having completed one evaporation-condensation cycle; 8% of land rainfall results in recharge of aquifers. Therefore the percentage of precipitated groundwater that winds up in the oceans = 70% + (30% – 8%) = 92%. The actual rise in ocean level per year from groundwater production is then 0.92 x 2.6845 = 2.47 mm per year. (A quick numeric check of steps e through g appears after item 2 below.)
g. Finally, we can compare the 2.5 mm per year ocean level rise from new water added from groundwater production using various literature sources relative to ocean level rise: Sea level rise from January 1870 was 1 to 2 mm per year; Sea level rise from 1900 to 2000 was 1.7 mm plus or minus .3 mm per year; Sea level rise since 1993 was 3.1 plus or minus .7 mm per year.
2. New water is also chemically produced during the burning of fossil fuels using the hydrogen from the fuel and oxygen from the combustion air. My calculated estimate is that this new water is about 7% of the volume of the produced groundwater and would also result in sea level rise. Thus, the two basic ways of introducing new water into the atmosphere would bring the anthropogenic caused total sea level rise to 1.07 x 2.47 mm per year = 2.6 mm per year. This amount agrees completely with the observed ocean level rise.
————-
I think produced groundwater from slow-to-recharge aquifers is a very important point in the global warming discussions. The input data can be verified and the calculation is straightforward. I drew several other definitive conclusions for EPA and will try to add them On Topic as the opportunities arise.
JFD

Joel Shore
July 21, 2009 1:31 pm

M. Simon says:

CO2 cause the atmosphere to get warmer – this causes a higher concentration of water vapor in the atmosphere – which causes the atmosphere to get warmer – which causes more evolution of CO2 from the oceans – this causes a higher concentration of water vapor in the atmosphere – …
Since WV is more effective than CO2 it looks like the atmosphere will saturate with WV from all that positive feedback.

What you are missing is the simple fact that an infinite series can converge to a finite value. Let’s take a very specific example: Let’s suppose that each degree of warming due to whatever cause has the first-order effect of causing an increase in water vapor that then produces another 1/2 degree of warming. So, that means that if you put enough CO2 into the atmosphere to cause 1 C of warming by itself, the additional water vapor then produces 1/2 C of warming, then the additional water vapor that this 1/2 C of warming produces causes an additional 1/4 C of warming and so forth. What you have is the infinite geometric series 1 + 1/2 + 1/4 + 1/8 + 1/16 + …, which converges to 2, meaning that the positive feedback amplifies the warming by a factor of 2.

Let me put it to you direct. Explain the difference between positive and negative feedback in climate science and electronic design.

To be honest, I am not familiar enough with the electronic design field to know exactly how they use the term. I just know that a few engineers here have stated that it is used differently and that “positive feedback” in their lingo refers to an unstable system.

I’m told models use 15 minute time chunks. Fine. The temp records only give you min-max without an actual time when those temps were recorded closer than 24 hours. Now if you are going to initialize your model properly with the temp record you need simultaneous records (i.e. all the starting temps should be at say 0000 GMT)
Otherwise Lorenz chaos diverges your model from reality.

It is well-understood in climate science that the issue of sensitivity to initial conditions prevents you from predicting the DETAILS of the climate over the long term. In fact, the models are often run in ensembles with slightly perturbed initial conditions. However, even when the models produce different DETAILS (e.g., when a particular El Nino or La Nina event occurs), they produce the same general result in response to a forcing over a long enough period of time (where a long enough period of time essentially means that the temperature change due to the forcing dominates over the temperature fluctuations due to natural variability such as ENSO).

Are the parameters correct to sufficient accuracy? We do know that the models did not predict the cooling from 2000 until 2020. Why 2020? That is the latest IPCC estimate. So some parameter in the models is wrong. Maybe many.

First of all, the prediction to 2020 is a prediction of one recent paper (which really predicts a pause in the warming, not a cooling). It is not “the latest IPCC estimate”.
Second of all, it is indeed true that the generally-accepted way of running the models does not allow the prediction of short term trends. One can compare the statistics of the models with the statistics of the actual climate to see how well the models capture natural variability (and, I believe the answer is pretty well, although not perfectly). And, for example, one can see how common a decadal or so period with a negative temperature trend is in models run with steadily-increasing GHG forcing (the answer being that it is fairly common, see http://www.cdc.noaa.gov/csi/images/GRL2009_ClimateWarming.pdf ). However, which actual specific realization the climate follows is very sensitive to the initial conditions.
There is recent work, such as a paper from the Hadley group and another by Keenlyside et al, that attempt to make actual short-term climate predictions (i.e., predictions on the scale of several years to a decade or so) but these are still pretty experimental and there is considerable debate about whether they have been done correctly. These attempts basically work on the idea that, while there is extreme sensitivity to initial conditions, the timescales for the oceans are slow enough that it may still be possible to get a reasonable prediction for the oceans over these decadal timescales if one starts with a good initialization of the ocean’s current state and that the ocean will then tend to strongly influence the general characteristics of the climate. However, this is quite tricky to do in practice.

Joel Shore
July 21, 2009 1:36 pm

By the way, I should just add that I don’t really understand what your excursion into Lorenz chaos had to do with my original point to Steve Keohane that his estimate of a trend was way off because of the various ways in which he had cherrypicked the data. Even the UAH folks themselves report a trend over the entire satellite record since 1979 of ~0.13 C per decade. I think the RSS folks and the HadCRUT and NASA GISS surface data support a somewhat larger trend of something like 0.17 C per decade over that time period.

Tom in thunderstorm cooled Florida
July 21, 2009 2:53 pm

Joel Shore (13:31:15) :
“What you are missing is the simple fact that an infinite series can converge to a finite value. Let’s take a very specific example: Let’s suppose that each degree of warming due to whatever cause has the first-order effect of causing an increase in water vapor that then produces another 1/2 degree of warming. So, that means that if you put enough CO2 into the atmosphere to cause 1 C of warming by itself, the additional water vapor then produces 1/2 C of warming, then the additional water vapor that this 1/2 C of warming produces causes an additional 1/4 C of warming and so forth. What you have is the infinite geometric series 1 + 1/2 + 1/4 + 1/8 + 1/16 + …, which converges to 2, meaning that the positive feedback amplifies the warming by a factor of 2.”
Unless, of course, it rains.

David
July 21, 2009 4:49 pm

M. Simon (11:02:12) : “Some other forcing (internal variability?) has been aliased for CO2 forcing.”
Hydrological cycle?
Joel Shore (13:31:15) :
Rising sea levels would mean more ocean to influence, and slow the process, no?
Tom in thunderstorm cooled Florida (14:53:35) :
An inconvenient truth.

Bill Illis
July 21, 2009 6:10 pm

Let’s have a look at some of the numbers that are used for the TempC response per watt/m^2.
First, a simplified version of Trenberth’s and the IPCC’s Earth Radiation budget.
http://en.wikipedia.org/wiki/File:Greenhouse_Effect.svg
Incoming solar radiation – 235 watt/m^2 – responsible for raising the Earth’s temperature from 4 kelvin to 255 kelvin or 1.07C /watt/m^2.
Greenhouse effect – 324 watt/m^2 – responsible for the 33C greenhouse effect or just 0.1C /watt/m^2.
How about doubled CO2/GHGs according to the IPCC – 4 watt/m^2 – which will translate into a 3.0C rise in temperatures or 0.75C /watt/m^2
In Gavin’s solar paper above – we have the estimated numbers for the solar cycle influence ranging from 0.05C /watt/m^2 to 0.40C /watt/m^2 – but greenhouse gases give numbers anywhere from 0.31C /watt/m^2 to 0.86C /watt/m^2.
So, we have “climate science” math that uses any figure from 0.05C /watt/m^2 to 1.07C /watt/m^2.
Seems a little unusual to me (unless you already know the number you want to get, and the intermediate steps can be camouflaged in weird terms like “the Forcing is estimated to be X watts per metre squared and the Z value is 0.xxK/[watt M-2]”).

bill
July 21, 2009 7:47 pm

Bill Illis (17:30:19) :
Those numbers indicate that CO2 was responsible for 1.9C of the 5.0C change in temperatures during the ice ages.
So the majority of the temperature change, 3.1C, is due to other factors.
How do we know that the other factors are not in fact responsible for 4.0C and CO2 was only responsible for 1.0C (or the same 1.5C per doubling).

I understood that the log temp vs. CO2 relation was due to the CO2 absorption bands being nearly saturated.
If this is the case, then an ice age with very low levels of water vapour in the air would desaturate some of these absorption bands, making the sensitivity to CO2 non-logarithmic.
Looking at the CO2 CH4 temp from various ice core data seems to show that CO2 rises from 200ppm simultaneously with temperature at the end of the ice age:
http://img11.imageshack.us/img11/6826/iceage040kkq1.jpg
Entry to the ice age seems to be co-incident with CH4 reduction.
So during the ice age, CO2 levels are low, H2O levels are low, and CH4 levels are low, which gives a much less saturated IR window. Which gives greater sensitivity to GHGs until levels equivalent to today’s are attained.
As I and many others have said, going back more than 9M years and talking of CO2 levels and temperature is irrelevant. The world was too different.
Further, the levels of CO2 are obtained from a computer model (GEOCARB III, I believe). Why should this be more accurate than current climate models?
check out this site for continental movements:
http://www.scotese.com/newpage13.htm

p.g.sharrow "PG"
July 21, 2009 11:50 pm

Leif Svalgaard (03:45:18) :
tallbloke (23:34:55) :
Does Leif Svalgaard agree with the characterisation of solar forcing displayed in the graph shown, and please would he explain what the red solar curve is representing in case I misunderstand it.
The red curve shows some representation of the solar cycle and does not seem too much out of whack. It has the smallest amplitudes of all the forcings, so is in line with what I would expect.
Had to add my 2 bits worth ;
the graphs seem to show percentage change as if amounts are equivalent.
As any engineer in refrigeration can tell you, a small shortage or overage of energy input can make a large change in temperature over time in a balanced, constantly operated system. That is why a system is overbuilt to allow for forced control.
Solar input is by orders of magnitude the most important factor in the Earth’s climate, and the hydrosphere is the second.
I’m not sure why volcanism only shows negative periods and aerosols show a marked decrease over the graph’s time span.

July 22, 2009 12:13 am

Joel,
In a chaotic system if you can’t get the details right the farther you go into the future the more the divergence from reality. Because every prediction (time slice) is the initial condition for the next time slice and the system is sensitive to initial conditions.
Now if the system was a well damped amplifier with negative feedback (in the engineering sense) you would get a reversion to the mean. In a chaotic system that reversion to the mean is not assured. Which is why the terms “strange attractor” and “period doubling” are used for those kinds of system rather than amplification factor.
Now let us look at a system with positive feedback (in the electrical sense). As long as the feedback factor is less than 1 the system will be somewhat stable to component changes. A feedback factor of .1 in such a system is rather stable, although gain changes and component changes will be multiplied rather than divided. As the feedback factor gets higher the gain of course increases exponentially. If you go to a feedback of .99 the gain is multiplied by 100 (roughly). At .999 (a small change) the gain is multiplied by 1,000. Things are getting twitchy. At .9999 – 10,000. Really twitchy. At a feedback of 1.000 or above you get either a Schmitt trigger or an oscillator, depending on lags. The Schmitt trigger is the alarmists’ tripping point. However, with lags in the system the more likely result is oscillation.
BTW in electronics the positive feedback amplifier was invented by Armstrong and the common name for it (if you want to use Google) is “regenerative receiver” or “Q multiplier”.
If you look at the temp records for the last million years you do see tripping points. Those represent the transition from glaciation to inter-glacial periods. However, in the periods themselves negative feedback seems to dominate, because the temps in the periods are relatively stable. So glaciation and inter-glacials seem to be the strange attractors.
Which says that excess warming is not our problem. Tripping into a glacial period is our problem. If CO2 is as effective as you assume (I doubt it) we should be pumping it out as fast as we can because given the historical record we are due (overdue?) for a glacial period.
What should we look for re: the tripping point? Increasingly chaotic weather (period doubling). Are we seeing that? Hard to say. The data record is not long enough.

July 22, 2009 12:33 am

Let me also note that lagged negative feedback (phase delayed) can also lead to oscillation if the phase is delayed by 180 deg. Below 180 deg the system gets twitchier the closer you get to 180 deg. It also becomes more sensitive to certain frequencies of input.
So positive feedback is not the only way to get a twitchy system.
But getting in deeper – the phase delay looks like a positive feedback.
Really. It is rather sad that climate “scientists” are so ignorant of electrical analogs.
So what do we have? A system being banged with 24 hour cycles, yearly cycles, solar cycles, PDO cycles, etc. If the phase delay for feedback matches any of those cycles, oscillation will result. And guess what: we have ENSO, which is rather chaotic, and the PDO, which is more stable. Which says that the PDO is likely caused by some in-phase lag, whereas ENSO is probably due to some not-quite-in-phase lag.

July 22, 2009 12:39 am

So the deal is: positive feedback or negative feedback – lags matter.

July 22, 2009 12:48 am

Joel,
Re: feedback – the climate models do not match the data:
http://wattsupwiththat.com/2009/06/02/lindzens-climate-sensitivty-talk-iccc-june-2-2009/
My take:
http://powerandcontrol.blogspot.com/2009/06/who-ya-gonna-believe.html
No resort to positive feedback is required (besides, that is too simplistic anyway); it could all be explained by a difference in lags (so-called heat in the pipeline) between reality and the models.
It would be really great if climate “scientists” were smart enough and/or educated enough to produce an electrical analog of the climate system.

Allan M
July 22, 2009 2:34 am

DaveE (14:45:29) :
“Allan M (14:09:24) :
You’re wasting your time with that one. Warmists don’t appreciate that positive feedback also acts on the signal fed back, I’ve had the argument too many times now.
DaveE.”
I have to admit you are right. They just evade the issue. But then it is all words (and numbers) to them. If they don’t understand it themselves, then they can’t explain it.

Joel Shore
July 22, 2009 7:43 am

Bill Illis says:

Let’s have a look at some of the numbers that are used for the TempC response per watt/m^2.

While the sensitivity is expected to be fairly constant over some range, it is not expected to be constant over a range defined by taking the sun all the way from zero strength to full strength or the greenhouse effect all the way from zero strength to full strength. Hence, the numbers that you calculate are not estimates of the climate sensitivity for the current climate.
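A worked example of that point, using nothing more than the Stefan–Boltzmann law (a feedback-free blackbody sketch supplied here for illustration): for a body radiating F = σT⁴, the local slope is dT/dF = T/(4F), so the number you get by dividing the full temperature by the full flux is exactly four times the slope at the current state.

```python
# Hedged illustration (feedback-free blackbody, not a climate model):
# even here, "degrees per W/m^2" depends on the baseline state.
SIGMA = 5.67e-8                    # Stefan-Boltzmann constant, W/m^2/K^4

def temperature(F):
    """Emission temperature for outgoing flux F = SIGMA * T**4."""
    return (F / SIGMA) ** 0.25

def local_sensitivity(F):
    """dT/dF = T/(4F): the slope at baseline flux F."""
    return temperature(F) / (4 * F)

F_NOW = 240.0                      # rough global-mean absorbed solar flux
print(local_sensitivity(F_NOW))    # ~0.27 K per W/m^2 at the current state
print(temperature(F_NOW) / F_NOW)  # ~1.06 K per W/m^2 averaged from zero sun
```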
M. Simon says:

In a chaotic system, if you can’t get the details right, the farther you go into the future the more the divergence from reality, because every prediction (time slice) is the initial condition for the next time slice and the system is sensitive to initial conditions.

I know how chaotic systems work. However, extreme sensitivity to initial conditions does not mean that nothing can be predicted. For example, while we may not be able to predict the weather on July 4th in January, we can still predict that the climate here in Rochester will be much warmer in July than in January. Likewise, the models robustly predict the warming in response to increasing greenhouse gases when the initial conditions are perturbed, even though the year-to-year wiggles are sensitive to these initial conditions.
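The weather-versus-climate distinction can be made concrete with a one-line chaotic system (an illustration using the logistic map again, not a climate model): trajectories from nearly identical starting points diverge completely, yet their long-run statistics barely move.

```python
# Chaotic logistic map at r = 3.9 as a stand-in for "weather".
def trajectory(x0, n, r=3.9):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.400000, 5000)
b = trajectory(0.400001, 5000)            # perturbed in the 6th decimal

print(abs(a[50] - b[50]))                 # large: the "weather" diverges fast
print(sum(a) / len(a), sum(b) / len(b))   # long-run means agree closely:
                                          # the "climate" remains predictable
```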
As for tipping points, they represent an additional factor of concern. Your hypothesis that we are unlikely to encounter one when we are applying a forcing in the warming direction is based on nothing more than hope.
The electrical analog may be useful to a certain extent and I am sure that there are plenty of climate scientists who understand the electrical analog, but analogies can only get you so far.

Tom in thunderstorm cooled Florida
July 22, 2009 1:59 pm

Joel Shore (07:43:37) : “However, extreme sensitivity to initial conditions does not mean that nothing can be predicted. For example, while we may not be able to predict the weather on July 4th in January, we can still predict that the climate here in Rochester will be much warmer in July than in January.”
One reason you can predict this is that the temperature range/variation is in the tens of degrees, probably in the neighborhood of 80 degrees F. AGWers are talking about long-term predictions of a couple of degrees.
Another reason you can predict this is that you have available a history of hard data evidence to support your prediction, something sorely lacking in climate models.

H.R.
July 22, 2009 6:05 pm

Tom in thunderstorm cooled Florida (13:59:44) :
You accidentally left out the minor detail that seasonal variation is cyclical and not chaotic. Joel seemed to be playing a little fast and loose with that analogy to climate models.

Tom in high and dry Florida
July 22, 2009 7:45 pm

H.R.
I also left out the minor detail of Earth’s obliquity, which is the well-known and documented cause of Joel’s “successful” prediction about seasonal temperature differences.

Joel Shore
July 23, 2009 1:18 pm

H.R. says:

You accidentally left out the minor detail that seasonal variation is cyclical and not chaotic. Joel seemed to be playing a little fast and loose with that analogy to climate models.

And, that is relevant how? The rise in forcing due to the rise in greenhouse gas levels is steady and not chaotic, too. Chaos, i.e. extreme sensitivity to initial conditions, is a property of the SYSTEM… If the weather/climate system exhibits chaos (which it does), it will exhibit this chaos whether it is with a cyclical forcing like the seasonal variation or a steadily increasing forcing like that due to greenhouse gases. However, in both cases, there are still predictions that we can make about how the climate will respond even if the chaos prevents us from predicting some of the details.
Tom in high and dry Florida says:

I also left out the minor detail of Earth’s obliquity which is the well known and documented cause of Joel’s “successful” prediction about seasonal temperature differences.

Yes…And, increasing radiative forcings due to greenhouse gases are the cause of the climate changes under increasing greenhouse gas concentrations. Your point is…?

Tom in Florida
July 23, 2009 3:28 pm

Joel Shore (13:18:13) : “Yes…And, increasing radiative forcings due to greenhouse gases are the cause of the climate changes under increasing greenhouse gas concentrations. Your point is…?”
You said previously: “For example, while we may not be able to predict the weather on July 4th in January, we can still predict that the climate here in Rochester will be much warmer in July than in January.”
My point is this: you were trying to equate being able to predict a well-known seasonal temperature change at a location, caused by the tilt of Earth’s axis, to climate models predicting tiny temperature changes many years from now due to the unproven hypothesis of man-made CO2.
May I add that you knew exactly what my point was.

Joel Shore
July 23, 2009 4:23 pm

Tom,
Actually, I didn’t know what your point was. I would agree that there are quantitatively greater uncertainties associated with predicting future climate change from the buildup of CO2 than there are in predicting the seasons (although the effect of CO2 is far from “an unproven hypothesis”). However, my point is that in both cases, these sorts of predictions can be made despite the fact that the system is chaotic. Chaos constrains our ability to make certain kinds of predictions but it does not eliminate our ability to make any predictions.

H.R.
July 23, 2009 6:33 pm

Hi, Joel.
I can’t add much to Tom’s latest reply to you because I was seeing things pretty much the way he was calling them. Your analogy of climate models predicting a few degrees a hundred years out to predicting the seasons just doesn’t seem apt.
Lucia has some interesting comments on models in this post.
http://rankexploits.com/musings/2008/gavin-schmidt-corrects-for-enso-ipcc-projections-still-falsify/
I read discussions such as the one I just linked to at The Blackboard and I see no particular reason to abandon my skepticism of the skill of climate models. As for seasons, I’ll grudgingly go with the consensus that predicts fall following summer, which follows spring, which follows winter. I’ve got a bit of the herd mentality about seasons.