
From the University of Michigan, something I think Dr. Roy Spencer will be interested in, as it is yet another case where models and satellite observations differ significantly. See figure S1 at the end of this article. – Anthony
Aerosols affect climate more than satellite estimates predict
ANN ARBOR, Mich.—Aerosol particles, including soot and sulfur dioxide from burning fossil fuels, essentially mask the effects of greenhouse gases and are at the heart of the biggest uncertainty in climate change prediction. New research from the University of Michigan shows that satellite-based projections of aerosols’ effect on Earth’s climate significantly underestimate their impacts.
The findings will be published online the week of Aug. 1 in the early edition of the Proceedings of the National Academy of Sciences.
Aerosols are at the core of “cloud drops”—water particles suspended in air that coalesce to form precipitation. Increasing the number of aerosol particles causes an increase in the number of cloud drops, which results in brighter clouds that reflect more light and have a greater cooling effect on the planet.
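A quick quantitative gloss on that mechanism, not taken from the paper: the classic textbook form of this first indirect effect is Twomey’s relation, under which, at fixed liquid water, cloud albedo rises by roughly A(1−A)/3 per e-folding of droplet number. A minimal Python sketch under that assumption:

    # Minimal sketch of the Twomey (first indirect) effect, using the classic
    # susceptibility dA/d(ln N) ~ A(1 - A)/3 at fixed liquid water path.
    # Illustrative only; the paper's actual calculations are far more complete.
    import math

    def albedo_change(albedo, n_ratio):
        """Approximate change in cloud albedo when droplet number is
        multiplied by n_ratio, holding liquid water path fixed."""
        return albedo * (1.0 - albedo) / 3.0 * math.log(n_ratio)

    # Doubling droplet number in a cloud of albedo 0.5 brightens it by ~0.06,
    # i.e. the cloud reflects about 6% more of the sunlight hitting it:
    print(albedo_change(0.5, 2.0))   # ~0.058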
As to the extent of their cooling effect, scientists offer different scenarios that would raise the global average surface temperature during the next century by anywhere from under 2 to over 3 degrees Celsius. That may not sound like a broad range, but it straddles the 2-degree tipping point beyond which scientists say the planet can expect more catastrophic climate change effects.
The satellite data that these findings poke holes in have been used to argue that all of these models overestimate how hot the planet will get.
“The satellite estimates are way too small,” said Joyce Penner, the Ralph J. Cicerone Distinguished University Professor of Atmospheric Science. “There are things about the global model that should fit the satellite data but don’t, so I won’t argue that the models necessarily are correct. But we’ve explained why satellite estimates and the models are so different.”
Penner and her colleagues found faults in the techniques that satellite estimates use to find the difference between cloud drop concentrations today and before the Industrial Revolution.
“We found that using satellite data to try to infer how much radiation is reflected today compared to the amount reflected in the pollution-free pre-industrial atmosphere is very inaccurate,” Penner said. “If one uses the relationship between aerosol optical depth—essentially a measure of the thickness of the aerosols—and droplet number from satellites, then one can get the wrong answer by a factor of three to six.”
These findings are a step toward generating better models, and Penner said that will be the next phase of this research.
“If the large uncertainty in this forcing remains, then we will never reduce the range of projected changes in climate below the current range,” she said. “Our findings have shown that we need to be smarter. We simply cannot rely on data from satellites to tell us the effects of aerosols. I think we need to devise a strategy to use the models in conjunction with the satellite data to get the best answers.”
The paper is called “Satellite-methods underestimate indirect climate forcing by aerosols.” The research is funded by NASA.
PNAS Early Edition: http://www.pnas.org/content/early/recent
Joyce Penner: http://aoss.engin.umich.edu/people/penner
The University of Michigan College of Engineering is ranked among the top engineering schools in the country. At $180 million annually, its engineering research budget is one of the largest of any public university. Michigan Engineering is home to 11 academic departments, numerous research centers and expansive entrepreneurial programs. The College plays a leading role in the Michigan Memorial Phoenix Energy Institute and hosts the world-class Lurie Nanofabrication Facility. Michigan Engineering’s premier scholarship, international scale and multidisciplinary scope combine to create The Michigan Difference. Find out more at http://www.engin.umich.edu/.
===========================================================
You can read the full text of the paper here including the SI: http://www.pnas.org/content/early/2011/07/25/1018526108.full.pdf?with-ds=yes
This figure from the SI is quite interesting:

Friends:
The article quotes Penner saying:
“The satellite estimates are way too small,” said Joyce Penner, the Ralph J. Cicerone Distinguished University Professor of Atmospheric Science. “There are things about the global model that should fit the satellite data but don’t, so I won’t argue that the models necessarily are correct. But we’ve explained why satellite estimates and the models are so different.”
Hmmm. Let us consider what we know about how the models incorporate climate sensitivity and aerosol effects.
None of the models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al.) would make every climate model produce a mismatch between the global warming it hindcasts and the observed global warming for the twentieth century.
This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on
1. the assumed degree of forcings resulting from human activity that produce warming, and
2. the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.
More than a decade ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.
And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
(ref. Courtney RS, ‘An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre’, Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).
More recently, in 2007, Kiehl published a paper that assessed nine GCMs and two energy balance models.
(ref. Kiehl JT, ‘Twentieth century climate model response and climate sensitivity’, GRL, vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model.
He says in his paper:
“One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 deg C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.
The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy? Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change – too rosy a picture?, available at http://www.nature.com/reports/climatechange, 2007) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the “widely circulated analysis” referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.”
And, importantly, Kiehl’s paper says:
“These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.”
And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.
Thanks to Bill Illis, Kiehl’s Figure 2 can be seen at
http://img36.imageshack.us/img36/8167/kiehl2007figure2.png
Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:
“Figure 2. Total anthropogenic forcing (W/m²) versus aerosol forcing (W/m²) from nine fully coupled climate models and two energy balance models used to simulate the 20th century.”
It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m² to 2.02 W/m²
but
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range −1.42 W/m² to −0.60 W/m².
In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.
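A minimal numerical sketch of that compensation (the sensitivities and the ~0.7 K of twentieth-century warming below are assumed round numbers for illustration; only the forcing ranges come from Kiehl’s Figure 2):

    # Sketch of Kiehl's compensation: dT ~ lambda * (F_ghg + F_aerosol).
    # The two sensitivities and the 0.7 K warming are assumed round numbers;
    # only the forcing ranges quoted above come from Kiehl's Figure 2.
    observed_dT = 0.7  # K, rough 20th-century warming

    for lam in (0.4, 0.8):  # K per (W/m2); two models differing by a factor of 2
        needed = observed_dT / lam
        print(f"sensitivity {lam} K/(W/m2) -> needs {needed:.2f} W/m2 "
              f"of net forcing to hindcast {observed_dT} K")

    # Prints 1.75 and 0.88 W/m2, both inside the 0.80 to 2.02 W/m2 spread
    # above. With similar greenhouse forcing in each model, the difference is
    # absorbed by the assumed aerosol cooling (-1.42 to -0.60 W/m2).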
In summation, all the model projections of future climate change are blown out of the water by the findings of Penner et al.
Richard
Bizarre – the University of Michigan is receiving research grants for that? The result is poor – the conclusions are funny at best. If reality turns out to be different from what was thought before and predicted in “the models” – sad for reality, because the models are definitively right – certainly with a high, high “confidence level”.
Nice phrase:
Steve Keohane says:
August 2, 2011 at 3:49 am
Apparently, overestimating the effects of CO2 leads to overestimating the amount of aerosols to counteract the first fantasy.
Best
Matt from Chile
This paper is not much different from most climate science papers. It is hard to determine what they actually did or where the data came from, but it concludes that 3.0°C per doubling is probably correct.
Having said that, the results are not much different from what the IPCC and the climate models are using.
Note this paper is about the first Aerosol-Indirect-Effect, or the aerosols’ impact on the reflectance of sunlight by clouds. There is also a second effect in regards to the lifetime of clouds, but this is usually treated as an add-on to the first effect. These impacts come from both sulfate aerosols and soot aerosols (although soot can have a positive, warming impact as well).
The first aerosol indirect effect is typically about 40% of the total aerosol impact.
The average IPCC 1st Aerosol-Indirect-Effect is -0.5 W/m² (GISS uses -0.7 W/m²), and this paper used six models/methods to come up with a range of -0.27 W/m² to -1.69 W/m². Lots of other papers have come up with a range like this, so there is nothing ground-breaking here.
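Spelling out the arithmetic in that 40% share (all inputs are the figures quoted above; this just does the division):

    # If the 1st indirect effect is ~40% of the total aerosol impact, the
    # implied totals for the numbers quoted above are:
    share = 0.40
    for first_effect in (-0.5, -0.27, -1.69):  # W/m2
        print(f"1st indirect {first_effect} W/m2 -> implied total "
              f"{first_effect / share:.2f} W/m2")
    # -0.5 implies about -1.25 W/m2 total; the paper's range implies roughly
    # -0.7 to -4.2 W/m2 total.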
Since there is so much uncertainty and so many direct and indirect impacts, and ones that are probably not additive, I’m assuming this paper will have no impact, direct or indirect.
First we were reaching the CO2 tipping point. Now we are reaching the aerosol tipping point. What happens when all these tipping points collide?
To Richard Courtney (6:46 AM):
Bravo! A must read for everyone on this thread.
Sounds to me like “blah blah blah we don’t know” again when it comes to aerosols. And yet this largely unknown factor is crucial to estimating the “forcing” determining recent climate. Without a decent estimate, one can claim any sensitivity is consistent with the data. Clearly we know less than we need to about aerosols, and this paper is trying to say something about this factor, but it is a little muddled, since it flip-flops, I get the impression, between whether to believe observation-based estimates or model estimates. It’s not clear why one would prefer modelling estimates to observational ones, unless your observations are really bad and your models really good. But hey, this is climate science, so par for the course.
After Climategate, after the de-pantsing of the IPCC, and after the “consensus” went up in flames, how do all of these AGW jokesters and hucksters keep getting research funds?
“We simply cannot rely on data from satellites to tell us the effects of aerosols. ”
That really says it all. Climate Science is turning the scientific method on its head. Since the models and satellites don’t agree, the models (theory) are right and the satellites (observations) are wrong.
Scientific method – when theory doesn’t match observation, the theory is wrong.
Climate science – when theory doesn’t match observation, the observation is wrong.
Richard S Courtney says:
August 2, 2011 at 6:46 am
“One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. … The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy? Kerr [2007] and S. E. Schwartz et al.”
An excellent point. How, if the models have different estimates of climate sensitivity, can the models all simulate past global temperatures?
The answer is simple. The models are using “parametrized curve fitting” to fit the historical data. What do we know about parametrized curve fitting? That it has no predictive value beyond chance. None whatsoever. One of the parameter sets may in fact be correct, and this will have predictive value, however there is no way to know in advance which set of parameters is correct.
Also, it is possible that none of the parameter sets is correct, and none of the models will have predictive value. A parameter set may appear to track for a short period of time, then go wildly off course. There is no way to know in advance.
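A toy demonstration of that point, using synthetic data that has nothing to do with any actual GCM: several parameter sets can fit the same “history” about equally well and still diverge once you step outside it.

    # Toy example: competing parameter sets (polynomial fits) that all match
    # a synthetic "historical" record, then diverge in the "forecast".
    # Purely illustrative -- the data is random noise plus a weak trend.
    import numpy as np

    rng = np.random.default_rng(0)
    t_hist = np.linspace(0, 10, 30)                   # "historical" period
    y_hist = 0.05 * t_hist + rng.normal(0, 0.1, 30)   # weak trend + noise

    for degree in (1, 4, 8):                          # three parameter sets
        coeffs = np.polyfit(t_hist, y_hist, degree)
        mse = np.mean((np.polyval(coeffs, t_hist) - y_hist) ** 2)
        forecast = np.polyval(coeffs, 15.0)           # extrapolate to t = 15
        print(f"degree {degree}: hindcast MSE {mse:.4f}, "
              f"forecast at t=15 -> {forecast:+.2f}")

    # All three hindcast the history comparably well; the forecasts diverge,
    # and nothing in the fit itself says which parameter set (if any) is right.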
Aren’t computer programmers wonderful……..
…and they still can’t model tomorrow’s forecast
Sahara Sand
Mac says (2:34 am): “‘Observations don’t match projections’ has become a much repeated phrase of late. The models don’t work.”
Having lived through a couple of product recalls – class two medical devices – where the MODELS said the response (accuracy) of a device in the field (attribute observations) would be within the prescribed bias (errors), only to have the number/frequency of observations be 3 to 10 times the expected quantity with poor accuracy, has led me (and the FDA) to the conclusion that models are ONLY an attempt to estimate causal relationships, and that they are useful only in their ability to match real world results. If they don’t match the real world observations, the question becomes: are they good enough for what you are trying to explain/sell? There are corrective feedback mechanisms in the real world when the models do not explain or predict observations: product recalls, big fines from the FDA, no one buying your models of the weather patterns for next year, etc. That corrective feedback loop seems to be missing in a lot of the climate science models.
In the real world the gold standard is ALWAYS the observations. If they fall within the predicted output from a model, great; if not, the regulating bodies in the pharmaceutical, medical device, automotive and similar worlds let you know that you have botched the primacy of real world results versus a theoretical model’s estimate.
ferd berple:
At August 2, 2011 at 7:54 am you comment on my above post (at August 2, 2011 at 6:46 am) by saying:
“The models are using “parametrized curve fitting” to fit the historical data. What do we know about parametrized curve fitting? That it has no predictive value beyond chance. None whatsoever. One of the parameter sets may in fact be correct, and this will have predictive value, however there is no way to know in advance which set of parameters is correct.”
But it is much, much worse than you say.
Firstly, we know for a certain fact that all except at most one of the models are wrong, because each model uses a different set of parametrisations. And if one of them were right, then we have no way of knowing which one it is.
Secondly, as my above post tried to explain, the paper by Penner et al. proves that every model is using a wrong set of parametrisations, so the paper provides direct empirical evidence that all the models are wrong.
Importantly, as I tried to explain, if you input the empirical data on aerosols obtained by Penner et al. then none of the models – not one of them – could emulate global temperature change over the past century.
A model that cannot hindcast the past cannot forecast the future.
Richard
Is SO2 the new pixie dust? Sprinkle enough on the models and they can fly. Just have Good (non-denier) thoughts! Come on everyone, do you believe?
Aerosols…
The latest fad in global warming ‘science’!
Richard SC;
Yes, a dog’s breakfast of offsetting fudges. You pays your money and you makes your choice — but with the caveat that NONE of them can possibly be correct, given the wild divergences each experiences from its own trend lines and observations.
We fools and our money were soon parted by these charlatans.
TomRude:
Especially in the light of your name, your post at August 2, 2011 at 9:27 am reminded me of the old joke; i.e.
A man walked into a pharmacist’s shop and asked for some deodorant.
The assistant asked, “Aerosol or ball?”
And the man replied, “No, it’s for under my armpits”.
So, are lamestream climatologists aerosols?
Richard
Peter Plail says:
August 2, 2011 at 5:18 am
“My understanding is that models are attempts to simulate real world processes so that, eventually, accurate predictions may be made. The process of developing complex models I would expect to be a long, tedious process, with continual refinements based on observations of real world situations followed by testing of each refinement. Only after showing that a refinement actually improves the model would it be “hard wired” into the model, otherwise it should be junked.”
You are being far too generous. What you describe is the use of hypotheses in physical science to build ever more complete and better confirmed theories. The theories are “about” the physical world. Models are not “about” anything; that is, they have no cognitive content whatsoever. Each model run produces a longish string of numbers that is interpreted as representing a “state of the planet” or some such thing. If the numbers disagree with reality then the modelers tweak the entire model as they are unable to identify particular numbers with particular pieces of the computer code.
As the final absurdity, Gaia Modelers model only heat exchanges caused by radiation. They do not model other natural processes such as La Niña but treat them as emergent properties.
Rob Potter says:
August 2, 2011 at 6:03 am
Yep, that is the method. And it is so juvenile! How can they bring themselves to write this stuff? How can the funding agencies keep dishing out for this stuff?
Richard S Courtney says:
August 2, 2011 at 6:46 am
Many thanks to you. Your post is a must read.
Just as I suspected all along; burning fossil fuel causes global cooling after it causes global warming.
Richard S Courtney says (August 2, 2011 at 8:52 am): “Importantly, as I tried to explain, if you input the empirical data on aerosols obtained by Penner et al. then none of the models – not one of them – could emulate global temperature change over the past century.”
Or, to paraphrase Samuel Johnson, “‘Aerosolism’ is the last refuge of a (climate) scoundrel!” 🙂
Why am I not surprised by this? Real world data conflicts with models, so it must be the data that is wrong.
If satellites don’t see this supposed 3 to 6 times greater effect from aerosols, then why don’t the surface-based or ocean-based data sets see it either? Maybe because none of the observed data shows this, because it really doesn’t exist? If it is 3 to 6 times greater, then why have volcanoes not had a bigger effect on any ocean, surface or satellite data set? Why did aerosols do so little to prevent warming of the planet during the 1980s and 1990s?
How did this paper even get published when nothing can be shown to back it up as evidence, while a recent publication using satellite data – one actually supported with observed evidence – got so much stick with little or no actual scientific reasoning and plain pettiness? Had this paper been given that much higher standard of reviewing, it would never have been published. The standard of science and the bias in climate work are poor, and no doubt this is done on purpose with the intent of stagnation.
Obviously the missing heat must be in the ocean too, combined with that from CO2. (sarc/off) The correct conclusion from this paper is that the model findings are incorrect by 3 to 6 times what is observed from satellite (with ocean and surface-based data sets not too different). If these were real scientists, they would be looking into why this is the case, not concluding that the observed data is that wrong.
Richard S Courtney says:
August 2, 2011 at 6:46 am
Exactly – the weighting for aerosol cooling during the 20th century was way too high, because they guessed this was the cause. Now that there is a longer, more reliable data set with more accurate satellite data, it confirms that the earlier levels were greatly overestimated: the same factor, carried through the rest of the period, simply doesn’t fit, by a very large error.