Claim: Scientists accurately model the action of aerosols on clouds

From the “models aren’t the same as reality unless you have a bigger computer” department.

This portrait of global aerosols was produced by a GEOS-5 simulation at a 10-kilometer resolution. Dust (red) is lifted from the surface, sea salt (blue) swirls inside cyclones, smoke (green) rises from fires, and sulfate particles (white) stream from volcanoes and fossil fuel emissions. Image credit: William Putman, NASA/Goddard – image for illustration only – not part of the press release below.

Global climate is a tremendously complex phenomenon, and researchers are making painstaking progress, year by year, toward ever more accurate models. Now, an international group including researchers from the Advanced Institute for Computational Science (AICS) in Japan, using the powerful K computer, has for the first time accurately calculated the effects of aerosols on clouds in a climate model.

Aerosols play a key role in cloud formation, as they provide the “seeds”–called cloud condensation nuclei–that allow clouds to form and affect their life cycle. The water in the air condenses onto the tiny particles, which gradually grow into droplets and finally into raindrops that precipitate. The action of aerosols is an important element of research on climate change, as they partially counteract the heating action of greenhouse gases.

It was previously believed that increasing aerosol density would always lead to more clouds, but recent satellite observations showed that this is not necessarily true. It is now understood that, due to temperature differences between the top and bottom layers of clouds, there is a delicate balance of evaporation and condensation, with aerosols in the lower parts of the clouds promoting cloud formation, but those in the upper parts allowing the water to evaporate.

Previously, climate models were unable to model the response of these micro-processes within the clouds to aerosol variation, but using the K computer, the RIKEN-led group combined a model that simulates the entire global weather over a year, at a horizontal resolution of just 14 kilometers, with a simulation of how the aerosols behave within clouds. Unlike conventional models, which show a uniform increase in clouds over the earth when there is an increase in aerosols, the high-resolution model, which takes into account the vertical processes inside clouds, accurately depicted how large areas experience a drop in cloud cover.

According to Yosuke Sato from the Computational Climate Science Research Team at RIKEN AICS and Nagoya University, “It was very gratifying to see that we could use a powerful supercomputer to accurately model the microphysics of clouds, giving a more accurate picture of how clouds and aerosol behave in the real world. In the future, we hope to use even more powerful computers to allow climate models to have more certainty in climate prediction.”


The paper:

March 8, 2018 11:04 am

I would replace the word “Accurately” with “Better.”

Reply to  tomwys1
March 8, 2018 2:48 pm

This sounds like a good first step on a path to one day having a useable model.

Walter Sobchak
Reply to  goldminor
March 8, 2018 9:17 pm

It is another step along the road to models that will continue to be plagued by chaos dynamics and poor-quality data over the earth’s surface. In other words, it’s still a waste of time and money.

Reply to  goldminor
March 11, 2018 10:11 am

Agree with Walter. In grad school numerical methods courses we were taught to use modeling of such systems only for interpolation, not extrapolation to 100 years from now. I would like to see whether the models stay accurate to within 0.1 C if one kept doubling the floating-point precision without limit, up to say 1024 bits; if not, I would say the models are defective.
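The precision experiment proposed here can be sketched on a toy chaotic system. The snippet below (a hypothetical illustration, not a climate model) iterates the logistic map at several decimal precisions and compares each orbit against a high-precision reference; the step at which accumulated rounding error reaches order 0.1 grows only roughly linearly with precision, which is the behaviour the comment is probing for.

```python
from decimal import Decimal, getcontext

def trajectory(prec, steps=500):
    """Logistic-map orbit x -> r*x*(1-x) computed with `prec` significant digits."""
    getcontext().prec = prec
    r, x = Decimal("3.9"), Decimal("0.5")   # r = 3.9 is in the chaotic regime
    out = []
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(float(x))
    return out

ref = trajectory(80)            # high-precision reference orbit
div_step = {}                   # first step where rounding error exceeds 0.1
for prec in (8, 16, 32):
    orbit = trajectory(prec)
    div_step[prec] = next(
        n for n, (a, b) in enumerate(zip(orbit, ref)) if abs(a - b) > 0.1
    )
    print(f"{prec} digits: parts company with the reference at step {div_step[prec]}")
```

Doubling the digits roughly doubles the number of trustworthy steps, so no finite precision rescues a long extrapolation of a chaotic system.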

March 8, 2018 11:04 am

This is an important paper. It shows that when a much higher-resolution global climate model, one able to resolve clouds including their depth, is used, the sign of the aerosol “cloud lifetime effect” radiative forcing is positive. By contrast, in all but a few models that forcing is significantly negative, and is one of the main reasons why current climate models match observed historical warming despite their generally high (transient) sensitivity.

Reply to  niclewis
March 8, 2018 11:33 am

Problem is, due to computational constraints the result cannot be directly applied to the upcoming CMIP6 generation of climate models. Perhaps it enables better parameterization by providing another constraint.

Reply to  niclewis
March 8, 2018 12:03 pm

Yes Nic, good point. How negative is this forcing in models? I seem to recall it is order 1/2 W/m^2.

Reply to  dpy6629
March 9, 2018 1:30 am

In CMIP5 models that include indirect aerosol forcing (the effect of aerosols on clouds; ACI), the average total aerosol forcing change over 1850-2000 is about -1 W/m^2, maybe a bit more. In some models it approaches -1.5 W/m^2. Direct aerosol forcing (the effect of aerosol-radiation interactions; ARI), which is all that a few models include, is typically -0.35 to -0.4 W/m^2.
So indirect aerosol forcing averages approaching -0.7 W/m^2. Part of that is the cloud albedo (1st indirect) or Twomey effect: aerosols making clouds brighter by seeding more but smaller cloud droplets. But the cloud lifetime (2nd indirect) or Albrecht effect is probably somewhat more important in models. The idea is that clouds with smaller droplets hold more water (a larger liquid water path) and last longer. Observations do not support this effect. This study corroborates, on a global scale, another recent study (Seifert et al. 2015, DOI: 10.1002/2015MS000489) that examined particular regions and found the cloud lifetime effect to produce a positive forcing.
The implication is that the 2nd indirect aerosol effect may well pretty much cancel out the 1st effect. That would mean most CMIP5 models have aerosol forcing that is 0.5 to 0.75 W/m^2 too negative. Removing that excess aerosol forcing will make the models’ historical simulations warm too fast over 1850-2005. As the new studies say, GCMs “tend to be optimised to adjust the magnitude of the aerosol indirect effect so that the models reproduce historical climate changes”.
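The decomposition behind these numbers can be made explicit. A minimal sketch, using only the forcing figures quoted in this comment (all in W/m^2, over 1850-2000):

```python
# Total aerosol forcing in CMIP5 models that include ACI, at each end of
# the quoted range, and the typical direct (aerosol-radiation) component.
total_range  = (-1.0, -1.5)
direct_range = (-0.35, -0.4)

# Implied indirect (aerosol-cloud) component: total minus direct.
indirect = tuple(t - d for t, d in zip(total_range, direct_range))
print(indirect)   # roughly -0.65 to -1.1: "approaching -0.7 W/m^2" or more
```

If the cloud-lifetime effect largely cancels the Twomey effect, most of that indirect term is excess negative forcing, which is where the 0.5 to 0.75 W/m^2 figure comes from.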

Dave Fair
Reply to  dpy6629
March 9, 2018 8:55 am

Wow! Gee! More proof that modelers use aerosols to shape their hindcasts using too-hot models. Who’da thunk?

Reply to  dpy6629
March 10, 2018 6:23 pm

Meanwhile with all this talk of +/-0.5 W/sq.m forcing, water is pumping some 680 W/sq.m*(reducing) up into the cirrus clouds for dissipation into space. All done by the Rankine Cycle oblivious of GHGs. Aerosols have their influences; but for heaven’s sake let’s get things into perspective.
*(note I am equating 1 sq.m roughly to 1 Kg. here.)
As an aside: Does anyone know the total global potential energy/enthalpy stored in the clouds? Must vary quite a bit with the weather. What about with the Climate?
(Potential energy = mass × g × height.)
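On the aside: with the standard formula PE = m·g·h, a back-of-envelope estimate is easy. The figures below (a global-mean cloud liquid water path of ~0.1 kg/m² and a representative cloud altitude of 3 km) are assumed order-of-magnitude values for illustration, not numbers from the paper:

```python
g = 9.81            # m/s^2
lwp = 0.1           # kg/m^2: assumed order-of-magnitude liquid water path
height = 3_000      # m: assumed representative cloud altitude
earth_area = 5.1e14 # m^2: surface area of the Earth

pe_per_m2 = lwp * g * height        # J/m^2, PE = m * g * h
pe_global = pe_per_m2 * earth_area  # J, global total
print(f"{pe_per_m2:.0f} J/m^2, about {pe_global:.1e} J globally")
```

As the comment suspects, this varies strongly with the weather, since both the water content and the altitude of clouds change from day to day.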

Reply to  niclewis
March 8, 2018 1:09 pm

Indeed, the see-saw between the GHG forcing and the aerosol forcing is one of the most striking issues in the search for the real sensitivity vs. CO2 of the real earth climate system. The paper in question helps to narrow the assumptions, and it points to lower estimates despite some “constraint papers” we saw in the recent past. They all include CMIP5 models, which are far from reality, as this paper shows. Hopefully a key paper for the upcoming next IPCC session…

Bruce Cobb
March 8, 2018 11:11 am

The bigger the computer, the faster they can come to the wrong conclusion. Progress!

Reply to  Bruce Cobb
March 8, 2018 1:05 pm

With this wondrous computer simulation the result will be of even higher precision — honest!

Reply to  tom0mason
March 8, 2018 2:16 pm

A high-def polished turd.

Reply to  Bruce Cobb
March 12, 2018 5:35 am

Indeed Frank. The problem with models is that they run on repeated iteration. The initial state, which is the result of a previous calculation, may well be pretty accurate; but when that is iterated millions of times, “pretty” can easily morph into “grossly ugly”.
The other problem lies in the fractal dimension in operation. The length of the U.K. coastline depends on the fractal dimension of the measurement, and similarly the surface area of the Earth. No idea how the models cope with that in the numerous calculations involved, each with their different definitions and assumptions.
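The coastline point can be made concrete with the classic Koch-curve example: the measured length depends on the ruler you use, and it diverges as the ruler shrinks, at a rate set by the fractal dimension. A small illustrative sketch:

```python
import math

# Measured length of a Koch-type coastline as the ruler shrinks.
# At construction depth n the ruler is (1/3)**n of the baseline and the
# measured length is (4/3)**n: it grows without bound as the ruler shrinks.
for n in range(6):
    ruler = 3.0 ** -n
    length = (4.0 / 3.0) ** n
    print(f"ruler = {ruler:.4f} of baseline -> measured length = {length:.3f}")

# The divergence rate is governed by the fractal dimension D = log 4 / log 3.
D = math.log(4) / math.log(3)
print(f"fractal dimension D = {D:.3f}")
```

Any grid-based model implicitly fixes one ruler size, so quantities like coastline length or surface area are resolution-dependent by construction.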

Tom in Florida
March 8, 2018 11:21 am

“combined a model that simulates the entire global weather over a year, at a horizontal resolution of just 14 kilometers, with a simulation of how the aerosols behave within clouds. ”
So they simulated a simulation of a simulation.

Reply to  Tom in Florida
March 8, 2018 2:13 pm

The first model that is capable of simulating weather with decent accuracy longer than ~100 hours into the future. OK …

March 8, 2018 11:29 am

Two observations, even presuming this new paper is valid.
1. A 14 km grid is still too coarse to model convective processes like thunderstorms. For that, a grid no greater than 4 km, and preferably 2 km per the UK Met Office, is required. So parameterization is still required, given the ~3 orders of magnitude computational constraint (14 => 7 => 3.5 => ~2 km, with each halving requiring ~10x the computation per NCAR, because of the CFL constraint on numerically solved partial differential equations).
2. They didn’t say how long the ‘accurately modeled clouds’ run took on their K machine. The average per run for CMIP5 was 50-60 days per NCAR. Even if the one-year run was done in just two months, a climate simulation for TCR (out 80 years, then averaging the year 60-80 results to get the definitional year-70 TCR) requires two orders of magnitude more time. To estimate ECS it is three orders of magnitude.
So, even if right, this does NOT help improve CMIP6 models. The parameterization and tuning/attribution problems remain. See the guest post here ‘Why Models Run Hot’ for a brief overview, and the guest post ‘The Trouble with Models’ for details.
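Both scaling arguments above can be checked numerically. The sketch below takes the rules of thumb quoted in this comment (~10x cost per halving of horizontal grid spacing; ~60 wall-clock days per simulated year) at face value; these are the commenter's figures, not official NCAR or RIKEN numbers:

```python
import math

# Cost factor for refining the horizontal grid, using the quoted rule of
# thumb of ~10x compute per halving (extra cells plus the shorter time
# step forced by the CFL condition).
def cost_factor(dx_from_km, dx_to_km, per_halving=10.0):
    halvings = math.log2(dx_from_km / dx_to_km)
    return per_halving ** halvings

print(f"14 km -> 2 km: ~{cost_factor(14, 2):.0f}x the compute")

# Wall-clock arithmetic for a TCR-style experiment: if one simulated year
# takes ~60 days, an 80-year run takes
days_per_sim_year = 60
years_needed = 80
wall_clock_years = days_per_sim_year * years_needed / 365
print(f"~{wall_clock_years:.0f} calendar years of wall clock")
```

The ~13 calendar years for a single 80-year run is what puts this class of model out of reach for CMIP-style ensembles on current hardware.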

Komrade Kuma
Reply to  ristvan
March 8, 2018 12:28 pm

My brief experience using CFD for water flow is that too coarse a grid can lead to convergence to a false solution, even when the grid resolution is relatively close to that required, let alone larger than the scale of the local phenomenon being modelled. Of course, the smaller the grid, the greater the calculation time.
Hopefully this is a step in the right direction which also quantifies to some degree just how short of the mark the current models are.

Reply to  Komrade Kuma
March 8, 2018 2:20 pm

One major limit in CFD computations is that you cannot trust the models for flows that are supercritical. For air flow this is when the flow goes beyond Mach 1; for liquids (such as water) the supercritical speed is far lower.

Reply to  ristvan
March 8, 2018 1:09 pm

Something simple like being able to accurately say where the snow will fall would be helpful these days, but I guess it will be like all the other models — hopeless at that task.
The problem is that on a cooling planet that might be a priority.

Jim Heath
March 8, 2018 11:29 am

Cosmic rays are playing a part in this, it will be interesting to see how we tax exploding stars. Bummer!

March 8, 2018 11:31 am

Accurate modelling is when the model matches future reality, not when the model matches ‘adjusted’ reality, past or future. That is called a ‘three card trick’, and for that you do not need a supercomputer, just a deck of cards, a fast hand, and no morality or integrity.
Which gives climate ‘science’ a big head start on the last part, with the K computer perhaps giving them the second; the ‘models’ are the stacked deck.

March 8, 2018 11:35 am

But I thought they already knew enough to establish global policy?……….

Reply to  Latitude
March 8, 2018 12:01 pm

This can’t be right, the science was settled decades ago.

March 8, 2018 11:55 am

It really doesn’t matter how “big” or “fast” the computer is, the absolute rule “Garbage In, Garbage Out,” applies to all input and output. Speed is not a synonym for accuracy. However, climate modelling appears to be based entirely on GIGO.

Bryan A
March 8, 2018 12:11 pm

Now, an international group including researchers from the Advanced Institute for Computational Science (AICS) in Japan, using the powerful K computer, have for the first time accurately calculated the effects of aerosols on clouds in a climate model.

All Hail K

Peta of Newark
March 8, 2018 12:41 pm

Does this wonder komputer tell them, or us, how a warm object (e.g. Earth’s atmosphere), placed adjacent to a (very) cold object (e.g. the rest of the universe), comes to lose less heat energy as it increases its temperature,
as proposed by GHG theory yet contrary to every heat experiment ever conducted by humankind?
Ah but you say, the lower part of the atmosphere warms and the top bit doesn’t.
So you’re saying that the Lapse Rate is increasing.
Let’s measure it.
Oh, never mind. Someone already has.
They do it every time they set off anywhere in an aeroplane. Most have altimeters and thermometers on board.
The height of the 254 K isotherm (where Earth radiates from) is rising.
At 23 metres per decade, plus or minus 3 metres.
The Lapse Rate is decreasing.
Earth is now losing more energy than it was 20 and 30 years ago.
Any comment?

Reply to  Peta of Newark
March 8, 2018 1:12 pm

CO2 is causing the world to deflate?

Reply to  Peta of Newark
March 8, 2018 1:45 pm

The decreased heat comes first.
The object then warms up until heat flow balances again.

Alan Tomalty
Reply to  MarkW
March 9, 2018 6:59 am

Why would the earth lose heat for no reason? The sun always shines. The earth would only start to lose more heat if it had gained heat in the first place. Please, guys, if you are being sarcastic, put a /sarc at the end or strike through one of your words.

Reply to  MarkW
March 9, 2018 9:01 am

I meant decreased heat flow, as was mentioned in the post I responded to.

Robert from oz
Reply to  Peta of Newark
March 8, 2018 4:23 pm

Forget about the aerosol and tell me what the lotto numbers are for this week ?

Reply to  Robert from oz
March 8, 2018 7:25 pm

I would, but I don’t want to share the prize.

Walter Sobchak
Reply to  Peta of Newark
March 8, 2018 9:31 pm

Let’s see: the lapse rate is determined by the ideal gas law pV = nRT, so the isothermal level will rise proportionately to the temperature. The radiative surface of the atmosphere at that level will grow, as the atmosphere expands, as the square of the radius of the sphere. So the radiative surface will have expanded. There is no impact from Stefan–Boltzmann because the isothermal level still has the same temperature of 254 K. Overall, I would say the expansion of the isothermal surface beats the increased temperature of the lower levels.
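Which of the two terms actually dominates can be checked with the figures given up-thread (the 254 K emission level rising ~23 m per decade, taken here over three decades) against the T⁴ law. The arithmetic below suggests the area term is several hundred times smaller than the flux change that even a 1 K warming would produce:

```python
R_earth = 6.371e6   # m, mean radius of the Earth
dh = 23.0 * 3       # m: three decades of rise at the quoted 23 m/decade

# Fractional growth of the radiating sphere's area at the emission level:
area_factor = (1 + dh / R_earth) ** 2 - 1
print(f"area term:        {area_factor:.2e}")   # a few parts in 100,000

# Fractional growth of emitted flux from a 1 K warming at 254 K (T^4 law):
T = 254.0
flux_factor = ((T + 1) / T) ** 4 - 1
print(f"temperature term: {flux_factor:.2e}")   # a few parts in 100
```

On these numbers the geometric expansion of the radiating surface is roughly 700 times weaker than the Stefan–Boltzmann response to a single kelvin of warming, so the expansion term is very much the junior partner.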

March 8, 2018 12:52 pm

An oldie but a goodie: Dr. Brown of Duke and his take on what is required computationally.

March 8, 2018 1:06 pm

Modelers have been using aerosols to explain why the atmosphere hasn’t been warming as much as their theory says it should, particularly in the mid-20th century and the early 21st century. They could do this because no one really knew what aerosols were doing. They could, and did, give aerosol impact any value they needed to try to make their models look like they had skill. This study takes some of the unknown away, and it turns out that the models are worse than we thought!
Apparently aerosols are not cooling as much as expected, making the models even more wrong!

Grady Patterson
March 8, 2018 1:06 pm

“… we could use a powerful supercomputer to accurately model the microphysics of clouds …” “… at a horizontal resolution of just 14 kilometers …”
Ummm – am I missing a few decimal places, or is there a problem here?

March 8, 2018 1:28 pm

Love the graphic — Dust, Sulphates, Sea Salt, and Smoke. Well, if that’s all we have to worry about, what’s the problem?
The problem is that cloud condensation nuclei are far more exotic: VOCs from forests, pollen, fungal spores, microbes and bacteria in the air, unburned and incompletely burned avgas, etc. I wonder if they have all the numbers for these as they vary over the seasons?

March 8, 2018 1:36 pm

In the beginning it was said that the ‘Science was Settled’; now, a bit later on, the science is being unsettled again as people realise that they did not really know anything at all, but just believed that they knew a great deal.

Don K
Reply to  ntesdorf
March 8, 2018 2:31 pm

They are “refining” the science. Ya got something against refinement buddy?

Reply to  Don K
March 9, 2018 7:05 am

Don K – you dropped a ‘d’. They are “refinding” the science. They lost it about 30 years ago!

Reply to  Don K
March 10, 2018 4:25 am

Better than redefining the science, I suppose.

March 8, 2018 1:41 pm

It took an entire supercomputer to model the effect of aerosols on clouds, leaving no computing power available for the thousand and one other things that climate models also do.
They may have made a breakthrough in modeling aerosols and clouds, but this improvement is not something that can be added to the climate models.

March 8, 2018 1:47 pm

Nice that they can now model “accurately” the effect of aerosols on clouds… Now, if they could just model accurately the effect of clouds on climate we might have something to get excited about.

March 8, 2018 2:16 pm

And they know it’s accurate because?

Reply to  markl
March 9, 2018 9:03 am

The article mentioned that they compared the output of the model to readings from satellites.

March 8, 2018 2:19 pm

“It was very gratifying to see that we could use a powerful supercomputer to accurately model the microphysics of clouds, giving a more accurate picture of how clouds and aerosol behave in the real world.”
How do they know it’s accurate?
They would have to know ‘how clouds and aerosol behave in the real world’ to be able to model ‘how clouds and aerosol behave in the real world.’

Don K
March 8, 2018 2:29 pm

“but using the K computer, the RIKEN-led group combined a model that simulates the entire global weather over a year, at a horizontal resolution of just 14 kilometers …”
Does anyone know if these folks have run a model of NEXT year’s weather so we could, at least conceptually, check and see how accurate their model’s predictions are?
My guess is that this modeling, like most climate science can’t really be validated in any meaningful way.

Michael Jankowski
March 8, 2018 5:02 pm

Effing priceless! Look at this gem…
“… In particular, estimation of aerosol-induced modulation of cloudiness, called the lifetime effect, still has large uncertainties. In some global climate models (GCMs), this uncertainty has originated from uncertain cloud parameters, which tend to be optimised to adjust the magnitude of the aerosol indirect effect so that the models reproduce historical climate changes. Such model tuning is currently being evaluated using recent satellite observations, which provide a constraint on cloud parameters…When a GCM is driven by a value of rcrit optimised to reproduce the historical temperature trend, with a tuned magnitude of ACI, the model cannot represent the vertical microphysical structures observed by satellites, and vice versa”
How many times have we been told that the models are not “tuned” to “reproduce” the past but instead built from the ground-up using scientific principles and estimated values of parameters derived from scientific research?

Alan Tomalty
Reply to  Michael Jankowski
March 8, 2018 6:38 pm

When you tune the models to adjust to historic temperatures, you are basically just fiddling with the same basic erroneous set of equations that you started from. And that basic erroneous set of equations was not parameterized from real observational data, as real science does it. So if you start with bad parameterizations and you continue to tune based on other simulations, you can never approach the real world. This study used only satellite data on precipitation. No other real data was used. However, even that satellite data is essentially useless, because they had to use a simulation of global rainstorms (which they would never be able to reproduce accurately) to reproduce the clouds needed to produce a simulation of precipitation, so as to compare it to the satellite data. As a previous poster said, a simulation of a simulation of a simulation. These authors have made the same mistake as other computer modellers when they said, “Here we reproduce satellite-observed cloud liquid water responses using a global simulation with explicit representations of cloud microphysics, instead of the parameterizations.” Translation: we used our best guess of how cloud microphysics works, instead of using real-world data to plot a graph and then fitting a polynomial to the graph for each independent variable, as real science does it.
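The empirical-fit procedure this comment describes — plot observed data and fit a low-order polynomial for each independent variable — looks like this in miniature. The data points below are synthetic, purely for illustration:

```python
# Paired observations of one independent variable and a response.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]   # roughly y = 2x + 1 plus scatter

# Degree-1 polynomial fit by ordinary least squares (closed form).
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx
print(f"fit: y = {slope:.2f}*x + {intercept:.2f}")
```

The point of contrast with a tuned simulation is that every coefficient here is pinned directly to measurements, so the fit can only be as good or bad as the data allow.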

Dave Fair
Reply to  Michael Jankowski
March 9, 2018 9:04 am

“… adjust the magnitude of the aerosol indirect effect so that the models reproduce historical climate changes.”
‘Nuff said.

Kristi Silber
March 8, 2018 7:00 pm

Don, Gamecock, MarkL: The model was tested against satellite observations for accuracy.
“Here, we reproduce satellite-observed LWP responses using a global simulation with explicit representations of cloud microphysics, instead of the parameterisations. Our analyses reveal that the decrease in LWP originates from the response of evaporation and condensation processes to aerosol perturbations, which are not represented in GCMs. The explicit representation of cloud microphysics in global scale modelling reduces the uncertainty of climate prediction.”
As I see it, this could be an important piece of the jigsaw puzzle of climate change – but I don’t know enough about it to say. It’s interesting that the results suggest there is a cooling bias in the GCMs. Perhaps if this smaller-scale model can’t be used yet for the long-term climate models, it can at least help people tune them better, or input more realistic parameters in those models that aren’t tuned. (Personally, I don’t think there’s anything inherently wrong with tuning, but it could be abused, and I’m glad that there is a movement toward transparency and discussion about how it’s done.)

Reply to  Kristi Silber
March 8, 2018 7:30 pm

Interesting, the models assume clouds are a net positive feedback.
This study shows that clouds are a net negative feedback.
From this our dear Kristi assumes that this shows the models have cooling bias. Sheesh.

Kristi Silber
Reply to  MarkW
March 8, 2018 11:36 pm

I was wrong about that. After reading more I realized it, but thanks for reminding me.

Dave Fair
Reply to  Kristi Silber
March 9, 2018 9:06 am

Until all modelers are forced to use the same historical aerosol assumptions, it’s all BS modelturbation.

Kristi Silber
Reply to  Dave Fair
March 11, 2018 9:02 pm

Unless these are known quantities, having a range can actually be productive, as it demonstrates different outcomes. The diversity in models in some ways might increase statistical uncertainty about particular parameters, but may be worth it for the information gleaned from comparisons.

Dave Fair
Reply to  Kristi Silber
March 11, 2018 10:12 pm

No, Kristi: All the models are “trained” to mimic late 20th Century warming patterns. The “hotter” the model, the more they throw in aerosols to cool their hindcasts. No useful scientific insights are gleaned by such “Texas Marksmanship.”
In reality, model results are notorious for late 19th Century, early 20th Century and 21st Century inaccuracies. The IPCC’s AR5 had to arbitrarily reduce the “projected” model temperature estimates to reflect developing reality.
IPCC climate models are not sufficient for fundamentally altering our society, economy and energy systems.

Kristi Silber
Reply to  Dave Fair
March 13, 2018 2:35 pm

Dave Fair,
“The question of whether the twentieth-century warming should be considered a target of model development or an emergent property is polarizing the climate modeling community, with 35% of modelers stating that twentieth-century warming was rated very important to decisive, whereas 30% would not consider it at all during development. Some view the temperature record as an independent evaluation dataset not to be used, while others view it as a valuable observational constraint on the model development. Likewise, opinions diverge as to which measures, either forcing or ECS, are legitimate means for improving the model match to observed warming. The question of developing toward the twentieth-century warming therefore is an area of vigorous debate within the community.”
– The Art and Science of Model Tuning. It’s on the ‘net.

Dave Fair
Reply to  Kristi Silber
March 13, 2018 3:03 pm

Noted, Kristi; the models’ “spaghetti graphs” of temperature are notorious, but they all converge in the late 20th Century then diverge later. They also generally miss salient features (significant movements up or down) in the late 19th, early 20th and 21st Centuries.
NB – Beware of B.S. by vested interests.

Kristi Silber
March 8, 2018 7:08 pm

I HAVE A QUESTION: It seems like many people don’t accept the models because they can’t be compared with the future to test their accuracy. That effectively eliminates every climate prediction out there, though. Is there no way people who think this way could ever be convinced that the climate is changing in a way that might not be good for human and non-human life? What would it take? Or for those who do have some “faith” in climate modeling but believe the models have to be better, what would it take for the necessary criteria to be met? I’m just curious.

Reply to  Kristi Silber
March 8, 2018 7:28 pm

Kristi, models can and have been tested. The methodology is simple and is widely known by everybody who has actually spent some time reading about the subject.
Feed data from the past into the model, and see if the model can accurately predict the present.
This test has been performed on the GCMs, and they fail miserably.
Beyond that, GCMs have been making predictions about the future for well over 20 years. So the future for those projections has already arrived. Once again, the models fail.
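The hindcast test described here is ordinary holdout validation: calibrate on an early window, then score the model on a later window it never saw. A toy sketch with a synthetic series (not real temperature data), where the "model" is just a fitted linear trend:

```python
# Synthetic series: a 0.1/step trend plus small deterministic wiggles.
series = [0.1 * t + (0.05 if t % 3 == 0 else -0.03) for t in range(100)]
train, holdout = series[:70], series[70:]

# Calibrate: least-squares trend on the training window only.
n = len(train)
mt = (n - 1) / 2
mv = sum(train) / n
slope = (sum((t - mt) * (v - mv) for t, v in enumerate(train))
         / sum((t - mt) ** 2 for t in range(n)))
intercept = mv - slope * mt

# Validate: mean absolute error on data the model never saw.
mae = sum(abs((slope * (70 + i) + intercept) - v)
          for i, v in enumerate(holdout)) / len(holdout)
print(f"fitted trend {slope:.3f}/step, holdout MAE {mae:.3f}")
```

The honesty of the test rests entirely on keeping the holdout window out of the calibration; any parameter tuned against it turns validation back into curve-fitting.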

Kristi Silber
Reply to  MarkW
March 9, 2018 8:37 pm

Mark W (This will be the last time I address you unless you show me a little less disrespect. You’ve been more of a troll than I.)
I am well aware that they test models, but people seem to think nothing is science if it doesn’t give you the facts you want, doesn’t PROVE something. It is others who say models aren’t science because they aren’t testable, not I. You and others seem to think that if they aren’t able to accurately predict every parameter they are complete failures, and will always be failures, even if they do predict some things well and are useful in other respects. For example, they can be adapted to the regional level, which allows greater resolution and more accuracy. There are trade-offs, and different modeling groups focus on different aspects. The mean of all models is a better predictor than any individual one. You can’t expect the oldest models to be as accurate as the newer ones. The models are improving, especially as computers get bigger and faster and the resolution gets better – this has been a major constraint. The interaction of aerosols with clouds, and water vapor generally, is seen by most modelers as the largest source of uncertainty, so with luck this research will help. But I’m sure you know all this, or you ought to.
All you did was gripe; you didn’t answer my question: when will you be convinced, if ever?
Has there been any other attempt to predict climate change that has been more successful than the models? There has been plenty of time for their development. What have all the contrarian scientists been doing besides trying to show that the Earth is just undergoing its normal changes, or predicting climate based on one or a couple of factors?

Dave Fair
Reply to  Kristi Silber
March 10, 2018 1:02 pm

Kristi, please note that the IPCC’s AR5 had to arbitrarily reduce the near/mid-term temperature “projections” of the CMIP5 models. They dishonestly kept the long-term “projections,” even for the wildly ugly and uncalled for RCP 8.5.
According to Dr. Judith Curry and many others, the models are not sufficient to fundamentally alter our society, economy and energy systems. What evidence have you to the contrary?

Kristi Silber
Reply to  MarkW
March 11, 2018 9:31 pm

Dave Fair,
“AR5 had to arbitrarily reduce the near/mid-term temperature “projections” of the CMIP5 models. They dishonestly kept the long-term ‘projections,'”
I need to know more about this to address it. Page in AR5, or something. This means ABSOLUTELY NOTHING to me WITHOUT HAVING THE EXPLANATION FOR IT. One of the absolutely worst habits of deniers is assuming they know why something was done and therefore making no effort to find out. Countless legitimate decisions have been decried as corrupt through ignorance.
Judith Curry and others capitalize on this strategy.
I have great regard for Dr. Curry’s intellect and scientific ability, and I think she believes she is doing the right thing. I don’t have any way to tell either way. But I can’t forgive her for damaging the reputation of the scientific community as she has. This has been the most damaging, most reprehensible aspect of the denial movement and fossil fuel-funded propaganda. (You want evidence? Explore – I can help if you want )
“According to Dr. Judith Curry and many others, the models are not sufficient to fundamentally alter our society, economy and energy systems. What evidence have you to the contrary?”
The kinds of evidence are diverse and would take me time to gather. The problem is that when someone is in the grip of what Curry et al. say about climate research, there is no evidence. All research is subject to groupthink, bias, corruption (except theirs, apparently). I have learned from long experience that it’s a waste of time to try to provide evidence for climate change, because every point has been addressed by the denial crew: not disproved, but nullified by some counterargument that sounds plausible, even if completely unscientific.
I believe your question was a challenge, not a desire to know more. If not, let me know.

Dave Fair
Reply to  Kristi Silber
March 12, 2018 10:48 am

Kristi, do your own research; the arbitrary, post hoc downward adjustments to modeled temperature are in red on the relevant AR5 graph.
It is misleading (dishonest) not to reflect more uncertainty in the really hot out-year “projections” after arbitrarily adjusting nearer-term “projections.”
Alarmist scientists and their allied politicians use the “projections” based on an excessively high CO2 ECS and the unreal RCP 8.5 to scare people. If you don’t see that simple fact, you aren’t really looking.
BTW, I deny that I am a Denier. And, yes, my question was a challenge: Provide proof that CO2 ECS is actually close to the CMIP5 models’ average of 3C per doubling. Real scientists, using real scientific methods, increasingly estimate CO2 ECS to be in the 1C to 1.5C range. [Do your own review of their research.] Prove them wrong.
Your trust in unvalidated models, activist scientists and socialist politicians is concerning. UN IPCC proposals call for fundamental changes to our society, economy and energy systems, all without understanding the consequences of such sweeping changes. UN social justice and gender equity won’t feed the world.
And Climatefiles is an anti-Exxon collection of paranoia and propaganda. You might want to withhold judgement until the results of the AG and municipality lawsuits are in. Facts will out.

Kristi Silber
Reply to  MarkW
March 13, 2018 2:44 pm

Climatefiles has original documents. That’s what I look at. Propaganda is right, but it’s not a product of the site.
Regarding the “challenge,” I don’t owe you an explanation for what I believe. That’s not because I don’t have one, it’s because it would be a waste of time explaining it to you.

Gamecock
Reply to  Kristi Silber
March 9, 2018 6:51 am

“It seems like many people don’t accept the models because they can’t be compared with the future to test their accuracy.”
We simply don’t know enough about the atmosphere – and the sun – to do models.
Many of us have been saying this for over 20 years. This latest report is simply proof of what we have been saying. So these people appear to have improved on one issue. Next year, we’ll get a report of how they have improved on another issue. Ad infinitum.
This report has invalidated every climate model ever. Next year's reports will invalidate this year's.
“Is there no way people who think this way could ever be convinced that the climate is changing in a way that might not be good for human and non-human life?”
Sure there is. Give us your data that shows it. ACTUAL data shows that the GMT was virtually static for 19 years, 1997-2016.
And what definition of climate are you using? You use it in a way that is inconsistent with any known definition.

Reply to  Gamecock
March 9, 2018 7:59 am

“Is there no way people who think this way could ever be convinced that the climate is changing in a way that might not be good for human and non-human life?”
Yes…if it was getting colder!
Modern environmentalism is based on a misinterpretation of the ‘Eden Myth’. The story of Adam and Eve in the Garden of Eden was allegorical, and concerned man’s relationship with God. It was never intended to be an actual, scientific description of the Earth and the climate. Yet environmentalism has accepted the story as literally true in a way. They want us to believe that there is an ideal state of the planet, where all life lives in harmony and that everything is ‘balanced’. They also want us to believe that sinful humans are disrupting this natural balance, endangering themselves and all other life. They believe that ‘change’ is bad, and therefore, a threat.
Actual science reveals that the Earth is a very dynamic planet and always has been. ‘Change’ is the natural condition of the Earth’s biosphere and climate. Life itself is a form of change, as every living thing takes something from the environment and puts something different back. Adaptation is the driving force of evolution. If there wasn’t change, nothing would have evolved!
Environmentalists have infected society with their religious belief in an unchanging 'Garden of Eden' Earth. They call for stasis (which only marginally exists in death), or at the very least, to minimize our 'footprints' on the Earth. They have tried to make us believe that we are morally corrupt (sinful) when the changes we make exceed their narrow limits, or even threaten to exceed those limits at some point in the future.
It is this ‘fear of change’ that has created a fear in global warming. If we were rational, and not under the thrall of modern environmentalism, news of the atmospheric CO2 increasing and the potential for a warmer world would be heralded with great joy and anticipation. There is no doubt that a very modest warming of 2-3 degrees would be a great boon for humans and the biosphere in general. There is also no doubt that life on Earth was slowly approaching CO2 starvation, and that returning this gas of life to the atmosphere from whence it came is amazingly wonderful! ‘Life’ is and will be expanding.
Too bad that amount of warming isn’t going to happen.
We are still in an ice age. We are just enjoying one of those, short, temperate spells we call an interglacial period. Interglacial means ‘between’ glacial periods. We are getting ever closer to the next one, and when it happens, the biosphere will contract. ‘Life’ will contract. But, no doubt, it will also adapt, just like always.

Kristi Silber
Reply to  Gamecock
March 9, 2018 8:56 pm

Improvement is a bad thing? What do you expect? The models haven't been able to get everything perfect, partly because of computer size and speed. The modelers are still learning, still putting it all together. The climate is immensely complex, of course. Many people think it a waste of money, but if you are one of those who believe that things are changing, it's sure nice to be able to know in what ways so we can prepare. There's no sense relocating coastal communities to areas that are likely to become desert, for example.
This report has not invalidated every climate model ever! Why would you think that? If taken at face value, it lowers the uncertainty. The information can be used in future versions of the models.
Wow, the “Pause” has stretched to 19 years? I want to see that data! So you think climate isn’t changing?
“1. the weather conditions prevailing in an area in general or over a long period.
2. a region with particular prevailing weather conditions.”

Alan Tomalty
Reply to  Kristi Silber
March 9, 2018 7:28 am

No one ever said that the models can't be compared to the future. They are all the time, and they fail miserably every time.

Dr. Strangelove
March 9, 2018 12:05 am

“a model that simulates the entire global weather over a year, at a horizontal resolution of just 14 kilometers,”
This model is useless for real weather forecasting. For accurate forecasting you need a resolution of 1 mm. Practically impossible. Can their model predict all the tornado and hurricane paths and strengths in the world a year ahead? Impossible in chaos theory. A model that can simulate aerosol microphysics does not mean it can accurately simulate global weather over a year.
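For a sense of scale on the resolution point, one can roughly count the horizontal grid cells implied by the two resolutions mentioned. This is my own back-of-envelope arithmetic on an assumed spherical Earth, not a figure from the press release:

```python
import math

# Surface area of an assumed spherical Earth, radius 6371 km
earth_surface_m2 = 4 * math.pi * 6.371e6 ** 2   # ~5.1e14 m^2

# Horizontal cell counts at the 14 km model resolution vs. a 1 mm grid
cells_14km = earth_surface_m2 / (14_000.0 ** 2)  # a few million cells
cells_1mm = earth_surface_m2 / (0.001 ** 2)      # ~5e20 cells

ratio = cells_1mm / cells_14km                   # ~2e14 times more cells
```

Even before adding vertical layers and time steps, a millimetre-scale grid would need roughly fourteen orders of magnitude more cells than the 14 km model, which is the sense in which it is "practically impossible."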

Kristi Silber
Reply to  Dr. Strangelove
March 9, 2018 9:05 pm

It was supposed to simulate, not forecast, the weather. That means it will simulate rain events (for example), but not according to when they will actually happen; it's the average frequency and intensity that the models predict.
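The simulate-versus-forecast distinction is the standard chaos argument: two runs that differ microscopically soon stop agreeing point by point, yet still share long-run statistics. A minimal sketch using the classic Lorenz-63 toy system (all values assumed; this is an illustration of the principle, not the climate model under discussion):

```python
import numpy as np

def lorenz_trajectory(x0, n_steps=50_000, dt=0.001,
                      sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz-63 system with simple Euler steps."""
    xyz = np.array(x0, dtype=float)
    out = np.empty((n_steps, 3))
    for i in range(n_steps):
        x, y, z = xyz
        xyz = xyz + dt * np.array([sigma * (y - x),
                                   x * (rho - z) - y,
                                   x * y - beta * z])
        out[i] = xyz
    return out

a = lorenz_trajectory([1.0, 1.0, 1.0])
b = lorenz_trajectory([1.0, 1.0, 1.0 + 1e-6])  # microscopic perturbation

# "Weather" (pointwise agreement) is lost: the runs eventually
# diverge by an amount comparable to the attractor itself.
pointwise_gap = np.abs(a - b).max()

# "Climate" (long-run statistics) survives: the time-mean of the
# z coordinate is nearly identical between the two runs.
climate_gap = abs(a[:, 2].mean() - b[:, 2].mean())
```

The pointwise gap grows large while the statistical gap stays small, which is exactly the sense in which a model can be useless for forecasting a specific storm yet meaningful for average frequency and intensity.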

Reply to  Kristi Silber
March 10, 2018 9:43 pm

Ms Silber,
I spent a goodly portion of my career analyzing and simulating a fairly complex mechanical-aerodynamic system: rotary-wing vehicles (helicopters and tilt-rotors). The equations of motion and the equations governing structural response to dynamic and aerodynamic loads are well-known. In producing our models, which we then turned into computer codes, we documented our derivations of equations, and we documented our assumptions (at least the ones where we knew we had made an assumption). We then verified our computer codes by running flight cases with the actual representation of a given helicopter and comparing the computer results with actual flight-test data. We never got perfect matches, but over the years we developed enough confidence in our models and codes to satisfy ourselves that using them in the development of new helicopters or tilt-rotors would result in aircraft that performed about as predicted, and wouldn't disassemble themselves in flight.
Over the last few years, as I have been reading about the models and codes used in analyzing CAGW and CCC, I haven't seen much in the way of documentation of what is in the models or the code, nor any discussion, in detail, of the assumptions. I haven't seen much in the way of verification of the models or the codes, either. In fact, there was some communication within the ClimateGate e-mails in which the principals discussed not revealing the holy mysteries of their codes. Further, there have been published charts showing that the climate models project temperatures higher than those measured by balloons.
Based on my own experience of just how hard it is to build trustworthy models and codes for such complex mechanisms as rotary-wing vehicles, and the lack of public documentation of the GCMs, I am skeptical of the GCMs, especially after the IPCC stated, baldly, that modeling the complex, non-linear, chaotic climate was not possible.
Are they wrong? How would we know? The IPCC said it can't be done, and the model and code owners won't talk about the details.

Kristi Silber
Reply to  Kristi Silber
March 11, 2018 10:19 pm

Engineer Jim,
Sorry it took me a while to get to your comment. I get involved in too many conversations and lose track.
I was just reading about the code issue and other transparency concerns. These issues are being discussed within the modeling community. One problem is that the code comes from multiple sources and there are intellectual property considerations to be worked out. However, several of the modeling groups do have their code available. I just saw several, can’t remember where. Sometimes groups want people to sign a licensing agreement, partly so that they can keep track of users.
There is also a push to make tuning records available, and some groups have published these, too.
“I haven’t seen much in the way of verification of the models”
The most common and widely-known way is through hindcasting, where you put in data from an earlier time and see if the model can reproduce historical, observed climate. Some people don't think this is valid because models are tuned to 20th-century data, but that's not always the case (this is a source of debate within the community). Another way models are validated is through comparison with others. There are other ways, too; I'm just starting to investigate this, though.
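Hindcasting as described can be sketched in a few lines: tune a model on an early window, then score it against a withheld later window. The data below are entirely synthetic and the model is a trivial linear fit; this only illustrates the procedure, not any actual GCM validation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observed" temperature anomaly series, 1900-2017:
# a slow trend plus year-to-year noise (purely illustrative numbers).
years = np.arange(1900, 2018)
observed = 0.008 * (years - 1900) + rng.normal(0.0, 0.1, years.size)

# "Tune" a simple linear model on the 1900-1959 window only...
train = years < 1960
coeffs = np.polyfit(years[train], observed[train], deg=1)

# ...then hindcast the withheld 1960-2017 period and score the skill.
hindcast = np.polyval(coeffs, years[~train])
rmse = np.sqrt(np.mean((hindcast - observed[~train]) ** 2))
```

The key point is that the scoring period was never seen during tuning, which is what distinguishes a genuine hindcast test from fitting the whole record.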
“ClimateGate E-Mails in which the principles discussed not revealing the holy mysteries of their codes.”
Yes, I’ve heard about this. I don’t know the background, but this is the kind of thing that is bound to have another side to the story and I won’t judge before knowing it. The Climategate emails are very often posted and represented in ways that are misleading when taken out of context.
“Further, there have been published charts showing that the climate models project temperatures higher than those measured by balloons.”
I don’t really know what you mean by this.
“IPCC stated, baldly, that modeling the complex, non-linear, chaotic climate was not possible.” I’d need to see this and the context. Do you have a page number?
I think it’s good to ask these kinds of questions. I have absolutely nothing against skepticism – it’s part of being a good scientist. But once someone’s mind is made up, it’s no longer skepticism – hence the term “denier.”
I go with the models because I believe they are getting better, because I have trust in the integrity of the scientific community as a whole. Scientists are human like the rest of us, but scientific methodology is geared toward recognizing this. Elimination of subjectivity is always a concern in science, and I see absolutely no reason why this case would be any different. Those who attribute motives like greed and socialist ideology to scientists are making baseless assumptions out of a desire to find whatever excuse they can to dismiss AGW.
That’s my two cents!
– Kristi

March 9, 2018 2:53 am

To me the problem with all of these computer modelling systems is that they appear to be based on the concept of radiative forcing where climate balance is concerned, whereas in reality it is the transfer, movement and morphing of enthalpy that needs to be considered.
The Hydro system is in fact a Rankine Cycle, with clouds being intimately involved at the micro level. This cycle ensures the movement of large amounts of energy upwards to enable eventual dissipation into space and occurs oblivious of whatever greenhouse effects are in play due to the simple fact that gaseous water is lighter than dry air and therefore rises and carries its latent heat upwards.
Aerosols obviously have their effect in that they influence the points at which phase changes take place; but the radiative forcing element is of minimal influence.
Unless these models include Rankine-cycle thermodynamics in their calculations they are totally useless, quite apart from the fact that a chaotic system cannot be solved by a set of linear equations.
As an aside: Note that where water is concerned, at phase change the climate sensitivity is ZERO; but the gravitational forces involved change.

Alan Tomalty
Reply to  cognog2
March 9, 2018 7:12 am

No one has been able to explain where the latent heat goes when it is released by water vapour as it condenses. When it is released it is actual sensible heat, so one of two things happens: 1) it goes off into space, or 2) it goes into the troposphere, into the greenhouse gases, and thus the earth would warm with every rainstorm. If 1) happens, then it is the water cycle that overwhelms any possible CO2 effect, water vapour being 20 times more prevalent than CO2, and therefore no forcing of H2O by CO2 happens. If 2) happens, then the earth would have become a runaway global warming planet a long time ago.

Reply to  Alan Tomalty
March 9, 2018 8:51 am

You ask a good question. I will attempt to explain; but it can get complicated if you are not an engineer.
At surface evaporation the incoming radiation gets converted to latent heat (or, as I would prefer, latent enthalpy, since 'heat' suggests temperature).
Also the water physically reduces its density to the point where it is lighter than dry air.
Both of these processes occur at constant temperature which in climate science terms means at Zero sensitivity.
Once evaporated the water rises against gravity and thus does Work by which the latent enthalpy is converted to Potential energy. Think of the energy required to pump 20 Tonnes of water up into that small cloud 2000 feet above you.
Also in this process enthalpy is dissipated into the surrounding atmosphere, which is element 2 of your query. This again comes from the latent enthalpy; but at the same time incoming radiation is still providing energy to the water, so there is a complex balance taking place.
This continues as the water rises until such time that the enthalpy runs out and condensation takes place again at constant temperature.
The point/height at which this occurs varies greatly with a proportion of the water reaching up into the cirrus clouds nudging the top of the atmosphere while the rest returns to earth as rain, ice or snow to restart the cycle.
Now in the cirrus clouds the latent enthalpy involved is that of fusion where ice crystals grow. So the behaviour is slightly different. However as these crystals grow it means that they are dissipating more energy into space than they are receiving and when they reach a certain size gravity again takes control and they descend. This comprises element 1 of your query.
The root control of this fine and complex balance is gravity for it is gravity which determines the vapour pressure of water at the various heights which balances against its partial pressure in the surrounding atmosphere.
The whole process is called the Rankine Cycle in engineering terms; but it never seems to get a mention in the climate debate, albeit that it involves very large transfers and movements of energy in comparison with the Greenhouse Effect.
In trite terms one might say that the Earth sweats to keep cool, just like you and me!
Hope this helps. Sorry it's a bit rough.
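The "20 tonnes of water up into that small cloud 2000 feet above you" figure invites a back-of-envelope check: the lifting work turns out to be tiny compared with the latent heat the same water carries. A quick sketch using standard textbook values (my own assumed numbers, not figures from the comment):

```python
# Back-of-envelope comparison of lifting work vs. latent heat
# (assumed standard physical constants; illustrative only)
g = 9.81             # m/s^2, gravitational acceleration
m = 20_000.0         # kg, the "20 tonnes of water" in the cloud
h = 2000 * 0.3048    # m, 2000 feet converted to metres
L_vap = 2.26e6       # J/kg, latent heat of vaporization of water

potential_energy_kwh = m * g * h / 3.6e6   # work to lift the water
latent_heat_kwh = m * L_vap / 3.6e6        # latent heat it carries
latent_per_kg_wh = L_vap / 3600            # per-kilogram figure, Wh/kg
```

The lifting work comes out around 33 kWh, while the latent heat is over 12,000 kWh, several hundred times larger; the per-kilogram value (~628 Wh/kg) is also in the same ballpark as the "circa 680 WattHrs/Kg" quoted further down the thread.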

Dr. Strangelove
Reply to  Alan Tomalty
March 9, 2018 7:07 pm

Rankine cycle is a good model for thermodynamics of the atmosphere. The heat source is the sun. The heat sink is outgoing radiation to space. Temperature differential is surface temperature minus TOA temperature. Work done equals increase in gravitational potential energy of rising warm air. The role of greenhouse effect is to increase surface temperature by decreasing outgoing radiation to space. This is essentially slowing down the cooling process. The devil is in the details. Putting right numbers in the Rankine cycle.
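Treating the atmosphere as a heat engine between the surface and the effective emission temperature, as described above, gives a Carnot-style upper bound on its efficiency. A one-line sketch with commonly quoted temperatures (assumed values, not from the comment):

```python
# Carnot-style efficiency bound for the atmospheric heat engine
# (assumed round-number temperatures; illustrative only)
T_surface = 288.0   # K, approximate mean surface temperature
T_sink = 255.0      # K, approximate effective emission temperature

eta_max = 1.0 - T_sink / T_surface   # ideal-engine upper bound
```

This gives a bound of roughly 11-12%, which is one way of "putting right numbers in the Rankine cycle": whatever the details, no more than about a tenth of the heat throughput can be converted to work such as lifting air.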

Reply to  Alan Tomalty
March 10, 2018 4:47 am

Dr. Strangelove:
Yes, you are essentially right. The temperature differential does drive the Rankine cycle. However, a great deal is known about this cycle and its thermodynamics, ever since the first steam engines. The trouble is that whereas engineers deal in actual energy transfers, the scientists still appear to be considering only radiation as the driving force, and so get confused.
Radiation is only a means by which energy is transferred. It is not energy itself until it arrives at its destination at which point it can manifest itself in various ways; not only by temperature.
The Rankine Cycle demonstrates this as in a boiler at a fixed pressure the temperature remains the same however much energy you put into it. ( OK you do need a safety valve!!)
In the atmosphere this cycle, as you rightly point out, transfers energy against gravity up from the surface to the TOA for dissipation. It is a process independent of, but ancillary to, radiation from the surface to space, and it involves large amounts of energy (circa 680 Wh/kg of water).
One may well ask how this is accounted for in all these models.

March 9, 2018 8:03 am

How much do you wanna bet that the connection CO2 causes dust causes something or other to clouds causes warming will eventually be made?

March 10, 2018 12:39 pm

Quote: ” In the future, we hope to use even more powerful computers to allow climate models to have more certainty in climate prediction.”
If you have no certainty at all (at present with 100% failure), should you gain some degree of certainty, is that considered “more” certainty?
