NCAR climate model gets an upgrade – but is it useful, or just more confirmation bias?

BOULDER, Colo. — The National Center for Atmospheric Research (NCAR) has released an updated version of its flagship climate model that includes a host of new capabilities — from a much more realistic representation of Greenland’s evolving ice sheet, to detailed modeling of how crops interact with the larger Earth system, to the addition of wind-driven waves on the model’s ocean surface.

The Community Earth System Model version 2 (CESM2) is an open-source community computer model largely funded by the National Science Foundation, which is NCAR’s sponsor, and the U.S. Department of Energy’s Office of Science.

Released publicly last week, CESM2 builds on a succession of climate models, each cutting edge for its day, stretching back decades to a time when such software simulated only atmospheric circulation. By comparison, CESM2 includes interactions among the land, ocean, atmosphere, land ice, and sea ice, representing the many important ways the different parts of the Earth system interact.

“The breadth of the science questions we can tackle just significantly expanded; that’s very exciting to me,” said Jean-François Lamarque, who led the effort to develop CESM2 until recently. “Every time we release a new model we’re providing a better tool to do the science. It’s a more complicated tool, but the world is very complicated.”

The new capabilities of CESM2 include:

  • An atmospheric model component that incorporates significant improvements to its turbulence and convection representations, which open the way for an analysis of how these small-scale processes can impact the climate.
  • Improved ability to simulate modes of tropical variability that can span seasons and affect global weather patterns, including extreme precipitation over the western United States. These more realistic representations will allow researchers to better understand those connections and could lead to improved seasonal predictions.
  • A land ice sheet model component for Greenland that can simulate the complex way the ice sheet moves — sluggish in the middle and much more quickly near the coast — and does a better job of simulating calving of the ice into the ocean.
  • A global crop model component that can simulate both how cropland affects regional climate, including the impacts of increased irrigation, and how the changing climate will affect crop productivity. The component also allows scientists to explore the impacts of increased use of fertilizers and greater concentrations of atmospheric carbon dioxide, which can spur plant growth.
  • A wave model component that simulates how wind creates waves on the ocean, an important mechanism for mixing of the upper ocean, which in turn affects how well the model represents sea surface temperatures.
  • An updated river model component that simulates surface flows across hillsides and into tributaries before entering the main river channel. It also simulates the speed of water as it moves through the channel, along with water depth.
  • A new set of infrastructure utilities that provide many new capabilities for easier portability, case generation and user customization, testing functionality, and greatly increased robustness and flexibility.

A full list of updates with more technical descriptions can be found at http://www.cesm.ucar.edu/models/cesm2/whatsnew.html.

This image from a global CESM2 historical simulation shows key aspects of the Arctic climate system. The speed at which simulated glacier ice flows over Greenland is represented, with warmer colors indicating faster speeds. The September 2005 sea ice concentration is depicted in grayscale, with white indicating higher ice concentrations. The time series of September mean sea ice extent simulated by CESM2 is in good agreement with the satellite observations provided by the National Snow and Ice Data Center for the late 20th century and early 21st century, with both showing the recent sea ice decline. (©UCAR. Image courtesy of Alice DuVivier, Gunter Leguy, and Ryan Johnson/NCAR. This image is freely available for media & nonprofit use.)

COMMUNITY-DRIVEN, CONTINUOUSLY IMPROVED

Work on CESM2 began in earnest about five years ago, but scientists began tinkering with how to improve the model as soon as CESM1 was released in 2010. It’s no different with CESM2.

“We’ve already started to think about what we can improve for CESM3,” Lamarque said. “We know, for example, that we want to make the ocean model better to expand the kind of scientific questions it can be used to answer.”

Collaboration and input from the broader Earth system science community has always been at the heart of the complex model development facilitated by NCAR. For example, the land model component of the new CESM2 tapped the expertise of more than 50 researchers at 16 different institutions.

CESM, which is freely available, is an important tool for Earth system science researchers across the United States and the globe who are studying everything from the predictability of seasonal droughts to accelerating sea level rise. The NCAR-based model is one of about a dozen leading climate models around the globe that scientists use to research the changing climate and contribute what they find to the Intergovernmental Panel on Climate Change.

Because the Earth system is so complicated, and computing resources are so limited, the computer models used to simulate how Earth’s climate behaves use a mix of equations that actually represent the physics, biology, and chemistry behind the processes that unfold in the Earth system — from evaporation to ozone formation to deforestation to sea ice melt — and “parameterizations,” which simplify small-scale processes and estimate their impacts.
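To make the parameterization idea concrete, here is a toy sketch, loosely in the spirit of relative-humidity cloud schemes. None of this is CESM code; the function, its name, and the threshold value are illustrative inventions.

```python
# Toy illustration (not CESM code): a "parameterization" estimates the
# bulk effect of a process too small for the model grid to resolve.
# Here, sub-grid cloud fraction is diagnosed from the grid-cell mean
# relative humidity using a single tunable threshold parameter.

def cloud_fraction(rel_humidity, rh_crit=0.8):
    """Diagnose sub-grid cloud cover from grid-mean relative humidity.

    rh_crit is a free parameter: the grid-mean humidity at which clouds
    begin to form. Its value is chosen ("tuned") so the model's overall
    climate matches observations, not derived from first principles.
    """
    if rel_humidity <= rh_crit:
        return 0.0
    # Cover ramps from 0 at the threshold up to 1 at saturation.
    return min(1.0, (rel_humidity - rh_crit) / (1.0 - rh_crit))

print(cloud_fraction(0.7))  # below threshold: clear sky -> 0.0
print(cloud_fraction(0.9))  # halfway between 0.8 and 1.0 -> 0.5
```

The tunable `rh_crit` is what distinguishes a parameterization from an equation derived from physics: its value is an empirical choice, which is exactly what the tuning debate in the comments below turns on.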

“CESM2 is representing much more of the physics than past models, and we are doing a much better job of it,” said CESM Chief Scientist Gokhan Danabasoglu, who is now leading the model development effort. “There are numerous new capabilities in all component models as well as significant infrastructure improvements for flexibility and easier portability.”

These improved equations allow the model to do an even better job replicating the real world.

“The model is our lab — the only laboratory we get when studying the climate,” Lamarque said. “So it has to be close enough to the real world to be relevant.”

Bryan A
June 13, 2018 10:10 am

The first thing that stands out to me is the graph included in the “Pretty Color Greenland” graphic.
The CESM2.0 model run start and end levels are identical to the satellite levels.
This screams as to just how much tuning has been done to the model to make it match the satellite.

Kristi Silber
Reply to  Bryan A
June 13, 2018 8:11 pm

It doesn’t say, “identical,” it says, “in good agreement.” Pretty funny that something that is a test of the model’s skill is to you something that makes you assume there is reason not to believe it. How do you know how it was tuned?

Sparky
Reply to  Kristi Silber
June 14, 2018 1:31 am

all climate models are tuned Kristi

Chris Wright
Reply to  Kristi Silber
June 14, 2018 2:33 am

Climate models are full of parameters which can be adjusted to get any answer they want. In fact the press release admits it:
” ……“parameterizations,” which simplify small-scale processes and estimate their impacts.”

This is one of the great frauds of climate science. The models match historical data – because they have been adjusted to match historical data. As Willis observed several years ago, any parameter changes that improve the historical match will tend to be kept. Any changes that make the match worse will tend to be dropped. In other words, evolution driven by the survival of the fittest. But it’s just a type of curve fitting, however sophisticated.

Any fool can predict what has already happened. The only test of a climate model is its future predictions. Now, after thirty years of advanced super computer predictions, we can judge how close to reality they are – assuming it’s even possible to forecast the future (the IPCC admits that it’s impossible due to the chaotic nature of weather and climate). Of course, they have turned out to be laughably wrong. They forecast far more warming than has happened, even after “adjusting” the surface data.

Because climate models are adjusted to fit historical data, any claim that they prove AGW is not just wrong, it’s close to fraudulent.
Chris

Tom Halla
June 13, 2018 10:14 am

Pretty illustrations, but long term forecasting? Why not tweak the Russian INM-CM4 model, which was the only model not running too hot?

knr
June 13, 2018 10:15 am

Any model, open or not, is built on a set of assumptions; it is the quality of and the rationality behind these which cause the problems. While knowing which horse wins is easy after the race, what matters is the ability to know which horse ‘will’ win, and so far that is far from the case.

Latitude
June 13, 2018 10:17 am

” The time series of September mean sea ice extent simulated by CESM2 is in good agreement with the satellite observations provided by the National Snow and Ice Data Center for the late 20th century and early 21st century, with both showing the recent sea ice decline.”

say what?……and the more recent you get…the more they go in opposite directions

Ernie76
June 13, 2018 10:21 am

The statement that closed out this post is the problem –
“The model is our lab — the only laboratory we get when studying the climate,” Lamarque said. “So it has to be close enough to the real world to be relevant.”
The proponents of AGW have relied on the models as the basis for the “science”.

Bruce Cobb
June 13, 2018 10:23 am

“If we add more bells and whistles and dohickeys to our models, it makes them supercalifragilisticexpialadocious”.

Max Dupilka
June 13, 2018 10:23 am

I just finished doing a forecasting contract for surveillance flying along the western Arctic coast of Canada. There is always a problem with very low level marine stratus cloud along the coast and moving inland depending on wind directions. The stratus can cover very large areas. The GFS weather model was interesting in that it does not appear to have the boundary layer physics to handle this. While actual temperatures were around 0-2C in the overcast marine stratus, with drizzle and fog, the GFS model was showing sunny and temperatures of 15-20C. The Canadian models showed a much more realistic forecast. This is just one small sample, but I am sure the physics in the climate models is the same and cannot account for very large areas of low-level stratus.
Having said this, the weather models are heavily relied upon, but you have to know their limitations and weaknesses.

Reply to  Max Dupilka
June 13, 2018 11:54 am

In general, these models suffer from a divergence problem: the longer they run, the less meaningful their results become. While this obviously affects the dynamic behavior, as seen in the ice reconstruction, it can also affect the average behavior, especially when data is matched to the wrong dynamic behavior. It doesn’t have to be off by much per iteration; even the tiniest errors accumulate into big errors over time.

The typical GCMs used for climate modeling suffer from this in spades, as they’re essentially just low-precision weather forecast models.
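The divergence the commenter describes is easy to demonstrate with a minimal sketch. The logistic map below is a stand-in for far more complex model dynamics; nothing here is climate-specific, it only shows how a one-part-in-a-million difference between two runs of a chaotic iteration grows until the trajectories are unrelated.

```python
# Two "runs" of a chaotic iteration, differing initially by 1e-6,
# diverge until their gap is of order 1. The logistic map in its
# chaotic regime (r = 3.9) is used purely as an illustration.

def logistic(x, r=3.9):
    """One step of the logistic map, chaotic for r = 3.9."""
    return r * x * (1.0 - x)

a, b = 0.5, 0.5 + 1e-6   # initial states one part in a million apart
max_gap = 0.0
for _ in range(200):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

print(max_gap)  # grows from 1e-6 to order 1 within a few dozen steps
```

This is why a tiny per-step error need not stay tiny: in a chaotic system it compounds exponentially until the two trajectories carry no information about each other.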

MarkW
June 13, 2018 10:31 am

Does this mean that the predictions made by previous versions of this model are not to be trusted?

Sparky
Reply to  MarkW
June 13, 2018 1:45 pm

The new model has new and improved levels of sophistrication

MarkW
Reply to  Sparky
June 14, 2018 10:24 am

The obfuscation filters are state of the art.

Johann Wundersamer
Reply to  MarkW
June 14, 2018 3:23 pm

MarkW, “Does this mean that the predictions made by previous versions of this model are not to be trusted?”

The models’ predictions are to be trusted in each parallel universe where they turn out right.

Our fault: wrong universe.

tty
June 13, 2018 10:39 am

“significant improvements to its turbulence and convection representations, which open the way for an analysis of how these small-scale processes can impact the climate”

Could it be they have finally noticed that the “small scale” convection process is the dominant energy transport process in the atmosphere, considerably more important than radiation?

June 13, 2018 10:48 am

“Because the Earth system is so complicated, and computing resources are so limited, the computer models used to simulate how Earth’s climate behaves use a mix of equations that actually represent the physics, biology, and chemistry behind the processes that unfold in the Earth system — from evaporation to ozone formation to deforestation to sea ice melt — and “parameterizations,” which simplify small-scale processes and estimate their impacts.”
This reminds me of the Scott Adams Dilbert cartoon:
Climate Scientist: Human Activity is warming the Earth and will lead to a Global Catastrophe
Dilbert: How do scientists know that?
Climate Scientist: It’s easy.
We start with basic science of physics and Chemistry.
Then we measure changes in temperature and carbon dioxide.
We put that data into dozens of different climate models and ignore the ones that look wrong to us.
http://dilbert.com/strip/2017-05-14

June 13, 2018 11:00 am

“CESM, which is freely available, ”

Just gotta go fire up your home-brew petaflop supercomputer and you’re all set. And oh yeah, the electricity to run it and cool it, comes from coal, natural gas, and nuclear.

June 13, 2018 11:08 am

“For example, the land model component of the new CESM2 tapped the expertise of more than 50 researchers at 16 different institutions.”

This is called, “Spreadin’ the gravy around so everyone remains dependent on riding that train into retirement.”

ResourceGuy
June 13, 2018 11:19 am

The true test of a model is its performance with turning points.

Gus
June 13, 2018 11:33 am

If it doesn’t include Svensmark’s effect [1], it’s garbage. And is its convection representation good enough to recover Chilingar’s result [2]? Also, does it account correctly for collision and radiative processes in emission of atmospheric CO2 [3] as pointed out by Smirnov?
[1] https://doi.org/10.1038/s41467-017-02082-2
[2] https://doi.org/10.4236/acs.2014.45072
[3] https://doi.org/10.1088/1361-6463/aabac6

June 13, 2018 12:07 pm

Compare this statement (from above)

The time series of September mean sea ice extent simulated by CESM2 is in good agreement with the satellite observations provided by the National Snow and Ice Data Center for the late 20th century and early 21st century, with both showing the recent sea ice decline.

With this statement,
from “Climate scientists open up their black boxes to scrutiny.”
28 OCTOBER 2016, SCIENCE. sciencemag.org • VOL 354 ISSUE 631, pages 401-402.

“Indeed, whether climate scientists like to admit it or not, nearly every model has
been calibrated precisely to the 20th century climate records—otherwise it would
have ended up in the trash. “It’s fair to say all models have tuned it,”
says Isaac Held,
a scientist at the Geophysical Fluid Dynamics Laboratory, another prominent modeling
center, in Princeton, New Jersey.”

So they tuned / “calibrated” to 20th century climate records; should it be a surprise, then, when the simulated September mean sea ice extent comes out in good agreement with observations?

Junk science in action.

Kristi Silber
Reply to  Joel O'Bryan
June 13, 2018 11:41 pm

Even if that is true – what part of the 20th C? The models could be tuned to a time 100 years before the simulation. But I’m skeptical of his claim anyway.

“22 of 23 groups reported adjusting model parameters to achieve desired properties such as radiation balance at the top of the atmosphere. Percentages are reported based on the fraction of respondents; 83% of centers use atmosphere and land only (fixed sea surface temperatures or a data ocean) to adjust parameters and 44% use single-column models, while 74% perform their adjustment with a preindustrial (1850) coupled atmosphere–ocean configuration and 39% use coupled present-day simulations. Many groups also adjust ocean (48%) and land (39%) model parameters using standalone configurations. In addition, 21% use historical twentieth-century simulations, and 17% use slab ocean models.”
https://journals.ametsoc.org/doi/full/10.1175/BAMS-D-15-00135.1

Reply to  Kristi Silber
June 14, 2018 4:01 am

Here you go Kristi, here is a paper on tuning

https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2012MS000154

Its abstract starts off with: “During a development stage global climate models have their properties adjusted or tuned in various ways to best match the known state of the Earth’s climate system.”

It’s a fascinating read. I’m surprised it got by the gatekeepers, actually.

Ken Lonion
June 13, 2018 12:34 pm

“The model is our lab — the only laboratory we get when studying the climate,”
I like to build radio control airplanes. I can sit in my work room and study all the books and manuals, do all the math and build with my best skill. Until I take the plane out and put it in the air I am only guessing if it will even fly and how well it will perform. I wonder if their lab even has a window since they seem to have no need of the real world.

Gary Pearse
June 13, 2018 12:48 pm

Ya know, the images are so nice and mesmerizing that one could be induced to believe in them.

GeologyJim
June 13, 2018 1:38 pm

I love the irony that NCAR’s giant sooooooper computer that runs all their simulations is located in Wyoming and powered by “evil” coal

Wyoming got the contract because they could guarantee uninterrupted power at the lowest cost

Hmmmmm . . . . isn’t that what we all want?

Reply to  GeologyJim
June 13, 2018 1:47 pm

Colorado and Wyoming share the same electricity grid, and have similar electricity costs.
Utah got the NSA’s super dooper super secret computer center also because it has cheap electricity.

Wyoming got the computing center because it encourages their Republican senators to play along with the climate hustle. That ensures the funding won’t be cut.

Roy Everett
June 13, 2018 1:40 pm

This sounds like John von Neumann’s parameterized elephant to the nth degree. That is, the more parameters you can tweak to get a model to fit the past, the less accurate it is in predicting the future. For example, basic quantum theory has one (Planck’s constant), the value of which is not predicted by the theory and has to be measured to fit reality. Climate change models have an indefinitely high number of parameters and still don’t get the next decade right. Representing the output with beautifully coloured pictures is putting lipstick on a pig.
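The elephant-fitting point can be shown with a toy sketch. The six “historical” data points below are invented for illustration; the exact-interpolation model (six free parameters) reproduces them perfectly, then extrapolates wildly compared with the gentle trend in the data.

```python
# Sketch of the "parameterized elephant": a model with enough free
# parameters matches the past exactly yet says little about the future.
# We fit 6 made-up "observations" exactly with a degree-5 polynomial
# (Lagrange interpolation), then extrapolate.

def lagrange_eval(xs, ys, x):
    """Evaluate the unique degree-(n-1) polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Hypothetical "historical record": a ~0.1-per-step trend plus wiggles.
xs = [0, 1, 2, 3, 4, 5]
ys = [0.00, 0.12, 0.18, 0.35, 0.38, 0.52]

# Perfect hindcast: the fit passes through every observation.
print(all(abs(lagrange_eval(xs, ys, x) - y) < 1e-9 for x, y in zip(xs, ys)))

# "Forecast" three steps past the data: roughly 31, versus the ~0.8 a
# simple trend extrapolation would give.
print(lagrange_eval(xs, ys, 8))
```

The perfect hindcast and the absurd forecast come from the same six parameters, which is the commenter's point in miniature: hindcast skill bought with free parameters is no evidence of forecast skill.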

Reply to  Roy Everett
June 13, 2018 2:00 pm

More importantly, it allows them to tune to a CO2 sensitivity that confirms their preconceived expectations.
They fool themselves. And then they can’t understand why the model runs hot as the decade rolls on.

Percy Jackson
Reply to  Joel O’Bryan
June 13, 2018 3:30 pm

Hi Joel,
The program is open source. Hence you should be able to prove that claim about them tuning the CO2 sensitivity by listing the lines of code where that is done and also show what the correct parameters are. All of which would make an excellent basis for a scientific paper which I look forward to reading.

TonyL
Reply to  Percy Jackson
June 13, 2018 5:54 pm

?????????
CO2 tuning is basic to the models.
It starts like this:
1) A CO2 sensitivity is selected. This encompasses both the ECS (equilibrium climate sensitivity) and the TCR (transient climate response).
2) The model is tuned selecting some amount of particulates/SO2 to cool things down again, particularly for hindcasting.
3) Both these parameters are then tuned to give the “best” hindcast to the selected historical record.

As a result, models with high ECS and TCR also have high levels of particulates/SO2 needed to cool them down for the historical hindcast. These models tend to run hotter than the others in the collection.
The converse is also true. Models with lower ECS and TCR generally have lower particulates, and run a bit less hot going forward.

All this is:
1) Freely acknowledged
2) The tuning processes are described in the literature.
3) Widely known
4) Foundational to the current state of modeling

Indeed, all the models with all the different ECS, TCR, and particulate values led to CMIP, the Coupled Model Intercomparison Project, where they contrast and compare model parameters and results.

The CMIP (now at CMIP5) results form an important part of the IPCC Assessment Reports.

Percy Jackson
Reply to  TonyL
June 13, 2018 9:37 pm

Item 1 on that list is wrong. Both the ECS and the TCR are outputs of the models, not input tuning parameters. But again, if I am wrong, show me the lines in the source code where ECS is given as a tuning parameter.

Kristi Silber
Reply to  TonyL
June 13, 2018 10:47 pm

Where does it say this? How is it that climate sensitivity is selected, when that is also a result of the modeling?

I suggest you read this:
https://journals.ametsoc.org/doi/full/10.1175/BAMS-D-15-00135.1

TonyL
Reply to  Kristi Silber
June 14, 2018 1:46 am

Circular reasoning.
They program it in, then the program echoes it out again, and they claim the initial assumptions were correct because the model proves it.
Circular reasoning at its finest.

@ Kristi Silber
Thank you very much.
The paper you reference gives much info on tuning and some reference to ECS and TCR (although rather obliquely)
I grabbed some of it and put it here to illustrate.
Some researchers claim not to input ECS/TCR; fair enough.
Other researchers claim not to put in ECS/TCR, but do, under one guise or another.
*Charming*
Here is the quote.
“Some modeling groups claim not to tune their models against twentieth-century warming; however, even for model developers, it is difficult to ensure that this is absolutely true”
Kristi:
Do you not have a problem with this? Even their own peers cannot tell if it is “absolutely true”. Perhaps I am old school, but when a scientist LIES, it is pitchforks and torches time. Burn them at the stake.
Maybe that is why we behaved better back in the day.

From The Paper Kristi Recommended:
So no accusations of “Out Of Context”.
The amplitude of the twentieth-century warming depends primarily on the magnitude of the radiative forcing, the climate sensitivity, and the efficiency of ocean heat uptake. By linearizing about a basic stationary climatic state, the global-mean temperature change for a gradually increasing forcing can be approximated as T ≈ F/(κ − λ), where T denotes global-mean surface temperature, F is an imposed radiative forcing, κ is the deep-ocean heat uptake efficiency, and λ is the feedback parameter that is inversely proportional to equilibrium climate sensitivity (ECS; ECS ≈ −F/λ). Climate models have values of λ that range from −0.6 to −1.8 W m^-2 K^-1 and κ ranges from approximately 0.5 to 1.2 W m^-2 K^-1. On average, in models the denominator (κ − λ) is about 2 W m^-2 K^-1, and in the year 2003, the forcing is around 1.7 W m^-2 (Forster et al. 2013).

The often-deployed paradigm of climate change projection is that climate models are developed using theory and present-day observations, whereas ECS is an emergent property of the model and the matching of the twentieth-century warming constituting an a posteriori model evaluation. Some modeling groups claim not to tune their models against twentieth-century warming; however, even for model developers, it is difficult to ensure that this is absolutely true in practice because of the complexity and historical dimension of model development.
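As a quick consistency check on the quoted relation T ≈ F/(κ − λ), the numbers below plug in values from the quoted ranges. The specific κ and λ are my own mid-range picks, chosen so that (κ − λ) equals the “about 2 W m^-2 K^-1” average the paper cites.

```python
# Plugging quoted numbers into the linearized relation T = F / (kappa - lambda).
# kappa and lam are illustrative mid-range picks, not values from any one model.

forcing = 1.7   # W m^-2, year-2003 forcing (Forster et al. 2013, as quoted)
kappa = 0.85    # W m^-2 K^-1, ocean heat uptake efficiency (quoted range 0.5-1.2)
lam = -1.15     # W m^-2 K^-1, feedback parameter (quoted range -0.6 to -1.8)

warming = forcing / (kappa - lam)
print(warming)  # 0.85 K, the order of the observed 20th-century warming
```

That the quoted average denominator and forcing land near the observed warming is the point of the paper's paragraph: the match can emerge from the linearized balance rather than from explicit tuning.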

Kristi Silber
Reply to  TonyL
June 14, 2018 10:37 pm

“Even their own peers can not tell if it is “absolutely true”. Perhaps I am old school, but when a scientist LIES, it is pitchforks and torches time. Burn them at the stake.”

You provided context, but misinterpreted the quote. The author is saying that because of the complexity of the models, and the fact that they are developed and improved over many years, there may be an element of tuning to the 20th C that not even the modelers themselves are aware of.

The fact that you read it as an accusation that scientists lie is revealing.

June 13, 2018 4:16 pm

Lots of claims about the sophistication and capabilities of the new, CESM2 model. But it’s all for naught if the model cannot reproduce climate history from starting boundary conditions and a given, claimed driver of climate such as human-originated CO2 released into the Earth’s atmosphere.

I’m betting that CESM2 will fail to reproduce the rise in “global” temperatures from 1900 to present when inputting the cumulative release of CO2 attributed to humans over that time, especially the pause/slight decline from 1940-1975 and the pause/drastic slowdown from 2001-present.

In other words, no matter how many equations it contains, it’s still likely to be GIGO.

jaffa
June 13, 2018 4:27 pm

Is the science settled now? Because I thought it was settled years (decades) ago. I don’t understand how the models can be flawless and the science settled only for them to be subsequently improved, I guess that’s because I’m applying logic rather than climate science.

Kristi Silber
Reply to  jaffa
June 13, 2018 10:52 pm

No one ever claimed the models were flawless. Nor is this one. No one is claiming that climate science is “settled,” either – if anything, it’s only the fact that CO2 is a GHG and the planet is warming because of it (on average, over the course of decades) – not how much it will warm or what will happen as a result. You are not applying either logic or science, you are repeating false claims about what scientists say.

Reply to  Kristi Silber
June 14, 2018 9:32 am

Kristi goes for inadvertent irony.

Kristi says to jaffa68, “you are repeating false claims about what scientists say.” followed by “the fact that CO2 is a GHG and the planet is warming because of it;” Kristi’s own repetition of false claims.

Point us to the falsifiable physical theory of climate that predicts your claim, Kristi. That theory will need to include details of the impact of CO2 on convection, evaporation, cloud type, amount, distribution, and persistence, and precipitation.

It will also predict the thermal flux and periodicity of the AMO and PDO, among other oscillations, their coupling strengths, and the transfer of energy among them, and between them and the atmosphere.

None of that is presently available.

You have no idea what you’re talking about, and neither does anyone else in your camp. You’re all so ideologically benighted that you have no idea that you have no idea.

MarkW
Reply to  Kristi Silber
June 14, 2018 2:24 pm

I’m wondering if Kristi really believes that anything she wasn’t told in the classroom didn’t happen.

“The science is settled” is the claim that most climate scientists have been using for years to avoid debating anyone who disagreed with them. It’s been said, even if you were told not to believe it.

Unless someone can show that the amount of warming we can expect for CO2 is bad, there is no reason to oppose it’s continued increase.
If 5000ppm wasn’t enough to cause problems, I don’t see how 500ppm is going to.

Kristi Silber
Reply to  MarkW
June 14, 2018 11:14 pm

Brilliant, as usual.

Reply to  Kristi Silber
June 14, 2018 3:45 pm

Well, I’m impressed.

Our Kristi Silber seems to have been the recipient of an EPA graduate fellowship grant over 1998-2000 to study “Effects of Lantana camara on Rainforest Regeneration in Northeastern Australia,” carried out at Rutgers.

Very wonderful, and I’m sure much was learned. Did you publish, Kristi?

But it seems you’re a biologist. How much physics and math do you know, Kristi? Have you ever analytically evaluated climate models? Where can we access your work on that?

Here’s my surmise about you and AGW: you have opinions and talking points, but no knowledge.

Reply to  Pat Frank
June 14, 2018 4:08 pm

The opinion of a biologist on AGW is no different than the opinion of a chemist on AGW.

Reply to  Betty Pfeiffer
June 15, 2018 8:59 am

Except I’ve done the work, Betty. My position is not an opinion, like Kristi’s (or yours, undoubtedly). I have analytically justified my case. Here, for example, and here.

I assess physical error, which is something chemists know about, and about which climate modelers evidently know nothing.

Kristi Silber
Reply to  Pat Frank
June 14, 2018 11:13 pm

You Googled me! What an honor. That’s right, an EPA STAR Fellowship, a Rutgers Predoctoral Excellence Fellowship, and a Fulbright Fellowship. Was going for a PhD in ecology and evolution, but due to health problems couldn’t finish the thesis. Got a Master’s. Worst thing that ever happened to me. I loved my profession, love science, was dang good at it.

No, I have never analytically evaluated climate models. I didn’t know that was a prerequisite for expressing an opinion or idea about them. That doesn’t mean I have no knowledge, but I never claimed to be an expert. I’m more interested in people’s reasoning and their distrust in science.

“Point us to the falsifiable physical theory of climate that predicts your claim, Kristi. That theory will need to include details of the impact of CO2 on convection, evaporation, cloud type, amount, distribution, and persistence, and precipitation.”

No, a falsifiable physical theory that CO2 is a GHG and the planet is warming because of it wouldn’t have to include all those variables. A theory is just a theory, not a claim. How about this one: http://www.rsc.org/images/Arrhenius1896_tcm18-173546.pdf or
http://nsdl.library.cornell.edu/websites/wiki/index.php/PALE_ClassicArticles/archives/classic_articles/issue1_global_warming/n5._Ekholm__1901.pdf

Reply to  Kristi Silber
June 15, 2018 9:14 am

A falsifiable physical theory of climate tells you what happens to the energy retained by CO2, Kristi. Arrhenius and climate modelers assume it all just stays in the atmosphere. There’s no reason to think that’s true.

I have no distrust in science. There is no science in the claim that CO2 emissions are warming the climate. The reason is that climate models cannot resolve the tiny perturbation that is CO2 forcing.

Look at it this way. The annual perturbation from CO2 forcing is about 0.035 W/m^2. That’s an increase of 35 milliwatts per square meter per year.

Model cloud forcing error alone is ±4 W/m^2. So, how is a model expected to resolve a perturbation ±114 times below its lower limit of resolution?

Climate model error runs to about ±100 W/m^2 total. That means models do not correctly partition the available energy into the climate sub-systems.

How is such a model able to resolve a 35 milliWatt perturbation?

Apart from that, the uncertainty in the surface flux budget is ±17 W/m^2. How is a supposed imbalance of some 0.6 W/m^2 in surface flux to be resolved when the budget isn’t known to better than ±17 W/m^2?
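The ratios behind this argument are simple arithmetic on the commenter's stated figures. Whether model errors actually propagate this way is exactly what modelers dispute; the sketch below only reproduces the numbers as given, without endorsing the inference.

```python
# Reproducing the commenter's resolution ratios from the figures stated
# in the comment. These are the commenter's numbers, taken at face value.

annual_co2_increment = 0.035     # W m^-2 per year (35 mW m^-2 per year)
cloud_forcing_error = 4.0        # W m^-2, stated model cloud forcing error
surface_flux_uncertainty = 17.0  # W m^-2, stated surface budget uncertainty
claimed_imbalance = 0.6          # W m^-2, supposed surface flux imbalance

print(round(cloud_forcing_error / annual_co2_increment))    # ~114
print(round(surface_flux_uncertainty / claimed_imbalance))  # ~28
```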

Climate modelers claim all the errors subtract away. They’ve never demonstrated that case and the claim of error-free anomalies is nowhere to be found in the IPCC ARs.

None of the AGW work meets the standard of science requiring attention to detail. The whole AGW thing represents a descent into pseudoscience.

That’s why I oppose the AGW nonsense. My respect for science compels me to it.

Science gave us everything we have of value, including our humane ethics. AGW is a corrosive attack on all of it.

ROM
June 13, 2018 7:12 pm

In the final analysis, the modellers of the global, local, and regional climates are attempting to “Predict the Future”.

They might do just as well in their prophecies if they followed the practices of Pythia, the Oracle of Delphi.

The Oracle during her prophesying sat above a fissure in the rocks from which a stream emerged and gave off various possibly hallucinogenic gases.
Rounding up the odd modelling maiden and a goodly supply of hallucinogens for her to partake of would probably provide as accurate a prediction of the future climate as any funding-driven modeller who is busily inputting his or her own personal biases into a mega-billion-dollar climate model and then claiming the model is the guaranteed prophetic prediction of the future of the global climate.
———
When models can accurately predict the upcoming ENSO phase, the most climate- and weather-influencing phenomenon on this planet, its timing, its intensity or otherwise, its changes as it evolves, and where in the equatorial Pacific it all happens, and can make an accurate prediction of every aspect of that ENSO more than a couple of months ahead, then just maybe the climate models might begin to have a tiny degree of verisimilitude.

Other than that, climate models are merely projections of the climate modeller’s own deep biases and beliefs, all formatted around his or her own brand of the current climate ideology.

June 13, 2018 7:27 pm

My prediction is that, with the new “improvements,” at least one previously considered “correct/best” parameter will have to change to make the model “work.”

June 13, 2018 7:38 pm

“The model is our lab — the only laboratory we get when studying the climate,” Lamarque said.

A clear sign, if we ever needed another one at this late date, that climate modelers are not scientists.

Earth is the laboratory for climate scientists, just as interstellar space is the laboratory of astrophysicists.

It’s plain incredible that Lamarque can be so mindless about his own field. Models are never, ever laboratories. Never. Not ever.

All those vaunted improvements in the NCAR model are put in without any improvement in understanding of the physics of climate.

How can they possibly know that their representations are correct? They’re putting in descriptions of what they think it ought to be rather than what is known to be physically correct.

It’s post modern critical theory with mathematics, where what should be demonstrated is instead assumed and made axiomatic.

Kristi Silber
Reply to  Pat Frank
June 13, 2018 11:15 pm

I see. So astrophysicists go to interstellar space each day to do their work? Do they directly measure the distances between Earth and the closest black hole? Get out a reeeealllllly long measuring tape and stretch it the full distance? Otherwise, “How can they possibly know that their representations are correct?” Theory and math and data, perhaps? What is it that you think is made axiomatic by climate modeling?

Methinks you haven’t done a lot of climate modeling yourself. Maybe you should read up a bit on how it’s done if you have questions about it – and certainly you should if you believe it’s just “post modern critical theory with mathematics.” Someone has been leading you astray, I’m afraid.

Reply to  Kristi Silber
June 14, 2018 9:21 am

Observation and physics, Kristi. That’s what astrophysicists deploy. The first part of your post is scientifically fatuous; made stupid by your attempt to be sardonic.

That added CO2 necessarily heats the atmosphere is axiomatic in modeling. The jump from radiation physics to necessary atmospheric heating is based on no physical theory of climate and has never been demonstrated.

Here is some of what I’ve done concerning climate modeling, Kristi, which I’d expect is already more actual thinking about it than you’ve ever done.

After you’re done with that, take a look at my published Negligence, Non-Science and Consensus Climatology. It refutes all three legs of the IPCC case: modeling, so-called paleo-temperature reconstructions, and the global air temperature record.

It’s not that I merely “believe” climate modeling is post modern critical theory with mathematics, Kristi. It’s that I know it. I’ve demonstrated it.

MarkW
Reply to  Kristi Silber
June 14, 2018 2:27 pm

How many climate models have you written, Kristi? Or do you just defend them because those who lead you about by the nose have told you that they must be defended?

The many and varied problems with the models are well documented, starting with the fact that they run way too hot.

Phoenix44
June 14, 2018 2:12 am

In my first job I built models of various mining projects using Lotus 1-2-3. As my abilities and understanding increased (or so I thought), I made my models bigger and, I thought, better. After all, more detail means more accuracy, surely?

I still recall reviewing it with my boss. “Spurious accuracy!” he said, throwing the print-out back to me. “You cannot get a more accurate assumption simply by using more assumptions to make it.”

And he was right. True, a smelting charge was made up of two numbers multiplied together, but I didn’t know those two numbers for the future any more than I knew the smelting charge. So I took the extra lines out, and my model still failed to predict the future! But at least it took less time and was easier to understand and audit.

Today I see just about every model making the same error.

ren
June 14, 2018 4:50 am

Will the new models take into account winter changes in the geopotential height above the polar circle? Is it related to low solar activity?

Walter Sobchak
June 14, 2018 7:46 am

Mathematical onanism with leather seats, automatic transmission, and power brakes.

Johnny Cuyana
June 16, 2018 7:14 am

I’m from the outside — outside of the “climate study” industry — and am trying to put into perspective the significance of this new CESM app:

1] More or less, globally, how many climate-model competitors does NCAR have [read: the NSF and U.S. Department of Energy Office of Science team that has funded this CESM modeling app]?

By “competitors” I mean those entities who have also built apps which approach/equal/surpass the capabilities of the CESM [version 2]; and, who may have “experts” who can compete legitimately with those of NCAR.

[Of course, I would like to see a direct healthy discussion between these competitors … regarding whose techniques are “more correct” and which models are more/sufficiently comprehensive.]

2] What climate modelling app does the IPCC employ? Is their app a competitor to CESM?

Last question:

3] Assuming I am not comparing grapes and pears … other than the actual climate modelers such as NCAR and the IPCC, who are the “watchers” — official or otherwise, and aside from the at-large scientific community, the general public, and politicians — responsible for conducting peer review and critiquing their products?