NCAR's new 2010 climate model for the next IPCC report

New computer model advances climate change research

From an NCAR/UCAR press release

BOULDER—Scientists can now study climate change in far more detail with powerful new computer software released by the National Center for Atmospheric Research (NCAR).

Modeling climate’s complexity. This image, taken from a larger simulation of 20th century climate, depicts several aspects of Earth’s climate system. Sea surface temperatures and sea ice concentrations are shown by the two color scales. The figure also captures sea level pressure and low-level winds, including warmer air moving north on the eastern side of low-pressure regions and colder air moving south on the western side of the lows. Such simulations, produced by the NCAR-based Community Climate System Model, can also depict additional features of the climate system, such as precipitation. Companion software, recently released as the Community Earth System Model, will enable scientists to study the climate system in even greater complexity.

The Community Earth System Model (CESM) will be one of the primary climate models used for the next assessment by the Intergovernmental Panel on Climate Change (IPCC). The CESM is the latest in a series of NCAR-based global models developed over the last 30 years. The models are jointly supported by the Department of Energy (DOE) and the National Science Foundation, which is NCAR’s sponsor.

Scientists and engineers at NCAR, DOE laboratories, and several universities developed the CESM.

The new model’s advanced capabilities will help scientists shed light on some of the critical mysteries of global warming, including:

  • What impact will warming temperatures have on the massive ice sheets in Greenland and Antarctica?
  • How will patterns in the ocean and atmosphere affect regional climate in coming decades?
  • How will climate change influence the severity and frequency of tropical cyclones, including hurricanes?
  • What are the effects of tiny airborne particles, known as aerosols, on clouds and temperatures?

The CESM is one of about a dozen climate models worldwide that can be used to simulate the many components of Earth’s climate system, including the oceans, atmosphere, sea ice, and land cover. The CESM and its predecessors are unique among these models in that they were developed by a broad community of scientists. The model is freely available to researchers worldwide.

“With the Community Earth System Model, we can pursue scientific questions that we could not address previously,” says NCAR scientist James Hurrell, chair of the scientific steering committee that developed the model. “Thanks to its improved physics and expanded biogeochemistry, it gives us a better representation of the real world.”

Scientists rely on computer models to better understand Earth’s climate system because they cannot conduct large-scale experiments on the atmosphere itself. Climate models, like weather models, rely on a three-dimensional mesh that reaches high into the atmosphere and into the oceans. At regularly spaced intervals, or grid points, the models use laws of physics to compute atmospheric and environmental variables, simulating the exchanges among gases, particles, and energy across the atmosphere.
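
As a toy illustration of that grid-point approach (not the CESM's actual numerics), the sketch below advances a single made-up temperature field on a coarse latitude-longitude grid by simple diffusion; the grid size, time step, and diffusivity are arbitrary assumptions chosen only to keep the example short and stable.

```python
# Toy illustration of a grid-point model step; NOT the CESM's actual numerics.
# A single made-up field (a temperature anomaly) is advanced by explicit
# diffusion on a coarse latitude-longitude grid; sizes and coefficients are
# arbitrary assumptions.
import numpy as np

nlat, nlon = 90, 180                 # roughly a 2-degree grid
T = np.random.randn(nlat, nlon)      # stand-in initial anomalies (K)
kappa_dt = 0.2                       # diffusivity * time step (grid units)

def step(T):
    # Neighboring cells: periodic in longitude, clamped at the poles.
    east  = np.roll(T, -1, axis=1)
    west  = np.roll(T,  1, axis=1)
    north = np.vstack([T[:1, :], T[:-1, :]])
    south = np.vstack([T[1:, :], T[-1:, :]])
    laplacian = east + west + north + south - 4.0 * T
    return T + kappa_dt * laplacian

for _ in range(100):                 # 100 "time steps"
    T = step(T)
print("global-mean anomaly after 100 steps:", round(float(T.mean()), 3))
```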

Because climate models cover far longer periods than weather models, they cannot include as much detail. Thus, climate projections appear on regional to global scales rather than local scales. This approach enables researchers to simulate global climate over years, decades, or millennia. To verify a model’s accuracy, scientists typically simulate past conditions and then compare the model results to actual observations.
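
In its simplest form, that comparison amounts to computing error statistics between a simulated series and an observed one over the same period. A minimal sketch, with placeholder arrays standing in for model output and observations:

```python
# Minimal sketch of hindcast verification: compare a simulated series with an
# observed one over the same years. Both arrays here are placeholders, not
# real model output or real observations.
import numpy as np

years = np.arange(1900, 2001)
observed  = 0.007 * (years - 1900) + 0.1 * np.random.randn(years.size)
simulated = 0.006 * (years - 1900) + 0.1 * np.random.randn(years.size)

bias = float((simulated - observed).mean())
rmse = float(np.sqrt(((simulated - observed) ** 2).mean()))
corr = float(np.corrcoef(simulated, observed)[0, 1])
print(f"bias {bias:+.3f} K, rmse {rmse:.3f} K, correlation {corr:.2f}")
```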

A broader view of our climate system

The CESM builds on the Community Climate System Model, which NCAR scientists and collaborators have regularly updated since first developing it more than a decade ago. The new model enables scientists to gain a broader picture of Earth’s climate system by incorporating more influences. Using the CESM, researchers can now simulate the interaction of marine ecosystems with greenhouse gases; the climatic influence of ozone, dust, and other atmospheric chemicals; the cycling of carbon through the atmosphere, oceans, and land surfaces; and the influence of greenhouse gases on the upper atmosphere.

In addition, an entirely new representation of atmospheric processes in the CESM will allow researchers to pursue a much wider variety of applications, including studies of air quality and biogeochemical feedback mechanisms.

Scientists have begun using both the CESM and the Community Climate System Model for an ambitious set of climate experiments to be featured in the next IPCC assessment reports, scheduled for release during 2013–14. Most of the simulations in support of that assessment are scheduled to be completed and publicly released beginning in late 2010, so that the broader research community can complete its analyses in time for inclusion in the assessment. The new IPCC report will include information on regional climate change in coming decades.

Using the CESM, Hurrell and other scientists hope to learn more about ocean-atmosphere patterns such as the North Atlantic Oscillation and the Pacific Decadal Oscillation, which affect sea surface temperatures as well as atmospheric conditions. Such knowledge, Hurrell says, could eventually support multiyear forecasts of potential climate impacts, such as one region facing a high probability of drought while another faces several years of cold, wet conditions.

“Decision makers in diverse arenas need to know the extent to which the climate events they see are the product of natural variability, and hence can be expected to reverse at some point, or are the result of potentially irreversible, human-influenced climate change,” Hurrell says. “CESM will be a major tool to address such questions.”

August 20, 2010 6:14 am

Frank K. says:
August 19, 2010 at 6:00 am
I applaud the NCAR effort. Please note the difference between this:
http://www.cesm.ucar.edu/models/cesm1.0/
and this…
http://www.giss.nasa.gov/tools/modelE/
I would like to know why we, the taxpayers, are funding (at least) two identical research efforts! We should consolidate ALL climate GCM research at NCAR – that would save a lot of money and resources.
—…—…—
I would politely disagree though:
I DEMAND that (at least) 2 different and independently-produced models compete with each other.
Preferably, each model would be produced privately (competitively and for profit) so government cannot interfere with funding rewards (for their zealots) and funding restrictions (against their critics) – and today’s international governments desperately want today’s Mann-made CAGW theories to be validated so the UN, IPCC, GISS, NASA, NOAA, NSIDC, Penn State, the Democrat party, ecologists, and “community organizers” (etc. etc. etc.) can continue their agendas.

Richard M
August 20, 2010 6:56 am

I think the standard phrase of GIGO, no matter what you call it, is not the primary problem. Even if you had perfect data, an incomplete program is worthless.
I look at the GCMs as the equivalent of a bridge that is only 1% complete (or much less). Just how useful would that be to someone wanting to use the bridge? If you don’t sufficiently understand the system being modelled, it can’t be modelled … it’s that simple.

Martin Lewitt
August 20, 2010 7:50 am

The models are not worthless. They have already contributed qualitative insights into the climate that have been confirmed with follow-up observations. However, the recent warming is an energy imbalance of less than 1 W/m^2, globally and annually averaged, and not within the range of quantitative skill of the models. All the AR4 models have significant positive feedback to CO2 forcing, and there is no model-independent evidence that the feedback to CO2 forcing is positive rather than negative for the current climate regime. Models with correlated errors in several areas larger than 1 W/m^2, and that aren’t able to reproduce the amplitude of the observed water cycle and solar cycle responses or the cloud or surface albedo feedbacks, don’t give us confidence in their attribution or their projections. The direct effects of CO2 forcing would result in a climate sensitivity somewhere in the neighborhood of 1.1 degrees C and would make it responsible for about a third of the recent warming … hardly a cause for concern or uneconomic measures. Research should focus on model-independent means of assessing whether the net feedbacks are negative or positive, and then on getting the models on the right track. Of course, we can still try to improve the physical realism in the various components of the models. Perhaps a threshold in matching the observations will be reached that will inspire quantitative confidence in their representation of the phenomena of interest.
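For reference, a rough version of the arithmetic behind that 1.1-degree ballpark, using the standard logarithmic forcing approximation and an assumed no-feedback response near 3.3 W/m^2 per degree:
```python
# Back-of-envelope, no-feedback warming from doubled CO2.
# Forcing uses the standard approximation dF = 5.35 * ln(C/C0) W/m^2;
# the no-feedback (Planck) response of ~3.3 W/m^2 per K is an assumed value.
import math

dF = 5.35 * math.log(2.0)   # ~3.7 W/m^2 for a CO2 doubling
planck = 3.3                # W/m^2 per K, assumed restoring response
dT = dF / planck            # ~1.1 K with no amplifying or damping feedbacks
print(f"forcing ~{dF:.2f} W/m^2, no-feedback warming ~{dT:.2f} K")
```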

Ric Locke
August 20, 2010 8:44 am

More handwaving.
(1) The northern coast of Alaska is a little over 70N, so counting the aliasing/pixelation rings in the Arctic gives us two-degree cells. 180 (longitude) x 90 (latitude) gives 16,200 cells.
A fundamental requirement of FEM (Finite Element Modeling, which this is a version of) is that the cells must be small enough that the contents of any given cell are, or can be assumed to be, homogeneous — interactions take place between cells. When I can walk across my place (linear distance roughly 150 meters) and find a two-degree difference in air temperature, I have [ahem] serious doubts that that condition obtains in their model.
An engineer who claimed to be adequately modeling the hinge on a flip-phone — material totally homogeneous, response and cell interactions well understood — with only 10K or so cells would be fired and replaced by someone who knew what he/she was doing and wasn’t too lazy to get up in the morning. Yes, yes, I know, that’s why we have to have all those expensive supercomputers. Bulls*t. Even the crudest FEM program nowadays can vary the size of the cells to meet the requirement of the analysis while reducing the computational load. Look again at the Arctic. There’s pixelation/aliasing between rings, but not within them. That says they’re sticking with two degrees all the way to the poles; that is, 88 – 90 degrees is represented by 180 triangles when it ought to be one cell (actually, that circle is smaller than one cell should be). Worse: if the cell-interaction algorithm is so crude as to require the same numbers all the way down, those aren’t triangles, they’re “rectangles” with one zero side, requiring a discontinuity in the algorithm to figure out how they fit together. Are they even weighting by cell size? It doesn’t show in the results!
Total area of the surface of the earth is about 5.1 x 10^8 sq. km. A two-degree cell at the equator is about 5 x 10^5 sq. km. The effective resolution of their model is 1K cells — but they’re spending the computational resources needed for over 16X as many PLUS having to account for pathological conditions at the poles! This says loud and clear that These Are People Who Do Not Know What The F* They Are Doing. A Chinese plastic-molding factory making PET water bottles would put them on the street in a heartbeat.
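For scale, a rough sketch (spherical geometry, a made-up script, not theirs) of how fast a fixed two-degree cell shrinks toward the poles, with areas shown relative to a cell straddling the equator:
```python
# Relative area of a fixed 2-degree by 2-degree cell at various latitudes,
# compared with a cell straddling the equator (spherical Earth). The constant
# longitude width cancels in the ratio; numbers are approximate.
import math

def cell_area_ratio(lat_deg, width_deg=2.0):
    lo, hi = math.radians(lat_deg), math.radians(lat_deg + width_deg)
    equator = 2.0 * math.sin(math.radians(width_deg / 2.0))
    return (math.sin(hi) - math.sin(lo)) / equator

for lat in (0, 30, 60, 80, 88):
    ratio = cell_area_ratio(lat)
    print(f"cell starting at {lat:2d}N: {ratio:.3f} of an equatorial cell")
```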
Do they have anybody on their overpaid and undertasked staff who has ever heard of NASTRAN, let alone any familiarity with it?
(2) One of the subjects of discussion re: Dr. Curry the other day was the use of EOFs. That discussion never touched on the fact that they’re using “functions” in the first place. Functions are handy for simpleminded interpolations, because if the function doesn’t fit the data you just pick one with more degrees of freedom. Voilà! It fits. Unfortunately, it says nothing about the behavior of the function between data points — you can get multiple cycles of variation, especially when (as here) the data are sparse and not well controlled or characterized. Applied to a classic FEM (e.g., for stress in a plastic part), it means you get zones and ripples of stress patterns that do not appear in the real piece being modeled. That effect is vastly increased if the underlying data isn’t reliable, because the function picked to accommodate an erroneous data point by definition has to have more degrees of freedom, and thus is virtually certain to have unpredictable behavior between data points — even valid ones.
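A quick, made-up illustration of that off-sample behavior (sparse points, one bad value, fitted at low and high degree, then evaluated on a dense grid between the samples):
```python
# Sparse samples with one bad value, fitted at low and high polynomial degree,
# then evaluated on a dense grid between the sample locations. The numbers are
# made up; the point is how the "perfect" high-degree fit behaves off-sample.
import numpy as np

x = np.linspace(0.0, 6.0, 13)
y = np.sin(x)
y[6] += 2.0                        # one erroneous data point in the middle

xs = np.linspace(0.0, 6.0, 601)    # dense grid between the samples
for degree in (3, 12):
    p = np.polynomial.Polynomial.fit(x, y, degree)
    print(f"degree {degree:2d}: data range about +/-{np.abs(y).max():.1f}, "
          f"max |fit| between samples {np.abs(p(xs)).max():.1f}")
```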
Thirty or forty years ago, before this effect was understood and before there was enough computer power available to improve the process, it was one of the banes of my existence. It can be disconcerting, to say the least, to go to a grid cell containing a check point, find that the model-predicted value is off by an order of magnitude or worse, and be totally unable to find out which data point — perhaps tens or hundreds of cells away — was erroneous, requiring the modeler to add more degrees of freedom to accommodate it. After all, it fits perfectly! And there’s a perfectly plausible way to make the situation worse while improving the fit — simply allow the algorithm to assume that the data points might be in error, and “adjust” them to conform. I would be willing to bet that at least some, if not all, of the adjustments to the datasets that so puzzle us come from that. In one case I was involved with long ago (early Seventies), the procedure ended up declaring that the riverine marshes north and east of Mobile, AL were at an elevation of roughly 100 meters AMSL, which is [ahem] not actually the case.
The solution, in my field and other applications of modeling, was linearization and block adjustment. Build the set of equations describing the interactions between cells, and reduce them to linear partial differentials; choose a step size for which the assumption of linearity is approximately valid. Introduce strawman correlation coefficients, solve the resulting system of linear equations (the “block”) using matrix methods, calculate the errors (“residuals”), then invert the matrix and use the residuals (fractions thereof, the “weights”) to adjust the correlations. Iterate until the residuals are small enough (an aesthetic consideration to some degree), and you have a set of equations that may not be absolutely correct, but at least conforms to the data without introducing wildly varying interpolations and/or extrapolations that put the unknown areas at the borders on Alpha Centauri B-II.
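In bare-bones form (made-up numbers, none of the real cell-interaction physics or weighting), that loop looks something like this:
```python
# Bare-bones version of the iterative adjustment described above: solve the
# linearized block of equations, inspect the residuals, apply a damped
# (fractional) correction, repeat. A, d, and the damping are made-up stand-ins
# for the real cell-interaction equations and weights.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 5))                  # linearized equations (50 obs, 5 unknowns)
p_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
d = A @ p_true + 0.05 * rng.normal(size=50)   # observations with small errors

p = np.zeros(5)                               # strawman starting values
for iteration in range(20):
    residuals = d - A @ p                     # misfit at the current estimate
    if np.linalg.norm(residuals) < 0.5:       # "small enough" is a judgment call
        break
    delta, *_ = np.linalg.lstsq(A, residuals, rcond=None)
    p += 0.5 * delta                          # damped correction toward the solution
print("recovered parameters:", np.round(p, 2))
```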
This procedure is built into any credible modern FEM program. It isn’t always used, because the computational requirements are high and it isn’t necessary in well-controlled and well-understood cases, but it’s available. I see no hint, at NCAR or any of the other climate-modeling groups, that they even know it exists, let alone are making any attempt to use it. If anybody at WUWT, or in the “skeptic” community in general, has access to a modern FEM program and knows how to use it — I don’t fit either criterion — it might be worthwhile putting together a simpleminded model using only a few variables. It would be a lot of work, but I’d be willing to bet that a 1000-cell model using only half a dozen correlations would fit the data better than any of the multi-million-dollar boondoggles produced by the current set of eggheads.
Regards,
Ric

Ric Locke
August 20, 2010 8:53 am

Addendum: “…a 1000-cell model…” in which the cell size on the ground is constant, meaning that it varies a lot in lat/long coordinates.
Regards,
Ric

August 20, 2010 9:58 am

Bill Tuttle says:
August 20, 2010 at 3:13 am
Phil.: August 19, 2010 at 2:55 pm
I pointed out that Boeing thought the money spent was very worthwhile indeed.
At that time (1994), it was *very* worthwhile, because Boeing used a lot of NASA-derived technology — which the US taxpayers paid for. To be fair, Boeing *did* reimburse NASA for the wind-tunnel testing portion.

But it wasn’t the CFD code/runs that cost that, though; CFD is cost-effective because wind tunnel testing is so expensive. As I recall, the number of wind tunnel facilities has gone down in recent years for that reason. Wind tunnel testing with models isn’t perfect either, with scaling issues, wall effects, etc.
That’s not a knock at Anthony Jameson, btw — he’s one of aviation’s shining stars.
Yes, Tony’s a good guy, and the cost issues weren’t a result of his CFD.

Not My Name
August 20, 2010 1:35 pm

I can’t say a lot, but I’ve worked with NCAR for years. I wouldn’t trust the engineers and scientists there to run a hot dog stand.

August 20, 2010 2:26 pm

Phil.: August 20, 2010 at 9:58 am
CFD is cost effective because wind tunnel testing is so expensive. As I recall the number of wind tunnel facilities has gone down in recent years for that reason.
We’re getting into chicken/egg territory, here. Every aircraft manufacturer used to have an on-site wind tunnel (Grumman had one at Bethpage and one at Peconic), but when the mergers began in the ’80s, a lot of them were torn down. CAD took off, and more wind tunnels went unused, so it wasn’t cost effective to maintain them, except for the gummint sites, which were (and are) still useful.
Wind tunnel testing with models isn’t perfect either with scaling issues and wall effects etc.
The one at Langley is huge — almost zero wall effect, and it’s big enough for full-scale models of most military fighters. But since it *is* unique, yeah, booking time in it is expensive.

Arno Arrak
August 20, 2010 7:31 pm

Wonderful new toy for climate “scientists,” paid for by Uncle Sam. But don’t expect it to be a fortune teller beyond what your daily weatherman’s five-day forecast reveals of your future.

Jaye Bass
August 23, 2010 6:38 am

Ric Locke says:
August 20, 2010 at 8:44 am
More handwaving.

And there we have it: the great divide between academics playing at modeling, with no negative feedback to improve their results, and professional engineers who have to make things actually work and who can be fired or replaced by somebody better. Tenured academics live within a protected monopoly, while professionals live in the market. Big difference in the amount of adult supervision involved.
