News The Media Missed

Guest post by Willis Eschenbach

Based on a model, unfortunately. Behind a paywall, unfortunately. Posted without comment, emphasis and formatting mine. The abstract says:

Recovery mechanisms of Arctic summer sea ice

S. Tietsche, D. Notz, J. H. Jungclaus, J. Marotzke, Max Planck Institute for Meteorology, Hamburg, Germany

We examine the recovery of Arctic sea ice from prescribed ice-free summer conditions in simulations of 21st century climate in an atmosphere–ocean general circulation model. We find that ice extent recovers typically within two years. The excess oceanic heat that had built up during the ice-free summer is rapidly returned to the atmosphere during the following autumn and winter, and then leaves the Arctic partly through increased longwave emission at the top of the atmosphere and partly through reduced atmospheric heat advection from lower latitudes. Oceanic heat transport does not contribute significantly to the loss of the excess heat.

Our results suggest that anomalous loss of Arctic sea ice during a single summer is reversible, as the ice–albedo feedback is alleviated by large-scale recovery mechanisms. Hence, hysteretic threshold behavior (or a “tipping point”) is unlikely to occur during the decline of Arctic summer sea-ice cover in the 21st century.

Received 1 October 2010; accepted 14 December 2010; published 26 January 2011.

Citation: Tietsche, S., D. Notz, J. H. Jungclaus, and J. Marotzke (2011), Recovery mechanisms of Arctic summer sea ice, Geophys. Res. Lett., 38, L02707, doi:10.1029/2010GL045698.

[UPDATE] The paper is here. My thanks to alert readers.

[UPDATE] From the conversation below.

steven mosher says:

March 2, 2011 at 11:49 am

Since we dont have any observations of recovering from ice free conditions, it would HAVE to be based on a model. nothing unfortunate there.

Thanks for your thoughts as always, steven. It is unfortunate that we HAVE to base it on a model. It is extremely unfortunate that we have to base it on current generation AOGCMs, which are not renowned for regional accuracy.

And if we had observations we would write equations that quantified over physical entities. That set of equations would be called a "theory", which is nothing more than a model.

Indeed it would. I should have been clearer and said "unfortunately, it is based on one of the current generation of atmosphere-ocean global climate (or circulation) models (AOGCMs)", to distinguish it from such things as models based solidly on physics on the one hand (which GCMs are not), and models which can be tested experimentally and quantifiably (such as what you describe above as a "theory") on the other hand.

The temperature trend output of the GISSE climate model can be almost exactly replicated (0.98 correlation) by a simple one-line, one-variable equation relating forcing and temperature. Is the real world that simple?
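
For the curious, here's a minimal sketch of the kind of one-line, one-variable fit I mean. The forcing and temperature series below are synthetic stand-ins (invented numbers, not GISSE output); the point is only the mechanics of the fit.

```python
# Minimal sketch of a one-variable fit of temperature to forcing.
# The series below are synthetic stand-ins, NOT actual GISSE output;
# they exist only to show the mechanics of the fit.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1880, 2001)
forcing = 0.02 * (years - 1880) + 0.1 * rng.standard_normal(years.size)  # W/m2, invented
temperature = 0.5 * forcing + 0.05 * rng.standard_normal(years.size)     # degC, invented

# Least-squares fit of T = lambda * F + c -- the "one line, one variable" model
A = np.column_stack([forcing, np.ones_like(forcing)])
(lam, c), *_ = np.linalg.lstsq(A, temperature, rcond=None)
r = np.corrcoef(lam * forcing + c, temperature)[0, 1]
print(f"fitted lambda = {lam:.3f} degC per W/m2, correlation = {r:.3f}")
```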

If you think that a climate model like NASA’s pride and joy, the GISSE GCM, is a sufficiently accurate view or model or representation or theory of the real world that you can use it to make predictions and projections of future temperatures, I’m not sure what I can say.

The real problem with the models, though, is that the modelers rarely provide test results. For example, in this study they are using the MPI ECHAM5 coarse-resolution model. They report no test results for that model. They do say that the high-resolution version of the model "has been tested extensively and performs well in simulating Arctic climate [Chapman and Walsh, 2007]."

(A detailed description of the high-res version is here.)

A visit to their reference Chapman and Walsh tells a different story. The abstract starts out by saying:

Simulations of Arctic surface air temperature and sea level pressure by 14 global climate models used in the Fourth Assessment Report of the Intergovernmental Panel on Climate Change are synthesized in an analysis of biases and trends. Simulated composite GCM surface air temperatures for 1981–2000 are generally 1°–2°C colder than corresponding observations with the exception of a cold bias maximum of 6°–8°C in the Barents Sea. The Barents Sea bias, most prominent in winter and spring, occurs in 12 of the 14 GCMs and corresponds to a region of oversimulated sea ice. All models project a twenty-first-century warming that is largest in the autumn and winter, although the rates of the projected warming vary considerably among the models.

As an opening, that’s hardly a ringing endorsement. They can’t get the temperatures right, the trends “vary considerably among the models”, they’re only tested over 20 years, and this is described as “tested extensively and performs well”?

Too general, you say? OK, it says "the Max Planck Institute (MPI) ECHAM5 and the National Center for Atmospheric Research (NCAR) Community Climate System Model version 3 (CCSM3) simulate temperatures that are warmer than observed in the Barents Sea."

It says “The annual mean rms [temperature] errors range from about 2°C for the MPI ECHAM5, NCAR CCSM3, and GFDL CM2.1 models to as high as 7°C for the CSIRO Mk3.0 model.”

It says: “Annual SLP RMS errors range from 2 mb (MPI ECHAM5) to almost 9 mb (NCAR CCSM3), while the across-model range of winter SLP RMS errors is comparable to the across-model range of summer SLP RMS errors.”

It says: “Winter model biases of temperature for the Arctic Ocean, expressed as RMS errors from observational reanalysis fields, average 3 times larger than those for summer, but the across model scatter of the winter errors are 4 times larger than the corresponding summer errors. The MPI ECHAM5 and the NCAR CCSM3 GCMs outperform the other models for most seasons.”

Conclusion? While the models did poorly, the MPI ECHAM5 was one of the best of the poor models.
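
For reference, the RMS error statistic quoted above is nothing exotic: it's the root-mean-square of the model-minus-reference difference over a region and season. Here's a minimal sketch of how such a seasonal error might be computed; the gridded fields are random placeholders, and a real calculation would area-weight each grid cell by latitude.

```python
# Sketch of the RMS-error statistic quoted above: the root-mean-square of
# the model-minus-reference temperature difference over a region and season.
# The fields here are random placeholders standing in for gridded data, and
# a real calculation would area-weight each grid cell by cos(latitude).
import numpy as np

rng = np.random.default_rng(1)
model_t = rng.normal(-15.0, 5.0, size=(12, 40, 90))          # month x lat x lon, degC
ref_t = model_t + rng.normal(2.0, 1.5, size=model_t.shape)   # reference with a warm offset

winter = [11, 0, 1]                                          # Dec, Jan, Feb
diff = model_t[winter] - ref_t[winter]
rms = np.sqrt(np.mean(diff ** 2))
print(f"winter RMS temperature error: {rms:.2f} degC")
```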

There is a deeper problem, however. This is the almost incomprehensible lack of data from the poles, especially of any long time series. Even the satellites generally don't cover the poles themselves or some of the surrounding area. The Arctic is a frozen ocean with a few scattered ice-mounted floating research stations and unmanned sensors here and there. We know almost nothing about the conditions, say, thirty metres (100′) either up or down from the top of the ice, anywhere.

Now, if we want to take our best guess about what's happening where we have no sensors or thermometers, we naturally turn to computers. An atmospheric GCM is used to give a "best fit" to whatever observational data we actually do have. The one most used these days is called the ERA-40. The resulting output is the set of what the ERA-40 folks call "synthetic observations", produced four times daily at 0, 6, 12, and 18 Zulu, that best fit whatever observations we have, at whatever times and places we have them.

I have no problem with that; it's our best guess. And there are many things for which it is useful.

But for reasons of modelers’ confusion about reality, the output of the climate model is usually called the “ERA-40 reanalysis data”. The “synthetic” got misplaced somewhere along the way. This is a huge misnomer.

Because when you compare the output of the MPI-ECHAM5 with the ERA-40 reanalysis data, you are not comparing model output to data. You are comparing the output of two closely related climate models, one that is “best-fit” to a set of constraints and one which is not. Neither is data.

This confusion of models with reality is right there in the Chapman and Walsh abstract above, where they say:

Simulated composite GCM surface air temperatures for 1981–2000 are generally 1°–2°C colder than corresponding observations with the exception of a cold bias maximum of 6°–8°C in the Barents Sea.

when what they mean is that

“Simulated composite GCM surface air temperatures for 1981–2000 are generally 1°–2°C colder than corresponding simulated ERA-40 best-fit model results with the exception …”

This is particularly crucial in the Arctic, where there is so little data. Using the ERA-40 as our best guess is one thing; that's fine. But I object strongly to the whole paradigm of comparing the output of two climate models, in an area where by and large we have no data, and saying that the results of that comparison are anything stronger than "interesting".

Yes, we have little data in the Arctic. But for starters, how about they throw out the ERA-40 results entirely, and show us how the MPI ECHAM5 actually compares to WHATEVER FREAKIN’ DATA WE ACTUALLY HAVE! Compare models with actual data, novel idea, what?

I want to know how well it (and by "it" I mean the actual model used in the study, not some other "high resolution" version) predicted the actual sea level pressure as measured by gauges at Svalbard Lufthavn and wherever else. I don't care how well the model's high-tone cousin matched some other similar model's best guess of sea level pressure at some random spot on the ice with no pressure gauge in the nearest quarter million square miles. That's meaningless.
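
For what it's worth, here's a minimal sketch of the comparison I'm asking for, assuming we had the model's gridded sea level pressure field and a station record in hand. Everything here — the grids, the station coordinates, the pressures — is a hypothetical placeholder.

```python
# Sketch of comparing a model's sea level pressure directly against a
# station record rather than against reanalysis output. The grids, the
# station coordinates, and the pressures are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(2)
lats = np.linspace(60.0, 90.0, 31)    # model grid latitudes
lons = np.linspace(0.0, 357.5, 144)   # model grid longitudes
model_slp = 1010 + 8 * rng.standard_normal((365, lats.size, lons.size))  # daily, mb

# Hypothetical daily record from a pressure gauge at roughly 78.25N, 15.47E
station_lat, station_lon = 78.25, 15.47
station_slp = 1008 + 7 * rng.standard_normal(365)

# Pull the model value from the grid cell nearest the station, then score it
i = np.abs(lats - station_lat).argmin()
j = np.abs(lons - station_lon).argmin()
model_at_station = model_slp[:, i, j]
bias = np.mean(model_at_station - station_slp)
rms = np.sqrt(np.mean((model_at_station - station_slp) ** 2))
print(f"bias = {bias:+.1f} mb, RMS error = {rms:.1f} mb")
```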

So unfortunately, once again I fear it’s models all the way down™ …

The problem with their procedure is that we end up with absolutely no idea of the real uncertainties of the results. The paper is no help; the only mention is "All numbers for energy budget anomalies are rounded to ten AEU, to account for uncertainty arising from energy budget residuals and ensemble spread."
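
To be clear about what I'd want instead: report the ensemble mean together with the ensemble spread, rather than burying the uncertainty in rounding. A minimal sketch, with invented numbers standing in for ensemble members (the paper gives no such breakdown):

```python
# Sketch of reporting uncertainty from ensemble spread directly: the mean
# and sample standard deviation across members. The numbers are invented
# placeholders, not values from the paper.
import numpy as np

members = np.array([221.0, 243.0, 228.0, 236.0, 219.0, 240.0, 231.0, 225.0])  # hypothetical
mean = members.mean()
spread = members.std(ddof=1)
print(f"anomaly = {mean:.0f} +/- {spread:.0f} AEU (ensemble mean +/- 1 sigma)")
```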

So we don’t know whether the MPI ECHAM5 results are gold, or fools gold. Having looked at a lot of model results, I’d guess the latter. But we don’t know.

Finally, to run that unverifiable model for what, 80 years into the future, as they do? Pull the other one, it's got bells on it, as my friends say. Over that long a run, no iterative model can be relied on to paint anything other than a portrait of the model programmers' under- and misunderstandings.
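
To see why long iterative runs worry me, consider a toy example (and it is only a toy, not a climate model): iterate a simple nonlinear map from two starting points that differ by one part in a million, and watch the runs part company within a few dozen steps.

```python
# Toy illustration (NOT a climate model): iterate a simple nonlinear map
# from two starting points differing by one part in a million, and watch
# how quickly the two runs diverge as the small error compounds.
def logistic(x, r=3.9):
    return r * x * (1.0 - x)

a, b = 0.500000, 0.500001
for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: run A = {a:.6f}, run B = {b:.6f}, |diff| = {abs(a - b):.6f}")
```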

I’ll add this to the head post.

w.
