
Guest essay by Eric Worrall
h/t Willie Soon – Climate models do a poor job of reproducing the observed climate. But climate scientists seem to think they can produce more accurate projections by adding fudge factors to their models to force better agreement with observations.
The most accurate climate change models predict the most alarming consequences, study finds
By Chris Mooney, December 6 at 1:00 PM
The climate change simulations that best capture current planetary conditions are also the ones that predict the most dire levels of human-driven warming, according to a statistical study released in the journal Nature Wednesday.
The study, by Patrick Brown and Ken Caldeira of the Carnegie Institution for Science in Stanford, Calif., examined the high-powered climate change simulations, or “models,” that researchers use to project the future of the planet based on the physical equations that govern the behavior of the atmosphere and oceans.
The researchers then looked at what the models that best captured current conditions high in the atmosphere predicted was coming. Those models generally predicted a higher level of warming than models that did not capture these conditions as well.
…
The abstract of the study:
Greater future global warming inferred from Earth’s recent energy budget
Patrick T. Brown & Ken Caldeira
Nature 552, 45–50 (07 December 2017)
doi:10.1038/nature24672
Climate models provide the principal means of projecting global warming over the remainder of the twenty-first century but modelled estimates of warming vary by a factor of approximately two even under the same radiative forcing scenarios. Across-model relationships between currently observable attributes of the climate system and the simulated magnitude of future warming have the potential to inform projections. Here we show that robust across-model relationships exist between the global spatial patterns of several fundamental attributes of Earth’s top-of-atmosphere energy budget and the magnitude of projected global warming. When we constrain the model projections with observations, we obtain greater means and narrower ranges of future global warming across the major radiative forcing scenarios, in general. In particular, we find that the observationally informed warming projection for the end of the twenty-first century for the steepest radiative forcing scenario is about 15 per cent warmer (+0.5 degrees Celsius) with a reduction of about a third in the two-standard-deviation spread (−1.2 degrees Celsius) relative to the raw model projections reported by the Intergovernmental Panel on Climate Change. Our results suggest that achieving any given global temperature stabilization target will require steeper greenhouse gas emissions reductions than previously calculated.
Read more (Paywalled): https://www.nature.com/articles/nature24672
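For readers who want the gist of the method: the abstract describes what climate scientists call an "emergent constraint", an across-model relationship between something observable today and projected future warming, which is then evaluated at the observed value. Below is a minimal sketch of that idea with entirely synthetic numbers; Brown and Caldeira's actual analysis uses global spatial patterns of the top-of-atmosphere energy budget and a more elaborate regression, so treat this as illustrative only.

import numpy as np

# Toy "ensemble": each model reports an observable attribute (standing in
# for a TOA energy-budget metric) and a projected end-of-century warming.
# All numbers below are synthetic, for illustration only.
rng = np.random.default_rng(0)
n_models = 30
observable = rng.normal(1.0, 0.3, n_models)
warming = 3.0 + 1.5 * (observable - 1.0) + rng.normal(0.0, 0.3, n_models)

# Step 1: fit the across-model relationship (the emergent constraint).
slope, intercept = np.polyfit(observable, warming, 1)

# Step 2: evaluate it at the observed value of the metric (also synthetic).
observed_value = 1.2
constrained = slope * observed_value + intercept

print(f"raw ensemble mean warming:   {warming.mean():.2f} K")
print(f"observationally constrained: {constrained:.2f} K")

If the models that simulate the observable best also happen to warm the most, the constrained estimate lands above the raw ensemble mean, which is the study's headline result.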
Force-fitting a model to observations is potentially a worthwhile exercise, to help explore the impact of model errors. What I'm concerned about is the apparent attempt to draw premature real-world conclusions from this arbitrary force-fitting exercise.
Consider the diagram at the top of the page, from Pat Frank’s paper “Propagation of Error and the Reliability of Global Air Temperature Projections”. Cloud forcing is a major component of the climate system, and one that climate models clearly get very wrong. Producing the expected result when hindcasting, despite major errors, is not evidence that scientists are correctly modelling the Earth’s climate system.
Scientists occasionally get lucky, but the odds of improving models with a few arbitrary corrections, without any real understanding of why models get the climate so wrong, are like the odds of winning the lottery. Announcing the real-world implications of an arbitrary force-fitting exercise is like telling everyone you have the winning lottery ticket before the draw: not impossible, but very unlikely.
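To make the error-propagation point concrete, here is a simplified sketch of the argument in Frank's paper referenced above: if each simulated year carries an independent forcing uncertainty, the accumulated temperature uncertainty grows with the square root of the number of steps. The ±4 W/m² figure is the long-wave cloud-forcing error Frank cites; the linear emulator coefficient is a rough stand-in for his, so the output is illustrative.

import math

# Per-step uncertainty propagated in quadrature through a linear emulator:
# sigma_n = sensitivity * step_error * sqrt(n). Numbers approximate those
# in Frank's paper; this is a sketch of the argument, not the paper's code.
step_error_wm2 = 4.0               # annual cloud-forcing uncertainty (W/m^2)
sensitivity = 0.42 * 33.0 / 33.3   # ~0.42 K per W/m^2 emulator coefficient

for years in (1, 20, 50, 100):
    sigma = sensitivity * step_error_wm2 * math.sqrt(years)
    print(f"after {years:3d} years: +/-{sigma:5.1f} K")

Frank's point is that an envelope of this size dwarfs the projected warming signal long before 2100, however good the hindcast looks.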
Correction (EW): h/t Nick Stokes – The description I wrote implied Caldeira and Brown added the fudge factors themselves, which is incorrect. What they did was preferentially weight models built with other people’s fudge factors, models that appear to do a better job of hindcasting TOA energy imbalance, which by inference means they get clouds less wrong than other models.
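As a rough illustration of what "preferentially weighting" can mean, the sketch below down-weights models whose hindcast of the TOA energy imbalance misses observations, using a generic exponential skill weight. This is a common ensemble-weighting scheme, not necessarily Brown and Caldeira's exact procedure (their paper uses a regression-based constraint), and every number here is synthetic.

import numpy as np

# Generic skill weighting: models with larger hindcast errors against the
# observed TOA imbalance contribute less to the ensemble projection.
rng = np.random.default_rng(1)
hindcast_rmse = rng.uniform(0.2, 1.5, 20)      # each model's TOA error (W/m^2)
projected_warming = rng.normal(3.2, 0.6, 20)   # each model's 2100 warming (K)

sigma = 0.5                                    # how sharply skill is rewarded
weights = np.exp(-(hindcast_rmse / sigma) ** 2)
weights /= weights.sum()

print(f"unweighted ensemble mean: {projected_warming.mean():.2f} K")
print(f"skill-weighted mean:      {np.average(projected_warming, weights=weights):.2f} K")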
… How clouds might change is quite complex, however, and as the models are unable to fully capture this behavior due to the small scale on which it occurs, the programs instead tend to include statistically based assumptions about the behavior of clouds. This is called “parameterization.”
But researchers aren’t very confident that the parameterizations are right. “So what you’re looking at is, the behavior of what I would say is the weak link in the model,” Winton said.
This is where the Brown and Caldeira study comes in, basically identifying models that, by virtue of this programming or other factors, seem to do a better job of representing the current behavior of clouds. However, Winton and two other scientists consulted by The Post all said that they respected the study’s attempt, but weren’t fully convinced. …
Read more: Washington Post (Same link as above)
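For readers unfamiliar with the term "parameterization" in the quote above: because individual clouds are far smaller than a model grid cell, models diagnose bulk cloud properties from the variables they do resolve. The snippet below shows a textbook Sundqvist-style relative-humidity scheme, offered purely as an example of the kind of tunable statistical rule being described; it is not drawn from any particular GCM.

def cloud_fraction(rh, rh_crit=0.8):
    """Diagnostic cloud fraction from grid-box relative humidity.

    Sundqvist-style scheme: no cloud below a critical relative humidity,
    rising smoothly to overcast at saturation. rh_crit is a tunable
    parameter, i.e. one of the knobs modellers adjust.
    """
    if rh <= rh_crit:
        return 0.0
    return 1.0 - ((1.0 - rh) / (1.0 - rh_crit)) ** 0.5

for rh in (0.70, 0.85, 0.95, 1.00):
    print(f"RH = {rh:.2f} -> cloud fraction = {cloud_fraction(rh):.2f}")

The choice of rh_crit is exactly the kind of assumption researchers, per Winton, aren't very confident about.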
Does anyone know, when a model run is checked for hindcast accuracy, which temperature data set/revision it is compared to?
As an ex-aerospace engineer, I’m used to using SeerSim to predict schedules and costs for producing software, and to having it used against us by the Government. It is a really cool tool. You can tweak about a hundred factors that gauge a company’s historical performance, software skills, methodologies used, software complexity, software reuse, etc., all on sliding scales. Since we had a target, management would send us back to make more adjustments until they got the results they wanted.
Engineers had a hard time arguing with the Government’s factors, since they refused to share them with us. We didn’t share ours, either.
Our results never matched the Government’s estimates; neither of us was ever in the ballpark. We were always too low, although most of that was due to weekly and monthly changes in the Government’s requirements after the contract was signed. The latter made it impossible to say whether Government or industry was closer to the truth, because all the variables kept changing.
Whenever someone mentions climate models, I think of this widely accepted model, and how both track reality equally well, which in both cases means not at all. What befuddled SeerSim was constantly changing requirements. What befuddles climate models is our lack of understanding of how the Sun and Earth’s orbit, or volcanic activity, to mention the tip of the tip of the iceberg, throw us into short-term hot and cold spells that are somehow all ‘predicted’ by the models, yet leave observers going WTF?
Even Einstein used “fudge factors.” Of course Einstein also thought it wasn’t science until he understood them, where they came from, and what they meant.
‘With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.’ – John von Neumann