Reply to Patrick Brown’s response to my article commenting on his Nature paper

Guest essay by Nic Lewis

Introduction

I thank Patrick Brown for his detailed response (also here) to statistical issues that I raised in my critique “Brown and Caldeira: A closer look shows global warming will not be greater than we thought” of his and Ken Caldeira’s recent paper (BC17).[1] The provision of more detailed information than was given in BC17, and in particular the results of testing using synthetic data, is welcome. I would reply as follows.

Brown comments that I suggested that, rather than focusing on the simultaneous use of all predictor fields, BC17 should have focused on the results associated with the single predictor field that showed the most skill: the magnitude of the seasonal cycle in outgoing longwave radiation (OLR). He goes on to say: “Thus, Lewis is arguing that we actually undersold the strength of the constraints that we reported, not that we oversold their strength.”

To clarify, I argued that BC17 undersold the statistical strength of the relationships involved, in the RCP8.5 2090 case focussed on in their Abstract (RCP8.5 being the highest-emissions scenario they considered), for which the signal-to-noise ratio is highest. But I went on to say that I did not think the stronger relationships would really provide a guide to how much global warming there would actually be late this century on the RCP8.5 scenario, or any other scenario. That is because, as I stated, I disagree with BC17’s fundamental assumption that the relationship of future warming to certain aspects of the recent climate that holds in climate models necessarily also applies in the real climate system. I will return to that point later. But first I will discuss the statistical issues.

Statistical issues

When there are many more predictor variables than observations, the dimensionality of the predictor information has to be reduced in some way to avoid over-fitting. There are a number of statistical approaches to achieving this using a linear model, of which the partial least squares (PLS, or PLSR) regression method used in BC17 is arguably one of the best, at least when its assumptions are satisfied. All such methods estimate a statistical model fit that provides a set of coefficients, one for each predictor variable.[2] The general idea is to preserve as much of the explanatory power of the predictors as possible without over-fitting, thus maximizing the fit’s predictive power when applied to new observations.
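As a minimal sketch of this setting (not BC17’s actual code; the dimensions, signal structure and scikit-learn usage are my illustrative assumptions):

```python
# Minimal PLS regression sketch for the many-predictors, few-observations
# setting. Dimensions and signal structure are illustrative, not BC17's.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_models, n_predictors = 36, 5000        # e.g. GCMs vs. gridded field values
X = rng.standard_normal((n_models, n_predictors))
beta = np.zeros(n_predictors)
beta[:50] = 1.0                          # only a few predictors carry signal
y = X @ beta + rng.standard_normal(n_models)

pls = PLSRegression(n_components=2)      # low-dimensional composite components
pls.fit(X, y)
print(pls.coef_.shape)                   # one fitted coefficient per predictor
```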

If the PLS method is functioning as intended, adding new predictors should not worsen the predictive skill of the resulting fitted statistical model. That is because, if those additional predictors contain useful information about the predictand(s), that information should be incorporated appropriately, while if they do not contain any such information they should be given zero coefficients in the model fit. Therefore the fact that, in the highest signal-to-noise ratio RCP8.5 2090 case focussed on both in BC17 and my article, the prediction skill when using just the OLR seasonal cycle predictor field is very significantly reduced by adding the remaining eight predictor fields indicates that something is amiss.
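That behaviour is easy to check on synthetic data. The following sketch (my construction, not BC17’s test; the sample size and noise levels are assumptions) compares leave-one-out skill for one skilful predictor alone against the same predictor with pure-noise columns appended; in finite noisy samples, the combined fit is typically worse:

```python
# If PLS handled uninformative predictors ideally, appending pure-noise
# columns would not degrade leave-one-out skill; in practice it often does.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
n = 36
x_skilful = rng.standard_normal((n, 1))          # stand-in for the OLR seasonal cycle
y = 2.0 * x_skilful[:, 0] + 0.5 * rng.standard_normal(n)
noise = rng.standard_normal((n, 200))            # stand-in for uninformative fields

def loo_rmse(X):
    pred = cross_val_predict(PLSRegression(n_components=1), X, y, cv=LeaveOneOut())
    return np.sqrt(np.mean((pred.ravel() - y) ** 2))

print(loo_rmse(x_skilful))                       # skilful predictor alone
print(loo_rmse(np.hstack([x_skilful, noise])))   # typically worse with noise added
```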

Brown says that studies are often criticized for highlighting the single statistical relationship that appears to be the strongest while ignoring or downplaying weaker relationships that could have been discussed. However, the logic with PLS is to progressively include weaker relationships but to stop at the point where they are so weak that doing so worsens predictive accuracy. Some relationships are sufficiently weak that including them adds too much noise relative to information useful for prediction. My proposal of just using the OLR seasonal cycle to predict RCP8.5 2090 temperature was accordingly in line with the logic underlying PLS – it was not a case of simply ignoring weaker relationships.

Indeed, the first reference for the PLS method that BC17 give (de Jong, 1993) justified PLS by referring to a paper[3] that specifically proposed carrying out the analysis in steps, selecting one variable/component at a time and not adding an additional one if it worsened the statistical model fit’s predictive accuracy. At the predictor field level, that strongly suggests that, in the RCP8.5 2090 case, when starting with the OLR seasonal cycle field one would not go on to add any of the other predictor fields, as in all cases doing so worsens the fit’s predictive accuracy. And there would not be any question of using all predictor fields simultaneously, since doing so also worsens predictive accuracy compared to using just the OLR seasonal cycle field.
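In code, that stepwise stopping rule might look like the following (a hedged sketch of the principle, not the referenced paper’s algorithm; leave-one-out RMS error is used as the accuracy measure, as in BC17):

```python
# Grow the number of PLS components one at a time; stop as soon as adding
# another component worsens leave-one-out prediction error. Illustrative only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def select_n_components(X, y, max_components=10):
    best_rmse, best_k = np.inf, 0
    for k in range(1, max_components + 1):
        pred = cross_val_predict(PLSRegression(n_components=k), X, y,
                                 cv=LeaveOneOut())
        rmse = np.sqrt(np.mean((pred.ravel() - y) ** 2))
        if rmse >= best_rmse:
            break                        # this component worsened prediction
        best_rmse, best_k = rmse, k
    return best_k, best_rmse
```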

In principle, even when given all the predictor fields simultaneously, PLS should have been able to optimally weight the predictor variables to build composite components in order of decreasing predictive power, to which the add-one-at-a-time principle could be applied. However, it evidently was unable to do so in the RCP8.5 2090 case or other cases. I can think of two reasons for this. One is that the measure of prediction accuracy used – root-mean-square (RMS) prediction error under leave-one-out cross-validation – is imperfect. But I think that the underlying problem is the non-satisfaction of a key assumption of the PLS method: that the predictor variables are free of uncertainty. Here, although the CMIP5-model-derived predictor variables are accurately measured, they are affected by the internal variability of the general circulation models (GCMs). This uncertainty-in-predictor-values problem was made worse by the decision in BC17 to take their values from a single simulation run by each CMIP5 model rather than averaging across all its available runs.
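Averaging across runs matters because internal-variability noise in a model’s predictor values shrinks as the square root of the number of runs averaged, as this toy calculation shows (the noise level and run count are assumed purely for illustration):

```python
# Predictor noise from internal variability falls as 1/sqrt(M) when M runs
# of the same model are averaged; here the noise std drops from 0.3 to 0.15.
import numpy as np

rng = np.random.default_rng(2)
true_value, M = 1.0, 4                       # assumed field value, runs per model
runs = true_value + 0.3 * rng.standard_normal((100_000, M))
print(runs[:, 0].std())                      # single-run noise, ~0.30
print(runs.mean(axis=1).std())               # M-run average noise, ~0.15
```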

Brown claims (a) that each model’s own value is included in the multi-model average, which gives the multi-model average an inherent advantage over the cross-validated PLSR estimate, and (b) that this means PLSR is able to provide meaningful Prediction ratios even when the Spread ratio is near or slightly above 1. Point (a) is true, but the effect is very minor. Based on the RCP8.5 2090 predictions, it would normally cause a 1.4% upwards bias in the Spread ratio. Since Brown did not adjust for the difference of one in the degrees of freedom involved, the bias is twice that level – still under 3%. Brown’s claim (b), that PLS regression is able to provide meaningful Prediction ratios even when the Spread ratio is at or virtually at the level indicating a skill no higher than that of always predicting warming equal to the mean value for the models used to estimate the fit, is self-evidently without merit.
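The size of the own-value effect is easy to verify directly. In this sketch (assuming an ensemble of N = 36 models; the exact count changes the numbers only slightly), errors measured against the full-ensemble mean are exactly a factor (N−1)/N smaller than errors against each model’s leave-one-out mean:

```python
# Point (a) quantified: benchmarking against a mean that includes each
# model's own value shrinks apparent errors by a factor (N-1)/N exactly.
import numpy as np

rng = np.random.default_rng(3)
N = 36                                       # assumed ensemble size
x = rng.standard_normal(N)

err_full = x - x.mean()                      # deviation from all-model mean
err_loo = x - (x.sum() - x) / (N - 1)        # deviation from the other models' mean

ratio = np.sqrt(np.mean(err_loo**2) / np.mean(err_full**2))
print(ratio, N / (N - 1))                    # identical: ~1.029, i.e. under 3%
```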

As Brown indicates, adding random noise affects correlations, and can produce spurious correlations between unrelated variables. His test results using synthetic data are interesting, although they only show Spread ratios. They show that one of the nine synthetic predictor fields produced a reduction in the Spread ratio below one that was very marginally – 5% – greater than that when using all nine fields simultaneously. But the difference I highlighted, in the highest-signal RCP8.5 2090 case, between the reduction in Spread ratio using just the OLR seasonal cycle predictor and that using all predictors simultaneously was an order of magnitude larger – 40%. It seems very unlikely that the superior performance of the OLR seasonal cycle on its own arose by chance.
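One can gauge the chance-alone effect with a synthetic experiment of the kind Brown ran (this is my reconstruction with assumed sizes, not his code): generate nine pure-noise predictor fields and compare the best single field’s leave-one-out skill against all nine used together. Repeating this suggests chance gaps of a few per cent are common, while gaps of order 40% are not:

```python
# How far can the best of nine pure-noise fields beat all nine combined,
# purely by chance? Field sizes and the one-component fit are assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(4)
n, p = 36, 40                                # models, variables per field

def loo_rmse(X, y):
    pred = cross_val_predict(PLSRegression(n_components=1), X, y,
                             cv=LeaveOneOut())
    return np.sqrt(np.mean((pred.ravel() - y) ** 2))

y = rng.standard_normal(n)
fields = [rng.standard_normal((n, p)) for _ in range(9)]
best_single = min(loo_rmse(F, y) for F in fields)
combined = loo_rmse(np.hstack(fields), y)
print(best_single / combined)                # chance-alone advantage, typically small
```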

Moreover, the large variation in Spread ratios and Prediction ratios between different cases and different (sets of) predictors calls into question the reliability of estimation using PLS. In view of the non-satisfaction of the PLS assumption of no errors in the predictor variables, a statistical method that does take account of errors in them would arguably be more appropriate. One such method is the RegEM (regularized expectation maximization) algorithm, which was developed for use in climate science.[4] The main version of RegEM uses ridge regression, with the ridge coefficient (the inverse of which is analogous to the number of retained components in PLS) being chosen by generalized cross-validation. Ridge regression RegEM, unlike the truncated total least squares (TTLS) variant used by Michael Mann, produces very stable estimation. I have applied RegEM to BC17’s data in the RCP8.5 2090 case, using all predictors simultaneously.[5] The resulting Prediction ratio was 1.08 (8% greater warming), well below the comparative 1.12 value Brown arrives at (for grid-level standardization). And using just the OLR seasonal cycle, the excess of the Prediction ratio over one was only half that for the comparative PLS estimate.
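For readers who want to experiment, the regularized core of the approach can be approximated without the full RegEM machinery (which is an expectation-maximization algorithm for incomplete data). The sketch below is a stand-in, not RegEM itself: ridge regression with predictors standardized to unit variance, as RegEM does, and the ridge coefficient chosen by cross-validation:

```python
# A stand-in for RegEM's ridge-regularized core: standardize predictors to
# unit variance, then fit ridge regression with a cross-validated penalty.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n, p = 36, 500                               # illustrative sizes only
X = rng.standard_normal((n, p))
y = X[:, 0] + 0.5 * rng.standard_normal(n)

model = make_pipeline(
    StandardScaler(),                        # RegEM standardizes predictors
    RidgeCV(alphas=np.logspace(-3, 4, 40)),  # heavier ridge ~ fewer PLS components
)
model.fit(X, y)
print(model.named_steps["ridgecv"].alpha_)   # selected regularization strength
```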

Issues with the predictor variables and the emergent constraints approach

I return now to BC17’s fundamental assumption that the relationship of future warming to certain aspects of the recent climate that holds in climate models also applies in the real climate system. They advance various physical arguments for why this might be the case in relation to their choice of predictor variables. They focus on the climatology and seasonal cycle magnitude predictors because they find that, compared with the monthly variability predictor, these have PLS loading patterns more similar to those obtained when targeting shortwave cloud feedback, the prime source of intermodel variation in equilibrium climate sensitivity (ECS).

There are major problems in using climatological values (mean values in recent years) for OLR, outgoing shortwave radiation (OSR) and the top-of-atmosphere (TOA) radiative imbalance N. Most modelling groups target agreement of simulated climatological values of these variables with observed values (very likely spatially as well as in the global mean) when tuning their GCMs, although some do not. Seasonal cycle magnitudes may also be considered when tuning GCMs. Accordingly, how close the values simulated by each model are to observed values may very well reflect whether and how closely the model has been tuned to match observations, and not be indicative of how good the GCM is at representing the real climate system, let alone how realistic the strength of its multidecadal warming in response to forcing is.

There are further serious problems with the use of climatological values of TOA radiation variables. First, in some CMIP5 GCMs substantial energy leakages occur, for example at the interface between their atmospheric and ocean grids.[6] Such models are not necessarily any worse at simulating future warming than other models, but they need to be tuned to have TOA radiation fluxes significantly different from observed values in order for their ocean surface temperature change, to date and in future, to be realistic.

Secondly, at least two of the CMIP5 models used in BC17 (NorESM1-M and NorESM1-ME) have TOA fluxes and a flux imbalance that differ substantially from CERES observed values, but it appears that this merely reflects differences between derived TOA values and actual top-of-model values. There is very little flux imbalance within the GCM itself.[7] Therefore, it is unfair to treat these models as having lower fidelity – as BC17’s method does for climatology variables – on account of their TOA radiation variables differing, in the mean, from observed values.

Thirdly, most CMIP5 GCMs simulate too cold an Earth: their global mean surface temperature (GMST) is below the actual value, by up to several degrees. It is claimed, for instance in IPCC AR5, that this does not affect their GMST response to forcing. However, it does affect their radiative fluxes. A colder model that simulates TOA fluxes in agreement with observations should not be treated as having good fidelity. With a colder surface its OLR should be significantly lower than observed, so if it is in line then either the model has compensating errors or its OLR has been tuned to compensate, either of which indicates its fidelity is poorer than it appears to be. Moreover, complicating the picture, there is an intriguing, non-trivial correlation between preindustrial absolute GMST and ECS in CMIP5 models.
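A back-of-envelope calculation indicates the size of the OLR discrepancy one would expect (treating the Earth as a blackbody at its effective emission temperature of roughly 255 K; the 2 K cold bias is purely an assumed illustration):

```python
# Expected OLR shortfall for a model running cold, from the Stefan-Boltzmann
# law at the effective emission temperature. The 2 K bias is illustrative.
sigma, T_e, cold_bias = 5.67e-8, 255.0, 2.0
dOLR_dT = 4 * sigma * T_e**3             # ~3.8 W/m2 per K of emission temperature
print(dOLR_dT * cold_bias)               # ~7.5 W/m2 lower OLR than observed
```

That expected shortfall is several times the observed TOA imbalance (of order 1 W m⁻²), so a cold model whose OLR nonetheless matches observations almost certainly embodies compensating errors or tuning.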

Perhaps the most serious shortcoming of the predictor variables is that none of them is directly related to feedbacks operating on a multidecadal timescale, which (along with ocean heat uptake) is what most affects projected GMST rise to 2055 and 2090. Predictor variables related to how much GMST has increased in the model since its preindustrial control run, relative to the increase in forcing – which varies substantially between CMIP5 models – would seem much more relevant. Unfortunately, however, historical forcing changes have not been diagnosed for most CMIP5 models. Although one would expect some relationship between the seasonal cycle magnitude of TOA variables and intra-annual feedback strengths, feedbacks operating over the seasonal cycle may well be substantially different from feedbacks acting on a multidecadal timescale in response to greenhouse gas forcing.
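For concreteness, this is the standard zero-dimensional energy-balance relation (my framing of the point, not an equation taken from BC17). Writing $F$ for the forcing change, $N$ for the TOA radiative imbalance (mostly ocean heat uptake) and $\lambda$ for the multidecadal feedback parameter:

$$ N = F - \lambda\,\Delta T \quad\Longrightarrow\quad \Delta T = \frac{F - N}{\lambda}, $$

so a model's past warming per unit of $F - N$ directly reflects the feedback strength $\lambda$ that also governs its projected multidecadal warming.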

Finally, a recent paper by scientists at GFDL laid bare the extent of the problem with the whole emergent constraints approach. They found that, by a simple alteration of the convective parameterization scheme, they could engineer the climate sensitivity of the GCM they were developing, varying it over a wide range, without them being able to say that one model version showed a greater fidelity in representing recent climate system characteristics than another version with a very different ECS.[8] The conclusion from their Abstract is worth quoting: “Given current uncertainties in representing convective precipitation microphysics and the current inability to find a clear observational constraint that favors one version of the authors’ model over the others, the implications of this ability to engineer climate sensitivity need to be considered when estimating the uncertainty in climate projections.” This strongly suggests that at present emergent constraints cannot offer a reliable insight into the magnitude of future warming. And that is before taking account of the possibility that there may be shortcomings common to all or almost all GCMs that lead them to misestimate the climate system response to increased forcing.



[1] Patrick T. Brown & Ken Caldeira, 2017. Greater future global warming inferred from Earth’s recent energy budget. Nature, doi:10.1038/nature24672.

[2] The predicted value of the predictand is the sum of the predictor variables each weighted by its coefficient, plus an intercept term.
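In symbols (my restatement of this footnote, with $x_j$ the predictor variables, $\beta_j$ their fitted coefficients and $\beta_0$ the intercept):

$$ \hat{y} \;=\; \beta_0 + \sum_j \beta_j x_j $$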

[3] A. Hoskuldsson, 1992. The H-principle in modelling with applications to chemometrics. Chemometrics and Intelligent Laboratory Systems, 14, 139–153.

[4] Schneider, T., 2001: Analysis of incomplete climate data: Estimation of mean values and covariance matrices and imputation of missing values. J. Climate, 14, 853–871.

[5] Due to memory limitations I had to reduce the longitudinal resolution by a factor of three when using all predictor fields simultaneously. Note that RegEM standardizes all predictor variables to unit variance.

[6] Hobbs et al., 2016. An Energy Conservation Analysis of Ocean Drift in the CMIP5 Global Coupled Models. J. Climate, doi:10.1175/JCLI-D-15-0477.1.

[7] See discussion following this blog comment.

[8] Ming Zhao et al., 2016. Uncertainty in model climate sensitivity traced to representations of cumulus precipitation microphysics. J. Climate, 29, 543–560.

Comments
Latitude
December 27, 2017 2:01 pm

GCMs totally invalidate global warming theory…
The best information we know about everything… sun, orbit, CO2, humidity, etc., even clouds… and after hundreds of GCMs… and thousands upon thousands of runs…

…they are all wrong

Ken Gregory
Reply to  Latitude
December 27, 2017 2:33 pm

The “Science News” section of our Friends of Science quarterly newsletter gives a summary of the issues concerning the Brown and Caldeira 2017 paper.
https://www.friendsofscience.org/index.php?id=2353#Science

Note that the paper used satellite data from 2001 to 2015 to conclude that the models are running too cold. The best-fit trend over the same period from satellite data of the lower troposphere temperature (UAH v6.0) was 0.007 °C/decade. The climate model trend of 0.202 °C/decade is much higher. This huge discrepancy suggests that the models are running too hot!

Reply to  Ken Gregory
December 28, 2017 4:40 am

Very good observations, thank you Ken.

One of my co-authors, an accomplished senior meteorologist, recently said to me of the warmist camp:
“These people live in a Virtual World – they believe that their models are more credible than actual scientific observations.”

This, sadly, is the basis of the warmists’ belief that global warming is manmade and catastrophic. This belief system is so obviously false nonsense that nothing more needs to be said about it – it is a childish delusion that is utterly inconsistent with the scientific method.

To date, there is ample evidence that human-made global warming, if significant at all, is benign and probably beneficial to humanity and the environment.

Best regards for the Holidays, Allan

Reply to  Ken Gregory
December 28, 2017 12:24 pm

0.007 °C/decade … 0.202 °C/decade

Is there even a real argument here?

Richard M
Reply to  Ken Gregory
December 29, 2017 10:51 am

I believe they got the sign wrong. The trend 2001 through 2015 is -0.0007 C / decade. Of course, both of them are essentially zero given measurement error.

mike
Reply to  Latitude
December 27, 2017 9:12 pm

…that each model’s own value is included in the multi-model average which gives the multi-model average an inherent advantage over the cross-validated PLSR estimate
Sounds like a conflation of infinite monkeys writing Shakespeare with bad science: that if you grind enough cowpies together, you’ll get cherry pie.

F. Leghorn
Reply to  mike
December 28, 2017 3:28 am

If you have a fifty gallon barrel of turds and pour in a cup of wine you still have a barrel of turds. If you have a fifty gallon barrel of wine and pour in a cup of turds you also have a barrel of turds. “Climate models” in a nutshell.

Gamecock
Reply to  mike
December 28, 2017 6:40 am

“I do not believe in the collective wisdom of individual ignorance.” – Thomas Carlyle

Measures of central tendency of crap reveal . . . crap.

taxed
December 27, 2017 2:02 pm

Could someone please explain this post in clear, simple terms so I can understand what they’re talking about.

Latitude
Reply to  taxed
December 27, 2017 2:09 pm

…trying to jiggle the computer games to match reality… when you tune them to match one thing… they don’t match something else… or changing one small thing… makes them go all over the place

taxed
Reply to  Latitude
December 27, 2017 2:18 pm

Ahh!! Thanks.
So it’s a case of the weather, and what’s happening in the real world, continually messing up their forecasts of global warming.

jorgekafkazar
Reply to  Latitude
December 27, 2017 4:29 pm

IOW: Climate modelers don’t know what they don’t know.

gnomish
Reply to  Latitude
December 27, 2017 8:08 pm

but the oceans would be frozen without them!

sy computing
Reply to  taxed
December 27, 2017 2:11 pm

See Latitude’s assessment above…

Bryan A
Reply to  taxed
December 27, 2017 2:33 pm

It’s really not my habit to intrude but there must be 50 ways to tune the models

“50 Ways to Tune your Models”
“The problem is all inside your head”
She said to me
“The answer is easy if you
Take it logically
You must stop thinking just like
Michael Mann you see
There must be fifty ways
To tune your models”

She said, “It’s really not my habit to intrude
Furthermore, I hope past warming
will be lost or misconstrued
But I’ll repeat myself
At the risk of being crude
There must be fifty ways
To tune your models
Fifty ways to tune your models”

You just cool down the past, Jack
to heat up the present, Stan
You treat the model like a toy, Roy
and listen to me
You gotta whine and fuss, Gus
You don’t need to discuss much
accuse those that disagree, Lee
And claim they refuse to see

Ooh, cool down the past, Jack
heat up the present, Stan
You treat the model like a toy, Roy
Just listen to me
you gotta whine and fuss, Gus
You don’t need to discuss much
accuse those that disagree, Lee
And claim they refuse to see

Bryan A
Reply to  Bryan A
December 27, 2017 2:39 pm

Or the full version

50 Ways to Tune your Models
“The problem is all inside your head”
She said to me
“The answer is easy if you
Take it logically
You must stop thinking just like
Michael Mann you see
There must be fifty ways
To tune your models”

She said, “It’s really not my habit to intrude
Furthermore, I hope past warming
will be lost or misconstrued
But I’ll repeat myself
At the risk of being crude
There must be fifty ways
To tune your models
Fifty ways to tune your models”

You just cool down the past, Jack
to heat up the present, Stan
You treat the model like a toy, Roy
and listen to me
You gotta whine and fuss, Gus
You don’t need to discuss much
accuse those that disagree, Lee
And claim they refuse to see

Ooh, cool down the past, Jack
heat up the present, Stan
You treat the model like a toy, Roy
Just listen to me
you gotta whine and fuss, Gus
You don’t need to discuss much
accuse those that disagree, Lee
And claim they refuse to see

She said, “It grieves me so
Climate Models are such a pain
I wish there was something I could do
To make them right again”
I said, “I appreciate that
And would you please explain
About the fifty ways?”

She said, “Why don’t we both
Just blog on it tonight
And I believe in the morning
the model runs will be more right”
And then she kissed me
And I realized she probably was right
There must be fifty ways
To tune your models
Fifty ways to tune your models

You just cool down the past, Jack
to heat up the present, Stan
You treat the model like a toy, Roy
and listen to me
You gotta whine and fuss, Gus
You don’t need to discuss much
accuse those that disagree, Lee
And claim they refuse to see

Ooh, cool down the past, Jack
heat up the present, Stan
You treat the model like a toy, Roy
Just listen to me
you gotta whine and fuss, Gus
You don’t need to discuss much
accuse those that disagree, Lee
And claim they refuse to see

Streetcred
Reply to  Bryan A
December 27, 2017 3:51 pm

Immense!

Reply to  Bryan A
December 27, 2017 4:15 pm

For those who don’t recognize the original that he played off of:

Well done!

JohnWho
Reply to  Bryan A
December 27, 2017 4:28 pm

Most excellent!

F. Leghorn
Reply to  Bryan A
December 28, 2017 3:33 am

Someone should record this and put it on Youtube. It could help teach the lemmings.

Reply to  Bryan A
January 4, 2018 1:44 pm

Great work Bryan

vukcevic
December 27, 2017 2:33 pm

Computer models in various branches of engineering have developed into essential tools for design and testing.
However, in the field of climate ‘science’, as predictors of future trends, to paraphrase G.E. Smith, computer models are no more than ‘numerology origami’. As a matter of fact, they actually do a lot of damage: green fuel taxes alone, paid by people who can hardly afford many life essentials, amount to billions of US dollars.

PiperPaul
Reply to  vukcevic
December 27, 2017 2:47 pm

“Computer models in various branches of engineering have developed into essential tools for design and testing.”

jorgekafkazar
Reply to  PiperPaul
December 27, 2017 6:45 pm

Nothing new there.

commieBob
Reply to  vukcevic
December 27, 2017 5:08 pm

I have literally bet my life on the output of models … every time I have flown on certain aircraft. link

I doubt that most climate modellers comprehend the degree of verification and validation involved in most engineering work. For sure, their methods would not pass muster.

Patrick MJD
Reply to  commieBob
December 27, 2017 5:40 pm

Yes however, the airline and air transportation industries are heavily regulated. Climate science not so much.

jon2009
December 27, 2017 3:41 pm

On IPCC models: I have figured out, after lengthy arithmetical investigation, just how the IPCC could render its models accurate.
Since they overstate the warming by 100%, their final equations could be made accurate by adding the final operation “/2”.
Ta-da! All those sceptics confounded, because the IPCC is now correct! The model works at last!
We will no doubt see this everywhere there is a climate scientist, soon!
Oops, the exclamation key just fell off my laptop.

Reply to  jon2009
December 27, 2017 4:28 pm

I just read a story today of a report about sexual harassment claims within Obama’s DOJ being handled …. less than justly. (Some of the accused were actually promoted.)
In Trump’s EPA, maybe the climate models have finally fallen off someone’s lap?

AGW is not Science
Reply to  jon2009
December 28, 2017 6:23 am

Except that wouldn’t do it either, because the models still assume CO2 is the climate driver, and there’s absolutely no empirical evidence to support that assumption and much to refute it. Your “fix” still leaves intact the mistaken assumption that CO2 is the cause of the warming that has taken place over whatever period, with absolutely no evidence of causation – causation by CO2 is NOTHING MORE than a (BAD) ASSUMPTION.

December 27, 2017 6:51 pm

“A colder model that simulates TOA fluxes in agreement with observations should not be treated as having good fidelity. With a colder surface its OLR should be significantly lower than observed, so if it is in line then either the model has compensating errors or its OLR has been tuned to compensate, either of which indicates its fidelity is poorer than it appears to be.”

I am much more inclined to accept the latter, that is, “its OLR has been tuned to compensate.”

All of the processes involving water and its various phase changes are parameterized. Convection and precipitation, in various physical characterizations, are the two principal tuning knobs adjusted to control how heat energy gets through the troposphere, producing an adjusted OLR that closes the TOA energy budget without the surface layers running too hot. These parameters are so poorly constrained by observation that they can take on a wide range of values, which allows the modelers to tune in any ECS they want and/or expect. There has never been a more devastating example of the junk science that Dr Richard Feynman termed “Cargo Cult Science” than today’s climate modelling community.

I am utterly convinced that mainstream climate science is junk science for its ready willingness to accept the biased parameterization contained within the models. It is a parameterization about which Nic, in this analysis, concludes: “They found that, by a simple alteration of the convective parameterization scheme, they could engineer the climate sensitivity of the GCM they were developing, varying it over a wide range, without them being able to say that one model version showed a greater fidelity in representing recent climate system characteristics than another version with a very different ECS.”

For any scientist to accept the GCMs as they exist today as informative, given this parameterization situation, indicates an open willingness to allow bias into a conclusion that has massive economic consequences for humanity and its onward economic and social development.
And anyone who considers him/herself a scientist, who understands what the modelers are doing with these tuned ECS outputs and yet accepts them as reliable, is a pseudoscientist.

Climate Science… heal thyself.

JBom
December 27, 2017 7:39 pm

“Thus, Lewis is arguing that we actually undersold the strength of the constraints that we reported, not that we oversold their strength.”

“To clarify, I argued that BC17 undersold the statistical strength of the relationships involved, in the RCP8.5 2090 case focussed on in their Abstract, for which the signal-to-noise ratio is highest. But I went on to say that I did not think the stronger relationships would really provide a guide to how much global warming there would actually be late this century on the RCP8.5 scenario, or any other scenario.”

This evidence strongly argues that BC17 is a fraud.

TimTheToolMan
December 28, 2017 4:48 am

If a GCM were to consist of a single parameterised function, then very few people would agree that it had any actual predictive power. People can easily see it as a fit. But if you combine many of them to make “complexity”, then suddenly people believe it now holds predictive power.

This failure holds true even if the individual components are in the ballpark of modelling their respective quantities. With any tweaking beyond pure physics (and all parameterised and simplified-for-the-purposes-of-computation components fall into this category), the tiny climate signal is swamped with compensated-for error from time step to time step.

There are so many things wrong with the models that it’s difficult to understand how people take them seriously for the purpose of projection.

bitchilly
Reply to  TimTheToolMan
December 29, 2017 2:01 pm

In decades to come, people will look back in wonder at the sheer enormity of the waste of time, effort, money and intelligent people on the fool’s errand that is climate modelling in its current state. Whoever came up with the word “mathturbation” in relation to climate science had it spot on.

Gamecock
December 28, 2017 6:51 am

Before I retired, I was responsible for a few models. They all worked well, thanks to the people who originally programmed them.

They produced the models by codifying behaviors, then programming them. Input data was massaged and produced reliable results.

Note that behaviors were known. All relevant behaviors were known.

“Climate” behaviors are far from understood. Ipso facto, it is literally IMPOSSIBLE to model climate. What they are modeling is some of their notions about weather, the results of which have virtually no value*, in spite of their massive cost.

*Excluding political value, which might be their actual purpose for some, in which case they have worked quite well. When people start doubting, get a bigger computer.

Steve Zell
December 28, 2017 7:23 am

It was difficult to understand the point of this article, since there are far too many acronyms whose meaning was not explained in the article.

E. Martin
December 28, 2017 12:20 pm

Quite right! Pl. stop using jargon abbreviations — some of us deplorables are not up with them.

[And Pl. is “please” ? 8<) .mod]

December 28, 2017 2:07 pm

… a few comments about the text of the original paper on which the article is based:

Across-model relationships between currently observable attributes of the climate system and the simulated magnitude of future warming have the potential to inform projections.

TRANSLATION: Ain’t climate models fun? When we play with them, we take observable attributes of the climate system and subject these to the questionable built-in assumptions of our toys, to arrive at simulated magnitudes of change based on those questionable built-in assumptions, and then we pretend that these simulations are just as valid as the actual observable attributes that we subjected to our garbage-machine machinations to compute them. Then we insist that these machinations have some real potential to help us make future projections.

Here we show that robust across-model relationships exist between the global spatial patterns of several fundamental attributes of Earth’s top-of-atmosphere energy budget and the magnitude of projected global warming.

TRANSLATION: You have to believe us because we use the word, “robust”. We trust that you will get totally dumbfounded by phrases like, “the global spatial patterns of several fundamental attributes of Earth’s top-of-atmosphere energy budget”, but because we use the word, “robust” before it, we figure you will be so enamored by this favorite word of knowledgeable experts like us that you will just let your dumbfoundedness slide and call us brilliant.

When we constrain the model projections with observations, we obtain greater means and narrower ranges of future global warming across the major radiative forcing scenarios, in general.

TRANSLATION: When we strain your imagination with model projections by pretending that these figures have anything to do with observations, we obtain greater means of taking advantage of your narrow technical understanding of what the hell we are really talking about.

In particular, we find that the observationally informed warming projection for the end of the twenty-first century for the steepest radiative forcing scenario is about 15 per cent warmer (+0.5 degrees Celsius) with a reduction of about a third in the two-standard-deviation spread (−1.2 degrees Celsius) relative to the raw model projections reported by the Intergovernmental Panel on Climate Change.

TRANSLATION: We disagree with the IPCC, but don’t get too excited, because we disagree in the wrong way, which makes our projections even more wrong than theirs. To repeat, ain’t climate models fun!