Dana Nuccitelli has written a defence of climate models, in which he appears to claim that a few models randomly replicating the pause should be considered evidence that climate modelling is producing valid results.
According to The Guardian:
… There’s also no evidence that our expectations of future global warming are inaccurate. For example, a paper published in Nature Climate Change last week by a team from the University of New South Wales led by Matthew England showed that climate models that accurately captured the surface warming slowdown (dark red & blue in the figure below) project essentially the same amount of warming by the end of the century as those that didn’t (lighter red & blue).
There’s also been substantial climate research examining the causes behind the short-term surface warming slowdown. Essentially it boils down to a combination of natural variability storing more heat in the deep oceans, and an increase in volcanic activity combined with a decrease in solar activity. These are all temporary effects that won’t last. In fact, we may already be at the cusp of an acceleration in surface warming, with 2014 being a record-hot year and 2015 on pace to break the record yet again.
The problem I have with this line of reasoning can best be illustrated with an analogy.
Say your uncle came to you and said, “I’ve got an infallible horse betting system. Every time I plug in last year’s racing data, it picks most of the winners, which proves the system works.” Would you:
- a) Bet your life savings on the next race?
- b) Wait and see whether the model produces good predictions when applied to future races?
- c) Humour the old fool and make him a nice mug of chocolate?
Anyone with an ounce of common sense would go for option b) or c). We instinctively understand that it is much easier to fit a model to the past than to produce genuinely skilful predictions. If your uncle were a professor of mathematics or statistics, someone with some kind of credibility in the numbers game, you might not dismiss his claim out of hand – occasionally skilled people really do find a way to beat the system. But you would surely want to see whether the model could demonstrate real predictive skill.
What if a few months later, your uncle came back to you and said:
“I know my model didn’t pick the winners of the last few months’ races. But you see, the model doesn’t actually predict exactly which horse will win each race – it produces a lot of predictions and assigns a probability to each. I work out which horse to pick by kind of averaging the different predictions. The good news, though, is that one of the hundreds of model runs *did* predict the right horses in the last 4 races – which proves the model is fundamentally sound. According to my calculations, all the models end up predicting the same outcome – that if we stick with the programme, we will end up getting rich.”
I don’t know about you, but at this point I would definitely be tending towards option c).
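The statistical point behind the analogy – that in a large enough ensemble, a few runs will match any short stretch of data purely by chance – can be sketched with a quick simulation. All the numbers below (ensemble size, trend, noise level, window length) are illustrative assumptions, not real climate data:

```python
import numpy as np

rng = np.random.default_rng(0)

n_runs, n_years = 200, 15   # illustrative ensemble size and window length
trend = 0.02                # assumed underlying warming trend, deg C / year
sigma = 0.15                # assumed year-to-year natural variability, deg C

years = np.arange(n_years)
# Each "model run": the same forced trend plus different random weather noise
runs = trend * years + rng.normal(0.0, sigma, size=(n_runs, n_years))

# Least-squares trend of each run over the short window
slopes = np.polyfit(years, runs.T, 1)[0]

# Count runs whose fitted trend is essentially flat over the window
flat = int(np.sum(slopes <= 0.005))
print(f"{flat} of {n_runs} runs show a 'pause' by chance")
```

Every run here is built on the same long-term trend, yet over a short window a handful of them look flat simply because of the noise. Picking those runs out after the fact and calling them evidence of skill is exactly the uncle's move.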