Dana Nuccitelli has written a defence of climate models, in which he appears to claim that a few models randomly replicating the pause should be considered evidence that climate modelling is producing valid results.
According to The Guardian:
… There’s also no evidence that our expectations of future global warming are inaccurate. For example, a paper published in Nature Climate Change last week by a team from the University of New South Wales led by Matthew England showed that climate models that accurately captured the surface warming slowdown (dark red & blue in the figure below) project essentially the same amount of warming by the end of the century as those that didn’t (lighter red & blue).
There’s also been substantial climate research examining the causes behind the short-term surface warming slowdown. Essentially it boils down to a combination of natural variability storing more heat in the deep oceans, and an increase in volcanic activity combined with a decrease in solar activity. These are all temporary effects that won’t last. In fact, we may already be at the cusp of an acceleration in surface warming, with 2014 being a record-hot year and 2015 on pace to break the record yet again.
The problem I’ve got with this line of reasoning can best be illustrated with an analogy.
Say your uncle came to you and said, “I’ve got an infallible horse betting system. Every time I plug in the results of previous races – last year’s racing data – it gets most of the winners right, which proves the system works.”
Would you:
- Bet your life savings on the next race?
- Wait and see whether the model produced good predictions, when applied to future races?
- Humour the old fool and make him a nice mug of chocolate?
Anyone with an ounce of common sense would go for option b) or c). We instinctively understand that it is much easier to fit a model to the past than to produce genuinely skilful predictions. If your uncle was a professor of mathematics or statistics, someone with some kind of credibility in the numbers game, you might not dismiss his claim out of hand – occasionally skilled people really do find a way to beat the system. But you would surely want to see whether the model could demonstrate real predictive skill.
What if a few months later, your uncle came back to you and said:
“I know my model didn’t pick the winners of the last few months’ races. But you see, the model doesn’t actually predict exactly which horse will win each race – it produces a lot of predictions and assigns a probability to each prediction. I work out which horse to pick by kind of averaging the different predictions. The good news though is one of the hundreds of model runs *did* predict the right horses in the last 4 races – which proves the model is fundamentally sound. According to my calculations, all the models end up predicting the same outcome – that if we stick with the programme, we will end up getting rich”.
I don’t know about you, but at this point I would definitely be tending towards option c).
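A minimal back-of-the-envelope sketch of why the uncle’s “one run got it right” defence is weak (the field size and number of model runs here are made-up illustrative values, not anything from the paper): with enough purely random runs, a short streak of correct winners is not surprising.

```python
# Chance that at least one of N purely random "model runs" picks the winner
# of 4 consecutive races. Field size and N are assumed illustrative values.
RUNNERS_PER_RACE = 10      # assumed field size
RACES = 4
N_RUNS = 500               # assumed number of model runs

p_one_run = (1 / RUNNERS_PER_RACE) ** RACES      # one run gets all 4 winners right
p_at_least_one = 1 - (1 - p_one_run) ** N_RUNS   # at least one of the N runs does

print(f"one random run picks 4 winners: {p_one_run:.4%}")       # 0.0100%
print(f"at least one of {N_RUNS} does:  {p_at_least_one:.1%}")  # ~4.9%
```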
![CMIP5 90 models global Tsfc vs obs thru 2013](https://wattsupwiththat.files.wordpress.com/2014/06/cmip5-90-models-global-tsfc-vs-obs-thru-20131.png?resize=720%2C648&quality=75)
They don’t get, or won’t admit, that the basis of the models – that nearly all the warming from ~1978-1998 was caused by humans, which the alarmists in the 1990s freely asserted and expected to continue – is incorrect.
Some, or most, of it was natural, and therefore the horses aren’t now winning as previously expected.
The sooner they own up to this, the better for science.
What has happened, unfortunately, is what often happens with political discussions when two groups disagree.
One side will focus on proving that they are right, while the other side tries to prove the opposite is right. Very little weight is given to the opposite side. If 50% of the new information favors one side and 50% favors the other, both sides will use it to strengthen their own position and try to weaken the other’s, as their cognitive biases drive their belief systems.
With time, both sides can actually get farther apart, even as more meaningful information is discovered that should be dialed into the understanding of the realm in which they disagree.
Sound familiar?
Considering the site where I’m posting this, we all know which side has it all wrong.
Betcha this same post could go up at Skeptical Science and they would think the exact same thing but that WUWT has it all wrong.
My biggest problem with this relates to an element on which there can be no disagreement – one where those who claim otherwise are blatant frauds and/or scientifically blind.
Sun + H2O + CO2 + Minerals = O2 + Sugars (food)
Either you agree that the big increase in CO2 has resulted in a massive increase in the vegetative health of the planet, with big increases in crop yields/world food production, or you are a fraud and anything else you state about the effects of CO2 is not credible either.
“One side will focus on proving that they are right, while the other side tries to prove the opposite is right. ”
No. One side tries to prove they are right; the other side tries to prove that the others are evil demons who should be burned at the stake. Who needs arguments when you can demonize the opposition and GET AWAY WITH IT?
Yogi Berra — ‘It’s tough to make predictions, especially about the future.’ Just in case this has not already been posted.
Haven’t you heard the model/algorithm/formula: coincidence = causation?
(The belief is widespread. For example, the Director of Planning for the fiefdom of Saanich BC claimed that use of self-selecting respondents is a valid survey method because sometimes the results matched a professionally done survey.
I even had to try to educate the Association of Professional Engineers and Geoscientists on that.)
Argh, my g,d,r in chevron brackets disappeared, perhaps /sarc works.
How can any respected scientist publish such nonsense? What’s worse is that the people who believe in CAGW are smug about any report that supports CAGW, whether or not it would stand up to scientific inquiry.
Here it is May 11, 2015 and we had a rare and exciting event, 8 inches of snow!!! Oh, if only I had believed in CAGW this wouldn’t have happened!!! Global warming causes cold weather in May, just like last year, the warmest year on record. Maybe they’ll have an ice cream truck out there telling us how warm it is!!
Thanks, Eric.
Some would argue that there must be something correct in the GCMs for a couple of them to forecast a pause. Is this a robust argument?
Or is this a typewriting monkeys experiment where you pick out a couple of correct one-syllable words from hundreds of reams of paper?
More observation is the key. If someone writes down 10 different predictions for the next 5 coin tosses you will make, and one of them is right, have they perfected a theory of coin tosses? Or did they just get lucky with one of their guesses? You could resolve this by asking them to repeat the trick, this time with just one guess based on the previously successful methodology.
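To put a rough number on “just got lucky”, here is a toy simulation (assuming fair coins and purely random guesses – nothing more): a lucky hit among 10 guessers is fairly common, and the “winner” almost never repeats the trick.

```python
import random

# Toy simulation: 10 guessers each write down 5 coin tosses. When at least
# one matches the real tosses, that "winner" is asked to repeat the trick
# once on 5 fresh tosses.
random.seed(0)
TRIALS = 100_000

lucky_first_time = repeated_the_trick = 0
for _ in range(TRIALS):
    tosses = [random.choice("HT") for _ in range(5)]
    guesses = [[random.choice("HT") for _ in range(5)] for _ in range(10)]
    if any(g == tosses for g in guesses):
        lucky_first_time += 1
        # the "successful" guesser tries again on 5 new tosses
        new_tosses = [random.choice("HT") for _ in range(5)]
        new_guess = [random.choice("HT") for _ in range(5)]
        if new_guess == new_tosses:
            repeated_the_trick += 1

print(f"someone got the first 5 right: {lucky_first_time / TRIALS:.1%}")         # ~27%
print(f"...and then repeated it:       {repeated_the_trick / lucky_first_time:.1%}")  # ~3%
```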
Yes. In inductive modeling, picking the right model (or ensemble) out of many is a well-known problem. Yours is usually the correct solution. Many ignore it at their peril. If climate modelers (and their funding agencies) were required to put 10% of the income each year into climate futures, all of this nonsense would go away.
The warming is in the pipeline. No, really. Stop laughing. Just you wait and see. It’ll come out of the oceans and there will be a volcano slowdown, then you’ll be sorry.
It’s just like Linus waiting in the pumpkin patch for the “Great Pumpkin” to arrive.
I understood that it had to be a sincere pumpkin patch. While this is a most contrived climate swamp. No “Great Pumpkin” is coming out of it.
According to The Guardian… There’s also no evidence that our expectations of future global warming are inaccurate.
Except for 18+ years of NO GLOBAL WARMING!!
*Sheesh*
There is little point in following the IPCC models and forecasting climate trends ahead linearly when the climate is clearly controlled by natural orbital cycles and cycles in solar activity – most importantly, on a time scale of human interest, the millennial solar activity cycle.
It is of interest that the trends in the new UAH v6 satellite temperature time series are now much closer to the RSS satellite data. In particular, they confirm the RSS global cooling trend since 2003, when the natural millennial solar-activity-driven temperature cycle peaked.
see
http://www.woodfortrees.org/plot/rss/from:1980.1/plot/rss/from:1980.1/to:2003.6/trend/plot/rss/from:2003.6/trend
It is the satellite data sets which should be used in climate discussions because the land and sea based data sets have been altered and manipulated so much over the years in order to make them conform better with the model based CAGW agenda.
The IPCC climate models are built without regard to the natural 60-year and, more importantly, the 1000-year periodicities so obvious in the temperature record. This approach is simply a scientific disaster and lacks even average common sense. It is exactly like taking the temperature trend from, say, Feb – July and projecting it ahead linearly for 20 years or so. The models are back-tuned for less than 100 years when the relevant time scale is millennial. This is scientific malfeasance on a grand scale.
The temperature projections of the IPCC – Met Office models, and all the impact studies which derive from them, have no solid foundation in empirical science, being derived from inherently useless and structurally flawed models. They provide no basis for the discussion of future climate trends and represent an enormous waste of time and money. As a foundation for governmental climate and energy policy their forecasts are already seen to be grossly in error and are therefore worse than useless.
A new forecasting paradigm urgently needs to be adopted and publicized ahead of the Paris meeting.
For forecasts of the timing and extent of the coming cooling, based on the natural solar activity cycles – most importantly the millennial cycle – and using the neutron count and 10Be record as the most useful proxies for solar activity, check my blog post at
http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
An interesting exercise would be to run a GCM with the equilibrium climate sensitivity set to zero, just to see how close to a null effect the software can run; my assumption is that chaos will take over and it will spiral out of control.
I don’t think we can dismiss the bottom two models on the basis of criticisms of all the models. To dismiss them, one must dismiss them (or not) based on their own merits. So a couple of questions:
1. Did these two models capture the 1998 super El Nino? It is hard to tell from the spaghetti graph, but it appears that they did not.
2. What is the fundamental difference between these two models and the rest of them? If these models incorporate aspects of (for example) the AMO, PDO or other factors, a different question is raised. Did they in fact model these processes correctly while the other models did not? In other words, if the specific differences between these two models and the others can be isolated and shown to be right, then they may have some merit. If they did something different, however, got it wrong, but by getting it wrong got the “right” answer, then they are right for the wrong reasons and unlikely to produce future results of any value.
In brief, I don’t think it wise to condemn these two models without a more in-depth analysis of what is fundamentally different about them (if anything).
David, I thought all IPCC GCMs do not model ENSO because it “evens out” and has no permanent or even long-term effects.
IANAS… but I agree with this sentiment.
The difficulty I have is that the argument appears to be that since these 2 models got it right, the rest (or their average) are accurate. I’d really like to see the full model spread and where these go all the way to 2100. I get the feeling it’s throwing as much sugar honey ice tea up there as possible and claiming success even when major events are missed. Richard mentioned the TSS Fallacy showing this quite well.
Good luck getting the pro side to show the actual causation since that would likely mean they would have to show that it’s not really catastrophic. We can wait for the claim “See! I…errr…WE were correct!” every time the satellite data crosses or trends along one of the models.
I go back to Christopher Essex’s lecture and Freeman Dyson’s interview.
The important question is not whether the models are wrong, but whether it is even possible to model the Earth’s Climate system well enough to make any meaningful projections in the first place.
So far, Norman Page appears to be proposing a feasible and falsifiable model. We can, after all, predict the orbits of the Sun and planets accurately.
If I said I could predict all of the scores in next season’s Barclays Premier League, would you believe me? (I will add the qualification plus or minus 5 goals.) At the end of next season I could point to all the scores that I got right and claim that the ones that I got wrong are within the bounds of accuracy. If we get another result like Manchester United 8 Arsenal 2, then I could claim that this is like “El Nino”.
You could ‘predict’ every game would have a 5-5 scoreline. With your margin of error, your prospects of success would be quite good, since I cannot recall a 10-nil thrashing, and such a scoreline is unlikely to occur.
davidmhoffer
You say
“In brief, I don’t think it wise to condemn these two models without a more in-depth analysis of what is fundamentally different about them (if anything).”
Yes. That is true.
And it is also true that
It is foolish to accept these two models without a more in-depth analysis that determines what is fundamentally different about them (if anything).
But that is what Dana Nuccitelli has suggested and is being discussed in this thread.
Richard
Richard,
To put my objection more bluntly, the article would be better served by exploring the differences between those two models and the rest so that the readership can consider them and comment. I would think that with the contacts at the disposal of this forum, getting in touch with those specific modelers and asking them to elaborate would be of more value.
davidmhoffer
Yes, I agree.
But I admit my doubts that the modelers would “elaborate”.
Richard
But I admit my doubts that the modelers would “elaborate”.
In the words of my people:
Don’t ask, don’t get.
😉
He seems to have omitted Lord Monckton’s pocket calculator model that outperforms all of the above models.
Friends
In this thread I have repeatedly commended the wiki explanation of the Texas Sharpshooter fallacy.
In light of how the thread has developed, I copy the gist of its example here: a Swedish study surveyed people living near high-voltage power lines and looked for increases in the rates of over 800 ailments. It found childhood leukaemia to be about four times more common among those living closest to the lines. The catch is that, with so many ailments examined, it was highly likely that at least one would show a statistically significant difference by chance alone – the multiple comparisons problem.
Richard
Richard
The problem with this study occurs prior to the sharpshooter pulling his/her trick; as I noted above, the side of the barn wasn’t even hit.
I remember a great TV presentation of this or similar (Nightline??) in which they showed the real error with the study was that they picked the highest risk factor rate and presented it as the average when the actual average was near 1.2 or some such meaningless risk. There were areas that had a zero and could, by their logic, show that power lines were protecting children.
Long live the advocate!
Somehow, the idea that heat can be ‘stored’ in the deep ocean persists. Water in the deep ocean is already at its maximum density due to the extreme pressures. Heating this water would cause it to expand, reducing its density. That MUST result in convection. Heat applied to deep waters by submarine vulcanism causes convection – visibly. I don’t know how the ‘excess heat’ is supposed to get to the deep waters without being absorbed by the not-so-deep waters along the way.
Such overlying of warm fluids with denser, colder fluids is called an ‘inversion layer’ and is unstable. When such occurs in the atmosphere, convection occurs immediately, resulting in strong updrafts, convection ‘cells’, and thunderstorms, often with tornadic rotation and high flow rates.
Gravity pulls the denser material down more strongly than the less dense material.
“In the end, gravity always wins.”
Reblogged this on Petrossa’s Blog and commented:
Hilarious caption on the graph. Made me chuckle.
“There’s also no evidence that our expectations of future global warming are inaccurate.”
Try wrapping your head around that statement!
I had the exact same thought. And then there’s this: “These are all temporary effects that won’t last.” Apparently, these guys can predict future volcanic activity.
I think we should go with the obvious 97%. 97% of climate models are wrong…
In what year were these model runs performed? Are these model runs from 10 years ago? 20 years ago?
Or 5 years ago?
It makes a huge difference.
Also, note that the range of final values of all the different models seems to be getting wider and wider. So how can it be that they all give the same result at some point in the future?
Similar to a well-known scam by unscrupulous stock brokers.
Mail out 5 different and highly speculative stock picks, each to 1/5th of your prospective clients.
When one of the picks comes true, you contact the 20% of the people who you sent that pick to, convince them you’re a genius stock picker, and get their money to invest. You forget about the other 4/5ths of the clients.
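As a toy illustration of why this scheme works (the starting mailing-list size is a made-up number), each round only the recipients of the pick that happened to come true are kept, and every one of them has now seen an unbroken run of “correct” calls:

```python
# Hypothetical mail-out scheme: every round, split the remaining recipients
# into 5 groups, send each group a different pick, and keep only the group
# whose pick happened to come true.
recipients = 100_000  # assumed starting mailing list

for round_no in range(1, 5):
    recipients //= 5  # only 1 in 5 received the pick that came true
    print(f"after round {round_no}: {recipients:6d} people have seen "
          f"{round_no} consecutive 'correct' picks")
```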
This is even worse… we have no warming for 18+ years and even though the merchants of doom have been claiming horrific increases in temperature, when a couple of “non-warming” models match the non-warming data they’re suddenly geniuses?
Dare I say – Bwahahahahahahahahahahahahahaha!
I have been fond of quoting a system that I was “introduced” to some years ago which involved backing either the winners or placed horses in certain selected races on their next outing. Five years of results were included in the system as “proof” and the win ratio (if I remember right) was around 70%!
Needless to say the system fell down in the very first season afterwards, the most blatant example of how bad it was being a valuable race in September where no fewer than six of the nine runners qualified. None of them won!
I hadn’t realised at the time that there were “scientists” in the world who were just as reliable in their prognostications as crooked racing tipsters.
I think a problem with the dog race system analogy is that it’s designed to predict a single winner from a range of runners.
The CMIP models aren’t like that. They represent a range of possible outcomes over time. Perhaps a more appropriate analogy would be to imagine a continuous series of golf drives along a very long fairway. With each drive, the ball has to make the fairway. It can hit the semi-rough up to 10% of the time, but if it hits the real rough, you’re out.
In this analogy, the fairway represents the 5-95% range of the model projections. The semi-rough represents the upper and lower 5% of the range. The rough is outside the range. Each spot at which the ball lands represents the global annual surface temperature.
If temperatures stray outside the model range, the model range is wrong. If temperatures stray into the upper or lower 5% of the range for more than 10% of the period of projection, the model range is also wrong. As it stands, temperatures are on the low side of the 5-95% range and have strayed into the semi-rough a few times.
The ‘ball’ is still on the fairway though, and the models are not yet down and out: http://www.climate-lab-book.ac.uk/wp-content/uploads/fig-nearterm_all_UPDATE_2014.png
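A minimal sketch of how the “fairway” test above could be scored, using entirely made-up numbers for both the ensemble and the observations (none of this is real CMIP output):

```python
import random

# Toy "fairway" check: a synthetic ensemble of projections and one synthetic
# observed series; count years outside the 5-95% spread (the rough) and years
# inside the spread but in its outer 5% fringes (the semi-rough).
random.seed(0)
YEARS, ENSEMBLE = 20, 90

def percentile(values, q):
    s = sorted(values)
    return s[int(q * (len(s) - 1))]

projections = [[0.02 * yr + random.gauss(0, 0.10) for yr in range(YEARS)]
               for _ in range(ENSEMBLE)]
observed = [0.01 * yr + random.gauss(0, 0.05) for yr in range(YEARS)]  # warms more slowly

in_rough = in_semi_rough = 0
for yr in range(YEARS):
    spread = [run[yr] for run in projections]
    p05, p10 = percentile(spread, 0.05), percentile(spread, 0.10)
    p90, p95 = percentile(spread, 0.90), percentile(spread, 0.95)
    if not (p05 <= observed[yr] <= p95):
        in_rough += 1            # outside the 5-95% range: out of bounds
    elif observed[yr] < p10 or observed[yr] > p90:
        in_semi_rough += 1       # still on the fairway, but in its outer fringes

print(f"years in the rough:      {in_rough}/{YEARS}")
print(f"years in the semi-rough: {in_semi_rough}/{YEARS}")
```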
David R
Your link is very misleading. Its graph assumes all models are equally worthy and there is no reason for that assumption.
Richard
Yes, but the CMIP models are not the set of all possible projections; they’re just the projections we have at the moment. There are much larger sets of other possible projections that do include the current temperature set but don’t include any warming. And there’s no particular reason to think the CMIP projections are more reliable than the set of non-CMIP projections.
You can’t just claim the model is reliable until it’s falsified; that is the OPPOSITE of science. In fact, you cannot claim the model is reliable until every prediction has been judged against observation.
Continuing the racehorse analogy, I find the system more akin to the con man who, in a field of 10 runners, hands out a different tip to each of 10 gullible punters. After the race is run, 9 punters think he doesn’t know what he’s talking about, and the one who got the winner thinks he has a foolproof system. If you back the field, it is virtually certain you’ll back the winner. However, as you go forward, each prediction is still independent of the past. My guess is that if two out of 100 models have been right for the last ten years, and two turn out to be right for the next ten, they won’t be the same two.
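A rough simulation of that guess, under the deliberately unflattering assumption that the “models” are only guessing (all numbers here are made up for illustration): the few that happen to match the last decade are no more likely than anyone else to match the next one.

```python
import random

# Toy Monte Carlo: 100 "models" that simply guess whether each of 20 years is
# warm (W) or flat (F). A model counts as "right" for a decade if it matches
# the observed record in at least 8 of that decade's 10 years.
random.seed(1)
N_MODELS, TRIALS, HITS_NEEDED = 100, 2000, 8

def right(model_years, obs_years):
    return sum(m == o for m, o in zip(model_years, obs_years)) >= HITS_NEEDED

first_decade = both_decades = 0
for _ in range(TRIALS):
    obs = [random.choice("WF") for _ in range(20)]
    for _ in range(N_MODELS):
        model = [random.choice("WF") for _ in range(20)]
        if right(model[:10], obs[:10]):
            first_decade += 1
            if right(model[10:], obs[10:]):
                both_decades += 1

print(f"guessing models right in the first decade: {first_decade}")
print(f"of those, also right in the second decade: {both_decades} "
      f"({both_decades / first_decade:.1%})")  # no better than the base rate
```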
What utter bollocks. They have no evidence, because there isn’t any.
Climate Liars just making stuff up. Disgusting.
“For example, a paper published in Nature Climate Change last week by a team from the University of New South Wales led by Matthew England showed that climate models that accurately captured the surface warming slowdown (dark red & blue in the figure below) project essentially the same amount of warming by the end of the century as those that didn’t (lighter red & blue).”
This isn’t a new argument. The problem is that it ignores the most likely possibility, which is that all the models are wrong. Their only answer to this is “but we can hindcast!” Well, great, but you can’t forecast. At all.