
Bishop Hill writes:
There is video available here of a lecture given by Professor Mike Hulme entitled “How do Climate Models Gain and Exercise Authority?”. Hulme asks whether deference towards climate models is justified and whether we should have confidence in them. I think the answer is “We don’t know”.
http://www.crassh.cam.ac.uk/page/195/media-gallery.htm
NOTE: Chances are the volume of heavy WUWT induced traffic may “crash” the CRASSH server, bookmark for later if you don’t get a response.
Sorry but I’m new to HTML tags and the post above is in reference to:
“DISINTEGRATED ICE SHELVES: DISINTEGRATION DATES
* Wordie Ice Shelf: March 1986
* Larsen A Ice Shelf: January 1995
* Larsen B Ice Shelf: February 2002
* Jones Ice Shelf: 2008
* Wilkins Ice Shelf: March 2008”
If the ice shelves are disintegrating during WINTER, it is not the SUN or CO2.
The most normal thing to do is to fall in love with your own simulation model. This is perfectly understandable of course; you work for so long with your baby equations and the numerical implementation that any objections to your beautiful forecasts become insulting.
Sometimes numerical simulations are useful: when they landed on the Moon the first time, the trajectory of the landing vehicle had been accurately predicted. However, for a simulation to be useful, the model must first be validated; no nuclear engineer would trust reactor simulations that sometimes fail.
Simulation models are validated when they are able to predict future or unknown empirical observations over and over again. This is not the same as the (untrue) idea that a theory can only ever be falsified: simulation results that agree reasonably well with the real world validate the model, if this happens sufficiently many times.
The tricky issue is deciding how many simulations must agree with experiment before we accept that we have a valid simulation model. I would say a couple of hundred, or a thousand, would be required. Note that this must be against unknown or future data to which the simulation model was not calibrated.
Most of the thermal mass is in the oceans, the question is then, for how many years would we require that a climate simulation should predict reasonably well the ocean heat content for the model to be validated? Take a close look at this curve:
http://www.climate4you.com/SeaTemperatures.htm#Sea surface temperatures
We note that the characteristic time constant for temperature oscillations in the oceans is something like 5 or 10 years: each bump in the curve lasts for about 5 years before the next bump appears. Let us then imagine that we require 200 predictions to come true before the model is validated. This means we would have to wait at least 1000 years before we could validate a climate model; yes, the thermal transients in the oceans are that slow.
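The waiting-time arithmetic above can be sketched in a couple of lines. This is only a back-of-envelope illustration of the commenter's own numbers (one independent prediction per ~5-year ocean "bump", 200 required successes), not any established validation methodology:

```python
# Back-of-envelope arithmetic for the validation waiting time argued above.
# Assumes each independent prediction spans one ocean "bump" (roughly one
# 5-year time constant); both numbers come from the comment, not a standard.
def years_to_validate(n_predictions: int, time_constant_years: float = 5.0) -> float:
    """Minimum years of new, uncalibrated data needed to score
    n_predictions independent predictions against observations."""
    return n_predictions * time_constant_years

print(years_to_validate(200))  # 1000.0 years at a 5-year time constant
```

With a 10-year time constant the wait doubles, which is the point of the comment: ocean thermal transients set the clock, not computing power.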
For the simulation engineer this is not easy to realize – the improvements, the modifications, the software quality and the simulation results have become an obsession. Yes it is perfectly normal to become blinded by your own simulation model. In academia this can go on forever, a professor can improve a useless and erroneous simulation model his entire life. However, in a professional commercial company this cannot happen easily and it is common to require that simulation models are validated.
I think you have figured out what I mean.
Let’s be logical. If the answer is “we don’t know,” then – by definition – we have no business reposing great faith in them.
Climate Science (Roger Pielke Sr.) reports on a new study by a group of hydrologists that compares climate model results with observations:
http://pielkeclimatesci.wordpress.com/
The abstract of their paper says:
We compare the output of various climate models to temperature and precipitation observations at 55 points around the globe. We also spatially aggregate model output and observations over the contiguous USA using data from 70 stations, and we perform comparison at several temporal scales, including a climatic (30-year) scale. Besides confirming the findings of a previous assessment study that model projections at point scale are poor, results show that the spatially integrated projections are also poor.
I have yet to find a single Climate Scientist that claims the computer models accurately account for the climate system.
This alone makes the “Majority of Scientists Agree” assertion transparent.
Computer programs can usually model an object IF it’s fully defined AND its methods are fully understood AND its properties are completely defined AND it’s programmed correctly AND it’s capable of being run in an acceptable timeframe AND it’s proved correct via testing… this is called object-oriented design and programming.
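The chain of conditions above can be written out as a literal conjunction. A minimal sketch follows; every field name here is invented for this comment, not taken from any real library or methodology:

```python
from dataclasses import dataclass

# Hypothetical checklist mirroring the AND-chain in the comment above.
# All field names are illustrative inventions.
@dataclass
class ModelledObject:
    fully_defined: bool
    methods_understood: bool
    properties_defined: bool
    programmed_correctly: bool
    runs_in_acceptable_time: bool
    proved_by_testing: bool

def can_be_modelled(obj: ModelledObject) -> bool:
    # A single failed condition is enough to sink the whole model.
    return (obj.fully_defined and obj.methods_understood
            and obj.properties_defined and obj.programmed_correctly
            and obj.runs_in_acceptable_time and obj.proved_by_testing)

reactor = ModelledObject(True, True, True, True, True, True)
climate = ModelledObject(False, False, False, True, True, False)
print(can_be_modelled(reactor))  # True
print(can_be_modelled(climate))  # False
```

The design point is that trustworthiness is conjunctive: it only takes one unmet condition, and for the climate (or the economy) several are unmet at once.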
If you use Windows on your PC, for example, you will know how much “respect” and “authority” you should grant this model operating system which has been evolving for the last 25 years and still requires “fixes” and “patches” as new issues are continually identified.
Now go back to the first paragraph and look at all those conditions that must be met… now think about all the complexities, uncertainties and unknowns associated with the climate or the economy… now you know how much “respect” and “authority” you should grant a computer model.
The question: How do Climate Models Gain and Exercise Authority?
The answer: Not by being correct… so don’t grant them any authority.
Remember: Garbage In – Garbage Out applies to design, programming and data.
RAVEENDRAN NARAYANAN says:
October 22, 2010 at 11:22 am
“HOW CLIMATE IS CHANGING ?
Massive Arctic ice island drifting toward shipping lanes…”
=============
I can’t quite make out where your book is coming from, or going,
but I like the shameless plug.
If it is bad news, I’m sure it’s my fault.
PS
There is an established marketing adage used to sell Information Technology: If you can’t convince them then confuse them. This technique works well especially when you incorporate FUD – Fear, Uncertainty and Doubt. These techniques are pivotal when it comes to selling Global Warming and Climate Change… although the more common adage is: Baffle them with science.
deference towards climate models is justified and whether we should have confidence in them
=====================================================
Absolutely not…
…if these slackers could really predict the future, they would put it to better use
…and pick lotto numbers
Given his background, Hulme went as far as he could in a public setting to say that one should not put all one’s faith in the models as they simply are inadequate and not up to the job.
This was not a ringing endorsement of the models or of the IPCC’s use of them. It’s time that university press releases and the media adopted the same approach.
PPS
A booted and suited Al Bore always reminds me of a good old-fashioned computer salesman who really knows how to sell ice to the Eskimos… but doesn’t know much about his bits and bytes… although he knows where he wants to stick his digits…
In real sciences, a model gains authority by making predictions. Then these predictions are compared to the real world. If the model’s predictions are confirmed by measurement or observation, the model gains authority. If the predictions are not confirmed, the model is discarded or fixed until it comports with reality.
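The predict-compare-discard loop described above can be sketched in a few lines. The tolerance and the sample numbers here are invented purely for illustration:

```python
# Minimal sketch of "gain authority or be discarded": a model keeps its
# authority only while every prediction matches observation to within a
# tolerance. The tolerance and data are illustrative assumptions.
def retains_authority(predictions, observations, tol=0.5):
    return all(abs(p - o) <= tol for p, o in zip(predictions, observations))

print(retains_authority([1.0, 2.0, 3.0], [1.2, 1.9, 3.4]))  # True: confirmed
print(retains_authority([1.0, 2.0, 3.0], [1.2, 0.9, 3.4]))  # False: discard or fix
```

In practice a real validation test would use a statistical skill score rather than a fixed tolerance, but the asymmetry is the same: observations judge the model, never the reverse.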
Unfortunately, climate models have made predictions that have not been confirmed, like increasing temperatures with increasing CO2 for the past 10-15 years, a reduction in snow falls, or accelerating sea level rise.
The models are useful tools to generate scary press releases and drive power grabbing agendas.
Sorry-EPIC FAIL !
Doug in Seattle says: October 22, 2010 at 9:11 am
As someone who works with models, I am very concerned by the faith put in them by policy makers.
It seems to have slipped under the US radar that there are municipal elections here in Ontario; dunno why that would be 🙂 Anyhow: I notice that those who will be elected are doing and saying nothing that might offend anyone. This is one sure way for a Western politician to get elected. However, if they really, really, really must SAY or DO something, then they prefer to wrap it up in the flag of rationalism and inevitability.
This is why “Policy Makers” put faith in computer models. Not because they believe them, but because the models offer a fig leaf of scientism for their hare-brained schemes. If it all goes pear-shaped, then they are not to blame, since they were merely following THE BEST SCIENTIFIC ADVICE AT THE TIME.
This should be made a Friday Funny LOL
hyper.real says:
October 22, 2010 at 9:32 am
“How do Climate Models Gain and Exercise Authority?”
Only with considerable semantic and grammatical perversion, it would seem.
Billy Ruff’n says: October 22, 2010 at 1:56 pm
You do realise that Jan-Feb is the height of summer in the southern hemisphere, right?
Richard A: “they can’t be falsified because their results will always be adjusted ad-hoc to perfectly ‘predict’ the past”.
More than that, past temperatures are ‘adjusted’ because the various models failed to hindcast correctly.
Hulme has a very particular approach towards saving his ass.
Got to love one of the opening statements by Hulme. At the 5:24 minute mark he says, and I quote:
“Climate models are essential for the detection and attribution of anthropogenic climate change.”
Pardon me, perhaps I am a simple lad, but how does a climate model “detect” anything? Back in my day we used instruments in laboratories or even out in the field to “detect” things. Must be my age showing. So these new-fangled models do the entire work of science now…
“Who’d buy a used car from that man [M. Hulme] ?
Brgds/Sweden
//TJ”
I would, but I certainly wouldn’t expect it to be reliable from one day to the next 😉
I found Hulme’s presentation to be more even-handed than I had anticipated, though admittedly that was not a high hurdle to top. He did at least leave open the question of whether climate models deserve respect, and, in laying out the reliability issues as he sees them, he left the answers mostly for the audience to provide for themselves. He did, however, mostly miss the 800-pound gorilla in the room: in most modelling applications outside climate science, models are only deemed useful if their output can be tested, rather immediately, against experimental observations. That is a quality GCMs don’t possess, given both the timescale of their projections and the relative inadequacy of the observational data they must be judged against.
Confidence in models is a subject I can comment on with some confidence. Any model is only as good as the data and assumptions used as input, coupled with the modeller’s detailed knowledge of how all parameters interact to arrive at a modelled forecast. In the great majority of cases a model is useless at making accurate forecasts unless all the preceding parameters are also known accurately.
What a model is good for is giving an indication of what the future may look like. In this regard it is a tool, but not a decision maker on its own.
An example is a model I produced for forecasting river flows and lake storage in a hydro-electric catchment. It uses known river flows and lake storage, forecast rainfall and temperature changes, modelled snowpack, and estimated pseudo-catchment storage. So very little of the input is actual data, and some is only guessed. Therefore the resultant output can never be fully relied on.
It has a very high degree of confidence (better than 99% accuracy) up to 12 hours ahead. It is also very reliable (95% accuracy on average) up to 48 hours ahead, with a declining degree of accuracy after that. The model shows a prediction out to 8 days ahead, but as the writer of this model I would say it inspires very poor confidence that far out.
So imagine my frustration when operators, and even civil engineers, make decisions based on what the model tells them might happen in a week’s time. It is only good for catchment management 12 hours ahead and energy management 48 hours ahead, and on that basis it is a very good tool.
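The usable-horizon idea in this comment can be made explicit in code, so that a caller cannot silently misuse the long-range display for decisions. This is a hypothetical sketch hard-coding the accuracy figures quoted above (99% at 12 hours, 95% at 48 hours); it is not the actual hydro model:

```python
# Illustrative only: encode the forecast horizons quoted in the comment
# so the 8-day display cannot be mistaken for a decision-grade forecast.
def usable_for(lead_hours: float) -> str:
    if lead_hours <= 12:
        return "catchment management"  # ~99% accuracy, per the comment
    if lead_hours <= 48:
        return "energy management"     # ~95% accuracy, per the comment
    return "indication only"           # declining accuracy; do not rely on it

print(usable_for(6))    # catchment management
print(usable_for(36))   # energy management
print(usable_for(168))  # the 8-day display: indication only
```

The same discipline, stating up front which horizons a model's output may be used for, is exactly what the comment argues is missing when climate model projections are handed to decision makers.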
Climate models are also a tool if they are used as an indication of what might happen if the primary component changes, but everything else is constant. Of course we know that everything else isn’t constant, so no reliance can be placed on climate models at all.
Hulme is playing the role of loyal opposition in order to gently bring the outsider criticism to the insider audience.
This is helpful, but we should not forget how he worked very hard to use his perversion of social science methodology to maintain the exclusion of sceptics and to discredit their arguments in ad hominem attacks.
(see here:
http://enthusiasmscepticismscience.wordpress.com/2010/06/25/the-anatomy-of-virtuous-corruption-disagreements-permissible-unmentionable-and-inconceivable/)
This (unconscious?) strategy is most alarmingly displayed in Hulme’s review of a book by Singer and Avery:
http://www.guardian.co.uk/society/2007/mar/14/scienceofclimatechange.climatechange
This keynote, at a conference on modelling and uncertainty, elaborates one aspect of another of Hulme’s loyal-opposition positions, namely his position on the discussion of risk assessments in AGW, especially in the Stern report. Hulme is now as important in perverting the discussion of risk as Schneider was in the late 1980s and 1990s. A fundamental error in his discussion of risk is exemplified in his ‘Five Lessons’:
http://www.mikehulme.org/wp-content/uploads/the-five-lessons-of-climate-change.pdf
There he asks the reader to substitute ‘resource’ for ‘risk’, and I think he is trying to say that we need not load climate change with negative value, as we do when we talk of its ‘risks’. Well, indeed! But yet he continues to use ‘risk’ as a contraction of ‘the risk of disasters’, or as, itself, a euphemism for ‘adversity’, ‘calamity’ or ‘disaster’. Thus, risk = risk of an adversity.
More confusion is introduced in his book Why We Disagree… by conflating the risk of things happening with their actually happening. Sometimes this is the risk of a hurricane confused with the event of one. Other times risk is treated as some objective, external thing that we can come to know. With some thought, we can realise that such an objective risk is an illusion.
This objective risk is invoked in statements like these:
“When we don’t know for sure what the risks associated with climate change will be” [p114]
“Because we are uncertain about many of the risks that climate change may cause” [p116]
“It is actually about whether or not we judge the (largely unknown) risks associated with climate change to be so potentially large and undesirable…” [p124]
“…the physical (and hence economic) damage caused by climate change, and the risk of catastrophic change, are poorly understood by science.”
Imagine we were scientifically discussing the risk of a full eruption of a volcano after some rumblings. The risk at any time can be assessed only on what is known at that time. The certainty is the event of the eruption; that is not a risk! If it does not erupt (and all goes quiet), we cannot say in hindsight that there was, in fact, no risk of eruption at the time the assessment was made. The risk assessment depends on the knowledge (scientia) of the time; thus risk was assessed differently in AD 79, and in 1883, than it would be today. It is not that we are ‘not sure of the risks’, because the risks are already in what we know; rather, risk is about being not sure of the outcomes, and the extent to which we are unsure of the outcome is the extent of the risk.
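The point that a risk assessment is a function of the knowledge held at assessment time, not of the eventual outcome, can be made concrete with a toy example. All the evidence labels and percentages here are invented for illustration:

```python
# Toy model: the assessed risk of eruption depends only on the evidence
# available when the assessment is made. The numbers are invented.
def assessed_risk_percent(evidence: set) -> int:
    risk = 1                            # small background risk
    if "rumblings" in evidence:
        risk += 20
    if "gas_emissions" in evidence:
        risk += 30
    return risk

# Observers in AD 79 and observers today assess different risks because
# they hold different evidence; neither assessment is refuted by whether
# the volcano eventually erupts.
print(assessed_risk_percent({"rumblings"}))                   # 21
print(assessed_risk_percent({"rumblings", "gas_emissions"}))  # 51
```

Note that the function takes no "did it erupt?" argument at all: hindsight about the outcome is simply not an input to the assessment, which is the argument being made above.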
The most absurd and unscientific invocations of risk are often marked by the expression “potential risk” [eg p223 and common in Schneider]. What proposed outcome, I ask, is not a potential risk? An impossibility? To say that a climate outcome is not impossible is a truth trivial and so almost scientifically useless — and yet this expression has been very useful rhetorically.
The most insulting and condescending discussion of risk is where Hulme presumes that, when folks do not respond to claims of risk or ‘potential risk’, they are not acting reasonably. In looking for all sorts of psycho-social reasons for ‘ignoring risks’, he discounts the possibility that folks have either assessed the risk and found it acceptable, or that they simply do not accept the expert’s opinion of the level of risk.
Do not swim! Unfiltered water, do not drink! Keep out of reach of children! Is everyone who defies an expert warning doing so unreasonably? How many times have health warnings been wrong? A much more difficult question would be to ask whether folks have good reason to be sceptical of warnings given on the authority of state-instituted science.
Why does this matter? It matters because this sloppy and confused discussion of risk by Hulme and Schneider, as well as Stern, has been used to obscure a reasonable discussion of the acceptability of various risks to the environment, and especially the risks of CO2 emissions. This obscuring has served the ends of raising alarm beyond what is justified by the evidence. Promoting this confusion is an abuse of science and an abuse of the people’s trust in science. We should hold publicly funded scientists such as Hulme accountable for propagating such corrupted reasoning. The value of Lomborg’s scepticism was its sober, clear and balanced discussion of relative risks in an attempt to push aside these puffed-up clouds of confusion, and return us to a clearer view of the (however uncertain) state of what we know by our science.
If this is the best “they” can do, “they” are a sorry lot indeed.
Quite frankly, Hulme is making the point (about a third of the way in) that “pretty pictures lend public credibility”. Not even deserving of PITY. More like complete MOCKING and derision are in order for such an implication.
I’m sure the US Military ran models that showed how the Iraq and Afghanistan wars would run smoothly. And, there was no MWP involved, nor tree rings.
Just saying.
Boy, did that ever take me back 40 years to university lectures…..
Analyzing the caffeine dosage required to follow along was more interesting than making sense of the continuous reference to models and how they depend on input parameters to validate the input parameters….
Recently heard outside the lecture hall:
“My brain hurts!”
Prof. Enid Gumby.