One of the biggest issues facing climate science today is the divergence of reality (observations) from the model output. The draft image from IPCC AR5 (seen below) clearly illustrates this, as does the analysis done by Dr. Roy Spencer. WUWT regular Tom Trevor wrote this short paragraph in comments, and it seemed prescient to me, so I thought it was worth elevating to Quote of the Week.
You know when I was a boy I would build models, I wasn’t very good at building models, but I built them anyway so I could play with them afterwards. I would pretend that the models were real ships or planes, but I always knew they weren’t even close to real ships or planes. For some reason these people can’t seem to tell the difference between a climate model and the real climate.
Original comment here
The IPCC AR5 draft models-vs-reality image:
Dr. Roy Spencer’s analysis of models-vs-reality:
![CMIP5 73 models vs. observations, 20N-20S mid-troposphere, 5-year means](http://wattsupwiththat.files.wordpress.com/2013/06/cmip5-73-models-vs-obs-20n-20s-mt-5-yr-means11.png?w=300&resize=300%2C225)
Why are we wasting so much time on this rubbish?
You got something better? Maybe Fantasy Football?
Or let’s try UN reactions to proven Russian incursions into Ukraine. You play the part of Obama.
dunno, why are you?
If any one of the arrows in this graphic has a greater variability than man-made total CO2 energy trapping, then man-made global warming is extremely unlikely… I especially like the magnetosphere-driven ionospheric convection…
Ionosphere-Thermosphere Processes (public domain, NASA)
http://commons.wikimedia.org/wiki/Category:Solar_wind#mediaviewer/File:Ionosphere-Thermosphere_Processes.jpg
Thanks, an interesting idea.
“One of the biggest issues facing climate science today is the divergence of reality (observations) from the model output.”
I think that one of the biggest issues facing popular scientific culture today is the divergence of reality (observations) from the accepted cultist dogma.
BTW Andrew, there’s a fissure eruption currently in progress in Iceland; it began around ten minutes after midnight, local time.
http://sd-1.archive-host.com/membres/up/17530524838458898/bardur2.png
http://i.imgur.com/vxD0PiY.png
A red alert is currently in effect for aviation.
Reality? Try McKibben. Much more, including how the World Council of Churches, representing 580 million Christians, will try to persuade Pope Francis to get the Vatican to divest from fossil fuels, at the link:
VIDEO/TRANSCRIPT: 28 Aug: Democracy Now: As Obama Settles on Nonbinding Treaty, “Only a Big Movement” Can Take on Global Warming
As international climate scientists warn runaway greenhouse gas emissions could cause “severe, pervasive and irreversible impacts,” the Obama administration is abandoning attempts to have Congress agree to a legally binding international climate deal…
This comes as a new U.N. report warns climate change could become “irreversible” if greenhouse gas emissions go unchecked…
We speak to 350.org founder Bill McKibben about why his hopes for taking on global warming lie not in President Obama’s approach, but rather in events like the upcoming People’s Climate March in New York City, which could mark the largest rally for climate action ever…
BILL McKIBBEN: The new U.N. report is more of the same. In a sense, it’s the scientific community, through the Intergovernmental Panel on Climate Change, telling us what they’ve been telling us now for two decades, that global warming is out of control and the biggest threat that human beings have ever faced. They’re using what was described as blunter, more forceful language. At this point, you know, short of self-immolation in Times Square, there’s really not much more that the scientific community could be doing to warn us…
BILL McKIBBEN:…We need to be doing what the Germans have done. There were days this summer when the Germans were getting 75 percent of their power from solar panels within their borders…
http://www.democracynow.org/2014/8/28/as_obama_settles_on_non_binding
another contender for “quote of the week”!
28 Aug: WaPo Letters: How to change the climate on global warming
From: Elliott Negin, Washington
(The writer is director of news and commentary for the Union of Concerned Scientists)
BLAH BLAH
If The Post is serious about clearing up confusion about global warming, it would follow the lead of the BBC and stop publishing scientifically indefensible statements.
http://www.washingtonpost.com/opinions/how-to-change-the-climate-on-global-warming/2014/08/28/408d0340-2ca0-11e4-be9e-60cc44c01e7f_story.html
“Two important characteristics of maps [or models] should be noticed. A map [or model] is not the territory it represents, but, if correct, it has a similar structure to the territory, which accounts for its usefulness.” – Alfred Korzybski
Unfortunately the climate scientists are not good at building models or maps.
It is, after all, an impossible and futile task to model the climate. Why?
Because climate systems generate INTERNAL randomness, it will NEVER be possible to predict them.
The nature of all systems that generate internal randomness is that the only way to see what happens next is to observe them in real time.
This discovery was made by Stephen Wolfram in his groundbreaking book, A New Kind of Science; see Chapter 2 for the mathematical (and computer science) proof.
All climate models will always fail due to this newly discovered INTERNAL randomness.
Then there is the external randomness that comes from Chaos Theory.
That’s two kinds of randomness, internal randomness and chaotic randomness, which together mean that it is not possible to come up with an accurate prediction of the Earth’s climate.
The only way to know what the climate of the Earth is going to do is to observe and measure it as it actually happens in real time.
As a result, climate models will always fail, due to first principles of chemistry, physics, computer science, and mathematics, and to the fundamental laws of Nature.
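For readers unfamiliar with Wolfram’s example, the textbook case of deterministic “internal randomness” is the elementary cellular automaton Rule 30: a trivial, fully deterministic update rule whose centre column nevertheless passes standard randomness tests. A minimal sketch (illustrative only, not taken from any climate code):

```python
# Rule 30 elementary cellular automaton: new cell = left XOR (centre OR right).
# Fully deterministic, yet the output looks random; per Wolfram, the only
# way to learn what it does is to run it step by step.

def rule30_step(cells):
    """Apply one Rule 30 update to a row of 0/1 cells, wrapping at the edges."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

width, steps = 63, 30
row = [0] * width
row[width // 2] = 1                      # start from a single live cell

for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```

Whether the real climate is computationally irreducible in this sense is, of course, exactly what is in dispute.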
PWL makes a very important point. Why are all the models so much at variance with the real data?
There must be a common weakness. Is it the boundary values used, or the values for the constants in the radiation equations? Is it that there is some rule, a combination of chaos and information theory, that says that a model with more than n variables and m (m ≤ n) boundary values is inherently unstable?
There is an opportunity for an enterprising graduate student to make a name for himself or herself by analysing exactly why the models are inadequate, rather than just producing more of them.
As a one-time user of Mathematica I have admired Wolfram's work and have been waiting for second-hand copies of his book, quoted above, to drop at Amazon or AbeBooks to a level that I can afford.
mikewaite
You ask
The short answer is that each climate model is tuned in a unique way so that its output matches past variations in global average surface temperature anomaly, and the tuning is achieved by using a high value of climate sensitivity to greenhouse gas concentrations, which are expressed as carbon dioxide (CO2) equivalence.
It seems I need to provide the following explanation yet again.
None of the models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al.
http://www.pnas.org/content/early/2011/07/25/1018526108.full.pdf?with-ds=yes )
would make every climate model provide a mismatch of the global warming it hindcasts and the observed global warming for the twentieth century.
This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on:
1. the assumed degree of forcings resulting from human activity that produce warming, and
2. the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.
More than a decade ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.
The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which was greater than was observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.
And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
(ref. Courtney RS An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).
More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
(ref. Kiehl JT, Twentieth century climate model response and climate sensitivity. GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.
He says in his paper:
And, importantly, Kiehl’s paper says:
And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.
Kiehl’s Figure 2 can be seen here
Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:
It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m^2 to 2.02 W/m^2
but
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^2 to -0.60 W/m^2.
In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.
So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.
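To make the tuning trade-off concrete, here is a minimal numerical sketch of the linear energy-balance picture (all numbers below are invented for illustration; nothing here is taken from Kiehl’s paper or from any actual model):

```python
# Toy linear energy-balance relation: delta_T = lam * (F_ghg + F_aerosol),
# where lam is a sensitivity parameter (K per W/m^2) and the F terms are
# forcings (W/m^2). All values are invented for illustration only.

observed_warming = 0.7       # K of 20th-century warming (illustrative)
F_ghg = 2.5                  # assumed greenhouse-gas forcing, W/m^2

# Two hypothetical models with very different sensitivities:
for lam in (0.4, 0.8):
    # Solve for the aerosol 'fiddle factor' that makes the hindcast match.
    F_aerosol = observed_warming / lam - F_ghg
    print(f"lam = {lam:.1f} K/(W/m^2): needs F_aerosol = {F_aerosol:+.2f} W/m^2")

# Roughly -0.75 W/m^2 for the low-sensitivity model and -1.6 W/m^2 for the
# high-sensitivity one. Both hindcast the same 0.7 K, yet they are different
# climate systems and will diverge wherever aerosols and GHGs evolve differently.
```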
Richard
Thanks for the writeup Richard. This is what I was referring to when I told “Sonic” yesterday that modellers just “turn the knob”. He retorted that they solve equations, not “turn a knob”.
But, they make assumptions and those assumptions are turning the dial to get the desired result on the backtest while keeping the “run hot” in the forecast going forward. Of course, I don’t know the specifics as well as you.
What’s your opinion on anthropogenic aerosols? Are they short-lived in the atmosphere and mainly a Northern Hemisphere phenomenon? Wouldn’t changes in aerosols lead to a dichotomy between the hemispheres?
Mary Brown
You ask me
Anthropogenic aerosols are “short lived” because they are washed out of the air by rain. Typically an anthropogenic aerosol emission will stay in the air for less than a month and, therefore, their concentration in the air is associated with sites of the emissions.
There is a “dichotomy between hemispheres” as a result of the different ratios of land to ocean in the hemispheres. Hence, global temperature rises by 3.8°C in six months each year (and falls by the same amount in the other six months); see
http://en.wikipedia.org/wiki/Season#mediaviewer/File:Jones_et_al._Surface_air_temperature.jpg
I hope this answer is what you wanted.
Richard
Three main reasons.
1) We don’t understand all of the interactions that create weather/climate. As a result lots of assumptions are built into the models.
2) Even if we did have a full understanding, we don’t have enough computer power to adequately model the weather/climate. As a result lots of parameterizations are introduced that are simpler to calculate but not as accurate (see the sketch after this list).
3) We don’t have adequate data to feed into the models. This is especially true when attempting to “tune” the models to replicate past climate.
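On point 2, a parameterization in this sense replaces a process the model cannot afford to resolve with a cheap tuned formula. A minimal sketch of the idea (both functions are hypothetical, not drawn from any GCM):

```python
# Toy parameterization: stand in for an expensive sub-grid computation
# with a cheap formula containing a tunable constant. Hypothetical only.
import math

def subgrid_process(x):
    """Stand-in for the expensive fine-scale physics."""
    return math.tanh(2.0 * x) + 0.1 * math.sin(15.0 * x)

def parameterized(x, a=1.8):
    """Cheap clipped-linear approximation with one tuned constant 'a'."""
    return max(-1.0, min(1.0, a * x))

for x in (0.0, 0.2, 0.5, 1.0):
    exact, cheap = subgrid_process(x), parameterized(x)
    print(f"x={x:.1f}  exact={exact:+.3f}  param={cheap:+.3f}  "
          f"error={cheap - exact:+.3f}")
```

The constant `a` plays the role of a tuning knob: it can be chosen to minimise the error over past data, with no guarantee it remains adequate outside that range.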
Various “It’s almost too late” warnings are popping up in news outlets based on the latest leaked IPCC reports. No mention of the accuracy of previous predictions, just that current predictions are frightening. Be scared… I guess.
Here was a headline from one such article: “Climate Change Scientists Warn: We’re Almost Too Late”.
What is interesting is: is there any such thing as a “Climate Change Scientist”? Aren’t they supposed to be “Climate Scientists”, not “Climate Change Scientists”? The bias in using “Climate Change Scientists” is painfully obvious; personally, I think they should be called “Climate Doom Scientists”.
This might have been foreseeable decades ago when we had ‘social scientists’ fretting about the effect of violent cartoons (Felix the Cat, Daffy Duck, Elmer Fudd, Tom & Jerry, The Coyote and The Roadrunner, etc) on the minds of small children. The children could obviously tell the difference between reality and cartoons, but the Liberal Arts graduates could not. They still cannot. Only now we see the results of the proliferation of people who are unable to properly identify “Reality.”
You should compare apples to apples. Use Figure 9.8a from the actual AR5 compared to the draft graph above. They replaced the draft above (which was, of course, laughable — look at the “error bars” on the actual temperature, which somebody just drew in by hand) with a spaghetti graph that obscures just how badly individual models in CMIP5 do against e.g. HADCRUT4 — presented without any error bars whatsoever — to make it very, very difficult to assess the quality of the individual models in CMIP5.
If they had done this honestly, with every model in CMIP5 drawn against HADCRUT4 one at a time, people might have been tempted to ask why we believe any of the models when they have enormous per-model temporal variance compared to reality in spite of already being averages over many perturbed-parameter ensemble runs — they have the wrong variance, the wrong autocorrelation, the wrong mean as averages, and for a result that is already an average over (say) 100 runs that means that the standard deviation of the actual model runs is approximately ten times larger (their variance roughly a hundred times larger).
To put it bluntly, if one compared the model runs from any model to reality one at a time, people would laugh the model out of the room. What you are portraying above is the statistical band-aid on a truly spectacular failure, a failure of monumental proportions, where even hiding the failure as they attempt to do with multi-run averages, and then averaging the averages and pretending that the variance of the model runs generated in this way is somehow relatable to the central limit theorem, is utterly without foundation in the theory of statistics. If one attempted to actually present the tower of Bayesian priors that goes into each step of the formation of these super-averages and then do a posterior probability analysis of those priors, one would simply conclude that the priors are almost certainly incorrect. Or in lay terms, that the models are bullshit, useless for predicting anything at all.
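The central-limit-theorem point is easy to verify numerically: the standard deviation of an n-run average is smaller than that of a single run by a factor of sqrt(n). A sketch with synthetic noise (purely illustrative; it assumes independent, identically distributed runs, which is itself part of what is in dispute here):

```python
# Averaging n independent runs shrinks the standard deviation of the
# average by a factor of sqrt(n); so a plotted 100-run average hides
# single-run scatter roughly 10 times larger. Synthetic noise only.
import random
import statistics

random.seed(1)
n_runs, n_trials = 100, 2000
single = [random.gauss(0.0, 1.0) for _ in range(n_trials)]
averaged = [statistics.fmean(random.gauss(0.0, 1.0) for _ in range(n_runs))
            for _ in range(n_trials)]

print(f"std of single runs:      {statistics.stdev(single):.3f}")   # ~1.0
print(f"std of 100-run averages: {statistics.stdev(averaged):.3f}") # ~0.1
```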
rgb
The spaghetti graph is the only one of the two presented here that has any relevance to what the models projected. The IPCC’s is deceptive to say the least.
Spencer’s spaghetti graph is the one to note. I smile every time I see it!
Forecast vs Observed… So simple yet so dangerous
I once worked on a forecasting project, at taxpayers’ expense, with eight other research groups. Our job was to create a forecast model that would become operational and solve a pesky problem.
When we got to our conference we suggested that we run everyone’s systems, calculate their skill scores, and investigate how they might be used in a consensus approach, or simply select the best one. The other 8 groups all refused to have their skill scores quantified. Instead they convinced the government funding representatives that our work was not very good, that more research was needed, and that another round of grant money should be handed out.
So our perfectly good, operationally ready model was ignored and another million dollars was handed around the table. We quit the project and have never sought grant money again. Now we stick to real-world projects where forecast minus observed actually matters.
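For readers unfamiliar with the term: a skill score measures a forecast’s improvement over a reference forecast such as climatology or persistence. A minimal sketch with made-up numbers (the commenter does not say which score was proposed, so a mean-squared-error score is assumed here):

```python
# Mean-squared-error skill score: 1 - MSE(forecast) / MSE(reference).
# 1 is a perfect forecast, 0 is no better than the reference,
# negative is worse. All data below are made up for illustration.

def mse(pred, obs):
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

observed    = [10.2, 11.0, 9.5, 12.1, 10.8]
forecast    = [10.0, 11.3, 9.9, 11.8, 10.5]
climatology = [10.7] * len(observed)     # reference: long-term mean

skill = 1.0 - mse(forecast, observed) / mse(climatology, observed)
print(f"MSE skill score vs climatology: {skill:.2f}")
```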
I’ve said it before and I’ll say it again.
Reality is clearly faulty. It’s time to scrap the whole thing. I’m pretty sure we don’t need it for anything, but if we do, we’ll have to start from scratch and build a new one.