This is something I never expected to see in print. Climate modeler Dr. Gavin Schmidt of NASA GISS comments on the failure of models to match real world observations.
Source:
[ http://twitter.com/ClimateOfGavin/status/340605947883962368 ]
While the discussion was about social models, it is also germane to climate modeling since they too don’t match real world observations. Below is an example of climate models -vs- the real world; something’s clearly not right.
Graph source: IPCC AR5 draft
Is it maths or assumptions (or both) that cause the divergence?
UPDATE: In comments, I had a discussion with reader “jfk” which I think is worth sharing. He made some good points, and it helped hone my own thinking on the issue:
jfk says: Submitted on 2013/06/01 at 8:40 am
Well, I still think it’s a bit unfair to Gavin (and I am no fan of his). But hey, it’s Anthony’s site.
For a good review of the many failures of statistical modeling in social sciences (and one or two successes) see the book “Statistical Models: Theory and Practice” by David Freedman. Whether or not climate modeling has devolved to the point where it is social science rather than physics, well, I hope it’s not quite that bad…
REPLY: And I think it is more than a bit unfair to us, that if he believes what he tweets, he should re-examine his own assumptions about climate modeling. We have economies, taxes, livelihood, etc. hinging (or perhaps failing) on the success of these models to predict the climate in the future. The models aren’t working, and Dr. Schmidt knows this. Unfortunately his job is tied to the idea that they do in fact work. I feel no regrets at making this comparison front and center. – Anthony
UPDATE2: RussR in comments, provides this graph below showing Hansen’s modeled scenarios against real world observations. He writes:
Here’s an Excel spreadsheet comparing observed temperatures vs. model projections from Hansen (1988), IPCC FAR (1990), IPCC SAR (1995) and IPCC TAR (2001), in pretty charts.
It can be updated as more observations are added.
https://dl.dropboxusercontent.com/u/78507292/Climate%20Models.xlsx
UPDATE3: Dr. Roger Pielke Sr. adds this in comments.
Climate models are engineering code with quite a few tunable parameters and fitting functions in their parameterizations of clouds, precipitation, land-atmosphere interfacial fluxes, long- and short-wave radiative flux divergences, etc. Only a part of these models are basic physics representations – the pressure gradient force, advection, the Coriolis effect.
The tunable parameters and fitting functions are developed by adjustment against real-world data and higher-resolution models (which are themselves engineering code), but only for quite a small subset of real-world conditions.
I discuss this issue in depth in my book
Pielke Sr, R.A., 2013: Mesoscale meteorological modeling. 3rd Edition, Academic Press, in press. http://www.amazon.com/Mesoscale-Meteorological-Modeling-International-Geophysics/dp/0123852374/ref=sr_1_2?ie=UTF8&qid=1370191013&sr=8-2&keywords=mesoscale+meteorological+modeling
The multi-decadal global climate model projections, when run in hindcast mode for the last several decades, are showing very substantial errors, as I summarize in the article
http://pielkeclimatesci.files.wordpress.com/2013/05/b-18preface.pdf



“Buzzzz, wrong. Anthony was clear as to what Gavin was Tweeting about, and readers here understood Gavin was referencing social models.”
Their comments indicate otherwise.
I’d like to see the answer to this as well.
Wow. Double fail.
There is no such thing as an error in maths; mathematics is *always* correct, by definition. But the models are not mathematical models. They are physical models, and physical models are *always* in error. *Always*. The question is whether or not the error is larger than the process being modeled. In the case of climate models, the error *is* much larger than the process being modeled (i.e. man-made CO2 vs. natural variation as a cause of the observed increase in temperature following the last ice age).
@ur momisugly Greg Goodman….the solar minimum will likely be in 2017/18, but the neutron min/max does not follow the solar cycle min/max. Yes, there does seem to be a roughly 9-year pattern in it. Another thought from looking at the current neutron data: it looks like the flow will reduce this year, possibly by as much as 15 points. From where it sits now, that would likely be the next low on the graph, although that process should extend into early next year. Will there be another significant Earth event, as there has been at the last 3 lows on the graph? Perhaps around Feb/March of 2014.
Taleb is talking about mathematical models, and the way I read Schmidt’s tweet, he agreed with Taleb to the degree that there are always math errors, but his reply highlighted faulty assumptions as the source of model error.
I don’t know the complete context of the discussion, but I don’t think Taleb would make the challenge unless he could back it up. Schmidt is no dummy either, and it is possible they are talking past each other.
Taleb’s latest book, “Antifragile: Things That Gain From Disorder” is a good read.
People that say, “maths”, instead of just “math” are almost as annoying as people who constantly want to “raise my awareness”.
PiperPaul says:
June 1, 2013 at 11:45 pm
People that say, “maths”, instead of just “math” are almost as annoying as people who constantly want to “raise my awareness”.
Nah! It’s you colonials who get it wrong – I suggest you raise your awareness … 🙂
I’ve walked into the Gavin fan club, but where is the man?
If [self snip] Gavin doesn’t [self snip] say hello to all his [self snip] fans at Watts Up With [self snip] That. Then I’m just not going to tell him how much I love him. I’m kidding! I think he’s full of crap.
All this dances around the central, philosophical question: what are we to make of these models? Should a model be regarded as a sketch, as one made by Newton or Einstein might have been, to motivate a theory or experiment? Or should a model be regarded as data, like an experiment that yields real information? Is it possible to move a model from the former to the latter state by some process of evaluation, testing, refinement, etc.? We are in the infancy of using and understanding models. Before powerful computers, no such thing would have been imaginable. It is going to take a long time before the methods, standards, review processes, openness agreements, and evaluation processes have been developed such that we will even understand what we are talking about with regard to models, including climate models.
This song is for Gavin.
Article title:
A frank admission about the state of modeling by Dr. Gavin Schmidt
The esteemed ‘Doctor’ Schmidt appears to be on a par with the esteemed Doctor O. Winfrey.☺
PiperPaul says:
June 1, 2013 at 11:45 pm
People that say, “maths”, instead of just “math” are almost as annoying as people who constantly want to “raise my awareness”.
Maths is a contraction of mathematics, a noun; math appears to be a contraction of mathematic (usually mathematical), an adjective here being used as a noun.
There are a lot of words used in the US that preserve an older English usage: what was a common language a few centuries ago in both places changed in England and didn’t in the US – “ain’t” is the one I recall being given as an example. So I’ve looked up math, and it fits in with this:
http://oxforddictionaries.com/definition/english/mathematics
“Definition of mathematics
noun
[usually treated as singular] the abstract science of number, quantity, and space, either as abstract concepts (pure mathematics), or as applied to other disciplines such as physics and engineering (applied mathematics): a taste for mathematics
• [often treated as plural] the mathematical aspects of something: James immerses himself in the mathematics of baseball
Origin: late 16th century: plural of obsolete mathematic ‘mathematics’, from Old French mathematique, from Latin (ars) mathematica ‘mathematical (art)’, from Greek mathēmatikē (tekhnē), from the base of manthanein ‘learn’”
Robin says: June 1, 2013 at 8:54 am
I found these UN’s post 2015 plans somewhat telling (if not, well, alarming) as well [see UN word-salad of the day: sustainable development will end poverty]
On the Gavin Schmidt may be seeing the light (and/or the writing on the wall) front …
I thought it was equally telling that while the likes of Mann, Gleick, Weaver, Hansen, Karoly, Ehrlich, and Suzuki gladly (and/or without reading that which they were supporting) endorsed the latest and greatest “Statement” of “unequivocal” all-encompassing doom-and-gloom-must-act-now “Scientists’ Consensus” on “Maintaining humanity’s life support systems in the 21st century”, Schmidt’s name (amongst others) was conspicuously absent. [see Crisis of the week: the biosphere … new “Statement” percolated, circulated and endorsed]
Well, at least it was as of May 21, 2013. It is not entirely certain – or beyond the realm of possibility – that, in the interim, Schmidt may have been persuaded to sign on the new, improved we-are-doomed dotted line.
Myrrh,
It is “Math”. Take a deep breath and calm down, fatty!! 🙂
16 years of no warming has not affected the IPCC reports or the media coverage of climate ‘change’. The carbon dioxide mania continues unabated. A tweet that something is fundamentally incorrect with the IPCC models (which is now obvious to everyone who is following this saga) is not going to stop the mania. It appears the planet is starting to cool, in response to the solar cycle 24 magnetic change. There is record sea ice in the Antarctic (all months of the year), and as Arctic temperatures have started to cool, it appears Arctic sea ice will ‘recover’, if one’s idea of nirvana is a massive amount of Arctic sea ice and extremely cold winters.
The warmists can try to explain a lack of warming with heat hiding in the ocean. It is difficult to imagine what will be the explanation for, and what will be the public reaction to, step cooling.
The paleo climatic record has unequivocal cycles of warming and cooling that correlate with solar magnetic cycle changes.
Antarctic Sea Ice, 2013 compared to 2012 and compared to 1979 to 2008 mean
http://nsidc.org/data/seaice_index/images/daily_images/S_timeseries.png
http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/seaice.recent.antarctic.png
2013
http://ocean.dmi.dk/arctic/meant80n.uk.php
Compare to 2012 (data at above site by year.)
Compare to 1986 (data at above site by year.)
http://www.ospo.noaa.gov/data/sst/anomaly/2013/anomnight.5.30.2013.gif
http://www.agu.org/pubs/crossref/2003/2003GL017115.shtml
Timing of abrupt climate change: A precise clock by Stefan Rahmstorf
Many paleoclimatic data reveal an approx. 1,500-year cyclicity of unknown origin. A crucial question is how stable and regular this cycle is. An analysis of the GISP2 ice core record from Greenland reveals that abrupt climate events appear to be paced by a 1,470-year cycle with a period that is probably stable to within a few percent; with 95% confidence the period is maintained to better than 12% over at least 23 cycles. This highly precise clock points to an origin outside the Earth system; oscillatory modes within the Earth system can be expected to be far more irregular in period.
This graph of Greenland ice sheet temperature over the last 11,000 years (roughly determined from ice core analysis, from Richard Alley’s paper) shows nine (9) Dansgaard-Oeschger (D-O) cycles of warming and cooling. The D-O warming and cooling cycles have intervals between occurrences of 950 years, 1,350 years, and 2,000 years.
The warming that we observed in the 20th century has occurred before.
http://www.climate4you.com/images/GISP2%20TemperatureSince10700%20BP%20with%20CO2%20from%20EPICA%20DomeC.gif
http://www.climate4you.com/
The following is a link to the late Gerald Bond’s paper “Persistent Solar influence on the North Atlantic Climate during the Holocene”. Bond published this paper in 2001.
http://www.essc.psu.edu/essc_web/seminars/spring2006/Mar1/Bond%20et%20al%202001.pdf
Sparks says:
June 2, 2013 at 1:38 am
Myrrh,
It is “Math”. Take a deep breath and calm down, fatty!! 🙂
Twit.
If your models are written round a science that does not exist on this planet then you will be wrong regardless of the accuracy of your math.
Is Gavin Schmidt close to retirement? I suspect that we will see more of these mea-culpas as the guilty parties lock in their taxpayer funded pensions, and leave the public eye. They know that they are wrong, that they’ve been wrong for years. Can they not be sued by the taxpayers to claw their pensions back?
“The failure of models to match (the) real world (is) far more likely due to erroneous assumptions”
This quote should appear on the WUWT home page.
“Insofar as the propositions of mathematics give an account of reality they are not certain; insofar as they are certain they do not describe reality.”
Albert Einstein
Most people don’t understand climate models. To them they are black boxes. GIGO doesn’t really apply either, in my opinion, nor are even initial values the problem. The nature of climate models should overtake the initial-value problem in the time frames we are concerned about.
When it comes to the math, you are essentially breaking down a continuous world into a discrete re-creation, where differential equations are replaced with discretized equations that approximate the real world, which introduces inaccuracies.
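As a toy illustration of that discretization error (a hypothetical sketch, nothing to do with any actual climate code), here is the forward-Euler method applied to dy/dt = y, whose exact solution at t = 1 is e:

```python
import math

def euler(f, y0, t_end, n_steps):
    """Forward-Euler integration of dy/dt = f(y): the simplest
    replacement of a differential equation by discrete steps."""
    y, dt = y0, t_end / n_steps
    for _ in range(n_steps):
        y += dt * f(y)
    return y

exact = math.e                                 # true y(1) for dy/dt = y, y(0) = 1
coarse = euler(lambda y: y, 1.0, 1.0, 10)      # 10 time steps
fine = euler(lambda y: y, 1.0, 1.0, 1000)      # 1000 time steps
# Shrinking the time step shrinks the discretization error, but the
# error never vanishes, and each refinement costs more compute.
assert abs(fine - exact) < abs(coarse - exact)
```

Real models face the same trade-off in three spatial dimensions plus time, which is why resolution is so expensive.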
But the computer systems that these models have to run on have severe limitations.
Almost every math calculation done increases the errors, so to compensate you have to use more significant digits, which means more memory; and if you exceed the natural word size of the machine (for example, if you use 128-bit or 256-bit numbers on a 64-bit machine), extra CPU cycles or clock ticks are needed, which increases the time to do the enormous number of calculations. This also increases the memory requirements, which can exceed the natural capacity of the computer system, whereupon hard drive storage (virtual memory) has to be used.
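A minimal sketch of that error accumulation (an illustration only, not any model’s actual arithmetic): repeatedly adding the float 0.1, which has no exact binary representation, lets the tiny per-step rounding errors pile up:

```python
# The decimal 0.1 has no exact binary representation, so every
# addition of the float 0.1 carries a tiny rounding error, and the
# errors accumulate as the calculation runs.
total = 0.0
for _ in range(10):
    total += 0.1
assert total != 1.0                 # the sum is only approximately 1
assert abs(total - 1.0) < 1e-9      # each individual error is tiny

# Over a million additions the accumulated error is much larger:
big = 0.0
for _ in range(1_000_000):
    big += 0.1
assert abs(big - 100_000.0) > 1e-8
# Wider floats shrink the per-step error, but cost the extra memory
# and CPU cycles described above.
```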
Then there is the time resolution: increased accuracy means decreasing the time step, which increases the time it takes to get results. Then there is the spatial resolution: the resolution of the volume or area affects both memory requirements and time requirements. Higher accuracy takes more time and memory.
Then there is the number of variables used, whereupon a combinatorial explosion takes place. For example, 8! = 40,320, 9! = 362,880 and 10! = 3,628,800; going from 8 variables to 10 variables increases the required number of calculations by 90 times (10!/8! = 9 × 10 = 90). Note that this is just an example, and sometimes more efficient algorithms might be usable. But imagine a scenario where you can’t do that and you run into 20! ≈ 2.4E18 calculations required for a good approximation to the real world.
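Those factorial figures can be checked directly (a toy illustration of the growth rate, not a claim about how any particular model actually scales):

```python
from math import factorial

# The absolute counts grow explosively with the number of variables:
assert factorial(8) == 40_320
assert factorial(9) == 362_880
assert factorial(10) == 3_628_800

# Adding just 2 variables (8 -> 10) multiplies the work by 9 * 10 = 90:
assert factorial(10) // factorial(8) == 90

# And 20 variables would need on the order of 2.4E18 calculations:
assert factorial(20) == 2_432_902_008_176_640_000
```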
I don’t think today’s computers are capable of calculating a future climate, nor do scientists want to twiddle their thumbs with an extremely expensive supercomputer for several months until one test-run job gets done.
I honestly believe that today’s climate models use outdated, obsolete coding techniques and languages that are very difficult to tweak and improve. The climate models are pretty much the last bastion left for the alarmist climate scientists, and I want to know more about them.
I am drawing on my 35 years of computer science skills, a fairly recent solid foundation in calculus (acquired through daily study over several years), studies on various topics related to physics spread over almost 4 decades, and an intense interest in climate science since starting my daily studies of it almost 4 years ago, and I am directing my efforts toward investigating and researching climate models. I am also involved in several software projects related to climate science that I can’t talk about at this time.
Those were just some quick thoughts, and I may have some things wrong, but it’s early days in my research and I probably have many years to go. I am also working toward being a mathematician at a PhD level, if there is such a thing.
P.S. I simply don’t like being lied to.
garymount says:
June 2, 2013 at 6:02 am
“But the computer systems that these models have to run on have severe limitations.
Almost every math calculation done increases the errors, so to compensate you have to use more significant digits, which means more memory; and if you exceed the natural word size of the machine (for example, if you use 128-bit or 256-bit numbers on a 64-bit machine), extra CPU cycles or clock ticks are needed, which increases the time to do the enormous number of calculations.”
It is far far worse than that.
An increase of precision, say from 32 bit to 256 bit, gives you a constant overhead factor of 8. One could live with that; if an unreliable simulation became reliable simply by increasing the computer power by a factor of 8, this would be splendid.
But the very definition of a chaotic system – the mathematical definition of chaos – is exactly that the deviation between the real system and a finite resolution simulation of the system grows beyond all bounds as time progresses. If the deviation does not grow beyond all bounds then the system is not chaotic.
(Where deviation does not mean the error in one metric like “average global temperature” but the deviation of the state of real system vs. the simulated system. A suitable metric might be a vector difference between the respective state vectors.)
NO constant increase in precision suffices to suppress the deviation under a predetermined bound. Not an increase to 256 bit, not an increase to 1024 bit, not an increase to a million bit.
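That mathematical point can be seen in the simplest textbook chaotic system, the logistic map (a toy illustration, assuming nothing about any real climate code): at r = 4, any initial difference, however tiny, is roughly doubled every iteration until the two orbits are unrelated.

```python
def max_divergence(x0, eps, steps, r=4.0):
    """Largest separation reached between two logistic-map orbits
    (x -> r * x * (1 - x)) started only eps apart."""
    x, y = x0, x0 + eps
    worst = 0.0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        y = r * y * (1.0 - y)
        worst = max(worst, abs(x - y))
    return worst

# A perturbation in the 12th decimal place still grows to order one:
assert max_divergence(0.4, 1e-12, 100) > 0.5
# A thousand times more precision only delays the divergence;
# it does not prevent it:
assert max_divergence(0.4, 1e-15, 200) > 0.5
```

This is the sense in which no constant increase in precision can keep the simulated state close to the real one.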
DirkH says:
June 2, 2013 at 6:36 am
“NO constant increase in precision suffices to suppress the deviation under a predetermined bound. Not an increase to 256 bit, not an increase to 1024 bit, not an increase to a million bit.”
The mechanism that causes this chaotic behaviour is the iterative feedback of the system, which AMPLIFIES low-order state bits and shifts them leftwards, to more significant bits, over time. The state word of the real, analog chaotic system is unlimited in precision (well, maybe limited at the Planck scale, but if you were able to simulate at that level you would have to solve the Schrödinger equation for the entire universe).
The simulation must emulate this amplification of low order state bits but due to its very limited precision it runs out of low order state bits quickly and metaphorically speaking has only zeros left to shift upwards after a few timesteps.
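That shift-register metaphor can be sketched as follows (a hypothetical toy, not any real model’s arithmetic): iterate the logistic map once at full double precision and once with the state rounded each step, mimicking a machine that runs out of low-order state bits; more digits delay the divergence from the full-precision run, but never prevent it.

```python
def divergence_step(x0, digits, threshold=0.1, max_steps=500, r=4.0):
    """First step at which a logistic-map run whose state is rounded
    to `digits` decimal places each iteration deviates from the
    full-precision run by more than `threshold` (None if it never
    does within max_steps)."""
    x = y = x0
    for step in range(1, max_steps + 1):
        x = r * x * (1.0 - x)                   # full double precision
        y = round(r * y * (1.0 - y), digits)    # truncated state word
        if abs(x - y) > threshold:
            return step
    return None

d6, d12 = divergence_step(0.4, 6), divergence_step(0.4, 12)
assert d6 is not None and d12 is not None   # both runs leave the true path
assert d6 < d12                             # extra precision only buys time
```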