This article was sent to me by reader Peter Yodis. I found it interesting and germane to current events, so I’m sharing it here. Just a note for clarification: the very last sentence is in his original article; it is not commentary from me. – Anthony
Reposted from Machine Design.com, by editor Leland E. Teschler, Feb. 17, 2009
Amid all the hand-wringing about financial systems in meltdown mode, the subject of modeling hasn’t gotten a lot of notice. Banks and other financial institutions employed legions of Ph.D. mathematicians and statistics specialists to model the risks those firms were assuming under a variety of scenarios. The point was to avoid taking on obligations that could put the company under.
Judging by the calamity we are now living through, one would have to say those models failed miserably. They did so despite the best efforts of numerous professionals, all highly paid and with a lot of intellectual horsepower, employed specifically to head off such catastrophes.
What went wrong with the modeling? That’s a subject of keen interest to engineers who must model the behavior and risks of their own complicated systems. Insights about problems with the mathematics behind financial systems come from Huybert Groenendaal, whose Ph.D. is in modeling the spread of diseases. Groenendaal is a partner and senior risk analyst with Vose Consulting LLC in Boulder, a firm that works with a wide variety of banks and other companies trying to mitigate risks.
“In risk modeling, you use a lot of statistics because you want to learn from the past,” says Groenendaal. “That’s good if the past is like the future, but in that sense you could be getting a false sense of security.”
That sense of security plays directly into what happened with banks and financial instruments based on mortgages. “It gets back to the use of historical data,” says Groenendaal. “One critical assumption people had to make was that the past could predict the future. I believe in the case of mortgage products, there was too much faith in the idea that past trends would hold.”
Therein lies a lesson. “In our experience, people have excessive confidence in their historical data. That problem isn’t unique to the financial area,” says Groenendaal. “You must be cynical and open to the idea that this time, the world could change. When we work with people on models, we warn them that models are just tools. You have to think about the assumptions you make. Models can help you make better decisions, but you must remain skeptical.”
Did the quantitative analysts who came up with ineffective financial models lose their jobs in the aftermath? Groenendaal just laughs at this idea. “I have a feeling they will do fine. If you are a bank and you fire your whole risk-analysis department, I don’t think that would be viewed positively,” he says.
Interestingly enough, Groenendaal suggests skepticism is also in order for an equally controversial area of modeling: climate change.
“Climate change is similar to financial markets in that you can’t run experiments with it as you might when you are formulating theories in physics. That means your skepticism should go up,” he says.
We might add there is one other similarity he didn’t mention: It is doubtful anyone was ever fired for screwing up a climate model.
Weren’t those models peer reviewed? 😉
About 40 years ago I attended a short course on time series, given by an American, from GE I think it was. He lost me pretty quickly in the maths, but I still remember his response to the question “what about a change in trend?” “Gee,” he said, “a change in trend. Well, that’s a very deep concept.” Ever since, I’ve found it best to Keep It Simple, although I never made as much money as Kroc!
There are good models, by climatologists like Spencer, Lindzen, and Pielke, that show we are nowhere near a catastrophe and that coaltrainsofdeath are hyperbole.
But the ‘consensus’ dismisses these models, just like the Wall Street/Washington consensus dismissed the financial models that showed lending money to people who could not pay it back was a bad idea.
We are going to reap such terrible results from destroying our basic dependable energy sector. But the modelers don’t care. They have their models and their consensus.
“The business managers *chose* to believe the newer (and therefore better right?) models despite their obvious flaws.”
“He rejected Mr. Moore’s allegations, saying that the bank commissioned an independent study and the matter was closed to the satisfaction of the FSA. ‘I remember the incident very well. It was taken very seriously by the board.’”
Hindsight is always 20/20. It’s so easy to look back and see the “error” of someone else’s ways, the one “clarion” voice that was ignored which, if only it had been heard, would have saved the day. It’s just too easy. It’s not real.
By way of experiment, tell me now whether the U.S. stimulus package will pull the economy out of recession or not? Regardless of what you think, would you bet your salary on it? Would you buy or sell huge amounts of stock on your prediction? Can it be modeled? There are voices on both sides of that debate. Inevitably someone will be right. Who will it be?
By the way, we are very good at modeling “what happened”; we just suck at modeling “what will” happen. There are just too many things we don’t “know”. And even when we do know, we can’t model completely accurately.
Take, for instance, someone’s earlier example of kicking a football, something you would think could be easily modeled. You could use the football equivalent of Iron Byron (a machine for consistently hitting a golf ball with the same force and same direction), but the football would land in a different place every time. If, every time before you kicked the ball, you modeled the outcome with a computer, the model wouldn’t predict the landing spot exactly even once. The prediction would always be a little off. In modeling we call that the delta, and we try to make it as small as possible, but it’s always there.
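To make the “delta” concrete, here is a minimal sketch (a toy drag-free projectile with hypothetical numbers, not a real football model): the deterministic prediction uses the nominal launch speed and angle, while each simulated kick perturbs them slightly, so the landing spot is always a little off from the prediction.

```python
import math
import random

def landing_distance(speed, angle_deg, g=9.81):
    """Drag-free projectile range (m) for a given launch speed (m/s) and angle (deg)."""
    angle = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * angle) / g

# Deterministic prediction from nominal launch conditions.
predicted = landing_distance(speed=25.0, angle_deg=40.0)

# Each real "kick" deviates slightly from nominal, even with a kicking machine.
random.seed(1)
deltas = []
for _ in range(10):
    speed = random.gauss(25.0, 0.3)   # small scatter in launch speed
    angle = random.gauss(40.0, 0.5)   # small scatter in launch angle
    deltas.append(landing_distance(speed, angle) - predicted)

print(f"predicted landing: {predicted:.2f} m")
print("deltas (m):", [round(d, 2) for d in deltas])  # never exactly zero
```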
If we understood climate as well as we understood kicking footballs we could model it too. But our models would still be wrong. They just wouldn’t be as wrong. After reading this blog and others I’m convinced no one really understands climate all that well. So the ability to model it is pretty poor.
And if that “whistleblower” really was that good at predicting risk and so confident in his models, well he must be an exceptionally rich man now. Because anyone who could have predicted this financial disaster with any accuracy would have known to short the market and would have seen financial returns beyond their wildest imaginings. Just how rich is that guy anyway?
As far as old models vs. new models, well computers are faster now than they were just a few years ago but they compute the same way. The newer models were undoubtedly built from the older models adding more variables and more possible outcomes. And if the older models were really that good, they’d still be using them.
Finally, why do these “conspiracy” theories always pop up? They are so easily disproved. What financial manager in his right mind would put himself or herself at risk to the extent alleged now? They have, at least most of them, compensation that is hugely tied to the performance of their company. Which of them is so callous or stupid that they would knowingly risk their company and with it their financial future? They have wives, children, mortgages, tuition payments, and all the other things the rest of us have. Many of them lost staggering amounts of money.
While the financial disaster was fundamentally the result of bad decisions on many levels, from government to business to personal, it ultimately reached the level it did because of a set of events that no one thought to model.
Take the example of flipping a coin. Every time you flip a coin the probability is 50/50 heads or tails. But say you flipped the coin and it came up heads 100 times in a row. What would be the probability that the 101st flip was heads? 50%. But that’s not the point. You could model that scenario on a computer and run the simulation 100,000 times, and the probability that the computer would ever return that result is ridiculously small (one chance in 2^100, or about 8 x 10^-31 per run). But it could happen in life. Because life is unpredictable. Welcome to banking crisis 101.
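For what it’s worth, that thought experiment is easy to check with plain coin-flip arithmetic (nothing specific to any bank’s model): the chance of 100 heads in a row is one in 2^100, so even 100,000 simulated histories will essentially never contain one.

```python
import random

FLIPS, RUNS = 100, 100_000

# Exact probability of 100 heads in a row: one chance in 2**100.
p_all_heads = 0.5 ** FLIPS
print(f"P(100 heads in a row) = {p_all_heads:.3e}")               # ~7.9e-31
print(f"Expected such runs in {RUNS} simulations: {RUNS * p_all_heads:.3e}")

# Monte Carlo check: the simulation will (almost surely) never see it.
random.seed(0)
hits = sum(
    all(random.random() < 0.5 for _ in range(FLIPS))
    for _ in range(RUNS)
)
print("Simulated runs that came up all heads:", hits)              # 0
```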
David Wendt:
It doesn’t matter how powerful the computer is if the input is garbage. GIGO
I want to inject a little paranoia into the discussion. Common Sense (10:53:23) said: “I don’t need a complex model to tell me that making sub-prime loans to people who can’t afford them and/or have sketchy credit histories is a recipe for disaster. And that’s what happened.” Bob Shapiro lists government agencies that “routinely report doctored economic reports and encourage rosy forecasts” as one of his similarities in financial and climate modeling.
Yes, but how did it happen? See Climate Audit on 2/18/09. S. McIntyre discusses a report by McCullough and McKitrick. (Please, everyone, read both the discussion and the report.) In addition to the Mann Hockey Stick falsification of data and modelling, a working paper released in 1992 by the Boston Federal Reserve Bank was investigated. This WHOLLY UNSUPPORTABLE report led to RAPID RULES CHANGES re subprime loans throughout the 1990s. In 1995 ACORN and NACA were embedded in the process. It took years for anyone to be able to verify the method of gathering data, use of the data, and the conclusions of the economists, Munnell, Browne, McEneaney, and Tootell. Essentially they put forward a falsified report and “trillions of dollars of bad mortgage debt and related financial derivatives” ensued.
I want to know the backgrounds of these economists and where they are today. It is very difficult for me not to believe that this financial crisis was in some way “prepared” or “planned”.
The “Mann Hockey Stick” arrives in 1998. Too many similarities: the falsification of data, the WHOLLY UNSUPPORTED methods, the impossibility for both scientists and citizens to gain access to public documents, the unaccountability of researchers for their data and conclusions, and the institutions, both public and private, that deny that accountability. It warps a reasonable mind. One can hardly get one’s mind around the enormousness, and the enormity, of the problem.
Link includes a Gavin Schmidt comment on the difference between climate and financial models, FWIW:
http://blog.wired.com/wiredscience/2008/10/climate-models.html
The problem with models (climate and financial) is they can only account for known factors within a specific and measurable range of values. In finance this distinction is drawn between systemic and non-systemic risk.
In the case of the recent financial meltdown, the non-systemic risk (that which we can protect ourselves against and generally plan around) looked like this: given normal economic conditions, we can expect 2% foreclosure rates to persist. Now add a sensitivity test: during recent recessions (the last 2-3), that rate has gone as high as 4% in some markets. But what happens when external factors we can’t mitigate or properly plan for (systemic risk) move us beyond that range? Gas prices going to $4.00 per gallon at the same time as a slowdown in wage growth and worker productivity, healthcare costs rising faster than wages, a simultaneous massive number of resets in mortgage rates resulting in household financial distress, a property market reaching its ceiling, and the failure of the US government to bail out Lehman at the beginning of what would probably have been a minor recession absent all other factors (someone mentioned Black Swans in an earlier post). The result is 6-7% foreclosure rates in some of the highest-value markets.
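To put rough numbers on those scenarios, here is a back-of-the-envelope loss calculation; the portfolio size and loss-given-default figures are purely illustrative, not any actual bank’s book.

```python
# Toy expected-loss arithmetic for a mortgage book (all numbers illustrative).
portfolio = 100e9          # $100B of mortgages (hypothetical)
loss_given_default = 0.40  # assumed 40% loss on each foreclosed loan

scenarios = {
    "planned (normal conditions)": 0.02,      # 2% foreclosure rate
    "stress test (recent recessions)": 0.04,  # 4%
    "systemic shock (2008-style)": 0.07,      # 7%
}

for name, rate in scenarios.items():
    loss = portfolio * rate * loss_given_default
    print(f"{name:32s} foreclosures {rate:4.0%} -> expected loss ${loss / 1e9:.1f}B")
```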
Models are built around real-world scenarios where parameters operate within certain ranges. Violate those ranges and the results become unpredictable, possibly resulting in an asymmetric event, i.e. a meltdown like what we saw in the financial markets. Yes, there were people who made predictions that this would happen, but they were held in the same regard as those who predict a nuclear incident next year. It could happen, but the factors that would lead to such an event have not been seen since WWII, so it is highly unlikely. Why plan for the one-in-a-million event?
We have the same thing with GCMs. They are built around parameters that operate within certain ranges. Now, when we ramp up one of the factors like CO2 outside of known ranges, what happens? A possibly asymmetric prediction, because factors like increasing cloud cover, IR absorption reaching a saturation point within specific bands, cosmic dust clouds, sunspot cycles, etc. are not accounted for. The model becomes “broken”. Climate modeling is a good exercise. It helps discover important climate drivers and, in theory, helps us determine their interrelationships… within a standard set of values. It’s what’s hidden from us (analogous to the systemic risk in financial modelling) that makes the whole model invalid when we move beyond the model’s working ranges, like 500 ppm CO2.
Sorry for the rant, but it is my biggest pet peeve around models and the mis-application thereof.
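To make the working-range point concrete, here is a minimal sketch of a model that looks fine inside its calibration range and falls apart outside it; it is a toy polynomial fit, not a GCM or a credit model.

```python
import numpy as np

# Toy illustration of the working-range problem (not a GCM or a credit model):
# calibrate a flexible model on inputs between 0 and 1, then push it to 3.
rng = np.random.default_rng(0)
x_cal = np.linspace(0.0, 1.0, 20)
y_cal = 1.0 + 0.5 * x_cal + rng.normal(0.0, 0.05, x_cal.size)  # near-linear "truth"

# An over-flexible fit looks fine inside the calibration range...
model = np.poly1d(np.polyfit(x_cal, y_cal, deg=6))
print("inside range,  x=0.5: truth ~1.25, model", round(model(0.5), 2))

# ...but extrapolating well outside that range gives essentially arbitrary output.
for x in (1.5, 2.0, 3.0):
    print(f"outside range, x={x}: truth ~{1 + 0.5 * x:.2f}, model {model(x):.1f}")
```

The fitted curve tracks the calibration data closely, yet says nothing trustworthy beyond it; that is the asymmetry the comment describes.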
In other words, pyromancer76, insiders have learned to game the system. This can only be done when the system lacks transparency, as both the financial system and the climate peer-review system currently do.
There are constant calls for Mann, Hansen, the IPCC, the NOAA and others to publicly archive their raw data and methodologies. But getting any information at all is like pulling teeth.
They have learned to game the system, and taxpayers pay the freight.
“The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present and is gravely to be regarded… Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.”
~ President Dwight Eisenhower
Joel Shore (13:15:33) :
“Using a climate model to predict general climate trends in the presence of forcings is not an initial condition problem. That is to say, the results that one gets for the general climate 100 years from now under a scenario of increasing CO2 emissions is independent of the initial conditions one chooses. (The initial conditions are important if you want to get every little up-and-down jog in the climate right, e.g., El Nino – La Nina oscillations and the like, which is why these are so difficult to forecast.)”
This contention (that climate prediction is not an initial value problem) is a controversial one. Here is a note on this topic by Dr. Roger Pielke, Sr.:
http://www.climatesci.org/publications/pdf/R-210.pdf
The money quote:
“This correspondence proposes that weather prediction is a subset of climate prediction and that both are, therefore, initial value problems in the context of geophysical flow.”
Indeed, if you examine the climate prediction algorithms, they are all based on the same time-marching approach (in fact, nearly the same basic dynamics) as numerical weather prediction codes. If you ignore the “weather noise” the climate codes produce, then you basically have a source-term driven numerical process that relies on lots of numerical dissipation, fixers, and ad hoc filters to keep the solutions from blowing up. One of the ironies of these models is that the closer you get to the initial condition, the less reliable the predictions are (!) – which is why they have very little skill in predicting, say, next year’s climate.
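To illustrate the initial-value sensitivity of a time-marching code, here is a minimal sketch using the classic Lorenz-63 toy system (an illustration only, not any actual GCM): two runs that differ by one part in a billion at the start end up in completely different states.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One fourth-order Runge-Kutta step of the Lorenz-63 system."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# Two runs differing by one part in a billion in the initial x value.
a = np.array([1.0, 1.0, 1.0])
b = np.array([1.0 + 1e-9, 1.0, 1.0])
for step in range(1, 5001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        print(f"t = {step * 0.01:5.1f}  separation = {np.linalg.norm(a - b):.3e}")
# The separation grows by many orders of magnitude: the trajectories have diverged.
```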
For more on the accuracy of long term climate modeling, please read this:
http://climatesci.org/2008/11/14/are-multi-decadal-climate-forecasts-skillful-2/
“Of the amounts provided, $170,000,000 shall address critical gaps in climate modeling and establish climate data records for continuing research into the cause, effects and ways to mitigate climate change.”
I see this as an encouraging sign that Obama wants to sit on the fence until his second term. What’s the point of putting this much into investigating the phenomenon if the outcome is already assumed? A lot of that money might get spent on insulation yet.
It’s a clear admission the models aren’t yet up to the task of deciding policy.
David Halliday wrote:
“Computers can’t do anything humans can’t do. Computers can’t think. Computers can’t create. What computers can do is some of what humans can do only faster.”
In my experience, computers can do many things humans cannot do. As just one example, when I studied artificial intelligence theory, algorithms, and systems, it was eye-opening to discover that a properly programmed computer can do “things” that humans just cannot do. There appears to be a limit to the amount of information a human (even a great human) can assimilate, process, and keep account of. Computers can do this far better. There are also documented examples of neural-network algorithms that *learn* from mistakes and partial successes and deduce rules or answers that have eluded even the most experienced and smartest humans.
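As a minimal sketch of that “learning from mistakes” idea, here is a textbook perceptron on a toy dataset (not any particular production system): it starts with no rule at all and derives one purely from its errors.

```python
# Textbook perceptron: learns a linear rule (here, logical OR) from its mistakes.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]
bias = 0.0
lr = 0.1

for _ in range(20):                      # a few passes over the data
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        error = target - out             # non-zero only when it made a mistake
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        bias += lr * error

print("learned weights:", w, "bias:", bias)
for (x1, x2), target in data:
    pred = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
    print(f"input ({x1},{x2}) -> prediction {pred}, target {target}")
```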
There are also relationship-discovery algorithms, aka data mining, that explore vast reams of data and reveal insights that humans have never before discovered.
In the field of computerized advanced process control, well, let’s just say that many of us are very glad humans are not at the controls, but instead let the computers do the work. Fly-by-wire is just one example of this, wherein advanced aircraft fly in or near the unstable regime, a regime where human reflexes and anticipation simply cannot keep up.
John Galt – re: is Fortran still in use?
Absolutely. Operating companies have millions of lines of code written in Fortran that work, and work quite well, every day. No one in the private sector has the time or budget to rewrite perfectly good code just to bring it up to some newly-written standard. Those new standards change every few years, and rewriting would be a complete waste of effort. There may be some limited instances where this is done, but it must have a justifiable positive influence on the financial bottom line.
Computer modeling has worked so well for financial markets and the Global Warming industry. I can’t wait for it to be used with the Census.
Douglas Cootey
Hey, don’t throw out the baby with the bathwater! Nobody is planning “modeling” with the census. Just a sophisticated sampling to improve results over the count-every-head technique, which never really does just that. Statistical sampling of large populations has a good theoretical base, and the methods of assessing error are quite robust.
Your comment smacks of a political commentary clothed as technical critique. Is there a reason to only do things the way they did in the 18th century?
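For what it’s worth, those error bounds are easy to compute; a minimal sketch with illustrative numbers (a simple random sample, not the Census Bureau’s actual methodology):

```python
import math

# 95% margin of error for an estimated proportion from a simple random sample.
def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative only: estimating a 50% characteristic from samples of various sizes.
for n in (1_000, 10_000, 100_000):
    moe = margin_of_error(0.5, n)
    print(f"sample size {n:>7,}: estimate 50% +/- {moe * 100:.2f} percentage points")
```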
And to think Governments want to solve the so-called “Climate change” problem based around computer models. It defies common sense and logic to plan too far ahead using a model. It is at best an approximation, at worst soothsaying.
Some Australians have had enough and have formed the World’s first sceptics party. Visit them here at http://www.climatesceptics.com.au .
It’s time to stop the nonsense
David Ermer: After the sleazy episode Gavin Schmidt tried to pull on Steve McIntyre (see http://www.climateaudit.org/?p=5180) whatever he subsequently claims is subject to suspicion… and may be derision.
And here: http://www.climateaudit.org/?p=5134
Teschler draws an interesting analogy here, but I don’t believe that it’s correct.
First, climate models are very different from economic models: in construction, in execution, in adherence to data, and in management. Both sets are models, both sets are based on equations whose values are manipulated through simulation, and that’s about where the resemblance ends.
“Economic” models come in two distinct forms: the financial models are really valuation models, and the macro-economic models are really forecasting models.
The valuation models at the root of the credit crisis did not actually fail as models – two things happened to them:
1 – a majority are run on Intel x86 gear, and that gear had a long-term problem with non-random randomization, meaning that all valuations were off by unknown amounts because the underlying assumption of randomness in the simulation was systematically violated in execution.
Nobody yet knows how important that was.
2 – most financial models correctly predicted the problem, but regulatory incentives swayed management decisions to ignore what the geeks said and many, in fact, instructed “their geeks” to just shut the star blank star up about it.
Wait, come to think of it, that does sound like climate modelling, doesn’t it ?
Here’s the link to the report referred to by pyromancer76, above:
http://www.fraserinstitute.org/commerce.web/product_files/CaseforDueDiligence_Cda.pdf
Joel Shore (13:15:33) :
Using a climate model to predict general climate trends in the presence of forcings is not an initial condition problem. That is to say, the results that one gets for the general climate 100 years from now under a scenario of increasing CO2 emissions is independent of the initial conditions one chooses.
A good point. The single factor that has so far made all climate models fail (yes, the null hypothesis is rejected on all of them) is the most important one: climate sensitivity to a doubling of CO2. If you re-run most models with negative feedback, guess what – they aren’t that bad at predicting the actual observations. This single factor is really the only major disagreement between the so-called “consensus” and the “skeptics”. Once a reasonable sensitivity is applied (since this is the only conclusion one can reach from the observational data), there is nothing catastrophic to worry about, and the rest of the argument (control, adaptation, economics, all of it) becomes completely irrelevant. It’s really quite simple and boils down to one single important factor. For this factor, the evidence is quite clear, and there is really nothing either side needs to be skeptical of.
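To make the “single factor” explicit, here is a small worked sketch using the standard simplified CO2 forcing expression, ΔF = 5.35·ln(C/C0) W/m², divided by an assumed sensitivity parameter λ; the λ values below are illustrative stand-ins for net-negative, near-zero, and net-positive feedback, not outputs of any model.

```python
import math

def warming_per_doubling(lambda_wm2_per_k):
    """Equilibrium warming (K) for doubled CO2, given a sensitivity parameter."""
    forcing = 5.35 * math.log(2.0)   # ~3.7 W/m^2 for a CO2 doubling
    return forcing / lambda_wm2_per_k

# Illustrative sensitivity parameters (W/m^2 per K), not model output:
cases = {
    "net negative feedback (lambda = 7.4)": 7.4,
    "no net feedback, Planck only (lambda = 3.2)": 3.2,
    "net positive feedback (lambda = 1.2)": 1.2,
}
for label, lam in cases.items():
    print(f"{label:45s} -> ~{warming_per_doubling(lam):.1f} K per doubling")
```

The whole argument really does hinge on that one divisor: roughly half a degree, one degree, or three degrees per doubling, depending on the feedback you assume.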
It is far better to lose one relatively small industry (AGW) than to destroy the larger civilized world chasing demons over this simple factor.
Joel Shore (13:15:33) :
Using a climate model to predict general climate trends in the presence of forcings is not an initial condition problem. That is to say, the results that one gets for the general climate 100 years from now under a scenario of increasing CO2 emissions is independent of the initial conditions one chooses
You may be correct about the necessity, or lack thereof, of precise initial conditions for general climate trend models. My own understanding of how the models operate is admittedly amateurish in the extreme, but if the models are as you state, it still seems to me that the modellers would require a solid understanding of what the levels of the various forcings are, how they interact over changes, what feedbacks result, how susceptible the forcings are to being overwhelmed by other factors in the system, etc. Nothing I’ve seen leads me to believe the modellers possess any of these understandings, an opinion which I believe is supported by the models’ dismal performance in predicting results even in the short term. Which I think leaves intact my essential point: making policy decisions that will have real, immediate, and dire consequences based on dubious models’ predictions of catastrophe will lead the planet to a certain future of increased suffering, misery, and death worse than any worst-case scenario the warmists have envisioned.
AJ Abrams (13:06:09) :
I did say supposedly, and I am aware that the looming Doomsday is a product of the kind of thinking, or lack of same, that I’ve been arguing against, but I was just engaging in a little ironic poetic license.
John Galt (12:37:19) :
We’ve been through this on other threads. If you were involved in the supercomputer field you’d know the saying: “I don’t know what the future language for programming supercomputers will look like, but I know it will be called Fortran.”
The next Fortran spec will include object oriented programming constructs. Previous specs added matrix operations, support for vector and parallel hardware, data structures, etc.
Look up codes like NASTRAN; that’s a NASA-to-commercial product that may have racked up more compute cycles than any other program in existence. Your car was probably modeled with it. Google offers 913,000 hits.
Just because GISS doesn’t take advantage of the new features or know how to write well-structured code, that’s no reason to disparage Fortran, just GISS!
Off topic but an action by the EPA to regulate man-made CO2 makes any further arguments about AGW, global warming/climate change irrelevant.
Climate models triumph over Nature.
————————————————————-
EPA Expected to Regulate Carbon Dioxide for First Time
http://www.foxnews.com/politics/first100days/2009/02/18/epa-expected-regulate-carbon-dioxide-time/
The Environmental Protection Agency is expected to act for the first time to regulate carbon dioxide and other greenhouse gases, The New York Times reported on Wednesday, citing senior Obama administration officials.
EPA Administrator Lisa Jackson has asked her staff to review the latest scientific evidence and prepare documentation for a finding that greenhouse gas pollution endangers public health and welfare, the newspaper said.
There is wide expectation that Jackson will act by April 2, the second anniversary of a Supreme Court decision that found that EPA has the authority to regulate greenhouse pollution under the U.S. Clean Air Act.
Climate models are different. Unlike the banks that assumed that past trends would hold, climate models assume that our output of CO2 will cause the planet to diverge from past trends. — John M Reynolds
With precious few exceptions, the financial risk models of which I am aware implicitly or explicitly employ Gaussian Normal Distributions.
As someone alluded to above, this was the point driven home by Nassim Nicholas Taleb in both “The Black Swan” and “Fooled by Randomness.” The models used by Long Term Capital Management, created by Nobel Prize winners no less, calculated that the kinds of losses that eventually drove LTCM under were a ten-sigma event which, in their calculation, could only happen every 5 or so billion years.
As Taleb, with his characteristic scorn puts it, they assumed that the conditions that could hurt them had only happened a few times in the history of the Universe.
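To see how extreme a ten-sigma assumption is, here is a quick back-of-the-envelope check (standard distribution-tail arithmetic, not LTCM’s actual model): under a Gaussian, a ten-sigma daily loss is essentially impossible, while a fat-tailed stand-in makes it merely rare.

```python
import math
from scipy import stats

# Back-of-the-envelope tail probabilities for a 10-standard-deviation daily loss.
p_gauss = stats.norm.sf(10)                          # Gaussian tail

# A Student-t with 3 degrees of freedom, rescaled to unit variance, as a
# crude stand-in for a fat-tailed return distribution (an assumption, not data).
df = 3
p_fat = stats.t.sf(10 * math.sqrt(df / (df - 2)), df)

trading_days = 252
for label, p in (("Gaussian", p_gauss), ("fat-tailed (t, 3 dof)", p_fat)):
    years = 1.0 / (p * trading_days)
    print(f"{label:22s} P(10-sigma day) = {p:.1e}  ->  ~1 such day per {years:.1e} years")
```

The contrast between the two waiting times is the whole Black Swan argument in one print statement: the choice of distribution, not the sigma count, decides whether a blow-up looks impossible or merely unusual.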
Old Chemist (10:42:41) :
Their prediction for cycle 24: peak of 120 – 140 (Zurich sunspot number) — cycle starting 2007 – 2008.
Actually 165-185, even worse.