When You Can’t Believe the Model

This article was sent to me by reader Peter Yodis. I found it interesting and germane to current events, so I’m sharing it here. Just a note for clarification: the very last sentence is from his original article; it is not commentary from me. – Anthony

Reposted from MachineDesign.com, by editor Leland E. Teschler, Feb. 17, 2009

Amid all the hand-wringing about financial systems in meltdown mode, the subject of modeling hasn’t gotten a lot of notice. Banks and other financial institutions employed legions of Ph.D. mathematicians and statistics specialists to model the risks those firms were assuming under a variety of scenarios. The point was to avoid taking on obligations that could put the company under.

Judging by the calamity we are now living through, one would have to say those models failed miserably. They did so despite the best efforts of numerous professionals, all highly paid and with a lot of intellectual horsepower, employed specifically to head off such catastrophes.

What went wrong with the modeling? That’s a subject of keen interest to engineers who must model the behavior and risks of their own complicated systems. Insights about problems with the mathematics behind financial systems come from Huybert Groenendaal, whose Ph.D. is in modeling the spread of diseases. Groenendaal is a partner and senior risk analyst with Vose Consulting LLC in Boulder, a firm that works with a wide variety of banks and other companies trying to mitigate risks.

“In risk modeling, you use a lot of statistics because you want to learn from the past,” says Groenendaal. “That’s good if the past is like the future, but in that sense you could be getting a false sense of security.”

That sense of security plays directly into what happened with banks and financial instruments based on mortgages. “It gets back to the use of historical data,” says Groenendaal. “One critical assumption people had to make was that the past could predict the future. I believe in the case of mortgage products, there was too much faith in the idea that past trends would hold.”

Therein lies a lesson. “In our experience, people have excessive confidence in their historical data. That problem isn’t unique to the financial area,” says Groenendaal. “You must be cynical and open to the idea that this time, the world could change. When we work with people on models, we warn them that models are just tools. You have to think about the assumptions you make. Models can help you make better decisions, but you must remain skeptical.”

Did the quantitative analysts who came up with ineffective financial models lose their jobs in the aftermath? Groenendaal just laughs at this idea. “I have a feeling they will do fine. If you are a bank and you fire your whole risk-analysis department, I don’t think that would be viewed positively,” he says.

Interestingly enough, Groenendaal suggests skepticism is also in order for an equally controversial area of modeling: climate change.

“Climate change is similar to financial markets in that you can’t run experiments with it as you might when you are formulating theories in physics. That means your skepticism should go up,” he says.

We might add there is one other similarity he didn’t mention: It is doubtful anyone was ever fired for screwing up a climate model.

Robert
February 19, 2009 9:54 am

If you read the story of the destruction of AIG, you will see a high-stakes model error that destroyed one of the largest companies in the world. There was a great deal of skepticism at AIG about the models that said the investments that were proposed were extremely low risk. The financial modellers carried the day and AIG invested heavily into these “low risk” assets. The company went from a $70/share value to $0.70/share because the models were wrong.

Pierre Gosselin
February 19, 2009 10:12 am

Climate models extending a couple of years may be of some value – barring any natural surprises. But climate models predicting decades or even a century into the future are utter lunacy. Who can predict what volcanoes, the sun or space will deliver?
It’s the greatest swindle ever pushed onto humanity. Just goes to show there are enough gullible blokes out there waiting to be taken in hook, line and sinker.

Claude Harvey
February 19, 2009 10:13 am

Excellent piece! It seems to me there is one marked difference between models that quantify risk and climate models. Risk models can run for years without actually being tested against reality. Predictive climate models are tested against reality on a continuing basis and have regularly been found wanting. It seems to me the entire history of climate modeling has boiled down to a series of “backfilling” exercises every time either the model’s predictions failed to materialize or additional “past history” was unearthed that could not be duplicated within the envelope of the modeler’s assumptions.
The entire “Manmade Global Warming” position is based on a modeled hypothesis that has regularly failed many tests to which it has been subjected.

Pierre Gosselin
February 19, 2009 10:15 am

Or perhaps finance is far more complex than climate. 😉

William
February 19, 2009 10:24 am

Don’t necessarily blame the models here. I think the bankers just ignored the ones that gave them answers they didn’t like. For example, in the UK, the senior risk manager was fired for saying to the boss that the bank strategy was headed for disaster. See here:
http://business.timesonline.co.uk/tol/business/industry_sectors/banking_and_finance/article5701380.ece

Les Johnson
February 19, 2009 10:26 am

Didn’t Gore say that the climate models could be trusted, because the economic risk models were similar?
(if anyone has a reference to this, I would appreciate it. I can’t find a reference, so I will have to put this in my Urban Myths section.)

P. Hager
February 19, 2009 10:26 am

I once saw a warning on a web page about control systems modeling. It went as follows:
Models have limits, stupidity has no limit.
I would love to provide attribution on this, but it was years ago and I don’t know where it came from.

stan
February 19, 2009 10:27 am

The financial risk models had another serious problem. In addition to the assumption regarding the past data (and note, they only used the last few years!), a more serious assumption comes from using volatility as a proxy for risk. Volatility is useful in that it can be measured. All kinds of calculations of volatility underlie the models. But in the end, volatility is a reflection of normal market conditions. Serious risk is, almost by definition, not part of the normal market.
The models should never have been used the way they were. As used, they gave a false sense of security.
Hubris.
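stan’s point about volatility is easy to demonstrate numerically. The sketch below (my own toy example, with invented return series, not anything from the comment) builds two 100-day return histories with essentially the same measured volatility, one of which hides a single catastrophic day; a volatility-only risk model would score them as roughly equally risky.

```python
import statistics

# Two hypothetical daily-return series (illustrative numbers only).
calm   = [0.01, -0.01] * 50      # 100 days of small alternating moves
crashy = [0.0] * 99 + [-0.10]    # 99 flat days, then a single 10% crash

# Measured volatility (sample standard deviation) is nearly identical...
print(statistics.stdev(calm))    # ~0.0100
print(statistics.stdev(crashy))  # ~0.0100

# ...but the worst single-day loss differs by a factor of ten.
print(min(calm))                 # -0.01
print(min(crashy))               # -0.10
```

A model that equates risk with volatility literally cannot see the difference between these two histories.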

Ron
February 19, 2009 10:28 am

Wouldn’t the idea of sensitive dependence on initial conditions in dynamic complex systems rule out, in principle, the possibility of accurately modeling the climate? Without perfect knowledge of initial conditions to feed into the models, how can climate modelers expect the models to be accurate for any extended time period? Could it be that meteorologists are more familiar with the work of Edward Lorenz (the father of chaos theory) and the limitations of computer models, and that’s why they are more skeptical as a group than other scientists? I haven’t heard this addressed anywhere, and I wonder what you guys think about it.
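Ron’s question about sensitive dependence can be illustrated directly with Lorenz’s own equations. The sketch below (my own, using a crude fixed-step integration, so the exact numbers are only indicative) runs two simulations whose starting points differ by one part in a hundred million and measures how far apart they drift:

```python
# Lorenz 1963 system with the classic parameters (sigma=10, rho=28, beta=8/3).
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance one simple Euler step; crude, but enough to show divergence."""
    x, y, z = state
    return (x + sigma * (y - x) * dt,
            y + (x * (rho - z) - y) * dt,
            z + (x * y - beta * z) * dt)

def separation(steps, eps=1e-8):
    """Distance between two runs whose starting x-coordinates differ by eps."""
    a = (1.0, 1.0, 1.0)
    b = (1.0 + eps, 1.0, 1.0)
    for _ in range(steps):
        a, b = lorenz_step(a), lorenz_step(b)
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

# The initial error of 1e-8 grows by orders of magnitude, eventually
# saturating at roughly the size of the attractor itself.
print(separation(1000))
print(separation(3000))
```

This divergence is why weather forecasts lose skill after days; whether long-run averaged climate statistics are equally sensitive to initial conditions is exactly the open question being raised.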

tallbloke
February 19, 2009 10:31 am

I sometimes wonder how nimble with fortran Gavin really is.

February 19, 2009 10:34 am

Pierre Gosselin (10:15:14) :
Or perhaps finance is far more complex than climate. 😉
For finance, Kondratieff’s cycles. For climate, Jose’s sun cycles around the barycenter?

Leon Palmer
February 19, 2009 10:41 am

Here’s a good article on why the financial models failed, by placing all the risk on a very very low probability event (called a martingale) … Which won’t ever happen, right?
http://www.slate.com/id/2201428

Rhys Jaggar
February 19, 2009 10:41 am

Has anyone ever had a go at developing a model which inserts certain disrupting events at a given frequency (e.g. volcanic eruption etc)?
Is the result that you have no ability to predict very far ahead?
Or would it say that although you can’t predict exact troughs and peaks, the long-term system performance is quite predictable?
Would such modelling give insights into the sorts of stresses needed to change the steady state e.g. instigate an ice age or a period of prolonged warmth?
Questions from a scientifically trained non-climatologist…….

Old Chemist
February 19, 2009 10:42 am

Here is another example of modeling based on prior history which I believe illustrates the points made above — Dikpati et al. in Geophysical Research Letters 2006 published their solar cycle predictions based on their model, which showed a correlation coefficient of 0.958 for simulated and observed cycle peaks for cycles 12 to 23 and 0.987 for cycles 16 through 23. Their prediction for cycle 24: a peak of 120 – 140 (Zurich sunspot number), with the cycle starting 2007 – 2008.
PDF paper is available at 192.211.16.13/z/zita/articles/Dik06GRLMar.pdf
Time will tell how close their prediction came, but it doesn’t appear to me to be very accurate at present.

gary gulrud
February 19, 2009 10:48 am

“In our experience, people have excessive confidence in their historical data.”
This might be a great motto for WUWT.

February 19, 2009 10:48 am

Computer modeling has worked so well for financial markets and the Global Warming industry. I can’t wait for it to be used with the Census.
Douglas Cootey
☆ @SplinteredMind on Twitter
http://TheSplinteredMind.blogspot.com

Pierre Gosselin
February 19, 2009 10:50 am

Chicago Tea Party?!
http://www.cnbc.com/id/15840232?video=1039849853
h/t Drudge

Common Sense
February 19, 2009 10:53 am

I don’t need a complex model to tell me that making sub-prime loans to people who can’t afford them and/or have sketchy credit histories is a recipe for disaster.
And that’s what happened.

George A. Reilly
February 19, 2009 11:01 am

The current financial crisis is entirely anthropogenic, and involves factors completely within our power to measure and control. Given that the risk models involved did not perform correctly, and in fact led to disastrous results, why would anyone assume that the current climate models, which are inherently much more complex and extremely difficult to test, could yield a description of reality even close to being accurate?

Dave
February 19, 2009 11:12 am

The thing is, finance is a purely human thing; weather and climate are mostly natural, without much human interference.
What can happen with financial modelling is that when someone finds some sort of correlation between things and acts upon it, that act changes the rules of finance and money so that the rule ceases to work. I think that Goodhart came up with what is now called Goodhart’s law explaining how difficult it was to control inflation by the money supply.
And these chaps did the same with all of these mortgage rates. They worked out that a certain percentage of them would be paid back based on historical figures. They overlooked the fact that if billions and billions of dollars were lent on sub-prime loans, that would change the existing rules, leading to the bust.
I am sure that even a butterfly can flap its wings and change the weather. But I don’t think that man is yet capable of bringing about an ice age or melting our ice caps.

Ron de Haan
February 19, 2009 11:23 am

The question “Who killed Wall street” is answered here:
“Perilous Models”
Tuesday, 17 Feb 09, business
“Who killed Wall Street? Harvard MBA’s. This analysis makes more sense to me than any other explanation of the crisis. Consider also the way in which financial models were confused with reality. We see the same problem with “climate science.” Our politicians are in the same position as bank CEO’s, looking at flawed projections, and planning policy from wrong premises. With the economy already slumping, we need to expedite energy production, not tax and distort it”.
http://www.seablogger.com/?p=12875
and http://www.bloomberg.com/apps/news?pid=20601039&refer=columnist_hassett&sid=a_ac69DqFutQ

Harold Vance
February 19, 2009 11:26 am

The current administration is ramping up NOAA’s climate modeling program in a big way (via the stimulus):
NOAA – NOAA will receive $230 million for operations, research and facilities, $600 million for procurement, acquisition and construction. The conference report states that $600 million is for “construction and repair of NOAA facilities, ships and equipment, to improve weather forecasting and to support satellite development. Of the amounts provided, $170,000,000 shall address critical gaps in climate modeling and establish climate data records for continuing research into the cause, effects and ways to mitigate climate change.”
Source: American Geological Institute
URL: http://www.agiweb.org/gap/legis111/update_stim0209.html
I think that the original stimulus called for a minimum of $140M to be spent on climate modeling.
I guarantee you that the administration is going to use the results from the new models to drive policy changes.

Frank K.
February 19, 2009 11:27 am

tallbloke (10:31:54)
“I sometimes wonder how nimble with fortran Gavin really is.”
Here you go…
http://www.giss.nasa.gov/tools/modelE/modelEsrc/
I encourage everyone with some knowledge of numerical modeling and basic Fortran to examine the Model E source code, and decide for yourself whether you could place any trust in the output…

Bob Shapiro
February 19, 2009 11:34 am

Modeling finance and climate have some similarities and some dissimilarities. Here are a few, with my take on whether there MAY be similarities.
1. Basic Assumptions:
A. In business, most modelers assume Keynesian economics is correct: Austrians (myself included) would say GIGO. (Similar to AGW modeling)
B. You may assume government policies will be constant (e.g., interest rates), but political winds can change. (Dissimilar – while Mother Nature can change, it’s not a political decision)
2. Feedback: If “everybody” reacts the same way to business models, the effects can be large enough to affect the conditions of the markets. (Dissimilar)
3. Bias: There are so many financial indicators that it is possible to set up models which correspond with preconceived ideas. Economists do not start with blank-slate minds, so it is easy for bias to be there without their even realizing it. (If you laid all the economists in the world end to end, they wouldn’t reach a conclusion.) (Similar)
4. Government Influence: Agencies routinely report doctored economic reports (see shadowstats.com) and encourage rosy forecasts of the economic future. (Similar)
5. Corruption: Personal agendas for personal gains led to ratings shopping by investment banks with S&P and other ratings businesses. (Similar)
6. Sloppiness: Data may be accepted for model use which should be verified (CPI vs. Surface Station data). (Similar)
7. Randomness: Trends may be “identified” which really are random fluctuations. Tossing a coin 10 times may show HHHHHHHHHH or HHTHTTTHTH; while the latter seems more reasonable, both outcomes are equally likely, and both are random events. (Similar)
I appreciate the Due Diligence that Anthony and the readers of this site provide.
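Item 7 in the list above is easy to verify: any one specific sequence of 10 fair tosses has probability (1/2)^10 = 1/1024, regardless of how “patterned” it looks. A quick check (my own sketch, not the commenter’s):

```python
from fractions import Fraction

def sequence_probability(seq):
    """Probability of observing one exact sequence of fair coin tosses."""
    return Fraction(1, 2) ** len(seq)

# The "streaky" and the "random-looking" sequences are equally likely.
print(sequence_probability("HHHHHHHHHH"))  # 1/1024
print(sequence_probability("HHTHTTTHTH"))  # 1/1024
```

What differs is the probability of each *category* of outcome: there is only one all-heads sequence but many mixed ones, which is why the former looks special even though no individual sequence is more likely than another.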

wonderfulforhisage
February 19, 2009 11:35 am

A huge difference between financial modelling (fm) and climate modelling (cm) is that the system that fm models has a potential for learning built into it whereas cm doesn’t.
Gregory Bateson (I think in his book Steps to an Ecology of Mind) uses the example of kicking a dog and a football. One can model the outcome of kicking a football pretty accurately using Newtonian physics because there is no potential learning capacity in a football. However modelling the dog’s behaviour is another matter because the system that is the dog does have an inherent learning capacity. So one might kick a dog say twenty times and model the results – say it runs away and hides – but on the twenty first kick it might bite your bum. Because a football has no capacity for learning you can forecast the result of kicking it very accurately.
A financial system has huge potential learning built into it because it contains human beings. Confidence is a function of learning. Say no more.
Physically the world has pretty well as much wealth today as it had a year ago so why has the world’s GDP/rate of growth dropped off? Answer – because we’re human and we know things today that we didn’t last year.
In my opinion it wasn’t greed that caused the credit crunch but rather a belief that fm has the same characteristics as cm (or other purely Newtonian extrapolated models).

RH
February 19, 2009 11:36 am

It makes one wonder who the individuals were who developed the financial modeling software. Were they connected to Maurice Strong, Al Gore, the Club of Rome, anyone in the IPCC, or any extreme political organization?
I also have no doubt that the press has no interest in looking into it further.

agesilaus
February 19, 2009 11:37 am

I’m not going to defend modeling, but at least in the mortgage market the government stuck its oar in, with predictably disastrous results. Congress passed laws with the intent of forcing banks to make loans to low-income buyers that they never would have made under normal circumstances. The risks were too high.
But the government creatures Freddie and Fannie ordered the banks to proceed with the promise that they would guarantee those loans. And the entire failure can be traced to that socialistic plan.
Of course government is likewise interfering with science in the climate argument.

David Holliday
February 19, 2009 11:42 am

“Don’t necessarily blame the models here. I think the bankers just ignored the ones that gave them answers they didn’t like.”
Blame the models! Models are written by people. They reflect what people know, think and believe. At best, they are no better than the understanding of the people who write them. They are often much worse.
And then there is the data that goes into the models. It is often inaccurate, incomplete, biased, or otherwise flawed. There is a basic rule in computer science that students are taught the first day they begin classes. Garbage In, Garbage Out. Bad data in, bad data out.
Computers can’t do anything humans can’t do. Computers can’t think. Computers can’t create. What computers can do is some of what humans can do only faster.
They can compute. They can and, or, xor, add, subtract, multiply, shift, and otherwise manipulate bits of data that we humans can apply meaning to. Computers have no understanding of what they do. They are simply machines.
When humans can predict the future, computers will be able to be programmed to predict the future too. Until then, all models that project forward are guesses at best. They are subject to a plethora of possible failings that ultimately leave them utterly unreliable.

Yet Another Pundit
February 19, 2009 11:54 am

Google “black swans” (Nassim Taleb)

February 19, 2009 11:57 am

Pierre Gosselin (10:50:59) :
I have always been a Rick Santelli fan from across the pond – and trust me – he is spot on, on this one.

Reed Coray
February 19, 2009 12:03 pm

George A. Reilly (11:01:20) wrote: “The current financial crisis is entirely anthropogenic, and involves factors completely within our power to measure and control. Given that the risk models involved did not perform correctly, and in fact led to disastrous results, why would anyone assume that the current climate models, which are inherently much more complex and extremely difficult to test, could yield a description of reality even close to being accurate?”
Answer: Because he/she WANTS to believe–especially anything that supports his/her philosophy and view of the world.

voodoo
February 19, 2009 12:06 pm

I think William makes the key comment here:
“For example, in the UK, the senior risk manager was fired for saying to the boss that the bank strategy was headed for disaster.”
If you are a grad student and you rock the Climate Change gravy train, you are toast. If you are a grade school teacher it is your duty to traumatize the children with drowning polar bears. If you are an auto executive begging for money you must shout ‘I BELIEVE!’ If you are an average Joe with a brain, you must keep it to yourself.
Not being an advocate is equivalent to having a swastika on your forehead and spouting the N-word.

Ed Scott
February 19, 2009 12:12 pm

Economic and financial models are not reality. Human nature, especially of those in government, is reality.

NoAstronomer
February 19, 2009 12:23 pm

Holliday
Actually you really can’t blame the models. Why? Because there were models that showed that the disaster was looming. Thing is those were traditional (meaning old) models.
The business managers *chose* to believe the newer (and therefore better right?) models despite their obvious flaws.

February 19, 2009 12:25 pm

William (10:24:18)
I wish I could name the program, but I did see the guy in charge of the AIG model in an interview on TV. If I remember correctly, he said that they delivered a “first pass” model to AIG, and were starting to work on the improved product when they were defunded. AIG then went out and sold the first pass model to the market.

February 19, 2009 12:25 pm

William (10:24:18) :said
“Don’t necessarily blame the models here. I think the bankers just ignored the ones that gave them answers they didn’t like. For example, in the UK, the senior risk manager was fired for saying to the boss that the bank strategy was headed for disaster.”
William, your example highlights how “group think” evolves and reinforces acceptance of a faulty model-and thus inhibits healthy skepticism.
Likewise in academia! How many PhDs get hired into their department if they say they don’t believe in “string theory” or AGW? An academic department hires people who will work together with the other faculty, and the other faculty often have a large say in who gets hired.
This group think can be so powerful that it takes a major failure of the model before the established institution accepts skepticism of primary assumptions.

John Galt
February 19, 2009 12:26 pm

For climate models, the actual historical data to work with is very thin. Accurate records go back barely 100 years and those are only for cities primarily in North America and Western Europe. We have very spotty data for the rest of the globe and that is true for much of the 20th century.
Climate models have to make assumptions for most of their ‘historical’ data. This makes estimates of warming even trickier to substantiate and is yet another reason why climate models are unable to accurately predict future climate at any useful level.

February 19, 2009 12:27 pm

By way of a little background – I have been predicting a massive downturn in the economy here in the UK since 2003, with the comment to my friends “that one day a major bank, a household name major bank, somewhere in the world will declare it has run out of capital.”
I did not put any timescale on it but I was sure that it would resemble the South Sea Bubble. (It is worth googling if you do not know about it because the relevance to today is real).
In brief – models have been created for valuing derivative financial instruments based on a set of assumptions that typically work – very much like climate models. When things started to go wrong, adjustments were made – much as we see with the current scenario in climate models today.
When things became really bad – banks valued these esoteric securities not on a ‘mark to market’ basis but on a ‘mark to model’ basis. When they became implausible they managed to value them on a ‘mark to aspiration’ basis (pun intended).
The reality is we still do not know (and neither do any governments funding the banks) just how bad this position really is – because no-one does.
The analogy is that surely we should be challenging the climate modellers to justify their models against real world values. Had the Boards of the Banks been diligent in their questioning maybe the severity of this downturn could have been limited.
We should not let a similar lack of diligence on climate models and AGW theory drive us into making immensely costly and unwise policies on carbon taxation.

Sean
February 19, 2009 12:28 pm

Interesting article on financial modeling. Anyone looking at models has got to look at the underlying assumptions. In the financial world however, you have a human created system. When it looks at history, particularly with the rates of subprime defaults prior to this century, subprime lending was such a small part of the mix that it did not really influence prices. Between 2003 and 2006, the extensive use of subprime lending made it possible for almost anyone to qualify for a mortgage and average home prices were driven well beyond their historical range of ~3x income and could not be sustained. The outlet for subprime borrowers who got too stretched financially, sell at a profit and move on, shut down when prices fell so the house of cards collapsed. The model made sense when subprime was a bit player but not when it was a major player. In essence, the assurances from history that it was safe to expand it was its undoing.
Climate is a natural system but there are human influences (hence all the hullabaloo about CO2). Renewable systems, however, draw power from the wind, sun, tides, currents or whatever from very low energy density sources, so they are widely dispersed. In addition they tend to deliver power at less than 1/5 their rated capacity. To obtain a really significant portion of our electrical power from renewable systems, an extremely large footprint will be required, and the energy drawn from the environment will change that environment in some small way. When you start adding all these small things up, however, you might suddenly find air and water circulation patterns changing. We’ve already seen increases in the dead zone in the Gulf of Mexico since biofuels boomed. Then there is loss of rain forest in Indonesia to grow palm oils, again for fuels. What I am saying in a very long-winded way is: has anyone looked at the changes to the climate system when renewable systems are extensively deployed? Would they have more or less of an impact than fossil fuel use? I suspect that if you asked this question of someone like Roger Pielke Sr. he’d have some very interesting insights.

Mike Bryant
February 19, 2009 12:32 pm

I remember when personal computers were in their infancy. A friend of mine wrote a program that would take a list of names and alphabetize them. He was very excited about the result of his considerable effort.
It seems to me that the Global Climate Models are pretty much the same thing. They have been written to produce the desired result, however the tremendous, and always increasing, complexity of these programs somehow clouds our minds to this simple and undeniable fact.

Graeme Rodaughan
February 19, 2009 12:33 pm

Pierre Gosselin (10:50:59) :
Excellent Link that you provided (below) – Loved it.
http://www.cnbc.com/id/15840232?video=1039849853
G

Dave Wendt
February 19, 2009 12:35 pm

I read the other day that IBM has contracted to build a new supercomputer for the government that, when completed, will supposedly have more computational power than the top 500 supercomputers now operating combined. One might think that such a machine might approach the level of computational power needed to model something even as complex as the climate system, but the problem of establishing initial conditions still applies. As far as I can tell there is no data set out there right now that would qualify as an undisputed gold standard for providing precise initial conditions for a single one of the myriad of parameters which would need to be established to have any hope of creating a reliably predictive climate model. The recent dustups about NSIDC and Steig et al. and Antarctic temps are just the most recent examples of how far we are, even with our supposedly highly advanced state of technology, from being able to derive an accurate representation of what’s happening in the climate right now, let alone hundreds, or thousands, or millions of years in the past. Despite these seemingly insurmountable logical barriers, we are told we must immediately embrace programs and policies that have immense and highly detrimental economic and social costs based on the catastrophic predictions of climate models that have never demonstrated even the flimsiest gift of predictive ability and which logic dictates they never will.
About a millennium ago the Mayans supposedly predicted the end of the world as we know it for the end of 2012. As I watch the continued daily bludgeoning of liberty and reason by the forces of authoritarianism and emotional ignorance I find myself overcome by a wistful hope that maybe they were right.

John Galt
February 19, 2009 12:37 pm

tallbloke (10:31:54) :
I sometimes wonder how nimble with fortran Gavin really is.

I frequently wonder how anybody can be using Fortran for anything in this day and age. It’s obsolete, but once was the primary computer language used in science and engineering.
When was Fortran last part of the curriculum for data processing or computer science? Has it been 20 years or more? Do any schools still teach it? Do engineering firms still have a large Fortran code base in use, or is it only found in government these days?
I’m not saying the models would produce better output if they were written in C++ or Java instead (GIGO: Garbage In, Garbage Out always applies), but I would like to see the models written in a contemporary language that can be run on most current operating systems.
Some good Object Oriented Programming techniques (available with a modern programming language) could also help sort out the spaghetti code and reduce possible coding errors.

Paddy
February 19, 2009 12:38 pm

A while ago I read an extensive explanation about how and why AIG’s risk assessment model failed. I believe it was in the Wall Street Journal. The model designers had difficulty simulating and projecting some of the “known unknowns” and missed altogether on the “unknown unknowns.”
Climate science is loaded with both types of unknowns. You do not have to be a scientist to understand that we do not have the skill or knowledge to design multi-decadel GCMs that simulate, much less project, future global climate.
It is refreshing to follow this blog and realize that there are many honest scientists without political agendas. But, your voices are drowned out by religious zealots, snake oil salesmen and crooks and their enablers in the media.

Ron de Haan
February 19, 2009 12:38 pm

In Australia, the model based Global Warming Scare has resulted in the inevitable.
A SKEPTIC POLITICAL PARTY to confront the political establishment from within.
http://heliogenic.blogspot.com/2009/02/new-political-party-in-australia.html
With 90% of the population denying a human link to Global Warming, this party has good prospects of becoming a success.
This is a great idea for the USA too. Anthony for President?

Carsten Arnholm, Norway
February 19, 2009 12:42 pm


Frank K. (11:27:09) :
tallbloke (10:31:54)
“I sometimes wonder how nimble with fortran Gavin really is.”
Here you go…
http://www.giss.nasa.gov/tools/modelE/modelEsrc/
I encourage everyone with some knowledge of numerical modeling and basic Fortran to examine the Model E source code, and decide for yourself whether you could place any trust in the output…

I used to program large finite-element systems in Fortran 77. It is possible to write such systems in a reasonably structured way, given a lot of experience and effort, but if you don’t have the experience, the result is a mess. I have seen many such examples, and the one you refer to is among them. It is also a mixture of styles and Fortran dialects (77 vs. 90, for example), so it appears to be old code that has grown and grown.
I still program for a living, but I left Fortran years ago because it lacked the essential and far more powerful structuring features of modern languages such as C++.
I would say the above code does not appear to be up to current professional standards by any measure. Unless unit and integration tests exist to verify the various parts, it is very hard to tell whether it does anything useful. As it is so large, it is likely to contain many serious bugs.
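For readers who have not seen one, here is a minimal sketch of what such a unit test looks like (in Python for readability; the same idea carries over to Fortran test frameworks). The `trapezoid` kernel is a made-up stand-in for one small model subroutine, not anything from Model E:

```python
import math
import unittest

def trapezoid(f, a, b, n):
    """Composite trapezoid rule; a made-up stand-in for one model subroutine."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

class TestTrapezoid(unittest.TestCase):
    def test_exact_for_linear(self):
        # the trapezoid rule is exact for linear integrands:
        # the integral of 2x+1 over [0, 1] is exactly 2
        self.assertAlmostEqual(trapezoid(lambda x: 2 * x + 1, 0.0, 1.0, 10), 2.0)

    def test_refinement_reduces_error(self):
        # the integral of sin over [0, pi] is 2; refining the grid must shrink the error
        coarse = abs(trapezoid(math.sin, 0.0, math.pi, 10) - 2.0)
        fine = abs(trapezoid(math.sin, 0.0, math.pi, 100) - 2.0)
        self.assertLess(fine, coarse)
```

Run with `python -m unittest`. The point is not the arithmetic but the discipline: every subroutine gets checked against known answers, so a later edit that silently breaks it is caught.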

Mike Young
February 19, 2009 12:50 pm

With precious few exceptions, the financial risk models of which I am aware implicitly or explicitly employ Gaussian Normal Distributions. Why? Because they are simple to model and because they are simple to teach/learn.
However, there is little evidence that return distributions of any asset class–stocks, bonds, real estate, currency–are distributed normally. The Gaussian Normal Distribution is a special case. In my published research, we have found that real estate returns are distinctly non-normal with an alpha of about 1.5, which contrasts with an alpha of 2.0 for the normal distribution and an alpha of 1.0 for the Cauchy distribution typical of bonds.
The normal distribution has a real second moment, a variance, which is treated as the simple measure of risk. A distribution with alpha less than 2.0 does not have a second moment. Oops!! Heck, the Cauchy distribution doesn’t even have a first moment, a mean. This is anathema to risk modelers, who opt for the easy normal distribution instead.
We’ve seen disasters with the normal distribution before. The collapse of Long-Term Capital Management was, in large measure, the result of over-reliance on the normal distribution in the Black-Scholes option model, for example.
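The practical consequence of a missing first moment is easy to demonstrate. This sketch (standard-library Python; the function names are mine) compares running sample means of normal and Cauchy draws: the former settles down, the latter never does.

```python
import math
import random

random.seed(1)

def cauchy():
    # standard Cauchy via the inverse CDF: tan(pi * (U - 1/2)); no mean exists
    return math.tan(math.pi * (random.random() - 0.5))

def running_mean(draw, n):
    total, out = 0.0, []
    for i in range(1, n + 1):
        total += draw()
        out.append(total / i)
    return out

n = 100_000
norm_means = running_mean(lambda: random.gauss(0.0, 1.0), n)
cauchy_means = running_mean(cauchy, n)

# the normal running mean converges toward 0 as n grows; the Cauchy running
# mean is itself Cauchy-distributed at every n, so it keeps lurching forever
print(norm_means[-1], cauchy_means[-1])
```

Re-run with different seeds and the normal trace always ends near zero, while the Cauchy trace lands somewhere different every time, which is exactly what "no first moment" means in practice.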

AJ Abrams
February 19, 2009 1:06 pm

Dave Wendt (12:35:10) :
No, the Mayans did no such thing. Their calendar simply ended then; they made no prediction about what that meant. Having said that, I understand your sentiment. I just dislike the propagation of myths.
http://skepdic.com/maya.html

Joel Shore
February 19, 2009 1:15 pm

Dave Wendt says:

One might think that such a machine might approach the level of computational power needed to model something even as complex the climate system, but the problem of establishing initial conditions still applies. As far as I can tell there is no data set out there right now that would qualify as an undisputed gold standard for providing precise initial conditions for a single one of the myriad of parameters which would need to be established to have any hope of creating a reliably predictive climate model.

Using a climate model to predict general climate trends in the presence of forcings is not an initial condition problem. That is to say, the results that one gets for the general climate 100 years from now under a scenario of increasing CO2 emissions is independent of the initial conditions one chooses. (The initial conditions are important if you want to get every little up-and-down jog in the climate right, e.g., El Nino – La Nina oscillations and the like, which is why these are so difficult to forecast.)
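A toy analogy for the initial-value-versus-boundary-value distinction (this is a standard chaotic map, not a climate model): two trajectories of the logistic map launched from nearly identical starting points decorrelate within a few dozen steps, yet their long-run statistics agree closely.

```python
def logistic(x0, n, r=4.0):
    # the fully chaotic logistic map x -> r*x*(1-x)
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

n = 100_000
a = logistic(0.3, n)
b = logistic(0.3 + 1e-7, n)   # tiny perturbation of the initial state

divergence = max(abs(x - y) for x, y in zip(a[:100], b[:100]))
mean_a = sum(a) / n
mean_b = sum(b) / n

# pointwise prediction fails fast (the trajectories are order-1 apart within
# ~100 steps), but the "climate" of the map, its long-run mean of about 0.5,
# barely notices the change in initial conditions
print(divergence, mean_a, mean_b)
```

Whether the real climate system behaves like the statistics of this map under changing forcings is, of course, the contested question in this thread; the sketch only illustrates the distinction being argued about.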

Edward Morgan
February 19, 2009 1:20 pm

I remember someone making the point, in relation to harmonics, that the further along the line you go, the closer you come to a point which is impossible to predict. You may start with two or three notes, but then you get harmonics and interactions, and then more harmonics, heading towards something of too great a complexity.
This ultimately makes a fool out of any decision, however well meant, the further away you get from the day of decision. Someone else said that if you take something to extremes, it becomes its opposite. Then again, the earth system is bogglingly complex, so what do we expect? Maybe this is why politicians are all enigma and not too many facts: they know there will be a hatful of contradictions guaranteed.
The hope lies in our ability to see in the relatively short term (and other nonsense statements)

Barry B.
February 19, 2009 1:21 pm

Weren’t those models peer reviewed? 😉

TFN Johnson
February 19, 2009 1:23 pm

About 40 years ago I attended a short course on time series, given by an American, from GE I think it was. He lost me pretty quickly in the maths, but I still remember his response to the question “what about a change in trend?”. “Gee,” he said, “a change in trend. Well, that’s a very deep concept.” Ever since, I’ve found it best to Keep It Simple, although I never made as much money as Kroc!

hunter
February 19, 2009 1:26 pm

There are good models, by climatologists like Spencer, Lindzen, and Pielke, that show we are nowhere near a catastrophe and that coaltrainsofdeath are hyperbole.
But the ‘consensus’ dismisses these models, just like the Wall Street/Washington consensus dismissed the financial models that showed lending money to people who could not pay it back was a bad idea.
We are going to reap such terrible results from destroying our basic dependable energy sector. But the modelers don’t care. They have their models and their consensus.

David Holliday
February 19, 2009 1:35 pm

“The business managers *chose* to believe the newer (and therefore better right?) models despite their obvious flaws.”
“He rejected Mr. Moore’s allegations, saying that the bank commissioned an independent study and the matter was closed to the satisfaction of the FSA. ‘I remember the incident very well. It was taken very seriously by the board.’”
Hindsight is always 20/20. It’s so easy to look back and see the “error” of someone else’s ways. That one “clarion” voice that was ignored, which, if only it had been heard, would have saved the day. It’s just too easy. It’s not real.
By way of experiment, tell me now whether the U.S. stimulus package will pull the economy out of recession or not? Regardless of what you think, would you bet your salary on it? Would you buy or sell huge amounts of stock on your prediction? Can it be modeled? There are voices on both sides of that debate. Inevitably someone will be right. Who will it be?
By the way, we are very good at modeling “what happened”; we just suck at modeling “what will happen”. There are just too many things we don’t “know”. And even when we do know, we can’t model completely accurately.
Take, for instance, someone’s earlier example of kicking a football, something you would think could be easily modeled. You could use the football equivalent of Iron Byron (a machine for consistently hitting a golf ball with the same force and in the same direction), but the football would land in a different place every time. If, before every kick, you modeled the outcome with a computer, the model wouldn’t predict the landing spot with complete accuracy even once. The prediction would always be a little off. In modeling we call that the delta, and we try to make it as small as possible, but it’s always there.
If we understood climate as well as we understand kicking footballs, we could model it too. But our models would still be wrong; they just wouldn’t be as wrong. After reading this blog and others, I’m convinced no one really understands climate all that well. So the ability to model it is pretty poor.
And if that “whistleblower” really was that good at predicting risk and so confident in his models, well he must be an exceptionally rich man now. Because anyone who could have predicted this financial disaster with any accuracy would have known to short the market and would have seen financial returns beyond their wildest imaginings. Just how rich is that guy anyway?
As far as old models vs. new models, well computers are faster now than they were just a few years ago but they compute the same way. The newer models were undoubtedly built from the older models adding more variables and more possible outcomes. And if the older models were really that good, they’d still be using them.
Finally, why do these “conspiracy” theories always pop up? They are so easily disproved. What financial manager in his right mind would put himself or herself at risk to the extent alleged now? They have, at least most of them, compensation that is hugely tied to the performance of their company. Which of them is so callous or stupid that they would knowingly risk their company and with it their financial future? They have wives, children, mortgages, tuition payments, and all the other things the rest of us have. Many of them lost staggering amounts of money.
While the financial disaster was fundamentally the result of bad decisions on many levels, from government to business to personal, it ultimately reached the level it did because of a set of events that no one thought to model.
Take the example of flipping a coin. Every time you flip a coin the probability is 50/50 heads or tails. But say you flipped the coin and it came up heads 100 times in a row. What would be the probability that the 101st flip was heads? 50%. But that’s not the point. You could model that scenario on a computer and run the simulation 100,000 times, and the probability that the computer would ever return that result is ridiculously small. But it could happen in life. Because life is unpredictable. Welcome to banking crisis 101.
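The arithmetic behind that coin example is worth making explicit (standard-library Python; the numbers are exact, the framing is mine):

```python
p_run = 0.5 ** 100        # chance of 100 heads in a row: exactly 2**-100
sims = 100_000            # the simulation budget mentioned above
p_seen = sims * p_run     # union bound; accurate here since p_run is tiny
p_next = 0.5              # the 101st flip is still 50/50 regardless of history

# p_run is about 7.9e-31 and p_seen about 7.9e-26: the simulation will
# essentially never reproduce the streak, yet real life only has to do it once
print(p_run, p_seen, p_next)
```
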

Richard deSousa
February 19, 2009 1:36 pm

David Wendt:
It doesn’t matter how powerful the computer is if garbage is input. GIGO.

pyromancer76
February 19, 2009 1:46 pm

I want to inject a little paranoia into the discussion. Common Sense (10:53:23) said: “I don’t need a complex model to tell me that making sub-prime loans to people who can’t afford them and/or have sketchy credit histories is a recipe for disaster. And that’s what happened.” Bob Shapiro lists government agencies that “routinely report doctored economic reports and encourage rosy forecasts” as one of his similarities in financial and climate modeling.
Yes, but how did it happen? See Climate Audit on 2/18/09. S. McIntyre discusses a report by McCullough and McKitrick. (Please, everyone, read both the discussion and the report.) In addition to the Mann Hockey Stick falsification of data and modelling, a working paper released in 1992 by the Boston Federal Reserve Bank was investigated. This WHOLLY UNSUPPORTABLE report led to RAPID RULES CHANGES re subprime loans throughout the 1990s. In 1995 ACORN and NACA were embedded in the process. It took years for anyone to be able to verify the method of gathering data, use of the data, and the conclusions of the economists, Munnell, Browne, McEneaney, and Tootell. Essentially they put forward a falsified report and “trillions of dollars of bad mortgage debt and related financial derivatives” ensued.
I want to know the backgrounds of these economists and where they are today. It is very difficult for me not to believe that this financial crisis was in some way “prepared” or “planned”.
The “Mann Hockey Stick” arrives in 1998. Too many similarities: the falsification of data, the WHOLLY UNSUPPORTED methods, the inability of both scientists and citizens to gain access to public documents, the unaccountability of researchers for their data and conclusions, and the institutions, both public and private, that deny that accountability. It warps a reasonable mind. One can hardly get one’s mind around the enormousness — and the enormity — of the problem.

David Ermer
February 19, 2009 2:00 pm

Link includes Gavin Schmidt comment on difference between climate and financial models FWIW
http://blog.wired.com/wiredscience/2008/10/climate-models.html

L Bowser
February 19, 2009 2:02 pm

The problem with models (climate and financial) is they can only account for known factors within a specific and measurable range of values. In finance this is differentiated with systemic and non-systemic risk.
In the case of the recent financial meltdown, the non-systemic risk (that which we can protect ourselves against and generally plan around) might be stated as: given normal economic conditions, we can expect 2% foreclosure rates to persist. Now add a sensitivity test: during recent recessions (the last 2-3), that rate has gone as high as 4% in some markets. But what happens when external factors we can’t mitigate or properly plan for (systemic risk) move us beyond that range? Gas prices going to $4.00 per gallon at the same time as a slowdown in wage growth and worker productivity, healthcare costs rising faster than wages, a massive number of simultaneous mortgage-rate resets producing household financial distress, a property market reaching its ceiling, and the failure of the US government to bail out Lehman at the beginning of what would probably have been a minor recession absent all other factors. (Someone mentioned Black Swans in an earlier post.) The result: 6-7% foreclosure rates in some of the highest-value markets.
Models are built around real-world scenarios where parameters operate within certain ranges. Violate those ranges and the results become unpredictable, possibly producing an asymmetric event, i.e. a meltdown like the one we saw in the financial markets. Yes, there were people who predicted this would happen, but they were held in the same regard as those who predict a nuclear incident next year. It could happen, but the factors that would lead to such an event have not been seen since WWII, so it is highly unlikely. Why plan for the one-in-a-million event?
We have the same thing with GCMs. They are built around parameters that operate within certain ranges. Now, when we ramp up one of the factors, like CO2, outside of known ranges, what happens? A possible asymmetric prediction, because factors like increasing cloud cover, IR absorption reaching a saturation point within specific bands, cosmic dust clouds, sunspot cycles, etc. are not accounted for. The model becomes “broken”. Climate modeling is a good exercise. It helps discover important climate drivers and, in theory, helps us determine their interrelationships… within a standard set of values. It’s what’s hidden from us (analogous to the systemic risk in financial modelling) that makes the whole model invalid when we move beyond the model’s working ranges, like 500 ppm CO2.
Sorry for the rant, but this is my biggest pet peeve about models and the misapplication thereof.
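The range-of-validity point can be shown with a deliberately crude toy (all numbers invented): calibrate a linear model where the data live, then ask it about a regime it has never seen.

```python
def fit_line(xs, ys):
    # ordinary least squares by hand; stands in for any calibrated model
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def reality(x):
    return x ** 3    # hypothetical nonlinear "true" process

xs = [i / 10 for i in range(11)]    # calibration range: 0.0 .. 1.0
slope, icept = fit_line(xs, [reality(x) for x in xs])

in_range_err = abs((slope * 1.0 + icept) - reality(1.0))
out_of_range = slope * 3.0 + icept  # the model's answer at x = 3

# in range the fit is crude but bounded (error about 0.26 at x = 1);
# at x = 3 reality is 27 and the calibrated line answers about 2.6
print(in_range_err, out_of_range, reality(3.0))
```

Nothing in the fitting procedure warns you that x = 3 is out of range; the model returns a confident number either way, which is the pet peeve above in four lines.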

February 19, 2009 2:03 pm

In other words, pyromancer76, insiders have learned to game the system. This can only be done when the system lacks transparency, as both the financial system and the climate peer-review system currently do.
There are constant calls for Mann, Hansen, the IPCC, the NOAA and others to publicly archive their raw data and methodologies. But getting any information at all is like pulling teeth.
They have learned to game the system, and taxpayers pay the freight.
“The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present and is gravely to be regarded… Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.”
~ President Dwight Eisenhower

Frank K.
February 19, 2009 2:13 pm

Joel Shore (13:15:33) :
“Using a climate model to predict general climate trends in the presence of forcings is not an initial condition problem. That is to say, the results that one gets for the general climate 100 years from now under a scenario of increasing CO2 emissions is independent of the initial conditions one chooses. (The initial conditions are important if you want to get every little up-and-down jog in the climate right, e.g., El Nino – La Nina oscillations and the like, which is why these are so difficult to forecast.)”
This contention (that climate prediction is not an initial value problem) is a controversial one. Here is a note on this topic by Dr. Roger Pielke, Sr.:
http://www.climatesci.org/publications/pdf/R-210.pdf
The money quote:
“This correspondence proposes that weather prediction is a subset of climate prediction and that both are, therefore, initial value problems in the context of geophysical flow.”
Indeed, if you examine the climate prediction algorithms, they are all based on the same time-marching approach (in fact, nearly the same basic dynamics) as numerical weather prediction codes. If you ignore the “weather noise” the climate codes produce, then you basically have a source-term driven numerical process that relies on lots of numerical dissipation, fixers, and ad hoc filters to keep the solutions from blowing up. One of the ironies of these models is that the closer you get to the initial condition, the less reliable the predictions are (!) – which is why they have very little skill in predicting, say, next year’s climate.
For more on the accuracy of long term climate modeling, please read this:
http://climatesci.org/2008/11/14/are-multi-decadal-climate-forecasts-skillful-2/

tallbloke
February 19, 2009 2:19 pm

“Of the amounts provided, $170,000,000 shall address critical gaps in climate modeling and establish climate data records for continuing research into the cause, effects and ways to mitigate climate change.”
I see this as an encouraging sign that Obama wants to sit on the fence until his second term. What’s the point of putting this much into investigating the phenomenon if the outcome is already assumed? A lot of that money might get spent on insulation yet.
It’s a clear admission the models aren’t yet up to the task of deciding policy.

Roger Sowell
February 19, 2009 2:19 pm

David Holliday wrote:
“Computers can’t do anything humans can’t do. Computers can’t think. Computers can’t create. What computers can do is some of what humans can do only faster.”
In my experience, computers can do many things humans cannot do. As just one example, when I studied artificial intelligence theory, algorithms, and systems, it was eye-opening to discover that a properly programmed computer can do “things” that humans just cannot do. There appears to be a limit to the amount of information a human (even great humans) can assimilate, process, and keep account of. Computers can do this far better. There are also documented examples of, for example, neural network algorithms that *learn* from mistakes, from partial successes, and deduce rules or answers that have eluded even the most experienced and smartest humans.
There are also relationship-discovery algorithms, aka data mining, that explore vast reams of data and reveal insights that humans have never before discovered.
In the field of computerized advanced process control, well, let’s just say that many of us are very glad humans are not at the controls, but instead let the computers do the work. Fly-by-wire is just one example of this, wherein advanced aircraft fly in or near the unstable regime, a regime where human responses and anticipation just cannot adequately respond.
John Galt – re is Fortran still in use?
Absolutely. Operating companies have millions of lines of code written in Fortran that work, and work quite well, every day. No one in the private sector has the time or budget to rewrite perfectly good code just to bring it up to some newly written standard. Those new standards change every few years, and rewriting would be a complete waste of effort. There may be some limited instances where this is done, but it must have a justifiable positive influence on the financial bottom line.

Lichanos
February 19, 2009 2:25 pm

Computer modeling has worked so well for financial markets and the Global Warming industry. I can’t wait for it to be used with the Census.
Douglas Cootey

Hey, don’t throw out the baby with the bathwater! Nobody is planning “modeling” with the census. Just sophisticated sampling to improve results over the count-every-head technique, which never really does just that. Statistical sampling of large populations has a good theoretical base, and the methods of assessing error are quite robust.
Your comment smacks of political commentary clothed as technical critique. Is there a reason to only do things the way they did in the 18th century?

microw
February 19, 2009 2:29 pm

And to think governments want to solve the so-called “climate change” problem based on computer models. It defies common sense and logic to plan too far ahead using a model. It is at best an approximation, at worst soothsaying.
Some Australians have had enough and have formed the World’s first sceptics party. Visit them here at http://www.climatesceptics.com.au .
It’s time to stop the nonsense

Richard deSousa
February 19, 2009 2:30 pm

David Ermer: After the sleazy episode Gavin Schmidt tried to pull on Steve McIntyre (see http://www.climateaudit.org/?p=5180) whatever he subsequently claims is subject to suspicion… and may be derision.

February 19, 2009 2:41 pm

Teschler draws an interesting analogy here, but I don’t believe that it’s correct.
First, climate models are very different from economic models: in construction, in execution, in adherence to data, and in management. Both sets are models, both are based on equations whose values are manipulated through simulation, and that’s about where the resemblance ends.
“Economic” models come in two distinct forms: the financial models are really valuation models, and the macro-economic models are really forecasting models.
The valuation models at the root of the credit crisis did not actually fail as models. Two things happened to them:
1 – a majority are run on Intel x86 gear, and that gear had a long-term problem with non-random randomization, meaning that all valuations were off by unknown amounts because the underlying assumption of randomness in the simulation was systematically violated in execution.
Nobody yet knows how important that was.
2 – most financial models correctly predicted the problem, but regulatory incentives swayed management decisions to ignore what the geeks said and many, in fact, instructed “their geeks” to just shut the star blank star up about it.
Wait, come to think of it, that does sound like climate modelling, doesn’t it ?

jorgekafkazar
February 19, 2009 2:44 pm

Here’s the link to the report referred to by pyromancer76, above:
http://www.fraserinstitute.org/commerce.web/product_files/CaseforDueDiligence_Cda.pdf

Michael D Smith
February 19, 2009 2:49 pm

Joel Shore (13:15:33) :
Using a climate model to predict general climate trends in the presence of forcings is not an initial condition problem. That is to say, the results that one gets for the general climate 100 years from now under a scenario of increasing CO2 emissions is independent of the initial conditions one chooses.
A good point. The single factor that has so far made all climate models fail (yes, the null hypothesis is rejected on all of them) is the most important one, climate sensitivity to a doubling of CO2. If you re-run most models with negative feedback, guess what – they aren’t that bad at predicting the actual observations. This single factor is really the only major disagreement between the so-called “consensus” and the “skeptics”. Once a reasonable sensitivity is applied (since this is the only conclusion one can reach from the observational data), there is nothing catastrophic to worry about, and the rest of the argument (control, adaptation, economics – all of it) – becomes completely irrelevant. It’s really quite simple and boils down to one single important factor. For this factor, the evidence is quite clear, and there is really nothing either side needs to be skeptical of.
It is far better to lose one relatively small industry (AGW) than to destroy the larger civilized world chasing demons over this simple factor.
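For concreteness, the "one factor" arithmetic looks like this. The sketch uses the simplified logarithmic forcing expression of Myhre et al. (1998); the sensitivity values are illustrative only, spanning roughly a no-feedback case through amplified cases, and are not taken from any particular model:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    # simplified expression (Myhre et al., 1998): F = 5.35 * ln(C/C0)  [W/m^2]
    return 5.35 * math.log(c_ppm / c0_ppm)

f2x = co2_forcing(560.0)    # forcing from a doubling, about 3.7 W/m^2

# delta_T = lambda * F; the entire argument hinges on the value of lambda
for lam in (0.3, 0.5, 0.8):  # K per (W/m^2), illustrative values only
    print(f"lambda = {lam}: warming per doubling = {lam * f2x:.1f} K")
```

The forcing term is the same in every case; only the assumed sensitivity moves the answer from roughly one degree to roughly three, which is the single disagreement the comment above describes.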

Dave Wendt
February 19, 2009 3:04 pm

Joel Shore (13:15:33) :
Using a climate model to predict general climate trends in the presence of forcings is not an initial condition problem. That is to say, the results that one gets for the general climate 100 years from now under a scenario of increasing CO2 emissions is independent of the initial conditions one chooses
You may be correct about the necessity, or lack thereof, of precise initial conditions for general climate trend models. My own understanding of how the models operate is admittedly amateurish in the extreme, but if the models are as you state, it still seems to me that the modellers would require a solid understanding of what the levels of the various forcings are, how they interact as they change, what feedbacks result, how susceptible the forcings are to being overwhelmed by other factors in the system, etc. Nothing I’ve seen leads me to believe the modellers possess any of these understandings, an opinion which I believe is supported by the models’ dismal performance in predicting results even in the short term. That, I think, leaves intact my essential point: making policy decisions that will have real, immediate, and dire consequences based on dubious models’ predictions of catastrophe will lead the planet to a future of increased suffering, misery, and death worse than any worst-case scenario the warmists have envisioned.
AJ Abrams (13:06:09) :
I did say supposedly, and I am aware that the looming Doomsday is a product of the kind of thinking, or lack of same, that I’ve been arguing against, but I was just engaging in a little ironic poetic license.

Editor
February 19, 2009 3:24 pm

John Galt (12:37:19) :

I frequently wonder how anybody can be using Fortran for anything in this day and age. It’s obsolete, but once was the primary computer language used in science and engineering.
When was Fortran last part of the curriculum for data processing or computer science? Has it been 20 years or more? Do any schools still teach it? Do engineering firms still have a large Fortran code base in use, or is it only found in government these days?

We’ve been through this on other threads. If you were involved in the supercomputer field you’d know that “I don’t know what the future language for programming supercomputers will look like, but I know it will be called Fortran.”
The next Fortran spec will include object oriented programming constructs. Previous specs added matrix operations, support for vector and parallel hardware, data structures, etc.
Look up codes like NASTRAN, that’s a NASA->commercial product that may have racked up more computes than any other program in existence. Your car was probably modeled with it. Google offers 913,000 hits.
Just because GISS doesn’t take advantage of the new features or know how to write well-structured code, that’s no reason to disparage Fortran, just GISS!

Ed Scott
February 19, 2009 3:31 pm

Off topic but an action by the EPA to regulate man-made CO2 makes any further arguments about AGW, global warming/climate change irrelevant.
Climate models triumph over Nature.
————————————————————-
EPA Expected to Regulate Carbon Dioxide for First Time
http://www.foxnews.com/politics/first100days/2009/02/18/epa-expected-regulate-carbon-dioxide-time/
The Environmental Protection Agency is expected to act for the first time to regulate carbon dioxide and other greenhouse gases, The New York Times reported on Wednesday, citing senior Obama administration officials.
EPA Administrator Lisa Jackson has asked her staff to review the latest scientific evidence and prepare documentation for a finding that greenhouse gas pollution endangers public health and welfare, the newspaper said.
There is wide expectation that Jackson will act by April 2, the second anniversary of a Supreme Court decision that found that EPA has the authority to regulate greenhouse pollution under the U.S. Clean Air Act.

jmrSudbury
February 19, 2009 3:40 pm

Climate models are different. Unlike the banks that assumed that past trends would hold, climate models assume that our output of CO2 will cause the planet to diverge from past trends. — John M Reynolds

Roberto
February 19, 2009 3:45 pm

With precious few exceptions, the financial risk models of which I am aware implicitly or explicitly employ Gaussian Normal Distributions.
As someone alluded to above, this was the point driven home by Nassim Nicholas Taleb in both “The Black Swan” and “Fooled by Randomness.” The models used by Long-Term Capital Management, created by Nobel Prize winners no less, calculated that the kinds of losses that eventually drove LTCM under were a ten-sigma event which, by their calculation, could only happen once every 5 or so billion years.
As Taleb, with his characteristic scorn puts it, they assumed that the conditions that could hurt them had only happened a few times in the history of the Universe.
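Just how rare "ten sigma" is under a Gaussian assumption can be checked directly (standard-library Python; the one-draw-per-trading-day framing is my own illustration, not LTCM's actual methodology):

```python
import math

def normal_tail(k):
    # P(Z > k) for a standard normal, via the complementary error function
    return 0.5 * math.erfc(k / math.sqrt(2.0))

p = normal_tail(10.0)   # about 7.6e-24
days = 1.0 / p          # expected number of draws before one such loss
years = days / 252      # at one draw per trading day

# roughly 5e20 years: under the Gaussian, a ten-sigma daily loss should not
# have happened once in many billion ages of the universe; yet it did happen
print(p, years)
```
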

February 19, 2009 4:14 pm

Old Chemist (10:42:41) :
Their prediction for cycle 24: peak of 120 – 140 (Zurich sunspot number) — cycle starting 2007 – 2008.
Actually 165-185, even worse.

Gary P
February 19, 2009 4:24 pm

Dave (11:12:57)
Thanks for Goodhart’s Law. I love collecting all these laws of behavior. Was home ownership the statistic that the government tried to manipulate?
My financial law is: Any model of the economy will fail as soon as it is accepted as correct.
We pay an awful lot of money to people to maximize returns. As soon as a correct model is created and accepted as correct, people will act in such a manner as to make the model fail. In the short run, the economy is close to a zero-sum game. The losers in the financial markets will rapidly start using the correct model that made the winners successful. All the assumptions in any model will immediately fail.
For the climate, we have no effect, so a model could be successful because we cannot, by human behavior, change the assumptions of a correct model.

Ed Scott
February 19, 2009 4:27 pm

Dave Wendt
Dave, here is a study, Radiative Forcing of Climate Change: Expanding the Concept and Addressing Uncertainties, http://www.nap.edu/catalog.php?record_id=11175.
Changes in climate are driven by natural and human-induced perturbations of the Earth’s energy balance. These climate drivers or “forcings” include variations in greenhouse gases, aerosols, land use, and the amount of energy Earth receives from the Sun. Although climate throughout Earth’s history has varied from “snowball” conditions with global ice cover to “hothouse” conditions when glaciers all but disappeared, the climate over the past 10,000 years has been remarkably stable and favorable to human civilization. Increasing evidence points to a large human impact on global climate over the past century. The report reviews current knowledge of climate forcings and recommends critical research needed to improve understanding. Whereas emphasis to date has been on how these climate forcings affect global mean temperature, the report finds that regional variation and climate impacts other than temperature deserve increased attention.
Radiative forcing is reported in the climate change scientific literature as a change in energy flux at the tropopause, calculated in units of watts per square meter (W m−2); model calculations typically report values in which the stratosphere was allowed to adjust thermally to the forcing under an assumption of fixed stratospheric dynamics.
Efficacy: The ratio of the climate sensitivity parameter λ for a given forcing agent to λ for a doubling of CO2. The efficacy E is then used to define an effective forcing Fe = F E.
Projections of climate change: An estimate of future climate, typically produced by a climate model, in response to estimates of future natural and anthropogenic forcings. Note that most projections consider only a subset of possible forcings.
————————————————————-
A radiative forcing seems to be a variable whose value is climate model dependent.
I have harbored the thought that radiative forcing, as presented in the study, is the D’Artagnan of the Three Musketeers, Finagle, Bougerre and Diddle.

MGauntt
February 19, 2009 4:36 pm

First off – been reading the site for a couple months – Outstanding job!!
Funny thing about models. Guys that are geniuses program them, and are often wrong. A land surveying buddy of mine knew 2 years ago that the housing market was going to tank because a) he could see the activity decline and b) he has lived through a couple of housing corrections. The geniuses could learn something from the “dumb” surveyor.
It is the same way with climate change. The talking heads speak calamity when the summer is hot and get some geek to talk about the end of the planet. A “dumb” farmer realizes that many events like floods, droughts, etc. are on 40-50 year cycles.

TJA
February 19, 2009 4:50 pm

If some of the stimulus money goes into a well designed automated climate network, that has many stations far from civilization, the whole thing might be worth it in savings on AGW in the long run.
What is really needed is good data, even if that data starts this year.

Robert Wood
February 19, 2009 4:59 pm

Roberto @ 15:45:24):
…calculated that the kinds of losses that eventually drove LTCM under were a ten sigma event which, in their calculation, could only happen every 5 or so billion years.
But therein lies the problem with statistical models. It may even only ever happen, once, ever; but that doesn’t mean it will not happen NOW.
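Roberto’s LTCM number is easy to check with a few lines of code. The sketch below (Python; the power-law tail exponent of 3 is my own illustrative assumption, not a figure from any cited source) computes how often a 10-sigma daily loss “should” occur if returns were normal, and how often it occurs if the tail is merely fat:

```python
import math

def normal_tail(k: float) -> float:
    """P(Z > k) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(k / math.sqrt(2.0))

def years_between(p_daily: float, trading_days: int = 252) -> float:
    """Expected years between daily events that occur with probability p_daily."""
    return 1.0 / (p_daily * trading_days)

# Under the normal assumption, a 10-sigma day is essentially impossible:
p_gauss = normal_tail(10.0)          # on the order of 1e-23
# Under an assumed fat (power-law) tail P(X > x) = x**-3, the same-sized
# move is merely rare:
p_fat = 10.0 ** -3.0

print(f"normal tail:    once every {years_between(p_gauss):.1e} years")
print(f"power-law tail: once every {years_between(p_fat):.1f} years")
```

The point is not the exact numbers but the gap between them: a model that assumes the wrong distribution can be off by some twenty orders of magnitude on exactly the question it was built to answer.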

February 19, 2009 4:59 pm

Goodhart’s Law: Whatever you adopt as a target ceases to be a relevant target once you have adopted it.
e.g., once a bill passes giving $$$$$ X 10^9 to those who pretend they can change the climate, their ostensible target evaporates; a new target is put in place, and the ratchet makes another click.

Robert Wood
February 19, 2009 5:01 pm

Ed Scott @ 15:31:04
If the EPA does go for this, I trust that at the public hearings (there will be public hearings, won’t there?) someone mentions that this is a tax on breathing!!!
And joggers should pay more as they generate more CO2 than lazy bastards like me.

February 19, 2009 5:07 pm

Tell me: why can’t I set my climate model to the year 1 AD, ask it to predict the last 2,000 years of temperature, and see how the prediction compares against the measured record?
It’s called validation. It is how financial institutions do it with their market prediction models as well. Both suffer the same problem, unknown events.
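The validation this commenter describes (often called hindcasting, or backtesting in finance) is simple to sketch. In the toy Python version below, a made-up trend-plus-noise series stands in for a temperature or price record; both the data and the straight-line “model” are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up record: a slow trend plus noise stands in for 200 years of data.
t = np.arange(200)
record = 0.01 * t + rng.normal(0.0, 0.5, size=t.size)

# Fit the simplest possible model on the early period only...
train_t, train_y = t[:150], record[:150]
slope, intercept = np.polyfit(train_t, train_y, deg=1)

# ...then validate it on the held-out later period it has never seen.
test_t, test_y = t[150:], record[150:]
predicted = slope * test_t + intercept
rmse = float(np.sqrt(np.mean((predicted - test_y) ** 2)))
print(f"out-of-sample RMSE: {rmse:.3f}")
```

Validation like this catches overfitting, but, as the comment says, it cannot catch unknown events: if the held-out period happens to resemble the training period, the test passes and proves nothing about a change of regime.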

Ed Scott
February 19, 2009 5:40 pm

Robert Wood
Public hearings will probably be handled as the public review period of five days for the stimulus pork and public debt bill.
They will have a review, but I wonder how effective public opinion will be on a preconceived outcome.
Holdren, Chu, Browner, Jackson and Salazar are hard-core greenies.

charlesH
February 19, 2009 5:43 pm

Robert (09:54:10) : (first post)
“The company went from a $70/share value to a $0.70/share because the models were wrong.”
WRONG, I think. The companies’ managers didn’t care whether the models were right or wrong. They were making tens of millions of dollars a year in bonuses. Their personal agendas trumped any fiduciary responsibility they might have felt.
Sound familiar? Wall street. Climate science. Two peas in a pod.

Mike Bryant
February 19, 2009 5:52 pm

“Statistical sampling of large populations has a good theoretical base, and the methods of assessing error are quite robust.
Your comment smacks of a political commentary clothed as technical critique. Is there a reason to only do things the way they did in the 18th century?”
Sorry, but it just scares the heck out of me when somebody says the word “robust”. It sounds too much like climate “science”.

W. James
February 19, 2009 5:57 pm

I am surprised no one has mentioned how the weather naturally drives the economy, and how the unnatural effects of climate change hysteria have slowly strangled the economy over the past two decades.
A glut of sub-prime mortgages is nothing compared to world-wide cap and trade policies. Monies that would otherwise have flowed into industrial growth have been diverted into commodities, currencies, and credit markets. One bubble after another…
The elephant in the room is AGW; the economy is the pile of crap at its feet.

Michael J. Bentley
February 19, 2009 6:21 pm

I’ve found the answer to both the financial problem and AGW.
It’s 48
With apologies to the late Douglas Adams – who with typical British humor is laughing at us all.
Mike

Roberto
February 19, 2009 6:25 pm

It’s 48
I thought that it was 42, unless these problems are 6 more, kinda like Spinal Tap’s amps being one louder.

Domingo Tavella
February 19, 2009 6:29 pm

The financial meltdown is not an issue of modeling, and was not caused by reliance on models. The meltdown was caused by lack of transparency in the markets (absence of proper exchanges), and by ignorance on the part of a very large firm whose area of expertise is in insuring non-financial risks, not in derivative contracts.
It is a mistake to draw parallels between the collapse of the financial system, presumably under the noses of expert modelers, and the GW controversy. Physics modeling, which pertains to GW, and financial risk control are different fields with very different paradigms.
Physical systems respond to rigid conservation laws – all the modeler needs to do is capture such laws in their analysis and computational implementation. Financial models, on the other hand, rely on assumptions about human responses and market efficiencies, issues that cannot be cast in the same mathematically rigid manner as the physical laws.

Mark Smith
February 19, 2009 6:41 pm

One point about GCMs vs financial models – the financial models are generally pure statistical models, while the GCMs are ‘physical’ models, i.e. they’re based on physics and chemistry etc.
Doesn’t make either of them good, bad, or indifferent, but comparing the two is not really apples to apples.

pyromancer76
February 19, 2009 6:48 pm

Way To Go, W. James!
W. James (17:57:20) :
“I am surprised no one has mentioned how the weather naturally drives the economy, and how the unnatural effects of climate change hysteria have slowly strangled the economy over the past two decades.
A glut of sub-prime mortgages is nothing compared to world-wide cap and trade policies. Monies that would otherwise have flowed into industrial growth have been diverted into commodities, currencies, and credit markets. One bubble after another…
The elephant in the room is AGW; the economy is the pile of crap at its feet.”
The love of truth, transparency, accountability — and science — has been trampled beneath “its” feet.

Bruce Foutch
February 19, 2009 7:24 pm

I’m going with agesilaus (11:37:40) and the Austrian School of Economics on this one.
I believe the financial models get trumped when a government implements programs that effectively privatize profits while socializing the risks. When financial institutions no longer need to factor in real risk (will I pay back my loan) because another entity (government) subsidizes that risk (don’t worry, Freddie and Fannie will cover it) they can focus totally on the reward side of the risk/reward equation. Add a little greed and a few decades of Government prodding (home ownership stimulus programs and Federal Reserve money pumping), and… Well, here we are.
RE: P. Hager (10:26:40)
“If Stupidity got us into this mess, then why can’t it get us out?” Will Rogers 😉

Michael J. Bentley
February 19, 2009 7:24 pm

Roberto,
“I thought that it was 42, unless these problems are 6 more, kinda like Spinal Tap’s amps being one louder.”
Um,
Well, now you know why the financial markets failed, and AGW models are, well, less than useful…
Mike

Douglas DC
February 19, 2009 7:39 pm

Excellent, W. James! Both the AGW and real estate problems in one statement. I’m a Realtor now. I saw this bubble coming and warned people – but what do I know. AGW and its adherents are sitting on top of the AGW/carbon credits charade, and we had better not go there. At least selling apples on street corners may have a future…

Frederick Michael
February 19, 2009 7:39 pm

John Galt (12:37:19) :
While you weren’t watching, Fortran has evolved. Fortran 95 has lots of OO features and does array operations in a single command. Sometimes something is just plain good. Minor tweaks are all it needs, because it is intuitive and thus a great tool. I use it every day — including writing new code.
Who would have thought we’d still be making auto engines with round pistons and a crankshaft. When you understand how piston rings seat, you realize why the Wankel couldn’t last. The right design has a long lifespan.

February 19, 2009 7:41 pm

The business of banks is to lend money for profit. Simple as that. Nothing else. To lend money for profit.
It wasn’t very long ago that each application for a loan was looked at by a human banker with a few years experience and judged on its merits to the best of that banker’s ability. If he wasn’t sure whether to lend, he would refer it to someone of more experience. Whether the decision was about a personal loan to buy a car, a mortgage loan to buy a house or a business loan, the same procedure applied. A significant aspect of the decision was the banker’s assessment of the likely ability of the applicant to repay the loan.
In the mid 1980s in the UK a new procedure was adopted by aggressive entrants into the mortgage loan business. They relied on the perceived value of the property first, the borrower’s ability to repay was relegated to secondary importance to such an extent that in many instances they sought no proof of employment or income other than the borrower’s assertion on the application form. They wrote a lot of business and posted massive profits, then the housing market fell and repossessed properties were sold at a loss. Vast sums were lost by the companies that took these risks. The old crusty banks and building societies suffered very few losses because they based their lending decisions on the apparent ability of the borrowers to repay, the apparent value of the property against which the loan was to be secured was also relevant but only once the borrowers were seen to be a good bet.
This type of “equity lending” went out of fashion as a result, but resurfaced in the early 2000s when exactly the same products which had proved such a disaster more than a decade before were offered once again. And now exactly the same losses are being witnessed again. Well who’d-a-thunk-it? I know one group who thunk it: crusty old lawyers like me who were involved in the litigation resulting from the last property crash. The aggressive lenders wanted to pass some of their losses on to others, so they sued valuers who they claimed had overvalued houses and conveyancing solicitors who missed defects in title. There were thousands of these cases. Even where the valuers or solicitors were found to be negligent, the damages awarded were reduced to reflect the unreasonable risks the lenders took by not investigating the borrowers’ means. The most critical judgments were copied and circulated around banks to warn them of the dangers of lending to the impecunious.
Models cannot replace experienced human assessment of the risks involved in individual transactions. Indeed, it is absurd to think of models having any part to play in the process. It is only at more remote stages that models come into the picture. A bank wants to sell-on a bundle of mortgage loans (the bank gets a wad of cash now, the purchaser buys the right to receive repayments from the borrowers), how much should be paid? A bundle of mortgage loans is put up as security for a commercial loan, what is it worth? How will that bundle of loans perform over the next 2, 5, 10 years? Pump some assumptions into a computer and you’ll get an answer, but it will be unreliable unless the assumptions are sound and the assumptions cannot be sound unless average rates of default can be predicted. They can be predicted (within a reasonable margin) under old-fashioned banking procedures, but not when the borrowers’ ability to pay has never been factored into the equation.
Many of the models were concerned with the anticipated performance of financial packages several stages removed from the underlying transactions. At each stage of remove, more assumptions have to be made and the input error is increased. To some extent the assumptions are inferences from known facts, but only to some extent. Inferences can only be drawn if you have an established set of values against which to make them. For example, it is reasonable to infer that lending to someone with a good work history, and restricting the loan to no more than 3 times his annual income and no more than 75% of the current market value of a house, will result in a loan that performs well. That is an inference because it can be measured against historic default levels for loans applying those criteria. Lending 5 times income and 125% of the value of the house cannot be measured against established data because there aren’t any, so you have to make assumptions which are little more than semi-educated guesswork. Even solid inferences are not guaranteed to prove accurate.
I suppose it must be the same with climate models. Input data based on recent measures of temperature will be subject to error (as Mr Watts’ project is demonstrating) but it is nonetheless likely to be more accurate than data which itself is the result of assumptions about what I believe are known as “proxies” (I’ve always thought that sounds like a 1930s chocolate bar, but that’s an aside). You then have to add more and more layers of inference and assumption. The result is inevitably something so full of ifs and buts that it is a matter of chance whether it produces anything even vaguely approaching reality.
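The point that each stage of remove adds assumptions can be put in code. A toy Monte Carlo sketch (the ±10% per-assumption error is an invented illustration, not data from any real deal): chain more uncertain assumptions and the spread of possible outcomes grows.

```python
import random

random.seed(42)

def outcome_spread(n_assumptions: int, n_trials: int = 20_000) -> float:
    """Standard deviation of a valuation after chaining n_assumptions,
    each one wrong by a uniformly random factor of up to +/-10%."""
    outcomes = []
    for _ in range(n_trials):
        value = 1.0
        for _ in range(n_assumptions):
            value *= random.uniform(0.9, 1.1)   # one semi-educated guess
        outcomes.append(value)
    mean = sum(outcomes) / n_trials
    return (sum((v - mean) ** 2 for v in outcomes) / n_trials) ** 0.5

for stages in (1, 3, 6):
    print(f"{stages} assumption(s): spread {outcome_spread(stages):.3f}")
```

The spread grows roughly with the square root of the number of stacked assumptions, and that is the benign case of independent, unbiased errors; a security several stages removed from the underlying mortgages carries compounded uncertainty that the final model’s user never sees.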

David
February 19, 2009 7:57 pm

Tallbloke, you take Obama’s 170 million for “modeling” and research to be a sign of further research? Hmm. The skeptic in me says it is funding for propaganda, as the administration is already preparing to declare CO2 to be a pollutant.

Squidly
February 19, 2009 8:01 pm

superDBA (12:25:02) :
William (10:24:18)
I wish I could name the program, but I did see the guy in charge of the AIG model in an interview on TV. If I remember correctly, he said that they delivered a “first pass” model to AIG, and were starting to work on the improved product when they were defunded. AIG then went out and sold the first pass model to the market.

I believe I saw the same program not long ago. If I remember correctly, he also stated that this (or these) models were not intended to forecast and/or predict anything, and warned that they should not be used as such.
Similar has been said about many of the GCMs as well. I believe GCMs are tools being used not for climate forecasting, but as supporting actors to perpetuate a socialistic agenda, as their outcome is pre-determined!
This being my humble opinion, as a ~28-year computer science veteran who has been creating computer models of various types and derivatives for the vast majority of my life.

David Holliday
February 19, 2009 8:02 pm

“In my experience, computers can do many things humans cannot do. As just one example, when I studied artificial intelligence theory, algorithms, and systems, it was eye-opening to discover that a properly programmed computer can do “things” that humans just cannot do.”
My original statement is correct. There is nothing a computer can do that a human can’t do. The computer can just do it faster.
Computers are machines. Programs are instructions to the machine to do things. Humans design the programs. Humans write the programs. Humans test the programs. And humans run the programs. Therefore, humans can do the same thing the programs do but just slower.
Computers aren’t creative. They have no independence of thought. They don’t think at all. They have no independence of action. They have no cognitive understanding. They simply execute the programs. One of the biggest misnomers in Computer Science is Artificial Intelligence. There is no intelligence in a computer. And we’ve never been able to put it in there.
I first studied Artificial Intelligence in the early ’80s. Neural nets, which are often purported to be advanced, self-learning computers, are fundamentally self-weighting algorithms that can vary their behaviour based on feedback mechanisms. Expert systems are simply rule-based approaches to decision systems. Humans build the neural nets and humans write the rules. There is nothing about how these programs work that we don’t understand. The HAL 9000 of 2001: A Space Odyssey doesn’t exist today, and maybe never will.
As someone who has worked in computers for over 26 years from programmer to Chief Technology Officer, I can tell you with a high-degree of confidence I understand how computers work and what they can do. They don’t do anything we don’t tell them to do. And since everything they do is something we tell them to do we can do it.
Don’t confuse that computers can do things much faster than humans with what humans can do. The point is they are just doing what we program them to do. Of course they do it orders and orders of magnitude faster than we can. Hence, the old joke, “To err is human, but to really f#*k up takes a computer.”
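For what it’s worth, the “self-weighting algorithm” description above can be shown in a dozen lines. Here is a minimal perceptron sketched from scratch in Python (the classic single threshold unit, not any production system): weights nudged by an error signal, nothing more.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for a single threshold unit from labeled examples."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - out          # the "feedback mechanism"
            w1 += lr * err * x1         # the "self-weighting"
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# Teach it logical AND from four examples; no understanding required.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in data])  # -> [0, 0, 0, 1]
```

Every step is an explicit, human-written rule, which is exactly Holliday’s point: the “learning” is bookkeeping, executed very fast.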

Mark N
February 19, 2009 8:10 pm

And yet the industry ploughs on with the forthcoming movie “The Age of Stupid”
http://www.ageofstupid.net/

Squidly
February 19, 2009 8:19 pm

Dave Wendt (12:35:10) :
I read the other day that IBM has contracted to build a new supercomputer for the government that when completed will supposedly have more computational power than the top 500 supercomputers now operating combined.

Dave, the problem is not the horsepower, the problem is the lack of a fundamental understanding of the systems involved. More horsepower will only produce the same answers more quickly! If one knew exactly how the systems really worked, then I would be willing to bet that my laptop would be sufficient horsepower. I am a little tired of the notion that more power makes a smarter computer. It all comes back to GIGO. Faster GIGO is just faster GIGO.

Jim G
February 19, 2009 8:27 pm

According to Moody’s President Ray McDaniels:
The models were based on the assumption that:
Homeowners would default on unsecured debt such as credit cards or auto loans before defaulting on their homes. This time, they protected the plastic and the cars and let the homes go.
In my paraphrase: “The models were based on the notion that people would screw the other guy before they screwed their mortgage companies.”
Now if that’s your business model, I would think that you are living on borrowed time….
Link to Moody’s quote:
http://globaleconomicanalysis.blogspot.com/2008/02/us-homeowners-confound-predictions.html

Squidly
February 19, 2009 8:34 pm

John Galt (12:37:19) :
I frequently wonder how anybody can be using Fortran for anything in this day and age…

I somewhat agree with you here. I am a big proponent of using the right tool for the job, and I am not necessarily discounting Fortran, but I can’t help feeling that there are more advanced OO languages that may be better suited to the development of GCMs (and perhaps financial models as well). Even with the latest morphing of Fortran towards OO (loosely), OO is still not its strength. I also believe its foundational construct and architecture is a bit dated. I presently develop application framework models, and I know what I am doing now would be difficult, if not impossible, to accomplish within a language such as Fortran. Now, that is not to say that the languages I am primarily dealing with at present would necessarily be well suited to GCMs either, but my gut tells me that they may be better suited than Fortran. I have been evaluating ModelE for a brief time now, and I just don’t see how Fortran fits well with that model at all. ModelE appears to be very clunky (besides being poorly written) and cumbersome to me, and I believe at least a significant portion of that can be attributed to the languages in use. Keep in mind, this is not a very educated opinion at this point, merely an educated guess from a fair amount of experience.

Mr Lynn
February 19, 2009 8:41 pm

——
‘Skeptic’ is the wrong name. It should be called the REALIST Party. It’s high time that those who oppose the politicization of science in the service of collectivist politics stopped letting the Acolytes of the Goracle define them with names like ‘denier’ and ‘skeptic’ and (yes, even ‘heretic’).
We could use such a party here. Before that, maybe a well-funded organization to counter the propaganda machines of the leftwing greenies, the Sierra Club, the Wilderness Foundation, etc., which sound so beneficent as they work to destroy the infrastructure of the civilized world.
If the Alarmists can demonstrate and file lawsuits (their primary weapon), why not the Realists?
In another thread, it was suggested that we start filing lawsuits on behalf of the plant life of the world: cutting CO2 way back will impair the very life that humans and animals depend upon! The Realist Foundation should take the fight to the anti-CO2 zealots.
Ideally it should be done soon, before the EPA starts regulating CO2 as a ‘pollutant’.
/Mr Lynn (hoping the blockquote tag works)

Mr Lynn
February 19, 2009 8:43 pm

Oh rats, it didn’t work. I was quoting this:
Ron de Haan (12:38:54) :
“In Australia, the model based Global Warming Scare has resulted in the inevitable.
“A SKEPTIC POLITICAL PARTY to confront the political establishment from within.
http://heliogenic.blogspot.com/2009/02/new-political-party-in-australia.html
“With 90% of the population denying a human link to Global Warming this party has good prospects of becoming a success.
“This is a great idea for the USA too. Anthony for President?”
No preview function here, unfortunately.
/Mr Lynn

Robert Bateman
February 19, 2009 8:43 pm

Having personally seen the Risk Management Model at work in the chip industry, and having watched in horror as the financial industry used it to stick their necks out in front of a speeding bullet train, I come to the following conclusion about it:
It is used to sweep as much risk under the rug as conscionable. If there is something that can be done to mitigate risk, it will be disregarded as costing too much money. If there is nothing that can be done about the suicidal nature of what the company wants, then they will go ahead, patting themselves on the back for having discussed it, and sweep it under the rug.
It’s just a dangerous tool to be using when lives and property are at risk.
I fear it is at work in climate modeling.
The model that preceded it was sanity and common sense.
I knew it as Heinrich’s triangle.
It says that if you are having incidents, you are at risk for loss of life and/or property far beyond mere incidents.
For a solar cycle to be this late, the flux this quiet, the solar wind this backed off and the neutron count this high, the sunspots being this weak and losing contrast, it means that preparations should be in order for something about to snap.
We cannot control the Sun, but we can make preparations for sharp & marked climate change.
Just as in Heinrich’s Triangle, the biggie can come the first time you walk under the suspended load, or the 333rd time you walk under it.
You know that the numbers will eventually find you; you just don’t know when. Ergo, we prepare ourselves.
Risk Management, on the other hand, is like playing deer in the headlights of an oncoming vehicle. No preparations are made, only marvelling at the strange incidents taking place.
Ergo the animal is wasted in its tracks.

David
February 19, 2009 8:48 pm

The comments that connect CAGW cap and trade costs and energy costs with the financial disaster are probably well founded. What would all forms of energy cost if we had instead invested in more efficient and clean nuclear and fossil fuel technology? Would there be a water problem if we had abundant energy to desalinize sea water? What is the economic benefit of all crops increasing yield 15% due to increased CO2? In the two years that gas was very expensive the US alone spent over 700 billion more than necessary for energy.
Energy is the life blood of any economy. We are cutting off our blood supply at the worst possible time.
On a side note, a politician blaming capitalism for not regulating the banks is the epitome of hypocrisy. The two worst institutions (Fannie Mae & Freddie Mac) are government-sponsored institutions directly regulated by the Senate Finance Committee. Now they are getting an additional 400 billion dollars from the government that started them.
Models are like the human capacity to reason: “You can reason yourself into or out of anything.”

Trent Brundage
February 19, 2009 8:55 pm

Fortran is a perfectly wonderful language for developing complex simulations, especially when they involve copious amounts of linear algebra routines like I would imagine atmospheric models do. I would trust the old linpack and eispack FORTRAN routines more than the C and C++ ports of them if I was developing the code. In fact, if you look into it in any detail you’ll be disappointed with the available math and statistics routines available in C, C++, C# and Java versus good FORTRAN. Personally, I think it has to do with FORTRAN’s inclusion of an intrinsic complex variable type, but I digress …
I’ll try to keep this short … I have a fair amount of experience in developing simulations and models of complex physical systems. My stomach turns when I hear these guys predict the earth’s temperature “N” years in the future. What a complete crock, and I can’t wait to see how inaccurate they were. Don’t get me wrong, I strongly support the development of atmospheric models and it is great stuff, but I think the ability to accurately model the time evolution of such a complex system is way out in the future, if ever. Gas dynamics, fluid dynamics and heat and mass transfer are incredibly complex topics by themselves, let alone folding in electromagnetic radiation and propagation models. Belief in these simulations’ predictions out multiple days, let alone multiple years, is sheer madness.
I just laugh when I hear a politician or a biologist or ecologist tell us how real man-made global warming is. Ultimately, what these guys are doing is looking at simple data correlations and their predictions are no different than predicting the future share price of a company by viewing charts of its historical price. You can fit a mathematical model to a chart’s history and it might fit for some small time steps in the future, but soon you will be SOL because an event will occur that wasn’t in your model. The global warming doom and gloom scientists like that Hansen fellow are nothing but pure snake oil salesmen.
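The chart-fitting analogy above is easy to demonstrate. A sketch with an invented series (a single slow cycle plus noise, standing in for any charted history; nothing here comes from real price or climate data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented "history": one slow cycle plus noise; x is scaled so the
# observed portion covers [0, 1) and the future covers [1, 1.33).
x = np.arange(80) / 60.0
truth = np.sin(2.0 * np.pi * x)
history = truth[:60] + rng.normal(0.0, 0.05, 60)

# A 9th-degree polynomial fits the observed history very closely...
coeffs = np.polyfit(x[:60], history, deg=9)
fit_err = float(np.abs(np.polyval(coeffs, x[:60]) - history).max())

# ...but once it steps beyond the data it was fit to, it is on its own.
extrap_err = float(np.abs(np.polyval(coeffs, x[60:]) - truth[60:]).max())
print(f"max error inside the data: {fit_err:.3f}")
print(f"max error beyond the data: {extrap_err:.3f}")
```

The fit “might hold for some small time steps,” as the comment says, and then fall apart, because a curve fitted to history encodes nothing about the process that generated it.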

Pamela Gray
February 19, 2009 8:55 pm

Anybody wanna guess where the term “horsepower” came from? There is evidence to suggest that what James Watt originally termed “horsepower” was based on what a pony could pull, times 50. Ponies were the original “little red steam engine that could” in mines. And many farms. Draft horses, the big kind, only came along after big bucks were made. Kind of like the difference between my little red putt-putt and the big-time farmer down the road with a blue tractor that has as many wheels as a semi truck, and is as big as the cab.

Squidly
February 19, 2009 9:12 pm

David Holliday (13:35:37) :
Finally, why do these “conspiracy” theories always pop up? They are so easily disproved. What financial manager in his right mind would put himself or herself at risk to the extent alleged now? They have, at least most of them, compensation that is hugely tied to the performance of their company.

I don’t believe there was any sort of “conspiracy” involved here (aside from Bernard Madoff, Robert Allen, etc., and others yet unknown). There are several psychological driving forces that prompted the actions and continued behavior of those involved. A few that come to mind might be greed, euphoria, denial. And once you pack those into a collective reasoning environment, you really begin to develop a behavior almost akin to being drunk. The answer is, they simply didn’t care; they didn’t think it would collapse; they thought they could profit virtually forever; they listened to and paid attention to those things that most strongly supported their ambition (much like AGW). This is nothing new, this is human nature. The people involved were in positions where they could steer the ship where they wanted. They felt they had all of the control and dismissed any information to the contrary as it did not fit their ambition. Again, this is nothing new; everyone has done this at one time or another. It’s just a matter of scale. When you have people profiting millions or even billions of dollars, that is a pretty powerful drug, and it has historically driven, and presently drives, people to do some pretty radical things. Explanation of this really is a “no-brainer” (after the fact, of course). I think you said it best with “… in his right mind …”

Bill Illis
February 19, 2009 9:15 pm

A while ago, I pulled apart the components of GISS’s Model-E and then extended the forecast it would have provided from 2003 (the end date of the data provided by GISS) to 2013, ten years.
The model would be off by about 0.15C in the first five years.
The more detailed version of this extension is here:
http://img175.imageshack.us/img175/2107/modeleextraev0.png
The simpler version.
http://img135.imageshack.us/img135/8594/modeleextend2013gi9.png
Another way to look at it is: they have huge GHG temperature impacts built in (no way to get to +3.0C without them), but they need to build in almost as big negative temperature impacts from other sources to keep the hindcast close to the actual temperatures we have seen so far.
One could conclude they are just plugging the big negative numbers into the hindcast after the fact to make it work.
Which is close to the point Leland Teschler was trying to make in this article.
http://img183.imageshack.us/img183/6131/modeleghgvsotherbc9.png
Without a large uptick in temperatures in the next few years, the modelers really have to go back to the drawing board (or they need to discover another “negative forcing” to keep the models on track to reality).

Squidly
February 19, 2009 9:15 pm

pyromancer76 (13:46:17) :

Wow! Very well said! I concur…

Squidly
February 19, 2009 9:18 pm

David Ermer (14:00:15) :
Link includes Gavin Schmidt comment on difference between climate and financial models FWIW

Wow, no ego there… And I couldn’t disagree with him (Gavin) more.

Richard M
February 19, 2009 9:22 pm

I agree with almost everything David Holliday (20:02:43) : said. Computers do not have any intelligence and a superfast human could do everything a computer could do. However, there are no superfast humans, so in reality computers can do many things us poor slow humans could never do or would even attempt to do.

Squidly
February 19, 2009 9:28 pm

tallbloke (14:19:29) :
I see this as an encouraging sign that Obama wants to sit on the fence until his second term. What’s the point of putting this much into investigating the phenomena if the outcome is already assumed? A lot of that money might get spent on insulation yet.

I don’t think it will go to insulating your house; I believe it will go to insulating AGW from the heretics and deniers. I agree that he is probably fencing this one for a second term, but in the meantime he needs to “keep the dream alive”, so of course he’s gotta shell out a little to keep the AGW fires fueled. Perhaps this will give us the necessary time to extinguish them? Let us hope so…

Pamela Gray
February 19, 2009 9:28 pm

Robert, you forgot to say sarc off at the end of your post. On the other hand, your post would make for a really cool movie about the Earth suddenly freezing its butt off.

Pamela Gray
February 19, 2009 9:29 pm

sarc off

February 19, 2009 9:41 pm

Predictions based on too much statistics are bad science. You may predict the climate this way, or the solar activity, etc., and you might have a good chance of being right, but it doesn’t really give any new insight. Good science is prediction based on an understanding of the physics behind what you want to predict. Statistics is a great tool for identifying the physical conditions, but it should be nothing more than that.

Dennis Sharp
February 19, 2009 9:46 pm

For Ron:
Yes, Ed Lorenz poked holes in a lot of weather modeling. Nova produced an excellent program on chaos years ago in which Ed and many other pioneers of chaos theory talked a lot about sensitivity to initial conditions and other related subjects. I wish they would do a follow-on show.
I don’t believe that most of the intellectual world is yet ready to try to understand the implications chaos theory has for their models. I rest my case with the financial models and climate change predictions. Even this blog represents a lot of discussion about climate that is based on ignorance. For instance, the causes of natural variations in climate are not well understood. Is it the sunspots, or lack thereof? Is it the increased cloud cover with increased cosmic rays? Do increased flares from the sun heat the Earth’s atmosphere? Does the sun drive the ocean oscillations? Let us not forget the Milankovitch cycles. Once these are at least partially understood, then how do they interact with each other? Are there tipping points we should be aware of on the cooling side of climate?
Climate is an elephant and people with a lot of little letters behind their name are experts in one toenail of the elephant. But they like to believe they know more. I believe they don’t study chaos theory or cybernetics because it makes them feel insecure when they realize it may be impossible to obtain enough timely data to use their statistics to make a prediction.
Think of all the historical data David Hathaway and his group have about the sun, and look how pathetic their predictions of the start of cycle 24 have been. They fail to realize that chaos means randomness bounded by attractors. I believe they don’t even know what the attractor for sunspot cycles is. They may not even know what a strange attractor is.
I bet neither Ron nor I can even get anyone else here to discuss the chaotic nature of climate.
Dennis Sharp
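
A toy sketch of the two chaos ideas in the comment above (my illustrative code, not Dennis Sharp’s): the logistic map, a standard chaotic system, shows both sensitivity to initial conditions and “randomness bounded by attractors”.

```python
# Toy illustration (mine, not Dennis Sharp's) of chaos in the logistic
# map. Two starting points differing by 1e-10 soon disagree completely
# (sensitivity to initial conditions), yet every iterate stays inside
# (0, 1): randomness bounded by an attractor.
def trajectory(x, r=3.99, steps=60):
    out = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        out.append(x)
    return out

a = trajectory(0.2)
b = trajectory(0.2 + 1e-10)

max_gap = max(abs(p - q) for p, q in zip(a, b))
print(max_gap > 0.1)                    # True: histories diverge to order one
print(all(0.0 < v < 1.0 for v in a))    # True: but remain bounded
```

This is exactly why long-range point forecasts of a chaotic system fail even when the governing equation is known perfectly.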

Roger Sowell
February 19, 2009 9:51 pm

David Holliday:
“One of the biggest misnomers in Computer Science is Artificial Intelligence. There is no intelligence in a computer. And we’ve never been able to put it in there.”
We must have taken different classes in AI, then. Mine was from UCLA where the instructor wrote the AI for NASA’s Mars rovers. AI definitely exists, and I stand completely by my earlier assertions.
But I will not get further into a Did so! Did Not! contest, as it is fruitless and a waste of Anthony’s and moderators’ time.

Squidly
February 19, 2009 9:56 pm

Roger Sowell (14:19:48) :
In my experience, computers can do many things humans cannot do. As just one example, when I studied artificial intelligence theory, algorithms, and systems, it was eye-opening to discover that a properly programmed computer can do “things” that humans just cannot do. There appears to be a limit to the amount of information a human (even great humans) can assimilate, process, and keep account of. Computers can do this far better. There are also documented examples of, for example, neural network algorithms that *learn* from mistakes, from partial successes, and deduce rules or answers that have eluded even the most experienced and smartest humans.
There are also relationship-discovery algorithms, aka data mining, that explore vast reams of data and reveal insights that humans have never before discovered.
In the field of computerized advanced process control, well, let’s just say that many of us are very glad humans are not at the controls, but instead let the computers do the work. Fly-by-wire is just one example of this, wherein advanced aircraft fly in or near the unstable regime, a regime where human responses and anticipation just cannot adequately respond.

I would agree that there are “some” things that computers can do that humans cannot. Computational speed is perhaps one, but that is just about where it ends. I have studied AI for quite some time (it was my primary collegiate focus), and I too play with neural networks from time to time just for fun. But the human brain, by contrast, can perform many things that computers presently cannot do and some things that they may never do. One very humanly simple thing that computers are extremely poor at is pattern recognition. Humans process patterns with astounding accuracy and at an astounding rate.
As a very simple example of this, I was recently sent an email from a colleague; the special thing about it was that the letters were all jumbled up. All words were written with the proper beginning and ending letters, had the proper number of characters and the correct characters as a whole, but all the inner letters were out of order. The interesting part is that you can read it almost as easily as you read anything else. As long as the words contain the correct letters, length, and beginning and ending letters, it doesn’t matter: your brain automatically compensates on the fly through pattern recognition. It’s an interesting experiment and one that you can easily try for yourself. Now, one might say “so, a computer can do that”; yes, but through iteration and rearranging, not through first-take pattern recognition, and certainly not with the efficiency of the human brain.
And as for “fly-by-wire”: your brain handles more fly-by-wire than our entire fleet of Stealth bombers combined, every moment of your life, monitoring thoughts, temperature, body functions, heartbeats, internal clock, circulation systems, neural systems, and on and on, all in real time. That’s pretty tough to beat. We may get close someday, but for now, not even close.
As a tiny example that goes to both this and the model topic: have you ever seen and heard even a short film, completely computer generated, that you could not discern from reality? And that is the simple stuff.
Unfortunately, we still have actors (politician interchangeable)…
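
The jumbled-letter email Squidly describes is easy to reproduce; a minimal sketch (my code, not the original email):

```python
import random

# Keep each word's first and last letters, shuffle the interior letters:
# the trick from the email Squidly describes. The result stays
# surprisingly readable to a human.
def scramble(text, seed=42):
    rng = random.Random(seed)   # fixed seed so the output is repeatable
    out = []
    for word in text.split():
        if len(word) > 3:
            inner = list(word[1:-1])
            rng.shuffle(inner)
            word = word[0] + "".join(inner) + word[-1]
        out.append(word)
    return " ".join(out)

print(scramble("pattern recognition is astoundingly fast in humans"))
```

Each output word keeps its first and last letter and its full set of characters, which is all the brain seems to need.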

Ross
February 19, 2009 9:58 pm

Mr Lynn (20:43:20) :
Oh rats, it didn’t work. I was quoting this:
Ron de Haan (12:38:54) :
“In Australia, the model based Global Warming Scare has resulted in the inevitable.
“A SKEPTIC POLITICAL PARTY to confront the political establishment from within.
http://heliogenic.blogspot.com/2009/02/new-political-party-in-australia.html
“With 90% of the population denying a human link to Global Warming this party has good prospects of becoming a success.
“This is a great idea for the USA too. Anthony for President?”
No preview function here, unfortunately.
/Mr Lynn

Suggestion: if you create an empty .htm file using a text editor such as Notepad or Wordpad,
then open that file using EDIT [right click],
PASTE your desired quote,
add your tags as needed, SAVE.
Then OPEN the file normally by clicking on it.
It will open in your browser and give you some idea of how your submission will look.
Not perfect, but works for me. Others may have better or more complete suggestions for a preview.
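
The manual steps above can also be scripted; a minimal sketch (hypothetical file name and sample markup, mine, not a blog feature):

```python
# A scripted version of the manual preview trick (hypothetical file name
# and sample markup): write the comment HTML to a throwaway .htm file,
# then open that file in a browser to see roughly how it will render.
comment = (
    '<blockquote>Ron de Haan wrote: "...the inevitable."</blockquote>\n'
    "<i>/Mr Lynn</i>\n"
)

with open("preview.htm", "w") as f:
    f.write(comment)

# To pop it straight into the default browser:
#   import webbrowser; webbrowser.open("preview.htm")
```

Same idea as the hand method, just one step instead of four.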

Save the Sharks
February 19, 2009 10:07 pm

Robert Bateman: always enjoy reading your posts! Most of you guys’, actually. It is so educational for a layman like me.
It begs the question: why are some of the brilliant minds posting on this blog not in charge of the direction of science on this planet?
Why is it still in the hands of the megalomaniac Hansens, Holdrens, Gores, and other morons of our truncated, politically-driven pseudo-science universe?
I understand we are in a continual “Randian” struggle and Atlas is shrugging a lot lately…but GEEZ.
Why must we CONTINUE to be held captive by bureaucrat-ideologues for centuries and centuries to come?
Galileo is no doubt turning in his grave.
It is a shame…because the controversy of it all TAKES ON A LIFE OF ITS OWN and the noise from THAT…simply drowns out the scientific method.
On a grander scale, whether we are doomed by fire or ice or something in between…could be considered immaterial. (Natural selection at work!).
The REAL tragedy is this: political ideologues who are polluting science and consigning the human race to a slow, painful, and scientifically backward… demise.
Exempli gratia: WHY should it take the deaths of hundreds of Australian artists and thinkers who were forbidden to clear fire-lines around their homes…to prove this point??
But I digress….

Squidly
February 19, 2009 10:07 pm

David Holliday (20:02:43) :
My original statement is correct. There is nothing a computer can do that a human can’t do. The computer can just do it faster.

By and large I agree with you. There seems to be a popular misconception that I think has been largely fueled by Hollywood. Computers cannot do the things you see on the big screen. Unfortunately, even my father suffers from this misconception, and he is a retired engineer from MIT! And the worst part is that he is eating up AGW like there is no tomorrow. I fight with him on the AGW subject daily.
BTW, to all, yes, AGW is most certainly a religion. I have seen this transformation in my father, and it is rather disturbing. I would never have guessed that I would be seeing this behavior from my father, but he’s clearly had too much kool-aid. I’ve always viewed him as perhaps the most rational and objective person I have known, but wow, not when it comes to AGW. It is some scary stuff!

Roger Sowell
February 19, 2009 10:16 pm

A couple of points on the origins of the financial meltdown — slightly OT.
Years ago, a bank or other lending institution held the loans it made until maturity. The bank made money by a small upfront fee, and by the interest collected over the life of the loan. More recently, banks (as others above have pointed out) would sell the loan to obtain cash in hand in order to make more loans at a faster pace than simply waiting for the monthly payments with interest to come in.
The lenders made money by charging points and other fees on each transaction. The more loans that were made in a given month, the more money the bank made from those points and fees.
Where things began to unravel was with derivatives. A mortgage loan, or more accurately a package of mortgage loans, was sliced up into tranches, each paying a different interest rate, from high to low. Investors could then buy the tranche that suited their appetite for risk and return. The low-risk tranche may have paid as little as 5 percent on a loan that was made at an 8 percent interest rate; the highest-risk tranche may have paid 12 percent. But if property values declined, the highest-risk tranche would be worthless, as it was the last in line to be paid if the properties were foreclosed. It did not require much of a decline in real estate values for the highest-risk tranches to be worth zero. And that is one of the primary causes of the present crisis.
One may have read about worthless securities, yet it is obvious that real estate prices did not decrease to zero. In many areas, real estate prices declined only 30 percent or so.
Former President Clinton was quoted recently as admitting he should have regulated the tranches and associated transactions more carefully during his 8 year presidency.
IMHO, until this problem is addressed, the financial markets will not regain stability or investor confidence. Having a significant portion of the assets of a bank (or of other institutions that purchase mortgage-backed securities) carry the potential to become worthless after a small decrease in real estate prices is a relatively new phenomenon and must be addressed. I try to follow this in my spare time, but have not read anything on changes to this aspect of finance.
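
The waterfall Roger Sowell describes can be sketched in a few lines (my own toy numbers, not actual market figures): principal is repaid senior-first, so collateral losses land on the riskiest tranche first.

```python
# Toy tranche waterfall with made-up numbers. A pool of mortgages backs
# three tranches repaid senior-first, so losses hit the riskiest tranche
# first: a modest 15% decline in collateral value already wipes out the
# equity tranche while the senior tranche is untouched.
def tranche_payout(pool_value, tranches):
    """tranches: list of (name, principal) pairs, most senior first."""
    payouts = {}
    remaining = pool_value
    for name, principal in tranches:
        paid = min(principal, max(remaining, 0.0))
        payouts[name] = paid
        remaining -= paid
    return payouts

tranches = [("senior", 70.0), ("mezzanine", 20.0), ("equity", 10.0)]
print(tranche_payout(100.0, tranches))  # full value: everyone paid in full
print(tranche_payout(85.0, tranches))   # 15% loss: equity already worth zero
```

This is why securities could be “worthless” even though real estate itself fell only 30 percent or so: the loss is concentrated, not spread evenly.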

Squidly
February 19, 2009 10:20 pm

Mark N (20:10:48) :
And yet the industry ploughs on with the forthcoming movie “The Age of Stupid”
http://www.ageofstupid.net/

LOL, if it was so catastrophic, how did he survive?
Give me a break…

John Andrews
February 19, 2009 10:48 pm

Volcanos are to climate what government regulatory changes are to finance.

Jeff B.
February 19, 2009 11:14 pm

I see Mr. Teschler is an engineer. I notice an abundance of engineers who are AGW skeptics, as well as skeptics of the many other wobbly ideas being sold to us today by our political leaders.
If you spend all day balancing the variables of complex systems so that they actually work, you start to see some patterns emerging. Pretty soon, you can look at any system and tell at a glance if it is grounded in reality.
Here’s to the engineers.

Jeff B.
February 19, 2009 11:19 pm

A quick response to Roger Sowell’s comment above. This proves my point about engineers. Engineers sure weren’t involved in the silliness of high risk tranches. No engineer worth their degree would ever allow a system like that to be built with zero margin for error. If it won’t stand up to a worst case scenario, it won’t stand up.

Alan the Brit
February 20, 2009 1:36 am

David Holliday:-)
That is just about bang on. I am so delighted to see so many mature heads making the point, GIGO. There will always be a human being at the end of it somewhere. Assume & presume nothing, ever!
As an engineer, & I know I have said this before, computers are little more than powerful calculators & number crunchers; sure, they can churn out the numerical answers by the nanosecond where we mere fleshy lumps take minutes to do the same thing. However, the wee, wee, wee, wee, tiny flaw in the whole thing is that if you get the design philosophy wrong, no amount of number crunching will lead you to the right solution, but only to many ways in which you prove you got it wrong! This point I would like to direct to Roger Sowell, yes computers are wonderful things, but they are after all just a tool to do a job;-) I spend many hours recommending to graduate engineers they sit down with a pad & a pencil & sketch things out by hand before they ever get near a computer programme. As a 51 yo luddite I mistrust computers, & with the current political administration losing personal data left, right, & centre I feel vindicated.

Mike Bryant
February 20, 2009 1:46 am

“For your consideration, I give you the new world order… We are being taken on a trip into a new dimension, where down is up and right is wrong. Energy and healthcare and housing are now free, and the only thing we must fear is earth’s newfound propensity to spiral into chaos at the slightest misstep of man. The scientists are the new priesthood and the beggars have become kings. The truly sane must recant their sanity or be shunned. The future has been written, and exists now only in the electronic mind of a supercomputer. This highway leads to the shadowy tip of reality: we’re on a through route to the land of the different, the bizarre, the unexplainable…We may go as far as we like on this road. Its limits are only those of hard truth. Ladies and Gentlemen, you’re entering the wondrous utopia of scientocracy. Next stop….The Twilight Zone.”

Lance
February 20, 2009 2:11 am

Sorry, off topic.
But yesterday, Putin warns US of the evils of socialism,
http://ibloga.blogspot.com/2009/02/putin-warns-us-democrats-against.html
Am I the only one wondering why this is not a top mainstream media story? It doesn’t even turn up when Googling the news.
An ex-communist, the leader of Russia, warning of a socialistic agenda in the USA.
We’re living in Bizarro world………..

Lance
February 20, 2009 2:17 am

Sorry OT, I’m wrong about the date, it was over a week ago, and I just never heard of it till now.

Robert Bateman
February 20, 2009 3:56 am

Putin warning us of socialism sounds really weird, until one talks to today’s Russians… and finds out that they consider him to be a nationalist.
I’ll have to check out that link. 🙂

Robert Bateman
February 20, 2009 4:18 am

Ah, but Pamela, I didn’t say that the Earth was going to suddenly freeze its butt off, though a big ding in the Sun’s output would certainly go a long way towards that. That would make me an alarmist predicting Global Freezing.
Sarc off – I am an alarmist when I see dirt being collectively swept under the rug.
As for the climate snapping, I believe that has occurred before.
We are much better able to adapt than at any time in our history, as long as we know in advance that we should expect the climate to change. It’s the head-in-the-sand attitude of AGW that precludes preparation; instead they have doomsday measures ready to put into play to force Earth’s climate in the direction they have preconceived it must go.
Sarc on – Why should I follow suit and stick my head in a different color of sand than the AGW mania does?
I don’t want to nuke volcanoes, dump insane amounts of chemicals into contrails, or perform massive cloud-seeding programs to change the Earth’s climate; I just want to see man adapt as it unfolds. It will unfold differently depending on where one resides.

Robert Bateman
February 20, 2009 4:36 am

Here’s a great movie script, Pam: Earth suffers a massive climate blow out of the blue due to a massive Y2K bug in the AGW models. A prominent programmer tries vainly to warn the world, but in the end manages only to save his wife, kids, grandma & grandpa in a large Suburban. Our heroes brave basketball-size hail, flaming satellite debris and raging volcanic rivers as the world goes nuts.
Dr. Adam and his wife Eve begin a new civilization on a scenic mountaintop as they muse, “Why wouldn’t they listen?”.

Frederick Davies
February 20, 2009 5:02 am

That must be one of the most intelligent and insightful articles I have read in a while.

Mr Lynn
February 20, 2009 5:38 am

Mark N (20:10:48):
“And yet the industry ploughs on with the forthcoming movie ‘The Age of Stupid’
http://www.ageofstupid.net/
—thus cementing in the minds of the public the underlying “truth” of AGW.
How do you counteract this kind of high-budget propaganda? Most people don’t read Realist blogs like this. Unless they chance upon the buried-on-page 20 article about a ‘skeptic’, or are fans of conservative talk radio, they never hear that there is another side.
How about a suspense thriller, in which our hero (a Tommy Lee Jones-like character) risks life and limb to uncover the sinister decades-long machinations of an organization, suspiciously like The Club of Rome, and an obese former politician building a commercial empire on ‘carbon’ trading (not unlike Algore), all intent on stopping Western civilization in its tracks by creating a gigantic hoax called “global warming.”
In the end the Club members who pushed the American President to close down the coal industry will be marched off in shackles, to hoots and cries of “Shame! Shame!”
It can be called “The Carbon Strategy.”
Who will fund this film? I know a screenwriter (she is a whacky liberal, but will write anything for money).
/Mr Lynn

red432
February 20, 2009 5:42 am

Financial models have the problem that the market is aware of them: the predictions feed back into the system being predicted. When analysts make predictions, people invest using the predicted results and mess up the result. They buy because the prediction says the price will go up in the future, so the price goes up in the present instead; then they have to sell when the price goes down in the future, because everyone rushed in and bought on the prediction…
Climate modellers are in a bit of a fix too. If they suspect that their models are not much more than interesting computational exercises, they can’t say so out loud, because they might lose their funding (i.e., their jobs). They are not qualified to do anything else.

David Snyder
February 20, 2009 7:18 am

This got me thinking about the similarities of climate prediction with solar cycle prediction. Both have long data-set histories (the Sun’s is longer, with fewer ‘sites’). Both have long intervals: the solar cycle is pretty well defined; the “climate” interval is less well defined, but is long and permits only infrequent testing. Both have developed a physical basis only in the last 30 yrs. Dare I say that neither is very good? (But I expect a significant improvement when Cycle 25 is predicted; I don’t see the same desire to improve the climate models.)

PaulD
February 20, 2009 7:51 am

I am not aware of any field in which experts have been able to develop computer models of highly complex systems that work. So I have always been skeptical that climate modelers have succeeded where all others have failed.
Since I don’t know everything, I am curious: is anyone aware of computer models, in any field involving highly complex and chaotic systems, that make accurate long-term forecasts?

John Galt
February 20, 2009 7:51 am

John Galt – re: is Fortran still in use?
Absolutely. Operating companies have millions of lines of code written in Fortran that works, and works quite well, every day. No one in the private sector has the time or budget to rewrite perfectly good code just to bring it up to some newly-written standard. Those new standards change every few years, and rewriting would be a complete waste of effort. There may be some limited instances where this is done, but it must have a justifiable positive influence on the financial bottom line.

I work as a software engineer/consultant and I’m well aware of the problems of maintaining and updating code.
Remember the Y2K crisis? That came about because old code was never updated. Nobody knew if programs that had been in operation for decades would work, and in many cases nobody could dig through the layers of patches, band-aids, paperclips and hacks to decipher the internals of the programs, either.
I should not be surprised by the reported size of the Fortran code base, but I am. This language isn’t part of the Computer Science curriculum in any universities in this part of the USA. Is it still taught in the Engineering schools?

hunter
February 20, 2009 8:19 am

Mr Lynn,
The movie looks extraordinarily stupid and boring.
Sort of like AGW, under examination.

hmmmm
February 20, 2009 8:29 am

They could cover their butts on these models if they represented the error margins correctly. Of course then it would be revealed that they are full of crap…
What are they saying, 95% chance we’ll raise 2 deg C over 100 years or something? Where did they pull that number from after the billions of calculations and assumptions that went into the model? These people need to take some statistics courses.
The models are probably good for a lot of stuff and can help us figure some relationships out, but big-picture long-term predictions are probably the absolute worst thing to use them for.

February 20, 2009 10:07 am

Mike Bryant (01:46:59) :
“For your consideration, I give you the new world order… We are being taken on a trip into a new dimension”
Really GREAT!… all this began with the French Revolution. 🙂
Fortunately we, the savages of the third world, will save you.

Roger Sowell
February 20, 2009 10:08 am

John Galt: re Fortran still taught in engineering schools?
Yup. The University of Texas at Austin, for one. UT is a decent institute of higher education (and my undergrad alma mater). (not to be confused with University of Tennessee, another UT)
see http://www.utexas.edu
Click here for the Fortran class
Also, the other UT (Tennessee) teaches Fortran. From this site: “For example, we’ve made changes in the NE [nuclear engineering] Fundamentals course in response to alumni feedback, bringing the Fortran computer language back to the curriculum in order to prepare graduates for the field.”
The Y2K fortran bugs were not that hard to fix. Refineries, chemical plants, power plants, etc. with fortran made it through midnight 12/31/1999 into 2000 just fine.
Just another scare-mongering non-event.

Steve Keohane
February 20, 2009 10:41 am

Jeff B. (23:14:04) :
“If you spend all day balancing the variables of complex systems so that they actually work, you start to see some patterns emerging. Pretty soon, you can look at any system and tell at a glance if it is grounded in reality.”
I’d rather employ someone who could actually make things work in the real world, and rely on their advice, than someone full of theories and models that do not match what is going on in the world. Before calculators, we had slide rules. In order to use one effectively, the user had to be able to estimate what the answer should be. A computer is nothing more than a calculator that can concatenate many steps. It seems that the modelers have the answer they want in mind, and simply try to plug in the functions to get their ‘right’ answer. When the models can replicate the past centuries, only then will it be worthwhile to listen to them about the future. I’m not holding my breath…

John Galt
February 20, 2009 10:51 am

Roger Sowell (10:08:27) :
John Galt: re Fortran still taught in engineering schools?
Yup. The University of Texas at Austin, for one. UT is a decent institute of higher education (and my undergrad alma mater). (not to be confused with University of Tennessee, another UT)
see http://www.utexas.edu
Click here for fortran class
Also, the other UT (Tennessee) teaches fortran . From this site: “For example, we’ve made changes in the NE [nuclear engineering] Fundamentals course in response to alumni feedback, bringing the Fortran computer language back to the curriculum in order to prepare graduates for the field.”
The Y2K fortran bugs were not that hard to fix. Refineries, chemical plants, power plants, etc. with fortran made it through midnight 12/31/1999 into 2000 just fine.
Just another scare-mongering non-event.

Thanks for the update regarding Fortran in engineering schools.
Pre-Y2K was a great time to be in software consulting; the business never made so much money. If you had asked me about the seriousness of the threat, I would have repeated the industry line about it being the gravest danger you could imagine.

Ohioholic
February 20, 2009 12:43 pm

Read an interesting piece on how illegal money laundering has not been mentioned by the media, yet may have played a role. Can you imagine being the poor dope at AIG who has to tell the mob he lost billions of their money?

February 20, 2009 1:31 pm

During my political science studies (1970–1976, VU Amsterdam) I built, in 1975, a simulation model of the growth of the city of Amsterdam (people, houses, companies, etc.) for the period 1820–1970. To calculate my model I used the programming language DYNAMO from Forrester and Meadows, whose mathematics was also used for the original Club of Rome report “The Limits to Growth: A Global Challenge” (Dennis Meadows, 1972).
My model contained around 50 variables and 120 parameters/initial values; most of the relationships between the variables were third-order feedbacks.
After an intensive period I came to the surprising conclusion that with that many degrees of freedom you can construct any desirable time-curve for any of the relevant variables.
Since that experience I have distrusted the outcome of many complex simulation models, and I think that the IPCC models, too, can “prove” any desirable outcome.
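
The “fit anything” conclusion is easy to demonstrate (my illustration, not the original DYNAMO model): give a linear model one free parameter per data point and it will reproduce an arbitrary “historical” curve essentially exactly.

```python
import numpy as np

# Illustrative sketch of the commenter's conclusion (my code, not the
# 1975 DYNAMO model): with as many free parameters as data points, a
# model reproduces an arbitrary target curve. A perfect fit to the past
# therefore proves nothing about the model's structure.
rng = np.random.default_rng(1)
n = 60
t = np.linspace(0.0, 1.0, n)
target = rng.standard_normal(n).cumsum()        # any curve you like

# one cosine basis function per data point: n parameters for n points
A = np.column_stack([np.cos(np.pi * f * t) for f in range(n)])
params, *_ = np.linalg.lstsq(A, target, rcond=None)
residual = float(np.max(np.abs(A @ params - target)))
print(residual)   # essentially zero: the model "reproduces" whatever we fed it
```

Swap in a different random `target` and the fit is just as perfect, which is exactly the commenter’s point about degrees of freedom.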

DaveE
February 20, 2009 4:39 pm

PaulHClark (12:27:33) :
“By way of a little background – I have been predicting a massive downturn in the economy here in the UK since 2003 with the comment to my friends, “that one day a major bank, a household name major bank, somewhere in the world will declare it has run out of capital.”
Sorry, but you were pre-empted by John E. Brignell at Numberwatch, in “Number of the Month”. I believe it was August 2001.
DaveE.

Roger Sowell
February 20, 2009 4:46 pm

Alan the Brit,
“This point I would like to direct to Roger Sowell, yes computers are wonderful things, but they are after all just a tool to do a job;-) I spend many hours recommending to graduate engineers they sit down with a pad & a pencil & sketch things out by hand before they ever get near a computer programme. As a 51 yo luddite I mistrust computers, & with the current political administration losing personal data left, right, & centre I feel vindicated.”
I also am/was an engineer, dating from the slide rule days. I completely agree that it is usually best to think it through first with a pad and pencil, perhaps even research a bit to see what others have published. There are, no doubt, many thousands of good software routines in regular use that are just doing what humans can do, only faster and error-free. I have written and implemented my share of those.
I think this all comes down to semantics: just what is artificial intelligence? To me, if a human cannot do it (whatever “it” is), but the computer can, that is a form of AI. The examples I gave earlier are on point.
We as humans give a label to people with great memories, or abilities to solve problems that no one else can. That label is usually “intelligent.” There are even standardized tests (albeit controversial) that purport to give a score that measures IQ. As an attorney, I had to take quite a few rather difficult tests to prove a certain level of ability before I was awarded my license to practice law. Other professions do too, and I have no intention to place attorneys in a spotlight. Professional engineers, PhDs, MDs, CPAs, CFAs, the list is long. I have a lot of respect for others without fancy degrees, too, especially my auto mechanic. Even he uses a computerized diagnostic tester from time to time; I think it has a rules-based expert system in it.
Hence, when a computer can solve a problem no human could or ever will, is it also “intelligent?”
Anthony, if this is too far off-topic, I can take this over to my energyguy’s musings blog so as not to waste your time. — Roger

Roger Knights
February 20, 2009 5:10 pm

“They have, at least most of them, compensation that is hugely tied to the performance of their company …”
In the short run.

Roger Knights
February 20, 2009 5:46 pm

In the March issue of “Wired” there’s a good cover-story article by Felix Salmon titled, “Formula for Disaster,” about David Li’s formula upon which the financial house of cards was built.

AnonyMoose
February 20, 2009 7:24 pm

Did the models predict Congress changing the rules to require risky mortgage loans, which Fannie Mae and Freddie Mac were given quotas to buy? How could the models have predicted the fraud in those agencies that let the executives get bonuses? Why are the models being blamed for the mess?

Robert Bateman
February 20, 2009 8:54 pm

But therein lies the problem with statistical models. An event may only ever happen once; but that doesn’t mean it will not happen NOW.
And statistics doesn’t tell you when it will happen again, but you know darn well it can happen. Risk Management is a convenient sedative to lull the daredevil into their impending demise. It uses the bell curve to deadly effect, the user unaware of the poison dulling their senses.
I read the Black Swan.
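
A quick illustration of how thin the bell curve’s tails are (my numbers, for a standard normal only, not any particular risk model):

```python
import math

# Illustrative numbers only: under a Gaussian assumption, extreme events
# essentially never happen on paper. Fat-tailed reality disagrees, which
# is the Black Swan point about bell-curve risk management.
def normal_tail(z):
    """P(X > z) for a standard normal variable (complementary CDF)."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

print(normal_tail(3))    # about 1.3e-3: rare, but routinely observed
print(normal_tail(10))   # about 7.6e-24: "impossible" on paper
```

A model that assigns 10⁻²⁴ probability to a move that markets actually produce is the sedative described above.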

Robert Bateman
February 20, 2009 9:04 pm

Most of us never saw the sinister effects of Y2K because an army of programmers was called up to fix the bugs.
On Jan 1, 2000, as I was filling up at the gas station, the numerical display on the pump was showing the underlying code.
I told the station attendant, who seemed to not care. Cheapest gas I have ever had the pleasure to pump.
They only fixed the code for the expected years of usage remaining.
It’s still there, waiting for the next round.
The people who knew the bugs were there sounded the alarm, then got called up to implement the fix. They won’t be around for the next time, and the bugs are there waiting to strike.
Who wants to bet all the outsourced programmers will come to the rescue next time?
Black Swan.
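
Many real Y2K repairs were exactly the kind of limited fix described above: “windowing”, where a two-digit year is interpreted against a pivot. A minimal sketch (the pivot value here is hypothetical):

```python
# Hypothetical sketch of the "windowing" patch used in many real Y2K
# fixes: a two-digit year is read against a pivot, e.g. 00-29 as the
# 2000s and 30-99 as the 1900s. Cheap to apply, but it only fixes the
# code "for the expected years of usage remaining": once real dates
# cross the pivot, the same bug returns.
def window_year(two_digit, pivot=30):
    return 2000 + two_digit if two_digit < pivot else 1900 + two_digit

print(window_year(5))    # 2005, as intended
print(window_year(99))   # 1999, as intended
print(window_year(31))   # 1931: wrong once the calendar reaches 2031
```

The window buys a few decades, which is exactly why the bugs are “still there, waiting for the next round.”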

voodoo
February 20, 2009 10:10 pm
Pink Pig
February 20, 2009 11:29 pm

There are a lot of good comments above, which regrettably I haven’t read, as I am impatient.
I can sensibly claim to be an expert on Y2K. I have been a computer programmer for nearly 50 years now, and when I first heard of the “Y2K bug”, I thought it was a put-on. Unfortunately, it appears that you can clean up by promoting the most utter nonsense in the world, because the vast majority of people (these days) don’t really understand computers, and are fully prepared to believe that someone or something is out to get them.

I got a call from a friend in 1997 saying that he had a very rich friend who was being cajoled into funding a Y2K company, and he asked me to advise him. I told him that there was no Y2K problem, because everybody who might care about Y2K had already fixed their software decades ago.

For example (and I gave several well-known examples), the Social Security System had no Y2K problem, because at the time it started, most retirees had been born in the 19th century, and over the next 30 years, as people born in the 20th century became dependent on SS, the software was corrected to handle it. I’m pretty sure that in the 90s, most of the retirees or disabled on their books had been born in the 20th century, but a small number had been born in the 19th century.

For another example (perhaps more to the point), it was frequently claimed that the banking system would collapse in the year 2000, but in reality almost all banks had to deal with maturities in the 21st century starting around 1970. Many mortgages at the time (and even now) had a 30-year term, so a mortgage obtained in 1970 would have matured in the year 2000. By 1997, even the most routine money-market instrument, e.g. a 3-year certificate of deposit, matured in the next century. Perversely, the one thing you could be completely sure of is that all century-related issues would disappear as soon as we reached the year 2000.

The notion that airplanes would fall out of the sky was even more ludicrous, because the airlines don’t keep track of the current year. (Try making a reservation more than a year in advance sometime.) The silliest one was that elevators would plunge to the ground. Can anyone imagine a reason why an elevator would care what year it is? Like, “Ohmigod, it’s the year 2000, we’d better fall to the ground.” Anyway, all I got for my trouble was a threat from one of the lawyers trying to separate this guy from his money.
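[For readers who never saw the actual mechanics: the “Y2K bug” and its standard fix can be sketched in a few lines. This is a purely illustrative toy, not code from any real bank or agency; the function names and the pivot year are hypothetical. A program that stores only the last two digits of the year computes nonsense once dates straddle 2000, and the common repair (“windowing”) picks a pivot to decide which century a two-digit year belongs to.]

```python
# Illustrative sketch of the classic two-digit-year bug (hypothetical code,
# not from any real system): years stored as two digits wrap at the century.

def loan_term_2digit(start_yy, end_yy):
    """Naive term calculation using two-digit years."""
    return end_yy - start_yy

# A 30-year mortgage written in 1970 ("70") maturing in 2000 ("00"):
broken = loan_term_2digit(70, 0)    # 0 - 70 = -70, nonsense

# The common fix ("windowing"): a pivot decides which century a
# two-digit year falls in.  Pivot of 50 is an arbitrary illustration.
def to_four_digit(yy, pivot=50):
    return 1900 + yy if yy >= pivot else 2000 + yy

fixed = to_four_digit(0) - to_four_digit(70)    # 2000 - 1970 = 30
```

As the commenter notes, any institution writing 30-year paper had to make this kind of correction around 1970, three decades before the millennium arrived.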
Oh well, enough nattering. The same friend who asked me to provide advice about Y2K has been beating on me for years to get me interested in modeling. He’s a very smart guy, and he knows perfectly well that none of the existing models is worth the paper it’s written on. For my own part, I won’t do that, because there’s no way I could imagine every relevant detail. He was very hot on a spot Forex model that some guy named Mario from Cuba devised. I tried working with Mario, but I ended up getting blamed for everything that didn’t work out the way he wanted. Mario never told us what his great idea was.
Here’s a small detail that I don’t understand very well. In around 1963, I applied for an advertised programming job at the Harvard Meteorology Department. They turned me down, on the basis that they were looking for a Meteorology student. As it happens, they then went on to produce some of the more outrageous models that we have ever seen. I’m guessing that I wouldn’t have fit in.

W. James
February 21, 2009 6:47 am

One last point on financial vs climate models.
The economy is 90+% controlled by man (regulation & implementation), while the climate is 99+% controlled by God (the laws of physics, thermodynamics, etc.).
To model the economy is to model man. To model the climate is to model God. Neither one is easy, but at least God is rational and repeatable.
They say that capitalism only works with perfect knowledge and rational actors. Personally I believe it is socialism that has these requirements. Capitalism works because we as humans understand our limitations & irrationality, and if unfettered are able to quickly react to the ever-changing conditions.
Finally, the current mortgage-based banking crisis is a valuation problem. Mark-to-market accounting regulations put in place in the wake of Enron are responsible for making the banks insolvent (on paper). Enron, which “bubbled up” during the California energy crisis of 2000, benefited from: Greenies who did not want power plants in California, a hot summer, a drought, and an ill-conceived deregulation scheme. Just another one of those perfect storms we seem to be getting more of lately.
Climate model says CO2 will kill planet
Governments start banning CO2
Business (capital) investment plummets
Bubbles grow and burst as dollars flow into commodities instead of capital.
Governments react by further redirecting dollars into political programs
Last man standing please turn off the light (or blow out the candle..)

David Holliday
February 21, 2009 6:43 pm

“Hence, when a computer can solve a problem no human could or ever will, is it also “intelligent?””
No.

David Holliday
February 21, 2009 8:35 pm

“To me, if a human cannot do it (whatever “it” is), but the computer can, that is a form of AI.”
IBM’s Deep Blue supercomputer can compete with world-class chess players. Does that make it “intelligent”?
In some respects you can say it looks intelligent. But fundamentally, what Deep Blue can do is evaluate more moves further ahead in a shorter period of time than a human. Its program was intelligently written to use the brute-force compute power of the supercomputer to give it an advantage over a human. That’s not intelligence.
In order to program something, a programmer has to think through every step of the process and write instructions to achieve the desired results. In essence, computers are “repeaters” of the instruction sequences humans write. Intelligence is not contained in a preprogrammed sequence of binary operations, no matter how “intelligent” the results appear.
Go back to that professor and ask him how that program worked. I’m certain that he’ll explain that the program was intelligently written to give whatever it controlled the ability to be “smart” with respect to how it handled various situations. That doesn’t make the program smart. It means the programmers were.
“But I will not get further into a Did so! Did Not! contest, as it is fruitless and a waste of Anthony’s and moderators’ time.”
I wasn’t going to go here but I felt obliged to respond after your latest comments.
There is no “Did so! Did not!” component to my earlier responses. I wasn’t voicing an opinion. I was explaining how computers work.
This leads me to my original point. Computer models are simply reflections of the knowledge and understanding of the people who program them.

Ron de Haan
February 21, 2009 9:00 pm

Mr Lynn (20:41:34) :
——
‘Skeptic’ is the wrong name. It should be called the REALIST Party. It’s high time that those who oppose the politicization of science in the service of collectivist politics stopped letting the Acolytes of the Goracle define them with names like ‘denier’ and ’skeptic’ and (yes, even ‘heretic’).
Mr Lynn,
I agree with your qualification.
“Realist Party” would have been a better name.
I reject any form of stigma but let’s leave this discussion for another time because it’s OT.

Ron de Haan
February 21, 2009 9:23 pm

DaveE (16:39:31) :
“PaulHClark (12:27:33) :
“By way of a little background – I have been predicting a massive downturn in the economy here in the UK since 2003 with the comment to my friends, “that one day a major bank, a household name major bank, somewhere in the world will declare it has run out of capital.”
Sorry but you were pre-empted by John E. Brignall at numberwatch in Number of the month. I believe it was August 2001.
DaveE”.
Gentlemen,
As we know (everybody knows), our economy is a cyclic system, and therefore it is not difficult to make “any” prediction. There will always be a moment when a prediction comes true. It’s only a matter of time.
The same goes for our climate.
Only the IPCC does not know that.