First, a bit of a primer. Wikipedia describes a random walk as a mathematical formalisation of a trajectory that consists of taking successive random steps. For example, the path traced by a molecule as it travels in a liquid or a gas, the search path of a foraging animal, the price of a fluctuating stock and the financial status of a gambler can all be modeled as random walks. The term random walk was first introduced by Karl Pearson in 1905.
![Random walk example (from Wikipedia)](http://wattsupwiththat.files.wordpress.com/2012/06/420px-random_walk_example-svg1.png?resize=420%2C315&quality=75)
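To make the idea concrete, here is a minimal Python sketch of a one-dimensional random walk of the kind Pearson described. It is purely illustrative and is not taken from any of the papers discussed below.

```python
import random

def random_walk(n_steps=1000, seed=42):
    """Simulate a simple one-dimensional random walk:
    each successive step is +1 or -1 with equal probability."""
    rng = random.Random(seed)
    position = 0
    path = [position]
    for _ in range(n_steps):
        position += rng.choice((-1, 1))
        path.append(position)
    return path

walk = random_walk()
print("Final position after 1000 steps:", walk[-1])
```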
From the Financial Post: A 2011 study in the Journal of Forecasting took the same data set and compared model predictions against a “random walk” alternative, consisting simply of using the last period’s value in each location as the forecast for the next period’s value in that location.
The test measures the sum of errors relative to the random walk. A perfect model gets a score of zero, meaning it made no errors. A model that does no better than a random walk gets a score of 1. A model receiving a score above 1 did worse than uninformed guesses. Simple statistical forecast models that have no climatology or physics in them typically got scores between 0.8 and 1, indicating slight improvements on the random walk, though in some cases their scores went as high as 1.8.
The climate models, by contrast, got scores ranging from 2.4 to 3.7, indicating a total failure to provide valid forecast information at the regional level, even on long time scales. The authors commented: “This implies that the current [climate] models are ill-suited to localized decadal predictions, even though they are used as inputs for policymaking.” …
More here: http://opinion.financialpost.com/2012/06/13/junk-science-week-climate-models-fail-reality-test/
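As a rough illustration of the scoring described in the excerpt above (my own sketch, not the authors' code), the score can be read as the ratio of a model's forecast errors to the errors of the benchmark that simply carries the last observed value forward: 0 is perfect, 1 is no better than the benchmark, and anything above 1 is worse. A minimal Python example with made-up numbers:

```python
def relative_error_score(observed, model_forecast):
    """Ratio of the model's absolute errors to those of a naive
    'last value' benchmark. 0 = perfect, 1 = no better than the
    benchmark, >1 = worse than the benchmark.
    (Illustrative only; the paper's exact error measure may differ.)"""
    # Benchmark: forecast each period with the previous period's observation.
    benchmark = observed[:-1]
    actual = observed[1:]
    model = model_forecast[1:]

    model_err = sum(abs(a - m) for a, m in zip(actual, model))
    benchmark_err = sum(abs(a - b) for a, b in zip(actual, benchmark))
    return model_err / benchmark_err

# Made-up example: observed values and a hypothetical model forecast.
obs = [14.1, 14.3, 14.2, 14.6, 14.4, 14.7]
fcst = [14.0, 14.5, 14.9, 14.1, 15.0, 14.2]
print(round(relative_error_score(obs, fcst), 2))
```

On these made-up numbers the score comes out around 2.08, i.e. a forecast roughly twice as bad as simply repeating the last observation.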
h/t to WUWT reader Crispin in Waterloo
Previously, WUWT covered this issue of random walks here:
Is Global Temperature a Random Walk?
UPDATE: The paper (thanks to reader MT) is Fildes, R. and N. Kourentzes, 2011: Validation and forecasting accuracy in models of climate change. International Journal of Forecasting, doi:10.1016/j.ijforecast.2011.03.008, and is available as a PDF here.
Why are you linking to “comments” at another site? Your video does not look like a simulation, but a real-world test. Let me know which passenger airliner was cleared safe to fly based only on computer simulations.
Reblogged this on evilincandescentbulb and commented:
What we are witnessing is a classic example of Marshall McLuhan’s “the Medium is the Message.” The Left can never take back what it has done to Mr. Smith and Mrs. Jones and all of the other victims of the Left’s liberal fascism. Mr. Smith went to Washington to fight the science authoritarians, to save lives. How many lives has the Left saved by saying NO to truth and NO, NO, NO to capitalism? How much misery, poverty and death has the Left caused by sacrificing individual liberty on the altar of the blindingly self-defeatist Climatism of Leftist ideology?
Expect Us to Take Their Little ‘Red’ Pills Forever
This is all anyone needs to know about the computer illiterates programming the climate models,
http://www.nature.com/news/2010/101013/full/467775a.html
“Researchers are spending more and more time writing computer software to model biological structures, simulate the early evolution of the Universe and analyse past climate data, among other topics. But programming experts have little faith that most scientists are up to the task. […]
…as computers and programming tools have grown more complex, scientists have hit a “steep learning curve”, says James Hack, director of the US National Center for Computational Sciences at Oak Ridge National Laboratory in Tennessee. “The level of effort and skills needed to keep up aren’t in the wheelhouse of the average scientist.”
As a general rule, researchers do not test or document their programs rigorously, and they rarely release their codes, making it almost impossible to reproduce and verify published results generated by scientific software, say computer scientists. […]
Greg Wilson, a computer scientist in Toronto, Canada, who heads Software Carpentry — an online course aimed at improving the computing skills of scientists — says that he woke up to the problem in the 1980s, when he was working at a physics supercomputing facility at the University of Edinburgh, UK. After a series of small mishaps, he realized that, without formal training in programming, it was easy for scientists trying to address some of the Universe’s biggest questions to inadvertently introduce errors into their codes, potentially “doing more harm than good”. […]
“There are terrifying statistics showing that almost all of what scientists know about coding is self-taught,” says Wilson. “They just don’t know how bad they are.”
As a result, codes may be riddled with tiny errors that do not cause the program to break down, but may drastically change the scientific results that it spits out.”
Just out of interest, because I’m a little surprised I missed it, but was that attribution of the graph to Wiki in place when I made my first comment or was it put up afterwards?
REPLY: I added it because some folks were getting confused. The link to the Wikipedia article on random walks was there originally, but I suppose some folks didn’t follow it to see that the graph also came from there. My mistake for not making it crystal clear in the first place. – Anthony
This paper [Validation and forecasting accuracy in models of climate change] does a poor job phrasing skeptic arguments,
“From a scientific perspective, disbelief in global warming is found in the work of the Heartland Institute and its publications (Singer and Idso, 2009) and supported by the arguments of a number of eminent scientists, some of whom research in the field (see Lindzen, 2009)”
It is not “disbelief” in “global warming” but skepticism in “anthropogenic global warming alarm”.
LOL … Poptech, in aircraft design, computational simulation has replaced “only” 90% of wind-tunnel testing.
It’s pilot training that is now done 100% on simulators. 🙂
Yes, it’s completely possible — even common nowadays — for a commercial pilot’s first “real” 787 flight to have paying customers aboard.
The reason makes perfect sense — simulation training is superior in detail-and-depth to flying the real thing.
Poptech,
“This is all anyone needs to know about the computer illiterates programming the climate models…” [Poptech]
That problem with scientists and programming is decades old as you probably know anyway, but it’s taken Nature that long to find out!? — assuming that’s their first piece on it (but I wouldn’t know).
I’ve seen this from both sides, as for a short while I did a stint as an analyst/programmer in industry when I was very young. (It wasn’t called IT in those days.) You’d get a good bollocking if you broke documentation, coding or annotation standards. Rightly too. And sacked quickly if you showed any hint of persistence at it. But you found out PDQ why they are an absolute necessity. Yes, I learnt that the hard(ish) way, but very quickly. “Programming is easy; anyone can do it.” Well, certainly much of it is, but that isn’t the point. The point is: can someone else pick up from where you left off (can you yourself?), i.e. without having to read through a load of spaghetti code you wouldn’t feed to a dog, trying to work out what the hell is going on and what stage things are at, or worse, going right back and starting from scratch again, which is the only sensible thing to do under those circumstances? Subsequent changes in specification and the resulting updates are inevitable, so the thing needs to be designed with those in mind. Blah blah blah… And your time is someone else’s MONEY!
Then jump back into a science environment where everyone is his own god when it comes to systems and programming. What a f..king mess! The worst kind are the prima-donna cleverdicks — they’re the real dummies, and can be very secretive too! Precious, eh? And fun to work with? But it’s impossible to convey to any of them why things need to change. You can explain it all you like but it makes no difference. What they’re doing at the moment is far more important than “mere housekeeping”. If someone leaves and someone else has to pick up the work? “It’s just programming isn’t it?” Right. No. Wrong. In fact, you couldn’t be wronger.
Get out of that environment as fast as you can before you kill someone. I did.
David L. says:
June 14, 2012 at 9:52 am
“There have been studies into the accuracy of weather predictions . . .”
Thanks, David.
I looked a bit and also found this:
http://www.climas.arizona.edu/feature-articles/january-2008
with a link to:
http://www.cawcr.gov.au/projects/verification/
. . . that even shows an upcoming meeting for April 2013.
All more than I have time for today. After several days of winds, with gusts to 40+, they slowed to about 10 mph today. I took the opportunity to clear brush under big old cottonwoods.
Until the “scientists” can input the “butterfly wing beat” and about a million other data points, no climate/weather model will work further than a week out…
and… “I don’t care how good the models are, we still book wind tunnel time.”
That is not what I asked, strawman. Nice video, though, of an actual experiment not done on a computer simulation.
This is a red herring to my actual question. So someone never actually flies a real aircraft before becoming a pilot and flying real passengers for the first time? Or are you just talking about pilots already certified to fly other aircraft? There is a big difference.
A fan of *MORE* discourse says:
June 14, 2012 at 6:40 pm
The reason makes perfect sense — simulation training is superior in detail-and-depth to flying the real thing.
Speaking as both a pilot and a sim operator — bullshit. A simulator training program is only as good as its replications of real life. The random factors involved in actual flight are what kill people who have only been trained in simulators.
Two examples:
1. Sim training for the A320 doesn’t include stalls because Airbus insists the computers will prevent the aircraft from stalling — but most accidents in Airbuses were caused by computer malfunctions which resulted in stalls.
2. Sim training for the V-22 is based on parameters which assume the tilt-props would function like rotors when the aircraft is in “helicopter mode.” Aircraft acceptance was based entirely on sim flights, and it was only after several Class A fatals that someone actually did some digging into the sim program — the program parameters did not match test flight data from the actual aircraft.
The manufacturers’ solution in both instances was *not* to correct errors in the training syllabus or the sim program; instead, the pilots were instructed to avoid getting into situations which might result in the aircraft crashing.
Poptech says:
June 14, 2012 at 11:34 pm
This is a red herring to my actual question. So someone never actually flies a real aircraft before becoming a pilot and flying real passengers for the first time? Or are you just talking about pilots already certified to fly other aircraft? There is a big difference.
There *is* a big difference.
In student pilot training, both sims and static flight training devices (FTD) are great tools to teach the fledgling proper engine starting procedures and flight control application and correlation. In rated pilot transitions, they knock a couple of hours off cockpit familiarization drills for pilots transitioning into an advanced aircraft, which allows the instructor pilot to concentrate on maneuver and emergency procedures training.
Sims have three huge advantages over an actual aircraft — they’re cheaper to operate, they eliminate the “fear factor” of dying if the trainee performs emergency procedures incorrectly or too slowly, and they’re perfect for instrument flight training.
A fan of *MORE* discourse says:
June 14, 2012 at 6:40 pm
Yes, it’s completely possible — even common nowadays — for a commercial pilot’s first “real” 787 flight to have paying customers aboard.
And that commercial pilot will have already accumulated at least 6,000 hours flying a multiengine jet aircraft — the sim is just the stepping-stone to another aircraft transition.
And the reason the B-787 was delivered years past its original rollout date is because
1. Boeing discovered airframe and structural problems during flight testing in areas that surprised the hell out of the computer design team,
2. the aircraft suffered at least two in-flight fires due to excessive heat buildup in the computer-designed wiring bundles, and
3. the computer-designed Rolls-Royce engines didn’t work as specified — none of them were capable of producing the power required and one of them exploded the first time it was run up on the aircraft.
In their paper, Fildes and Kourentzes commit the error of conflating projections with forecasts. The climate models make projections and not forecasts.
Re scientists and programming.
I should have added the following to my previous post to end on a positive note.
Years later, but still decades ago now, and when I had more say in things, I had the pleasure of working with a real IT pro far more experienced than I was. I did the core specifications in outline, but he and his team did everything else, and beautifully too. It was an iterative process, as is inevitable, but the number of iterations was minimal. We understood each other perfectly, as each knew what the other had to achieve. Everything he did was superb and testing was a delight. We got on like a house on fire and the whole thing was a great success. So, a positive ending.
Well not quite. That was the first and last time it happened like that, for me anyway. So it was something of an organisational fluke I suppose.
Bill Tuttle says:
June 15, 2012 at 2:26 am
I have sim time in B757 and B767. They are easy to ‘fly’ on instruments (with both engines turning). The emergency procedures manual for a seven five contains more information than did the entire flight manual for the high performance single engine retract I once owned. I have far more confidence in a flight simulator than I do in any climate model program. Far more.
More importantly, you have just outlined the way scientific research at major universities ought to be set up. The university or R&D consortium should have a professional IT staff on the payroll to support the research teams. The researchers might cobble together usable test code on their own to develop ideas; then, once they are satisfied there is some merit to their methodology, they should be “required”, as part of the performance objectives of the research, to sit down with an IT department software development representative to prepare software requirements documents that formalize the requirements for a professionally coded software module that does the required computations they have outlined.
Then, when the research is complete, they should be “required” to turn in the final research paper, along with the supporting computer code and the final software requirements documents that defined the functionality for their code, so it can be tested and validated by any outside researcher to demonstrate that:
a) The code does what the code requirements document says it should
b) That the code actually produces the output the research report says it does
c) That the code does not have some hidden flaw in logic or math that produces the intended output without introducing spurious errors, biases or statistical abuse of the data.
There obviously also needs to be an in-house statistics resource to help the researcher make proper use of complex statistical analysis, which should audit and “sign off” on the research to confirm that it meets good practice in statistical analysis.
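One hedged illustration of what point (b) above might look like in practice: an automated regression test that re-runs the analysis code and compares its output against the value reported in the paper. Everything below (the function, the data, the reference value) is hypothetical and only meant to show the shape of such a check.

```python
import unittest

# Hypothetical analysis routine standing in for the researcher's code.
def decadal_trend(series):
    """Least-squares trend (units per time step) over the series."""
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

class TestReportedResults(unittest.TestCase):
    def test_trend_matches_published_value(self):
        # The series and the reference value are made up for illustration;
        # in practice they would come from the archived data and the
        # result reported in the paper.
        series = [0.10, 0.12, 0.15, 0.14, 0.18, 0.21]
        published_trend = 0.0206
        self.assertAlmostEqual(decadal_trend(series), published_trend, places=3)

if __name__ == "__main__":
    unittest.main()
```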
Will that happen? Not likely, but it is a worthwhile objective to find a mechanism to recognize, as in medicine, that there are sub-specialties which need to be addressed by specialists.
Today the research task is conducted as if a general practitioner in medicine were doing brain surgery and eye, ear, nose and throat work all at the same time.
At least in the medical field it is recognized and considered professionally appropriate for doctors to hand off specialized areas of expertise to other colleagues who have the necessary skill sets to properly handle them. Let the researchers do what they do best: be idea men/women, come up with concepts and objectives to investigate a new line of research, but hand off the grunt work of best-practice computer coding to someone who does that for a living, working as a consultant to the researcher. Let the coder code and the researcher research.
To use a building trade analogy, the researcher should be like the architect who envisions a new building, but has engineers detail the actual structure and tradesmen skilled in specialties like framing, masonry, plumbing, heating and air conditioning, etc., actually do the construction of the structure. The architect gets the awards for developing the concept and shepherding it through to completion, but the actual construction steps are performed by specialists (not unpaid graduate students or interns who are in fact still learning their craft).
Larry
Babsy says:
June 15, 2012 at 8:47 am
I have sim time in B757 and B767. They are easy to ‘fly’ on instruments (with both engines turning). The emergency procedures manual for a seven five contains more information than did the entire flight manual for the high performance single engine retract I once owned. I have far more confidence in a flight simulator than I do in any climate model program. Far more.
I used to troubleshoot sim glitches in the old Singer-Link mechanical “Blue Canoe” for First US Army, and the basics for writing a good sim program all date from the days when somebody decided that seat-of-the-pants flying in the clouds was suicidal. A well-written flight sim program is a dream to work with — the one for the Mi-8MTV-1 I’m running now is *not* a well-written program.
And you’re right — even a poorly-written flight sim program will consistently give you acceptable results.
RockyRoad says:
June 14, 2012 at 2:14 pm
Phil. says:
June 14, 2012 at 12:29 pm
Jim Clarke says:
June 14, 2012 at 9:41 am
The fact that the models are so much worse than a random walk is proof that one or more of the assumptions (or equations) are incorrect!
This has not been shown, so this ‘proof’ fails!
But nowhere, Phil, have you been able to show that the [climate] models are BETTER!
That’s the crux of this whole issue.
They certainly are better than a random walk, which they haven’t been compared with. In his intro Anthony correctly describes a random walk, however the paper doesn’t compare against a random walk. In the linked commentary, as well as getting the name of the journal wrong, McKitrick describes a random walk thus: “consisting simply of using the last period’s value in each location as the forecast for the next period’s value in that location”. This is not a random walk, not even close; compare it with Anthony’s correct description.
Phil. says:
June 15, 2012 at 10:48 am
They certainly are better than a random walk, which they haven’t been compared with. In his intro Anthony correctly describes a random walk, however the paper doesn’t compare against a random walk.
The paper *does* say that plotting random numbers — “using the last period’s value in each location as the forecast for the next period’s value in that location” — resulted in a more accurate forecast than those produced by the models.
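For readers trying to follow this exchange, here is a neutral, illustrative sketch (mine, not from the paper or the commentary) of the two objects being argued about: a series generated as a random walk, and the naive forecast that simply repeats the previous period's value for whatever series it is given.

```python
import random

def simulate_random_walk(n, seed=0):
    """A series whose successive increments are independent random steps."""
    rng = random.Random(seed)
    x, series = 0.0, []
    for _ in range(n):
        x += rng.gauss(0.0, 1.0)
        series.append(x)
    return series

def naive_forecast(series):
    """Forecast each period with the previous period's observed value
    (the benchmark described in the quoted commentary)."""
    return series[:-1]

walk = simulate_random_walk(10)
print("simulated walk: ", [round(v, 2) for v in walk])
print("naive forecasts:", [round(v, 2) for v in naive_forecast(walk)])
```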
Bill Tuttle, thanks for all the information.
@Poptech — *hat tip*
“A fan of *MORE* discourse” needs to do better research. It looks like an epic failure for computer modeling (I am not surprised),
Boeing 787 wing flaw extends inside plane
Another 787 design flaw
Electrical fire forces emergency landing of 787 test plane
Dreamliner’s woes pile up
ANA 787 lands safely after landing-gear trouble
What’s Causing Huge Delays for the Boeing 787 Dreamliner?
Sounds like climate modelers.
I find the following statement by the (admittedly articulate) Jim Clarke to be the perfect culmination (to this point) of the most important exchange in this whole comment thread, and one that simply exemplifies again for me the fundamental cognitive deficiencies in those who reject the science showing the hazards of anthropogenic global warming.
Jim Clarke says:
***Are these statements [where Fildes and Kourentzes affirmed the fundamental science of CO2 as a primary driver of climate change and the dangers inherent from it] derived from the results of their research? No! They contradict the research! Authors make these statements because they like their friends, their jobs and their paychecks.***
Jim: How the *hell* do you know that?
What is *really* more likely–that Fildes and Kourentzes are psychically intimidated into denying the conclusions of their research because the “warmist” conspirators will take away their jobs and/or social standing, or that *you* just don’t understand what they are saying? Seriously, which is *really* more likely? I seriously doubt you read their whole article, Jim. (I read as much as I could grasp, which was certainly not all of it, I’ll admit.)
And your conflation of “theory” and “model” is just . . . (what is a polite word?) . . . unsound. Here’s an analogy:
I have a “theory” that the gunpowder in the shell of a firearm is a “primary driver” of the round into the target. But I doubt an adequate “model”–computer generated or otherwise–could ever perfectly *predict* exactly where the round will land. But whether it strikes a human in the ear or the temple does not make them any less dead one way or the other.
It’s the same with climate change: sure enough, it *might* be that prevailing models overestimate climate sensitivity, it ends up being not that big a deal, and increased global temperatures and their associated ills are managed fine without any heroic interventions. (Maybe an unforeseen factor such as a gust of wind will take the bullet off target and spare the target’s life.) Great. No one will be happier about it than I.
Maybe. *Or*, our climate models are wrong in the *opposite* direction–that we are taking an even *bigger* chance with our environment than we realize. Maybe–just maybe, Jim–this is why Fildes and Kourentzes affirmed the fundamental science of CO2-induced climate change *and* its associated risks–and why we should continue to develop and improve climate models.
But you seem to know better, don’t you Jim? You can read between the lines of an article you skimmed (at best) and determine where the authors are telling you what they really think (such as when they seemingly debunk climate modeling–the part you like) and when they’re toeing a “party line”. An amazing talent, that–and one that a number of self-described climate “skeptics” seem to have.
And of course the administrator of this site simply (and of course incorrectly) linked to the *Journal of Forecasting*, blithely assuming the 2011 article proved his point, but of course it was the wrong journal, showing that AW, whatever his merits as an individual, did not read the original article. He just saw, believed, and linked.
Why did I bother posting this? Not entirely sure, really. It’s just a question of time before AW censors me.