A few models wandered over the pause…

[Figure: CMIP5, 90 models, global surface temperature vs. observations through 2013]

Dana Nuccitelli has written a defence of climate models, in which he appears to claim that a few models randomly replicating the pause should be considered evidence that climate modelling is producing valid results.

According to The Guardian:

… There’s also no evidence that our expectations of future global warming are inaccurate. For example, a paper published in Nature Climate Change last week by a team from the University of New South Wales led by Matthew England showed that climate models that accurately captured the surface warming slowdown (dark red & blue in the figure below) project essentially the same amount of warming by the end of the century as those that didn’t (lighter red & blue).

There’s also been substantial climate research examining the causes behind the short-term surface warming slowdown. Essentially it boils down to a combination of natural variability storing more heat in the deep oceans, and an increase in volcanic activity combined with a decrease in solar activity. These are all temporary effects that won’t last. In fact, we may already be at the cusp of an acceleration in surface warming, with 2014 being a record-hot year and 2015 on pace to break the record yet again.

Read more: http://www.theguardian.com/environment/climate-consensus-97-per-cent/2015/may/06/pause-needed-in-global-warming-optimism-new-research-shows

The problem I’ve got with this line of reasoning can best be illustrated with an analogy.

Say your uncle came to you and said “I’ve got an infallible horse betting system. Every time I plug in last year’s racing data, it gets most of the winners right, which proves the system works.”

Would you:

  a) Bet your life savings on the next race?
  b) Wait and see whether the model produced good predictions when applied to future races?
  c) Humour the old fool and make him a nice mug of chocolate?

Anyone with an ounce of common sense would go for option b) or c). We instinctively intuit that it is much easier to fit a model to the past, than to produce genuinely skilful predictions. If your uncle was a professor of mathematics or statistics, someone with some kind of credibility in the numbers game, you might not dismiss his claim out of hand – occasionally skilled people really do find a way to beat the system. But you would surely want to see whether the model could demonstrate real predictive skill.

What if a few months later, your uncle came back to you and said:

“I know my model didn’t pick the winners of the last few months’ races. But you see, the model doesn’t actually predict exactly which horse will win each race – it produces a lot of predictions and assigns a probability to each prediction. I work out which horse to pick by kind of averaging the different predictions. The good news though is that one of the hundreds of model runs *did* predict the right horses in the last 4 races – which proves the model is fundamentally sound. According to my calculations, all the models end up predicting the same outcome – that if we stick with the programme, we will end up getting rich”.

I don’t know about you, but at this point I would definitely be tending towards option c).
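The uncle’s comeback can be put to a quick numerical test. The sketch below (plain Python; every number in it is invented purely for illustration) generates thousands of randomly guessing “model runs”, keeps only the ones that happened to call the last few races correctly, and then checks whether those “validated” runs do any better than chance on races they have never seen:

```python
import random

random.seed(0)

# Toy parameters -- all invented for illustration.
N_RUNS = 10000     # "model runs", each just guessing at random
HORSES = 4         # horses per race
PAST = 4           # races already run (the "hindcast")
FUTURE = 50        # races still to come

past_winners = [random.randrange(HORSES) for _ in range(PAST)]
future_winners = [random.randrange(HORSES) for _ in range(FUTURE)]

runs = [{"past": [random.randrange(HORSES) for _ in range(PAST)],
         "future": [random.randrange(HORSES) for _ in range(FUTURE)]}
        for _ in range(N_RUNS)]

# Keep only the runs that "got the last few races right"...
lucky = [r for r in runs if r["past"] == past_winners]

# ...and score those lucky runs on races they haven't seen.
hits = sum(p == w for r in lucky
           for p, w in zip(r["future"], future_winners))
rate = hits / (len(lucky) * FUTURE)
print(f"{len(lucky)} of {N_RUNS} runs matched all {PAST} past races")
print(f"their future hit rate: {rate:.2f} (pure chance: {1/HORSES:.2f})")
```

With these numbers a few dozen of the ten thousand random runs “validate” perfectly against the past, yet their hit rate on the unseen races sits at the chance level: matching the past by luck confers no skill on the future.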



194 Comments
jones
May 11, 2015 5:38 am

I once got a bulls-eye at 50 feet with my blunderbuss…

Mike Kocan
May 11, 2015 5:41 am

“There’s also no evidence that our expectations of future global warming are inaccurate.” Isn’t this a reversal of the burden of proof? He’s effectively expecting skeptics to have to prove the null hypothesis. He should be reminded that the burden of proof lies with those with the hypothesis. It’s up to him to prove accuracy.

Reply to  Mike Kocan
May 11, 2015 1:55 pm

They’ve always done that; the warmists’ obsession with consensus is another way they try to usurp the status quo position and shift the burden of proof to the other side.

Gary
May 11, 2015 5:41 am

The Texas Sharpshooter Fallacy strikes again. https://en.wikipedia.org/wiki/Texas_sharpshooter_fallacy

Reply to  Gary
May 11, 2015 7:59 am

Gary
Yes. I have repeatedly tried to point that out in this thread.
Richard

Walt D.
Reply to  Gary
May 11, 2015 9:55 am

The difference between the climate models and the Texas Sharpshooter is that some bullets actually hit the side of the barn!
/Sarc

Martin M
May 11, 2015 5:42 am

This is the apocalyptic hysteria of the climate alarmist.
When religious groups were predicting the end of the world, most of the intelligent world laughed in their faces. These alarmists are as bad as those religious fanatics (or perhaps worse), and all I can do is laugh in their faces.

Jim Francisco
Reply to  Martin M
May 11, 2015 7:15 am

They are much worse because they want us to pay for their madness.

Mark from the Midwest
May 11, 2015 5:50 am

Whether intentional or not, Nuccitelli’s rant has elements of the “I agree with you, but you’re wrong” con gambit. The con artist’s intent is to make you nod your head “yes,” and then they use the comeback, “but I have more information.” It’s often used in selling shares in “speculative” ventures to unsophisticated victims.

May 11, 2015 5:56 am

I have a model of climate sensitivity. Calibrated only to results known at the IPCC’s creation in 1990, it fits the last 25 years well (out of sample). It predicts the pause. So it hindcasts well, fits well out of sample, and gets the general shape right.
It predicts 0.9C of warming over the 21st century at current emissions.
On evidence currently available, I am the greatest climatologist on earth.

Bill Illis
May 11, 2015 5:58 am

Here are the two highest and two lowest models from IPCC AR4.
The average of UAH and RSS satellite temps is, right now, LOWER than even the lowest model. Hadcrut4 is creeping up toward the average (now that Hadcrut4 is also incorporating the extra adjustments added to surface records).
But it makes no sense to cherry-pick the two low-sensitivity models, which seem to have a lot of RANDOM WALK VARIABILITY, and then say the models are accurate. (Note these two models have only 2.2C per doubling built into their assumptions, the lowest that IPCC AR4 would accept. Why this would be an “assumption” is strange enough on its own)…
… Without talking about how far off the two highest-sensitivity models are right now, it is ridiculous. (Even when they had historic data up until 2004 to use in their hindcast, they were still miles off at that point.)
2 high and 2 low models versus Hadcrut4 and UAH-RSS average. All on the same baseline so they are comparable.
http://s12.postimg.org/bqjdgnffh/IPCC_2_high_low_vs_H4_UAH_RSS_Apr2015.png

Sturgis Hooper
Reply to  Bill Illis
May 11, 2015 6:31 am

Model runs assuming a more realistic CS of 1.2 degrees C per 2xCO2 would presumably be below “observations”, which means not that CS is higher but that the observations include man-made “warming” via unwarranted adjustments.

benofhouston
Reply to  Bill Illis
May 12, 2015 10:18 am

That’s the thing. The reason this graph was cut from the final report was that it showed the actual range of the prediction, which is so wide that even the scientifically illiterate can say, “isn’t that predicting almost nothing?”

knr
May 11, 2015 6:14 am

Barn-door-style guesswork can be right on occasion.
If, when asked to name the card, I answer with the name of every card, I can claim a 100% accuracy rate. I do not even need models to do it. The trouble comes when I cannot answer with the name of every card.

ferdberple
May 11, 2015 6:21 am

This scam has been used for years in the mutual funds industry. Create 20 different funds with 20 different mixes of stocks. In a year’s time some will have done well, others will have done poorly. Promote the hell out of the one or two funds that did well as proof of your superior ability to pick stock winners, and at the same time quietly drop the funds that did poorly while creating 20 new funds.
A similar strategy is used by con artists. They generate all possible future outcomes of a horse race or some other betting event, email these out randomly, and wait. Most people will receive failed predictions, but some will receive winning predictions, as “proof” that the con artist’s betting system works. All you need do is invest your life savings to get in on the system.
Unfortunately for climate modelers, we can see the other emails the con artist is sending out, so we know the system doesn’t work any better than chance. A dart board or a pair of dice could predict the future with similar accuracy.
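The betting-tips version of the scam works on pure arithmetic, which a toy sketch makes plain (the starting list of 1,024 marks and the halving scheme are my own illustration, not from the comment):

```python
# Hypothetical mailing-list scam: before each race, tell half of the
# remaining recipients one outcome and the other half the opposite.
# Afterwards, keep writing only to the half that got the "correct" call.
recipients = 1024
races = 0
while recipients > 1:
    recipients //= 2   # by construction, half always received the right call
    races += 1
    print(f"after race {races}: {recipients} recipient(s) have seen only correct calls")
```

No prediction skill is involved anywhere, yet after ten races one recipient has seen ten perfect calls in a row and is ready to pay for the “system”.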

HankHenry
May 11, 2015 6:23 am

The Australian slang for the kind of guy that plays this game at the racetrack is an “urger.”
“Urger” is defined in the Dictionary of Racing Slang by Ned Wallish as “a racecourse con man who will urge an unsuspecting punter to back a horse after telling him a most impressive story. If the horse should win, the urger is always present when the punter collects to obtain or demand a portion of the winnings.”
Also look up “tout” which is a slightly more sophisticated ploy but still involving pretending to know something that you don’t.

May 11, 2015 6:25 am

“There’s also no evidence that our expectations of future global warming are inaccurate. ”
So he’s saying there’s no evidence showing that future predictions, a.k.a. “stuff that HASN’T HAPPENED YET”, are wrong?
No much of a cheese shop then, is it?
Amazing that these people continue on, without some of their friends and associates saying “OK… Dana?… what you just said was pretty stupid. Really.”

Reply to  jimmaine
May 11, 2015 7:21 am

“…No(t) much of a cheese shop then, is it?”
LOL
Palin – “But it’s so clean!”
Cleese – “Certainly uncontaminated by cheese”
Palin – “but you haven’t asked about Limburger”
Cleese – “is it worth it”
Palin – “could be…”
Love any references to Python (Monty)

Reply to  JKrob
May 11, 2015 8:15 am

Oh, I thought you were referring to Sarah Palin.

Latitude
May 11, 2015 6:31 am

well fine…..take them at their word
Throw all the other models out…and starting right now we only go by the models that got the ‘slowdown’

JohnTyler
May 11, 2015 6:31 am

You are being far too magnanimous when you suggest that an economist, in making a range of predictions, actually finds that the subsequent observed data fit within his/her range of predictions.
Though implicit in your comment, and rightfully so, is that economic “science,” along with climate “science,” both have near-perfect records of producing predictions that are totally wrong. Coin flipping would at least have a 50% chance of producing a correct prediction.

Tom J
May 11, 2015 6:41 am

‘ … a team from the University of New South Wales … showed that climate models that accurately captured the surface warming slowdown … project essentially the same amount of warming by the end of the century as those that didn’t … .
‘There’s also been substantial climate research examining the causes behind the short-term surface warming slowdown. Essentially it boils down to a combination of natural variability storing more heat in the deep oceans, and an increase in volcanic activity combined with a decrease in solar activity.’
Ok, hold on here just a second. Let’s go to the second paragraph first. So, there’s been a substantial amount of research examining the slowdown. Was this research done before the model ensembles? If so why do a significant majority of those models not show the slowdown? Why do only the outlier models show it? Were those specific modelers privy to this substantial research, and the other modelers not privy to it? Or, was it more likely that nothing other than chance contributed to the outlier models sort of catching the slowdown? The author here is conflating two different things. And, wants to have his cake and eat it too. The two paragraphs are not aligned but they’re claimed to be. The panicked research into the slowdown followed its appearance in the records. It did not precede it. As such, any model showing the slowdown could not have benefited from that research. Try as one might, the two cannot be married. Claiming so is not science. It might be law. It might be public relations. It might be advocacy. It might be propaganda. But it is not science.

jlurtz
May 11, 2015 6:43 am

A significant statement was made: “Essentially it boils down to a combination of natural variability storing more heat in the deep oceans, and an increase in volcanic activity combined with a decrease in solar activity.” These other things are affecting the Global Temperature: NOT JUST CO2.
Solar EUV varies much more than TSI, but is in sync with TSI. Why not use it to track the “decrease in solar activity”? Simply: less Solar EUV, less Solar energy, as indicated by the 10.7cm Flux.
http://www.spaceweather.ca/solarflux/sx-6-mavg-eng.php
As per the deep oceans, let’s look at the Atlantic first. The AO has changed from the warming to the cooling phase:
http://weather.gc.ca/saisons/animation_e.html?id=month&bc=sea
is a great link to watch ocean changes [also a great animation].
Remember, unless driven by outside forces, heat usually rises; therefore the ocean surfaces should be warm. The Solar energy warms them first. Now, if the oceans show cold, that cold probably goes deep.
The Pacific is a special case due to the Trade Winds at the Equator. The Solar-warmed water moves from east to west. When it reaches the west, the currents drive the warmed water 200 – 300 meters deep. It travels back to the east. BUT the warm water was Solar heated and Solar driven by the Trade Winds. Less Solar energy gives less Pacific warmth. Note the time delay before last year’s warm water will rise in the east.
Volcanic activity??? Where?? Hidden under the Oceans???
What about the Global Sea Ice at almost 1,000,000 sq. km. above average. Doesn’t that say anything?

May 11, 2015 6:44 am

“….Computer models
Cannot possibly predict,
The physics not sorted
To allow that edict;
But that is ignored,
You could say it’s denied,
(Isn’t that the term used
If you haven’t complied?)…..”
Read more: http://wp.me/p3KQlH-CL

Gamecock
May 11, 2015 6:48 am

Nuccitelli/Guardian give us gibberish.

May 11, 2015 6:50 am

So, 2 models aren’t as wrong as the other 88 are, so that means the models are a success?!
Further, that statement should read “2 models aren’t as wrong, yet…”
The really nutty part about this is that there will be many who accept Nuccitelli’s assertion.
I predict Nuccitelli will predict that 97% of climate scientists agree with him.

May 11, 2015 6:50 am

As far as I am aware, climate models are based on the IPCC belief system that CO2 causes global warming through the magic of back-radiation. Back-radiation is another way of saying infra-red radiation from infra-red active gases such as CO2 and other “greenhouse” gases in the atmosphere. As there has always been CO2 in the atmosphere since the very first life forms evolved on Earth, there has always been “back-radiation” since then. Most importantly it is still with us today. The absorption and emission of IR does not stop or even pause. It is always happening. Thus the stable global temperature for the whole of this century in spite of a near 10% increase in CO2 concentration proves beyond any doubt whatsoever that the IPCC belief is false.
IR is electromagnetic radiation so it travels at the speed of light. That means it takes an IR photon a mere 37 microseconds to pass through the troposphere. Which means that any IR interaction with a greenhouse gas is going to be almost instantaneous. There cannot be any 15-plus-year wait before back-radiation occurs. Just a few seconds of no warming after the addition of more greenhouse gas is sufficient to prove that back-radiation does not cause global warming. However we have waited 15-plus years and still cannot agree. It would seem that the modern human race is seriously lacking in its former powers of deduction. Clearly that is another catastrophe brought on by man-made CO2, so we must all shut down our power stations and go back to living as the Neanderthals did.

JimS
May 11, 2015 7:02 am

This is nothing new. It was found that those climate models that incorporated an ENSO pattern accidentally predicted the Pause. Even a blind squirrel stumbles upon a nut once in a while:
http://arstechnica.com/science/2014/07/climate-models-that-accidentally-got-el-nino-right-also-show-warming-slowdown/

Colder
Reply to  JimS
May 11, 2015 10:51 am

So that would indicate that “all” the other research that tries to explain the “pause” is flat-out erroneous, but it is still included in the discussion for some reason.

May 11, 2015 7:05 am

I left a couple of comments at Nuccitelli’s Guardian post about the Nature piece that seems to be its central pillar. The very short summary is: it’s bunk! A no-result which is totally expected.
I’ll post my analysis and conclusions here too (and there will be some repetition, sorry)
_________________________________________________________________________________
Well, at least I got a good laugh ….
Almost all the links given were to SkS or the Guardian or some other activist blog. So I checked the Nature reference. It was a piece by M. England et al (so-called ‘climate scientists’) which discussed the pause, trying to downplay its relevance and to fend off suspicions that the models run too warm and that their simulations imply sensitivities a tad too high.
The way they go about this is quite something else …
But first one needs to know that all these climate models are nothing but code: instructions, based on the models (the assumptions of the modelers) built into them. Running them will give you what these assumptions amount to. And as we know, those models, all of them, produce squiggly lines trending upwards. There is some noise built into them too; depending on how they are ‘initialized’, these squiggles look different and occur at different times, which makes the whole thing look a little more like the temperature record. But let them run, simulating the 21st century, and each model will trend as it is essentially instructed to do, with some superimposed noise (usually interpreted as ‘natural variability’).
So here is what England et al did, in order to demonstrate that they still give:

Robust warming projections despite the recent hiatus

(Yes, that’s their title!)
They took many such simulations using different GCMs, and looked at those ‘realizations’ (model runs) that happened to display a 14-year warming slowdown between 1995 and 2015. (The ‘slowdown’ threshold was the observed temperature trend from ‘cold’ 2000 to ‘warm’ 2013.) They even (mis-)labelled this as ‘capturing a slowdown’, whereas these were merely the runs that happened to warm a bit more slowly around then.
OK, thereafter they compared where these runs ended up with where the rest of the runs (those that didn’t ‘capture’ any slowdown there) ended up.
And voilà, the simulated 2100 temperatures for the two groups came out almost on top of each other! Their claimed ‘conclusion’ is astounding:

We have shown here that there is no significant shift in projected end-of-century global warming .. This suggests that the recent surface warming slowdown is associated with variability not influencing long-term climate change .. In short, the drivers of the recent hiatus do not alter the century-scale warming

To summarize: Essentially, they have a bunch of trending straight lines, with some noise added. And notice that those lines where the noise seemed to counter the trend at around 2000, pretty much ended in the same ballpark as those where such noise occurred at other times, or wasn’t that prevalent at all!
Wow!
That’s climate science™ by climate scientists™. And unfortunately, it is not at all unrepresentative of quite a lot I’ve seen. Not only is this tosh not any (real) science at all, but what they claim to show is sheer nonsense. And it reflects very poorly on them that they put such silly stuff in writing and even sent it to Nature.
To be fair, Nature doesn’t label this science; it’s Opinion and Commentary. But as we know, the activists don’t care about such details. Dana tried to present it as a ‘paper’ in Nature, and interjected ‘accurately’ before ‘capture’. Well, it’s printed, in Nature, it’s a reference; it will be marketed and used as such, as it adds to those tens of thousands of ‘papers’ making up this ‘consensus’ which is said to prove something.
Well, this one does. But nothing flattering.

Reply to  Jonas N
May 11, 2015 7:08 am

I picked up the topic and added some more points.
________________________________________________________________________________
Ok, a quick recap first:
Further down I posted a summary of what the Nature piece (by M. England et al) mentioned above amounts to, and wondered if anybody would respond to or address the content and/or my summary and analysis (in a meaningful way).
Here I am going to expand a little on my summary. But first let me re-iterate that this short Nature piece is well written and easy to follow for almost anybody, because the ‘analysis’ they actually perform is very simple. And the ‘data’ they used are not in the least contested: they are the output of the CMIP5 GCMs, and that those churn out such projections is not contested either.
(The question whether or not such models are useful, e.g. for predicting the future, or if they overestimate the sensitivity to CO2, is contentious, however.)
But here only their output is analyzed, and that analysis is straightforward. Below, I will describe what that ‘analysis’ does, and later on I’ll explain why the authors’ conclusions don’t hold up.
Simply put, a GCM tries to model the climate system with its components, and particularly how it responds to increased ‘forcing’ as eg by increased GHG-levels. And albeit quite complex (involving many parts, their function, interactions etc within the system) their output is close to a proportional response to external forcing, with a slight time delay.
On top of that there is all else that goes on in the system, mostly weather and other internal variations (interactions among various components not considered ‘forcings’)
The result (for a steadily increasing GHG-forcing) is essentially a straight line with superimposed noise (weather and other internal phenomena). The noise is not considered to be of any predictive value, although it is sometimes said to represent (the magnitude of) possible internal variations and to calculate the probabilities of certain outcomes, eg. heatwaves etc.
It is also worth noting that the various GCM:s among them produce a range of such scenarios, and depending on how they are ‘initialized’ the internal variations might come out quite differently for any particular GCM. Therefore, these simulations (as with the CMIP-project) are often presented as ‘ensembles’ from both many models, and many realizations of each one of them. The (averaged) result is then deemed to represent some best (modelled) estimate, and the scatter around it to be representative of inter-model (and other) uncertainty including internal variation. As is implicitly done every time when the hiatus is deferred to ‘just natural variability’ btw, and also in this Nature-piece.
Anyway, for practical purposes (and steadily rising GHGs) these realizations can be seen as an upwards-trending line (representing the model-determined sensitivity) with superimposed noise (representing weather and other variability)
With me so far? OK!
What the authors have done here is merely to look at all those (differing) simulations, making up the CMIP5-ensemble, and sorting them into two piles:
1) One with those that did not display any hiatus/slowdown between 1995 and 2015 (a ‘hiatus’ being defined as 14 years somewhere within that time span with an average trend less than 0.096C/decade, a value taken from HadCRUT4 for 2000–2013, and less than half of the all-model-ensemble trend over the same time, ~0.23C/decade). This would be by far the largest pile.
2) The other pile contained the remainder: those simulations that happened to show such a 14-year slowdown somewhere between 1995 and 2015.
Note that there is no physically motivated difference between the simulations in the two piles. One pile just happened to have some particular (counter-trend) variations in a small time window, whereas the rest had such variations elsewhere or not at all.
Ergo my (previous post) summary:

Essentially, they have a bunch of trending straight lines, with some noise added. And notice that those lines where the noise seemed to counter the trend at around 2000, pretty much ended in the same ballpark as those where such noise occurred at other times, or wasn’t that prevalent at all!

They essentially noted (quite correctly, I might add) that those lines which had some particular noise feature at some time overall looked the same as the rest, which had their own noise but lacked that particular feature.
And so far, I have absolutely no quarrel with what they present. I only say that this is trivial, even bordering on completely meaningless. So no, there is no beef at all here. This comparison says absolutely nothing about either the hiatus or whether the models can make any useful predictions (natural variability or not).
Because there is no physics at all involved in this comparison, only simulations (all based on the same physical assumptions) by many models and runs, divided into those that (by pure chance) happen to fit better around now, and those that don’t. And yes, ‘pure chance’ is the main argument of those who want to maintain that what we see is still within the model prediction ranges. Their core argument!
And we can even take this ‘analysis’ a bit further, by noting that at each simulation time-step the calculated state is used to ‘initialize’ the next time-step and thus the rest of the simulation:
This means that the two piles of realizations are ‘initialized’ (around 2010, for the rest of the run to 2100) at an average temperature difference of 0.18C ((0.23 - 0.096) x 1.4 decades), which also seems to be the difference between the two thick lines after the ‘hiatus’ sorting. As I said, this is almost self-evident, trivial stuff. But apparently worthy of publication in Nature (OK, Nature Climate Change, but still).
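The sorting exercise is easy to reproduce with a toy ensemble. The sketch below is an idealized illustration, not the CMIP5 data: each “run” is just an assumed linear trend of 0.02C/yr plus memoryless white noise, and runs are sorted on whether their 2000–2013 trend fell below half the forced trend. Because the noise here has no persistence, the noise at 2100 is independent of the noise that produced the “hiatus”, so the two groups end up statistically identical; the carry-over of the lower 2010s state into the rest of the run is deliberately left out.

```python
import random

random.seed(1)

TREND = 0.02   # assumed forced warming, deg C per year (invented)
NOISE = 0.10   # assumed interannual noise, deg C (invented)
N_RUNS = 400   # hypothetical ensemble size

def simulate():
    """One 'model run': a straight trend plus memoryless noise."""
    return {y: TREND * (y - 1995) + random.gauss(0, NOISE)
            for y in range(1995, 2101)}

runs = [simulate() for _ in range(N_RUNS)]

def trend_2000_2013(run):
    """Crude trend estimate: endpoint difference over the 13-year interval."""
    return (run[2013] - run[2000]) / 13

# Sort the runs: a 'hiatus' run is one whose 2000-2013 trend came out
# below half the forced trend -- purely a matter of where the noise fell.
hiatus = [r for r in runs if trend_2000_2013(r) < TREND / 2]
rest = [r for r in runs if trend_2000_2013(r) >= TREND / 2]

mean_2100_hiatus = sum(r[2100] for r in hiatus) / len(hiatus)
mean_2100_rest = sum(r[2100] for r in rest) / len(rest)
print(f"{len(hiatus):3d} 'hiatus' runs: mean 2100 anomaly {mean_2100_hiatus:.2f} C")
print(f"{len(rest):3d} other runs:    mean 2100 anomaly {mean_2100_rest:.2f} C")
```

The two year-2100 means land on top of each other by construction: selecting runs on a transient noise feature cannot change where the built-in trend takes them.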
But the real beef I have is not this trivial content-free comparison. It is with what the authors claim to have shown. Particularly:

We have shown here that there is no significant shift in projected end-of-century global warming .. This suggests that the recent surface warming slowdown is associated with variability not influencing long-term climate change .. In short, the drivers of the recent hiatus do not alter the century-scale warming

This (bolded) is pure hogwash. It does nothing of the kind! It has nothing to do with any (real) science. It is a circular argument of the worst kind; essentially their ‘study’ tries to imply:
‘If the models were correct now, what they show in 2100 would also be correct!’
(With a necessary addendum: meaning almost anything within a 3C range.) While what they are actually saying is only: ‘If we look at all the simulations and squint, they still look the same’.
And what they say about any ‘drivers of the recent hiatus’ is nothing but an expression of blind or hopeful faith in the same unvalidated models, and in the notion that all there is to earth’s climate is accurately captured there …
It is nothing but yet another one of these wordy, wobbly, model-based persuasion attempts, this time not even knowing what … Well, at least Nature didn’t publish this as a ‘paper’

Reply to  Jonas N
May 11, 2015 7:19 am

And I need to make a correction: they don’t divide the model runs into two piles. They compared one sub-set of them with the whole bunch (including the sub-set).

HAS
Reply to  Jonas N
May 11, 2015 1:23 pm

Jonas N
When this came up on an earlier thread that discussed this paper, I made the additional point that the criterion for selecting the pattern that represented the pause was very weak, i.e. it isn’t just the trend being flattish that is causing people to take note, it is being flattish and consistently below the actuals. See my earlier comment: http://wattsupwiththat.com/2015/04/23/climate-modeler-matthew-england-still-ignoring-reality-claims-ipcc-models-will-eventually-win/#comment-1915547
Had the trend been flattish and reverting down to the actuals, or even starting higher and crossing over, there would be much less room for comment. But England et al have taken them all, and so, as you note, it isn’t surprising that they form a reasonably representative sample of the other model runs when it comes to 2100 (although I note England finds it breaks down with more stringent trends, through having only a limited number of models in his sample).
I was also churlish enough to suggest England et al would have known this (because it is obvious once you conceive of the experiment), and had deliberately concealed what would have happened with a more stringent test.
So not only is the inference in the conclusion unsupported by the study; it is almost certain that, had the test been more appropriate, the study would have failed, and the authors would have known that.
The Texas sharpshooter failed to even hit the barn wall.

Craig
May 11, 2015 7:14 am

For years, con men have sold sports picks and the like by sending out large numbers of teasers with random predictions for the first few games. Some people receive ones that have all the right picks and to the suckers, it looks like it must be an infallible system.

Gamecock
Reply to  Craig
May 11, 2015 2:35 pm

The scam was on an episode of Twilight Zone some 50 years ago.

Go Home
May 11, 2015 7:20 am

This one really hits home for me.
I did this modeling for dog racing once, many years ago. All in Excel.
For data I imported 30 days of racing, 14 races a day, 8 dogs a race, and the data for each dog’s last 6 races. The spreadsheet parsed the daily race form from text into Excel.
For variables, as I recall, there were 20+ for each dog (6 race records each): place out of the gate, place at first turn, place at finish, season and last-6-races win/place/show record, race time, moving up or down in grade, falls, post position, weight of dog, etc.
For results I used every pari-mutuel bet (2–8-ticket box quinella; win, place, show; 3–8-ticket box trifecta) and the form’s expert picks, to see where I could get the most winning results if I played that one pari-mutuel ticket for all possible races.
I then proceeded to tweak every variable to get the best results for all ticket combinations. It looked like I got to a 70% winning percentage. I thought this was the greatest achievement of a lifetime. I worked on it for months.
Until I went to the race track with the day’s winning picks in hand. And a second time, and a third time. Never won. I would have been better off just throwing an eight-sided die and going with that.
But it was a wonderful experience and I often think about it when we talk about climate modeling and hind casting. I just wished they would have learned as fast as I had.
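The Excel experience above is a textbook overfitting story, and it can be reproduced in a few lines. In the sketch below (all parameters invented; the “winners” are drawn at random, so there is genuinely nothing to learn), blindly tuning 20 weights against past races produces an impressive in-sample win rate that evaporates on new races:

```python
import random

random.seed(7)

DOGS, FEATURES = 8, 20     # 8 dogs a race, ~20 tweakable variables each

def make_races(n):
    """A fake racing form: random attributes per dog, and -- crucially --
    a winner drawn at random, so there is no real signal to find."""
    return [([[random.gauss(0, 1) for _ in range(FEATURES)]
              for _ in range(DOGS)], random.randrange(DOGS))
            for _ in range(n)]

def pick(field, weights):
    """Score each dog as a weighted sum of its attributes; pick the best."""
    scores = [sum(w * x for w, x in zip(weights, dog)) for dog in field]
    return scores.index(max(scores))

def win_rate(races, weights):
    return sum(pick(field, weights) == winner
               for field, winner in races) / len(races)

past = make_races(40)      # the imported historical results
future = make_races(40)    # the races actually bet on later

# "Tweak every variable to get the best results": blind search over
# weightings, keeping whatever scores best on the past races.
best_w, best_past = None, 0.0
for _ in range(600):
    w = [random.gauss(0, 1) for _ in range(FEATURES)]
    r = win_rate(past, w)
    if r > best_past:
        best_w, best_past = w, r

print(f"win rate on past races: {best_past:.2f}")
print(f"win rate at the track:  {win_rate(future, best_w):.2f} (chance: {1/DOGS:.3f})")
```

The tuned weights look far better than chance on the races they were tuned against, and revert to roughly 1-in-8 on the races they weren’t: the same trap as hindcast-fitting a climate model and calling it skill.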

Reply to  Go Home
May 11, 2015 7:26 am

I just wished they would have learned as fast as I had

Had it been their own money, they would have. That’s the key to the entire ‘secret’ …

Jeff Id
May 11, 2015 7:31 am

The Guardian article contains actual lies. Not mistakes or opinions, but the intentional publication of known falsehoods.
They ‘should’ feel ashamed of themselves. I guess shame doesn’t pay as well as lying – or bring you fame.

Reply to  Jeff Id
May 11, 2015 7:38 am

Jeff, could you be more specific please?
(Further down, I posted some comments on why I think the M. England et al Nature piece is essentially vacuous nonsense, and that the claims in it about the actual observed hiatus are false. But you seem to be talking about something else?)

Jim Ryan
May 11, 2015 7:33 am

We instinctively intuit that it is much easier to fit a model to the past, than to produce genuinely skilful predictions.
The time at which the model was conceived doesn’t make a difference, does it? “The model was conceived before the data came in, not afterward” is not evidence in favor of the model. I think what your “instinct” is getting at is that creating the model after the data has come in gives the modeler the chance to make the model a mere list of special cases catering to each data point, a list which has no predictive power because it reflects no general principles which can be applied to new cases. (For example, “My model is that the northernmost team wins the super bowl, except in the case of the Dolphins winning the Super Bowl in 1973 and the Dolphins winning it in 1974, and… (etc.).”) Before the data comes in, he doesn’t have such an opportunity, so it is unlikely that his model fits the data merely by being a list of special cases. It is more likely to reflect general principles which can be used to predict accurately (for example, “My model is that the team with a solid offensive coaching staff, a big, veteran defensive line, and the fastest wide receivers will win the super bowl.”)
So, it doesn’t matter when a model was hatched. You have to look under the hood to see if it merely restates the data or gives general principles which are not specific to the data. So, if your uncle says, “Say, I just now figured out a general model which predicts all the past super bowl results without merely restating those results,” and you look at the model and he’s right about this, then he really has something and off to the betting shop you would go. Were he to say, “Actually, I hatched the model long ago before any super bowls took place,” this wouldn’t make his theory any more worthy of belief. It wouldn’t make you justified in betting even more money on its future results.

Frank
Reply to  Jim Ryan
May 11, 2015 9:10 pm

“… [if] you look at the model and he’s right about this…”
Of course, there’s the rub. If you already know enough to know that he’s right, his model has pretty much accomplished nothing–you should have been betting for years. If you don’t know what the right answer is, the model might add knowledge or not. The only way to have confidence in what knowledge the model adds is to test it on the future. Especially in a context like football betting, where any easy answers have been sucked out of the “market” already by other bettors.
Climate modelers actually have it somewhat easier than football modelers. The predictions of other climate modelers should not have any effect on global temperatures (unless there is some feedback from predictions to data adjustments). The predictions of other football modelers affect the market itself.