The Button Collector, or: When does a trend predict future values?

How many buttons will he have on Friday? (Photo credit: Wikipedia)

Guest essay by Kip Hansen

INTRO: Statistical trends never determine future values in a data set. Trends do not and cannot predict future values. If these two statements make you yawn and say, "Why would anyone even have to say that? It is self-evident," then this essay is not for you; you may go do something useful for the next few minutes while others read this. If you had any other reaction, read on. For background, you might want to read this at Andrew Revkin's NY Times Dot Earth blog.

I have an acquaintance who is a fanatical button collector. He collects buttons at every chance, stores them away, thinks about them every day, reads about buttons and button collecting, spends hours every day sorting his buttons into different little boxes and bins, and worries about safeguarding his buttons. Let's call him simply The Button Collector, or BC for short.

Of course, he doesn't really collect buttons; he collects dollars, yen, lira, British pounds sterling, escudos, pesos…you get the idea. But he never puts them to any useful purpose, neither really helping himself nor helping others, so they might as well just be buttons, and so I call him The Button Collector. BC has millions and millions of buttons – plus 102. For our ease today, we'll consistently leave off the millions and millions and say he has just the 102.

On Monday night, at 6 PM, BC counts his buttons and finds he has 102 whole buttons (we will have no half buttons here please); Tuesday night, he counts again: 104 buttons; on Wednesday night, 106. With this information, we can do wonderful statistical-ish things. We can find the average number of buttons over three days (both mean and median). Precisely 104.

We can determine the statistical trend represented by this three-day data set. It is precisely +2 buttons/day. We have no doubts, no error bars, no probabilities (we have 100% certainty for each answer).
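For the curious, here is a minimal sketch of that arithmetic (plain Python, using nothing beyond the three counts):

```python
from statistics import mean, median

counts = [102, 104, 106]  # Monday, Tuesday, Wednesday

print(mean(counts))    # 104
print(median(counts))  # 104

# With evenly spaced days and perfectly linear data, the "trend" is
# just the constant day-to-day difference.
daily_changes = [b - a for a, b in zip(counts, counts[1:])]
print(daily_changes)   # [2, 2] -> +2 buttons/day
```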

How many buttons will there be Friday night, two days later? 

If you have answered with any number or a range of numbers, or even let a number pass through your mind, you are absolutely wrong.

The only correct answer is: We have no idea how many buttons he will have Friday night because we cannot see into the future.

But, you might argue, the trend is precisely, perfectly, scientifically, statistically +2 buttons/day, and two days pass; therefore there will be 110 buttons. All but the final phrase is correct; the last — "therefore there will be 110 buttons" — is wrong.

We know only the numbers of buttons counted on each of the three days – the actual measurements of the number of buttons. Our little three-point trend is just a graphic report about some measurements. We also know, importantly, the model for taking the measurements – exactly how we measured – a simple count of whole buttons, as in 1, 2, 3, etc.

We know how the data was arrived at (counted), but we don’t know the process by which buttons appear in or disappear from BC’s collection.

If we want to be able to have any reliable idea about future button counts, we must have a correct and complete model of this particular process of button collecting. It is of little real use to us to have a generalized model of button-collecting processes, because we want a specific prediction about this particular process.

Investigating, by our own observation and close interrogation of BC, we find that my eccentric acquaintance has the following apparent button collecting rules:

  • He collects only whole buttons – no fractional buttons.
  • Odd numbers seem to give him the heebie-jeebies; he adds or subtracts only even numbers of buttons, so that he always has an even number in the collection.
  • He never changes the total by more than 10 buttons per day.

These are all fictional rules for our example; of course, the actual details could have been anything. We then work these into a tentative model representing the details of this process.

So, now that we have a model of the process: how many buttons will there be when counted on Friday, two days from now?

Our new model, like the bare trend, still suggests 110, but the actual number counted on Friday turned out to be 118.

The truth being: we still didn’t know and couldn’t have known.

What we could know on Wednesday about the value on Friday:

  • We could know the maximum number of buttons – 106 plus ten twice = 126.
  • We could know the minimum – 106 minus ten twice = 86.
  • We could know all the other possible numbers (all even, all between 86 and 126 somewhere). I won't bother here, but you can see it is 106+0+0, 106+0+2, 106+0+4, etc.
  • We could know the probability of each answer, some answers being the result of more than one set of choices (such as 106+0+2 and 106+2+0); a sketch of this enumeration follows this list.
  • We could then go on to figure five-day trends, means, and medians for each of the possible answers, to a high degree of precision. (We would be hampered by the non-existence of fractional buttons and by the actual set allowing only even numbers, but the trends, means, and medians would be statistically precisely correct.)
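Here is a minimal sketch of that enumeration in Python. One loud assumption, purely for illustration, that the essay pointedly does not grant us: each permitted daily change is treated as equally likely.

```python
from collections import Counter
from fractions import Fraction

WEDNESDAY = 106
ALLOWED_CHANGES = range(-10, 11, 2)  # even changes only, never more than 10/day

# Enumerate every (Thursday change, Friday change) pair of allowed moves.
outcomes = Counter(
    WEDNESDAY + thu + fri
    for thu in ALLOWED_CHANGES
    for fri in ALLOWED_CHANGES
)

paths = sum(outcomes.values())       # 11 * 11 = 121 equally weighted paths
print(min(outcomes), max(outcomes))  # 86 126
for count in sorted(outcomes):       # 106 is likeliest (11 of 121 paths)
    print(count, Fraction(outcomes[count], paths))
```

Under that uniform assumption, 106 (no net change) comes out as the single most probable Friday count, and 110 – the trend's answer – is merely one of the twenty-one possible values.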

What we couldn’t know:

  • How many buttons there would actually be on Friday.

Why couldn't we know this? We couldn't know because our model – our button-collecting model – contains no information whatever about causes. We have modeled the changes, the effects, and some of the rules we could discover. We don't know why, and under what circumstances and motivations, the Button Collector adds or subtracts buttons. We don't really understand the process of BC's button collecting, because we have no data about the causes of the effects we can observe or the rules we can deduce.

And, because we know nothing about causes in our process, our model of the process, being magnificently incomplete, can make no useful predictions whatever from existing measurements.

If we were able to discover the causes effective in the process, and their relative strengths, relationships and conditions, we could improve our model of the process.

Back we go to The Button Collector, and under a little stronger persuasion he reveals that he has a secret formula – precise and immutable – for determining how many buttons to add or subtract, based on the numbers previously observed. Armed with this secret formula, we can now adjust our model of this button-collecting process.

Testing our new, improved, and finally adjusted model, we run it again, pretending it is Wednesday, and see if it predicts Friday’s value. BINGO! ONLY NOW does it give us an accurate prediction of 118 (the already known actual value) – a perfect prediction of a simple, basic, wholly deterministic (if tricky and secret) process by which my eccentric acquaintance adds and subtracts buttons from his collection.
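The essay never reveals BC's actual formula, so the following is a purely invented stand-in, chosen only because it happens to reproduce the observed counts (102, 104, 106, 110, 118). The rule is fictional, but the point it illustrates is real: once the complete deterministic rule is in the model, prediction becomes exact.

```python
def button_counts(days, first=102, c1=2, c2=2):
    """Hypothetical stand-in for BC's secret formula (invented for
    illustration): each day's change is the product of the previous
    two days' changes."""
    counts = [first, first + c1, first + c1 + c2]
    changes = [c1, c2]
    while len(counts) < days:
        changes.append(changes[-1] * changes[-2])
        counts.append(counts[-1] + changes[-1])
    return counts

print(button_counts(5))  # [102, 104, 106, 110, 118] -> Friday is 118
```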

What can and must we learn from this exercise?

1. No statistical trend, no matter how precisely calculated, regardless of its apparent precision or length, has any effect whatever on future values of a data set – never, never, and never. Statistical trends, like the data from which they are created, are effects. They are not causes.

2. Models, not trends, can predict, project, or inform about possible futures, to some degree of accuracy. Models must include all of the causative agents involved, each modeled correctly for its relative effect. It takes a complete, correct, and accurate model of a process to reliably predict real-world outcomes of that process. Models can and should be tested by their ability to correctly predict already-known values within a data set of the process, and then tested again against a real-world future. Models, too, are not themselves causes.

3. Future values of a thing – represented by a metric in a data set output from a model – are caused only by the underlying process being modeled. Only the actual process itself is a causative agent, and only the actual process determines future real-world results.

PS: If you think that this was a silly exercise that didn’t need to be done, you haven’t read the comments section at my essay at Dot Earth. It never hurts to take a quick pass over the basics once in a while.

# # # # #

222 Comments
richardscourtney
October 20, 2013 1:40 pm

The Pompous Git:
Thank you!
Your post at October 20, 2013 at 12:53 pm explains what I was not understanding. It says

Richard, you stated “Determining a trend and extrapolating that trend to obtain a prediction… does not create belief” and it is that portion of your statement with which I disagreed. I am perfectly happy to accept the Aristotelian account of knowledge as justified true belief. We agree that extrapolation into the future does not generate knowledge. But if that extrapolation does not generate belief, then what does it generate? You seem to be saying it generates “X” and people subsequently choose to believe, or not based on “X”. I am curious to understand what this “X” is.

Obviously, I was not clear, and it is hard to understand a reason for a disagreement when one thinks one has explained something adequately when one has not. Sorry.
I defined what I understood to be belief and said – as you have quoted – that a prediction obtained from an extrapolated trend “does not create belief”. However, as you said and I agree, some people can choose to attach belief to a prediction (obtained from any method).
I am saying the prediction does not ITSELF generate belief (any more than a chair does). But people can generate beliefs which they may attach to predictions from particular methods (similarly, a chair does not generate belief that it can support a person’s weight but a person may attach that belief to a chair they have yet to sit on).
I will illustrate that with a classic con trick.
The trickster sends four circulars to four sets, each of 1000 people. The circulars contain predictions of, e.g., stock market changes, and the four circulars provide very different predictions. Each circular says not to send money but to wait for a future invitation to join an offered investment plan, one based on the demonstrated investment scheme, which uses an undisclosed formula. One of the groups would – by chance – have made much money by investing as described in its circular. The trickster makes no further contact with the other three groups.
He divides the remaining group into four smaller groups, each of 250 people, and repeats the process of sending circulars. Again, one of these smaller groups would – by chance – have made much money by investing as described in its circular. And those 250 people are the trickster's target. Many of them now attach a belief to the claimed investment formula.
The trickster sends each of them another circular, but this one asks each of the targets to invest $1,000 in the investment scheme. If 200 of those 250 people respond, then the trickster obtains $200,000 for a scam which cost at most a few hundred dollars to conduct. (Actually, three rounds of fake predictions – not two – are usually used, because people tend to be convinced by three successive successes.)
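A quick sketch of that arithmetic, using only the numbers given above (Python):

```python
recipients = 4 * 1000                   # four circulars, 1000 recipients each
round_1_winners = recipients // 4       # 1000 saw a "correct" prediction by chance
round_2_winners = round_1_winners // 4  # 250 saw two correct predictions in a row
responders = 200                        # the response rate assumed above
take = responders * 1000                # dollars collected by the trickster
print(round_1_winners, round_2_winners, take)  # 1000 250 200000
```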
In the illustration, the apparent success of the predictive investment formula induces the ‘marks’ to believe the formula works, but no such formula exists. The formula has not induced belief in its ability – it does not exist – but the desire to believe in the formula and, hence, its rewards induces belief in its ability.
I hope that clarifies what I meant; i.e. predictions don’t create belief but people may.
The prediction from an extrapolated trend provides an indication of the future which is better than a chance guess of the future. It is that improvement on chance which is what you call “X”.
The prediction is not knowledge of what will or will not happen.
And the prediction is not a belief that something will or will not happen.
The prediction is an indication of what is more likely to happen than random chance would suggest.
In principle this improvement on chance is similar to a weather forecast. Nobody believes a weather forecast is what will happen, and everybody knows a weather forecast is not knowledge of what will happen. But if the weather forecast is for rain then people tend to carry an umbrella.
I hope I have now been more clear.
Richard

The Pompous Git
October 20, 2013 1:41 pm

I believe I have come up with an example that may make what I believe to be Kip's point more obvious.
Let us suppose that the Tasmanian Education Department have hired me to test the IQ of the children at Franklin Primary School. Let us further suppose I record the following results:
Alice has an IQ of 80
Bertrand has an IQ of 100
Clara has an IQ of 120
Some here would seem to believe that Desmond's IQ is more likely to be 140 than it is 100. I can assure you, having been schooled by that statistician to the stars, William/Matt/Briggs [delete whichever is inapplicable], that this is not the case. Fortune-tellers/necromancers/tree-huggers/numerologists/astrologers [delete whichever is inapplicable] will naturally disagree with me on this matter 😉
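For the record, here is the naive extrapolation being lampooned, as a sketch in plain Python (the slope shortcut works only because the three points are perfectly linear, which is exactly what makes the "prediction" so seductive and so empty):

```python
# Fit the perfectly linear points (1, 80), (2, 100), (3, 120),
# taking positions from the alphabetical order of first names,
# then extend the line to position 4 (Desmond).
xs, ys = [1, 2, 3], [80, 100, 120]
slope = (ys[-1] - ys[0]) / (xs[-1] - xs[0])  # 20 "IQ points per child"
intercept = ys[0] - slope * xs[0]            # 60
print(slope * 4 + intercept)                 # 140.0 -- the trend's answer
```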

richardscourtney
October 20, 2013 2:10 pm

The Pompous Git:
Sorry, but this time it is me that is bemused. I cannot understand why (in your post at October 20, 2013 at 1:41 pm) anybody would think "Desmond's IQ is more likely to be 140 than it is 100".
Richard

The Pompous Git
October 20, 2013 2:29 pm

Ted Carmichael said October 18, 2013 at 3:51 am

The Theory of Gravity is the classic example of this. The empirical results are so thoroughly robust and understood that we even call it a Law. But we don’t know what “causes” gravity … we can model it very, very well, but we don’t know the cause. (Yes, there are a few hypotheses of late; but these are as yet uncertain.)

There is no “The Theory of Gravity”; there are four theories of gravity with which I am familiar:
Aristotle: “all bodies move towards their natural place. For some objects, Aristotle claimed the natural place to be the center of the Earth, wherefore they fall towards it. For other objects, the natural place is the heavenly spheres, wherefore gases, steam for example, moving away from the center of the Earth and towards Heaven and to the Moon.”
Einstein: “General relativity, or the general theory of relativity, is the geometric theory of gravitation published by Albert Einstein in 1916.”
Quantum theories: string theory and loop quantum gravity.
You will notice that Newton does not appear on that list. Newton wrote: “Hypotheses non fingo”. (I do not make up hypotheses.)
Newton’s Law is a statement that objects are attracted to each other in proportion to their mass. It appears to have been in operation for more than the last 10 billion years, thus predating Newton by a considerable margin. Theories of gravity are an attempt to explain the why of gravitation.

The Pompous Git
October 20, 2013 2:35 pm

richardscourtney said October 20, 2013 at 2:10 pm

Sorry, but this time it is me that is bemused. I cannot understand why (in your post at October 20, 2013 at 1:41 pm) anybody would think "Desmond's IQ is more likely to be 140 than it is 100".
Richard

Me too, but clearly many do. I chose the example because I think the error that Kip is attempting to explicate is made more explicit. I could be wrong and stand to be corrected if that is the case.

The Pompous Git
October 20, 2013 3:16 pm

Another example that may make my (Kip's?) point clearer. The Git and Mrs Git are members of an art-buying group. We have a 1/25 share, and each shareholder pays $1,000 per year over 10 years into the kitty. From that kitty, artworks are purchased, and at the end of the 10 years the artworks will be distributed among the shareholders by auction. Artworks not desired by shareholders will be auctioned publicly and the money obtained distributed equally among the shareholders. Not that it's relevant here, but we purchase only artwork by living Tasmanian artists.
So, we will purchase $250,000 of artworks over a decade. Let’s assume we purchased three artworks in our first year for $10,000, $7,500 and $5,000 respectively and in that order. Extrapolation would indicate that our next purchase would be $2,500. But what we spend is dependent on aesthetics, how much remains in the kitty, whether an artist will hold a work until we accumulate sufficient funds to make the purchase, the availability of a suitable artwork to purchase and so on.
Again, there is no suitable numerical model to base an extrapolation on. Yet extrapolation is not only possible; such extrapolations do occur in the real world, based on the real data, as in the case that Briggs brought to our attention. As Briggs says (and he has said this before): "the data is the data". The interpretation we put on the data is not data; it's a product of our imagination.
Hopefully that helps make the point clearer.

Bart
October 20, 2013 3:39 pm

Kip Hansen says:
October 19, 2013 at 7:51 am

Before using the apparent trend to make a prediction, one must know at least something about the system—the physical process itself—what we are talking about here!

I think that is more or less what I stated here, and to which rgbatduke agreed:

He would be on much more solid ground if he said that climate variables, particularly mean surface temperature anomaly, do not behave like such a sequence and, as a result, are not predictable to the desired level of accuracy using such a model. To the degree that your model fails to capture the dynamics of the actual process, statistics derived based upon that model are dubious, to say the least.

Has our shared opponent been so defeated, debunked, and demoralized that we must find trivial issues to divide ourselves, because we are now locked in an aggressive, argumentative mode which must find some avenue for expression?

Ted Carmichael
October 20, 2013 9:54 pm

Pompous Git: (Nice handle) You said, “Newton’s Law is a statement that objects are attracted to each other in proportion to their mass. It appears to have been in operation for more than the last 10 billion years, thus predating Newton by a considerable margin. Theories of gravity are an attempt to explain the why of gravitation.”
The Theory of Gravity (or the Law of Gravity, if you prefer) is wholly an empirical model. What I meant by that is that it does not express a mechanism for gravity, only a (very well refined) description, based on copious amounts of data (measurements). Newton said it was not a hypothesis because he was convinced by the overwhelming consistency of observations. It is simple and elegant – one cause that explains a wide variety of observations. But the “cause” – that objects are attracted to each other in proportion to their mass – does not have a known underlying mechanism. In other words, we don’t know why objects are attracted to each other in proportion to their mass. We only know that it happens because we see it happen.
Thank you for providing two examples of a spurious trend. (The artwork example and the IQ example.) I don’t think they prove your (or Kip’s) point however. In the IQ example we have additional information … we know that IQ is not likely to be related to the first letter in a student’s name. We also know that IQ forms a bell curve, and so is not likely to continue in an upward trajectory. In other words, we have two additional models of student IQ scores that we would bring to bear on that particular data set, and would thus probably ignore a linear trend as being spurious.
For the artwork example, we also have a lot of additional information that would discount a linear trend. With Kip’s original BC example, we similarly have additional information, but nothing that would dissuade us from extrapolating a (weak) linear trend. And so a prediction of 110 buttons on the fifth day is – very weakly – the most likely outcome … it is more likely than any other number, based on the information we have. (You will recall that Kip’s point was that any prediction for day five is equally likely to occur. I disagree with the “equally” part.)
It is useful, I think, to extend these examples, so that instead of just three data points there are, say, 100, or 5,000, or a million. Then it becomes obvious that a linear trend – one that perfectly matches every single data point – is a reasonable approximation for data point 101, 5001, or 1,000,001. This extension helps to show why the numbers, all by themselves, contain a non-zero amount of knowledge of whatever system produces them. Cheers.

The Pompous Git
October 21, 2013 12:09 am

Ted Carmichael
First, you are correct that my examples were flawed; perhaps I should wait until I have finished a litre of coffee before posting 😉 Nevertheless, the numbers without the additional information convey no information whatsoever. The coin toss example above should be sufficient to convey that.
Apropos Law versus Theory, my view is what philosophers call The Received View; i.e. it is what is taught at the academy. You might find the following of interest:

When Does a Theory Become a Law?
This is something that comes up quite frequently in discussions between scientists and the general public. How much proof does it take for a theory to graduate to being a law?
Law
Because the words theory and law have such different meanings in the language of science, it is often a difficult question to answer, so instead, I’ll start by giving you a few similar questions to answer.
How perfectly do you have to build a house so that it will become a single brick?
How well do you have to write to change an entire dictionary into a single word?
What would you have to do to change an entire symphony into a single note?
If you are thinking that those questions don’t make much sense, then you are feeling very much like a scientist who has been asked “How much proof does it take for a theory to graduate to being a law?”

http://thehappyscientist.com/study-unit/when-does-theory-become-law

ATheoK
October 21, 2013 10:11 am

“fhhaynie says: October 19, 2013 at 12:50 pm
For those who do not trust the use of statistics in data analysis, I suggest you read http://books.google.com/books?id=8C7pXhnqje4C&printsec=frontcover&source=gbs_ViewAPI#v=onepage&q&f=false. This is based on a paper I presented at an international corrosion symposium way back in 1971. It was published as a chapter in a handbook and has been revised and published twice since then. I think it is still valid. If you don't understand how to use statistical techniques, you should talk to a statistician and at least learn from them. The probability that you will make mistakes in your research will be greater when you don't use statistical techniques correctly. Much of the IPCC research is a good example of misuse. Ask most any statistician."

FHHaynie:
I hope we haven't made it seem that we distrust statistics – perhaps better stated, disbelieve certain statistics. My perception is that most of us commenting here are convinced that statistics has immense value when dealing with data.
Using RGB as an example, most of us have immense respect for presentations RGB has shared with us. Bluntly stated, I’d accept RGB’s findings easily without any unease. RGB’s handling of research is detailed with solidly understood data and metadata. Dissecting RGB’s articles is always educational and often illuminating. I’d be proud if any of my work had been equal in accuracy and efficiency.
Separating out the bad CAGW statistics examples from proper statistics usage, CAGW’s bad examples are the ones that assert absolutism and insist on faith in the scientists rather than algorithm investigation.
Our seeming disagreement mostly focused on when a 'prediction' is considered absolute or near absolute. Statisticians and many model code writers automatically keep in mind the confidence levels inherent to any run. Other experts reviewing the output also take confidence levels as a necessary part of the statistical output. They also take into account who the statistician is and, oddly, the maturity level of the statistical model. Well-written and proven models/modules/programs/code outputs are easily accepted. Modules written by an unknown just don't get that level of acceptance until thoroughly reviewed.
This is harder to pin down, so allow me to state it this way: experienced statisticians allow the data to lead them to the analysis, and often the deeper one delves, the more value specific trends attain. Targeting defined areas or problems just provides a data-area focus for analysis.
Contrast that with an advocate seeking to 'prove' anything. Advocates misuse data, metadata, filters, code, and statistics to achieve their aims, not to properly assess a 'trend'. Their outputs are presented as the absolute 'future'. A quick way to distinguish between the types is to watch, when trend issues are identified, whether the owners accept corrections or seem deaf to all entreaties for correction/withdrawal.
These are not problems unique to statistics; they are inherent to all presentations or sales pitches intent more on deceit than serious study.
When those of us insisting on acknowledgement that trends are not absolute comment, we're definitely not denying the impressive and very extensive value of statistics. We're trying to point out that even the finest statistics runs have confidence levels. If the confidence level equates to 100%, the trend probably does not need to be run; e.g., charting the sunrise.
Excellent highly talented statisticians work to get their code and data to hit as high a confidence level as possible. They also review the runs to assess any causes for less than ideal conditions impacting the trend. This prepares them for questions about trend expectations; questions they are usually happy to answer honestly.
The less-than-honest spin presenters are the ones blackening statistics' eye. They often use phrases like "It's a computer model" or "Statistics is used to determine…" to inflate audience impressions of their product. They often start to sweat and their eyes dart around when anyone asks specific technical questions about the model or the statistics runs. Rarely do they answer technical questions with technical replies. Over the last decade, their approach has been to refuse direct answers; oddly, tongue- and mind-twisting logic is later posted on the web as 'official responses' by third-party individuals. A number of the most severely flawed usages of statistics are those where the owner ignores all non-worship comments and insists their models are validated. Falsifiable science is apparently suspended for their causes.
I won't claim to be a statistician, nor that I love statistics. I have used statistics for trend models and spent many hours/days/weeks vetting data and statistics runs. Every time I knew that a data analysis program needed statistics to progress, it was amazing how much of my other work got done first. Diving into a pile of data with a statistics injection took far more time for review than coding. I can say I am impressed with the information inherent in the most innocuous, boring mass of ordinary data, and with how statistics is the tool for extracting that information.

fhhaynie
Reply to ATheoK
October 21, 2013 11:21 am

Thanks Theo,
You have stated more clearly what I attempted to convey when I presented that paper years ago. The primary message was that it is easy to make mistakes in your research if you misuse statistical techniques because you don't understand the math. Statisticians may not understand the physics, but they do understand the math. Those who understand the physics (or think they do) but lack an understanding of the math should consult with a statistician before designing their experiments or writing code for a model to do what-if experiments with virtual reality.

Ted Carmichael
October 21, 2013 12:13 pm

ATheoK: +1
@The Pompous Git: I like the Theory/Law explanation. Thanks for posting it.

mitigatedsceptic
October 21, 2013 12:47 pm

Statistics is simply another language form from the maths stable. There is no reason to trust or distrust it; but, as in the use of any language, it has its rules, and if these are not obeyed, the outcome can be nonsense. Before digital computing became commonplace, we had to do the math by hand, and consequently we knew what was what. Once calculators and big computing engines became fashionable, users did not need to know what was involved in the calculation and, I suspect, many may not know very much about statistics and when it is appropriate to use this or that statistical tool – hence GIGO. They forget that describing an event statistically cannot add new knowledge, any more than the operations of logic can. What it can do, if used prudently, is provide several views of the event, in much the same way that an artist can produce several views of a still life by moving the source of illumination.
It’s over 200 years since Hume demonstrated that correlation and causation are not necessarily related and that induction is a fragile tool. Indeed he envisaged that far from the universe being well-behaved and orderly, it might be manipulated by a mischievous devil out to tease us into false beliefs. But, heigh ho, it’s all we have. That’s why Science must be defended from hooligans such as those who profess to find validity for ‘post-normal’ science and ‘consensus’ validation.
Science, based on inductive inference, has served us very well in many respects, but she is so so vulnerable.

The Pompous Git
October 21, 2013 1:11 pm

mitigatedsceptic
That was rather well put 🙂

ATheoK
October 21, 2013 6:57 pm

” fhhaynie says: October 21, 2013 at 11:21 am

…Those who understand the physics (or think they do) but lack an understanding of the math should consult with a statistician before designing their experiments or writing code for a model to do what-if experiments with virtual reality."

I agree with your comment, and your latter sentence above I agree with absolutely. If I could add a couple of words, they would be, "…consult a statistician before and after." But I like your phrasing just fine.

“Ted Carmichael says: October 21, 2013 at 12:13 pm
ATheoK: +1
The Pompous Git: I like the Theory/Law explanation. Thanks for posting it.”

🙂 Thank you.

“mitigatedsceptic says: October 21, 2013 at 12:47 pm
Statistics is simply another language form from the maths stable. There is no reason to trust or distrust it; but, as in the use of any language, it has its rules, and if these are not obeyed, the outcome can be nonsense. Before digital computing became commonplace, we had to do the math by hand, and consequently we knew what was what. Once calculators and big computing engines became fashionable, users did not need to know what was involved in the calculation and, I suspect, many may not know very much about statistics and when it is appropriate to use this or that statistical tool – hence GIGO. They forget that describing an event statistically cannot add new knowledge, any more than the operations of logic can. What it can do, if used prudently, is provide several views of the event, in much the same way that an artist can produce several views of a still life by moving the source of illumination.
It’s over 200 years since Hume demonstrated that correlation and causation are not necessarily related and that induction is a fragile tool. Indeed he envisaged that far from the universe being well-behaved and orderly, it might be manipulated by a mischievous devil out to tease us into false beliefs. But, heigh ho, it’s all we have. That’s why Science must be defended from hooligans such as those who profess to find validity for ‘post-normal’ science and ‘consensus’ validation.
Science, based on inductive inference, has served us very well in many respects, but she is so so vulnerable.”

"The Pompous Git says: October 21, 2013 at 1:11 pm

mitigatedsceptic

That was rather well put 🙂"

I’m in complete agreement with you! Good statement mitigatedsceptic! I flashed back to slide rules with your ‘math by hand’ phrase.

mitigatedsceptic
October 22, 2013 3:34 am

Thank you.
Yes, indeed, slide rules linear, circular and cylindrical. We also had a ghastly monster by NCR, I think, that looked and sounded like a cash register (Open all hours type of thing) and which printed out the calculations line by line. That really was a gem because it was so easy to trace back to find the (inevitable) errors.

Samuel C Cogar
October 22, 2013 6:46 am

mitigatedsceptic says:
October 21, 2013 at 12:47 pm
They forget that describing an event statistically cannot add new knowledge ….
That’s why Science must be defended from hooligans such as those who profess to find validity for ‘post-normal’ science and ‘consensus’ validation.
—————
Right you are. Now all one has to do is convince all of the proponents of CAGW of that fact.

Brian H
October 22, 2013 7:33 pm

They forget that describing an event statistically cannot add new knowledge

I am somewhat uncomfortable with this statement. New data, or “information” in the formal sense cannot be added, but knowledge about the event, in the sense of increased understanding or better interpretation, can be. Else, why use statistics at all?

mitigatedsceptic
October 23, 2013 4:51 am

@Brian H: Indeed, but I did say "What it can do, if used prudently, is provide several views of the event in much the same way that an artist can produce several views of a still life by moving the source of illumination." That is the value of statistical conversations – they enable us to form different impressions of things and events.
The real problems arise when statistical calculations are extended to make ‘projections’ into the future. We believe that everything is causally connected to everything else; but our models cannot embrace all possible causal relations including points (tipping points or bifurcation points) at which state changes take place.
Newton warned us not to offer explanations that go beyond the observations, yet this is what the futurologists are up to. Science should be modest and admit to vast ignorance about future events.

Brian H
October 23, 2013 9:00 pm

Yes, a whole new topic and field: The Artistry of Statistics!

Editor
October 25, 2013 8:14 am

I am pleased to see that this thread apparently has a life of its own!
