Repeated Trials, Autocorrelation, and Albedo

Guest Post by Willis Eschenbach

OK, quick gambler’s question. Suppose I flip seven coins in the air at once and they all seven come up heads. Are the coins loaded?

Near as I can tell, statistics was invented by gamblers to answer this type of question. The seven coins are independent events. If they are not loaded, the chance of heads on each is fifty percent. The odds of seven heads are the product of the individual odds, or one-half to the seventh power. This is 1/128, less than 1%, less than one chance in a hundred that this is just a random result. Possible, but not very likely. As a man who is not averse to a wager, I’d say it’s a pretty good bet the coins were loaded.

However, suppose we take the same seven coins, and we flip all seven of them not once, but ten times. Now what are our odds that seven heads show up in one of those ten flips?

Well, without running any numbers we can immediately see that the more seven-coin-flip trials we have, the better the chances are that seven heads will show up. I append the calculations below, but for the present just note that if we do the seven-coin-flip as few as ten times, the odds of finding seven heads by pure chance go up from less than 1% (a statistically significant result at the 99% significance level) to 7.5% (not statistically unusual in the slightest).

So in short, the more places you look, the more likely you are to find rarities, and thus the less significant they become. The practical effect is that you need to adjust your significance level for the number of trials. If your significance level is 95%, as is common in climate science, and you look at 5 trials, then to have a demonstrably unusual result you need to find something significant at the 99% level. Here’s a quick table relating the number of trials to the required significance level, if you are looking for the equivalent of a single-trial significance level of 95% (a short code sketch reproducing it follows the table):

Trials, Required Significance Level
1, 95.0%
2, 97.5%
3, 98.3%
4, 98.7%
5, 99.0%
6, 99.1%
7, 99.3%
8, 99.4%
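
Just to show where those numbers come from, here is a minimal Python sketch (mine, not from any cited paper) that reproduces the table from the formula given in the appendix below, 0.95^(1/N):

```python
# Required per-trial significance level so that N independent trials together
# are the equivalent of a single trial at the 95% level.
for n_trials in range(1, 9):
    required = 0.95 ** (1.0 / n_trials)
    print(f"{n_trials} trials: {required * 100:.1f}%")
```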

Now, with that as prologue, following my interest in things albedic I went to examine a study entitled Spring–summer albedo variations of Antarctic sea ice from 1982 to 2009:

ABSTRACT: This study examined the spring–summer (November, December, January and February) albedo averages and trends using a dataset consisting of 28 years of homogenized satellite data for the entire Antarctic sea ice region and for five longitudinal sectors around Antarctica: the Weddell Sea (WS), the Indian Ocean sector (IO), the Pacific Ocean sector (PO), the Ross Sea (RS) and the Bellingshausen–Amundsen Sea (BS).

[Figure: Antarctic sea ice areas]

Remember, the more places you look, the more likely you are to find rarities … so how many places are they looking?

Well, to start with, they’ve obviously split the dataset into five parts. So that’s five places they’re looking. Already, to claim 95% significance we need to find 99% significance.

However, they are also only looking at a part of the year. How much of the year? Well, most of the ice is north of 70°S, so it will get measurable sun eight months or so out of the year. That means they’re using half the yearly albedo data. The four months they picked are the four when the sun is highest, so it makes sense … but still, they are discarding data, and that affects the number of trials.

In any case, even if we completely set aside the question of how much the year has been subdivided, we know that the map itself is subdivided into five parts. That means that to be significant at 95%, you need to find one of them that is significant at 99%.

However, in fact they did find that the albedo in one of the five ice areas (the Pacific Ocean sector) has a trend that is significant at the 99% level, and another (the Bellingshausen-Amundsen sector) is significant at the 95% level. And these would be interesting and valuable findings … except for another problem. This is the issue of autocorrelation.

“Autocorrelation” is how similar the present is to the past. If the temperature can be -40°C one day and 30°C the next day, that would indicate very little autocorrelation. But if (as is usually the case) a -40°C day is likely to be followed by another very cold day, that would mean a lot of autocorrelation. And climate variables in general tend to be autocorrelated, often highly so.
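
If you want to put a number on it, here is a minimal Python sketch (illustrative only, not from the paper) that computes the lag-1 autocorrelation of a series; near zero means little memory of the past, near one means a lot:

```python
import numpy as np

def lag1_autocorrelation(x):
    """Correlation of a series with itself shifted by one time step."""
    x = np.asarray(x, dtype=float)
    return np.corrcoef(x[:-1], x[1:])[0, 1]

rng = np.random.default_rng(0)
white_noise = rng.normal(size=1000)             # no memory of the past
random_walk = np.cumsum(rng.normal(size=1000))  # strong memory of the past

print(lag1_autocorrelation(white_noise))  # near 0
print(lag1_autocorrelation(random_walk))  # near 1
```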

Now, one oddity of autocorrelated datasets is that they tend to be “trendy”. You are more likely to find a trend in autocorrelated datasets than in perfectly random datasets. In fact there was an article in the journals not long ago entitled Nature’s Style: Naturally Trendy. (I said “not long ago” but when I looked it was 2005 … carpe diem indeed.) It seems many people understood that concept of natural trendiness; the paper was widely discussed at the time.

What seems to have been less well understood is the following corollary:

Since nature is naturally trendy, finding a trend in observational datasets is less significant than it seems.

In this case, I digitized the trends. Their two “significant” trends, the Bellingshausen–Amundsen Sea (BS) at 95% and the Pacific Ocean sector (PO) at 99%, were as advertised and matched my calculations. Unfortunately, I also found that, as I suspected, they had indeed ignored autocorrelation.

Part of the reason autocorrelation is so important in this particular case is that we’re only starting with 27 annual data points. As a result, we’re starting with large uncertainties due to small sample size. The effect of autocorrelation is to reduce that already inadequate sample size, so the effective N is quite small. The effective N for the Bellingshausen–Amundsen Sea sector (BS) is 19, and the effective N for the Pacific Ocean sector (PO) is only 8. Once autocorrelation is taken into account, neither trend is statistically significant at all; both come in at around the 90% significance level.

Adding the effect of autocorrelation to the effect of repeated trials means that, in fact, not one of their reported trends in “spring–summer albedo variations” is statistically significant, or even close to being significant.

Conclusions? Well, I’d have to say that in climate science we’ve got to up our statistical game. I’m no expert statistician, far from it. For that you want someone like Matt Briggs, Statistician to the Stars. In fact, I’ve never taken even one statistics class ever. I’m totally self-taught.

So if I know a bit about the effects of subdividing a dataset on significance levels, and the effects of autocorrelation on trends, how come these guys don’t? To be clear, I don’t think they’re doing it on purpose. I think this was just an honest mistake on their part; they simply didn’t realize the effect of their actions. But dang, seeing climate scientists making these same two mistakes over and over and over is getting boring.

To close on a much more positive note, I read that Science magazine is setting up a panel of statisticians to read the submissions in order to “help avoid honest mistakes and raise the standards for data analysis”.

Can’t say fairer than that.

In any case, the sun has just come out after a foggy, overcast morning. Here’s what my front yard looks like today …

[Photo: redwood and nopal]

The redwood tree is native here, the nopal cactus not so much … I wish just such sunny skies for you all.

Except those needing rain, of course …

w.

AS ALWAYS: If you disagree with something I or someone else said, please quote their exact words that you disagree with. That way we can all understand the exact nature of what you find objectionable.

REPEATED TRIALS: The actual calculation of how much better the odds are with repeated trials is done by taking advantage of the fact that if the odds of something happening are X, say 1/128 in the case of flipping seven heads, the odds of it NOT happening are 1-X, which is 1 – 1/128, or 127/128. It turns out that the odds of it NOT happening in N trials is

(1-X)^N

or (127/128)^N. For N = 10 flips of seven coins, this gives the odds of NOT getting seven heads as (127/128)^10, or 92.5%. This means that the odds of finding seven heads in ten flips is one minus the odds of it not happening, or about 7.5%.
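
A two-line check of those numbers (my own sketch, not part of the original calculation):

```python
p_seven_heads = 0.5 ** 7                        # one flip of seven fair coins
p_in_ten_flips = 1 - (1 - p_seven_heads) ** 10  # at least one such result in ten flips
print(p_seven_heads, p_in_ten_flips)            # ~0.008 and ~0.075
```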

Similarly, if we are looking for the equivalent of a 95% confidence level in repeated trials, the required confidence level in N repeated trials is

0.95^(1/N)

AUTOCORRELATION AND TRENDS: I usually use the method of Nychka which utilizes an “effective N”, a reduced number of degrees of freedom for calculating statistical significance.

neff = n × (1 – r) / (1 + r)

where n is the number of data points, r is the lag-1 autocorrelation, and neff is the effective n.
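
Here’s a minimal sketch of how that adjustment works, assuming the AR(1) form of the correction shown above; the lag-1 autocorrelation plugged in is purely illustrative, not a value digitized from the paper:

```python
def nychka_effective_n(n, r):
    """Effective sample size for a series with lag-1 autocorrelation r."""
    return n * (1.0 - r) / (1.0 + r)

# Illustrative only: 27 annual data points with a hypothetical lag-1
# autocorrelation of 0.5 behave like roughly 9 independent points.
print(nychka_effective_n(27, 0.5))  # 9.0
```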

However, if it were mission-critical, rather than using Nychka’s heuristic method I’d likely use a Monte Carlo method. I’d generate say 100,000 instances of ARMA model (auto-regressive moving-average model) pseudo-data which matched well with the statistics of the actual data, and I’d investigate the distribution of trends in that dataset.
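
As a rough illustration of that Monte Carlo idea (a sketch under assumed AR(1) parameters and a made-up “observed” trend, not the author’s actual procedure), you generate many trend-free but autocorrelated pseudo-series and ask how often chance alone produces a trend as large as the one observed:

```python
import numpy as np

rng = np.random.default_rng(42)

def ar1_series(n, r, sigma=1.0):
    """Trend-free AR(1) pseudo-data with lag-1 autocorrelation r."""
    x = np.empty(n)
    x[0] = rng.normal(scale=sigma / np.sqrt(1.0 - r**2))
    for t in range(1, n):
        x[t] = r * x[t - 1] + rng.normal(scale=sigma)
    return x

def ols_slope(y):
    """Ordinary least-squares trend (slope per time step)."""
    t = np.arange(len(y))
    return np.polyfit(t, y, 1)[0]

# Hypothetical numbers for illustration only.
n_points, r, n_sims = 27, 0.5, 10_000   # bump n_sims up for real work
observed_trend = 0.05                   # made-up trend, in data units per time step

null_trends = np.array([ols_slope(ar1_series(n_points, r)) for _ in range(n_sims)])
p_value = np.mean(np.abs(null_trends) >= abs(observed_trend))
print(f"Fraction of trend-free series with a trend at least that large: {p_value:.3f}")
```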

daveandrews723
June 27, 2015 3:40 pm

The way NOAA has adjusted the temperatures of the 20th century, based on what valid criteria I have no idea, how can anyone have any degree of confidence in what they put forward? And when you go back to the 19th century, how does anyone purport to have an accurate record of global temperatures? For what percentage of the world were temperatures even recorded back then?

Reply to  daveandrews723
June 27, 2015 10:11 pm

The chance of getting heads or tails is a bit less than 50%. Because there is a small chance for the coin to end up on the edge?

The Ghost Of Big Jim Cooley
Reply to  Santa Baby
June 28, 2015 12:45 am

And it can happen. We have a small key that opens our letterbox. We are refurbishing our hallway, and currently have a bare concrete floor covered with a membrane. I threw the key onto the floor and didn’t hear the familiar sound of it bouncing. I looked back and it was on its edge! It was so astounding that I quickly called my wife to witness it. What’s even weirder is that the key is very thin, so its edge is about just 1 millimetre. This was the third thing (of a sort) to happen in a few weeks. On two occasions, separated by about four weeks, I threw a dishwasher tablet into the dishwasher and it landed on its edge. The second time it landed on its top edge (even less likely). At least the tablet has a wide edge, so the odds aren’t bad, but a key landing on its edge when it is so thin? Surely the odds are extraordinary?

Reply to  Santa Baby
June 28, 2015 1:00 am

Perhaps you have a natural magnet under your property?
Or maybe stuff just happens sometimes.

The Ghost Of Big Jim Cooley
Reply to  Santa Baby
June 28, 2015 1:08 am

Yep, stuff happens sometimes, that’s all it is!

Craig
Reply to  Santa Baby
June 28, 2015 1:50 am

Check out the classic Twilight Zone episode “A Penny for Your Thoughts” to see what happens if you manage to get a coin to land on its edge.

george e. smith
Reply to  Santa Baby
July 1, 2015 3:04 am

Who said it is a small chance ?
Sometimes the coin toss before a game or such is done on a mat where the coin won’t bounce.
So what if you do the coin toss on a flat patch of beach sand, where an edge on coin can dig into the sand and stay there.
So now what is the probability of it being less than 45 degree tilt from perfectly edge on ??

Bill 2
June 27, 2015 3:56 pm

“the odds of finding seven heads by pure chance go up from less than 1% (a statistically significant result at the 99% significance level) to 7.5% (not statistically unusual in the slightest)”
What would you consider “statistically unusual in the slightest”? 5.0%? Would 5.00001% then not be “statistically unusual in the slightest”? The difference between the two is insignificant in itself. Certainly 7.5% is statistically unusual in some sense, just not as statistically unusual as the arbitrarily-chosen threshold of 5.0%.

donb
June 27, 2015 3:57 pm

I once read the following comment by a person who taught statistics.
He assigned his students a task of flipping a coin 100 times and recording the sequence of heads and tails. He then took the results and informed the class which students had done the exercise and which had “dry-labbed” the results, i.e. made them up. His answers were mostly correct.
His secret (which he conveyed to the students to make the point) was that most people think that the same result occurring three times in a row (e.g. three heads), and especially four or more times in a row, was very, very unlikely. The dry-lab results were those that had no (or very few) heads or tails occurring three or more times.
In reality these multiple occurrences are more common than people think.
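
A quick simulation bears this out (my sketch, not the commenter’s): in 100 fair flips, a run of five or more identical outcomes in a row is the rule rather than the exception.

```python
import numpy as np

rng = np.random.default_rng(1)

def longest_run(flips):
    """Length of the longest run of identical outcomes in a 0/1 sequence."""
    best = current = 1
    for a, b in zip(flips[:-1], flips[1:]):
        current = current + 1 if a == b else 1
        best = max(best, current)
    return best

runs = [longest_run(rng.integers(0, 2, size=100)) for _ in range(10_000)]
print(np.mean(np.array(runs) >= 5))  # typically around 0.97 or higher
```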

noaaprogrammer
Reply to  donb
June 27, 2015 9:23 pm

(A little OT, but similar): Before announcing that the class topic is on pseudo random number generators, I tell my students to write down 5 digits of their choice. We then plot the frequency distribution of those choices. The frequencies of the middle digits are higher than those of the tails, 0 & 1 and 8 & 9, and the frequencies of the odd digits are higher than those of the even digits. This illustrates that people are biased in their choice of digits, as one would expect a more or less even distribution if the choices were made randomly.
I then have them write down 100 consecutive digits chosen at random, (in lines of 10 digits each where the first digit of a succeeding line follows the last digit of the preceding line.) After they’re done, I have them circle the number of pairs of same digits, and the number of triplets of same digits. Most of the students are concentrating so hard on avoiding such occurrences in their efforts to “be random” that they fail to meet the statistical average of 10 pairs and 1 triplet.
The conclusion is that generating pseudo random sequences is difficult, as humans aren’t randomly inclined when they aren’t trying to be random, and they aren’t randomly inclined when they are trying to be random.
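
For what it’s worth, a small simulation (mine, not the commenter’s) confirms those benchmark figures for truly random digits: on average roughly ten adjacent pairs and one triplet per 100 digits.

```python
import numpy as np

rng = np.random.default_rng(2)

def count_runs(digits, length):
    """Count starting positions where `length` consecutive digits are all equal."""
    d = np.asarray(digits)
    return sum(1 for i in range(len(d) - length + 1) if np.all(d[i:i + length] == d[i]))

pairs, triplets = [], []
for _ in range(5_000):
    digits = rng.integers(0, 10, size=100)
    pairs.append(count_runs(digits, 2))
    triplets.append(count_runs(digits, 3))

print(np.mean(pairs), np.mean(triplets))  # about 10 and 1
```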

E.M.Smith
Editor
Reply to  noaaprogrammer
June 28, 2015 1:55 pm

Well, then there is the whole question of “Ought the numbers actual BE random?”. Yes, for a random number generator or a truly fair coin, but real coins are not always evenly balanced and all…
As per the ice data set, this “natural bias” comes from the existence of a natural 60 year ocean / weather cycle, a roughly 8 year Southern Ocean Antarctic Circumpolar Wave
https://en.wikipedia.org/wiki/Antarctic_Circumpolar_Wave
the 18 ish year Saros Cycle of lunar tidal forces
https://en.wikipedia.org/wiki/Saros_%28astronomy%29
and the 1500 year cycle of tides caused by lunar cycles as longer term influences have effect.
So we KNOW there will be various interacting cycles causing observed pseudo-trends in the data as “heads and tails” of those sine waves line up, or not, with the start and end of the data observed…
So how does it make sense to apply a test of non-random to a non-random data set to find ‘trend’?

David A
Reply to  noaaprogrammer
June 29, 2015 3:18 am

Yes EM. Is that not the reason for “Autocorrelation”. However the example Willis gave was,
““Autocorrelation” is how similar the present is to the past. If the temperature can be -40°C one day and 30°C the next day, that would indicate very little autocorrelation. But if (as is usually the case) a -40°C day is likely to be followed by another very cold day, that would mean a lot of autocorrelation.”
This is an example of non randomness WITHIN the period of study, but your example is of non randomness outside the period of study. I do not know if autocorrelation corrects for non random trends outside the period of study.
Willis?

Reply to  donb
June 27, 2015 9:29 pm

I spent a decade in grad school in a visual psychophysics lab. The bias of humans to disbelieve the frequency of long runs was pervasive. To make the point, here’s a couple of sequences of 100 pluses and minuses from a common random number generator which I’m sure will pass any simple chi^2 test or such.
++—–+-+-+-+-++-+-++-++-++–++++–+—+-+++-++-++-+++—–++—++++-+–++++–++-+-+-++—+++-+-+++
——-+—-+++-+–+-+-+++–+++–+—-+++-+—-+++-+-++-+——+-+-++–+++-+++—+—+++++++–++++–

richard verney
Reply to  donb
June 27, 2015 11:18 pm

Something that one can see every night on UK television where there is late night roulette.
One frequently sees runs of 3 and even 4 reds (or blacks) in a row. I have not looked for odd/even streaks because one would have to look at the numbers in detail; I merely channel-hop over this late night gambling fad. But every night one can see this as they show the last dozen spins of the wheel, and certainly 3 of one of the two colours in a row is a common occurrence.

george e. smith
Reply to  donb
June 30, 2015 1:51 pm

Well I have used statistical mathematics; or some aspects of it, for over 50 years, in my daily work; which often included recording the results of repeated experiments; or “trials” as Willis calls them.
But I don’t flip coins, so I haven’t done what Willis has.
So I am all the time (or have been) computing the “average value” of some data set of numbers, along with things like standard deviations.
But in my case, the numbers in my data sets, had a common property. All of the members of my typical data set, were supposed; in the absence of experimental errors, to be exactly the same number.
So my purpose in averaging, was to tend to reduce the random experimental error in my result. Systematic errors, of course posed additional uncertainties, but absent that, my expectation was that the probable random error in my result would diminish in about the square root of the total number of trials, or experiments.
With the caveat, of excluding systematic errors, my expectation is that this statistical average is the best result I can get from that experiment or measurement.
Now that is quite different from tossing a coin, as in Willis’s trial.
With my luck, if I tossed seven coins, just once, all seven of them would likely land on edge, just to annoy me.
So I don’t toss coins, just in case, that should happen.
But the use of statistics, and averaging, related to climate “science”, seems to be quite a different proposition all together.
People in that field, seem to take single, non-repeatable measurements of different things (Temperatures ?? e.g. ) in quite different places, at quite different times, all of which should yield quite random unrelated results, with no expectation at all, that any of those measurements would be the same.
They then engage in what amounts to numerical Origami, during which they discard, all of these unrelated experimental observations, and replace them with either an entirely new and fictitious set of numbers, often referred to as “smoothing”, or else discard them all completely to be replaced by a single number; “the average.”
Now the algorithms of statistical mathematics are all described in detail, in numerous standard texts on that art form; and it is an art form, trying to make nothing out of something.
So you can take a very useful square of paper, on which you could write a nice poem, and by applying a simple algorithm, you can fold that useful piece of paper, and get an ersatz frog that can even jump; but is now much more difficult to write a poem on.
Well of course, the algorithms of statistical mathematics place very few restrictions on the elements of the data set.
The only requirement is that each of them is a real number. That is “real” in the mathematical sense, so NO imaginary, or complex numbers, and NO variables.
Each element is an exactly known number; although it is not necessarily the exact value of anything physically existing.
So the result of applying the algorithm is always exact, and any practitioner, applying the same algorithm, will get the exact same result from the same data set.
So statistics is an exact discipline, with NO uncertainty in the outcome.
And it always works on ANY set of numbers whatsoever that meet the “real” number condition.
The numbers of the data set, do not have to be related in any way. You could choose all the telephone numbers in your local phone yellow pages. Well you could also include the page numbers, or the street address numbers as well, or any subset of them.
So NO uncertainty surrounds the outcome of the application of ANY statistical mathematics algorithm.
Now if you like uncertainty, my suggestion is to look instead, not at the numbers you get from doing statistics, but at the absurd expectations for what meaning lies in that outcome.
There is NO inherent meaning, whatsoever. The result is just a number.
If you add the integers from 1 to 9 inclusive, you get a sum of 45 (always), and dividing by the number of elements (9) you get the average value of 4.5 (always) and as you can see it isn’t even one of the numbers in the data set.
The average value of all the phone numbers in your yellow pages, is likely to not even be a telephone number at all. Averaging numbers that aren’t even supposed to be the same, simply discards all those numbers in favor of a completely fictitious one.
So our self delusion, is in what we expect the outcome of our statistics to mean.
It means only what we choose it to mean. There is no inherent meaning.
Just my opinion of course. Most people (maybe 97%) would likely disagree with me.

June 27, 2015 4:05 pm

Even some well-known scientists fall in that trap. Several recent posts have quoted papers by Mike Lockwood, whose grip on statistics is not the best. In his famous paper in Nature that the coronal magnetic field has more than doubled in the last hundred years, Lockwood claimed that his finding was significant at the 99.999999999996% level [he ignored or didn’t know about auto-correlation].

GDauron
Reply to  lsvalgaard
June 28, 2015 8:15 am

Doctor you are the best at getting to the point that I have ever seen! Please live many more years.

Gregory
June 27, 2015 4:06 pm

42

Reply to  Gregory
June 28, 2015 9:20 pm

Which is ASCII for * , the wildcard character on DOS machines… essentially, it means match everything. So, the ultimate answer, is “everything”.

Expat
June 27, 2015 4:08 pm

I took exactly 1 course in statistics when I studied engineering. It was an elective at that. Wish I had taken more. Who would think it’d be that useful? About the only thing I remember about it, besides how to calculate lottery odds (never have bought a ticket), is it’s usually easier to find the odds of something not happening and go from there – as you’ve shown above.
ps Willis, Plant some yellow leafed Japanese Maples under the Redwoods. The effect is excellent on those mostly cloudy days you have there.
ps Willis, Plant some yellow leafed Japanese Maples under the Redwoods. The effect is excellent on those mostly cloudy days you have there.

Gamecock
Reply to  Expat
June 28, 2015 4:20 am

The lottery is a tax on the mathematically challenged.

Louis Hooffstetter
Reply to  Gamecock
June 29, 2015 9:03 pm

When it gets above $100 million, I get mathematically challenged,

EdA the New Yorker
Reply to  Expat
June 28, 2015 8:15 am

Aside from being a highly regressive tax on people with poor math skills, state lotteries have some redeeming qualities. Students learning statistical mechanics can relate to the expectation value of the ticket exceeding its face value for a sufficiently high prize. For quantum mechanics, the ticket represents Schrödinger’s Cat; the wave function collapses at the drawing. This can then be extended to the probability density of electron position in an atom or molecule.
I also slip in that the students are about as likely to be killed in a traffic accident going to buy the ticket as they are to win the top prize. Bad sport I guess.

The other Casper
Reply to  EdA the New Yorker
June 28, 2015 2:19 pm

It’s good having some new teaching models to replace the old ones. I’ve often found myself trying to explain levers and leverage using the playground “see-saw” that’s familiar from my childhood — only to be reminded that youngsters today (in the US, at least) have never seen these. Liability problems, I guess.

June 27, 2015 4:18 pm

The mathematics of statistics lead to some results that many will find nearly impossible to believe, if they rely on “common sense” to decide what is or is not true.
The odds of winning a certain version of the Florida Lotto were at one time about 23 million to one, because there were that many different combinations of the numbers possible.
Many people will take this to mean that if one buys one ticket per drawing, then after 23 million drawings, they will have won. Of course this is not true…some will win many times in that many drawings, and some will not win at all.
But what are the odds if one buys multiple tickets per drawing? Math tells me that if I buy ten tickets (all different of course), then my odds of winning have increased all the way up to one in 2.3 million. But I have had people who were otherwise seemingly intelligent insist that this was BS. No way can spending just ten dollars magnify the chance of winning that much, they said. What about all those other 20.7 million possible combinations, they point out.
Innumeracy takes many forms it seems.
My favorite statistical surprise though, is the more or less well known Birthday Problem: How many people must one gather in a room for the odds to be fifty-fifty that two of them (or more) will have the same Birthday? The answer is quite surprising, if you have never heard the problem before and are not able to instantly figure such things out in your head.

Reply to  Menicholas
June 27, 2015 4:54 pm

I have spent years hacking the math of state lottos…and you now give it away? Btw, my math was so successful in Illinois that they added two numbers (55, 56) to the system. RATS. Like banning Vegas blackjack card counters.

Reply to  ristvan
June 27, 2015 6:02 pm

So you are the one! Odds are now over 50 million to one to win, plus they slashed the prizes for getting 3, 4, or 5 correct.
Anyway, those MIT folks gave the casinos a good go, though, huh?

Reply to  ristvan
June 27, 2015 6:11 pm

Seriously though, I thought they added numbers because of those organized efforts to plan how to win: Groups would wait for a long time with no winner, when the jackpot would then become far higher than the odds against winning. The lottery then becomes, from a statistical point of view, a “good bet”. These groups then sent people out to purchase tickets, large numbers of tickets. In fact the goal would be to purchase one of every possible combination, thus ensuring a win. The only risk would be if others also won…or if they bought almost but not quite every combination and lost after spending millions of dollars.
By adding so many combinations, it became logistically far more problematic to ever buy one of each ticket in time.
But it could have been you, sir. I was just guessing as to why the changes.
Although, the steep odds also made giant jackpots more common, at times reaching a quarter or a half of a BILLION dollars!
Imagine…being one of the richest people in the country from a (now) two dollars ticket.

Reply to  ristvan
June 27, 2015 8:30 pm

You want more imfo…?

Alan Robertson
Reply to  ristvan
June 27, 2015 9:49 pm

Could you loan me $3.00?

Reply to  ristvan
June 28, 2015 4:20 pm

Dr. Istvan,
I would love more info sir. Always appreciate your comments.
And I have zero imfo, so that would be a treat as well…assuming it is not a typo?
🙂

Reply to  ristvan
June 28, 2015 4:22 pm

Mr. Robertson,
As soon as I collect my winnings from picking the place horse in race #7 at Churchill Downs, I will be flush and can spot you the $3.
🙂

blcjr
Editor
Reply to  Menicholas
June 27, 2015 4:59 pm

On the Birthday problem, I don’t recall the precise number, but as I recall it is less than 30, maybe 22? I could, of course, look it up, but that would be cheating.
Our church prints out birthdays of all the members on the back of a directory. There are 192 names. There are 39 shared birthdays. That’s 1 in 5. There are shared birthdays in every month. In the month with the smallest number of names, 9, there are two sets of shared birthdays. It is a remarkable confirmation of this phenomenon. For the month of November, there are 24 names; there are 6 days in November with shared birthdays, as many as 4 on one day. If I were to go on the basis of this data set, I’d have to conclude that on average, you’ll find a shared birthday in any group much larger than 8 or 9.

Combotechie
Reply to  blcjr
June 28, 2015 10:37 am

If the lottery is pure and fair then there are two factors that determine the odds of winning a big prize:
1. The odds of choosing the winning number, and
2. The odds of other people also choosing the winning number.
Since the odds of the winning number being chosen is the same as the odds for any other number being chosen the focus of ones attention should be applied to number 2, which is the odds of other people choosing the same number as you choose.
Since some numbers are more popular than others one should focus his attention on uncovering the least popular numbers and then bet on these numbers exclusively.

Reply to  blcjr
June 28, 2015 4:26 pm

Yeah, but maybe they are the least popular for a reason?
No matter what the stats tell me, I would be surprised as all get out to see the combination of 1-2-3-4-5-6 with a powerball of 7 ever come up.
The again, I was quite surprised when the winners were 8-17-23-25-41-49 w/ PB of 34…so what do I know?

Combotechie
Reply to  blcjr
June 28, 2015 6:41 pm

“Yeah, but maybe they are the least popular for a reason?”
The reason probably has more to do with the thinking (or feelings) of human beings rather than the probabilities offered up by mathematics.
Certain numbers in certain cultures are deemed to have different powers than other numbers, meaning that in certain cultures some numbers are considered to have the power of being, say, “lucky” and other numbers are considered to have the power of being “unlucky”. The number 13 is seen by many people in the West as having the power of being unlucky. In Eastern cultures other numbers have similar powers.
Whatever the case, these powers of “lucky” or “unlucky” are powers bestowed upon numbers by humans and not bestowed on numbers by mathematics; A number does not know (nor does it behave as if it knows) whether it is a “lucky” number or an “unlucky” number.
If a person believes that some numbers are lucky and some numbers are unlucky then I believe this tells us more about the person than it does about the number.

Reply to  blcjr
June 28, 2015 8:42 pm

Yes, I was kidding about that part. Sorry, thought it was obvious.
You are right of course, some people do have superstitions about numbers.
We had a technician where I work who was going to quit because he was assigned truck number 13.
When I reported this, to my surprise they changed the number. If someone thinks they are jinxed, I suppose that is as bad as actually being jinxed.

Richard Barraclough
Reply to  blcjr
June 29, 2015 2:51 am

For the birthday problem, you multiply together the odds of people having separate birthdays, like this
(364/365) x (363/365) x (362/365) x (361/365) x … etc., until the answer drops below 0.5.
As you say, the answer is surprisingly low – about 23 I seem to remember.
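
In code, that stepwise product looks like this (a small sketch of the method described above, assuming 365 equally likely birthdays):

```python
def smallest_even_odds_group():
    """Smallest group where P(at least one shared birthday) exceeds 0.5."""
    p_all_distinct, n = 1.0, 1
    while True:
        n += 1
        p_all_distinct *= (365 - (n - 1)) / 365   # newcomer avoids all earlier birthdays
        if 1 - p_all_distinct > 0.5:
            return n, 1 - p_all_distinct

print(smallest_even_odds_group())  # (23, ~0.507)
```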

blcjr
Editor
Reply to  Menicholas
June 27, 2015 5:12 pm

Okay, now that I posted, I looked up the answer and it is 23, so my guess of 22 was almost dead on.
There must be a different explanation for the results I gave about the church group. The statistics of the Birthday problem would say that in a group of 192, the odds are about 100 percent that there is at least one shared birthday. It doesn’t project the odds of the average number of shared birthdays. As I noted in my other post, there are 39 shared birthdays in the group of 192. What are the odds of that? Or, put differently, in a group of size N, how many shared birthdays are there likely to be on average?

urederra
Reply to  blcjr
June 27, 2015 6:20 pm

I have heard about that problem applied to a soccer match; it is more common in Europe to tell it that way, I guess. The problem says: What is the probability that two people on the soccer pitch share a birthday?
There are 2 teams, 11 players per team, and the referee, total 23 people, and the probability is a bit more than 50 percent.

Reply to  blcjr
June 28, 2015 1:08 am

Church Question: Why assume all birthdays are equally probable?
9 months after Christmas is more likely – if only because targeting the child’s birth to make them the most mature in the school year is common. And couples have time off work at Christmas,
As for one shared birthday in each month, it’s only one sample. You’d need to see if that was common in about 30 churches of about 200 members to find if that’s significant.

David Chappell
Reply to  blcjr
June 28, 2015 4:30 am

M Courtney: In the East Neuk of Fife, according to the registrar when I registered my daughter’s birth, it’s 9 months after Hogmanay. (I hasten to add I’m not a Fifer, I just happened to be stationed there at the time.)

John M
Reply to  blcjr
June 28, 2015 12:18 pm

My two oldest kids were born in late September, and I once joked with a friend that this was about nine months after Christmas. He replied that his two kids had the same birthday. I, of course, asked if that was related to any particular date nine months before, and with a great big smile, he said “Yeah, my birthday!”

Reply to  blcjr
June 28, 2015 4:37 pm

MCourtney,
My younger sister and I were both born on April 8th, two years apart.
And of my other seven siblings, two were on July 3rd.

Reply to  blcjr
June 28, 2015 4:49 pm

“According to a public database of births, birthdays in the United States are quite evenly distributed for the most part, but there tend to be more births in September and October.[11] This may be because there is a holiday season nine months before (as nine months is the human gestation period), or from the fact that the longest nights of the year happen in the Northern Hemisphere nine months before as well. However, it appears the holidays have more of an effect on birth rates than the winter weather; New Zealand, a Southern Hemisphere country, has the same September and October peak with no corresponding peak in March and April.[12] The least common birthdays tend to fall around public holidays, such as Christmas, New Years and fixed-date holidays such as July 4 in the US. This is probably due to hospitals and birthing centres not offering labor inductions and elective Caesarean sections on public holidays.
Based on Harvard University research of birth records in the United States between 1973 and 1999, September 16 is the most common birthday in the United States and December 25 the least common birthday (other than February 29, because of leap years).[13] More recently[when?] October 5 and 6 have taken over as the most frequently occurring birthdays.[14]
In New Zealand, the ten most common birthdays all fall between September 22 and October 4. The ten least common birthdays (other than February 29) are December 24–27, January 1–2, February 6, March 22, April 1 and April 25.[12]
According to a study by the Yale School of Public Health, positive and negative associations with culturally significant dates may influence birthrates. The study shows a 5.3 percent decrease in spontaneous births and a 16.9 percent decrease in cesarean births on Halloween, compared to other births occurring within one week before and one week after the October holiday. Whereas, on Valentine’s Day there is a 3.6 percent increase in spontaneous births and a 12.1 percent increase in cesarean births”
https://en.wikipedia.org/wiki/Birthday

Ben Of Houston
Reply to  blcjr
June 29, 2015 11:11 am

Birthdays are definitely autocorrelated. There was a big birth spike in Houston 9 months after Ike. My own kid included.
Also, you have a spike of honeymoon babies in February-March from the May-June wedding season, and as mentioned, the September bulge from Christmas Break.

The Ghost Of Big Jim Cooley
Reply to  Menicholas
June 28, 2015 1:06 am

The one that gets people the best is the Monty Hall Problem. I told it to someone who I know is a maths fan, and he couldn’t handle it. He even said I was wrong, so I had to get him to Google it. Likewise, my brother-in-law (much more clever at maths than I could ever be) said the same – that it can’t be so. The reason is that people forget it isn’t a 50/50 chance. It’s 2/3 if you change.
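
For anyone still unconvinced, a short simulation (my own sketch, nothing to do with the commenters) settles it quickly:

```python
import random

def monty_hall(switch, trials=100_000):
    """Fraction of wins when the player always stays or always switches."""
    wins = 0
    for _ in range(trials):
        car, pick = random.randrange(3), random.randrange(3)
        # Host opens a door that is neither the player's pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print("stay:  ", monty_hall(switch=False))  # ~1/3
print("switch:", monty_hall(switch=True))   # ~2/3
```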

Reply to  The Ghost Of Big Jim Cooley
June 28, 2015 5:05 pm

Yes, very interesting…see below.

mellyrn
Reply to  Menicholas
June 28, 2015 4:14 am

Birthday problem, feh. My job includes running the database of researchers at our facility, so I learn a lot of birthdays; I’ve entered 4000+ people since I started, and only one shares my birthday, and not one of the 300 staff in my building does. Also, you can figure out what that day is — scroll through one of those calendars that list “important events on this date in history”, and it’s the day where you reeeally have to stretch your definition of “important”.
🙂

Reply to  mellyrn
June 28, 2015 5:44 am

You are citing a very different problem. To get the odds of any given birthday matched to 50% takes a much, much larger population, and as is pointed out below, birthdays are not purely random, so it further depends on what the birthday is.
Taylor

Reply to  Menicholas
June 28, 2015 5:37 am

That was one of my favorite openings with my new math classes when I taught high school. Because the breakeven number is in the 20s, it was a perfect case to use in classes, which typically numbered in the low 30s. I’d first make a bet with them that two had the same birthday, they would invariably disagree, and then we’d go around the room – never failed once…
Taylor

Reply to  Taylor Pohlman
June 28, 2015 4:52 pm

At somewhere in the mid thirties, the certainty gets near 99%.

Michael Jankowski
Reply to  Menicholas
June 28, 2015 10:19 am

I had a somewhat heated debate in a Chili’s back in grad school. A fellow student was talking about improving the odds of winning the lottery, and he said you could start by eliminating all combinations of even numbers, all combinations of odd numbers, all combinations of sequential numbers, etc, because those sequences rarely happened. He was incapable of understanding that every single combination – assuming no loaded balls – was just as likely to happen as another. I couldn’t believe an engineering grad student would fail to realize something so basic in probability.

Reed Coray
Reply to  Menicholas
June 28, 2015 10:27 am

Menicholas, by “same birthday” do you mean “born on the same day–say January 5, of specifically 1960;” or do you mean “born on the same calendar day–i.e., January 5 of any year?” If the people are selected from a classroom of students in grades 1 through 12, the two answers (the number of people required to have a fifty-fifty chance that two or more people were born on the same day, and the number of people required to have a fifty-fifty chance that two or more people were born on the same calendar day) won’t differ that much because most students in such classrooms are of a common age. However, if the people are selected from a less age-restricted distribution (say the population of a large city), the two answers will be markedly different.
IMO, the “birthday problem” as commonly stated is an example of the Bertrand Paradox–see https://en.wikipedia.org/wiki/Bertrand_paradox_%28probability%29: “Joseph Bertrand introduced it in his work Calcul des probabilités (1889) as an example to show that probabilities may not be well defined if the mechanism or method that produces the random variable is not clearly defined.” Specifically, as commonly stated the “birthday problem” doesn’t specify how the “people are chosen;” or if as is reasonable to assume they are chosen randomly, doesn’t specify the method that produces the random variable.

Reply to  Reed Coray
June 28, 2015 4:57 pm

Mr. Coray,
The question as I have heard it is specifically about same birthday, not birth date.
As are the stats quoted.
But you raise a valid point.

Reply to  Reed Coray
June 28, 2015 5:02 pm

“Note the distinction between birthday and birthdate: The former occurs each year (e.g. December 18), while the latter is the exact date a person was born (e.g., December 18, 1998).”
https://en.wikipedia.org/wiki/Birthday
And the whole thing apparently assumes a random assemblage of people.
A meeting of the Aries Babies That Are Also Oxes Club will have a different result, I would think.
I only ever met one other person in my adult life who was also one, and she and I had a happy twelve years together.
🙂

garymount
Reply to  Menicholas
June 28, 2015 5:04 pm

I tell people that choosing 1, 2, 3, 4, 5 and 6 has the same chance of winning as any other number, and they don’t believe me.

Richard Barraclough
Reply to  garymount
June 29, 2015 3:03 am

But to increase your chances of being the ONLY winner of the big jackpot, you should choose numbers above 31, as many people include their birthday in their selection.

June 27, 2015 4:27 pm

BTW, nice opuntia. I sure would love a cutting. Have none with yellow flowers.

Ernest Bush
Reply to  Menicholas
June 27, 2015 9:30 pm

Most of the opuntia in the Arizona desert have yellow flowers, along with the chollas.

Reply to  Ernest Bush
June 28, 2015 9:26 am

Mine have red.

June 27, 2015 4:47 pm

Willis, I am not a self taught statistician. Am a Harvard taught PhD level econometrician amongst other fairly useless Harvard taught avocations. What you have just posted is truly beautiful. In two senses. 1. Skip the complicated math, just use the common sense intuitions that underly all math formalisms. 2. Illustrate simply. Five sectors increase significance odds>5. You should become a teacher…belay that, sailor, you already are. Amply evidenced here.
Highest regards on a ‘beautiful’ post.

Reply to  ristvan
June 27, 2015 7:55 pm

+1!

June 27, 2015 4:49 pm

One hundred scientists independently decide to try the same experiment. Five of them find significant results, and 95 don’t. Which studies get published?

Reply to  Fritz
June 27, 2015 4:57 pm

Great corollary to Willis’ main theorem.

ferd berple
Reply to  Fritz
June 28, 2015 7:14 pm

Tony
June 27, 2015 5:03 pm

Stats can be so misleading. My fav statistics problem: What is the probability that at least 2 people have their birthdays on the same day in a party of 20 people?

Reply to  Tony
June 27, 2015 5:11 pm

About 40%.

June 27, 2015 5:27 pm

With two people there is one pair, so the probability is 1/365
With three people there are three distinct pairings , so the probability is 3/365
With four people, there six distinct pairings, so the probability is 6/365
With five people there are 10 distinct pairings, so the probability is 10/365
With six people there are 15 distinct pairings, so the probability is 15/365
With seven people there are 21 distinct pairings, so the probability is 21/365
8—>28/365
9—>36/365
10—>45/365
11—>55/365
…..
20 gives (20*19)/(365*2) or 190/365…..about 0.52
and
N gives

Reply to  Joel D. Jackson
June 27, 2015 5:29 pm

Opps
N gives N times N-1 divided by (2) divided by 365

Reply to  Joel D. Jackson
June 27, 2015 5:56 pm

Hmm. I am not a statistician either, but I do know how to look stuff up:
https://en.wikipedia.org/wiki/File:Birthday_Paradox.svg

Reply to  Joel D. Jackson
June 27, 2015 5:58 pm

“20 gives (20*19)/(365*2) or 190/365…..about 0.52”
But:
“in fact the ‘second’ person also has total 22 chances of matching birthday with the others but his/her chance of matching birthday with the ‘first’ person, one chance, has already been counted with the first person’s 22 chances and shall not be duplicated), ”
https://en.wikipedia.org/wiki/Birthday_problem#/media/File:Birthday_Paradox.svg

urederra
Reply to  Joel D. Jackson
June 27, 2015 6:24 pm

A friend of mine was born on February 29th. 😉

Reply to  urederra
June 28, 2015 7:54 am

Again, another population-dependent example. If everyone was born in the same year, then the required number decreases only slightly (366 choices vs. 365). However, if it’s a random age group, then chances are diminished a bit more, because it’s more difficult to get a pairing. Of course, if you’re trying to match that specific person, it is more difficult still (see my comment above). It’s also worth noting that leap year introduces a complexity into the general calculation that is not considered in any of the examples so far, since any solution should include assumptions about the probability of 2/29/XX birthdays in the population. Calculating that should not be attempted by the faint hearted.
Taylor

u.k.(us)
June 27, 2015 5:40 pm

OK, all you statisticians, who is gonna win the 7th at Churchill Downs ?
I’ll take the # 6.
23 minutes to post.

Reply to  u.k.(us)
June 27, 2015 5:52 pm

Is it raining?
Because the 4 horse’s mother was a mudder. And his father was a mudder.
If it is raining, put me down for #4.
But if it is not, I will go with the favorite #2 Adhara at 3:2, on general principle.

u.k.(us)
Reply to  Menicholas
June 27, 2015 5:59 pm

Funny, but this ain’t about principles 🙂

u.k.(us)
Reply to  Menicholas
June 27, 2015 6:12 pm

It went 5/2/6.
Drats 🙂

Reply to  Menicholas
June 27, 2015 6:31 pm

Woo hoo!

Reply to  Menicholas
June 27, 2015 6:33 pm

The general principle is that the favorite usually wins. Parimutuel wagering leads to the correct horse being the favorite more often than not.

u.k.(us)
Reply to  Menicholas
June 27, 2015 7:14 pm

If it was that easy, there wouldn’t be so many people living under the bridges near the racetracks.
Ya think ??
Easy Nellie !!

Reply to  Menicholas
June 28, 2015 5:45 pm

Yes, you are right. I was referring to the wisdom of crowds, but this is likely not a valid exercise in such.
My pappy played the ponies…I am a poker man.

June 27, 2015 5:40 pm

Elegant post, Willis. Having no statistical education, and being a horrid gambler with minimal math skill, I struggle with understanding autocorrelation in subdivided datasets and significance in extrapolated trendiness. What interests me is the data sets themselves, and the seemingly recent and intensified data adjustments. I wonder how any trend or statistical significance can be arrived at when the data is adjusted; more disturbing (maybe) is the data infilling from vast regions of inadequately collected data, and further distressing are the measurements taken from poorly sited instrument stations, save of course satellite sampling. My intuition screams bloody murder that the data itself, and much of what I read regarding the ever rising global temperatures, is adjusted (always) so that the surface and troposphere are warming worse than we thought, and always CO2 is the main driver of said warming. I wonder what is the statistical significance of routine data adjustments? If this makes any sense I will appreciate any thoughts!

Bulldust
Reply to  George NaytowhowCon
June 28, 2015 6:31 pm

I am surprised that autocorrelation was not corrected for. I did one introductory class in Econometrics and that is one of the first things you get taught after the basics of OLS, R^2, t-tests etc. It should be noted that there can be both positive autocorrelation and negative. Positive AC is when you get longish trends and negative AC is when the data tends to fluctuate from one period to the next. Both can be tested for with simple statistical tests (e.g. Durbin-Watson). Heteroscedasticity is a fun one BTW – when the variance of the residual (error term) is not constant. Changing variance … only a statistician could dream up such a concept. I must tip my hat to Ron Oaxaca of UofA for being such a tough but fair econometrics teacher.
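
As a concrete illustration of the sort of check being described (a sketch with made-up data, not anything from the paper under discussion), the Durbin–Watson statistic on regression residuals is easy to compute directly; values near 2 suggest no lag-1 autocorrelation, values well below 2 suggest positive autocorrelation:

```python
import numpy as np

def durbin_watson(residuals):
    """Durbin-Watson statistic: sum of squared first differences over sum of squares."""
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(3)
t = np.arange(100)

for name, y in [("white noise", rng.normal(size=100)),
                ("random walk", np.cumsum(rng.normal(size=100)))]:
    slope, intercept = np.polyfit(t, y, 1)           # fit a straight-line trend
    residuals = y - (slope * t + intercept)
    print(name, round(durbin_watson(residuals), 2))  # near 2 vs. well below 2
```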

auto
Reply to  Bulldust
June 29, 2015 12:38 pm

Bully
+10 for mentioning your teacher alone!
Nice!
Auto

Bulldust
Reply to  Bulldust
June 29, 2015 8:20 pm

Thanks Auto – I was surprised to see he is still teaching there. I was at UofA about 25 years ago.

June 27, 2015 5:48 pm

The difference between a coin flip and a study of ice in the Antarctic is that you can calculate the probability of the coin beforehand. You can’t know what the probability of the ice is except by testing it year on year, and even then you can never know which results are random anomalies and which are expected due to the energy received. Therefore, to assign a probability to these events at all is completely meaningless. You can have 10,000 years of data, then suddenly get what seems like a freak result in the present. You won’t know if it really is a freak event or if it signals a change in climate conditions until you have waited for 5, 10 or even 100 years and can look back.

old44
June 27, 2015 6:02 pm

Here is a question on climate statistics.
If you travel to Yamal, cut samples from 34 trees and discard 33 of them, what are your chances of retaining the one sample that proves a theory?

Reply to  old44
June 27, 2015 6:37 pm

That depends. Is the person doing the selecting, and deciding if it is proved, a warmista?
If so the odds are 97% in favor of a consensus saying it is proven, but later found to be completely wrong.

Gary in Erko
Reply to  old44
June 27, 2015 8:44 pm

Depends. Is it to be used right-side-up or upside-down?

Reply to  old44
June 28, 2015 5:09 pm

And where the heck is Yamal?
Are you referring to Siberia?
Brrr…no thankee!

Reply to  old44
June 28, 2015 5:12 pm

Do they have trees there?
Is it not all permafrost?

Richard Barraclough
Reply to  Menicholas
June 29, 2015 3:08 am

I once spent 3 days on the trans-Siberian express. I reckon I have seen about half the world’s trees.

auto
Reply to  Menicholas
June 29, 2015 12:43 pm

Richard B
yes, the tour companies promote the Siberian tree-watch – and even some of the Canadian “50 mph to another place with trees”. And whilst I dare say Jasper and its ilk – in each of the biggest countries on the planet – have some fascinating side roads etc. – I know I’d rather do the Caucasus or the Atlantic Provinces – on a pleasure trip.
Pay me to look at trees – fine – everyone has a price.
Auto, who would rather look at oceans and seas

NeedleFactory
June 27, 2015 6:07 pm

“However, they are also only looking at a part of the year. How much of the year? Well, most of the ice is north of 70°S, so it will get measurable sun eight months or so out of the year. That means they’re using half the yearly albedo data. The four months they picked are the four when the sun is highest, so it makes sense … but still, they are discarding data, and that affects the number of trials.”
I think it makes a difference whether they decided which months to use before or after they examined the data.
Likewise, it makes a difference into how many geographic sectors they divide the arctic circle, and where they put the divisions.
In the latter case, I’m willing to assume they chose the sectors (based on the five seas) before making observations or calculations, and suspend suspicion. And, for myself, I’m willing to agree that looking at the albedo when the sun is highest, as you say “makes sense”, and again suspend suspicion.
But as you say, auto-correlation is another matter.

June 27, 2015 6:35 pm

It is not a coin that is an example of an event in Mr. Eschenbach’s example, but rather a flip. Before conducting a scientific study of ice in the Antarctic, one would have to identify the functional equivalent of a flip. In designing a study, a climatologist doesn’t usually do that. This has dire consequences for the usefulness of the resulting study. It has dire consequences for us when a politician makes conclusions from such a study the basis for public policy.
Under the generalization of the classical logic that is called the “probabilistic logic,” every event has a probability. Usually, one is uncertain of the value of this probability. This value can be estimated by counting the outcomes of events in repeated trials. These counts are called “frequencies.” With rare exceptions, climatological studies don’t produce them. In fact they can’t produce them because the events that need to be counted were not described by the study’s designer.
It is by counting events that scientists provide us with information about conditional outcomes of the future. By ensuring that this information cannot be provided by a climatological study, climatologists ensure that their works are completely useless.

June 27, 2015 6:38 pm

All statistics are tools for analyzing an existing, or past populations.
Any predictions or forecasts are based on the unstated and overwhelming assumption that the future will behave as it has in the past.
Predictions are made by statisticians, not statistics!

HAS
Reply to  Slywolfe
June 27, 2015 7:57 pm

The authors are careful to say don’t use these results out of sample. However as others have said without any prior discussion of hypothesized processes the concept of significance applied to the regression is meaningless. The observations are what they are and the linear trend is one of many calculations able to be derived from them.
To suggest significance the authors need to be arguing they are hypothesizing a linear model for the relationships with certain characteristics.
This is an investigative study and significance testing has no place in it.

Gary in Erko
Reply to  Slywolfe
June 27, 2015 8:47 pm

A quote from William Briggs – “If we knew what caused the data, we would not need probability models. We would just point to the cause. Probability models are used in the absence of knowledge of cause.”

Erik Magnuson
Reply to  Gary in Erko
June 28, 2015 1:51 pm

Which is why I have mixed feelings about “Six Sigma” – it implies a great deal of ignorance about your process. OTOH, used correctly, Six Sigma can be useful in reducing the ignorance.

Reply to  Slywolfe
June 27, 2015 8:50 pm

Slywolfe:
That “any predictions or forecasts are based on the unstated and overwhelming assumption that the future will behave as it has in the past” is the position that was taken by the philosopher David Hume in relation to the so-called “problem of induction.” The process by which one generalizes from specific instances is called “induction.” How to justify the selected process is the “problem of induction.” “If one has observed three swans, all of them white, what can one logically say about the colors of swans in general?” is a question that can be asked in light of this problem.
The problem of induction boils down to the problem of how to select those inferences that are made by a scientific theory. Traditionally the selection is made by one or another intuitive rules of thumb. However, in each instance in which a particular rule selects a particular inference, a different rule selects a different inference. In this way, the law of non-contradiction is violated by this method of selection. In Hume’s day, there was no alternative to this method. Thus, over several centuries of the scientific age the method by which a scientific theory was created was “illogical” in the sense of violating the law of non-contradiction. This scandal lurked at the heart of the scientific method.
Modern information theory provides an alternative that satisfies the law of non-contradiction. There is a generalization of the classical logic that is called the “probabilistic logic.” It is formed by replacement of the classical rule that every proposition has a truth-value by the rule that every proposition has a probability of being true. In the probabilistic logic, an inference has the unique measure that is called its “entropy.” Thus, the problem of induction is solved by an optimization in which the entropy of the selected inference is maximized or minimized under constraints expressing the available information. Among the well known products of this method of selection is thermodynamics. The second law of thermodynamics is an expression of this method.

DesertYote
Reply to  Terry Oldberg
June 28, 2015 12:49 am

I wish more people here would take up the study of Information Theory. I have suggested it many times, but no one seems interested. Besides what you have said, it is within Information Theory that one will find the proper treatment of such things as Measurement Uncertainty. Classical Statistics, alone, is not a powerful enough tool to analyze the data that is used in the study of climate.

Reply to  DesertYote
June 28, 2015 8:17 am

Well said!

Reply to  Terry Oldberg
June 28, 2015 9:36 am

I can tell that what you are saying here is important, but I am not sure I completely understand the point.
I will have to read up on “the law of contradiction”. I do not recall hearing this term used before now.
Can you clarify at all just what this means?
Thanks.

Reply to  Menicholas
June 28, 2015 10:39 am

I’ll try to clarify. The classical logic of Aristotle and his followers contains three “laws of thought.” One of these is the law of non-contradiction (LNC). Let ‘A’ designate a proposition. The LNC is the proposition NOT [ A AND (NOT A) ] where ‘NOT’ and ‘AND’ are the logical operators of the same name.
The rules of thumb (aka heuristics) that are usually used by scientists in selecting the inferences that will be made by their theories violate the LNC, for there is more than one rule of thumb, each selecting a different inference. One commonly used rule of thumb selects as the estimator of the value of a probability that estimator which is the "unbiased" estimator. Despite the attractiveness of the term "unbiased" and the ugliness of the term "biased", the "unbiased" estimator has the unsavory property of fabricating information. An estimator that fabricates no information is "biased" under the misleading terminology that is currently taught to university students. That they are misled in this way skirts the issue of the identity of that "biased" estimator which fabricates no information. The latter estimator is information-theoretically optimal.

Reply to  Terry Oldberg
June 28, 2015 9:37 am

Can you provide an example or two of this? I would like to understand.

Reply to  Menicholas
June 28, 2015 11:26 am

Menicholas:
Your interest seems to be in how one can construct a model without assuming the future will behave as it has in the past. One can do this by selecting optimization as the method by which the inferences made by a model are selected rather than the method of heuristics. Thermodynamics provides an example that will be familiar to those who have studied engineering or the physical sciences. The induced generalization maximizes the missing information per event (the “entropy”) of our knowledge of the microstate of the thermodynamic system under the constraint of energy conservation. An assumption that the future will behave as it has in the past is not made. This assumption is replaced by optimization.

Reply to  Terry Oldberg
June 28, 2015 9:38 am

Mr. Oldberg,
“However, in each instance in which a particular rule selects a particular inference, a different rule selects a different inference. In this way, the law of non-contradiction is violated by this method of selection.”
Can you provide an example or two of this? I would like to understand.

Reply to  Terry Oldberg
June 28, 2015 5:48 pm

“The induced generalization maximizes the missing information per event (the “entropy”) of our knowledge of the microstate of the thermodynamic system under the constraint of energy conservation.”
Well, if it was that simple, why didn’t you say so to begin with *wrinkles nose and ponders what lies beyond edge of universe*

Reply to  Menicholas
June 28, 2015 9:45 pm

The message is that simple but is hard to decode if one’s background in statistical physics is weak.

David L. Hagen
June 27, 2015 7:25 pm

Such bad statistics are why there is a need for raising the bar on statistical significance.
Valen Johnson proposed this in "Revised standards for statistical evidence", PNAS:

Recent advances in Bayesian hypothesis testing have led to the development of uniformly most powerful Bayesian tests, which represent an objective, default class of Bayesian hypothesis tests that have the same rejection regions as classical significance tests. Based on the correspondence between these two classes of tests, it is possible to equate the size of classical hypothesis tests with evidence thresholds in Bayesian tests, and to equate P values with Bayes factors. An examination of these connections suggest that recent concerns over the lack of reproducibility of scientific studies can be attributed largely to the conduct of significance tests at unjustifiably high levels of significance. To correct this problem, evidence thresholds required for the declaration of a significant finding should be increased to 25–50:1, and to 100–200:1 for the declaration of a highly significant finding. In terms of classical hypothesis tests, these evidence standards mandate the conduct of tests at the 0.005 or 0.001 level of significance.

Brian H
Reply to  David L. Hagen
July 3, 2015 5:45 pm

Yes, I was taught that 0.05 level of significance barely qualified as a valuable hint that something should be further investigated.

ShrNfr
June 27, 2015 7:44 pm

I suggest “Statistics Done Wrong: The Woefully Complete Guide” http://smile.amazon.com/Statistics-Done-Wrong-Woefully-Complete/dp/1593276206 as a good tome on this topic. Basically, a lot of things that are “significant”, just plain are not.
I have no affiliation with the author of this book.

Reality Observer
June 27, 2015 8:22 pm

Hmmm. What are the odds that every “innocent statistical error” just happens to fall on the side of confirming “climate catastrophe?”

gary turner
Reply to  Willis Eschenbach
June 27, 2015 11:14 pm

It strikes me that confirmation bias is a form of auto-correlation. Yes? No?

JPeden
Reply to  Willis Eschenbach
June 28, 2015 8:10 am

“This confirmation bias is the most likely explanation for the direction of the errors of mainstream climate science.”
Yeah, kind of like believing in Jack and The Beanstalk. Shouldn’t mainstream climate science have grown out of it?

Reality Observer
Reply to  Willis Eschenbach
June 28, 2015 7:03 pm

This week’s lesson in semantics…
My use of “innocent” automatically brought up the antonym “guilty” to Mr. Eschenbach – and undoubtedly to several here. I should have used “inadvertent,” “unintentional,” “unconscious” (or several other words) instead.
Correctly, the fact of incorrect statistical analysis does NOT imply fraudulent intent by the researcher(s). (The fact that the majority of such errors support only one possible conclusion does, however, indicate some sort of bias in the process, as noted.)
Of course, the definite finding of fraud committed by one or more researchers DOES strongly indicate that that particular set of researchers, and their conclusions at any time (past or future) are fraudulent.

David A
Reply to  Willis Eschenbach
June 29, 2015 3:45 am

Willis says, "we are all bozos on this bus". I like it. A wise man once said, "Those of us too good for this world are adorning some other."
Now a quick question regarding autocorrelation. You stated…
““Autocorrelation” is how similar the present is to the past. If the temperature can be -40°C one day and 30°C the next day, that would indicate very little autocorrelation. But if (as is usually the case) a -40°C day is likely to be followed by another very cold day, that would mean a lot of autocorrelation.”
This is an example of non-randomness WITHIN the period of study, but E.M. Smith above gave an example of non-randomness outside the period of study (the known 60-year ENSO swings, among others).
Does autocorrelation correct for non-random trends outside the period of study?

Neil Jordan
June 27, 2015 10:26 pm

rhymeafterrhyme
June 27, 2015 10:34 pm

Tossing coins I suppose
Is the same, more or less,
As the computer modeled predictions
They’re using to guess?
http://rhymeafterrhyme.net/false-predictions/

Reply to  rhymeafterrhyme
June 28, 2015 8:09 am

The models that have been used in making policy on CO2 emissions do not make “predictions.” They make “projections.” For predictions there are events. For projections there are none. To conflate “prediction” with “projection” is common and is the basis for applications of the equivocation fallacy in making arguments about global warming ( http://wmbriggs.com/post/7923/ ). Applications of this fallacy rather than logical argumentation are the basis for global warming alarmism.

Reply to  Terry Oldberg
June 28, 2015 9:43 am

Projections become predictions
With alarmists like Al Gore,
Making statements about the future
Is what he’s known for.

Reply to  Terry Oldberg
June 28, 2015 5:17 pm

Predictions are hard…especially about the future.
-Y. Berra

Dawtgtomis
Reply to  Terry Oldberg
June 28, 2015 5:27 pm

Had to laugh for sure! Good shootin’ Will!

Reply to  Dawtgtomis
June 28, 2015 9:55 pm

dawtgtomis:
Will’s shootin’ missed his target (if any). Thus, congratulations to Will on his marksmanship are misplaced.

Dawtgtomis
Reply to  Terry Oldberg
June 28, 2015 5:30 pm

Terry O., when they said "our grandchildren will not know what snow is" (way back when), that was a prediction, was it not?

Reply to  Dawtgtomis
June 28, 2015 9:02 pm

dawtgtomis:
Properly defined, a “prediction” is an extrapolation from an observed state of nature to an unobserved but observable state of nature. For example, it is an extrapolation from the observed state “cloudy” to the unobserved but observable state “rain in the next 24 hours.”
The claim that “our grandchildren will not know what snow is” does not match this description. What is the observed state of nature? What is the unobserved but observable state of nature? These questions lack answers.

Frank
June 27, 2015 11:50 pm

I think it is acceptable to break the Antarctic into five separate regions and test for significance without correction – if one had a reason beforehand to expect those regions to behave differently. For example, climate models might predict different trends in different regions. Otherwise, sub-dividing into five parts provides four extra opportunities to find "significance" by chance.
Human clinical trials of new drug candidates provide an excellent opportunity to find "significance by chance". If your drug didn't provide conclusive evidence of efficacy (p<0.05) with the total patient population treated, then one can look at only the males, or the females, or the younger patients, or the sicker patients, or those with marker X, Y, or Z, etc. – and find "conclusive" evidence of efficacy. Then you devise a "compelling rational" for why the efficacious sub-population responded better than the whole treatment group. Fortunately, the FDA has its own statisticians who are motivated to reject such shenanigans – unlike most peer reviewers of climate science papers. They ask the sponsoring company why they included patients less likely to respond in the clinical trial if they had a compelling rational. Then, they tell the sponsor to run a new clinical trial limited to patients now expected to respond to the drug and prove efficacy in this population is not due to chance.
The government's What Works Clearinghouse reviews published studies (many peer reviewed) on education programs and rejects more than 90% for a variety of methodological and statistical flaws. Their results are online.

Brian H
Reply to  Frank
July 3, 2015 6:13 pm

compelling rationale
rhymes with morale

MikeB
June 27, 2015 11:59 pm

The Famous Monty Hall Problem
As the finalist on a game show, you’re given the choice of three doors: Behind one door is a car. Behind the others, some goats (which you don’t want). If you choose the door with the car behind it, you get the car.
You pick one of the doors. At this point the host, Monty Hall, goes to one of the other doors and opens it to reveal a goat.
He then gives you a last chance to change your mind and switch your choice to the other closed door.
Should you change your mind?
a) It’s now 50-50, so it doesn’t matter
b) Yes, by switching you greatly increase your chance of winning the car
c) You should stick with your first choice.
Surprisingly, the answer is b).
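For anyone who wants to check this numerically, here is a minimal Monte Carlo sketch of the standard game, assuming (as the classic statement intends) that the host knows where the car is and always opens a goat door; door indices and the trial count are arbitrary.

```python
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)            # door hiding the car
        pick = random.randrange(3)           # contestant's first choice
        # host opens a door that is neither the contestant's pick nor the car
        opened = random.choice([d for d in range(3) if d != pick and d != car])
        if switch:
            # switch to the one remaining closed door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print("stay:  ", play(switch=False))   # ≈ 1/3
print("switch:", play(switch=True))    # ≈ 2/3
```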

Reply to  MikeB
June 28, 2015 9:51 am

How so Mike?

Reply to  MikeB
June 28, 2015 9:54 am
Reed Coray
Reply to  MikeB
June 28, 2015 11:09 am

The Monty Hall problem as stated by MikeB is another example of the Bertrand Paradox (see my comment above). In this case the statement of the problem doesn't include the "method the host uses to select the door he/she chooses to open." Most readers make the eminently reasonable assumptions (a) that the host knows which door the car is behind, and (b) the host uses that knowledge to open a door with a goat. [Note: For the three-door problem–one and only one door leads to the car and two doors lead to a goat–if the host knows which door leads to the car, independent of the door the contestant chooses, at least one of the remaining two doors will lead to a goat.] However, neither assumption is explicitly stated.
If the host possesses knowledge of the car's location and he uses that knowledge to always open a door that leads to a goat, then I agree the contestant should switch, because by switching he/she increases his/her probability of getting the car. However, if either (a) the host has no knowledge of which door contains the car, or (b) he possesses that knowledge but chooses a door to open by flipping an unbiased coin (i.e., a coin that has a 50% probability of landing heads-up and a 50% probability of landing tails-up) and selecting a door to open based on the result of the coin flip, then I believe there is no benefit to switching–i.e., keeping the original choice and switching to the unopened door are equally likely to win the contestant the car. The reason being that, using knowledge of which door leads to the car, the host can always open a door with a goat; but, lacking such knowledge or using the coin flip to choose which door to open, the host will open a door that leads to the car 1/3 of the time. This changes the likelihood that by switching doors the contestant improves his chances of winning the car.
For this case the situation is now similar to two contestants each choosing a door (not the same door), opening the third door, and asking the contestants if they want to switch doors with each other. One-third of the time, the opened door (i.e., the door not chosen by either contestant) will contain the car, in which case switching or not switching will produce the same result–i.e., both contestants get a goat. Two-thirds of the time, the opened door will contain a goat, in which case the probability of each contestant getting the car is 1/2 and switching gains one contestant what it costs the other. Since, from the standpoint of the likelihood of winning the car, the two contestants are identical, if it's advantageous for one contestant to switch, it must be advantageous for the other contestant to switch–which is impossible.
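A minimal sketch of the variant Reed Coray describes, assuming the host picks one of the two unpicked doors at random and only the games in which a goat happens to be revealed are counted; the trial count is arbitrary. Conditional on a goat being revealed, switching no longer helps.

```python
import random

def conditional_win_rate(switch, trials=200_000):
    wins = games = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # host opens one of the two unpicked doors at random (no knowledge used)
        opened = random.choice([d for d in range(3) if d != pick])
        if opened == car:
            continue                     # car revealed: discard this game
        games += 1
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / games

print("stay:  ", conditional_win_rate(False))   # ≈ 0.5
print("switch:", conditional_win_rate(True))    # ≈ 0.5
```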

Reply to  Reed Coray
June 28, 2015 5:32 pm

Yes, the discussion in the Wikipedia entry goes into some of these details, and both wikis contain a link to the Bertrand problem.
I was taken as well by the possible psychological reasons why people would not switch, as I was by the possibility of cognitive overload making an assessment of the odds difficult to compute on the spot.
Ex: People hate it more if they switch and are wrong than if they stick with the door they “own” already and are wrong.
In the latter case, one might feel more like they gave the car away after winning it to begin with, so to speak.
But to the specifics of the unspoken rules…I think most people, and especially those old enough to remember the show or how game shows in general work, will intuitively know that Monty knows where the car is, and that he will only open a door with a goat, never one with the car.
But figuring the actual odds out while standing there seems to be almost impossible, psychologically speaking.
Note that in some studies, only 13% of people switched. About the number that believe Elvis is still alive.
And note too, that in the original Marilyn vos Savant column, thousands of people, including PhD math teachers, excoriated her in writing for being so wrong…which of course she was not!
[BTW, what are the odds that the smartest person in the world would happen to have the last name of "Savant"? That seems more unlikely than the library detective named "Bookman" on Seinfeld! 😉 ]

Reply to  Reed Coray
June 28, 2015 5:37 pm

I was pondering how these same psychological factors might play a large role in cognitive bias in general, and also considered how these might relate to the reluctance of warmistas to consider contrary evidence.
Of course, skeptics are immune to such bias by our more logical, even tempered and methodical nature.
*insert/smirk*

Philip Mulholland
Reply to  Reed Coray
June 29, 2015 12:02 am

Reed,
You have an interesting take on the problem, but you must address the key fact, namely that Monty NEVER opens the door that reveals the car. The reasonable assumption is that Monty knows where the car is and this knowledge guides his choice; this is scenario 1. They told the contestant the truth, the car is there. But suppose, as you do, that Monty, like the contestant, does not know where the car is; then the only way for Monty to consistently never reveal the car is that there is no car when he makes his choice. They lied about the car being there; this is scenario 2. In this scenario the game is also rigged, but in a different way.
In scenario 2 Monty’s choice does not matter, all 3 doors conceal a goat and the car is only placed into the game after he has randomly opened one of two doors available to him. We now have the interesting question of how is the game fixed? Is it fixed by knowledge (scenario 1) or by subterfuge (scenario 2)? If we have the results of a large number of games and assume a consistent scenario is being used for all games played, then we should see for scenario 1 (the truth scenario) a benefit in switching, while for scenario 2, if the car is introduced randomly, the contestant’s choice collapses to 50/50 as you suggest.

Reed Coray
Reply to  Reed Coray
June 30, 2015 10:35 am

Phillip,
I agree with everything you wrote. In my original comment, maybe I should have written “overwhelmingly reasonable” instead of “eminently reasonable.” I’m old enough to have watched the original show (on television, not in person) with Monty Hall as the host. Furthermore, in all the times I watched the show, Monty always opened a door with a booby prize (goat). Having watched the show many times, if given the chance to switch doors, I, too, would switch. Thus, in light of my observations, I’d give odds way better than even money that your scenario 1 case represents reality. It’s only when I dissected the statement of the problem in light of the Bertrand Paradox that it occurred to me that the problem as stated in this thread (and in Marilyn vos Savant’s original write up in Parade magazine) did not have a unique solution because the answer depends on conditions not explicitly stated.
For what it’s worth. As mentioned by Menicholas (June 28, 2015 at 5:32 pm), I, too, was surprised by the vituperative tone of the letters written to her. As I recall, one person who excoriated her wrote a second letter admitting he was wrong and apologizing for his initial letter. Among the important life lessons I have learned are: (1) don’t be absolutely positive about anything, and (2) when addressing probability and/or relativity problems, don’t trust your intuition. At the time Ms. Savant first published the problem in Parade magazine, I wrote a letter to her describing my concerns. I received no response–either in Parade or in personal correspondence.

Andrew
June 28, 2015 12:01 am

Probability of a climate “scientist” finding a statistically significant result is 1.0000

hunter
Reply to  Andrew
June 28, 2015 7:11 am

…if the result supports the consensus.

zabriskie
June 28, 2015 12:10 am

You can’t tell if 7 coins are loaded or not by tossing all of them once. 7 heads is as rare as first 3 coins heads and 4 last coins tails. Bad experimental design, W.

rgbatduke
June 28, 2015 12:11 am

Hi Willis,
There isn’t any certainty of the best way to test multiple hypotheses, although the Bonferroni correction of taking some statistical test for significance and dividing the margin by m (for m hypotheses) is as reasonable as any. The problem arises when considering non-independent hypotheses, which in the case at hand involves spatial as well as temporal correlation. A second but nontrivial problem arises in precisely the context you refer to in your example — in the words of the late, great George Marsaglia, “p happens”. Precisely the same problem that requires one to use Bonferroni (at least) to assess significance when testing multiple hypotheses means that you have to take the rejection of the null hypothesis at the level of p = 0.05 with a very distinct grain of salt. After all, this happens one in twenty times by pure chance. If you had twenty perfectly fair coins or tested one perfectly fair coin twenty times, you would have a very good chance of observing a trial that fails at the p = 0.05 level. The difficulty is this trial can happen first, last, in the middle, and rejecting the coin as being fair on its basis can be a false negative.
Since I wrote Dieharder (based on Marsaglia’s Diehard random number generator tester), which is basically nothing but a framework for testing the null hypothesis “this is a perfect random number generator” in test after test after test, I have had occasion to look at the issue of how p happens in considerably more detail. For example, p itself should, for a perfect random number generator used to generate some statistic, be distributed uniformly. One can test for uniformity itself given enough trials.
This is relevant to the Antarctica question in that if one tested for the probability of some "extreme" against the null hypothesis for some statistic in five non-independent zones, and all five of them (say) produced p < 0.5, that is unlikely in and of itself at the level of 1/32, sufficient to fail a p = 0.05 test (which I consider nearly useless, but who can argue with tradition even when it is silly). Except that this is again confounded when one looks at spatial correlation and/or temporal correlation.
The fundamental problem is that the axioms of statistics generally require independent and identically distributed samples for most of the simple results based on the Central Limit Theorem to hold. When the samples are not independent, and are not identically distributed, and when the degree of dependence and multivariate distribution is not a priori known, doing the statistical analysis is just plain difficult.
This applies to this whole issue in so very many ways. I’ve recommended the papers by Koutsoyiannis repeatedly on WUWT, especially:
https://www.itia.ntua.gr/en/docinfo/673/
as a primer for analyzing climate timeseries and the enormous, serious difficulty with looking at some timeseries within some selected window and making complex inferences on the basis of some hypothesized functional trend (linear, quadratic, exponential, sinusoidal).
rgb
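A minimal numerical sketch of the family-wise error point rgb raises, assuming m independent tests each conducted at level alpha (real climate zones are spatially correlated, so this is only the idealized case):

```python
alpha = 0.05
for m in (1, 2, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** m              # P(at least one false positive), uncorrected
    p_any_bonf = 1 - (1 - alpha / m) ** m     # same, testing each hypothesis at alpha/m
    print(f"m = {m:2d}   uncorrected: {p_any:.3f}   Bonferroni-adjusted: {p_any_bonf:.3f}")
```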

Neil Jordan
Reply to  rgbatduke
June 29, 2015 12:37 am

Re your Koutsoyiannis reference, a related issue is the allegation of nonstationarity in recent hydrologic processes caused by manmade global warming. The implication is that at some time in the Edenic past, climate and the resulting hydrologic processes were stationary. If in fact climate has always changed naturally, then there is no Edenic past with respect to hydrologic stationarity.
On the topic of nonstationarity and climate change, “Hirsch and Ryberg, 2012” was referenced in a 2013 flow-frequency statistics workshop I attended. An online check led to CO2 Science:
http://www.co2science.org/articles/V15/N34/C3.php
Reference
Hirsch, R.M. and Ryberg, K.R. 2012. Has the magnitude of floods across the USA changed with global CO2 levels? Hydrological Sciences Journal 57: 10.1080/02626667.2011.621895.
Link to article:
http://www.tandfonline.com/doi/pdf/10.1080/02626667.2011.621895
[begin excerpt]
What it means
In discussing the meaning of their findings, Hirsch and Ryberg state that “it may be that the greenhouse forcing is not yet sufficiently large to produce changes in flood behavior that rise above the ‘noise’ in the flood-producing processes.” On the other hand, it could mean that the “anticipated hydrological impacts” envisioned by the IPCC and others are simply incorrect.
[end excerpt]
Watershed development, dams, or diversions as well as natural climate change are causes of nonstationarity (e.g. CT Haan, “Statistical Methods in Hydrology”). For example, see Yevjevich (referenced by Koutsoyiannis):
https://books.google.com/books/about/Stochastic_processes_in_hydrology.html?id=uPJOAAAAMAAJ
Yevjevich provides an explanation and tests for stationarity. In the simplest application, the record is divided into two or more parts and analyzed independently. “In general, if the subseries parameters are confined within the 95 percent tolerance limits about the corresponding value of the parameter for the entire series, the process is inferred to be self-stationary.”
If climate is considered to have been naturally changing in the past and continues to naturally change in the present and future, then climatic nonstationarity has always existed and is embedded in the hydrologic record. Can the hydrologic record be dusted to reveal the fingerprint of man? A hydrologic record long enough to reveal a fingerprint would need to be longer than the present warming cycle and would stretch back through the previous cooling cycle.

Paul Pentony
June 28, 2015 2:12 am

I agree that the five zones need to be taken into consideration when looking at probabilities. However whether you are right in taking into account the fact that they only used half of the daylight months is more contentious. If they actually checked other options and selected four months because they gave the best results you would be right. But if they settled on the four months on the basis that they were the “brightest” and did not even look at the others then the other four daylight months are no more relevant than the four dark months. It is not a question of what other data exists – it is a question of what other data was analyzed and then discarded.
Auto-correlation is a tricky beast. It is true that auto-correlated data can give rise to spurious trends, but it is also true that trending data can give rise to spurious auto-correlation. To test this, create your own series by adding a straight-line trend to random white noise. Provided that the standard deviation of the noise is reasonable relative to the trend, you will find some spurious auto-correlation. To correct for this one should estimate and remove the trend before performing the auto-correlation calculation – you would of course adjust the degrees of freedom in the auto-correlation calculation accordingly. Maybe you did this; if not, you should have.
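A minimal sketch of that experiment, assuming a synthetic series of white noise plus a straight-line trend (the slope, noise level, and series length are arbitrary choices):

```python
import numpy as np

def lag1_autocorr(x):
    x = x - x.mean()
    return np.sum(x[1:] * x[:-1]) / np.sum(x * x)

rng = np.random.default_rng(0)
n = 500
t = np.arange(n)
series = 0.02 * t + rng.normal(0.0, 1.0, n)   # straight-line trend + white noise

fit = np.polyfit(t, series, 1)                # estimate the linear trend
resid = series - np.polyval(fit, t)           # detrended residuals

print("lag-1 autocorrelation, raw:      ", round(lag1_autocorr(series), 3))
print("lag-1 autocorrelation, detrended:", round(lag1_autocorr(resid), 3))
```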

sophocles
June 28, 2015 2:19 am

Are the coins loaded?

Not necessarily. But the coin tosser could be cheating on the toss.
How can you cheat at tossing a coin? Easy: catch it in the air at as close to the same height as it left the hand on the flip. All fair flips should be allowed to land on the surface beneath one's feet.
Try it for yourself. I could, significantly more often than not, obtain the result I wanted by:
1. Turning the opposite face from the desired one up (and showing it).
2. Flipping the coin.
3. Catching the coin at as close as I could judge it to the height where it left my hand on the flip.
4. Turning the coin over onto the back of my hand.
Et voila.
The closer to the release point the catch is made, the less chance is involved. The coin rotates in the air. If there is the same number of rotations coming down as there were going up, then chance isn't a factor. The starting face is known, the coin has rotated an even number of times. The result is known.
Yes, it's cheating. It doesn't take much practice, either, if your kinesthetic senses are reasonable.
Please note: this is just from personal observation. I have never put it to a rigorous test. I will say, if flipping a fair coin to make a decision, always insist it is not caught but allowed to land.

Ed Zuiderwijk
June 28, 2015 2:44 am

“The odds of seven heads is the product of the individual odds, or one-half to the seventh power. This is 1/128, less than 1%, less than one chance in a hundred that this is just a random result. Possible but not very likely.”
I’m afraid this is nonsense. You confuse the apriori probability of throwing 7 heads in a row with an aposteriori assessment of its likelyhood given the data, i.e. that it has occurred. Seven heads or even more in a row in a long series of throws are common events and in this case, with only seven throws there is no way to decide whether your dice is loaded or not. In fact, the series (of seven throws) is too short to see the dfference between a fair and a fully loaded dice,

Reply to  Ed Zuiderwijk
June 28, 2015 4:37 am

Agreed. The correct answer – and in many other instances – is “I do not know, because of insufficient data and an uncontrolled design”.

Reply to  Willis Eschenbach
June 28, 2015 10:34 am

Seven heads twice in a row would tell me more.

Dinostratus
Reply to  Willis Eschenbach
June 29, 2015 7:21 pm

He’s trying to explain the Monty Hall problem to you dumb bunny.

Matt
June 28, 2015 2:47 am

Gambling fail, statistics fail 🙂 You absolutely cannot tell whether the coins are tampered with or not – not after tossing them once. If you tossed them ten times and they always came up the same way, ONLY THEN you could make that call. You don’t complain that the national lottery is rigged either, if you play it once for fun and happen to win 20 million, on the basis that it was highly unlikely that you would win, righties? In case of the national lottery, you’d even have to assume it is rigged after playing for your entire life – that’s how unlikely it is – but still, you don’t complain. Why is that? It’s called ‘luck’.
There is nothing special about your coins toss – it is only special because you have decided to assign significance to that particular 1 in a 128 outcome. There is nothing special about a full house, either. It is only special because we have assigned significance to that particular set of cards. If it weren’t in the rule book, you’d throw it straight out the window if I dealt you a full house…. (which wouldn’t even be known by that name in that case…)
I am a bit of a gambler myself, and lately, every week the results are out for this particular type of game, there is a poor, poor, soul that will take to the forum and bitch about how the game was rigged (because he didn’t win, of course) 🙂 Two things to note: WHY OH WHY does he come back every time like a true masochist if he either knows or thinks the game is rigged?! And: surely, if it should ever be his turn to win, that will be the week where the game wasn’t rigged for once, needless to say 😉 He will say adorable things, like how it was ‘predictable’ that whoever won would win — reaaally. So he could predict the winner, and hence knew it wouldn’t be him, yet he played anyway?! Seems more like hindsight to me… and a few other less charming things that could be said.

hunter
Reply to  Matt
June 28, 2015 5:06 am

Agreed one cannot make a conclusion that the coins are loaded, but to see something like that in the first roll is in fact highly unusual and would be cause to raise suspicions. That was the way I took his illustration.

Brian H
Reply to  Willis Eschenbach
July 3, 2015 6:39 pm

Doesn’t it only work if you called it in advance? “The next 10 flips will come up heads.”

Call A Spade
June 28, 2015 3:24 am

I have seen 16 heads thrown in two-up followed by 11 tails. Each result is completely independent from the previous one. Statistics in nature are only valid on the day of printing and have little relevance to future events.

Robert of Ottawa
June 28, 2015 3:51 am

Any outcome of coin tosses has the same probability. If the same outcome occurs over multiple throws, then you can conclude the coins were fixed.

Reply to  Robert of Ottawa
June 28, 2015 7:47 am

I do not think so. Would you bet 7 heads against 3 heads and 4 tails?

Reply to  bernie1815
June 28, 2015 10:41 am

There are more different ways to wind up with 3 heads and 4 tails.
If you specified ahead of time which coins would get heads and which tails, that would be closer to the same thing.
With two dice, every face is equally likely to appear, but seven is the most likely total by far…for this same reason. More ways to get it.
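A quick check of the counting argument, assuming fair coins and fair dice:

```python
from math import comb

total = 2 ** 7                                            # 128 equally likely 7-coin sequences
print("P(7 heads)              =", comb(7, 7) / total)    # 1/128  ≈ 0.008
print("P(3 heads and 4 tails)  =", comb(7, 3) / total)    # 35/128 ≈ 0.273
print("P(total of 7, two dice) =", 6 / 36)                # 6 of the 36 outcomes sum to 7
```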

June 28, 2015 4:01 am

“Autocorrelation” makes me wonder if en route from the lab to the Guardian/BBC some pieces of Climate Science research go thru AUTO-TUNING ie filter out all the boring results and hype and emphasise bits which are ‘on message’.
– of course the systematic excluding of skeptical criticism is part of that autotuning.

Eliza
June 28, 2015 4:23 am

Something interesting is happening with NH ice extent this year…
http://ocean.dmi.dk/arctic/old_icecover.uk.php

Reply to  Eliza
June 28, 2015 6:19 am

WOW, thanks for the link Eliza, NH ice extent doing something interesting indeed!

Reply to  George NaytowhowCon
June 28, 2015 10:44 am

Ditto with Greenland ice balance. Slowest melt season on record.
Snow still on the ground in the capital city, with only a few weeks left in the melt season.

Reply to  Eliza
June 28, 2015 8:17 am

Yes, indeed, I watch this daily, and results are back well within 2 Std deviations. Note that this link is to the 30% extent metric. The 15% extent is also available on the same site, and is more widely used. However I like the 30% analysis, because I think it may be more predictive of the trend (guessing that 15% might be more affected by wind shift)
Taylor

bit chilly
Reply to  Eliza
June 28, 2015 10:56 am

according to the arctic sea ice forum it is all going to disappear next week .

Reply to  bit chilly
June 28, 2015 11:00 am

I thought it was last week.

phlogiston
Reply to  Eliza
June 29, 2015 10:14 am

It looks like NOAA’s prediction of above average September minimum is on track:
http://origin.cpc.ncep.noaa.gov/products/people/wwang/cfsv2fcst/imagesInd3/sieMon.gif

Editor
June 28, 2015 4:26 am

Excellent post Willis! I took several probability and statistics classes in college and loved it when I was young. But after going to work as a petrophysicist (earth scientist) I was introduced to SAS (the Statistical Analysis System) and used it on a couple of early projects. I quickly found that it was so flexible and easy to use (abuse?) that you could get any answer you wanted and make it look real. I abandoned it and stochastic methods and never looked back. I’m strictly a deterministic guy to this day. You have put your finger on a very serious problem in science.

Reply to  Andy May
June 28, 2015 8:40 am

Use of probability and statistics is essential if one is to reach a logically valid conclusion in cases in which information needed for a deductive conclusion is missing. However, it is not enough. Both Bayesian and frequentist statistics have logical shortcomings with the result that we cannot rely on either of them by itself. To overcome the shortcomings one needs the help of an idea which, while 50 years old, has not yet penetrated the skulls of many researchers. This idea is information theoretic optimization: the induced generalization maximizes the entropy of each inference that is made by it or minimizes the entropy under constraints.

hunter
June 28, 2015 5:03 am

Excellent essay. Your point, “So in short, the more places you look, the more likely you are to find rarities, and thus the less significant they become.” is not unique but it is profound.
I think there is a corollary hidden in that quote someplace. Something to the effect that:
The rational mind does not lightly infer the universal from the particular; the faith based mind sees the unusual and concludes the universal…..
still needs work, but no coffee and very early…..

commieBob
June 28, 2015 5:31 am

My new favorite example of the misuse of statistics involves chocolate:
link
A real study was conducted.
They didn’t cheat on the methodology to get a pre-determined result
The data wasn’t cooked
They used standard formulas to calculate their p-values
The outcome was entirely due to bad experimental design

Here’s a dirty little science secret: If you measure a large number of things about a small number of people, you are almost guaranteed to get a “statistically significant” result. Our study included 18 different measurements—weight, cholesterol, sodium, blood protein levels, sleep quality, well-being, etc.—from 15 people. (One subject was dropped.) That study design is a recipe for false positives.

Journalists were completely sucked in. Not one journalist questioned the results. The very good news is that some people did see the problem. Those were people like us, posting comments on blogs.

There was one glint of hope in this tragicomedy. While the reporters just regurgitated our “findings,” many readers were thoughtful and skeptical. In the online comments, they posed questions that the reporters should have asked.
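A minimal sketch of why that design is a recipe for false positives, assuming 18 independent null endpoints each tested at p < 0.05 (real endpoints are correlated, which changes the exact number but not the lesson):

```python
k = 18            # number of measured endpoints
alpha = 0.05      # per-endpoint significance threshold

# Probability that at least one endpoint comes up "significant" by chance alone
p_at_least_one = 1 - (1 - alpha) ** k
print(f"P(at least one p < {alpha}) ≈ {p_at_least_one:.2f}")   # ≈ 0.60
```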

hunter
Reply to  commieBob
June 28, 2015 7:13 am

Your post is very interesting but the link seems to be non-working.

commieBob
Reply to  hunter
June 28, 2015 12:01 pm

Thanks hunter. Here’s another attempt at a link plus an url if that doesn’t work.
link
url = http://io9.com/i-fooled-millions-into-thinking-chocolate-helps-weight-1707251800

Bloke down the pub
June 28, 2015 5:46 am

I said “not long ago” but when I looked it was 2005 … carpe diem indeed.
Carpe diem, or Tempus fugit?

Don K
June 28, 2015 6:09 am

Apropos of nothing much, but illustrating why statistics are so hard to deal with.

However, suppose we take the same seven coins, and we flip all seven of them not once, but ten times. Now what are our odds that seven heads show up in one of those ten flips?

Are you asking what the odds are of seven heads appearing at least once in ten flips, or are you asking the odds that seven heads appear exactly once in ten flips? I'm quite sure that (127/128)^10 is the answer to one of those two questions, but I'm far from sure off the top of my head which it answers.
I suspect that careful examination of climate science, medicine, or any other discipline where predictions can’t be checked against precise answers would find a lot of cases of statistics answering a slightly different question than the authors intended to ask.
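For what it's worth, (127/128)^10 is the probability of seeing no seven-heads at all in ten flips; the two readings of the question then work out as below (a small sketch assuming independent, fair flips):

```python
from math import comb

p = 1 / 128                                             # chance of 7 heads on one 7-coin flip
n = 10                                                  # number of 7-coin flips

p_none          = (1 - p) ** n                          # no seven-heads at all ≈ 0.9246
p_at_least_once = 1 - p_none                            # at least once         ≈ 0.0754
p_exactly_once  = comb(n, 1) * p * (1 - p) ** (n - 1)   # exactly once          ≈ 0.0729

print(f"none: {p_none:.4f}   at least once: {p_at_least_once:.4f}   exactly once: {p_exactly_once:.4f}")
```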

commieBob
Reply to  Don K
June 28, 2015 3:07 pm

… a lot of cases of statistics answering a slightly different question than the authors intended to ask.

A long time ago a professor observed to me that students, having found a formula, would try to apply it no matter how inappropriate it was given the circumstances. Sadly, I’ve met a few people who never grew out of that habit.

Reply to  commieBob
June 28, 2015 6:45 pm

When you are holding a hammer, everything looks like a nail.

Paul Linsay
June 28, 2015 6:37 am

The physicist Luis Alvarez, who was most famous for identifying an asteroid impact as the cause of the mass extinction of dinosaurs (https://en.wikipedia.org/wiki/Alvarez_hypothesis) , used to say that everyone will encounter a 5 sigma event in his lifetime, which is a confidence limit of 0.00006 %. In high energy physics, no one will even consider a result that is less significant than that because they deal with huge amounts of data and understand that random fluctuations can create spurious results quite easily.
Random time series, like the coin toss, often have very counter-intuitive behavior. Consider the famous drunkard's walk. A drunk starts at a lamp post and takes one step north each time the coin toss comes up heads and one step south if it's tails. Most people think that the drunk will wind up hanging around the lamp post. In fact, the likely outcome is that he wanders far away, his typical distance from the lamp post growing like the square root of the number of steps (he does eventually return, but the expected waiting time for a return is infinite). If you plot his position versus time it will be a slow, oscillating drift away from the lamp post purely due to the randomness of a fair coin. See the first figure, Example of eight random walks in one dimension, in https://en.wikipedia.org/wiki/Random_walk.
Climate “scientists” don’t seem to understand that (a) high probability random events can occur while you’re watching and (b) that random events will wander around and look like something deterministic is happening.
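A minimal simulation of the drunkard's walk described above (the step count and seed are arbitrary); the final positions and maximum excursions show how far a fair-coin walk typically strays from the lamp post:

```python
import random

def walk(steps):
    pos, path = 0, []
    for _ in range(steps):
        pos += 1 if random.random() < 0.5 else -1   # one step north or south
        path.append(pos)
    return path

random.seed(3)
for i in range(5):
    p = walk(10_000)
    print(f"walk {i}: final position {p[-1]:+5d}, max distance from lamp post {max(map(abs, p)):4d}")
```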

June 28, 2015 6:42 am

I worked for years in inspection/quality control departments. Human bias is overwhelming. Sometimes you want to find the “Bad” one and sometimes you don’t. Our good friends in climate science desperately want to find the “Bad” one and folks on my side don’t.
Then there’s Alpha and Beta error.

June 28, 2015 6:49 am

What are the odds that the corrections GISS/NASA makes to the GHCN temperature record don’t reflect bias?

June 28, 2015 6:50 am

I thought I could put up an image?
http://oi57.tinypic.com/11ui0cp.jpg

KaiserDerden
June 28, 2015 7:06 am

there are no "honest mistakes" in science … they are willfully ignorant … and it's that willfulness that removes the honesty …

Neil
Reply to  KaiserDerden
June 28, 2015 1:29 pm

Really? Then how do you account for N-Rays?
https://en.wikipedia.org/wiki/N_ray

steverichards1984
June 28, 2015 7:07 am

It is interesting to note that most of the reports into climate issues conducted by public bodies suggest, in every case, that a statistics professional be included in each research team to prevent some of these gross statistical errors from recurring.

EternalOptimist
June 28, 2015 7:09 am

If someone tossed seven coins and they all came up heads, I would be mildly suspicious
If that person was a paid entertainer and the only reason he was tossing the coins was to please a crowd
I would be much more suspicious
If the paid entertainer had a peer group clapping away assuring me that it was all fair, I would find them to be suspect as well.
If the paid entertainer then offered me a tin foil hat for a thousand bucks so that I could toss seven heads, but only after he had left town, I would be extremely suspicious.
probably

Don Keiller
June 28, 2015 7:30 am

Statistically speaking there is a huge autocorrelation between climate "psience" papers that attempt to demonstrate that CO2 is the primary driver of Earth's climate.
And as Willis so eloquently states, "Once autocorrelation is taken into account both of the trends were not statistically significant at all"

Jim G1
June 28, 2015 7:34 am

Many also forget that there are many other types of error that are not covered by the confidence intervals quoted such as precision of measurement instruments, siting bias in surface temps, and so on.

Taphonomic
June 28, 2015 8:06 am

Willis, is there a typo?
Text states "For N = 10 flips of seven coins, this gives the odds of NOT getting seven heads as (127/128)^10, or 92.5%. This means that the odds of finding seven heads in twenty flips is one minus the odds of it not happening, or about 7.5%."
Should “twenty” be “ten”?

June 28, 2015 8:13 am

It’s “averse”, not “adverse”. Wanna bet?

Pete Austin
June 28, 2015 8:49 am

Q: I flip seven coins in the air at once and they all seven come up heads. Are the coins loaded?
A: Probably. Forget the math – it’s because why would you do a crazy thing like that if something funny wasn’t going on?
I’m pretty sure that a competent magician could rig coin flips with apparently normal coins, so I need to see it independently repeated, before I pay any heed. Likewise any scientific experiment where anything significant hangs on the result.

June 28, 2015 9:17 am

Several bloggers in this thread seem to be under the misapprehension that probability and statistics can be applied to global warming climatology as this field of study is presently constituted. Probability and statistics cannot currently be applied, for a necessary ingredient is missing: the functional equivalent of a coin flip has yet to be identified for global warming climatology. The counts that are called "frequencies" cannot be made in lieu of identification of this functional equivalent. Thus, there are no relative frequencies, probabilities or probability values. There are no truth-values. The architects of global warming climatology have disconnected it from logic!

Reply to  Terry Oldberg
June 28, 2015 9:29 am

Probability and statistics are essential to any scientific study that deals with measuring any aspect of nature. When one measures such things as mass, length, charge, velocity, etc, there are limits to the precision of the measuring device, which can only be derived and understood using probability and stat.
Repeated measurements that constitute a time series become the input to a pattern recognition process. Probability and statistics tell you that the more samples you take, the closer your estimate of the mean is. Not only can probability and stat be applied to climatology, they are essential to ANY study of it.

Reply to  Joel D. Jackson
June 28, 2015 9:36 am

Probability and statistics cannot be applied to global warming climatology unless the events are identified. I claim they are not yet identified. Do you disagree?

Reply to  Joel D. Jackson
June 28, 2015 9:57 am

The very definition of climate, namely average temperature, wind speed, wind direction, humidity and precipitation are based on doing a statistical operation on the set of these measurements. Average …..also known in the world of statistics as “expected value,” or the first moment of a random variable.
..
So, yes, I disagree with you, as all of my mentioned measurements of climate are defined “events”

Walt D.
Reply to  Joel D. Jackson
June 28, 2015 1:25 pm

Probability and statistics tell you that the more samples you take, the closer your estimate of the mean is.
No. This is based on certain assumptions about the random process. The statistical term for this is ergodicity. There are certain processes that are non-ergodic. The most egregious error in applying statistics to climate "science" concerns confidence limits. Typically temperatures, or temperature changes, are not independent and identically distributed, so the hypotheses required to invoke the central limit theorem do not apply. You need to know a distribution to deduce confidence limits.

Reply to  Joel D. Jackson
June 28, 2015 1:42 pm

Walt D says: “No. This is based on certain assumptions about the random process”

Have you ever seen the definition of “Standard Error?”
..
https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTC_Mv-Zuk2gndYqSNQeIDqTRnF24MQUoxxOwVFlLBm-ZyehMv0cA

No assumption of ergodicity involved.

Walt D.
Reply to  Joel D. Jackson
June 28, 2015 7:11 pm

Have you ever seen the definition of “Standard Error?”
This does not work if you are dealing with a distribution where either the mean or the variance is not defined. For example, the Cauchy distribution.
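A minimal numerical sketch of that point (sample sizes and seed are arbitrary): the running mean of normal draws settles down as n grows, while the running mean of Cauchy draws, which have no defined mean or variance, never does.

```python
import numpy as np

rng = np.random.default_rng(42)
for n in (100, 10_000, 1_000_000):
    normal_mean = rng.normal(0.0, 1.0, n).mean()   # shrinks roughly like 1/sqrt(n)
    cauchy_mean = rng.standard_cauchy(n).mean()    # does not converge at all
    print(f"n = {n:>9}:  normal mean = {normal_mean:+.4f}   cauchy mean = {cauchy_mean:+10.2f}")
```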

Reply to  Joel D. Jackson
June 28, 2015 7:15 pm

Walt, if the variance is undefined, then the “s” term in the equation is undefined.

Reply to  Willis Eschenbach
June 28, 2015 1:11 pm

Willis:
I’m not saying that “there no such thing as an average temperature of the cube, because it doesn’t undergo any ‘events’.” “Event” is a concept of probability theory. In particular, a probability is the measure of an event. Thus, for example, the event of ‘heads” has a probability.
There is a mathematical theory of “measure” that describes some of the properties of a probability. This theory is worthy of study if you wish to attain a command of probability and statistics.
The concept called a “unit event” sits on the theoretical side of science. The concept called a “sampling unit” sits on the empirical side. An axiom of probability theory states that the value of the probability of a unit event is 1. In order for this axiom to be empirically satisfied, corresponding to every unit event must be 1 sampling unit. Thus, the relation between unit events and sampling units is one-to-one.
Its complete set of sampling units is called the “population” of a study. A “sample” is comprised of sampling units that are drawn from a study’s population. If one draws a sample and observes the outcome in each element of this sample one can count the sampling units that correspond to each of the possible outcomes of the corresponding events. These counts are called “frequencies.” The heights of the vertical bars of a histogram are proportional to the frequencies of the various possible outcomes.
I’m not presently able to count the sampling units corresponding to the observed outcomes of the events for the study that is underway in global warming climatology because after eight years of looking for a description of them I have not yet found one. AR4 describes no unit events or sampling units. AR5 describes unit events and sampling units that are not a part of global warming climatology. If you know of a description of the unit events or sampling units of global warming climatology that is provided by an authoritative world body such as the WMO please clue me in. In the meantime, I’ll believe that global warming climatologists have committed the worst possible blunder in the design of a study: for no population to underlie the theory that is a product of this study.
I don’t know what you are missing. You might try to construct a histogram from a global temperature time series of your choice. If you try to do so you’ll find that the things that you need to count in assigning heights to the various bars of your histogram, the sampling units, have yet to be identified.

Reply to  Willis Eschenbach
June 28, 2015 1:53 pm

So Terry, if I called you on the phone tomorrow at 3:00 pm, and asked you what the temperature at you home was, how would you respond? (Let’s assume you have a thermometer hanging outside one of your windows)

Reply to  Joel D. Jackson
June 28, 2015 3:26 pm

Joel D. Jackson:
If I had my wits about me, I'd answer politely that your request was of the form of an equivocation, there being many different temperatures at my home. I'd point out that if I were to answer your question I would do so by drawing a conclusion from an equivocation and that to do so would be logically illicit.

Bubba Cow
Reply to  Willis Eschenbach
June 28, 2015 2:20 pm

Oldberg
you say (above) “… there are no relative frequencies, probabilities or probability values. There are no truth-values. The architects of global warming climatology have disconnected it from logic!”
Are you saying that the population distribution of a variable, say T, is not known and therefore “samples” from such an unknown distribution do NOT regress to the mean; that we are not entitled to perform these sorts of statistics because we don’t know the nature of the distribution???
Trying to comprehend your comments as well.

Reply to  Bubba Cow
June 28, 2015 3:41 pm

Bubba Cow:
No. I am saying that the value of 'T' is not a description of an event in the control of the climate. A univariate model such as this one supplies a policy maker with no information about the outcomes from his/her policy decisions, making control of the climate impossible. Given a multivariate model, a degree of control is a possibility provided that the mutual information of the model is not nil. Trying to control the climate when the mutual information is nil would be like trying to control your car with the steering wheel disconnected from the front wheels.

Reply to  Willis Eschenbach
June 28, 2015 3:34 pm

Let’s try it again…
So Terry, if I called you on the phone tomorrow at 3:00 pm, and asked you what the reading on your outside thermometer is, how would you respond?

Reply to  Joel D. Jackson
June 28, 2015 4:23 pm

Joel D. Jackson
I’d give you the reading.

Reply to  Willis Eschenbach
June 28, 2015 3:45 pm

“I am saying that the value of ‘T’ is not a description of an event in the control of the climate.”

Classic “straw man” argument.
Nobody in this thread is talking about controlling the climate.

Reply to  Joel D. Jackson
June 28, 2015 4:29 pm

You’ve concluded that my argument is a “strawman argument” by using the the false claim that “Nobody in this thread is talking about controlling the climate” as the premise to an argument. The transcript, however, reveals that I am talking about it.

Reply to  Willis Eschenbach
June 28, 2015 4:26 pm

Terry says, “I’d give you the reading.”
..
Thank you. Now I’d take the reading, go to the histogram for the particular date, time and location, and add 1 to the bar directly over the number you provided.

Reply to  Joel D. Jackson
June 28, 2015 6:43 pm

Joel D. Jackson:
Recall that a temperature value is a real number and that in any finite interval the count of real numbers is infinite. Thus, if you were to add 1 to the count for a particular temperature it is extremely unlikely that the count would ever rise above 1. Also, for the vast majority of values, the count would never rise above 0. When you are through with your experiment, the frequency values of your histogram will be almost entirely 0s with a few 1s.

Reply to  Willis Eschenbach
June 28, 2015 4:32 pm

Your argument remains a “strawman”

Reply to  Willis Eschenbach
June 28, 2015 7:01 pm

Terry, I understand that temperature is a real number. However, I doubt that the thermometer hanging outside of your window would provide a precision greater than an integer value over the range of -100 to 150 degrees Fahrenheit. Of course I may be wrong, and you might have a super duper digital precision laboratory class thermometer that gives two digits of significance after the decimal point, but then, in that case, the number of readings you can get from it is not infinite, but finite. Let's say it has five digits in the display with two digits after the decimal point and a +/- sign. The range would be +150.00 to -100.00 Fahrenheit. This only gives you 25,001 possible readings, not even close to "infinite".

PS please see my other post regarding continuous random variables.

Reply to  Joel D. Jackson
June 29, 2015 7:56 am

Joel:
In our conversation we are currently in a box created by your insistence that a global temperature value is a suitable candidate for use as the outcome of an event. We can carry your idea forward while dealing with the reality of observational error by specifying that the global temperature value is not the true value but rather is the measured value. The resulting histogram is identical to the one for the true value, thus sharing all of the shortcomings of this histogram that I identified for you previously. In particular, the dearth of sampling units associated with each outcome causes a complete loss of statistical significance.
If you are willing, we can escape from this box by abstracting (removing) our description of the global temperature from selected details. We replace the proposition that the global temperature has a particular value (which is a real number) by the proposition that the global temperature has a value that lies within a specified range of values. Reduced to logical terms, this idea can be expressed by the proposition that the global temperature is
T1 OR T2 OR…OR Tn
where T1, T2, …, Tn are alternate values for the global temperature and OR designates the logical operator of the same name. T1 OR T2 OR…OR Tn can be described as an "abstracted state" for it provides a description of the global temperature that is abstracted (removed) from the fine detail. The "macro-state" of statistical thermodynamics is defined similarly and for the same reason.
Fruitful use of the idea of an abstracted state was recently made by Willis Eschenbach in creating his histogram from data in a global temperature time series. Through use of abstracted states the builder of a histogram increases the count of the sampling units that are associated with each vertical bar, thus increasing the statistical significance of conclusions that are drawn from the empirical data.
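A minimal sketch of the binning idea described above, using a synthetic stand-in for a global temperature anomaly series (the values, bin width, and seed are arbitrary): counting exact real-valued readings gives counts of almost all 0s and 1s, while counting readings per half-degree range gives a usable histogram.

```python
import numpy as np

rng = np.random.default_rng(7)
anomalies = rng.normal(0.2, 0.3, 1200)        # stand-in for 100 years of monthly anomalies

bins = np.arange(-1.0, 1.51, 0.5)             # half-degree "abstracted states"
counts, edges = np.histogram(anomalies, bins=bins)

for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"[{lo:+.1f}, {hi:+.1f}) : {c} sampling units")
```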

Reply to  Willis Eschenbach
June 28, 2015 9:15 pm

I think that the futility of using "climate science", and using the predictions or projections that it makes to guide economic policy, hinges on the belief by some that the climate can somehow be thus controlled.
My observation is that the futility of this is obvious to most skeptics but not most warmistas, and is indeed the subject of much heated discussion in the universe of climate blogging and elsewhere.
For if there is no belief that these policy decisions can or will alter the trajectory of the climate systems of the Earth, then why implement them?
From there the discussions go off in many directions, such as what temperature should policy makers be aiming for, whether they really believe any of this or if it is all just a power and money grab, etc.
Controlling climate, or taking our hands off of the steering wheel and accelerator/brake pedals to let nature take its random course, is the central theme of climate alarmism, whether explicitly stated or not.
The concept of uncertainty is similarly central to many articles, and the ensuing conversations and criticisms of them.

Reply to  Menicholas
June 29, 2015 11:37 am

Menicholas:
It is conceivable that a degree of control over the climate could be achieved. This is a possibility if and only if a model were to provide us with information about the outcomes of events in the period before these outcomes become observable. In a stupendous blunder, global warming climatologists have spent 200 billion US$ and 20 years on a line of research that provides us with no such information. They have aggravated their offense by convincing politicians that today's climate models already allow governments to control the climate when this is demonstrably untrue.

Reply to  Willis Eschenbach
June 29, 2015 6:20 pm

I am more or less agnostic on the question of whether or not humans may ever exert any degree of influence over the temperature and rainfall trends of the entire globe.
Clearly some influence is achieved already. With changes in land use, installation of surface paving, and cloud seeding, things are not as they otherwise would have been.
I am fairly certain that the weather patterns over the state of Florida have been altered. Large portions of the state that were formerly swamp and marshlands have been drained and converted to other landforms…from cities, to orange groves and pastures, and in general less saturated conditions. Since much of the rain that falls during certain times of the year originates as moisture that was evaporated from the land surface, having less saturated conditions has almost surely led to decreased rainfall, or at least altered patterns. (For evidence of this I point to periods of severe drought, when lack of soil moisture begets more drought. During these times, weather fronts can be seen to simply collapse upon crossing the coastline, time after time, for months on end. Other evidence exists that is more rigorous.)
****Asterisk alert**** Another form of control that I suspect may be being used, but have no hard evidence or even any strong conviction, is in disruption of tropical systems as they approach the state. Papers have been written describing or proposing dumping various chemicals or substances (such as hydrogel) into carefully chosen sectors or areas of such systems as they approach land, but there never seems to be any follow up. I suspect if such was being done, it would have to be kept secret for numerous reasons. A storm diverted from one area hits somewhere else, and the somewhere else residents and governments sue, or people sue because not enough was done to prevent damage, or else people just scream and moan because they hate it when governments do stuff like this. Some may simply object because the consequences of such efforts, whether successful, unsuccessful, or partially so, are almost surely poorly understood.
Not a tin foil hat wearer, I am just wondering to myself if such a thing were attempted, would it be revealed? Almost surely not.
Note since the disastrous years of 2004-2005, not a single major hurricane has made landfall as such. We have seen them appear to become ragged, lose strength, develop dry sectors, etc, as they approached. No evidence, no reason to think so, except that the idea to disrupt storms with hydrogel makes sense and we have not heard of it tried, so maybe they do try it and just say nothing.
[If I disappear mysteriously in the near future, or suffer an untimely accident…forget you read this 😉 ] ****End alert****
But the question which is relevant to CAGW is whether regulating fossil fuels in an attempt to prevent or limit further CO2 rise is a worthy goal, or a misguided folly, or somewhere in between.
My best guess is that, between the natural variability of the climate, the uncertainty of how much effect CO2 really has, the difficulty in achieving worldwide compliance (to say nothing of the cost…separate issue), and the highly dubious nature of the proposition that a warmer earth, and an atmosphere with more CO2, is even a thing to be feared, rather than welcomed…between all of these there is zero chance of control, even if we do have an influence. I think any efforts on the part of people to steer the climate in a particular direction have as much chance of succeeding as a troop of drunk monkeys have of piloting a plane to a particular destination, while sitting in a cockpit with the windows painted black and beating on the controls with whiffle bats.

Reply to  Willis Eschenbach
June 29, 2015 6:24 pm

Success being defined as general agreement the world over that things were made substantially better by any efforts and the subsequent changes made to the climate regimes.

Curious George
Reply to  Terry Oldberg
June 28, 2015 11:53 am

A measurement of a temperature is an “event” or a “coin flip” – except that a coin flip is not supposed to influence the next flip of the same coin, but today’s temperature definitely influences tomorrow’s temperature. So we expect a high degree of autocorrelation in most meteorological or climate data.

Reply to  Curious George
June 28, 2015 3:06 pm

A measurement of a global temperature is a kind of event but it is the wrong kind of event for the controllability of the climate. My argument is contained in my recent response to Mr. Jackson.

Reply to  Curious George
June 28, 2015 3:13 pm

Global temperature is not an “event”
It is an operationally defined calculation based on thousands of measurements separated by space and time.
..
I also think that nobody on this blog would consider that it is possible to “control” the climate.

Reply to  Joel D. Jackson
June 28, 2015 4:20 pm

Joel D. Jackson:
You are right in stating that the “global temperature is not an ‘event’.” A value for the global temperature is, however, one of the possible outcomes of a kind of event.

Reply to  Terry Oldberg
June 28, 2015 2:56 pm

Joel D. Jackson
With reference to the global temperature, the existence of an average implies the existence of a global temperature time series. It does not imply the existence of a description of the unit events or sampling units of a study.
As you implicitly point out, global temperature values could be the elements of a study’s sample space. However, for controllability of the climate, the outcome probabilities must be conditional upon accessible states reached before the outcome of the associated event becomes observable. Thus, in addition to a sample space there must be the set of accessible states, the “conditions” as they are often called. The right kind of event for controllability is described by a pairing of a condition with an outcome.
By climatological tradition, outcomes are averaged over 30 years. The global temperature record then supplies between 5 and 6 independent events going back to the beginning of this record in 1850. On the assumption that there are two possible outcomes in the sample space, the independent events are too few, by a factor of about 30, for statistical significance of conclusions from a study. Define the outcomes as global temperatures and one reduces the average number of independent events per element of the sample space to nil.
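For what it is worth, the arithmetic behind the "between 5 and 6" figure, assuming non-overlapping 30-year averaging periods over a record running from 1850 to roughly the date of this exchange:

```python
# Arithmetic behind the "between 5 and 6 independent events" figure, assuming
# non-overlapping 30-year averaging periods and a record beginning in 1850.
record_years = 2015 - 1850        # approximate length of the record, in years
averaging_period = 30             # climatological averaging period, in years
print(record_years / averaging_period)   # 5.5 -> "between 5 and 6" outcomes
```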

Reply to  Terry Oldberg
June 28, 2015 3:05 pm

“However, for controllability of the climate”

You lost me with that. I don’t think “control” of the climate is an issue here.
..
Secondly, “On the assumption that there are two possible outcomes in the sample space” is an invalid assumption. Thermometers have more than two readings.

Reply to  Joel D. Jackson
June 28, 2015 3:51 pm

Joel D. Jackson:
I am a critic of methodology. Control of the climate is an issue regarding the methodology of global warming research. That the climate remains uncontrollable after the expenditure of 200 billion dollars on research devoted to establishing control over the climate, and that policy makers continue to try to control the climate though control is impossible, indicates that something is seriously amiss with this methodology.

Reply to  Joel D. Jackson
June 28, 2015 4:02 pm

Joel D. Jackson:
As the value of a temperature is a real number, a temperature has an infinite number of values. Given a sample of finite size, for the vast majority of these values, the sample size is nil. Thus, for statistical significance it is necessary to aggregate values via a description that is “coarse grained.” In the coarsest grained of descriptions that is compatible with control of the climate, the sample space contains two possible outcomes. This, however, produces far too few independent events for statistical significance given that the averaging period for the outcomes is 30 years.

Reply to  Terry Oldberg
June 28, 2015 4:03 pm

“Control of the climate is an issue regarding the methodology of global warming research.”

OK….. so, for example, can you tell me how drilling ice cores in glacial ice is going to control the climate?
.
How do ARGO buoys control the climate?
.
How do orbiting satellites measuring microwave brightness… “control” the climate?

I do believe you have a serious problem understanding what “scientific research” is, as opposed to using the results of said scientific research for making political policy decisions.
..
No matter which side of the debate you are on, skeptic or not, neither side advocates “controlling” the climate.

Reply to  Joel D. Jackson
June 28, 2015 4:35 pm

Joel D. Jackson:
To the contrary, the EPA is among the many agencies of government that are attempting to control the climate through curbs on CO2 emissions.

Reply to  Terry Oldberg
June 28, 2015 4:08 pm

Terry says: ” Given a sample of finite size, for the vast majority of these values, the sample size is nil..”
.
I can't understand how a sample of fine size can be nil …..Do you mean that if my sample size is 10 then it is nil?

Reply to  Joel D. Jackson
June 28, 2015 6:24 pm

Joel D. Jackson:
Given that the elements of the sample space are the values of T, the number of values in this sample space is infinite. The number of observed values is finite. Thus, the number of observed values divided by the number of values in the sample space averages nil. It follows that for the vast majority of the values in the sample space, the number of observed values is necessarily nil. For example, if the proposition is that T = 15.26…(to an infinite number of decimal places) the probability value is very close to 1 that no event having this outcome has been observed.

Reply to  Terry Oldberg
June 28, 2015 4:10 pm

(my typing stinks today)
..
Terry says: ” Given a sample of finite size, for the vast majority of these values, the sample size is nil..”
.
I can’t understand how a sample of finite size can be nil …..Do you mean that if my sample size is 10 then it is nil?

Reply to  Terry Oldberg
June 28, 2015 4:38 pm

Terry:
..
RE: ” the EPA is among the many agencies ”

You continually use the wrong word. Try using “influence” instead of “control”

Reply to  Terry Oldberg
June 28, 2015 6:43 pm

Terry says: “the number of observed values divided by the number of values in the sample space ” which of course any statistician will tell you has no meaning.
..
Terry says: “averages nil”. Wrong again: the limit as x approaches infinity of 1/x is zero. It is not an “average”; it is exactly equal to zero.
..
Terry now says, “It follows that for the vast majority of the values in the sample space, the number of observed values is necessarily nil.” ……this is the most illogical deduction going. If the sample space is five, it is not necessarily nil. In fact it never is nil, it is five.
..
Your problem with the proposition of ” T = 15.26 ” is that you never make one like that in the case of a non-discrete random variable. Probability statements in the continuous case are over an interval, not specific values. Try reading this for a refresher on how probability deals with random variables that are continuous (non-discrete) https://onlinecourses.science.psu.edu/stat414/node/88
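A small sketch of the interval point, using a purely illustrative normal model (the mean and spread below are invented for the example, not estimates of anything):

```python
# For a continuous random variable, probability is assigned to intervals, not
# to exact values. The normal model here is hypothetical and for illustration.
from scipy.stats import norm

T = norm(loc=15.0, scale=0.5)

# P(T equals any single exact value, e.g. 15.26) is zero by definition;
# probability mass lives on intervals and comes from the CDF.
p_interval = T.cdf(15.5) - T.cdf(15.0)   # P(15.0 <= T <= 15.5)
print(p_interval)                        # ~0.34
```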

Reply to  Joel D. Jackson
June 29, 2015 8:26 am

Regarding continuous random variables: in building a model using this concept, the builder claims to know the functional form of a parameterized probability density function. He assigns a value to each parameter, usually through the use of maximum likelihood estimation. MLE is an intuitive rule of thumb, aka heuristic, one of whose traits is to fabricate information.
The catch is that God does not send to scientists the specifications for parameterized probability density functions. They are fabricated, often by the unwarranted claim that the data are normally distributed. In the creation of a PDF, information is fabricated. People suffer, die and lose their fortunes when scientists fabricate information. For them to do so is unethical.
An ethical alternative to fabrication of information is to avoid fabrication of it. This can be accomplished with the help of modern information theory.
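For readers unfamiliar with the term, the sketch below shows roughly what maximum likelihood estimation does once a functional form has been assumed; the assumed normal form is exactly the step being criticised above, and the data are synthetic placeholders.

```python
# Sketch of maximum likelihood estimation under an *assumed* normal form; the
# choice of that form, not the fitting step, is what the comment objects to.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
data = rng.normal(loc=0.3, scale=0.8, size=200)   # stand-in observations

mu_hat, sigma_hat = norm.fit(data)   # MLE of the mean and standard deviation
print(mu_hat, sigma_hat)             # parameter values assigned by MLE
```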

richardscourtney
Reply to  Terry Oldberg
June 29, 2015 12:10 am

Terry Oldberg:
A person does not provide information by merely stringing together words that have a meaning only understood by their presenter; e.g.
Twas brillig, and the slithy toves
Did gyre and gimble in the wabe:
All mimsy were the borogoves,
And the mome raths outgrabe.
Therefore, I again ask you a question I have repeatedly put to you in past years. It is
Please provide a clear and concise definition of what you mean by an “event” such that anyone can understand what is – and what is not – an “event” as you understand it.
All your comments will remain meaningless nonsense unless and until you answer this question.
Richard

phlogiston
Reply to  Terry Oldberg
June 29, 2015 10:20 am

Richard
Love the Jabberwocky quote
and the movie:

Reply to  Terry Oldberg
June 29, 2015 7:01 pm

Mr. Jackson, I do not wish to criticize you here, but I am trying to follow the points you guys are making. I do not think that one can precisely say that the words “control” and “influence”, whether in broad usage or a specific context, are as completely distinct in their meaning as you are implying.
Just saying…this is muddying the debate.
CONTROL
[ kənˈtrōl ]
NOUN
1.the power to influence or direct people’s behavior or the course of events:
“the whole operation is under the control of a production manager”
synonyms: jurisdiction · sway · power · authority · command · dominance · government · mastery · leadership · rule · sovereignty · supremacy · ascendancy · charge · management · direction · supervision · superintendence
VERB
1.determine the behavior or supervise the running of:
“he was appointed to control the company’s marketing strategy”
synonyms: be in charge of · run · manage · direct · administer · head · preside over · supervise · superintend · steer · command · rule · govern · lead · dominate · hold sway over · be at the helm · head up · be in the driver’s seat · run the show
INFLUENCE
[ ˈinflo͝oəns ]
NOUN
the capacity to have an effect on the character, development, or behavior of someone or something, or the effect itself:
“the influence of television violence”
synonyms: effect · impact · control · sway · hold · power · authority · mastery · domination · supremacy · guidance · direction · pressure
VERB
have an influence on:
“social forces influencing criminal behavior”
synonyms: affect · have an impact on · impact · determine · guide · control · shape · govern · decide · change · alter · transform · sway · bias · prejudice · suborn · pressure · coerce · dragoon · intimidate · browbeat · brainwash · twist someone’s arm · lean on · put ideas into one’s head

Reply to  Menicholas
June 29, 2015 8:51 pm

Menicholas
Well said. The field of engineering contains a theory that bears on the control of a system. This field is called “control theory.” It is not called “influence theory” though the control of a system is usually imperfect.

richardscourtney
Reply to  Terry Oldberg
June 29, 2015 10:53 pm

Terry Oldberg:
The difference between ‘control’ and ‘influence’ is that
(a)
while a controlling factor and an influencing factor may each have an effect,
(b)
a controlling factor can govern an influencing factor
(c)
but an influencing factor cannot govern a controlling factor.
The difference is clearly seen in governments where both politicians and civil servants can affect conduct of the entire populace (including politicians and civil servants) but politicians can establish laws that civil servants must obey while civil servants can only interpret how to apply laws. Politicians are people who like power, and civil servants are people who like influence (as I do).
My having cleared that up for you, I again ask that you answer my question to you that in this thread is here and I remind that it is
Please provide a clear and concise definition of what you mean by an “event” such that anyone can understand what is – and what is not – an “event” as you understand it.
Richard

Reply to  richardscourtney
June 29, 2015 10:55 pm

It seems that there are other questions Terry is evading…

Reply to  lsvalgaard
June 29, 2015 11:16 pm

lsvalgaard:
What are the questions that I am evading?

Reply to  Terry Oldberg
June 29, 2015 11:18 pm

You are evading even to look: richardscourtney June 29, 2015 at 10:53 pm

Reply to  lsvalgaard
June 29, 2015 11:44 pm

lsvalgaard:
In the post that you reference by richardscourtney at 10:53 pm I find a quibble over one’s definition of “control” plus a demand by Courtney for me to define what I mean by “event.” On numerous previous occasions I’ve responded that my definition of “event” is identical to the definition of this term in the ancient field of probability and statistics. To imply that I have not yet responded is incorrect.
Regarding Courtney’s quibble, you can substitute for “control” whatever word you wish for the process that yields the desired state of nature given the observed state of nature, and the meaning will remain the same.

Reply to  Terry Oldberg
June 29, 2015 11:46 pm

I thought you had gone away, but no, since you are still here then try not to evade the question I asked now several times.

Reply to  lsvalgaard
June 29, 2015 11:55 pm

This conversation has gotten ridiculous. So long forever.

Reply to  Terry Oldberg
June 29, 2015 11:59 pm

Good riddance…

Reply to  richardscourtney
June 29, 2015 11:12 pm

Richard Courtney:
In engineering there is a field of study called “control theory.” There is no field of study called “influence theory.” Control theory encompasses situations in which the degree of control is partial as well as situations in which the degree of control is total. It is in this sense that I have used the term “control.”
Regarding your demand that I “provide a clear and concise definition of what you mean by an “event” such that anyone can understand what is – and what is not – an ‘event’ as you understand it…,” probability and statistics is an established discipline within which the term “event” is well defined. My definition of this term is identical to the definition within this discipline. You appear to wish to defame me by painting me as a person who is so inept within his own discipline as to misunderstand the term ‘event’. Is this true?

richardscourtney
Reply to  Terry Oldberg
June 29, 2015 11:42 pm

Terry Oldberg:
In this thread I have again repeatedly put to you a question that you have repeatedly evaded in the past; viz.
Please provide a clear and concise definition of what you mean by an “event” such that anyone can understand what is – and what is not – an “event” as you understand it.
And in this thread you have now repeated an evasion you have used in the past; viz.

Regarding your demand that I “provide a clear and concise definition of what you mean by an “event” such that anyone can understand what is – and what is not – an ‘event’ as you understand it…,” probability and statistics is an established discipline within which the term “event” is well defined. My definition of this term is identical to the definition within this discipline.

OK. If that be true, then you can simply provide an answer to my question by copying&pasting the definition from an accepted text book which you reference. Please do.
And you ask me a question ; viz.

You appear to wish to defame me by painting me as a person who is so inept within his own discipline as to misunderstand the term ‘event’. Is this true?

I wish you would answer my question which I have yet again put to you in this post.
Please do NOT make untrue accusations concerning my motivation especially when – as in this case – they take the form of ‘Have you stopped beating your wife?’.
You are again failing to answer my question. It is a matter of opinion as to whether your failure defames you; personally, I don’t think it could.
Richard

Reply to  richardscourtney
June 29, 2015 11:51 pm

richardscourtney:
You are as able as I to look up the definition of “event.” To imply that my definition of the word differs from the established definition appears to be an attempt by you to defame me. I demand that you stop this behavior NOW!

richardscourtney
Reply to  Terry Oldberg
June 29, 2015 11:59 pm

Terry Oldberg:
Make all the demands you want: I will treat them with the contempt they deserve.
I will desist from asking you my question if and when you ever answer it. I again remind that it is:
Please provide a clear and concise definition of what you mean by an “event” such that anyone can understand what is – and what is not – an “event” as you understand it.
Richard

Reply to  richardscourtney
June 30, 2015 12:03 am

richardscourtney:
I understand that UK defamation law places the burden of proof on the defendant in a lawsuit. How do you defend yourself from the charge that you have defamed me?

richardscourtney
Reply to  Terry Oldberg
June 30, 2015 12:12 am

Terry Oldberg:
You write
richardscourtney:

I understand that UK defamation law places the burden of proof on the defendant in a lawsuit. How do you defend yourself from the charge that you have defamed me?

Please, please, please sue me! I could use the money!
My defence is clear: i.e. I have not defamed you and you have no argument and/or evidence that I have defamed you.
I have asked you, and I am asking you
Please provide a clear and concise definition of what you mean by an “event” such that anyone can understand what is – and what is not – an “event” as you understand it.
You have failed – and you are failing – to answer my question.
Richard

Reply to  richardscourtney
June 30, 2015 12:14 am

richardscourtney:
You’ve made my case against you. Thanks!

richardscourtney
Reply to  Terry Oldberg
June 30, 2015 12:22 am

Terry Oldberg:
You don’t have “a case against” me, but if you are so deluded as to think you do then – as I said – please sue me because I could use the money.
Having got the idiocy about suing me out of your system, please now answer my question; viz.
Please provide a clear and concise definition of what you mean by an “event” such that anyone can understand what is – and what is not – an “event” as you understand it.
Richard

Reply to  richardscourtney
June 30, 2015 7:59 am

richardscourtney
As the term “event” is defined mathematically and in works that are readily accessible to you it would be pointless for me to respond in any other way than to refer you to this literature. One of these works is “Foundations of the Theory of Probability” by A.N. Kolmogorov . The URL is http://www.kolmogorov.com/Foundations.html . I have nothing further to say to you about this topic.

richardscourtney
Reply to  Terry Oldberg
July 1, 2015 1:06 am

Terry Oldberg:
I looked at your link; (viz. http://www.kolmogorov.com/Foundations.html ) to a paper titled “Foundations of the Theory of Probability” by A.N. Kolmogorov.
I cannot find a clear definition of “event” in that paper.
If you think there is one then why don’t you quote it?

I remind that here in this thread I wrote in reply to your claiming the definition you accept is in “the literature” by saying to you

OK. If that be true, then you can simply provide an answer to my question by copying&pasting the definition from an accepted text book which you reference. Please do.

You have NOT done that.
I yet again repeat my question that you have completely failed to answer.
Please provide a clear and concise definition of what you mean by an “event” such that anyone can understand what is – and what is not – an “event” as you understand it.
Richard

Reply to  richardscourtney
July 1, 2015 9:14 am

richardscourtney:
If you have passed a course in probability theory then you already know the definition of “event.” If you do not know the definition, you can look it up, take a course or do whatever else turns you on. Please desist from further requests for me to define it for you. I’m not your tutor.

richardscourtney
Reply to  Terry Oldberg
July 1, 2015 12:51 pm

Terry Oldberg
If you had attended a course in probability theory then you would be able to provide the definition of “event” that you use.
Clearly, you do not know the definition because you have demonstrated beyond any doubt that you cannot answer my question which is
Please provide a clear and concise definition of what you mean by an “event” such that anyone can understand what is – and what is not – an “event” as you understand it.
As I explained to you in this thread here

All your comments will remain meaningless nonsense unless and until you answer this question.

And you cannot answer my question because you don’t have an answer.
Richard

Reply to  richardscourtney
July 1, 2015 4:11 pm

richardscourtney:
Thank you for giving me the opportunity to demonstrate (once again) that richardscourtney is capable of muddying the climatological waters by drawing a false conclusion from an argument made by him in a climatological blog.
The major premise to richardscourtney’s argument is “A implies B” where in approximate translation A is the proposition “person X completed a course in probability theory” and B is the proposition “person X knows the definition of ‘event’.”
Arguments with true conclusions are of two forms. One is
A implies B
A
Therefore B
The other is
A implies B
NOT B
Therefore NOT A
“A implies B” is not necessarily true, for person X could have completed a course in probability theory and forgotten the definition of “event” after taking and passing the final exam. Let’s assume this premise is true and see where this assumption leads us.
richardscourtney assumes that NOT B is true and draws from this assumption the conclusion that NOT A is true. In reality, A and B are both true. By his argument richardscourtney has burdened the climatological community with a pair of newly minted falsehoods.

richardscourtney
Reply to  Terry Oldberg
July 1, 2015 10:45 pm

Terry Oldberg:
The ONLY thing this sub-thread has demonstrated is that all your posts are meaningless nonsense because you use terms that have no stated meaning and which you cannot define.
I give you yet another chance to obtain some small degree of credibility.
Please provide a clear and concise definition of what you mean by an “event” such that anyone can understand what is – and what is not – an “event” as you understand it.
Richard

RWTurner
June 28, 2015 10:30 am

A better word for this is hysteresis. Climate is a hysteretic system, as its current state is dependent on its past state. Anyone offering the recent occurrence of more record warm years as proof of AGW either doesn’t understand this or is being purposely deceitful.

The Ghost Of Big Jim Cooley
June 28, 2015 11:37 am

The event is measuring the temperature. Very fundamentally, by measuring it, you are affecting it. For example, a mercury-in-glass thermometer would emit or absorb energy in order to report the temperature.

The Ghost Of Big Jim Cooley
Reply to  The Ghost Of Big Jim Cooley
June 28, 2015 11:38 am

I’m damn sure I clicked on Willis’ reply button!

Reply to  The Ghost Of Big Jim Cooley
June 28, 2015 11:42 am

By golly I think you are right Mr Ghost. If you stuck a 2 ounce thermometer that was at 2 degrees C into the Pacific Ocean, would you lower the temperature of the entire Pacific by more or less than 0.00000000000000000000000001 degree ??
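For what it is worth, a back-of-envelope sketch with round-number assumptions (a rough Pacific mass near 7 x 10^20 kg, a 2-ounce glass thermometer) puts the change somewhere around 10^-22 of a degree, so the joke stands either way.

```python
# Back-of-envelope sketch of the joke calculation; every figure is a rough,
# round-number assumption, not a measured value.
m_thermo = 0.057        # kg, about 2 ounces
c_glass = 840.0         # J/(kg K), typical specific heat of glass
T_thermo = 2.0          # deg C
T_ocean = 15.0          # deg C, assumed bulk temperature for illustration

m_pacific = 7.0e20      # kg, rough mass of the Pacific Ocean
c_seawater = 3990.0     # J/(kg K)

# Heat drawn from the ocean to bring the thermometer up to ocean temperature:
q = m_thermo * c_glass * (T_ocean - T_thermo)
print(q / (m_pacific * c_seawater))   # ~2e-22 degrees, immeasurably small
```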

The Ghost Of Big Jim Cooley
Reply to  Joel D. Jackson
June 28, 2015 11:43 am

Let’s try it and see!

Reply to  Joel D. Jackson
June 28, 2015 11:47 am

Yup…. but then we run into a big problem when we try to do that measurement.
..
We’d need a third thermometer…..which would then require a fourth to offset the third , and a fifth……..

The Ghost Of Big Jim Cooley
Reply to  Joel D. Jackson
June 28, 2015 11:55 am

Oh, yeah! Is this like adding value added tax to your annual tax payment?

Reply to  Joel D. Jackson
June 28, 2015 12:04 pm

Annual taxes have a negative value, so you subtract, not add them.

The Ghost Of Big Jim Cooley
Reply to  Joel D. Jackson
June 28, 2015 12:29 pm

Ah, you’re obviously not British! Here, we have something called Value Added Tax (VAT). If you run a business, you have to collect tax on purchases from your customers, then pay it to the Government. But your purchases (anything you pay out) are tax deductible. So, as you have to pay the tax, if you could deduct VAT from that payment, then it would alter your initial payment, meaning that it would, in turn, alter your deduction. This would go on until you disappeared up your own…

Bubba Cow
Reply to  Willis Eschenbach
June 28, 2015 3:27 pm

his posting – http://wmbriggs.com/post/7923/
“No statistical population underlies the models by which climatologists project the amount, if any, of global warming from greenhouse gas emissions we’ll have to endure in the future”.

wayne
June 28, 2015 12:27 pm

What I gather rgb is saying when he speaks of spatial correlation is that spatial autocorrelation can also exist in such a radial division, where the 360 degrees are divided into five sectors. If the five ‘spokes’ were all to be rotated plus or minus 20 degrees or so, you would probably have completely different results, since each adjacent sector tends to carry some of the same characteristics (good or bad) as its two neighboring sectors. That is, do these results only hold their significance levels for just that one particular choice of where the sector divisions were chosen to lie?
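Wayne's question can be illustrated with a rough sketch on synthetic data (not the paper's): smooth white noise along longitude so that adjacent degrees are correlated, then watch the five 72-degree sector means change as the boundaries are rotated by plus or minus 20 degrees.

```python
# Rough sketch of the point about sector boundaries: with a longitudinally
# autocorrelated field, the five sector means depend on where the boundaries
# fall. The smoothed-noise field below merely stands in for per-sector data.
import numpy as np

rng = np.random.default_rng(2)
lons = np.arange(360)                          # 1-degree longitude bins
field = np.convolve(rng.normal(size=360), np.ones(45) / 45, mode="same")

def sector_means(offset_deg):
    """Mean of the field in five 72-degree sectors rotated by offset_deg."""
    sectors = ((lons - offset_deg) % 360) // 72
    return np.array([field[sectors == k].mean() for k in range(5)])

for offset in (-20, 0, 20):
    print(offset, np.round(sector_means(offset), 3))
```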

u.k.(us)
June 28, 2015 12:42 pm

A new method to my madness about picking horse races.
Just use my psychic powers to predict winners.
So far, it ain’t working any better than throwing darts, but all it takes is one score to make me a believer 🙂

Admin
June 28, 2015 1:32 pm

Willis, you can give the study authors some credit. Look at the last paragraph,

Removing points or changing the beginning or ending time of a time series can change the significance of a linear trend, or even completely change the sign of the slope (Watkins and Simmonds 2000, Laine 2008). Any extrapolation of the trend lines beyond the 28-year study period is not scientifically valid. In the future, more in situ SST and albedo data are necessary to validate the satellite retrieval results. Combined satellite and station data can provide reliable tools for evaluating the variability of the sea ice in the Antarctic regions.

Pretty reasonable I would say.
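The caveat the authors state is easy to see on a synthetic 28-point AR(1) series with no underlying trend (a sketch, not their data): the fitted OLS slope moves around, and can even change sign, as points are trimmed from either end.

```python
# Sketch of the authors' caveat: on a short autocorrelated series the fitted
# trend is sensitive to the start and end points. The AR(1) series below is
# synthetic and has no true trend at all.
import numpy as np

rng = np.random.default_rng(3)
n, phi = 28, 0.6                      # 28 "years", lag-1 autocorrelation 0.6
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

years = np.arange(n)
for start, end in [(0, 28), (2, 28), (0, 26), (3, 25)]:
    slope = np.polyfit(years[start:end], x[start:end], 1)[0]
    print(start, end, round(slope, 3))   # note how much the slope moves
```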

HAS
Reply to  Charles Rotter
June 28, 2015 4:52 pm

I drew attention to this above, but it doesn’t save the authors. The problem is they can’t draw any inference from the data even in-sample, i.e. they can’t generalize these results to intermediate time periods, for example.
What they have done is investigate sea ice and taken some measurements of it. These they report. It is perhaps useful to graph these, report means, standard deviations, trends etc but these remain just descriptive statistics.
However there is no basis for inference or calculating the probability of those inferences (of which confidence limits are one artifact) until a model for the inference is postulated. This is not done in this study (or is only done by inference). Only then can one test if observations independently sourced fit with the hypothesized model.
The much more fundamental problem with this paper and many like it is that it implicitly uses the same data to both create and validate the implicitly hypothesized model.
What is normally done is a relationship is hypothesized. One set of data is used to estimate the parameters of the model and then a separate independently selected set is used to test it.
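A minimal sketch of the split HAS describes, with placeholder data: parameters are estimated on one independently chosen half of the sample, and the fit is judged only on the held-out half.

```python
# Minimal sketch of hypothesize-fit-validate with independent subsets: the
# model is estimated on a training half and tested on a held-out half, rather
# than being scored on the data that produced it. Data here are placeholders.
import numpy as np

rng = np.random.default_rng(4)
x = np.arange(56, dtype=float)
y = 0.1 * x + rng.normal(0, 1.0, size=56)     # stand-in observations

idx = rng.permutation(56)
train, test = idx[:28], idx[28:]              # independent halves

coeffs = np.polyfit(x[train], y[train], 1)    # fit only on the training half
pred = np.polyval(coeffs, x[test])
rmse = np.sqrt(np.mean((y[test] - pred) ** 2))  # judged on held-out data only
print(coeffs, round(rmse, 3))
```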

Reply to  Willis Eschenbach
June 28, 2015 9:29 pm

I just see the words “In the future…more…data…necessary…validate…”, and my brain translates into “We will be needing more money from now until forever.”
And the dark clouds part and the sun shines through the murk.

Dipchip
June 28, 2015 2:44 pm

I ran the birthdays on 100 current sitting Senators. There are 10 pairs of birthdays on the same day; one pair each month except July, May and April; with 2 pair in December and a triple Birthday for May 3rd.
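For context, a quick calculation (ignoring leap days and seasonal variation in birth rates) puts the expected number of same-day pairs among 100 people at about 13.6, so ten pairs plus a triple is roughly what chance alone delivers.

```python
# Expected number of same-day birthday pairs among 100 people, ignoring leap
# days and seasonal variation in birth rates.
from math import comb

people, days = 100, 365
print(round(comb(people, 2) / days, 1))   # ~13.6 pairs expected by chance
```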

Reply to  Dipchip
June 28, 2015 9:30 pm

OMG!
What are the odds!
🙂

The Ghost Of Big Jim Cooley
Reply to  Dipchip
June 28, 2015 11:31 pm

dipchip, what are the odds of any Senator declaring themselves atheist or agnostic? Should be pretty good, given the general population. So let’s take a look and see how many atheist Senators there are…
Hmm, that’s odd. What are the chances of that?

Katherine
June 28, 2015 3:43 pm

The title has “Albedot” (with a terminal “t”), though the link uses “albedo.” Is the misspelling of albedo deliberate?

Glenn999
Reply to  Willis Eschenbach
June 28, 2015 5:43 pm

I was going to look up your spelling to see if was latin or french perhaps, with the silet t.
But, alas, I never got around to it on this fine Sunday

The Ghost Of Big Jim Cooley
Reply to  Willis Eschenbach
June 28, 2015 11:33 pm

You have a silent ‘n’ there!

Jay Dunnell
June 28, 2015 4:52 pm

I read this blog every day and had the most fun reading that I have had in a while. I took Statistics in College and only barely passed, so I am learning here. Kinda hard to learn though while laughing so hard!

Dodgy Geezer
June 28, 2015 5:55 pm

…Near as I can tell, statistics was invented by gamblers to answer this type of question. …
Er… I think that was Probability. Statistics is a subtly different topic…

Reply to  Dodgy Geezer
June 28, 2015 9:48 pm

That’s correct. Probability theory stands on the theoretical side of science. Statistics stands on the empirical side. Statistics acts as a check on the claims of probability theory.

davideisenstadt
Reply to  Terry Oldberg
June 30, 2015 4:22 am

troll.

Reply to  davideisenstadt
June 30, 2015 8:21 am

davideisenstadt:
In the midst of debate on Earth’s climate it is logically improper to characterize one’s opponent, whether this characterization is written in disparaging terms or flattering ones. This is because there is not a form of logical argument that employs the character of one’s opponent as a premise and leads to a valid conclusion about the climate. If you disagree, I’d like to see your argument. Otherwise, for the future please confine your remarks to ones leading to logical conclusions about the climate.

commieBob
June 28, 2015 6:17 pm

… I’d say it’s a pretty good bet the coins were loaded.

It seems to be difficult to load coins such that seven simultaneous heads happen reliably. https://izbicki.me/blog/how-to-create-an-unfair-coin-and-prove-it-with-math.html

Walt D.
Reply to  Willis Eschenbach
June 28, 2015 6:59 pm

Willis:
Here is an experiment you can perform in Excel that may help you understand the difference between a finite set of numbers and a statistical/probabilistic model.
Generate 100 sets of 100 numbers formed by taking the ratio of pairs of independent random numbers drawn from a Normal Distribution with mean 0 and standard deviation 1. Calculate the mean and standard deviation of each of these sets. What do you notice? Try increasing the set size to 1000. Then to 10000. What do you notice? Does the Sigma/Sqrt(N) standard error work here?
(From a statistical perspective, you are generating numbers from a Cauchy Distribution, which does not have either a finite mean or variance).
Keep up the good work.
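Walt D.'s suggested experiment translates directly into a few lines of numpy (a sketch of the same idea, not his spreadsheet): the ratio of two independent standard normals is Cauchy-distributed, and the usual sigma/sqrt(N) shrinkage never shows up.

```python
# Walt D.'s Excel experiment redone in numpy: the ratio of two independent
# standard normals is Cauchy-distributed, so sample means and standard
# deviations refuse to settle down as the sample size grows.
import numpy as np

rng = np.random.default_rng(5)
for n in (100, 1_000, 10_000):
    samples = rng.normal(size=(100, n)) / rng.normal(size=(100, n))
    means, stds = samples.mean(axis=1), samples.std(axis=1)
    # The spread of the 100 sample means does NOT shrink like sigma/sqrt(n).
    print(n, round(means.std(), 2), round(stds.max(), 2))
```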

Reply to  Walt D.
June 28, 2015 9:34 pm

Histogram says 1950, not 1850.

Dinostratus
Reply to  Walt D.
June 29, 2015 7:16 pm

I love this example because Willis will never understand why samples taken from a Cauchy Distribution fail to disprove his thinking, yet it’s the first (and typically the only) example from probability theory that is used to show the limitations of statistical theory.
He doesn’t even understand the difference between probability and statistics until he googles it.

Reply to  Willis Eschenbach
June 29, 2015 1:35 pm

Willis:
The next step would be for you to infer the value of the probability of each of the vertical bars. If you complete this step you will have constructed a univariate statistical model. This is not the kind of model that is needed for control of the climate but it is a good place to start to try to build one.
You can make each inference through use of the frequency values you have already recorded. For the time being, in making these inferences I suggest use of the intuitive rule of thumb that statisticians call “maximum likelihood estimation.” It assigns the value of the relative frequency of a bar to the corresponding probability. The relative frequency of a bar is the frequency of this bar divided by the sum of the frequencies of all of the bars.

Reply to  Willis Eschenbach
June 29, 2015 8:43 pm

Willis:
You’ve admitted to ignorance of elementary concepts of probability and statistics. I am up to speed on these concepts. Thus I’ve attempted to tutor you on these concepts, free of charge. The result reminds me of the scenario that resulted from my attempt as an unpaid volunteer in local public schools to teach a Spanish-speaking third grader to read in English. From day one she exhibited hostility to being taught. Sometimes this hostility was expressed by rudeness to me personally. After several weeks of this her teacher and I reached the mutual conclusion that to try to teach her was a hopeless cause because she did not want to learn.

Dawtgtomis
June 28, 2015 6:38 pm

I am crackin’ up watching the St. Louis TV weather make a big deal out of the current rainy conditions like it’s never happened before. The dude just used the term ‘torque’ to describe a slight rotation of the system! Unprecedented melodrama…

johann wundersamer
June 28, 2015 8:01 pm

yes, even the highest improbability becomes likely when repeated 50 times or so +-
thankfully the ‘2015 Environ. Res. Lett. 10’ crew did their study not in the real Antarctic, so no icebreaker was needed to return them to safe ground.
Regards – Hans

Paul Fischbeck
June 28, 2015 8:09 pm

When you detrend the data, you estimate the trend using the data. This slope estimate is uncertain. How is the slope distributed given autocorrelated data?
You then use the detrended data to estimate the autocorrelation. Even for a single “correct” trend, this is uncertain. How uncertain is the autocorrelation estimate given the uncertain slope estimate?

Admin
June 28, 2015 8:56 pm

Willis, I said SOME credit.

June 28, 2015 9:17 pm

The problem of analysis of significance of ‘geophysical’ time series [such as temperatures, pressure, sunspot numbers, etc] has been ‘solved’ long ago. The fundamental variable is the ‘number of degrees of freedom’. Geophysical time series have ‘positive conservation’, meaning that high (low) values are likely to be followed by other high (low) values at least for some time. The classical (and I submit, still valid) treatment of this problem can be found in ‘Geomagnetism’ by Chapman and Bartels (1940). It can be found here: https://ia600600.us.archive.org/30/items/GeomagnetismVol2_29446/Chapman-GeomagnetismVol2.pdf pages 582 ff sections 16.27-16.28 that bear reading. Taking sunspot numbers as an example it, remarkably, turns out that in 1024 days, or nearly 3 years, the number of independent daily numbers (degrees of freedom) is only about 3, and in a full solar cycle, only about 20.
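For readers without Chapman and Bartels to hand, one common lag-1 shortcut for the same idea (an approximation, not their treatment) is N_eff = N(1 - r1)/(1 + r1), where r1 is the lag-1 autocorrelation; a sketch:

```python
# A common lag-1 shortcut for the effective number of independent samples in a
# positively "conservative" (autocorrelated) series. This is an approximation,
# not the Chapman & Bartels treatment cited above.
import numpy as np

rng = np.random.default_rng(6)
n, phi = 1024, 0.99                     # strongly persistent daily series
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

r1 = np.corrcoef(x[:-1], x[1:])[0, 1]   # estimated lag-1 autocorrelation
n_eff = n * (1 - r1) / (1 + r1)
print(round(r1, 3), round(n_eff, 1))    # N_eff is a tiny fraction of N
```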

Reply to  lsvalgaard
June 29, 2015 12:37 pm

“Geomagnetism” was published in 1940, eight years prior to the publication by Claude Shannon of “A mathematical theory of communication” aka “information theory” and forty years prior to the publication by Ronald E. Christensen of enhancements to information theory that adapted it to scientific theorizing (see for example Christensen’s book “Multivariate Statistical Modeling.”) Prior to the publication of Shannon’s and Christensen’s works scientists employed intuitive rules of thumb in selecting the inferences that would be made by the theories they constructed. There were many rules of thumb each selecting a different inference in a given situation. In this way, the manner in which scientists of this period theorized negated the law of contradiction (LNC). Negation of one of the three classical laws of thought horrified many logicians. It seems to have horrified few scientists.
Today, the opportunity is open to scientists to satisfy the LNC via replacement of rules of thumb by principles of reasoning based in modern information theory. Though this opportunity has been open for 35 years, few scientists have seized it. Among the scientists who have failed to seize it are global warming climatologists.

Reply to  Terry Oldberg
June 29, 2015 1:08 pm

The modern theory of information has been very useful in dealing with random noise, but [as evidenced by e.g. climate ‘science’] not is dealing with non-random time series.

Reply to  lsvalgaard
June 29, 2015 2:15 pm

lsvalgaard:
Thanks for taking the time to respond.
You wrote: “…not is dealing with non-random time series.” I think you meant
to write “…not in dealing with non-random time series.”
Actually information theory works fine with non-random time series. Also, while this approach to theorizing has been used in meteorology, I’m not aware of any uses in global warming climatology. In selecting the inferences that are made by their models global warming climatologists use intuitive rules of thumb and ignore the resulting violations of the law of non-contradiction.

Reply to  Terry Oldberg
June 29, 2015 2:19 pm

I meant to write “The modern theory of information has been very useful in dealing with random noise, but [as evidenced by e.g. climate ‘science’] not in dealing with non-random time series”
Of course, information theory can deal with non-random series, but is not any more useful than the classical methods. Perhaps you could link to a clear-cut example of the additional usefulness…

Reply to  lsvalgaard
June 29, 2015 3:59 pm

lsvalgaard:
Information theoretic model building technology selects the inferences that will be made by the model that is under construction by information theoretic optimization. The classical approach uses intuitive rules of thumb for the same purpose. Theoretically, optimization ought to work as well as or better.
The paper entitled “Entropy Minimax Multivariate Statistical Modeling II: Applications” (Int. J. General Systems, 1986, Vol 12, 227-305) contrasts the performances of models built by the classical and information theoretic approaches on identical data sets. Models built by information theoretic optimization exhibited a superior ability to predict the outcomes of events in all cases examined and a greatly superior ability in most of them.
The experience with mid- to long-range weather forecasting models is pertinent to the question of which approach should be used in building global warming models. On predictive tasks for which random chance yielded 50% success in predicting the outcome, the comparative success rates were:
Classical     Information theoretic
  40%               70%
  45%               74%
  57%               69%
  35%               69%
  43%               71%
  53%               60%
In 13 studies spread over a variety of fields of study, the classical approach yielded statistically significant accuracy ( at the 0.10 level or better ) in 2 while the information theoretic approach yielded statistically significant accuracy in 10.
A description of a long-range weather forecasting model that was built by information theoretic optimization could be of interest. It forecasts precipitation outcomes at precipitation gauges in the Sierra Nevada east of Sacramento over forecasting horizons of 1-3 years with statistical significance. The URL is http://www.knowledgetothemax.com/The%20model%20that%20revolutionized%20meteorology.htm .

Reply to  Terry Oldberg
June 29, 2015 4:05 pm

Is not responsive to my request. It is not about ‘building models’, but about analyzing time series.

Reply to  Willis Eschenbach
June 29, 2015 9:24 pm

Willis:
It apparently has escaped your attention that the Web site that contains my description of the long-range weather forecasting model of Christensen et al also contains a citation to a peer-reviewed paper, published by the American Meteorological Society, describing the research that led to this model and the conclusions from it. Will you now retract your disparaging remarks?

Reply to  Willis Eschenbach
July 3, 2015 9:34 am

Willis Eschenbach:
Your critique of my description of the Christensen et al model begins with an application of the poisoning the well fallacy: “Jeez, you actually wrote that piece of junk description of the Christensen model yourself?” In the future I request that you not apply this or any other fallacy in debating a climatological issue. Applications of fallacies have the capacity to lead people to false or unproved conclusions. In science the aim is to come as close as possible to reaching true conclusions.
In the remainder of your critique you concentrate your fire on the alleged fact that the peer-reviewed article of Christensen et al is not cited. Actually, the article is cited but on a different Web page than the one that you evidently read. The URL of this page is http://www.knowledgetothemax.com/Bibliography.htm . Several articles on meteorological models developed by Christensen and his colleagues are cited. The article that you wish to read may be Christensen, R., R. Eilbert, O. Lindgren and L. Rans, 1980e.
By the way, the Web page that you read is a part of a tutorial on the topics of logic and scientific theorizing. In lectures to meetings of the American Nuclear Society, American Chemical Society, American Institute of Chemical Engineers and American Society for Quality I’ve delivered similar messages. A similar message is delivered in the three part article that is entitled “The Principles of Reasoning” and is published in the blog Climate, Etc. “The Principles of Reasoning” was published under the peer-review of the owner-editor of this blog, Judith Curry. As you may know, Dr. Curry is a professional climatologist and is chair of Earth Sciences at Georgia Tech. The last time I checked, Google’s search engine ranked part III of this article #2 among 325,000 citations produced by a search on “The Principles of Reasoning.”

Alan McIntire
June 29, 2015 6:24 am

Professor Emeritus Ted Hill of Georgia Tech used to divide up his class in half using a criterion unknown to him. Each member of the class drew a slip of paper. Half the class was assigned to flip a coin 200 times, the other half was to make up coin-flip results off the top of their heads. Professor Hill guessed right as to whether the student was a coin flipper or made up his or her results over 95% of the time. When flipping coins honestly, there’s a probability of over 95% that there will be at least 1 run of 7, either heads or tails.
When people make up their own results, they are very unlikely to include such long runs.
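A quick Monte Carlo check, assuming fair coins: in 200 flips a run of at least six identical outcomes shows up roughly 96% of the time, while a run of at least seven shows up closer to 80% of the time, so the better-than-95% figure appears to line up with runs of six.

```python
# Monte Carlo check of the classroom claim, assuming fair coins and counting
# runs of identical outcomes (heads or tails) in 200 flips.
import numpy as np

rng = np.random.default_rng(7)
trials, flips = 20_000, 200

def longest_run(seq):
    """Length of the longest run of identical outcomes in a 0/1 sequence."""
    best = run = 1
    for a, b in zip(seq[:-1], seq[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

runs = np.array([longest_run(rng.integers(0, 2, size=flips))
                 for _ in range(trials)])
print((runs >= 6).mean())   # ~0.96: runs of six are almost guaranteed
print((runs >= 7).mean())   # ~0.80: runs of seven are likely, but less so
```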

phlogiston
Reply to  Alan McIntire
June 29, 2015 10:23 am

Sadly, some students got repetitive strain injury and sued the Prof.
For 95% of his annual salary.

Reply to  Dinostratus
June 29, 2015 7:46 pm

Paywalled.

June 29, 2015 7:41 am

Thanks, Willis. The probabilities of you writing a boring article approach 0.

Dinostratus
June 29, 2015 7:04 pm

I need to find something better to do with my life than read this stuff.

Reply to  Dinostratus
June 29, 2015 8:58 pm

Dinostratus:
Well said. Perhaps we could convince Willis to desist from publishing articles that are based upon probability and statistics while he remains ignorant of the basis concepts of probability and statistics.

Reply to  Terry Oldberg
June 29, 2015 9:00 pm

Regardless, you are still evading my simple request. Perhaps for good reason.

Reply to  lsvalgaard
June 29, 2015 9:26 pm

lsvalgaard:
What’s your request? Also, please cut out the innuendo.

Reply to  Terry Oldberg
June 29, 2015 9:30 pm

Repeating myself:
Of course, information theory can deal with non-random series, but is not any more useful than the classical methods. Perhaps you could link to a clear-cut example of the additional usefulness

Reply to  lsvalgaard
June 29, 2015 9:40 pm

lsvalgaard:
I believe that I provided for you a number of clear-cut examples of the additional usefulness. They lie in the realm of scientific theorizing. I gather that your interests lie outside scientific theorizing but do not understand these interests with enough specificity to respond to your request for a clear-cut example of the additional usefulness. Please clarify.

Reply to  Terry Oldberg
June 29, 2015 9:45 pm

You are not responsive. Your example was about building models, not about analyzing time series.
And BTW, the ‘law of contradiction’ is more than 2000 years old, so hardly qualifies as modern information theory. I don’t need another lecture about how brilliant you are.

Reply to  lsvalgaard
June 29, 2015 10:10 pm

lsvalgaard:
I remain in the dark regarding the question to which you say I am unresponsive. In what significant respect does building a model differ from analyzing a time series?
Regarding the law of contradiction, the age of this law is irrelevant as it still applies to situations for which information needed for a deductive conclusion is not missing. Information theory extends logic into the realm in which information for a deductive conclusion is missing. This is the realm in which scientific research is conducted.
That “I don’t need another lecture about how brilliant you are” states an emotional response rather than a reasoned argument. It would be best if you were to stick to the latter.

Reply to  Terry Oldberg
June 29, 2015 10:19 pm

I remain in the dark regarding the question to which you say I am unresponsive. In what significant respect does building a model differ from analyzing a time series?
Apparently you do.
My question is: in what significant respect is model building equal to analyzing a time series?
And about the emotional response: “please cut out the innuendo” is an emotional response, as is urging me to stick to something. I respond the way I consider appropriate. Who are you to tell me otherwise? So don’t.
Information theory extends logic into the realm in which information for a deductive conclusion is missing. This is the realm in which scientific research is conducted.
Is hogwash. I have been a rather successful and much cited scientist for half a century. So don’t come and tell me that I don’t know the realm in which science is conducted.

Reply to  lsvalgaard
June 29, 2015 10:31 pm

lsvalgaard:
You’ve failed to answer my question of “in what significant respect is model building equal to analyzing a time series?” and brushed off my request for you to “cut out the innuendo.” Your lofty reputation is irrelevant. Reasoned discussion with you is evidently impossible. So long for now.

Reply to  Terry Oldberg
June 29, 2015 10:38 pm

Taking your ball and going home?
in what significant respect does model building differ from analyzing a time series?
You see, I think they have very little to do with each other. So, it is your task, if you want to be taken seriously, to convince me that I am wrong, and you seem to shrink from that (as I thought you would). And my experience is VERY relevant, whether or not you can see it.

Reply to  lsvalgaard
June 29, 2015 10:50 pm

lsvalgaard:
As you have repeatedly evaded this question, I gather that you wish not to respond to the question of “in what significant respect does model building differ from analyzing a time series?”

Reply to  Terry Oldberg
June 29, 2015 10:53 pm

You sound like a broken record.
I think those two topics have nothing to do with each other.
Convince me that they have. This should be easy for an expert such as you.

Reply to  Terry Oldberg
June 29, 2015 9:00 pm

Moderator: I erred. Please strike “basis” and replace it with “basic” in my post of June 29 at 8:58 pm.

davideisenstadt
Reply to  Terry Oldberg
June 30, 2015 4:30 am

terry: you’re a troll.
please stop defecating on this discussion thread?
Most of us here appreciate the time lsvalgaard spends here, and look forward to reading his take on issues presented here.
Google Scholar search him… you will find a myriad of peer-reviewed papers, and his research has been cited thousands of times; he has spent his career actually doing the work in the trenches.
As for you…..what do you bring to the table besides equivocation, and “argument clinic” style discourse?
I’ll answer, not much.

June 29, 2015 9:08 pm

lsvalgaard (June 29 at 9:00 pm):
Is your message direct to me?

Reply to  Terry Oldberg
June 29, 2015 9:10 pm

yes.

June 29, 2015 9:09 pm

Moderator: In my post of June 29 at 9:08 please strike “direct” and replace it with “directed.”