Guest Post by Willis Eschenbach
Over at Judith Curry’s excellent blog there’s a discussion of Trenberth’s missing heat. A new paper about oceanic temperatures says the heat’s not really missing, we just don’t have accurate enough information to tell where it is. The paper’s called Observed changes in top-of-the-atmosphere radiation and upper-ocean heating consistent within uncertainty.
It’s paywalled, and I was interested in one rough number, so I haven’t read it. The number that I wanted was the error estimate for their oceanic heating rates. This error can be seen in Figures 1a and 3a on the abstract page, and it is on the order of about plus or minus one watt/m2. This is consistent with other estimates of upper ocean heat content measurement errors.
I think I can conclusively demonstrate that their claimed error is way too small. To understand why, let me take a detour through the art, science, and business of blackjack.
In a fit of misguided passion, some years back I decided to learn how to count cards at blackjack. I had money and time at the same moment, an unusual combination in my life, so I took a class from a guy I’ll call Jimmy Chan. Paid good money for the class, and I got good value. I’ve always been good with figures, and I came out good at counting cards. Not as good as Jimmy, though, he was a mad keen player who had made a lot of money counting cards.
At the time they were still playing single deck in Reno. And I was young, single, and stupid. So I took twenty thousand dollars from my savings for my grubstake and went to Reno. It was an education about a curious business.
Here are the economics of the business of counting cards.
First, if you count using one of the usual systems as I did, and you are playing single deck, it gives you about a 1% edge on the house. Not much, to be sure, but it is a solid edge. And you can add to that by using a better counting system or a concurrent betting system, where better means more complex.
Second, if you play head-to-head (just you and the dealer) you can typically play about a hundred hands an hour.
Doesn’t take a math whiz to see that if you don’t blow the count, you will win about one extra hand an hour.
And therein is the catch. It means that in the card counting business, your average hourly wage is the amount of your average bet.
It’s a catch because of the other inexorable rule of counting blackjack. This regards surviving the swings and arrows of outrageous luck. If you don’t want to go home empty-handed, you need to have a grubstake that is a thousand times your average bet. Otherwise, you could go bust just from the natural ups and downs.
Now, twenty thousand dollars was all I could scrape together then. So that meant my average bet couldn’t be more than twenty dollars. I started out at the five dollar level.
I’d never spent any time in a casino up until then. I felt like the rube in every movie I ever saw. I played a while at the five dollar level. You never win or lose much there, so nobody paid any attention to me.
After a day or so making the princely sum of $5 per hour, I started betting larger. First at the ten-dollar level. Then at the twenty-dollar level. That was good money back in those days.
But when you start to make a bit of money, like say you hit a few blackjacks in a row and you’re doubling down, they start paying attention to you, and the trouble begins. First they use the casino holodeck to transport a somewhat malignant looking dwarf armed with a pad and a pencil to your table. He materializes at the shoulder of the dealer, and she starts to sweat. I say she because most dealers were women then and now. She starts to sweat because the casino doesn’t really care about card counters. I was making $20 an hour on average? Big deal, everyone in the casino management made that and more.
What scares casino owners is collusion between dealers and players. With the connivance of the dealer a guy can have a “string of luck” that can clean out a table in fifteen minutes and be out the door, meeting the dealer later to split the money. That’s what casino owners worry about, and that’s why the dealer started sweating, she knew she was being watched too. The dwarf peered through coke-bottle thick glasses, and wrote down the number of chips on each stack in the dealer’s rack, how much money I had, how much other players had. He gave the dealer a new deck. He wore a suit that cost as much as my grubstake. His wingtip shoes were shined to a rich luster. He looked at me as though I were a rich man with a loathsome disease. He watched my eyes, my hands. I started sweating like the dealer.
If I continued to win, the holodeck went into action again. This time what materialized were two large, vaguely anthropoid looking gentlemen, whose suits were specially tailored to conceal a bulge under the off-hand shoulder. They simply appeared, one at each shoulder of the aforementioned vertically challenged gentleman, who looked even dwarfier next to them, but clearly at ease in his natural element. They all three stared at me, and when that bored them, at the dealer. And then at me again.
And if the dealer was sweating, I was melting. I’m not made for that kind of game, I’m not good at that kind of pretence. I found out you can take the cowboy out of the country, but you can’t make him go mano-a-mano with the casinos for twenty bucks an hour.
I lasted a week. I logged my hours and my winnings. During that time, I worked well over forty hours. I only made enough money to pay for the flight and the hotel, and that’s about it. I was glad to put my twenty grand back in the bank.
I couldn’t take the constant strain and pressure of counting and not looking like I was counting and trying to stay invisible and feeling like a million eyes in the sky were watching my every eyeblink and having an inescapable feeling of being that guy in the movies who’s about to be squashed like a bug. But for those who can make it a game and keep it up, what an adventure! I’m glad I did it, wouldn’t do it again.
The part I liked the least, curiously, was something else entirely. It was that my every move was fixed. For every conceivable combination of my cards, the dealer’s card, and the count, there is one and only one right move. Not two. Not “player’s choice”. One move. I definitely didn’t like the feeling that I could be replaced by a vaguely humanoid 100% Turing-tested robot with a poor sense of dress and a really, really simple set of blackjack instructions.
But I was still interested in the math of it all. And I had my trusty Macintosh 512. And Jimmy Chan had an idea about how to improve the odds by changing his counting method. And so did some of Jimmy’s friends. And he had a guy who tested their new counting method for them, at some university, for five hundred bucks a run.
So I told Jimmy I’d do the analysis for a hundred bucks a run. He and his friends were interested. I wrote a program for my Mac to play blackjack against itself. I wrote it in Basic, because that was what was easy. But it was sloooow. So I taught myself to program in C, and I rewrote the entire program in C. It was still too slow, so I translated the critical sections into assembly language. Finally, it was fast enough. I would set up a run during the day, programming in the details of however the person wanted to do the count. Then I’d start it when I went to bed, and in the morning the run would be done and I’d have made a hundred bucks. I figured that I’d finally achieved what my computer was really for, which was to make me money while I slept.
The computer had to be fast because of the issue that is at the heart of this post. This is, how many hands of blackjack did the computer have to play against itself to find out if the new system beat the old system?
The answer turns out to be a hundred times more hands per decimal. In practice, this means at least a million hands, and many more is better.
What we are looking at is the error of the average. If I measure something many times, I can average my answers. Is the resulting mean value the true underlying mean of what I am measuring? No, of course not. If we flip a hundred coins, usually it won’t be exactly fifty/fifty.
But it will be close to the true average of the data. How close? Well, the measure of how close it is expected to be to the true underlying average is what is called the “standard error of the mean”. It is calculated as the standard deviation of the data divided by the square root of the number of observations.
It is the last fact that concerns us. It means that if we double the number of observations, we don’t cut the error in half, but only to 0.7 of the original value. One consequence of this is that if we need one more decimal of precision, we need a hundred times the number of observations. That is what I meant by a hundred times per decimal. If our precision is plus or minus a tenth (± 0.1) and we want to know the answer to one more decimal, plus or minus one hundredth (± 0.01), we need one hundred times the data to get that precision.
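If you want to see that scaling for yourself, here’s a quick sketch in Python (the sample sizes are arbitrary, just simulated measurements with a standard deviation of one):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated measurements with a true mean of 0 and a standard deviation of 1.
# The standard error of the mean is sd / sqrt(n), so one hundred times the
# observations buys one more decimal place of precision.
for n in (1_000, 100_000, 10_000_000):
    data = rng.normal(loc=0.0, scale=1.0, size=n)
    sem = data.std(ddof=1) / np.sqrt(n)
    print(f"n = {n:>10,}   standard error of the mean = {sem:.5f}")

# Typical output: roughly 0.03, 0.003, 0.0003; each factor of 100 in n
# gains one decimal place.
```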
That is the end of the detour, now let me return to my investigation of their error estimate for the ocean heating rate for the top 1800 metres of the ocean. If you recall, or even if you don’t, that was 1 watt per square metre (W/m2).
Now, that is calculated from temperature readings from Argo floats, about 3,000 of them during the study period.
Let me run through the numbers to convert their error (in W/m2) into a temperature change (in °C/year). I’ve comma-separated them for easy import into a spreadsheet if you wish.
We start with the forcing error and the depth heated as our inputs, and one constant, the energy to heat seawater one degree:
Energy to heat seawater:, 4.00E+06, joules/tonne/°C
Forcing error: plus or minus, 1, watts/m2
Depth heated:, 1800, metres
Then we calculate
Seawater weight:, 1860, tonnes
for a seawater density of about 1.03333 tonnes per cubic metre.
We multiply watts by seconds per year to give
Joules from forcing:, 3.16E+07, joules/yr
Finally, Joules available / (Tonnes of water times energy to heat a tonne by 1°C) gives us
Temperature error: plus or minus, 0.004, degrees/yr
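And for anyone who would rather check the arithmetic in code than in a spreadsheet, here is the same calculation as a short Python sketch (the seconds-per-year figure is the only number not listed above):

```python
# Convert a top-of-ocean forcing error (W/m2) into a temperature error (°C/yr)
# for a column of seawater 1800 metres deep.

ENERGY_PER_TONNE_PER_DEG = 4.00e6   # joules to heat one tonne of seawater by 1°C
FORCING_ERROR = 1.0                 # watts/m2 (the paper's error estimate)
DEPTH_HEATED = 1800.0               # metres
DENSITY = 1.03333                   # tonnes per cubic metre of seawater
SECONDS_PER_YEAR = 3.156e7          # seconds in a year

water_mass = DEPTH_HEATED * DENSITY                  # tonnes under each square metre
joules_per_year = FORCING_ERROR * SECONDS_PER_YEAR   # joules/yr per square metre

temp_error = joules_per_year / (water_mass * ENERGY_PER_TONNE_PER_DEG)
print(f"Temperature error: ± {temp_error:.3f} °C/yr")   # about ± 0.004 °C/yr
```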
So, assuming there are no problems with my math, they are claiming that they can measure the temperature rise of the top mile of the global ocean to within 0.004°C per year. That seems way too small an error to me. But is it too small? If we have lots and lots of observations, surely we can get the error down to that small?
Here’s the problem with their claim that the error is that small. I’ve raised this question at Judith’s and elsewhere, and gotten no answer. So I am posing the question again, in the hope that someone can unravel the puzzle.
We know that to get a smaller error by one decimal, we need a hundred times more observations per decimal point. But the same is true in reverse. If we need less precision, we don’t need as many observations. If we need one less decimal point, we can do it with one-hundredth of the observations.
Currently, they claim an error of ± 0.004°C (four thousandths of a degree) for the annual average upper ocean temperature from the observations of the three thousand or so Argo buoys.
But that means that if we are satisfied with an error of ± 0.04°C (four hundredths of a degree), we could do it with a hundredth of the number of observations, or about 30 Argo buoys. And it also indicates that 3 Argo buoys could measure that same huge volume, the entire global ocean from pole to pole, to within a tenth of a degree.
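Here is that reverse scaling as a few lines of Python, assuming (as the tiny claimed error implicitly does) that the error shrinks as the square root of the number of buoys:

```python
import math

CLAIMED_ERROR = 0.004   # °C, claimed for ~3,000 Argo buoys
FULL_NETWORK = 3000

# If the error really does scale as 1/sqrt(N), then cutting the number of
# buoys by a factor of 100 only costs one decimal place of precision.
for n_buoys in (3000, 30, 3):
    implied_error = CLAIMED_ERROR * math.sqrt(FULL_NETWORK / n_buoys)
    print(f"{n_buoys:>5} buoys -> implied error of ± {implied_error:.3f} °C")

# Prints ± 0.004 °C, ± 0.040 °C, and ± 0.126 °C respectively.
```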
And that is the problem I see. There’s no possible way that thirty buoys could measure the top mile of the whole ocean to that kind of accuracy, four hundredths of a degree C. The ocean is far too large and varied for thirty Argo floats to do that.
What am I missing here? Have I made some major math mistake? Their claimed error seems to be way out of line for the number of observations. I’ve not been able to find a good explanation of how they come up with these claims of extreme precision, but however they’re doing it, my math doesn’t support it.
And that’s the puzzle. Comments welcome.
Regards to everyone,
w.
Willis, yes I see the annual error bars and they are larger than the one in the abstract, as you would expect, because Argo data becomes much more useful over longer time spans. If you want to use Argo to explain an annual change, I would agree it is difficult because even with the 0.004°C error you calculate, that is at the annual noise level, and it becomes even more impossible if you raise the error to 0.04 degrees by using fewer floats. The point I make is that the floats can resolve changes over longer time spans, and their 0.5 W/m2 decadal accuracy seems reasonable to be able to do that.
Larry Fields says:
January 28, 2012 at 4:20 am
Your analogy fails because you appeal to well known physical hypotheses about growth. The climate scientists managing or using the ARGO buoys have no similar set of physical hypotheses to appeal to. They have nothing (nada, zip) to define their event space. For them, any two temperature readings are perfectly comparable.
Willis,
“I say that if 3,000 Argo buoys give an error of 0.004°C, then other things being at least approximately equal, 30 buoys should give an error of 0.04°C.
1. Are there logical or math errors in that calculation?
2. Do you think 30 Argo buoys (1,080 observations per year) can measure the annual temperature rise of the global ocean to within 0.04°C?”
Actually, from my experience with instrumentation, what we have been discussing about the theoretical aspects of instrument accuracy and precision, and what your questions above are about, are different issues.
First, for question 1, my experience tells me that 0.004°C is wishful thinking by a couple of orders of magnitude if the subject is absolute accuracy. Measuring temperatures to that accuracy, even in a laboratory environment, is tremendously difficult. I see you are using the IF…THEN statement again. I am saying your argument is invalid, but specifically because of instrument calibration errors.
That 0.004°C value may come from a standard statistics procedure but that does not necessarily make the value meaningful in a discussion of instrument accuracy. As you stated previously, it is an IF…THEN issue on whether you assume instrument errors and measurement errors may be considered random. Obviously, if true, the 0.004°C might have some significance. Unfortunately, the IF test result is FALSE. Instrument errors may not be considered random.
It also disturbs me that the 0.004°C value is claimed when the value measured varies over a range of some 30°C (tropic to arctic), over location (floating free), and over an extended time period. We use averaging of multiple values to estimate a most probable value for a single process condition. We do not assume it does anything but minimize the contribution of random noise, if it exists. It is never assumed to provide an improvement in overall accuracy. When the value is used in subsequent calculations, accuracy values are carried through, as in 25.2°C +/- 1.5°C.
For question 2, again, 0.004°C and 0.04°C speak to the precision, or most probable value, of the temperature readings. That provides an estimate of how well you believe your algorithm performs at calculating that, assuming all errors are random. To be complete, you must also describe the absolute accuracy of the original source data. This is where your systematic, irreducible errors must be described. If your measuring instrument is specified to have +/- 0.005°C accuracy, you cannot claim they will not all be reading 0.005°C high, or have some other consistent non-zero error value.
That, I believe, is the crux of our difference on this subject: whether instrument errors may be assumed random or not. My experience is that like instruments typically have similar error profiles. All that is required of the instrument is for it to meet its accuracy specification. It is neither required nor expected that the error profile of a set of instruments be in any way random.
So, more succinctly, 1: NO, 2: NO, but for reasons based upon the applicability of averaging with respect to instrument errors, not upon reducing the effect of random noise.
Let’s go back to your graphics above. I am assuming you were thinking that the rifles were the measurement instruments and the bullet patterns were examples of the rifle accuracy errors as displayed upon a calibrated plot shown as a target. This pair of graphics is actually applicable to one instrumentation scenario: calibration adjustment in a lab. The center of the target represents the desired instrument reading. From those graphics we can make statements about the absolute accuracy of the measuring instruments. Using your targets, a technician would then make the adjustments necessary to center the dots within the target.
When we take the instrument out into the field to make a measurement, we no longer have the absolute value target for comparison. In fact, all we have are the rifle and the bullet pattern. If we average the positions of the bullets to arrive at some single point value, no matter how many bullet holes were created, we have no way of knowing if we are seeing the upper or lower pattern. We are effectively using the Texas Sharpshooter algorithm. (Shoot at a blank wall and draw a circle around the biggest concentration of bullet holes!) All we can say about that average as we increase the number of holes we count is what the most probable value is tending toward. Our rifles could all be shooting high and to the left – after all they were all sighted in at the same station in the production line.
Do you think we have beat this subject to death yet?
Would a floating buoy also attract inorganic matter as well as tend to move toward other floating objects? I suspect a floating buoy warms over time from the micro and macro changes in its immediate environment. Has this been tested?
Unattorney says:
January 28, 2012 at 7:56 am
The Argo floats do spend time drifting around, about nine days out of ten.
They are drifting, however, at a depth of 1,000 metres … I doubt greatly there is much seaweed that lives that deep …
w.
Larry Fields says:
January 28, 2012 at 4:20 am
If you use Excel to do an actual calculation, you will find that yes, you do lose accuracy when you pull out every other data point. If you didn’t … you could estimate the trend as accurately with three data points as with 3000.
Go do the actual experiment, and calculate the actual effect on the standard error of the trend, Larry. You are not as skilled in statistics as you think. I just tried it with three datasets, HadCRUT, UAH, and RSS. I took out every other datapoint.
Since we have half the data points, you would expect the trend error to increase by the square root of 2, or 1.414. You say that the trend error wouldn’t change.
The three actual results were an increase of 1.41, 1.45, and 1.42 in the size of the trend error. Your claim is 100% wrong.
In other words … don’t quit your day job quite yet …
w.
PS—Folks, if you have a claim like this, save everyone time and pull out a spreadsheet and test it before you make yourself look foolish.
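For instance, here is the whole test in a few lines of Python, with a synthetic trend-plus-noise series standing in for HadCRUT, UAH, or RSS (which you would have to download yourself):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly "temperature" series: a linear trend plus noise.
# (A stand-in for HadCRUT/UAH/RSS, which would need to be downloaded.)
n_months = 360
t = np.arange(n_months) / 12.0                      # time in years
y = 0.015 * t + rng.normal(scale=0.15, size=n_months)

def trend_standard_error(x, y):
    """OLS slope and its standard error."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    dof = len(x) - 2
    s2 = np.sum(resid**2) / dof
    se = np.sqrt(s2 / np.sum((x - x.mean())**2))
    return slope, se

_, se_full = trend_standard_error(t, y)
_, se_half = trend_standard_error(t[::2], y[::2])   # keep every other point

print(f"full series trend error : {se_full:.5f}")
print(f"every-other-point error : {se_half:.5f}")
print(f"ratio                   : {se_half / se_full:.2f}  (sqrt(2) is about 1.41)")
```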
Brilliant contribution, George E. Smith; 1/27, 8:16 pm
Bringing Nyquist into the argument is going for the jugular! You have Nyquist issues simultaneously in time, lat, long and depth. “The emperor has no clothes.”
Going back to Willis’s illustration of Accuracy vs Precision, I just want to point out that when looking at the data, all you have are the shot groupings — you don’t have the bullseye or the scope picture to go on. So the “More Precise” grouping will naturally be seen as “More Accurate” without very careful, very often, recalibration. A broken clock is a very precise time measurement device, worthless for accuracy, but very precise.
Finally, I want to tie the issue of independence of the measurements and covariance I brought up concerning Valdez in with Willis’ card counting at the top. I believe that the Argo buoys are probably positively correlated in nearby measurements and that far away they are likely independent. That, however, is a hypothesis on my part.
Willis’ card counting example is one of negative correlation: What has happened in the past within a deck changes the odds in the rest of the deck. That’s how it is possible to turn the odds in your favor. Sampling without replacement. Of course, it takes a lot of knowledge beforehand about the domain and behavior of the deck.
What if, mind you: If, If, If If…., you had some 8 parameter Bessel function that had an uncanny ability to predict ocean temperatures given (Lat, Long, Z, t). You use a random sample of 1/2 of your Argo measurements to calibrate the 8 parameters. This then specifies the 4-D temperature profile of the ocean in the domain of the data. You then take the other 1/2 of the data points, calculate the residual (measured – prediction) and you show that the model accounts for 99.99% of the measurement variance. If, If, If. If you had such a model, then your ability to evaluate the mean temp could be quite high.
Mind you, this is all theoretical. You must first show that magical predictive function, do uncertainty analysis on each of the parameters. But my point is that the measurement of the mean is not simply a function of the standard deviation of the measurements: It really should be the standard deviation of the error (measurement-prediction), which can be a small number.
Even with the theoretical model in mind, Willis’s 100 x more measurements for another significant digit still stands. The theoretical model is critically based upon George E Smith’s observation that the sampling methodology must pass the Nyquist test, or the theoretical model is a bunch of hooey from the start.
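Here is a toy sketch of that calibrate-on-one-half, check-on-the-other-half idea in Python. The smooth latitude-only “model” is purely hypothetical, a stand-in for the magical eight-parameter function; the point is only that the uncertainty of the mean follows the spread of the residuals, not the spread of the raw measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the hypothetical predictive model: temperatures that really
# do follow a known smooth function of latitude, plus a little measurement noise.
n = 3000
lat = rng.uniform(-60, 60, size=n)
true_temp = 25.0 * np.cos(np.radians(lat))          # the "uncanny" model, if it existed
measured = true_temp + rng.normal(scale=0.3, size=n)

# Calibrate on a random half, evaluate residuals on the other half.
idx = rng.permutation(n)
fit_idx, test_idx = idx[: n // 2], idx[n // 2 :]

coeffs = np.polyfit(np.cos(np.radians(lat[fit_idx])), measured[fit_idx], 1)
predicted = np.polyval(coeffs, np.cos(np.radians(lat[test_idx])))
residuals = measured[test_idx] - predicted

print(f"spread of raw measurements : {measured.std(ddof=1):.3f}")
print(f"spread of residuals        : {residuals.std(ddof=1):.3f}")
print(f"standard error of the mean residual: "
      f"{residuals.std(ddof=1) / np.sqrt(len(residuals)):.4f}")
```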
Willis: You might look at this paper (not behind a paywall) using Monte-Carlo calculations to determine how many stations were needed to meet the needs of the US Climate Reference Network.
Vose, Russell S., Matthew J. Menne, 2004: A Method to Determine Station Density Requirements for Climate Observing Networks. J. Climate, 17, 2961–2971.
A procedure is described that provides guidance in determining the number of stations required in a climate observing system deployed to capture temporal variability in the spatial mean of a climate parameter. The method entails reducing the density of an existing station network in a step-by-step fashion and quantifying subnetwork performance at each iteration. Under the assumption that the full network for the study area provides a reasonable estimate of the true spatial mean, this degradation process can be used to quantify the relationship between station density and network performance. The result is a systematic “cost–benefit” relationship that can be used in conjunction with practical constraints to determine the number of stations to deploy. The approach is demonstrated using temperature and precipitation anomaly data from 4012 stations in the conterminous United States over the period 1971–2000. Results indicate that a U.S. climate observing system should consist of at least 25 quasi-uniformly distributed stations in order to reproduce interannual variability in temperature and precipitation because gains in the calculated performance measures begin to level off with higher station numbers. If trend detection is a high priority, then a higher density network of 135 evenly spaced stations is recommended. Through an analysis of long-term observations from the U.S. Historical Climatology Network, the 135-station solution is shown to exceed the climate monitoring goals of the U.S. Climate Reference Network.
http://journals.ametsoc.org/doi/abs/10.1175/1520-0442(2004)017%3C2961%3AAMTDSD%3E2.0.CO%3B2
There appears to be a similar paper for the Argo network, also freely available:
Schiller, A., S. E. Wijffels, G. A. Meyers, 2004: Design Requirements for an Argo Float Array in the Indian Ocean Inferred from Observing System Simulation Experiments. J. Atmos. Oceanic Technol., 21, 1598–1620.
doi: http://dx.doi.org/10.1175/1520-0426(2004)0212.0.CO;2
Experiments using OGCM output have been performed to assess sampling strategies for the Argo array in the Indian Ocean. The results suggest that spatial sampling is critical for resolving intraseasonal oscillations in the upper ocean, that is, about 500 km in the zonal and about 100 km in the equatorial meridional direction. Frequent temporal sampling becomes particularly important in dynamically active areas such as the western boundary current regime and the equatorial waveguide. High-frequency sampling is required in these areas to maintain an acceptable signal-to-noise ratio, suggesting a minimum sampling interval of 5 days for capturing intraseasonal oscillations in the upper Indian Ocean. Sampling of seasonal and longer-term variability down to 2000-m depth is less critical within the range of sampling options of Argo floats, as signal-to-noise ratios for sampling intervals up to about 20 days are almost always larger than one. However, these results are based on a single OGCM and are subject to model characteristics and errors. Based on a coordinated effort, results from various models could provide more robust estimates by minimizing the impact of individual model errors on sampling strategies.
Old44, thank you!
E. M. Smith writes: (5 paragraphs)
“We assume implicitly that the air temperature is some kind of “standard air” or some kind of “average air”; but it isn’t. Sometimes it has snow in it. The humidity is different. The barometric pressure is different. (So the mass / volume changes).
For Argo buoys, we have ocean water. That’s a little bit better. But we still have surface evaporation (so that temperature does not serve as a good proxy for heat at the surface as some left via evaporation), we have ice forming in polar regions, and we have different salinities to deal with. Gases dissolve, or leave solution. A whole lot of things happen chemically in the oceans too.
So take two measurements of ocean temperature. One at the surface near Hawaii, the other toward the pole at Greenland. Can you just average them and say anything about heat, really? Even as large ocean overturning currents move masses of cold water to the top? As ice forms releasing heat? (Or melts, absorbing it)? How about a buoy that dives through the various saline layers near the Antarctic. Is there NO heat impact from more / less salt?
Basically, you can not do calorimetry with temperature alone, and all of “Global Warming Climate Science” is based on doing calorimetry with temperatures alone. A foundational flaw.
It is an assumption that the phase changes and mass balances and everything else just “average out”, but we know they do not. Volcanic heat additions to the ocean floor CHANGE over time. We know volcanoes have long cycle variation. Salinity changes from place to place all over the ocean. The Gulf Stream changes location, depth, and velocity and we assume we have random enough samples to not be biased by these things.”
George E. Smith writes later: (5 paragraphs)
“#1 The sea is not like a piece of rock (large rock), it is full of rivers with water flowing along every which way; meandering if you will, and this meandering is aided and abetted by the twice daily tidal bulge.
So the likelihood of a buoy, no matter how tethered or GPS located, being in the same water for very long is pretty miniscule, so you might as well assume that every single observation is actually a single observation of a different piece of water.
Second and far more important, this, like all climate recording regimens, is a sampled data system.
So before you can even begin to do your statistication on the observations, you have to have VALID data, as determined by the Nyquist theorem.
You have to take samples that are spaced no further apart than half the wavelength of the highest frequency component in the “signal”. The “signal” of course is the time and space varying temperature or whatever else variable you want to observe. That of course means the signal must be band limited, both in space and time. If the temperature, shall we say, undergoes cyclic variations in a 24 hour period, that look like a smooth sinusoid if you take a time continuous record, then you must take a sample sooner than 12 hours after the previous one. If the time variation is not a pure sinusoid, then it at least has a second harmonic overtone component, so you would need one sample every six hours.
And if the water is turbulent and has eddies with spatial cycles of say 100 km, then you would need to sample every 50 km. OOoops !! I believe that all of your spatial samples need to be taken at the same time, otherwise you are simply sampling noise.”
Both men nail the problem. E. M. Smith offers a commonsense explanation, making the point that all Warmists’ statistical work on the ocean assumes that there are no differences between any two “sections” of ocean measured for temperature. George E. Smith then introduces the Nyquist Theorem to make the point that the Warmists’ sampling regime is far from adequate to its purpose.
Both points support my argument that if one is to do statistical work at all then one must have an “event space” that is well defined by well confirmed physical hypotheses which serve the role of identifying the relevant events and the kinds that they fall into. Once the event space is so defined, then temperature measurements can be sorted in accordance with the physical hypotheses governing the natural phenomena, such as oceanic “rivers,” in the oceans sampled. Without such knowledge of the oceans sampled, statisticians are assuming that their temperature measurements are wholly plastic; that is, they are assuming that any two temperature measurements are comparable to one another regardless of the differences in the areas of ocean sampled. Such assumptions go beyond the offense of “a priori” science and become a clear cut example of plain old cheating. (Maybe not cheating, but the alternative is idiocy.)
By the way, all the counterexamples to Willis’ card analogy fail because each of them introduces knowledge about the cards, about a child’s growth patterns, or whatever. Yet the Warmists’ use of ARGO data omits all reference to any differences among areas of the ocean sampled so that the event space is plastic. So the counterexamples fail because each introduces knowledge that makes the event space non-plastic.
Why is it that every Warmist fails to understand the requirements of empirical science or seeks to avoid those requirements?
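George E. Smith’s undersampling point is easy to see in a toy example. The sketch below (Python; a made-up daily temperature cycle with a second-harmonic overtone, nothing to do with actual Argo sampling) compares the mean of an adequately sampled signal with the mean you get sampling once a day:

```python
import numpy as np

# A toy "temperature signal": a 24-hour cycle plus a second-harmonic overtone,
# as in the quoted example. Nothing here is real Argo data.
def temperature(hours):
    return (15.0 + 5.0 * np.sin(2 * np.pi * hours / 24.0)
                 + 2.0 * np.sin(2 * np.pi * hours / 12.0))

month = 24.0 * 30                                        # hours in a month

for start in (3.0, 6.0, 15.0):                           # time of day of the first sample
    fast = temperature(np.arange(start, start + month, 3.0))    # every 3 h: adequately sampled
    slow = temperature(np.arange(start, start + month, 24.0))   # once a day: undersampled
    print(f"first sample at {start:4.1f} h -> "
          f"3-hourly mean {fast.mean():5.2f}, daily mean {slow.mean():5.2f}")

# The 3-hourly means land on the true mean of 15.00 no matter when sampling
# starts; the once-a-day means swing by several degrees with the time of day.
```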
Willis, your blackjack story poses a question. If the casino is so concerned that dealer might cheat in favor of (collaboration with) a customer, doesn’t that imply that the casino admits that a dealer is capable of cheating? If so, shouldn’t the customers also be concerned that the dealer might cheat?
That 1% edge could evaporate very quickly.
Frank says:
January 28, 2012 at 12:55 pm
I will definitely look at those two documents, Frank, many thanks.
However, the beauty of my proof that the Argo network can’t obtain the claimed accuracy is that it doesn’t depend on their choice of methods. They can have all the justification in the world for their methods. I’m claiming, not that the methods are faulty, but that the result is unsupportable.
w.
Theo Goodwin says:
January 28, 2012 at 1:31 pm
“Why is it that every Warmist fails to understand the requirements of empirical science or seeks to avoid those requirements?”
This can’t be repeated often enough.
scarletmacaw says:
January 28, 2012 at 2:33 pm
A couple of thoughts on that.
First, the dealer working alone can’t cheat for herself. Her actions in handling the money are circumscribed and carefully watched. Without a partner there’s no way she’s putting casino money in her purse.
So if the dealer is cheating, she’s cheating for the casino.
Cheating is something that casinos do. They have special dealers on call that they can ring in when needed. Those dealers are so good that they can steal your underwear right when you’re scratching your unmentionables and you wouldn’t notice.
But a casino cheating on a day in day out basis? No way. Two reasons. Dangerous, and not needed.
Dangerous because in Nevada the Gambling Commission is very concerned that they have a squeaky-clean state image. So they watch for cheating quite closely, and punish it severely.
Also dangerous even without the law, because if the rumor goes round the high rollers that you’re cheating them, there are plenty of other casinos happy to take their money.
Not needed because the casino is a freaking money machine. Most businesses are happy to make ten percent a year on their money.
Casinos are making ten percent a day on your money.
So there’s no need for them to cheat. Be clear, be very clear, that the casinos are not gambling. Their income is a sure thing, not a gamble in any sense of the word. They will make, I forget the exact figure, but from memory about 15% off the keno bets. Sure, it’ll go up and down, but at the end of the year they will in fact make that profit off of the dollars bet. That’s not a gamble. That’s a sure thing.
So the 1% edge is indeed there, as my friend Jimmy Chan and other folks right here on this thread can attest …
w.
Theo Goodwin says:
January 28, 2012 at 1:31 pm
One thing I learned early in this game is to always avoid absolute statements.
You like that? An absolute statement saying avoid absolute statements.
“Every” AGW supporter doesn’t do anything. There are always exceptions. There are always freaks of nature, sports.
In addition, my horizons are quite limited. I know X people. There are X times 10^6 people out there. The blogs I read only have certain people commenting. I range widely, sure, but I don’t know a ten-thousandth of the “Warmists”.
So a real question might be:
I don’t use the term “Warmists” myself, after a couple of people objected. I don’t like being called a Denier, they didn’t like being called a Warmist, seemed fair.
Willis Eschenbach says:
January 28, 2012 at 5:00 pm
“One thing I learned early in this game is to always avoid absolute statements.
You like that? An absolute statement saying avoid absolute statements.”
Scores with me. Enjoyed your post.
Call it 3,000 Argo buoys during the time of the study. More now, I think 4,000. Now they are claiming much smaller errors.
Three profiles per buoy per month. That means that during the study we were sampling the global ocean at a frequency of 108,000 times per year, or about 300 times per day.
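In code, for anyone who wants to adjust the assumptions (the 365-day year is the only number I’ve added):

```python
# Rough sampling-rate arithmetic for the Argo network during the study period.
N_BUOYS = 3000
PROFILES_PER_BUOY_PER_MONTH = 3

profiles_per_year = N_BUOYS * PROFILES_PER_BUOY_PER_MONTH * 12
profiles_per_day = profiles_per_year / 365.0

print(f"profiles per year: {profiles_per_year:,}")     # 108,000
print(f"profiles per day : {profiles_per_day:.0f}")    # about 300
```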
Certainly, this is not an optimum solution, because of the size and variability of the ocean. However, that should not stop us from taking the measurements that we have and understanding what they do mean, and in the case of the errors, what they don’t mean about the temperature.
w.
Before someone busts me regarding spatial sampling and Nyquist, temporally the finest resolution we’re looking for is months, and we’re taking ~ 300 samples per day.
Spatially, for our purposes all we’re interested in is the coarsest frequency, the entire globe. We don’t care about small fluctuations. So I don’t see how Nyquist applies.
Am I missing something?
w.
Camburn says:
January 28, 2012 at 4:47 pm
Theo Goodwin says:
January 28, 2012 at 1:31 pm
“Why is it that every Warmist fails to understand the requirements of empirical science or seeks to avoid those requirements?”
“This cann’t be repeated often enough.”
Thanks, Camburn. I want to take just a moment to dredge up the heart of my claim and make it clear as a bell for everyone. The claim is very important as I will explain.
No “consensus” climate scientist working at this time will address the claim that there are no well confirmed physical hypotheses that can explain even one physical connection between increasing CO2 concentrations in the atmosphere and the behavior of clouds and related phenomena. They will not address the claim because they know that no such physical hypotheses exist. And that is the scandal of climate science today. Even Arrhenius knew that without the “forcings” and “feedbacks” there is no way to know what effects increasing concentrations of CO2 might have on Earth’s temperature. Thus, “consensus” climate scientists are mistaken to claim that there is scientific evidence that supports the CAGW or AGW thesis.
When I say that “consensus” climate scientists either do not understand the requirements of empirical science or seek to avoid those requirements, it is their failure to address the nonexistence of these necessary physical hypotheses that is uppermost in my mind. This is the scandal of climate science and every critic should be doing all that he can to hold “consensus” scientists’ feet to the fire. They are beaten on the science. All that is necessary is that we press the case.
What “consensus” climate scientists are willing to discuss are unvalidated and unvalidatable computer models and the laughable proxy studies. Those two topics are nothing but grand Red Herrings.
Finally, if you need proof that “consensus” climate scientists have no well confirmed physical hypotheses that can explain and predict the effects of CO2 on clouds, just ask them for the hypotheses. None have produced them and none can produce them. The necessary physical hypotheses do not exist.
From Theo Goodwin January 28, 2012 at 1:31 pm, quoting E M Smith and George E. Smith:
This is the simple crux of the matter:
1. “E. M. Smith: So the likelihood of a buoy, no matter how tethered or GPS located, being in the same water for very long is pretty miniscule, so you might as well assume that every single observation is actually a single observation of a different piece of water.”
2. “…. that all of your spatial samples need to be taken at the same time, otherwise you are simply sampling noise.”
Any calculation of SEMs for those plotted curves must be a very strangely fabricated web of modifiers, smoothings and adjustments.
For the statisticians:
How well can we derive Global Ocean Indicators from Argo data? K. von Schuckmann and P.-Y. Le Traon
Ocean Sci. Discuss., 8, 999–1024, 2011
http://www.ocean-sci-discuss.net/8/999/2011/
doi:10.5194/osd-8-999-2011
© Author(s) 2011. CC Attribution 3.0 License.
http://www.ocean-sci-discuss.net/8/999/2011/osd-8-999-2011-print.pdf
re: Argo bias.
Has there been anything written on the “flight” behavior of the Argo submersible that propels itself by changing buoyancy and tilting vanes? If these were hot air balloons they would center themselves in a column of warmer air – and appear to be pushed out of a colder column when sitting astride a boundary – this may even be an amplifier (a small change, below the resolution of the device, could have a significant positional effect?).
And large bodies of water have much more energy (mass in motion) (in very low frequency turbulence) than air. So how do we know the instrument isn’t “heat seeking” given the opportunity? And what delta in heat will it seek? Smaller than the instrument’s resolution, perhaps?
Willis,
I’ve also done high precision and accuracy temperature measurements in a laboratory. Measuring to 0.001 degree over a fairly wide range of temperature can be tedious, but not at all impossible. But the ARGO buoys do not need to measure a wide range of temperature. I would be very surprised if the individual buoys don’t have a measurement precision and accuracy of better than 0.004 degree.
DeWitt Payne says:
January 31, 2012 at 1:41 pm
Thanks, DeWitt. The listed accuracy is 0.005°C. The question is not how accurate individual measurements are.
The question is, can 30 Argo floats measure the temperature of six hundred and forty million cubic kilometres of water to an annual accuracy of ± 0.04°C? Each float takes 108 temperature profiles per year, and each float has to cover about 22 million, 240 thousand cubic kilometres of water.
Let me ask you as someone with experience in taking the temperature of liquids, DeWitt … does that sound in any way possible? Measuring to the nearest four hundredths of a degree, with one temperature profile for every 200,000 cubic kilometres of water?
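Here is the volume arithmetic in a few lines, using the figures above, for anyone who wants to fiddle with the numbers:

```python
# Volume of ocean each float would have to represent in the 30-float scenario,
# using the round figures quoted in the comment above.
VOLUME_PER_FLOAT_KM3 = 22_240_000     # cubic kilometres per float
PROFILES_PER_FLOAT_PER_YEAR = 108

km3_per_profile = VOLUME_PER_FLOAT_KM3 / PROFILES_PER_FLOAT_PER_YEAR
print(f"one profile per {km3_per_profile:,.0f} cubic kilometres of water")
# roughly 200,000 cubic kilometres of water per single temperature profile
```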
w.
Instrumentation is what I did for ten years, and no way could I guarantee ±0.005°C of accuracy on a temperature device that has to measure such a wide temperature range, and even then it would require six-month calibrations. These floats only cost $15,000 (only?), and no way can a device bob around in the ocean for 5 years and remain that accurate.