Robert Balic writes:
I recently read the Willis Eschenbach article Argo, Temperature, and OHC (http://wattsupwiththat.com/2014/03/02/argo-temperature-and-ohc/) which reported the trend in the global ocean temperatures as 0.022 ± 0.002 deg C /decade and Steven Casey asked
“Can we believe we have that much precision to 0.002 deg C/decade? And we have not yet measured a full decade.”
Also, there was a reply to a comment of mine on The Conversation mentioning the uncertainty which stated “The temperatures in the Argo profiles are accurate to ± 0.005°C http://www.argo.ucsd.edu/FAQ.html#accurate“.
I checked the website http://www.argo.ucsd.edu/How_Argo_floats.html and found that
“The SBE temperature/salinity sensor suites is now used almost exclusively. In the beginning, the FSI sensor was also used. The temperature data are accurate to a few millidegrees over the float lifetime,” and “The temperatures in the Argo profiles are accurate to ± 0.002°C”.
The temperature profiles might be accurate to ± 0.002°C now, but weren’t the measurements made to the nearest 0.1°C previously? I looked up the accuracy of their thermistors earlier this year and it was written as 0.1°C. A high precision commercial instrument usually has a claimed ± 0.05°C accuracy, so they most likely did record to the nearest 0.1°C until they installed the new units. Installing new instruments this year does not retroactively shrink the error in the earlier measurements, so they can’t now claim the smaller error for the previous trend.
Why is it relevant that the temperature measurements were taken to the nearest 0.1°C if they looked at the average of over 100 measurements? Well if you take my height for example and measure me to the nearest centimeter 100 times, then the average would probably come out to be 183cm with a standard deviation of 0. Perfect!
If you had recorded my height to the nearest millimeter having taken 50 measurements of 1825mm and 50 measurements of 1835mm, you would get an average of 1830mm with a standard deviation of 5mm or 0.5cm. A random spread of measurements over that range would bring the SD down to about a quarter of a centimeter and the error estimate is usually twice this value.
The rule of thumb that I was once taught is that your minimum error is plus or minus the value of the increment that the measurements were made with (e.g. ± 1 cm) when the number of measurements is small, or half this value when there is a large number of measurements (e.g. ± 0.5 cm). So if the Argo floats only measured in increments of 0.1°C then the uncertainty in the mean of many measurements is at least ± 0.05°C. Hence, a trend of 0.02°C/decade measured over less than a decade is utterly meaningless.
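A minimal numerical sketch of the rounding point above (Python with numpy; the true height, noise level and sample size are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical true height 183.05 cm with 1.2 mm of measurement noise.
readings = rng.normal(183.05, 0.12, size=100)

cm = np.round(readings)             # recorded to the nearest centimetre
print(cm.mean(), cm.std(ddof=1))    # 183.0 with SD ~0: looks "perfect"

mm = np.round(readings, 1)          # recorded to the nearest millimetre
print(mm.mean(), mm.std(ddof=1))    # ~183.05 with SD ~0.12 cm

# The cm-rounded data give a naive standard error of ~0, yet their mean is
# off by ~0.5 mm, and no number of repeats will fix that: once the real
# spread is smaller than the increment, the increment sets the error floor.
```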
Someone should also have a word in the ears of those at The University of Washington.
“Because total Arctic sea ice volume from PIOMAS is computed as an average over many grid points, the random error (scatter in above figures) doesn’t affect the uncertainty in the total ice volume and trend very much.”
This is the excuse to ignore the large errors implied by this plot.
Where the model predicts a 4 m thickness, the submarine data is spread evenly between 2.5 and 6 m. The range is nearly 0 to almost 3.5 m where the estimate from the model is 1 m; that is over 100% uncertainty in the thickness, yet they are absolutely sure that the ice is in a death spiral.
![Fig2[1]](https://wattsupwiththat.files.wordpress.com/2014/10/fig21.png?resize=609%2C630&quality=75)
Something to do with past satellite records of Arctic and Antarctic sea ice. Enjoy.
NIMBUS: Recovering the Past: http://youtu.be/bvGIE1y3cXA
https://mobile.twitter.com/NJSnowFan/status/520446647038644225
See the IPCC 1990 graph.
http://stevengoddard.wordpress.com/2014/10/08/monthly-serreze-propaganda-update/
“The temperature profiles might be accurate to ± 0.002°C now, but weren’t the measurements made to the nearest 0.1°C previously?”
They aren’t saying that the temperatures are accurate to ± 0.002°C. They are quoting a trend with statistical error ± 0.002°C/decade. Different units, for a start.
They aren’t measuring the trend with a thermistor; they are calculating the trend, which is a weighted average over time and many observations, and reporting the population-based standard error of that average.
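A minimal sketch of what a slope standard error of this kind is (Python; the scatter and trend values here are invented, not Argo’s):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical decade of monthly global-mean anomalies: an assumed true
# trend of 0.022 degC/decade buried in 0.05 degC of monthly scatter.
t = np.arange(120) / 120.0                 # time in decades
anom = 0.022 * t + rng.normal(0.0, 0.05, size=120)

fit = stats.linregress(t, anom)
print(f"trend = {fit.slope:.3f} +/- {fit.stderr:.3f} degC/decade")

# The "+/-" here is the statistical standard error of the fitted slope, in
# degC/decade; it says nothing about the accuracy of any one thermometer.
```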
So let’s say at time 0 the reading is 10.0C and it’s 10.1C after ten years. They could say the rate was 0.01 per year +/- 0.002? Is there enough precision to conclude this?
“They could say the rate was 0.01 per year +/- 0.002?”
There you see the units issue. It was ± 0.002°C/decade, or ± 0.0002°C/yr, or ± 0.02°C/cen. So how many °C?
Nick, are you actually claiming that the accuracy of the length of a meter decreases over time? Your reasoning and logic is backwards.
If they measured over a century a rate of 0.02°C/decade then you would have a difference over that time of 0.2°C and an error of ± 0.05°C would mean that there was a significant trend (if the data was smooth and no further error due to sampling).
I don’t doubt the manufacturer’s claim, as it should be possible to build such an instrument; it’s the previous FSI instrument that wouldn’t have recorded the temperatures with enough precision to make the trend meaningful.
Yes they are. That is exactly what they are saying: “The temperatures in the Argo profiles are accurate to ± 0.002°C …”
No they aren’t. They are stating the accuracy of the recorded temperatures. They do not say “0.002°C/decade”, they say “0.002°C.” This is because they are not talking about temperature trends, they are talking about temperatures.
Yes, you are using different units than those stated. Should have been your first clue that you were talking out of your ass.
That should have been your second clue. They don’t say “The temperature trends calculated from the ARGO profiles are accurate to ± 0.002°C/decade.” They say “The temperatures in the Argo profiles are accurate to ± 0.002°C …”
You are correct, JJ. Nick is wrong.
JJ, meet the incorrigible Nick Stokes.
What really gets me is that nine times out of ten the raw data doesn’t show a trend and has to be ‘adjusted’ for one to become visible. Doesn’t this basically prove the measurement error is greater than the trend?
Indeed, JJ, you are right and I was wrong. I saw the mention of 0.022 ± 0.002 deg C /decade in the first para, with Stephen Casey’s question featured, and didn’t notice the later reference to 0.002°C as quoted accuracy. My apologies to the author.
Kudos to Nick Stokes for manning up and admitting that he was wrong.
I’ve had my differences with Nick, and that will continue until he sees that I’m right.☺
I mention this only because it is so extremely rare for anyone on the alarmist side to ever admit they were wrong about anything — when we know they’re wrong about everything. Well, as far as their alarming predictions go, anyway.
Well, I do have to add that the 0.002 °C is the manufacturer’s claim.
Maybe I misunderstand. Are you implying – but not saying – that with 1000 thermometer readings, each one accurate to ± 1°C, you can get a weighted average accurate to ± 0.001°C? If temperatures at the start of a decade and at the end of a decade are both known with an accuracy not exceeding ± 0.002°C, what is the accuracy of a trend?
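The √N arithmetic behind that question can be checked directly. A sketch under the most favourable assumption possible (independent, zero-mean errors):

```python
import numpy as np

rng = np.random.default_rng(2)

# 10,000 trials of averaging 1000 readings, each carrying an independent
# random error of sd = 1 degC.
means = rng.normal(0.0, 1.0, size=(10000, 1000)).mean(axis=1)

print(means.std())   # ~0.032 degC, i.e. 1/sqrt(1000)

# Even in this best case the mean is good to ~0.03 degC, not 0.001 degC;
# +/- 0.001 would need ~1,000,000 readings, and then only if no part of
# the error is shared (systematic) between instruments.
```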
Puhleeeze! Real scientists know how significant figures work. Witch doctors don’t.
That scatter plot is interesting since it indicates that PIOMAS consistently overestimates the thickness of thin ice and underestimates thick ice. Note that there are almost no dots below the green line for thicknesses below 1.5 meters (5 feet) and almost none above it over 4 meters (13 feet). Incidentally this implies that PIOMAS systematically underestimates the amount of multi-year ice.
Shhhh…. don’t give a hint to the climate obsessed. Watching them looking for the missing melt will be more fun than watching them look for the missing heat.
On Arctic sea ice thickness. Here is a small sample.
CO2, the trace time traveller.
Oh boy, they sure could sound alarmist back in the day. I thought it was just a modern phenomenon.
Oh boy is right, Jumbo. You should write up a study on this, or someone should.
#!!&^*!! autocorrect. Read “Jimbo” above.
If that happened today we would have a headline of “Walrus dental record of climate change found at the North Pole”.
That’s a very interesting, and telling, observation. Thanks for pointing this out!
rip
A line of best fit would probably intercept the y-axis at 1 m, so it’s overestimating, but for earlier estimates as well as later ones. I’m not sure what that means for the death spiral. Does it make it much steeper?
The only use for a ± 0.005°C figure for the world’s oceans is for the GREENS to cry “we are going to die, the oceans are evaporating”.
It’s a little worse than that. Even if you stayed in the same stadium in NY, if you’re not measuring the same 1,000 people then the law of large numbers still doesn’t apply. It could only work if the float followed the same volume of water wherever it went. Since this can never happen, you’re SOL.
Um, haven’t you forgotten something? Firstly, ice doesn’t melt all year, and at some places it may never melt, so you can’t average over the whole of the Arctic to find the energy requirement. Nor is it uniform: there is much greater ice loss at the higher latitudes. If you take this into account you find that you need 10-20W per square meter over a much smaller time and space to account for the ice loss where it’s actually occurring, and 10-20W per square meter is at least 15 times the amount of energy imbalance supposedly generated by CO2 and its supposed feedbacks. This means that ice loss is NOT being caused by CO2.
“10-20W per square meter is at least 15 times the amount of energy imbalance supposedly generated by CO2 and its supposed feedbacks. This means that ice loss is NOT being caused by CO2.”
Not necessarily. The alleged energy imbalance caused by CO2 is a global imbalance. Wind and water could be (and are, by some, thought to be) collecting energy in consequence of that imbalance and transporting it to the poles, thereby concentrating its action at the poles on the process of melting ice.
But the larger issue is spatial coverage, and the fact that the ARGO buoys are free floating.
Even if many measurements are averaged, they are not measuring the temperature at the same place twice. Every measurement is taken at a different place, where it may not be surprising to see small differences in temperature.
If I were to measure the average height of 1,000 people in a football stadium in New York, and then next month measure the height of 1,000 people in a hospital in Washington, the following month in a school in Seattle, the following month in the streets of Paris, the following month in a restaurant in Brussels, the following month in a park in Berlin, the following month at a concert in Hamburg, the following month in a market in Cairo, etc., does the law of large numbers apply with the same validity?
The claimed for errors in climate data are unrealistic, period.
There are lots of BIG errors in climate science without resorting to small ones.
Exactly! Now I don’t have to say it!
True. I omitted a bit about this being the uncertainty in height for one person. What they are doing is like measuring samples of 3000 people in NY to find the trend in heights of adults over a decade.
Climate scientists can generate very accurate figures from what appears (to non climate scientists) to be garbage, or at best low-resolution / imprecise data. They use “gut feeling”, “adjustments”, “infilling”, “homogenization” and good old-fashioned “making stuff up” to ensure the results are “on message”. When the reality doesn’t match their theory they assume reality is wrong.
How else can you possibly explain their absolute certainty, to tiny fractions of a degree – based on things like tree-rings, lake sediments or ice cores. It’s nonsense.
+10. Yes, the old fashioned ways of fooling people seem to work well.
You missed “Kriging”.
One observation about measuring your height vs measuring buoy temperatures.
When you measure your height with many observations the height (ideally) is unchanging — so you are making many observations of a constant.
The measurement of temperature at a point in time and space on a buoy could be considered an independent observation of “something” — which has essentially nothing to do with the last “something” — or the next. Does the CLT (etc.) apply in this case? Considering the drift on the buoy, and that each measurement is independent, I am not sure that we can say that the stated accuracy is achieved. Just curious.
Actually, height IS changing. It varies during the day. Getting up in the morning, people are at their tallest height.
Height gradually shrinks during the day thanks to gravity compressing the padding in people’s joints.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1545095/pdf/archdisch00801-0068.pdf
Climate change driven?
Bill_W
Yes, during our nighttime horizontal climate we lengthen as the padding rebounds, however when we change to our usual daytime vertical climate we slowly shrink. I have thought about graphing it but then I realized I would need to do an IRB before performing and publishing such a graph and decided it isn’t worth the paperwork.
Besides, unlike normal climate science, I would want to include all the confounding variables, like whether the subject was standing or sitting most of the day, or what the effects of an afternoon nap may be. Then there is the problem of figuring out what minimum sample size would be needed to accurately represent the universe of people at an appropriate standard error. Then there are the unknown unknowns that would throw a monkey wrench into the whole thing. [only partly sarc]
So if I go to Antarctica I will be taller?
In reply to Bob Boder.
http://curious.astro.cornell.edu/question.php?number=310
The strength of gravity is about 0.3% less at the equator than at the poles, so you would be an infinitesimal amount shorter at the poles.
With NO gravity, astronauts are about 3% taller than on earth.
http://www.space.com/19116-astronauts-taller-space-spines.html
WillR
CLT or Central Limit Theorem is time independent – so convergence to errors that follow a normal distribution could take 1 week or 50 years (i.e., beyond the lifetime of the sensor). Real sensors often show discontinuous drift when measured against more accurate devices, something that keeps many employed in various national standards agencies. So we can’t just assume that CLT applies. We need to regularly calibrate devices and, most important, make sure we have consistency of measurement method. Change how you measure something and you increase the error.
The overall problem here is that the accuracy of the measuring device is not the only source of error; there is also the changing environment, the drift in the sensor and other things. As a rule of thumb a good thermocouple or thermistor would give you ± 0.5 °C but more likely ± 1 °C. And that’s in a lab environment with good thermal contact and placement.
The other thing is that your accuracy is only as good as the calibration device. So if your national standards agency (NIST, NPL, etc.) cannot produce a 0.002°C reference then your sensor cannot claim that accuracy.
On the other hand you could just use a bunch of assumptions and forget to remind people that you’ve done that. Now where have I seen that done before I wonder?
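A sketch of the point about drift versus random noise (the fleet size, noise level and drift rate are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n_sensors, n_months = 3000, 120

# Independent random sensor noise (assumed sd 0.1 degC) averages away...
noise = rng.normal(0.0, 0.1, size=(n_months, n_sensors))

# ...but a drift shared in sign across the fleet does not. Assume every
# sensor drifts upward at a hypothetical 0.001 degC per year.
months = np.arange(n_months) / 12.0
drift = np.outer(months, np.full(n_sensors, 0.001))

fleet_mean = (noise + drift).mean(axis=1)
print(fleet_mean[-1] - fleet_mean[0])   # ~0.01 degC of spurious "warming"

# Averaging over more sensors shrinks the noise term as 1/sqrt(N) but
# leaves the common drift untouched, which is why calibration matters.
```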
…”so you are making many observations of a constant.” Lately I have been growing taller than my hair!
The major problem with virtually all long term climate data is the lack of random replicated samples. Nearly every other branch of field science requires both random sampling and replication in each set to define the size of errors. Climate science generally appears to think it jolly to take ONE sample and then proceed to perform all of the statistical analysis as if the variance of ONE sample was not equal to infinity. Has ALWAYS been garbage in and garbage out, to the closest infinity.
When you give a highly precise computer to somebody totally ignorant of metrology, you get pearls of wisdom such as “They aren’t measuring the trend with a thermistor; they are calculating the trend”.
Discerning minds will understand the pearl to mean that the “trend” is a value with only tenuous connections to reality.
IOW, why spend money on a measurement system if you don’t measure something with it, instead of asking the proverbial electromechanical ass, aka a computer?
As one of the people who ran a maritime mobile weather station in the 60’s and 70’s I call bullshit on this whole sorry tale. We took air and sea temperatures to the nearest half degree by mark one eyeball. When you listen to these computer brained prats snivelling about 1000ths of a degree you just have to laugh at the stupidity on show. 0.7 degree rise in atmospheric temperature in 150 years based on random observations, almost none in the southern hemisphere, made to the nearest 1/2 degree. Give it a break.
Ivor – I remember reading about a guy who worked in Alaska back in the 50s/60s. He said often they did not go out to read the temps; he just filled them in off the top of his head. He said in those days no one thought that these figures would become so important in the future.
This account of the sources of 20th-century ocean temperatures by the late Dr Robert Stevenson chimes perfectly with your comments.
“Yes, the Ocean Has Warmed; No, It’s Not “Global Warming” by Dr. Robert E. Stevenson
http://www.21stcenturysciencetech.com/articles/ocean.html
“Surface water samples were taken routinely, however, with buckets from the deck and the ship’s engine-water intake valve. Most of the thermometers were calibrated into 1/4-degrees Fahrenheit. They came from the U.S. Navy. Galvanized iron buckets were preferred, mainly because they lasted longer than the wood and canvas. But, they had the disadvantage of cooling quickly in the winds, so that the temperature readings needed to be taken quickly.
I would guess that any bucket-temperature measurement that was closer to the actual temperature by better than 0.5° was an accident, or a good guess. But then, no one ever knew whether or not it was good or bad. Everyone always considered whatever reading was made to be precise, and they still do today.
The archived data used by Levitus, and a plethora of other oceanographers, were taken by me, and a whole cadre of students, post-docs, and seagoing technicians around the world. Those of us who obtained the data, are not going to be snowed by the claims of the great precision of “historical data found stored in some musty archives.”
from the same paper:
“It sometimes seems as if I’m living in a “time-warp” in which some people, and scientists, are unaware that rational life existed before their birth—or before they got out of the sixth grade. Yet, we marine scientists did not enter the second half of the 20th century without a fair bit of understanding of the thermal ocean.”
also from the same paper:
In 1991, when the IUGG and its associations met in Vienna for their General Assembly, the presidents and the secretaries-general of the four associations I’ve mentioned, discussed the program we would propose to forward to the International Commission of Scientific Unions (ICSU) for consideration at the 1992 Rio de Janeiro Conference. We all decided not to prepare any programs!
In our joint statement, which I paraphrase here, we noted that “To single out one variable, namely radiation through the atmosphere and the associated ‘greenhouse effect,’ as being the primary driving force of atmospheric and oceanic climate, is a simplistic and absurd way to view the complex interaction of forces between the land, ocean, atmosphere, and outer space.”
Furthermore, we stated, “climate modelling has been concentrated on the atmosphere with only a primitive representation of the ocean.” Actually, some of the early models depict the oceans as nearly stagnant. The logical approach would have been to model the oceans first (there were some reasonable ocean models at the time), then adding the atmospheric factors.
Well, no one in ICSU nor the United Nations Environment Program/World Meteorological Organization was ecstatic about our suggestion. Rather, they simply proceeded to evolve climate models from early weather models. That has imposed an entirely atmospheric perspective on processes which are actually heavily dominated by the ocean.
I would say that as the arctic ice is affected by oceanic movements (e.g. the warm Gulf Stream water ends up near the extremes of winter ice to the north of Norway), it is probably sensible not to make definitive judgements on how ice behaves until one has observed its behaviour for somewhere between 50 and 100 years. That would suggest that 2030 at earliest is probably the first meaningful datapoint for analysis of 1979 – 2028 satellite data.
This assumes that satellite data is all absolutely accurate when compared to manual measurement methods (I’m assuming that this is so, but experts can no doubt correct me).
Assuming that both arctic and antarctic measurements are equally accurate, currently the total sea ice is above the 30 year mean, which is hardly death throes…..
My take, after reading many articles and posts on this, is that climate science is so dominated by so-called global warming bias that even basic processes like homogenization end up corrupted. In real science homogenization yields more data and identifies underlying trends better. In climate science homogenization is used to hide trends and destroys data.
I agree with you. My local National Weather Service office at Pleasant Hill Missouri has a systematic bias built into its daily average temperature calculations. All averages are rounded up. At first blush, it looks reasonable and consistent. However, these averages are used to calculate heating and cooling degree days, with the result that, at the end of a month or season, heating degree days tend to be understated and cooling degree days tend to be overstated. This makes any time period appear to be warmer than it actually was, if one only looks at this parameter.
I note that the daily averages are not used to calculate monthly or seasonal averages.
The often-hyped “Arctic death spiral” is a very much simplified, very clever – but very false – piece of pure propaganda.
From August 22 through March 22, every extra sq meter of sea ice around the Antarctic reflects more energy back into space than the loss of a sq meter of sea ice in the Arctic can gain.
It is only in those five fleeting months of April, May, June, July and August that even a little bit of “arctic warming” occurs if sea ice is lost.
And even that statement compares only the heat absorbed and reflected by an open-ocean surface against an ice-covered surface. When sea ice is lost in the Arctic from today’s conditions, more heat is lost by increased evaporation, conduction, convection, and LW radiation from the now-open surface than when ice is present. Add those increased losses into the equation, and open ocean in the Arctic becomes a net loser of heat.
Net effect?
More open ocean in the arctic cools the planet nine months of the year.
More ice around Antarctica cools the planet every month of the year.
The death spiral is up there with the hockey stick: Simple to understand, deceptive as heck.
I think the cooling period is longer than this. Once the angle of incidence gets over about 75 degrees, smooth water has about the same albedo as ice. So only for the last month or so of NH summer does the angle get smaller than this, but since the Earth is a ball, a few hours east and west of the Sun line that too ends up with an AoI larger than 75-80 degrees. So for, let’s say, 6 hours a day a single location would be net energy positive, but the rest of the day it’s net negative. When you calculate cooling rates with S-B between open water and very cold clear skies, the water is dumping more heat to space than it’s collecting (see the sketch below).
There will be no death spiral; what we are seeing is built-in temperature regulation. The last time this happened was in the 30’s and 40’s, so it took about 70 years for a complete heating/cooling cycle.
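The grazing-angle claim is easy to check for the idealised case of a smooth water surface using the Fresnel equations (this ignores waves, foam and diffuse sky light, so it is only a rough guide):

```python
import numpy as np

def water_reflectance(theta_deg, n1=1.0, n2=1.33):
    """Unpolarised specular (Fresnel) reflectance of smooth water."""
    ti = np.radians(theta_deg)
    tt = np.arcsin(n1 * np.sin(ti) / n2)            # Snell's law
    rs = ((n1*np.cos(ti) - n2*np.cos(tt)) / (n1*np.cos(ti) + n2*np.cos(tt)))**2
    rp = ((n1*np.cos(tt) - n2*np.cos(ti)) / (n1*np.cos(tt) + n2*np.cos(ti)))**2
    return 0.5 * (rs + rp)

for angle in (0, 60, 75, 80, 85):                   # angle of incidence
    print(angle, round(float(water_reflectance(angle)), 2))
# 0: 0.02, 60: 0.06, 75: 0.21, 80: 0.35, 85: 0.58 -- at grazing incidence
# smooth water reflects a large fraction of the direct beam, approaching
# typical quoted sea-ice albedos of roughly 0.5-0.7.
```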
Don’t forget the insulation effect that sea ice has in the Antarctic/Arctic . The more sea ice you have, the less heat the warm water underneath the ice loses.
+/- 0.002 degrees ain’t even noise – no matter what you are using to measure the temperature.
A fish burp within 500 feet of the sensor would cause more of a change than that.
+/- 0.002 degrees is what you do when you are looking for any signs of heat. It is a desperate, desperate measure [no pun intended].
It’s always funny to see claims of accuracy greater than an instrument is capable of giving, due to the application of maths. Yes, you can do averages etc., but the reality remains: if you can only accurately measure to one decimal place, you can only give accurate measurements to one decimal place. After that it’s intelligent guesswork.
Or in some cases UNINTELLIGENT guesses.
I agreed entirely with every word until you got to, “After that its intelligent guesswork.”
Please define “intelligent”.
Intelligent guesswork: A statistical method designed to extract money from politicians and free PR from reporters. In Climate Disruption™ “science” it usually involves a complete and total disregard of the mathematics of reality and relies on the misapplication of statistical manipulations to create a wholly fictional result. The desired end result is another 1 to 3 years at the public trough working diligently to create and publish said fiction in order to get to the next round of grant money. Kind of like politicians having to campaign every 2, 4 or 6 years to get elected… but with far less honesty.
Or motivated spinning.
It’s common to hear quotes of system accuracy, when they really mean measurement resolution.
They might be able to resolve a relative change of 0.002°C, but an absolute accuracy of 0.002°C? Yikes!
How do you calibrate a device to that kind of absolute accuracy?
PK
+10, you hit the nail on the head. The age-old confusion between “precision” and “accuracy”. The fact that they can read a precise figure of 15.632 C on one buoy does not mean that it is necessarily the same temperature as the equally precise reading of 15.632 C on a different buoy. You need to know how accurate they are, that is, how well calibrated against a known standard – or, more importantly when calculating a trend (rather than a spot figure), how well calibrated they are against each other.
Also important is the temporal stability of that reading. Is a reading of 15.632 C in June 2012 the same temperature as a reading from the same buoy of 15.632 C in 2014?
Having said all that, statistical sampling to calculate a trend is all about the number of readings before and after. Once you get over 2000 data points (2000 before and 2000 after), errors in a discernible trend decline dramatically. Individual buoys may not be particularly accurate, or even very precise, but if the average of 2000 readings after is 0.2C higher than 2000 readings before, then the error bars on that 0.2C become vanishingly small.
Caution: This assumes that there is not a systematic problem with the sensors that means that their readings all rise by the same amount in a given time even if the temperature has not changed. I think the chances of that being the case are pretty remote!
It also assumes that the buoys are reasonably randomly distributed and that they cover the same area of ocean before and after. It does not really matter if, for instance, they do not sample the southern ocean at all; you just have to caveat your observation.
What is usually missing is the caveats. So, for example, if there is a calculated trend of 0.2C a decade excluding the southern ocean then you just have to say “excluding the southern ocean” – something climate scientists rarely do! They prefer to use questionable extrapolation techniques to fill the gaps.
Example
I use a Leica Disto laser distance measurement device in my job. It is “precise” to 1 mm in 50 metres. The marketing blurb says it is “accurate” to 1mm in 50 as well, although I have no way of testing that independently. Let us say I am measuring two walls roughly 10 metres apart. However, my hand is not that steady, the wall I am holding it against may be bumpy and I do not always hold it perfectly level or target exactly the same spot on the other wall. One measurement is never enough. I usually repeat two or three times and take the average because individual readings will often be up to 5mm out over 10 metres. I also know that if I took 2000 readings as precisely as I could on one one day and 2000 readings of the same spots on the walls the next, the chances of the average of each day’s measurements being more than 1mm different would be almost zero. I also know that if I took the same readings 2 years apart and the average of the second set were 5mm larger then I could be pretty certain that one of the walls was falling over! This is the case even though my “error bars” on an individual reading are plus or minus 5mm.
Upshot
The Argo float temperature system takes tens of thousands of readings every year. The trend is probably very accurate (repeat – the trend, not the absolute temperature!) and I can believe that the error bars on the trend might be +/- 0.002 C for the reasons stated above. It probably has nothing to do with the accuracy or precision of the individual temperature sensors at all, just the sheer number of samples taken. I think this is probably either a failure in communication or a failure in understanding – probably a bit of each. I would certainly trust the temperature trend from the Argo floats rather more than I would from our surface stations!
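A sketch of TLM’s before/after argument with invented numbers (individual readings deliberately made poor):

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed: individual readings good only to sd = 0.5 degC, 2000 readings
# before and 2000 after, with a true shift of 0.2 degC between the sets.
before = rng.normal(15.0, 0.5, size=2000)
after = rng.normal(15.2, 0.5, size=2000)

diff = after.mean() - before.mean()
# Standard error of a difference of two means: sqrt(s1^2/n1 + s2^2/n2).
se = np.sqrt(before.var(ddof=1)/2000 + after.var(ddof=1)/2000)
print(f"shift = {diff:.3f} +/- {se:.3f} degC")   # ~0.200 +/- 0.016

# The error bar on the *difference of means* is ~30x smaller than the error
# on one reading -- but only if the errors are independent, which is exactly
# the sensor-drift caveat raised above.
```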
Good explanation TLM. However, you say
“… I can believe that the error bars on the trend might be +/- 0.002 C for the reasons stated above.”
A trend is a rate, correct? So the units on a trend variable should be deg C/[unit of time] e.g. deg C / year, and not just deg C. Minor point, but stating error in the proper units would go a long way towards avoiding the obviously erroneous claims of +/-0.002 deg C error for Argo temperatures.
“Caution: This assumes that there is not a systematic problem with the sensors that means that their readings all rise by the same amount in a given time even if the temperature has not changed. I think the chances of that being the case are pretty remote!”
should actually be:
“Caution: This assumes that there is not a systematic problem with the sensors that means that the readings of a majority of them change by some amount in the same direction in a given time even if the temperature has not changed. I think the chances of that being the case are quite high!”
It is called “sensor drift”
An accuracy of 0.002°C does seem quite unbelievable to me also, and yet that is what is claimed by the company (Seabird) that manufactures many of the Argo probes. One such example is the SBE 41CP CTD. If you open this link http://www.seabird.com/sbe41-argo-ctd and click on the specifications page, the initial accuracy is given as ±0.002°C. They don’t state whether this is typical or maximum.
I would be very interested to know (a) what equipment they use to calibrate to this accuracy and (b) what method of temperature measurement they use. If they do use thermistors, as mentioned in the article above, then in my experience the best ‘interchangeable’ types have an accuracy of 0.1°C. It is possible, I suppose, that Seabird could individually calibrate each thermistor in a controlled environment. However, even if the spec of the thermistor were 0.002°C, this does not take into account other errors. For example, the thermistor must be powered, often by a constant current source, and the corresponding voltage then measured by an ADC. The current source and ADC also need to be calibrated and their errors included. Other factors such as temperature and temporal drift and variation of power supply can affect the reading. I note also that the probes ascend and take measurements. How long do they wait at a certain depth before taking a reading (i.e. what is the settling time of the float)? A hypothetical error budget is sketched after the link below.
There are many other questions about the float data quality. This paper http://onlinelibrary.wiley.com/doi/10.1002/rog.20022/full does a good job of addressing many of the issues.
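To illustrate the measurement-chain point, here is a purely hypothetical error budget combined in quadrature (root-sum-square), the standard treatment for uncorrelated uncertainties; none of these figures come from Seabird:

```python
import math

# Invented contributions for a thermistor channel, in millidegrees C.
budget_mdegC = {
    "thermistor calibration": 2.0,
    "current-source stability": 1.0,
    "ADC quantisation and gain": 1.5,
    "self-heating / settling time": 1.0,
}

# Root-sum-square combination of independent error sources.
combined = math.sqrt(sum(v**2 for v in budget_mdegC.values()))
print(f"combined uncertainty ~ {combined:.1f} mdegC")   # ~2.8 mdegC

# Even if the thermistor alone met a 2 mdegC spec, the rest of the chain
# pushes the channel's overall uncertainty above it.
```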
TTY
I don’t argue that there may be some drift in individual sensors in individual buoys, but there would need to be roughly the same magnitude of drift – and in the same direction – for all of the sensors in order for there to be a systematic error of this magnitude.
Is that really possible? Surely if drift does occur it would be random? If it were so predictably uniform then the manufacturer is bound to know and could easily incorporate adjustment mechanisms into the read-out.
I would be gobsmacked if the operators and manufacturers of the buoys have not tested for such inherent and uniform errors, or if they could occur without the operators’ knowledge. Or are you suggesting that they deliberately ignore the errors because the results suit their agenda?
I am sceptical of some of the claims of the scientists but I am not a cynic. Argo is a genuinely and massively impressive engineering and monitoring project – a real achievement that should be celebrated. The reality, or otherwise, of AGW will only become apparent with long term studies of this type. More satellites, more and better monitoring all round. Ignoring or casting doubt on this kind of project will just leave a space of ignorance for politicians and media pundits to fill with alarm and/or denial.
Scepticism, a thirst for knowledge and curiosity are all part of the scientific process and the enemy of the alarmists and sky dragon slayers of this world.
This is so very incorrect that it is difficult to know how to begin. Or rather, it might be correct but is very probably not terribly useful.
Take a look at (for example) HADCRUT4. See how it jigs up and jigs down? Those jigs are the result of doing precisely what you suggest leads to meaningful before and after comparisons. It samples far more than 2000 inputs for each output. Each and every output is different from the one before, usually right about at the bounds of the claimed statistical accuracy. Are these changes representative of a discernible trend?
Don’t be absurd! Of course not. They are noise.
In order to make any sort of statement about the significance of a linear trend in a timeseries you have to do far more work than this, and the actual statistical accuracy/significance of your claim is probably an order of magnitude smaller than you might think it would be if you do the analysis carelessly. It also depends on an assumption literally unprovable from the data itself — that there exists a linear trend in the data to be statistically extracted in the first place in the only context that matters — one that is extrapolable as a prediction of the future. That is what “trend” analysis is all about — it isn’t just fitting a linear curve to some data series, it is fitting the linear curve to the data series because it has some meaning.
See my remarks elsewhere in this thread about reading Briggs’ articles on the dangers of fitting linear trends to timeseries data. On the one hand, one can, as you say, make statements like the following: The HADCRUT4 temperature in June of whatever was warmer/cooler than it was in June of whatever else, with a difference larger than the claimed error bar. That is a statement of fact. But it does not suffice to establish this as a linear trend, as if you could fit a line between these two points and it would be a good predictor of all of the data in between, or any of the data preceding or following the two points.
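rgb’s warning about fitting lines to autocorrelated series can be demonstrated in a few lines; a sketch using pure random walks, which contain no trend by construction:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Fit OLS lines to 1000 random walks and count "significant" slopes.
hits = 0
for _ in range(1000):
    walk = np.cumsum(rng.normal(0.0, 1.0, size=120))  # zero-drift walk
    if stats.linregress(np.arange(120), walk).pvalue < 0.05:
        hits += 1

print(hits / 1000)   # far above the nominal 0.05 (typically well over 0.8)

# OLS assumes independent residuals; a random walk violates that badly, so
# naive trend "significance" in such series is close to meaningless.
```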
Sigh.
One day, if everybody studies this stuff, we can actually progress to where we start talking about relaxation rates and autocorrelation and fluctuation-dissipation. This is still well shy of where we can talk about them in a chaotic nonlinear problem, but at least we can start to understand what they are in linear stochastic problems, things like Langevin equations with delta correlated noise.
rgb
rgbatduke, your comments are so misleading I don’t know where to begin. All I will say is that of course all the wiggles are noise; that is why we all do moving averages, linear regression, polynomials and all the other analysis of time series. These are not “fitting linear trends” – who said a trend had to be linear? Who said it only has to be between two points?
Maybe “trend” is the wrong word. Perhaps “track” would be a better one. All you can ever get from the data is how it has changed in the past. Seeing the signal through the noise is what it is all about. Of course this is no predictor of the future, only the stupid and ignorant would extrapolate a linear trend, or any trend for that matter and expect it to be right.
How the data changes helps us to understand how the climate changes over time. All science ever does is make a “hypothesis” (model) and test that hypothesis with real world data. If the data does not fit or prove the model then the model is wrong.
How can we ever know whether the model is right or wrong if we don’t collect the data? Argo is the best data we have on sub-surface ocean temperatures. I would argue that it is very high quality data and well worth collecting and can tell us a huge amount about how the ocean and atmosphere are linked.
How would you measure sub-surface ocean temperatures? How much would it cost?
An excellent question. If you care to look at the answer:
http://wmbriggs.com/blog/?p=5107
http://wmbriggs.com/blog/?p=5172
etc. Look, neither I nor Briggs are suggesting that linear trend analysis is without any merit at all. However, it is a tool that is so heavily abused as to be instantly suspect whenever it is invoked to “prove” something in climate science. Read these articles, seriously. One doesn’t need to fit anything to see that datapoint x is higher than datapoint y in a timeseries. The only reason to fit anything is to make claims of some sort of connection between the form being fit and the presumed underlying hidden, unknown dynamics that gave rise to the time series. And the only way we should give such a fit the slightest credence is if the fit proves to have predictive skill outside of the range being fit, and then only with the direst of conditionals appended to the conclusion, especially when one is dealing with a process that is manifestly non-stationary.
Look if you like, you can go debate the issue with HenryP, who lurks on these pages, and who has built a dataset for “global temperature” that can be fit with a quadratic function with a negative slope to a few zillion digits and hence will swear up down and sideways that he can confidently predict global cooling. Me, personally I’m planning to read the entrails of the next chicken I see, as it is as likely to give the right answer.
I am on my second company founded upon predictive modeling. I have been doing it professionally and as a hobby (odd as that might sound) for almost 20 years at this point. I can cite you chapter and verse on training sets, trial sets, pattern recognition, regression linear and nonlinear, in hard problems where people will pay you a lot of money to get it right. Even a little bit right, just beating random chance. So by all means, fit anything you want and assert that the result is “significant”. Bet your own personal fortune on it.
Just don’t bet mine.
rgb
TLM, I used the example of my height to point out that the actual SD (from more precise measurements) can be greater than that calculated when the measurements are rounded to a larger increment, or when the instrument records to a larger increment (the resolution). I don’t know which is the case for the Argo buoys.
While that was one measurement and not a trend, there is a similar problem when the trend is 0.02°C/decade calculated from data measured over less than a decade. If it was over a century then there are still other problems.
Don’t you know it is IEEE inspired?
That’s Imagine, Estimate, Exaggerate, Extrapolate
The GASTA anomaly due to the late 20th century warming is something like 0.8°C. If global warming has caused less than 1 degree of warming in the air, how much would it cause in the oceans? My guess: a lot less than 0.02°C. In other words, undetectable with current precision.
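A back-of-envelope consistent with that guess, using standard textbook heat capacities rather than figures from this thread:

```python
# Rough global heat capacities (standard reference values).
ocean_mass = 1.4e21    # kg of seawater
ocean_cp = 3990.0      # J/(kg K)
atmos_mass = 5.1e18    # kg of air
atmos_cp = 1004.0      # J/(kg K) at constant pressure

ratio = (ocean_mass * ocean_cp) / (atmos_mass * atmos_cp)
print(f"ocean/atmosphere heat capacity ratio ~ {ratio:.0f}")   # ~1100

# The energy that warms the whole atmosphere by 0.8 degC would warm the
# full ocean by only ~0.8/1100 ~ 0.0007 degC if spread evenly.
print(f"equivalent whole-ocean warming ~ {0.8 / ratio:.4f} degC")
```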
You know, NIST and CIPM have very nice guidelines on measurement uncertainty and error propagation. I’ve seen very few (there are some) discussions of the full meaning of those. With a few examples, it would probably become very easy to show that the estimated trend lines in most things in climatology (as in other fields) are not meaningful, even if statistically significant (at least at the lazy statistics level).
Also, saying that an instrument is good to whatever is generally not acceptable without the right calibration traceability documentation. We don’t know the linearity bounds on the sensors, the hysteresis, or many other things. I don’t know. Since doing some instrumentation and data quality work, I have a hard time trusting any but bulk assessments of things or direct measurements with lots of documentation. When people start doing math with things, they very, very rarely move their uncertainties forward.
At least with ocean sensors the linearity range only needs to extend from about -3C to about 33C. I doubt there will be any ocean readings outside that range anywhere in the world, but if someone knows of hotter or colder water somewhere I am ready to be educated by the data.
The Argo network is the best climate instrument system we have.
The coverage and number of floats in use is large enough to give us reliable numbers over a period of a few years.
And the network is showing us that the oceans are not warming as fast as the theory predicts; the rate is, in fact, very low, indicating global warming will not be a problem.
Compare the Argo system to the NCDC data-hiders-adjusters and one should understand how lucky climate science is to have Argo.
I like your comment better than the top post, truthfully. This:
is what we should be talking about, not measurement error.
Take the recent Josh Ellis paper, from the abstract:
What gives? Last time I checked the claims were for a heck of a lot greater an energy imbalance than 0.64 Wm-2. Wasn’t the claim 1.7 Wm-2 (net) in AR5?
” Wasn’t the claim 1.7 Wm-2 (net) in AR5?”
I think you’re thinking of the 1.7 Wm-2 extra forcing since 1750 due to CO2 (2.3 for all anthro). That’s different. That forcing causes warming, which increases outward IR. The difference (0.64) represents the amount of forcing that is not balanced by temperature rise since 1750.
The extra human induced forcing since 1750 is +2.3 W/m2 (not 1.7 W/m2) and, on top of that, there should have been an additional +1.8 W/m2 in feedback forcing apparent given the temperature increase.
And from 2005 to 2013, the net energy imbalance is 0.535 W/m2, not 0.64 W/m2.
If you run all those numbers backwards and forwards, all one gets is 1.5C per doubling of CO2 as equilibrium sensitivity.
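For the arithmetic behind that last claim, the simple energy-budget estimate of equilibrium sensitivity is often written ECS ≈ F2x·ΔT/(ΔF − ΔQ). A sketch plugging in the figures quoted in this exchange, with F2x = 3.7 W/m2 assumed as the usual value for a CO2 doubling:

```python
F2x = 3.7    # W/m2 per CO2 doubling (commonly used value, an assumption here)
dT = 0.8     # degC of warming since ~1750, as quoted above
dF = 2.3     # W/m2 total anthropogenic forcing (AR5 figure cited above)

for dQ in (0.535, 0.64):   # the two imbalance figures quoted in the thread
    ecs = F2x * dT / (dF - dQ)
    print(f"dQ = {dQ}: ECS ~ {ecs:.2f} degC per doubling")
# ~1.68 and ~1.78 degC: the same ballpark as the ~1.5 degC claimed above.
```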
Thanks Nick. I don’t really understand, but maybe the fault is mine. What do you mean by forcing that is not balanced by temperature rise?
Hey, btw, I’m not arguing anything yet. I’m honestly asking what this is all about. Hopefully Nick can ‘esplain it to me.
Nick, are you saying that the 0.8C warming we’ve seen since 1750 partially balances the 1.7 Wm-2 forcing imbalance (because forcings cause temps to rise, driving the system back towards equilibrium and driving the imbalance down towards zero), so that there’s effectively a 0.64 Wm-2 imbalance right now?
I have no quarrel with this, and if I understand this properly it’s consistent with Hansen et al..
As I understand it, we’re still missing some heat. I’ll get back to this after I dig a bit, it’s been awhile. 🙂
This article is misleading. If you have N statistically independent observations, each with a standard deviation of sigma, the standard deviation of the mean will be sigma divided by the square root of N. If you know a little programming, it can be quite instructive to try this yourself. I just generated one million quasi-random numbers between 0 and 1 (with 16 decimals) and computed the mean as 0.4996796058936124. I then rounded each of the 1 million values to only ONE decimal. The mean of the highly inaccurate one-decimal numbers was 0.4996611, so correct to 4 decimals…
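That experiment is a few lines to reproduce (a sketch; a different seed gives slightly different digits):

```python
import numpy as np

rng = np.random.default_rng(6)

x = rng.random(1_000_000)      # uniform on [0, 1), ~16 significant decimals
rounded = np.round(x, 1)       # keep only ONE decimal

print(x.mean(), rounded.mean())   # both ~0.5000, agreeing to ~4 decimals

# This works because the rounding errors here are random and symmetric
# across many increments. It is the mirror image of the height example at
# the top of the thread: when the data's spread is *narrower* than one
# increment, the rounding error is shared, and averaging cannot remove it.
```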
N statistically independent observations that are as identical as possible.
That means using the same equipment with the same calibration to make the same measurement N times.
For example, using a laser rangefinder to measure the distance between the two same points N times.
However, systematic errors such as changes in air density between the two points and instrumental drift will eventually dominate.
In the case of the ARGO buoys one is averaging measurements made at different changing positions by different buoys. One is not making N repeated identical measurements. Even with simple averaging, the systematic errors will dominate: calibration, instrument drift, etc.
I’ll make it a little clearer. If the resolution of the instrument is 0.1°C, i.e., the temperature is calculated from a mean of voltage measurements over a few seconds and then recorded to the nearest 0.1°C, you can’t treat the recorded value as if it were exact.
The North Pole is warming up. Air temperature to blame, women and children assisted first.
Deja Vu! As I have said before, as a young student of civil/structural engineering at Kingston-Upon-Thames College in the very early 80s, the Swiss-made Wilt (pronounced Vilt) T2 Total Survey Station was the state-of-the-art gizmo in the surveying world. There were only two in existence in the UK at the time; the then Greater London Council had one, the college had the other, at £20,000 a pop back then! The advertising claims were that this piece of kit measured angles to within 1 second of arc accuracy. Its then Japanese equivalent also made the same claim. However, the Japanese version had its lenses ground to within 1 second of arc, but the Swiss T2 had its lenses ground to only 3 seconds of arc! It’s all in the tolerances, folks! 😉
“Wilt” or “Wild”? I recall vaguely some Wild instruments but please don’t embarrass me by asking what.
Ian M
It’s “Wild”. They have been making precision photogrammetric and surveying equipment for a long time. There is a (true) story about an extremely senior officer inspecting No 1. P.R.U. (Photographic Reconnaissance Unit) of the RAF during WW 2, and seeing a number of folders labelled “Wild plans” remarked: “You wouldn’t like to show me those, I suppose”
Even in 1938 the Arctic wasn’t so cold.
http://trove.nla.gov.au/ndp/de…
“NORTH POLE NOT SO VERY COLD.
Weather observations broadcast from the North Pole since June 17 show that this is certainly not the coldest spot in the world. In fact, some of the temperatures recorded are only a few degrees below those taken during the early morning in parts of England.
The Soviet scientists at the Pole have traced a warm current of water that flows far below the polar ice, making the area warmer than had ever been thought. There are much colder spots in the interior of Greenland, where the temperature falls far below zero in the middle of the Arctic summer, while North Central Siberia has had temperatures of 95 degrees below zero. Large expanses of open sea in the vicinity of the North Pole are mainly responsible for keeping up the temperature, but they also cause long periods of dense fog and showers of ice-laden rain.”
H/T real science.
this link will work.
http://trove.nla.gov.au/ndp/del/article/146229328?searchTerm=north%20pole%20warmer&searchLimits=dateFrom=1938-01-01
I apologize if this observation has already been made … and I certainly have noticed comments around this issue … but the statement “reported the trend in the global ocean temperatures as 0.022 ± 0.002 deg C /decade” SHOULD go to the statistical issues of population variability and sampling error. For the proponent of the statement to be talking about measurement accuracy is about as disingenuous as you can get.
Back to 1923:
http://trove.nla.gov.au/ndp/del/article/87532315?searchTerm=arctic%20thaw&searchLimits=
ARCTIC ICE THAWING.
AN ISLAND DISCOVERED.
LONDON, September 1.
The Norwegian explorer, Captain Wiktor Arensen, who has just returned from the Arctic, claims to have discovered an island twelve miles in circumference near Franz Joseph Island, in latitude 80.40. It was previously hidden by an iceberg between 70 and 80 ft. high, which has melted. This shows the exceptional nature of the recent thawing in the Arctic Ocean.