From the University of California – San Diego Scripps Institution, you gotta love the subheading in this PR. I didn’t know robots could travel back in time. Gosh, I learn something new every day. Apparently 300 soundings done by the HMS Challenger between 1872 and 1876 are enough to establish a “new global baseline” for the last century. The temperature rise is pretty much what we’d expect from LIA recovery. Though, for an outfit that hauls Titanic Chicken of the Sea debate ducker James Cameron to the bottom of the deepest ocean trench, I’d take this PR with a grain of sea salt, especially since it provides no supporting graphics or documentation. I’d sure like to see how the distribution of those 300 soundings looks. – Anthony
New comparison of ocean temperatures reveals rise over the last century
Ocean robots used in Scripps-led study that traces ocean warming to late 19th century
A new study contrasting ocean temperature readings of the 1870s with temperatures of the modern seas reveals an upward trend of global ocean warming spanning at least 100 years.
The research led by Scripps Institution of Oceanography at UC San Diego physical oceanographer Dean Roemmich shows a 0.33-degree Celsius (0.59-degree Fahrenheit) average increase in the upper portions of the ocean to 700 meters (2,300 feet) depth. The increase was largest at the ocean surface, 0.59-degree Celsius (1.1-degree Fahrenheit), decreasing to 0.12-degree Celsius (0.22-degree Fahrenheit) at 900 meters (2,950 feet) depth.
The report is the first global comparison of temperature between the historic voyage of HMS Challenger (1872-1876) and modern data obtained by ocean-probing robots now continuously reporting temperatures via the global Argo program. Scientists have previously determined that nearly 90 percent of the excess heat added to Earth’s climate system since the 1960s has been stored in the oceans. The new study, published in the April 1 advance online edition of Nature Climate Change and coauthored by John Gould of the United Kingdom-based National Oceanography Centre and John Gilson of Scripps Oceanography, pushes the ocean warming trend back much earlier.
“The significance of the study is not only that we see a temperature difference that indicates warming on a global scale, but that the magnitude of the temperature change since the 1870s is twice that observed over the past 50 years,” said Roemmich, co-chairman of the International Argo Steering Team. “This implies that the time scale for the warming of the ocean is not just the last 50 years but at least the last 100 years.”
Although the Challenger data set covers only some 300 temperature soundings (measurements from the sea surface down to the deep ocean) around the world, the information sets a baseline for temperature change in the world’s oceans, which are now sampled continuously through Argo’s unprecedented global coverage. Nearly 3,500 free-drifting profiling Argo floats each collect a temperature profile every 10 days.
Roemmich believes the new findings, a piece of a larger puzzle of understanding the earth’s climate, help scientists to understand the longer record of sea-level rise, because the expansion of seawater due to warming is a significant contributor to rising sea level. Moreover, the 100-year timescale of ocean warming implies that the Earth’s climate system as a whole has been gaining heat for at least that long.
Launched in 2000, the Argo program collects more than 100,000 temperature-salinity profiles per year across the world’s oceans. To date, more than 1,000 research papers have been published using Argo’s data set.
The Nature Climate Change study was supported by U.S. Argo through NOAA.
Parsing the above statement, one glaring issue jumps out at me. The amount of warming in the first 50 years is the same as the warming seen in the last 50 years. Anthropogenic global warming is generally accepted as having occurred only in the last 50 years. So, if the amount of warming in the first 50 years is the same as in the last 50 years, doesn’t that suggest that all of the warming in the last 50 years is a continuation of the natural processes that drove the warming in the first 50 years? This study seems to inadvertently imply that there’s no link between ocean warming and anthropogenic global warming.
Didn’t we just go through a time where the Experts said that only modern data is useful? Okay, the Thames freezing over is “anecdotal”, but if the Challenger, now, were to say they were frozen in the Thames harbour, would that count?
It isn’t useful data unless it is. I love tautologies.
It all apparently depends on whose Ox is GOREd.
Some 300 ocean temperature readings from HMS Challenger in the 1870s are used by the Warmist Team as proof positive for their case that the oceans are sequestering heat. Yet CERES satellite readings over 30 years show the greenhouse “blanket” IR is really pretty threadbare and leaky. Heat is readily escaping and equalling the HF solar input, after certainly getting bounced around a few times by CO2 and H2O molecules, before penetrating the atmosphere and radiating back into space.
Georg Beck’s historical research revealed 93,000 laboratory records of atmospheric CO2 concentrations in the 18th and 19th centuries showing that average CO2 levels were around 340 ppm, with volcanic-driven peaks from Tambora and Krakatoa as high as 441 ppm. That is just as former IPCC leader and ice core expert Dr. Jaworowski warned.
He warned that raw CO2 readings from ice cores are not correct unless you include the CO2 clathrate correction, as CO2 rapidly goes into clathrate formation at quite low pressures; and CO2 levels were never 290 ppm, as that is only the clathrate equilibrium value.
The Warmist team read him out for his heresy, and never acknowledged nor applied the clathrate corrections to ice core analyses.
@Steven Mosher
Hi Steve,
nobody of any real science background dislikes data, per se – even bad data can be ‘useful’ with treatment – but it’s the post-collection treatment that is the key to good (or bad) science. As you say, even wide-error-bar data can be OK – so long as that wide error margin is acknowledged and not ‘massaged’ out or statistically treated so as to become meaningless or disappear (e.g. via graphical (mis)representation)!
But, just to stretch a point – how many publications are you aware of – say, ones using old or perhaps less reliable data – that start off with a caveat along the lines of ‘using old, unreliable data, this paper shows xxxx is potentially indicative of yyyy’? The point being that such a statement would of course be truthful, and even then the findings MAY of course be representative – but again, how many publications add at the end ‘of course, this may not be correct and our interpretation may be wrong!’ or even ‘taking the potential error bars into account, the data could actually show the reverse!’
Looking at it more on the AGW topic – how many global temp graphs have you seen showing the error bars above and below the pretty squiggly line over the last few years? All the major datasets seem to be reproduced without the actual error margins shown – as if the actual ‘line’ has become accepted!
Steven Mosher, data is indeed data provided it is fully understood and does not contain unknown adjustments. In the case of ocean temperature data, a well-defined heat and material balance is required to make any sense of the scattered readings. Ocean currents play havoc with readings taken in isolation. I once gathered a large amount of Gulf of Mexico (GOM) temperature data to develop a temperature profile versus depth as part of a gas hydrate study. It soon became apparent that temperatures could vary considerably over relatively short distances. A study of currents soon revealed the source of the variations.
I think this new data could be useful provided the analysis was done without prejudgment as to result. Based on my observations of the players in the global warming game, this may be mission impossible.
Once data is properly adjusted for known modifiers, one must sit back and look at the overall picture to see if the data makes sense based on a heat and material balance. There could be unknown modifiers having some influence. In the case of the oceans, we know full well that Earth undergoes ice ages lasting long periods, even by geologic time standards. These ice age times are long enough to cool the ocean depths. When the globe warms once again, it takes time to warm the oceans back to standard conditions. Thus one could easily find warming going on in the oceans, and cooling as well.
By claiming that heat is hidden in the oceans over a very short period requires a very well calculated heat and material balance. Perhaps Trenberth and the other Non-Deniers have done this, but if so I haven’t seen it.
When I look at Hadcrut ocean temperature data, it moves up and down by too large amounts too quickly for me to accept that it is plausible. Perhaps you or Willis might take a quick look at one or two of the large yearly changes and see if they look realistic? I recall the Non-Deniers adjusting ocean temperature data from sailing ships due to a “bucket” factor. Was this new data adjusted in the same manner?
The “Report on the scientific results of the voyage of H.M.S. Challenger during the years 1873-76 under the command of Captain George S. Nares and the late Captain Frank Tourle Thomson” is here:
http://archive.org/stream/p1reportonscient01chaluoft/p1reportonscient01chaluoft_djvu.txt
The Challenger had Miller-Casella protected thermometers, which had a number of problems according to a summary on page 37 of “Understanding the Oceans: A Century of Ocean Exploration” by Margaret Deacon and C. P. Summerhayes:
http://books.google.co.uk/books?ei=ixh6T8ivBYLv8AOSrKXRDQ&id=F5agn3NSzEoC&dq=deep+sea+reversing+thermometers+problems&ots=QaLxOCH07o&q=%27protected%27+thermometers#v=snippet&q='protected'%20thermometers&f=false
Just a further comment on ‘baseline’ – IMHO, there is really no such thing on geological timescales, or even glacial-event timescales! Taking a nominal 15,000-year interglacial period, a 150-year span of data is only 1% of that timescale – please can someone explain how any ‘baseline’ can be drawn from a minuscule 1% of the recorded time period? For flip’s sake, this is what really gets my goat – the scale keeps getting forgotten… and as for the 30-year magic ‘climate data period’ … well, nuff said!
I’m with Gav Jackson. The Challenger data, sparse as it is, indicates significant warming (about the same as for the last 50 years, I gather) yet over a time frame where anthropogenic effects are known to be minor compared to today. Ergo a very large natural component is evidenced; ergo the AGW case is… wait for it… alarmist.
Another problem with this data. According to this page:
http://aquarium.ucsd.edu/Education/Learning_Resources/Challenger/science3b.php
Although the principle of the reversing thermometer had been described by scientists as early as 1845, this Negretti and Zambra thermometer was the first thermometer to accurately determine the temperature at great depth and return to the surface and retain its readings. As such, it is considered the first modern reversing thermometer. This was the reversing thermometer sent to scientists and used on the Challenger expedition.
This thermometer was housed in a helical mounting mechanism meant to cause the reversing thermometer to flip at the required depth. A helical screw would measure the depth on the way down and release the mounting at the desired depth.
The Challenger data collection appears to precede the concept of ‘thermometric depth’ which is designed to cope with the problem that your thermometer is never at the depth you think it is: it won’t be deeper but it is almost always shallower because of currents and/or drift of the vessel doing the cast.
A later technique uses two thermometers: a protected one and one that is subjected to the ambient pressure. The one that is squeezed by the pressure at depth records a higher temperature. The difference between the two thermometers allows an accurate depth to be calculated – the ‘thermometric depth’.
So apart from the fact that HMS Challenger would have been lucky to know where they were to better than 1 nautical mile, they would also have over-estimated the depth of the temperature readings they were taking, because they used distance along the wire as a measure of depth, not the real depth.
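For anyone curious how the paired-thermometer trick works in practice, here is a minimal sketch (in Python) of the ‘thermometric depth’ calculation described above. The pressure coefficient of roughly 0.01 °C per metre is just the commonly quoted rule of thumb (about 1 °C per 100 m); a real instrument’s coefficient comes from its calibration, so treat every number here as illustrative only.

```python
# Minimal sketch of the 'thermometric depth' idea: the unprotected
# thermometer is squeezed by pressure and reads high; the size of that
# offset tells you how deep the pair actually reversed.
def thermometric_depth(t_unprotected, t_protected, q=0.01):
    """Estimate reversal depth in metres from the unprotected and
    protected readings (deg C). q is the assumed pressure coefficient,
    ~0.01 deg C per metre (rule of thumb, not a calibrated value)."""
    return (t_unprotected - t_protected) / q

# Example: 1,000 m of wire paid out, but the protected thermometer reads
# 3.2 deg C while the unprotected one reads 12.4 deg C. The ~9.2 deg C
# difference implies the pair reversed near 920 m, i.e. shallower than
# the wire length suggests (drift, currents, wire angle).
if __name__ == "__main__":
    depth = thermometric_depth(12.4, 3.2)
    print(f"estimated thermometric depth: {depth:.0f} m")
```

The point is that the depth comes out of the thermometer pair itself, not out of how much wire was paid out – which is exactly the correction the Challenger-era casts could not make.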
Way too little data. And such warming as may be inferred is consistent with the retreat of the Little Ice Age, which continues.
Give it up, Mosher.
Your subtle attacks on others are irksome.
Your incessant thread manipulation is tiresome.
A sidelight to the HMS Challenger expedition was the investigation into “Bathybius” (Bathybius haeckelii). For example, see
http://www.huntsearch.gla.ac.uk/cgi-bin/foxweb/huntsearch/DetailedResults.fwx?collection=all&searchTerm=111845&mdaCode=GLAHM&browseMode=on
Bathybius had been discovered earlier in seawater samples and was considered to be another form of life by the settled science of the day. Bathybius turned out to be something closer to the recent polywater. For example, see:
http://home.comcast.net/~earlwajenberg/onlinestorage/PhilosophyMuseum.html
For Bathybius, scroll down to 5,751 Year Old Fossils. For polywater, scroll down to The Science Warehouse. For the curious, this museum also has samples of caloric and phlogiston.
Seriously, investigations into old thermometer readings should include investigations into the old thermometers that were used to get the readings. See for example:
http://aquarium.ucsd.edu/Education/Learning_Resources/Challenger/science3a.php
for a description of the reversing thermometer developed to capture the temperature at some depth without being affected by different temperatures while the thermometer was retrieved. The apparent temperature is also affected by how much the hydrostatic pressure squeezes the thermometer column. Modern oceanographic thermometers are paired, one exposed to pressure and one protected from pressure. The difference in apparent temperature provides an estimate of pressure, and therefore depth.
Gotta plug this link again; a far more substantive collection, but of different data.
Changes in total wind speed over the last 150 years, recorded by the merchant fleets of many nations and collected by a UK institution. It shows continually growing average wind speed, also indicating warming. Steven Mosher is of course right; we would expect that as we came out of the LIA.
http://www.seafriends.org.nz/issues/global/fletcher.htm
That idea, that knowledge, LIKE ALL KNOWLEDGE, comes with error bars. Sometimes tight, sometimes wide. It’s knowledge nevertheless.
Well sure, but in this case a data set of 300 readings worldwide over 4 years shares an error bar with guessing.
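To hang a rough number on that: even if the 300 Challenger readings were treated as independent draws from the whole ocean (they are not – they follow one ship’s track), the naive standard error of the mean is σ/√n, and any spatial clustering only makes the real uncertainty larger. A quick sketch, with the spread σ picked purely for illustration:

```python
import math

sigma = 5.0   # assumed spread of upper-ocean temperatures (deg C); illustrative only
cases = {
    "Challenger (300 soundings)": 300,
    "Argo (100,000 profiles/yr)": 100_000,   # figure quoted in the press release
}

for label, n in cases.items():
    # naive standard error of the mean, assuming independent samples (a best case)
    print(f"{label}: SE of mean ~ {sigma / math.sqrt(n):.3f} deg C")
```

With σ = 5 °C the Challenger-era error bar comes out at roughly 0.3 °C – about the size of the 0.33 °C signal being claimed – and that is before the one-ship sampling track is even considered.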
Mosher claims that all data is good. I suppose as a simple value judgment all data has some value, to somebody, perhaps.
The real question for science, however, is whether the data “is good” (e.g., relevant, applicable, and of adequate quality and reproducibility) for its intended purpose. That’s another matter entirely.
In this case, interestingly, the alarmists seem to put forward this “handful” of data of very questionable relevance and applicability (compared with modern ocean temperature measurements of orders-of-magnitude greater frequency, substantially greater accuracy and reproducibility, and vastly greater global coverage) to support their alarmist claims, when even a superficial reading shows that these same old data in fact undermine those claims, since the bulk of the increase occurred prior to meaningful man-made contributions of CO2 and its purported driving of global warming/climate change/climate uncertainty/whatever they are calling it these days.
I expect there are a lot of “open bars” at these “climate” conferences.
Steven Mosher says:
April 2, 2012 at 12:11 pm
data is a good thing.
xxxxxxxxxxxxxxxxxxxxxxxxx
By this logic…we should be adding surface stations – not drastically cutting them.
Data is just that – data. It should sit as a historical measurement of that time – in that place.
Data used to defend a hypothesis demands that we not accept it just for being data. At that point it is called evidence, and with scientific evidence not just the data but also the conclusions drawn from that data are open to question.
Frankly, I’m a bit surprised at your simplistic answer above.
Data is data, neither a good thing nor a bad thing in and of itself. Like the two-edged blade of a sword or a knife, it can cut both ways without respect for the good, the bad, or the ugly. Instead, it reflects only the motives, merits, and demerits of the person or people wielding it and tugging at it in various directions for good and ill purposes.
The HMS Challenger soundings have about as much representative coverage of the subject as releasing 300 rawinsonde balloons into the troposphere and the stratosphere to take flawed single-point observations of air temperature while taking no account of the meanderings of the jet streams yet to be discovered, much less measured by comprehensive observations over a period of sixty or more years. In other words, they are interesting peeks into water temperatures that could and did vary greatly only meters apart, but they are hardly more useful than the blind man exploring the elephant’s leg to determine the shape and nature of the elephant’s anterior versus posterior.
The data is not good or bad, but the people wielding that data are.
Anthony
You want some really good soundings, and of an area of extreme interest today? Try the recordings that Nansen’s Fram expedition to the North Pole vicinity made in the 1890s. I read Nansen’s book “Farthest North,” and in it they took readings all the time, all the way to the bottom of the Arctic. One thing that I thought particularly interesting is that they were able to detect the temperature variance of the Gulf Stream above 83 degrees north latitude.
I will be impressed when that data is integrated and compared with today’s data.
(yes this is a challenge to those who might want to do this)
Miss Grundy says:
April 2, 2012 at 12:38 pm
Yes, but the margins of this blog page are too small to contain them …
On a more serious note, Steven is right, and that’s exactly my complaint about this press release. They claim a warming of 0.33°C without stating any error bars …
ktwop says:
April 2, 2012 at 1:06 pm
Could be … or not. Does anyone have a copy of the actual study? ktwop, did they adjust for autocorrelation? And how large were the error bars?
I have a few problems. The first is the length of the voyage. Wikipedia says there were 263 temperature observations, taken over 1,281 days on the voyage, almost exactly three and a half years. That makes about 6 soundings a month … for the entire ocean …
Second, no place was sampled more than once. So not only do we have only a few samples, but no location was sampled over a full calendar year.
Third, huge areas of the ocean were left unsampled in any form.
So how accurate is their calculation of the global oceanic temperatures? Let me suggest that claiming an error less than a full degree would be hubris of the first order …
w.
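A back-of-the-envelope check of that sampling density, taking the figures in the comment (263 observations, 1,281 days) at face value:

```python
soundings = 263        # temperature observations (Wikipedia figure quoted above)
voyage_days = 1281     # length of the voyage in days

years = voyage_days / 365.25
per_month = soundings / (voyage_days / 30.44)

print(f"voyage length  : {years:.2f} years")    # ~3.5 years
print(f"soundings/month: {per_month:.1f}")      # ~6 per month, for the entire ocean
```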
What a truly remarkable sentence. What is “LIA recovery”? Who is the “we” who expects this recovery? Do you really believe that there is some equilibrium climate state to which the climate “recovers” after a perturbation? If so, what physical mechanisms determine what this equilibrium temperature is? What, indeed, is it? Are we there yet? If not, when will we get there? How will you know when we get there? What is the rise that you would expect from this so-called “LIA recovery”?
There is data
There is information
There is knowledge
They are not the same.
The Australian Bureau of Met conveniently threw out all the great old pre-1910 temp data [which contained many temp records] on the basis that these data were collected with less than state-of-the-art tech.
Scripps Inst use arguably even less pertinent data as a “new global baseline”.
But of course it must depend on which direction this old data points, as to whether it can be used or not.
This study assumes that temperatures measured on the expedition were accurate. That may or may not be the case … but for the sake of argument let us assume that they were accurate.
Now here is a whole different problem. As a geologist I am quite familiar with geochemistry and the use of standards, splits and duplicates. Whether one is attempting to sample, say, a glacial soil in a representative fashion, or to measure temperature in a representative fashion, the issues are pretty much the same. The quality control on the 1800s readings would have been poor to non-existent. However, let us assume again that there were no quality control issues – for example, that the thermometers were recalibrated after rolling around the deck in a gale, etc. Assume that they lowered two thermometers to the same depth, under identical circumstances, at the same time, to get the inter-sampling error. Then of course they would have needed to lower the thermometer, say, 15–20 times to get an idea of the analytical error (the scatter when the measurement is repeated multiple times on the same sample). It is starting to become a very ugly picture. But it gets far worse. Did they recognize the presence of warm or cold currents in the ocean, and did they ensure that the sampling was ‘representative’ of the ocean as a whole? Huh!
Let’s return to the soil analogy. I send my student out and tell him/her to come back with 300 samples at the end of the summer from an area of 10,000 square km, all to be taken at a uniform depth of 0.5 m. I forget to mention that there are 10 different glacial tills in the area, distributed in a non-uniform manner. So out they go, and back they come with the 300 samples. (We will assume that they didn’t go to 3 gravel pits, fill all the bags, and then go to the beach for the rest of the summer … it does happen.)
The next year I send out 50 highly trained crews who are well versed in identifying the different tills, and they sample on a 1 km grid, identifying the till type for each sample. They would come back with 10,000 samples, hopefully not from a gravel pit or two. They of course would have standards, splits and duplicates in every batch of 20.
Now, which survey do you think would come up with the best ‘average composition’ of each of the 10 tills? Remember, different tills would often be physically manifest in different types of terrain, vegetation, etc. Do you think the students walking around would slog through the thick bush and mosquito-infested swamps? … I know for $10/hr I wouldn’t.
Does anyone actually think that one could compare the average soil composition from the 300 samples to the average composition of the different tills in the detailed survey? And then, if they didn’t match, would one look for the reason and conclude that the 10,000 samples weren’t the same because of anthropogenic contamination? (A toy numerical sketch of this follows below.)
Let’s just think about that.
Does anyone else smell Confirmation Bias?
Of all the data available from the Admiralty and the US Navy, why did they pick this data set?
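To put toy numbers on the soil-sampling analogy above, here is a minimal Monte Carlo sketch comparing a sparse, terrain-biased 300-sample survey against a dense, evenly stratified 10,000-sample survey over ten tills with different compositions. Every number in it is invented purely for illustration.

```python
import random

random.seed(1)
till_means = [random.uniform(10, 60) for _ in range(10)]   # 'true' mean value per till
true_mean = sum(till_means) / len(till_means)

def survey_mean(n_samples, till_weights):
    """Average of n_samples drawn with the given (possibly biased) till weights."""
    total = 0.0
    for _ in range(n_samples):
        till = random.choices(range(10), weights=till_weights)[0]
        total += random.gauss(till_means[till], 5.0)        # within-till scatter
    return total / n_samples

biased_weights = [5, 5, 5, 1, 1, 1, 1, 1, 1, 1]   # the student avoids the swampy tills
even_weights = [1] * 10                            # trained crews cover every till

print(f"true area mean           : {true_mean:.1f}")
print(f"300 biased samples       : {survey_mean(300, biased_weights):.1f}")
print(f"10,000 stratified samples: {survey_mean(10_000, even_weights):.1f}")
```

The particular numbers don’t matter; the point is that a small survey with skewed coverage can sit well away from the true mean, so a raw comparison between the two surveys mostly measures the difference in coverage, not a change in the ground.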
Don,
You can’t get your head around the idea that, if the world warmed during the MWP and then cooled during the LIA back down to temps similar to those before the MWP, maybe the next cycle would be a warm one?