Guest Post by Willis Eschenbach
As many folks know, I’m a fan of good clear detailed data. I’ve been eyeing the buoy data from the National Data Buoy Center (NDBC) for a while. This is the data collected by a large number of buoys moored offshore all around the coast of the US. I like it because it is unaffected by location changes, time of observation, or Urban Heat Island effect, so there’s no need to “adjust” it. However, I haven’t had the patience to download and process it, because my preliminary investigation a while back revealed that there are a number of problems with the dataset. Here’s a photo of the nearest buoy to where I live. I’ve often seen it when I’ve been commercial fishing off the coast here from Bodega Bay or San Francisco … but that’s another story.
And here’s the location of the buoy, it’s the large yellow diamond at the upper left:
The problems with the Bodega Bay buoy dataset, in no particular order, are as follows (an R sketch of the cleanup comes after the list):
• One file for each year.
• Duplicated lines in a number of the years.
• The number of variables changes in the middle of the dataset, in the middle of a year, adding a column to the record.
• Time units change from hours to hours and minutes in the middle of the dataset, adding another column to the record.
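To give a flavour of what dealing with those problems looks like, here is a stripped-down R sketch of the kind of cleanup involved. It is not the full code from the zip file linked below; the URL pattern, the header handling, and the 99/999 missing-value flags are my reading of the NDBC file description, so check them against the NDBC pages before relying on it.

# Minimal sketch of the cleanup, not the full code in the zip file below.
# Assumed: one gzipped file per year at the URL pattern shown, space-
# delimited, one or two header lines, air temperature in column ATMP,
# and 99 / 999 used as missing-value flags.
read_buoy_year <- function(year, station = "46013",
    base = "https://www.ndbc.noaa.gov/data/historical/stdmet/") {
  f   <- paste0(base, station, "h", year, ".txt.gz")
  txt <- tryCatch(readLines(gzcon(url(f))), error = function(e) NULL)
  if (is.null(txt)) return(NULL)                     # no file for this year
  hdr   <- strsplit(trimws(sub("^#", "", txt[1])), "\\s+")[[1]]
  body  <- unique(txt[!grepl("^#|^\\s*Y", txt)])     # drop headers and duplicated lines
  ncols <- max(count.fields(textConnection(body)))   # columns can change mid-year
  d <- read.table(text = body, fill = TRUE,
                  col.names = paste0("V", seq_len(ncols)))
  k <- min(ncol(d), length(hdr))
  names(d)[seq_len(k)] <- hdr[seq_len(k)]
  if (!"mm" %in% names(d)) d$mm <- 0                 # early years: hours only, no minutes
  if ("YY" %in% names(d))  names(d)[names(d) == "YY"] <- "YYYY"
  d$YYYY <- ifelse(d$YYYY < 100, d$YYYY + 1900, d$YYYY)
  d$ATMP[!is.na(d$ATMP) & d$ATMP > 98] <- NA         # 99 / 999 = missing
  d[, c("YYYY", "MM", "DD", "hh", "mm", "ATMP")]
}
bodega <- do.call(rbind, lapply(1981:2014, read_buoy_year))   # adjust years as needed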
But as the I Ching says, “Perseverance furthers.” I’ve finally been able to beat my way through all of the garbage and I’ve gotten a clean time series of the air temperatures at the Bodega Bay Buoy … here’s that record:
Must be some of that global warming I’ve been hearing about …
Note that there are several gaps in the data
Year     1986  1987  1988  1992  1997  1998  2002  2003  2011
Months      7     1     2     2     8     2     1     1     4
Now, after writing all of that, and putting it up in draft form and almost ready to hit the “Publish” button … I got to wondering if the Berkeley Earth folks used the buoy data. So I took a look, and to my surprise, they have data from no less than 145 of these buoys, including the Bodega Bay buoy … here is the Berkeley Earth Surface Temperature dataset for the Bodega Bay buoy:
Now, there are some oddities about this record … first, although it is superficially quite similar to my analysis, a closer look reveals a variety of differences. Could be my error, wouldn’t be the first time … or perhaps they didn’t do as diligent a job as I did of removing duplicates and such. I don’t know the answer.
Next, they list a number of monthly results as being “Quality Control Fail” … I fear I don’t understand that, for a couple of reasons. First, the underlying dataset is not monthly data, or even daily data. It is hourly data … so while the odd hourly record might be wrong, how could a whole month fail quality control? And second, the data is already checked and quality controlled by the NDBC. So what is the basis for the Berkeley Earth claim of multiple failures of quality control on a monthly basis?
Moving on, below is what they say is the appropriate way to adjust the data … let me start by saying, whaa?!? Why on earth would they think that this data needs adjusting? I can find no indication that there has been any change in how the observations are taken, or the like. I see no conceivable reason to adjust it … but nooo, here’s their brilliant plan:
As you can see, once they “adjust” the station for their so-called “Estimated Station Mean Bias”, instead of a gradual cooling, there’s no trend in the data at all … shocking, I know.
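Just to show how that kind of step-style "adjustment" can make a trend vanish, here's a toy R example using made-up numbers (my own illustration, not the Berkeley Earth code): take a gently cooling series, cut it at two arbitrary breakpoints, and re-centre each segment on the overall mean. Most of the trend disappears into the step offsets.

set.seed(42)
t <- 1:360                                     # thirty years of monthly data
x <- 15 - 0.002 * t + rnorm(360, sd = 0.3)     # gentle cooling plus noise
coef(lm(x ~ t))["t"] * 120                     # raw trend, about -0.24 C per decade

seg <- cut(t, breaks = c(0, 120, 240, 360), labels = FALSE)
adj <- x - ave(x, seg) + mean(x)               # re-centre each segment on the overall mean
coef(lm(adj ~ t))["t"] * 120                   # "adjusted" trend, about a tenth of the raw one

The "step functions" carry the trend away; all that's left to plot is three nearly flat pieces.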
One other oddity. There is a gap in their records in 1986-7, as well as in 2011 (see above), but they didn’t indicate a “record gap” (green triangle) as they did elsewhere … why not?
To me, all of this indicates a real problem with the Berkeley Earth computer program used to “adjust” the buoy data … which I assume is the same program used to “adjust” the land stations. Perhaps one of the Berkeley Earth folks would be kind enough to explain all of this …
w.
AS ALWAYS: If you disagree with someone, please QUOTE THE EXACT WORDS YOU DISAGREE WITH. That way, we can all understand your objection.
R DATA AND CODE: In a zipped file here. I’ve provided the data as an R “save” file. The code contains the lines to download the individual data files, but they’re remarked out since I’ve provided the cleaned-up data in R format.
BODEGA BAY BUOY NDBC DATA: The main page for the Bodega Bay buoy, station number 46013, is here. See the “Historical Data” link at the bottom for the data.
NDBC DATA DESCRIPTION: The NDBC description file is here.

Perhaps Berkeley just fed the buoy data to their standard program, which treated it as if it were land data?
It’s data Jim, but not as we know it.
From Stewie:
That’s my assumption, Jim, but it’s just a guess.
Thanks,
w.
So Willis, I noted that in YOUR graph it is specifically labeled “AIR” Temperature.
Seems to me that buoys are conveniently sitting on a lot of water. How convenient; so one could also measure the WATER temperature at say -1 metre, and record both water and air temps.
When John Christy et al. did this for about 20 years of dual data from some oceanic buoys, they found that (a) they aren’t the same; and (b) they aren’t correlated.
Why would they be, when air current speeds might be up to two orders of magnitude faster than water currents, so that the air and water move relative to each other?
So why no water temps for Bodega Buoy ??
But you seem to have found another number mine to dig.
george, there are water temps along with a lot of other data in the dataset … but as the title says, this is the “first cut” at the data.

In any case, this is a full-service website, so here you go:
All the best,
w.
Good work in exposing yet more shameless behavior by CACA scamsters.
Thanks!
Thanx Willis.
And yes I did notice that you warned us this was the first cut. I like the water scatter plot. It looks like it is heading off to the higher air temps at the same water temp, like a comet tail.
Here’s the NDBC platform accuracy page. Notice for marine air temperatures, the stated resolution is (+/-)0.1 C while the stated accuracy is (+/-)1.0 C. That’s for every single listed type of deployed buoy.
Those accuracies are not to be seen as statistical standard deviations. They do not represent normal distributions of random error (i.e., precision) and do not average away with repeated observations.
Honestly, it is so very refreshing to see such a forthright official declaration of temperature sensor accuracy in a climate science context. All honor to the NDBC staff, scientists, engineers, technicians and everyone else.
Notice, by the way, that the SST limit of accuracy is (+/-)1 C, as well.
But anyway, let’s track that accuracy through the preparation of an air temperature anomaly.
For creating an anomaly, the average temperature over a standard 30-year interval is taken, say 1951-1980 if you’re GISS. The accuracy of that 30-year mean is (+/-)sigma = sqrt[ sum-of-(errors^2) / (N-1) ] = ~(+/-)1 C, where N is the number of temperature measurements entering the average.
To find the anomaly, monthly or annual means are subtracted from the 30-year average. The accuracy of a monthly or annual mean is calculated the same way as the 30-year mean, and it works out to pretty much the same uncertainty: ~(+/-)1 C.
The annual temperature anomaly = [(annual mean) minus (30-year average)]. The accuracy of the anomaly is (+/-)sigma = sqrt[(annual accuracy)^2 + (30-year accuracy)^2] = sqrt[1^2 + 1^2] = sqrt[2] = ~(+/-)1.4 C.
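In R terms, for anyone who wants to check the arithmetic (a sketch of the reasoning above, with the (+/-)1 C figure taken from the NDBC accuracy page):

sensor_accuracy <- 1.0                    # NDBC stated accuracy for marine air temp, +/- deg C
baseline_err <- sensor_accuracy           # systematic error does not average away,
annual_err   <- sensor_accuracy           # so the 30-year and annual means inherit it
anomaly_err  <- sqrt(annual_err^2 + baseline_err^2)   # combine in quadrature
anomaly_err                               # ~1.41 C at 1 sigma
2 * anomaly_err                           # ~2.83 C at the 95% level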
There it is, the uncertainty in any buoy marine air temperature anomaly is (+/-)1.4 C. That should be the width of the error bars around every BEST, GISS, and UEA buoy marine air temperature anomaly.
Anyone see those error bars in the BEST representation?
In any field of physical science except climate science, error bars like that are standard. Such error bars put boundaries on what can be said because they indicate what is actually known.
The (+/-)1.4 C is the 1-sigma uncertainty. Those error bars would obscure the entire average trend, leaving nothing to be said at all. At the 95% confidence interval, (+/-)2.8 C, pretty much the entire set of temperature anomalies would be submerged.
So it goes in climate science. The occulted is far more important than the displayed.
A simple, clean, precise illustration of the general point RGBatDuke makes. Well done.
For the equivalent for sea level rise determined by Jason-2 (or by tide gauges) see essay Pseudo Precision in Blowing Smoke.
Excellent. Good question – I have wondered why there are never any error bars. Climate Science or Art of Noise?
Quite a stable temp at that spot – always a wear-your-coat day.
What purpose does that buoy serve?
What depth is it moored at?
Is that yellow plate at the upper right corner a wind vane?
Does it measure water and air temp?
The buoy collects a variety of environmental data, including wind speed, wind gusts, water and air temps, wave height, peak wave height, wave direction and a host of others. See the data page I linked to above for details.
And per the trusty GPS on my iPhone, it’s moored at a depth of 385 feet, call it 120 metres.
w.
Berkeley was correcting for UHI; after all, it’s only about 70 miles away, and that type of heat also travels upwind.
/sarc
If this assumption is correct then the sarc tag may not be required.
They’re compensating for changes in elevation – the oceans are rising!
This once again helps illustrate the question: “Without raw data adjustments, homogenisation, manipulation or torturing, would there be any man made global warming/climate change?”
The answer is:”Maybe a little, but not enough to be of any concern, and certainly no reason for a massive switch from cheap reliable energy sources to expensive unreliable ones, as advocated by so many western leaders today.”
Anyhow, well spotted, but I doubt the Berkeley Earth people will deign to provide you with an answer to your question on ‘Estimated Station Mean Bias’, and if they do, it will not make much sense.
I agree, human interference with data probably serves a plan.
Adjustments are ok as long as good reasons to do them are given, some kind of verification is made afterwards, and caveats are added for those who want to look at global means afterwards.
I just cracked up as I read this article. They fiddled the data and hey presto!!!, the contrarian trend disappears.
Sub prime science in its basic form. Now you see some reality – now you don’t.
LOL I just love it.
Wait til the msm catch on. ( Don’t hold your breath – it could damage your health)
PS
I am recommending the widespread use of the term “sub prime science” in reference to the sort of schlock we all are aware of. I think it captures the essence of CAGW perfectly in terms that everybody understands at a fairly visceral level.
It is not as deliberately vicious a term say as “denier” but nonetheless uses the same associative connotation that naturally resonates.
Can I recommend it to the blogosphere?
There is already a term: Cargo Cult Science
Just trying a bit of subtlety Alan.
“Cargo cult” is probably accurate, certainly when referring to the hard core ‘team ‘ and the boondoggle beneficiaries but it has overtones of utter ignorance that are comparable to “denier”.
A softer term may actually penetrate the mindset of the msm which is probably the best way to demolish the CAGW freakshow.
“As many folks know, I’m a fan of good clear detailed data.”
My mind was just blown.. I TOO love clear detailed data!!! I didn’t know there were others out there.. Wild.
I also love my fruit fresh, as opposed to a bit overripe.
Further, I like to be comfortable. I tend to prefer garments that offer up a fair bit of protection from the elements, without sacrificing much in the way of skin feel. But hey. I like to stay new age know what I mean?
I suggest IGPOCC science.
(Get it?)
“Sub Prime” will be understood by everyone
As “Denier” is identified with the Nazis “Sub Prime” will be identified with dodgy bankers.
Bankers who shamelessly manipulate the data – LIBOR, Forex etc. Quite appropriate!
Yes! ‘Subprime science’
Subprime science! Perfecto!
Marc Morano has been calling it sub-prime science for quite a while now. It’s a good line. Great minds think alike, etc.
Perfect terminology, I’m adopting for personal use. Thanks.
As I understand it sub-prime refers to loans made at a rate below the prime interest rate. That seems like a good thing to me as a borrower. Sub-par makes more sense but both seem so weak. “Denier”, as a charge, has weight and an ignominious history so, I would suggest something with more impact to counter it.
Sub-prime means that the borrower isn’t a very good risk and the loans are at a higher interest rate.
Of course that was before the QE’s and Fed interventions.
How about “fluffer”?
What about ‘Fraudster’?
Clear, and to the point.
Punchy – but may involve visits to local courts (of course, completely incorruptible and uninfluenced), so not recommended. A number of folk – Menn – may be a touch litigious . . . .
Maybe SOPS – Sub Optimal Pseudo Science?
Auto
YES! Imagine the MSM press release “Here is another example of Sub Prime climate science from “fill in the name”” LOL
Can I recommend it to the blogosphere?
Certainly Ursus, I will be pleased to insert it into one of my inflammatory comments on The Guardian.
Thanks for the positive feedback. It just sounded so right I had to put it out there and if Marc Morano is onto it then I think we have lift off!
Ursus Augustus. Thank you for that idea. ‘Sub prime science’ fits like a glove.
Why not be more explicit and call it “sub-standard science”?
I had no idea they are temperature monitor buoys. I almost smacked into one once, blazing home at 30 knots after dark on my Sunseeker. I’d accidentally wandered to the edge of the channel, because I was a little tipsy after an evening in a pub in Cowes 🙂
You’re in Aus right? You can be arrested for DUI on a boat.
Same here.
Hey I was totally sober after I saw a buoy leap out of the dark and almost hit the boat 🙂
Well, that explains one of the gaps in the data! Thanks. Berkeley Earth software just used that to adjust the data. 🙂
“As you can see, once they “adjust” the station for their so-called “Estimated Station Mean Bias”, instead of a gradual cooling, there’s no trend in the data at all … shocking, I know.”
It’s a sophisticated statistical tool named “slice-and-dice”. When you get a trend line you just KNOW is wrong, you may slice-and-dice it into disconnected, horizontal lines with a note to ignore the “step functions”. If you insist on going further and REVERSING the bad trend, you may hold the graph up to a strong light and view it from the backside. My stock broker (a real whiz-bang) employs this technique when we review my portfolio performance.
The mind boggles. As w says so correctly, “Why on earth would they think that this data needs adjusting?”. The regional average temperature is the (weighted) average of all the temperature measurements in the region. This buoy’s temperature is one of those temperature measurements. So the regional average temperature is derived from this buoy’s temperature. It is surely utterly illogical to adjust data using something that is derived from that data. To my mind, mathematically and scientifically you just can’t do that.
Exactly, Mike. It’s ‘adjusting’, from the general dataset to one particular buoy.
It’s a Peer Reviewed Recursive Adjustment of Temperatures or PRRAT where the “suspect” data is averaged into a set of other “pristine” stations within 1200km, which have the “correct” trend based on the current models. This procedure is repeated until the “problem” data no longer shows the troubling anomaly. It falls under “best practices” as all good climate “scientists” know that positive feedback is how climate works.
Prat Reviewed Science.
The software may interpret the gap/break in the data, and the associated decrease in temperature after each gap, as a station move to a new location. Do we know that the buoy has not moved? Although in the ocean, if it moved only a small distance it should not matter much, unless it gets moved in or out of a current with a different temperature.
That whole Pacific coastal water is Darned Cold. All the time. It has not warmed up, based on my Mark I toes… I’ve swum in it on and off for a few decades. It’s awful cold all the time. Remember the arguing over folks not being able to survive a swim out of Alcatraz? That’s the warmer water in the S.F. Bay… It may well have cooled in the last decade. About a decade ago I stopped swimming in it. (Florida water is much nicer 😉
So they could move that thing a few miles and it would read the same. Just don’t drag it to shore.
Good work, Willis.
” Why on earth would they think that this data needs adjusting?”
####
Maybe Steven Mosher can explain that. He works a lot with their data, so he should be familiar with their procedures.
Steven is their defence counsel. He’ll be along soon.
Some questions spring to mind:
Are these buoys dotted about the globe – Eric says above that he almost smacked into one in the Solent (coast of England)?
Where is the raw data for ALL of them?
Has anyone compiled it into a chart?
Gaps don’t affect a trend. Brrrr – getting colder.
As some further steps in the analysis, I would suggest that you try taking a look at the N. Pacific Ocean (PDO) temperatures and the local land station temperatures.
The ocean air temperature usually stays close to the ocean surface temperature. The ocean waters come down the coast from the Gulf of Alaska as part of the California current and the N. Pacific Gyre. The local buoy temperatures should follow the PDO/N. Pacific Ocean temperatures.
Over land, the minimum (nighttime) temperatures should follow the ocean temperatures as the ocean air moves inland. The daytime maximum temperatures indicate the solar heating produced by the convective mixing of the warm surface air at the station thermometer level.
The climate all the way across California is determined by the Pacific Ocean temperatures.
http://scienceandpublicpolicy.org/originals/pacific_decadal.html
Joe D’Aleo showed that the US average temperatures are mainly a combination of the AMO and PDO.
http://www.intellicast.com/Community/Content.aspx?a=127
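If you want to try that with the cleaned-up record, something like the following would do for a first look (a sketch only; buoy_monthly and pdo are hypothetical data frames of monthly buoy means and a monthly PDO index from whichever source you prefer):

# buoy_monthly: Year, Month, ATMP  -- monthly mean air temps from the buoy
# pdo:          Year, Month, PDO   -- a monthly PDO index (JISAO, NOAA, ...)
m <- merge(buoy_monthly, pdo, by = c("Year", "Month"))

# strip the seasonal cycle so we compare anomalies, not the annual wave
m$anom <- ave(m$ATMP, m$Month, FUN = function(z) z - mean(z, na.rm = TRUE))

cor(m$anom, m$PDO, use = "complete.obs")   # contemporaneous correlation
ccf(m$anom, m$PDO, na.action = na.pass)    # and with leads and lags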
Over land, the minimum (nighttime) temperatures should follow the ocean temperatures as the ocean air moves inland.
Doesn’t the land air head out to sea at night? Or are you referring to the general west to east flow?
Maybe they’re homogenizing the data with nearby (land) stations, a time-tested and honored practice that teases previously hidden warming from the raw data.
The problems with the Bodega Bay buoy dataset, in no particular order
Willis, your description of the buoy dataset sounds more like a log book than a dataset. Thoughts of “HARRY_READ_ME.txt” fill my mind along with my own experiences in both the financial and Network Management industries. It is difficult to take an alleged ‘climate crisis’ seriously when the basic data is collected, manipulated and archived in such a haphazard manner.
In the financial community we would back up after each run and daily send mag tape off to be put on microfiche, which would be diligently verified, as we were constantly subjected to serious outside audit.
The idea that this alleged ‘climate crisis’ is still based on Keystone Cop investigative competence after several decades tells us the actual importance of this ‘climate crisis’.
Geez… Why the heck didn’t all the US government climate-related agencies, more than ten years ago, outsource all data collection, archiving and data distribution to IBM or some other entity that knows what data management is all about!
Without a doubt, it’s because anyone who actually knows how to manage raw data would not find what they wanted to be found. That’s a large number of people; I’m under the impression the climate “scientists” had to search far and wide to find people so ignorant of data management and proper math and statistics that they could find warming in the last 3 decades.
They do not have to be ignorant of data management. They can be crooks and liars as well.
They explain it further down on the same page:
Whether these quality requirements can be justified is another question.
/Jan
Thanks, Jan. I saw that, but I didn’t understand it. I guessed that their 73 “regional climatology outliers” are what they have circled as “quality control fail” … but I didn’t see how that makes any sense. I mean, under what kind of rubric can you toss out data that appears in all other regards to be perfectly valid simply because it is different from its neighbors? They have thrown out a fifth of the months of data, simply because it differs from the other datasets in the area? Doesn’t seem believable … but then, I’m not sure that believability was one of their key values …
And given that this is a buoy … what are the other datasets in the region to which they are comparing this dataset?
Anyhow, I appreciate your highlighting that, Jan. I read it but I didn’t put it in the post because it didn’t make sense. For one thing, I couldn’t figure out the meaning of the repeated “daily or monthly values”, when the data is neither monthly nor daily, but hourly … add it to the many mysteries.
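If I had to guess at what a "regional climatology outlier" test looks like, it would be something along these lines; this is pure guesswork on my part, not the Berkeley Earth code, and the object names are made up:

# buoy:      vector of monthly air-temperature anomalies for 46013
# neighbors: matrix of monthly anomalies for the comparison stations
#            (rows = months, columns = stations) -- hypothetical inputs
regional <- rowMeans(neighbors, na.rm = TRUE)
resid    <- buoy - regional
z        <- (resid - mean(resid, na.rm = TRUE)) / sd(resid, na.rm = TRUE)
qc_fail  <- !is.na(z) & abs(z) > 3   # flag months more than 3 sigma from the neighbors
sum(qc_fail)                         # how many months get tossed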
w.
The explanation is that they use the same wording for all stations, and most stations seem to have daily or monthly values. They should have written ”hourly, daily or monthly values” to cover all situations.
Concerning the other datasets in the region, I think they use the same methodology everywhere. They say:
http://berkeleyearth.org/about-data-set
I suppose they have to go quite far to find the nearest 21 stations to this one, though.
/Jan
If the nearest neighbours are all or mostly land side, it wouldn’t be so surprising that most of the buoy data looks like outliers. I dunno what they do but you can’t interpolate over a discontinuity like a shoreline.
“changes in latitude and altitude”, “miscoded as Fahrenheit when reporting Celsius”?? I don’t see how this could apply to a fixed buoy where the temps are recorded electronically. Must be a boilerplate text.
Where is Mosher when you need him?
I believe Jimmy said “Changes in Latitude Changes in Attitude”
Not sure how Best messed that up.
It’s a good song.
Jan and Willis, see Bill Illis below and my comment thereto. In the case of station 166900, they go at least 1300 km horizontally and 2300 meters vertically.
I suppose the health department, using climate sub-prime scientific methods, will need to adjust the temp of the walk-in freezer to bring it more in line with the kitchen and dining room temps. “Sorry, your freezer’s adjusted and homogenized temperature doesn’t meet code requirements. We’re shutting you down.”
That may seem a stretch, but that’s what the climate pseudo-scientists do when they make comparisons and adjustments across boundaries or differing environments.
‘To me, all of this indicates a real problem with the Berkeley Earth computer program’
One person’s problem is another’s opportunity. Now work out how such ‘adjustments’ give an ‘opportunity’, and to whom, and you have got there.
Cui bono?
Auto
Paul in Sweden makes a very good point.
I’ve just completed and published a study of wind speeds (and thus power generation) for the UK and northern Europe spanning the years 2005-13:
http://www.adamsmith.org/wp-content/uploads/2014/10/Assessment7.pdf
Where did I get the data for this? The UK MET Office? (No, they charge – a great deal!) I got it from aviation METAR reports – I just happen to know about these because I had a PPL.
By the way, the results for wind generation variability and intermittency make alarming reading.
Alarming, as…?
Capell
Tell us something we don’t know. Here on the South Coast of England yesterday it was blowing a gale; today there is no wind. I would surmise that yesterday any wind turbines would have had to shut down, and today there’s no wind to power them. Variability and intermittency in action (or non-action), which becomes ever more serious as wind forms an increasingly large percentage of the UK’s energy.
tonyb
Ghost and tonyb
Dipping into the summary of my paper:
For the UK we have:
The model reveals that power output has the following pattern over a year:
i Power exceeds 90 % of available power for only 17 hours
ii Power exceeds 80 % of available power for 163 hours
iii Power is below 20 % of available power for 3,448 hours (20 weeks)
iv Power is below 10 % of available power for 1,519 hours (9 weeks)
Although it is claimed that the wind is always blowing somewhere in the UK, the model reveals this ‘guaranteed’ output is only sufficient to generate something under 2 % of nominal output. The most common power output of this 10 GW model wind fleet is approximately 800 MW. The probability that the wind fleet will produce full output is vanishingly small.
Long gaps in significant wind production occur in all seasons. Each winter of the study shows prolonged spells of low wind generation which will have to be covered by either significant energy storage (equivalent to building at least 15 plants of the size of Dinorwig) or maintaining fossil plant as reserve.
And for the European fleet:
Unifying all three fleets by installation of European interconnectors does little or nothing to mitigate the intermittency of these wind fleets. For the combined system, which has an available power output of 48.8 GW:
• Power exceeds 90 % of available power for 4 hours per annum,
• Power exceeds 80 % of available power for 65 hours per annum,
• Power is below 20 % of available power for 4,596 hours (27 weeks) per annum,
• Power is below 10 % of available power for 2,164 hours (13 weeks) per annum.
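For what it's worth, those exceedance figures are just read off the duration curve of the modelled hourly output; given an hourly capacity-factor series the sums are trivial (a sketch with a made-up input name, not the code behind the paper):

# cf: hourly capacity factor of the modelled wind fleet (0..1), one value
#     per hour of the year -- a hypothetical object name
length(cf)            # ~8760 hours in a year
sum(cf > 0.9)         # hours with output above 90 % of available power
sum(cf > 0.8)         # hours above 80 %
sum(cf < 0.2)         # hours below 20 %
sum(cf < 0.1)         # hours below 10 %
min(cf) * 48800       # "guaranteed" MW from the combined 48.8 GW fleet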
I would be very interested in your power generation study. The pdf link didn’t work for me – I have crappy internet (satellite and in Vermont with snow). Is the link good? Sorry too, that I’ve been away from a machine for the day so I’m late into this.
Our runaway governor is obsessed with renewables – even though Vermont is 98% carbon emission clean – I realize that doesn’t matter except that constructing whirligigs produces 250–750 tons of CO2 from concrete/rebar bases through steel posts, and we have legal statutes prohibiting generating CO2, and I’ll happily use their stupidity against them.
I know I’m tilting here. I care about your study, but I know they won’t. I still prefer knowing.
Thanks,
Jim
It’s curious that Berkeley Earth included Marine Air Temperature data from buoys in a land surface air temperature dataset. I’ll second the “to my surprise”.
Bob, they may think that near shore is ‘close enough’. That gets into the interesting RUTI project issues Frank Lansner in Europe has been exploring. Similar to your ocean/ENSO investigations in several ways. Highly recommended reading for all.
Both of my eyeballs say that most of the “Quality Control Fails” in the Berkeley Earth Surface Temperature dataset for the Bodega Bay buoy are below the trend line – fancy that, who would have thought that?
Dear Willis,
There’s some information about the measurement history of the buoy here:
http://www.ndbc.noaa.gov/data_availability/data_avail.php?station=46013
In the left hand column is some information (not a lot sadly) about the buoy itself. The somewhat cryptic notation says something about the type of deployment. 10D, 6N and 3D are, I think, designations for 10m Discus buoy, 6m NOMAD buoy and 3m Discus buoy.
http://www.ndbc.noaa.gov/mooredbuoy.shtml
I don’t know what effect that would have on the air temperature measurements, but this NDBC page suggests that there would have been a change in measurement height associated with the switch from 10m to 6m/3m:
http://www.ndbc.noaa.gov/bht.shtml
GSBP and VEEP are the sensor packages. Again there are some changes there:
http://www.ndbc.noaa.gov/rsa.shtml
Best regards,
John
Many thanks, John. Curiously, they show nothing about missing data after 06/03; however, it’s interesting nonetheless.
I don’t see any evidence that the different buoys made any difference in the temperature readings. The difference is only about 5 metres between the two buoy types, and because of the presence of waves and wind, the lower layers of ocean air are generally pretty well mixed.
Regards, your research is much appreciated.
w.
“Explosive hydrogen gas can accumulate inside the hull of 3-meter-discus buoys.
This dangerous gas is caused by batteries corroding due to water intrusion. While a remedial plan is being developed, mariners are asked to give this, and all other 3-meter-discus buoys, a wide berth. The buoys are 3-meter discus shaped, typically with a yellow hull and a 5-meter tripod mast. Each buoy is identified by the letters “NOAA” and the station identifier number, such as “46050”. Each buoy has a group of (4) flashing 20-second, yellow lights.”
http://www.ndbc.noaa.gov/station_page.php?station=46013
Maybe they adjusted for the hydrogen gas?
/sarc
Anyway, the USCG buoy tenders are responsible for the maintenance, so it could be just that for the missing data (I know that they repainted it back in 2010).
Perhaps the lift from the hydrogen is being interpreted as “Sea Level Rise”………:-P
Clearly, as the data does not reveal Global Warming and, worse than that, shows actual Global Cooling, it absolutely has to be adjusted with the usual algorithms. If this problem continues then we may well see the buoys being sunk by Naval Gunfire. This situation of having such actual data available is completely contrary to the consensus.
Willis,
It could be that you are seeing the Berkeley scalpel in action. Where they detect a discontinuity, they treat the pieces as separate stations. And the marked discontinuities are substantial. Why the other breaks did not invoke the scalpel, I don’t know.
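A crude cartoon of the scalpel idea (my toy sketch, not the actual BEST code): slide along the series, find the largest jump between adjacent segment means, and cut there, treating the two pieces as if they were separate stations.

# x: a vector of monthly temperatures (hypothetical input)
w     <- 24                                   # months either side of a candidate break
ks    <- (w + 1):(length(x) - w)
jump  <- sapply(ks, function(k)
           abs(mean(x[(k - w):(k - 1)], na.rm = TRUE) -
               mean(x[k:(k + w - 1)],   na.rm = TRUE)))
cut_at <- ks[which.max(jump)]                 # biggest apparent discontinuity
seg1   <- x[1:(cut_at - 1)]                   # ... then each piece is handled
seg2   <- x[cut_at:length(x)]                 # as though it were its own station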
Nick Stokes; “…, I don’t know.” WOW Nick! If only other “experts” had the same level of integrity and honesty. I’d buy you a VB (If that is your tipple).
What you are suggesting is that the adjustments are algorithm based. Not human-error-recognized.
More of my Computational Reality instead of Representation Reality.
Doug Proctor
Algorithm-based, like the NASA and other reconstructions that show a record of continual warming since the cold years 100 and 35 years ago?
they treat as separate stations
============
and as a result deliver a misleading result. So many methods sound so good in theory, but fail utterly in practice.
Which makes me wonder why an algorithm is needed at all? Seems a better process would be to pick GOOD stations, not torture ALL stations. It seems self-evident to me, but then you don’t get to use your fancy education, I guess!
Nick, this example by itself demonstrates two things. First, the BEST scalpel technique is inconsistently applied, as you point out. Second, the underlying ‘station move’ assumption can be faulty, as it appears this buoy has been there all along at the same place. Dr. Marohasy was able to prove the same faulty justification for Australian BOM homogenization turning rural station Rutherglen’s flat-to-declining trend into a marked warming post homogenization. For details, follow the footnote hyperlinks to the Rutherglen example in essay When Data Isn’t in Blowing Smoke. As you are from down under, you probably are already aware of this analogous kerfuffle. Perhaps many posting here are not.
Rud, it isn’t a station move assumption. It isn’t any kind of assumption. The assumption would be that the measuring conditions (instruments etc) are the same after the break as before. I think discarding that leads to loss of information. It’s usually true. But discarding is what the scalpel does.
As you’ve observed, I live not so far from Rutherglen. I think BoM’s treatment of that is OK.