Buoy Temperatures, First Cut

Guest Post by Willis Eschenbach

As many folks know, I’m a fan of good, clear, detailed data. I’ve been eyeing the buoy data from the National Data Buoy Center (NDBC) for a while. This is the data collected by a large number of buoys moored offshore all around the coast of the US. I like it because it is unaffected by location changes, time of observation, or the Urban Heat Island effect, so there’s no need to “adjust” it. However, I haven’t had the patience to download and process it, because my preliminary investigation a while back revealed a number of problems with the dataset. Here’s a photo of the buoy nearest to where I live. I’ve often seen it when I’ve been commercial fishing off the coast here out of Bodega Bay or San Francisco … but that’s another story.

[Photo: the Bodega Bay buoy]

And here’s the location of the buoy, it’s the large yellow diamond at the upper left:

[Map: location of the Bodega Bay buoy]

The problems with the Bodega Bay buoy dataset, in no particular order, are:

One file for each year.

Duplicated lines in a number of the years.

The number of variables changes partway through the dataset (in the middle of a year), adding a column to the record.

Time units change from hours to hours and minutes in the middle of the dataset, adding another column to the record.
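For readers who want a feel for what the clean-up involves, here is a minimal Python sketch (the code linked at the end of the post is in R, and the column names below are illustrative rather than the exact NDBC headers). It maps each yearly file’s rows onto a common set of columns, fills in the minute field for the older hours-only files, and drops duplicated observations:

```python
def normalize_rows(header, rows):
    """Map each whitespace-delimited data row to a dict keyed by column
    name, so yearly files with different column sets (e.g. a minute 'mm'
    column added partway through the record) can be concatenated."""
    cols = header.split()
    out = []
    for row in rows:
        rec = dict(zip(cols, row.split()))
        rec.setdefault("mm", "00")  # older files report hours only
        out.append(rec)
    return out

def dedupe(records, keys=("YY", "MM", "DD", "hh", "mm")):
    """Drop duplicated observations, keeping the first occurrence."""
    seen, out = set(), []
    for rec in records:
        k = tuple(rec.get(x) for x in keys)
        if k not in seen:
            seen.add(k)
            out.append(rec)
    return out
```

This is only a sketch of the general approach, not the code used for the figures here.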

But as the I Ching says, “Perseverance furthers.” I’ve finally been able to beat my way through all of the garbage and I’ve gotten a clean time series of the air temperatures at the Bodega Bay Buoy … here’s that record:

[Graph: air temperature at the Bodega Bay buoy]

Must be some of that global warming I’ve been hearing about …

Note that there are several gaps in the data:

Year    1986 1987 1988 1992 1997 1998 2002 2003 2011
Months     7    1    2    2    8    2    1    1    4

Now, after writing all of that, and putting it up in draft form and almost ready to hit the “Publish” button … I got to wondering if the Berkeley Earth folks used the buoy data. So I took a look, and to my surprise, they have data from no less than 145 of these buoys, including the Bodega Bay buoy … here is the Berkeley Earth Surface Temperature dataset for the Bodega Bay buoy:

[Graph: Berkeley Earth raw record for the Bodega Bay buoy]

Now, there are some oddities about this record … first, although it is superficially quite similar to my analysis, a closer look reveals a variety of differences. Could be my error, wouldn’t be the first time … or perhaps they didn’t do as diligent a job as I did of removing duplicates and such. I don’t know the answer.

Next, they list a number of monthly results as being “Quality Control Fail” … I fear I don’t understand that, for a couple of reasons. First, the underlying dataset is not monthly data, or even daily data. It is hourly data … so while the odd hourly record might be wrong, how could a whole month fail quality control? And second, the data is already checked and quality controlled by the NDBC. So what is the basis for the Berkeley Earth claim of multiple failures of quality control on a monthly basis?

Moving on, below is what they say is the appropriate way to adjust the data … let me start by saying, whaa?!? Why on earth would they think that this data needs adjusting? I can find no indication that there has been any change in how the observations are taken, or the like. I see no conceivable reason to adjust it … but nooo, here’s their brilliant plan:

[Graph: Berkeley Earth adjusted record for the Bodega Bay buoy]

As you can see, once they “adjust” the station for their so-called “Estimated Station Mean Bias”, instead of a gradual cooling, there’s no trend in the data at all … shocking, I know.

One other oddity. There is a gap in their records in 1986-7, as well as in 2011 (see above), but they didn’t indicate a “record gap” (green triangle) as they did elsewhere … why not?

To me, all of this indicates a real problem with the Berkeley Earth computer program used to “adjust” the buoy data … which I assume is the same program used to “adjust” the land stations. Perhaps one of the Berkeley Earth folks would be kind enough to explain all of this …


AS ALWAYS: If you disagree with someone, please QUOTE THE EXACT WORDS YOU DISAGREE WITH. That way, we can all understand your objection.

R DATA AND CODE: In a zipped file here. I’ve provided the data as an R “save” file. The code contains the lines to download the individual data files, but they’re remarked out since I’ve provided the cleaned-up data in R format.

BODEGA BAY BUOY NDBC DATA: The main page for the Bodega Bay buoy, station number 46013, is here. See the “Historical Data” link at the bottom for the data.

NDBC DATA DESCRIPTION: The NDBC description file is here.


November 28, 2014 11:28 pm

Perhaps Berkeley just fed the buoy data to their standard program, which treated it as if it were land data?

Reply to  jim
November 28, 2014 11:32 pm

It’s data Jim, but not as we know it.

Reply to  Truthseeker
November 29, 2014 9:20 am

From Stewie:

george e. smith
Reply to  Willis Eschenbach
November 29, 2014 12:16 pm

So Willis, I noted in YOUR graph, it is specifically labeled “AIR” Temperature.
Seems to me that buoys are conveniently sitting on a lot of water. How convenient; so one could also measure the WATER temperature at, say, -1 metre, and record both water and air temps.
When John Christy et al. did this for about 20 years of dual data from some oceanic buoys, they found that (a) they aren’t the same; and (b) they aren’t correlated.
Why would they be, when air current speeds might be up to two orders of magnitude faster than water currents, so they move relative to each other?
So why no water temps for Bodega Buoy ??
But you seem to have found another number mine to dig.

Reply to  Willis Eschenbach
November 29, 2014 2:24 pm

Good work in exposing yet more shameless behavior by CACA scamsters.

george e. smith
Reply to  Willis Eschenbach
November 30, 2014 7:41 pm

Thanx Willis.
And yes I did notice that you warned us this was the first cut. I like the water scatter plot. It looks like it is heading off to the higher air temps at the same water temp, like a comet tail.

Pat Frank
Reply to  jim
November 29, 2014 11:52 am

Here’s the NDBC platform accuracy page. Notice for marine air temperatures, the stated resolution is (+/-)0.1 C while the stated accuracy is (+/-)1.0 C. That’s for every single listed type of deployed buoy.
Those accuracies are not to be seen as statistical standard deviations. They do not represent normal distributions of random error (i.e., precision) and do not average away with repeated observations.
Honestly, it is so very refreshing to see such a forthright official declaration of temperature sensor accuracy in a climate science context. All honor to the NDBC staff, scientists, engineers, technicians and everyone else.
Notice, by the way, that the SST limit of accuracy is (+/-)1 C, as well.
But anyway, let’s track that accuracy through the preparation of an air temperature anomaly.
For creating an anomaly, the average temperature over a standard 30-year interval is taken, say 1951-1980 if you’re GISS. The average accuracy of the standard mean temperature is (+/-)sigma = sqrt[sum(error_i^2)/(N-1)] = ~(+/-)1 C, where N is the number of temperature measurements entering the average.
To find the anomaly, monthly or annual means are subtracted from the 30-year average. The accuracy of a monthly or annual mean is calculated the same way as the 30-year mean, and it works out to pretty much the same uncertainty: ~(+/-)1 C.
The annual temperature anomaly = [(annual mean) minus (30-year average)]. The accuracy of the anomaly is (+/-)sigma = sqrt[(annual accuracy)^2 + (30-year-standard accuracy)^2] = sqrt[1^2 +1^2] = sqrt[2] = (+/-)1.4 C.
There it is, the uncertainty in any buoy marine air temperature anomaly is (+/-)1.4 C. That should be the width of the error bars around every BEST, GISS, and UEA buoy marine air temperature anomaly.
Anyone see those error bars in the BEST representation?
In any field of physical science except climate science, error bars like that are standard. Such error bars put boundaries on what can be said, because they indicate what is actually known.
The (+/-)1.4 C is the 1-sigma uncertainty. Those error bars would obscure the entire average trend, leaving nothing to be said at all. At the 95% confidence interval, (+/-)2.8 C, pretty much the entire set of temperature anomalies would be submerged.
So it goes in climate science. The occulted is far more important than the displayed.
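Pat Frank’s arithmetic above can be checked in a few lines of Python (taking, as he does, a flat ±1 C accuracy for both the annual mean and the 30-year mean):

```python
import math

mean_accuracy = 1.0  # NDBC stated air-temperature accuracy, +/- deg C;
                     # systematic error, so it does not average away

# anomaly = (annual mean) - (30-year mean); the errors add in quadrature
anomaly_1sigma = math.sqrt(mean_accuracy**2 + mean_accuracy**2)  # ~1.4 C
anomaly_95 = 2 * anomaly_1sigma                                  # ~2.8 C
```

With those inputs, anomaly_1sigma comes out to sqrt(2), about ±1.4 C, and the 95% band to about ±2.8 C, matching the figures in the comment.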

Rud Istvan
Reply to  Pat Frank
November 29, 2014 12:53 pm

A simple, clean, precise illustration of the general point RGBatDuke makes. Well done.
For the equivalent for sea level rise determined by Jason-2 (or by tide gauges), see essay Pseudo Precision in Blowing Smoke.

Paul mackey
Reply to  Pat Frank
December 1, 2014 1:39 am

Excellent. Good question – I have wondered why there are never any error bars. Climate Science or Art of Noise?

November 28, 2014 11:29 pm

Quite a stable temp at that spot- always a wear your coat day.
What purpose does that buoy serve?
What depth is it moored at?
Is that yellow plate at the upper right corner a wind vane?
Does it measure water and air temp?

November 28, 2014 11:32 pm

Berkeley was correcting for UHI; after all, it’s only about 70 miles away, and that type of heat also travels upwind.

Reply to  Kit
November 28, 2014 11:38 pm

To me, all of this indicates a real problem with the Berkeley Earth computer program used to “adjust” the buoy data … which I assume is the same program used to “adjust” the land stations.

If this assumption is correct then the sarc tag may not be required.

Reply to  MCourtney
November 30, 2014 9:09 pm

They’re compensating for changes in elevation – the oceans are rising!

Peter Miller
November 28, 2014 11:35 pm

This once again helps illustrate the question: “Without raw data adjustments, homogenisation, manipulation or torturing, would there be any man made global warming/climate change?”
The answer is: “Maybe a little, but not enough to be of any concern, and certainly no reason for a massive switch from cheap reliable energy sources to expensive unreliable ones, as advocated by so many western leaders today.”
Anyhow, well spotted, but I doubt the Berkeley Earth people will deign to provide you with an answer to your question on ‘Estimated Station Mean Bias’, and if they do, it will not make much sense.

oebele bruinsma
Reply to  Peter Miller
November 29, 2014 1:17 am

I agree, human interference with data probably serves a plan.

lemiere jacques
Reply to  Peter Miller
November 29, 2014 5:03 am

Adjustments are OK as long as good reasons are given for them, some kind of verification is made afterwards, and caveats are provided for those who want to look at global means afterwards.

Ursus Augustus
November 28, 2014 11:35 pm

I just cracked up as I read this article. They fiddled the data and hey presto!!!, the contrarian trend disappears.
Sub prime science in its basic form. Now you see some reality – now you don’t.
LOL I just love it.
Wait til the msm catch on. ( Don’t hold your breath – it could damage your health)

Ursus Augustus
November 28, 2014 11:40 pm

I am recommending the widespread use of the term “sub prime science” in reference to the sort of schlock we all are aware of. I think it captures the essence of CAGW perfectly in terms that everybody understands at a fairly visceral level.
It is not as deliberately vicious a term say as “denier” but nonetheless uses the same associative connotation that naturally resonates.
Can I recommend it to the blogosphere?

Alan Bates
Reply to  Ursus Augustus
November 29, 2014 1:25 am

There is already a term: Cargo Cult Science

Ursus Augustus
Reply to  Alan Bates
November 29, 2014 1:43 am

Just trying a bit of subtlety Alan.
“Cargo cult” is probably accurate, certainly when referring to the hard core ‘team ‘ and the boondoggle beneficiaries but it has overtones of utter ignorance that are comparable to “denier”.
A softer term may actually penetrate the mindset of the msm which is probably the best way to demolish the CAGW freakshow.

Tis Knobsdale
Reply to  Alan Bates
November 29, 2014 10:27 am

“As many folks know, I’m a fan of good clear detailed data.”
My mind was just blown.. I TOO love clear detailed data!!! I didn’t know there were others out there.. Wild.
I also love my fruit fresh, as opposed to a bit overripe.
Further, I like to be comfortable. I tend to prefer garments that offer up a fair bit of protection from the elements, without sacrificing much in the way of skin feel. But hey. I like to stay new age know what I mean?

Reply to  Ursus Augustus
November 29, 2014 3:01 am

I suggest IGPOCC science.
(Get it?)

Reply to  Ursus Augustus
November 29, 2014 4:02 am

“Sub Prime” will be understood by everyone
As “Denier” is identified with the Nazis “Sub Prime” will be identified with dodgy bankers.

Paul mackey
Reply to  mwhite
December 1, 2014 1:43 am

Bankers who shamelessly manipulate the data – LIBOR, Forex etc. Quite appropriate!

Reply to  mwhite
December 1, 2014 2:45 am

Yes! ‘Subprime science’

sleepingbear dunes
Reply to  Ursus Augustus
November 29, 2014 4:12 am

Subprime science! Perfecto!

Quinn the Eskimo
Reply to  Ursus Augustus
November 29, 2014 7:02 am

Marc Morano has been calling it sub-prime science for quite a while now. It’s a good line. Great minds think alike, etc.

Ray Kuntz
Reply to  Ursus Augustus
November 29, 2014 8:14 am

Perfect terminology, I’m adopting for personal use. Thanks.

Sal Minella
Reply to  Ursus Augustus
November 29, 2014 9:35 am

As I understand it sub-prime refers to loans made at a rate below the prime interest rate. That seems like a good thing to me as a borrower. Sub-par makes more sense but both seem so weak. “Denier”, as a charge, has weight and an ignominious history so, I would suggest something with more impact to counter it.

Reply to  Sal Minella
November 29, 2014 2:34 pm

Sub-prime means that the borrower isn’t a very good risk and the loans are at a higher interest rate.
Of course that was before the QE’s and Fed interventions.

Sal Minella
Reply to  Ursus Augustus
November 29, 2014 9:40 am

How about “fluffer”?

Reply to  Ursus Augustus
November 29, 2014 12:16 pm

What about ‘Fraudster’?
Clear, and to the point.
Punchy – but may involve visits to local courts (of course, completely incorruptible and uninfluenced), so not recommended. A number of folk – Menn – may be a touch litigious . . . .
Maybe SOPS – Sub Optimal Pseudo Science?

James Allison
Reply to  Ursus Augustus
November 29, 2014 12:39 pm

YES! Imagine the MSM press release “Here is another example of Sub Prime climate science from “fill in the name”” LOL

Reply to  Ursus Augustus
November 29, 2014 12:49 pm

Can I recommend it to the blogosphere?
Certainly Ursus, I will be pleased to insert it into one of my inflammatory comments on The Guardian.

Ursus Augustus
Reply to  Ursus Augustus
November 29, 2014 2:11 pm

Thanks for the positive feedback. It just sounded so right I had to put it out there and if Marc Morano is onto it then I think we have lift off!

Jaakko Kateenkorva
Reply to  Ursus Augustus
November 29, 2014 9:40 pm

Ursus Augustus. Thank you for that idea. ‘Sub prime science’ fits like a glove.

Leonard Lane
Reply to  Ursus Augustus
November 29, 2014 10:25 pm

Why not be more explicit and call it “sub-standard science”?

November 28, 2014 11:41 pm

I had no idea they are temperature monitor buoys. I almost smacked into one once, blazing home at 30 knots after dark on my Sunseeker. I’d accidentally wandered to the edge of the channel, because I was a little tipsy after an evening in a pub in Cowes 🙂

Reply to  Eric Worrall
November 29, 2014 2:59 am

You’re in Aus right? You can be arrested for DUI on a boat.

Reply to  Patrick
November 29, 2014 9:42 am

Same here.

Reply to  Patrick
November 29, 2014 8:40 pm

Hey I was totally sober after I saw a buoy leap out of the dark and almost hit the boat 🙂

Reply to  Eric Worrall
November 29, 2014 5:40 am

Well, that explains one of the gaps in the data! Thanks. Berkeley Earth software just used that to adjust the data. 🙂

Claude Harvey
November 28, 2014 11:45 pm

“As you can see, once they “adjust” the station for their so-called “Estimated Station Mean Bias”, instead of a gradual cooling, there’s no trend in the data at all … shocking, I know.”
It’s a sophisticated statistical tool named “slice-and-dice”. When you get a trend line you just KNOW is wrong, you may slice-and-dice it into disconnected, horizontal lines with a note to ignore the “step functions”. If you insist on going further and REVERSING the bad trend, you may hold the graph up to a strong light and view it from the backside. My stock broker (a real whiz-bang) employs this technique when we review my portfolio performance.

November 28, 2014 11:52 pm

The mind boggles. As Willis so correctly asks, “Why on earth would they think that this data needs adjusting?”. The regional average temperature is the (weighted) average of all the temperature measurements in the region. This buoy’s temperature is one of those measurements, so the regional average temperature is derived from this buoy’s temperature. It is surely utterly illogical to adjust data using something that is derived from that data. To my mind, mathematically and scientifically you just can’t do that.

Reply to  Mike Jonas
November 29, 2014 12:53 am

Exactly, Mike. It’s ‘adjusting’, from the general dataset to one particular buoy.

Reply to  Mike Jonas
November 29, 2014 4:24 am

It’s a Peer Reviewed Recursive Adjustment of Temperatures or PRRAT where the “suspect” data is averaged into a set of other “pristine” stations within 1200km, which have the “correct” trend based on the current models. This procedure is repeated until the “problem” data no longer shows the troubling anomaly. It falls under “best practices” as all good climate “scientists” know that positive feedback is how climate works.

Reply to  nielszoo
November 29, 2014 8:10 am

Prat Reviewed Science.

Reply to  Mike Jonas
November 29, 2014 5:42 am

The software may interpret the gap/break in the data, and the associated decrease in temperature after each gap, as a station move to a new location. Do we know that the buoy has not moved? Although in the ocean, if it moved only a small distance it should not matter much, unless it gets moved in or out of a current with a different temperature.

Reply to  Bill_W
November 29, 2014 9:21 am

That whole Pacific coastal water is Darned Cold. All the time. It has not warmed up, based on my Mark I toes… I’ve swum in it on and off for a few decades. It’s awful cold all the time. Remember the arguing over folks not being able to survive a swim out of Alcatraz? That’s the warmer water in the S.F. Bay… It may well have cooled in the last decade. About a decade ago I stopped swimming in it. (Florida water is much nicer 😉
So they could move that thing a few miles and it would read the same. Just don’t drag it to shore.

November 29, 2014 12:06 am

Good work, Willis.
” Why on earth would they think that this data needs adjusting?”
Maybe Steven Mosher can explain that. He works a lot with their data, so he should be familiar with their procedures.

Stephen Richards
Reply to  mpainter
November 29, 2014 1:01 am

Steven is their defence counsel. He’ll be along soon.

The Ghost Of Big Jim Cooley
November 29, 2014 12:15 am

Some questions spring to mind:
Are these buoys dotted about the globe – Eric says above that he almost smacked into one in the Solent (coast of England)?
Where is the raw data for ALL of them?
Has anyone compiled it into a chart?

November 29, 2014 12:17 am

Gaps don’t affect a trend. Brrrr – getting colder.

November 29, 2014 12:20 am

As some further steps in the analysis, I would suggest that you try taking a look at the N. Pacific Ocean (PDO) temperatures and the local land station temperatures.
The ocean air temperature usually stays close to the ocean surface temperature. The ocean waters come down the coast from the Gulf of Alaska as part of the California current and the N. Pacific Gyre. The local buoy temperatures should follow the PDO/N. Pacific Ocean temperatures.
Over land, the minimum (nighttime) temperatures should follow the ocean temperatures as the ocean air moves inland. The daytime maximum temperatures indicate the solar heating produced by the convective mixing of the warm surface air at the station thermometer level.
The climate all the way across California is determined by the Pacific Ocean temperatures.
Joe D’Aleo showed that the US average temperatures are mainly a combination of the AMO and PDO.

Mike McMillan
Reply to  Roy Clark
November 29, 2014 12:33 am

Over land, the minimum (nighttime) temperatures should follow the ocean temperatures as the ocean air moves inland.
Doesn’t the land air head out to sea at night? Or are you referring to the general west to east flow?

Mike McMillan
November 29, 2014 12:27 am

Maybe they’re homogenizing the data with nearby (land) stations, a time-tested and honored practice that teases previously hidden warming from the raw data.

November 29, 2014 12:32 am

The problems with the Bodega Bay buoy dataset, in no particular order
Willis, your description of the buoy dataset sounds more like a log book than a dataset. Thoughts of “HARRY_READ_ME.txt” fill my mind along with my own experiences in both the financial and Network Management industries. It is difficult to take an alleged ‘climate crisis’ seriously when the basic data is collected, manipulated and archived in such a haphazard manner.
In the financial community we would back up after each run and daily send mag tape to be put on microfiche, which would be diligently verified, as we were constantly subject to serious outside audit.
The idea that this alleged ‘climate crisis’ is still based on Keystone Cop investigative competence after several decades tells us the actual importance of this ‘climate crisis’.
Geez … why the heck didn’t all the US government climate-related agencies, more than ten years ago, outsource all data collection, archiving and data distribution to IBM or some other entity that knows what data management is all about?

Reply to  Paul in Sweden
November 29, 2014 4:52 am

Without a doubt, it’s because anyone who actually knows how to manage raw data would not find what they wanted to be found. That’s a large number of people; I’m under the impression the climate “scientists” had to search far and wide to find people so ignorant of data management and proper math and statistics that they could find warming in the last 3 decades.

Leonard Lane
Reply to  CodeTech
November 29, 2014 10:35 pm

They do not have to be ignorant of data management. They can be crooks and liars as well.

November 29, 2014 12:40 am

Next, they list a number of monthly results as being “Quality Control Fail” … I fear I don’t understand that, for a couple of reasons. First, the underlying dataset is not monthly data, or even daily data. It is hourly data … so while the odd hourly record might be wrong, how could a whole month fail quality control?

They explain it further down on the same page:

Quality Control Summary:
Months missing 10 or more days: 26
Serially repeated daily or monthly values: 16
Extreme local outliers: 0
Regional climatology outliers: 73

Whether these quality requirements can be justified is another question.
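As an illustration only (not Berkeley Earth’s actual code), the first of those checks, counting months that are missing ten or more days, might look like this in Python:

```python
from collections import defaultdict

def months_missing_days(daily_dates, threshold=10, days_in_month=30):
    """Count months whose number of missing days meets the threshold.
    daily_dates: iterable of (year, month, day) tuples with data present.
    A crude sketch; assumes 30-day months for simplicity."""
    present = defaultdict(set)
    for y, m, d in daily_dates:
        present[(y, m)].add(d)
    return sum(1 for days in present.values()
               if days_in_month - len(days) >= threshold)
```

The other criteria (repeated values, outliers) would need similar, but separate, passes over the data.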

Reply to  Willis Eschenbach
November 29, 2014 1:12 am

For one thing, I couldn’t figure out the meaning of the repeated “daily or monthly values”, when the data is neither monthly nor daily, but hourly … add it to the many mysteries.

The explanation is that they use the same wording for all stations, and most stations seem to have daily or monthly values. They should have written ”hourly, daily or monthly values” to cover all situations.
Concerning the other datasets in the region, I think they use the same methodology everywhere. They say:

Regional filter: For each record, the 21 nearest neighbors having at least 5 years of record were located. These were used to estimate a normal pattern of seasonal climate variation. After adjusting for changes in latitude and altitude, each record was compared to its local normal pattern and 99.9% outliers were flagged. Simultaneously, a test was conducted to detect long runs of data that had apparently been miscoded as Fahrenheit when reporting Celsius. Such values, which might include entire records, would be expected to match regional norms after the appropriate unit conversion but not before

I suppose they have to go quite far to find the nearest 21 stations to this one, though.
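A rough Python sketch of the regional-filter idea in the quote above: flag values whose deviation from a neighbor-derived norm exceeds the two-sided 99.9% level (z of about 3.29 for a normal distribution). This is a guess at the method, not Berkeley Earth’s implementation:

```python
import statistics

def flag_outliers(values, neighbor_norm, z_crit=3.29):
    """Flag values whose residual against a neighbor-derived seasonal
    norm exceeds the 99.9% two-sided level. Rough sketch only."""
    residuals = [v - n for v, n in zip(values, neighbor_norm)]
    mu = statistics.mean(residuals)
    sd = statistics.stdev(residuals)
    return [abs((r - mu) / sd) > z_crit for r in residuals]
```

If the “neighbors” for a buoy are mostly land stations, a filter like this could easily flag perfectly good marine data as regional climatology outliers.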

Reply to  Jan Kjetil Andersen
November 29, 2014 1:55 am

If the nearest neighbours are all or mostly land side, it wouldn’t be so surprising that most of the buoy data looks like outliers. I dunno what they do but you can’t interpolate over a discontinuity like a shoreline.

Bernd Palmer
Reply to  Jan Kjetil Andersen
November 29, 2014 4:48 am

“changes in latitude and altitude”, “miscoded as Fahrenheit when reporting Celsius”?? I don’t see how this could apply to a fixed buoy where the temps are recorded electronically. Must be boilerplate text.
Where is Mosher when you need him?

Reply to  Jan Kjetil Andersen
November 29, 2014 10:10 am

I believe Jimmy said “Changes in Latitude Changes in Attitude”
Not sure how Best messed that up.
It’s a good song.

Rud Istvan
Reply to  Jan Kjetil Andersen
November 29, 2014 1:04 pm

Jan and Willis, see Bill Illis below and my comment thereto. In the case of station 166900, they go at least 1300 km horizontally and 2300 meters vertically.

gary turner
Reply to  Jan Kjetil Andersen
November 30, 2014 10:57 am

I suppose the health department, using climate sub-prime scientific methods, will need to adjust the temp of the walk-in freezer to bring it more in line with the kitchen and dining room temps. “Sorry, your freezer’s adjusted and homogenized temperature doesn’t meet code requirements. We’re shutting you down.”
That may seem a stretch, but that’s what the climate pseudo-scientists do when they make comparisons and adjustments across boundaries or differing environments.

November 29, 2014 1:10 am

‘To me, all of this indicates a real problem with the Berkeley Earth computer program’
One person’s problem is another’s opportunity. Now work out how such ‘adjustments’ give an ‘opportunity’, and to whom, and you have got there.

Reply to  knr
November 29, 2014 12:24 pm

Cui bono?

November 29, 2014 1:12 am

Paul in Sweden makes a very good point.
I’ve just completed and published a study of wind speeds (and thus power generation) for the UK and northern Europe spanning the years 2005-13:
Where did I get the data for this? The UK MET Office? (No, they charge – a great deal!) I got it from aviation METAR reports – I just happen to know about these because I had a PPL.
By the way, the results for wind generation variability and intermittency make alarming reading.

The Ghost Of Big Jim Cooley
Reply to  Capell
November 29, 2014 1:49 am

Alarming, as…?

Reply to  Capell
November 29, 2014 2:04 am

Tell us something we don’t know. Here on the South Coast of England yesterday it was blowing a gale; today there is no wind. I would surmise that yesterday any wind turbines would have had to shut down, and today there’s no wind to power them. Variability and intermittency in action (or non-action), which becomes ever more serious as wind forms an increasingly large percentage of the UK’s energy supply.

Reply to  tonyb
November 29, 2014 4:50 am

Ghost and tonyb
Dipping into the summary of my paper:
For the UK we have:
The model reveals that power output has the following pattern over a year:
(i) Power exceeds 90% of available power for only 17 hours
(ii) Power exceeds 80% of available power for 163 hours
(iii) Power is below 20% of available power for 3,448 hours (20 weeks)
(iv) Power is below 10% of available power for 1,519 hours (9 weeks)
Although it is claimed that the wind is always blowing somewhere in the UK, the model reveals this ‘guaranteed’ output is only sufficient to generate something under 2% of nominal output. The most common power output of this 10 GW model wind fleet is approximately 800 MW. The probability that the wind fleet will produce full output is vanishingly small.
Long gaps in significant wind production occur in all seasons. Each winter of the study shows prolonged spells of low wind generation which will have to be covered by either significant energy storage (equivalent to building at least 15 plants of the size of Dinorwig) or maintaining fossil plant as reserve.
And for the European fleet:
Unifying all three fleets by installation of European interconnectors does little or nothing to mitigate the intermittency of these wind fleets. For the combined system, which has an available power output of 48.8 GW:
• Power exceeds 90 % of available power for 4 hours per annum,
• Power exceeds 80 % of available power for 65 hours per annum,
• Power is below 20 % of available power for 4,596 hours (27 weeks) per annum,
• Power is below 10 % of available power for 2,164 hours (13 weeks) per annum.
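Exceedance figures like those quoted above are straightforward to compute from an hourly output series; here is a minimal Python sketch (the function names are my own, not Capell’s actual code):

```python
def exceedance_hours(hourly_output, capacity, frac):
    """Hours in which output exceeds a given fraction of available capacity."""
    return sum(1 for p in hourly_output if p > frac * capacity)

def below_hours(hourly_output, capacity, frac):
    """Hours in which output falls below a given fraction of capacity."""
    return sum(1 for p in hourly_output if p < frac * capacity)
```

Run over a year of hourly generation data (8,760 values), these two counters give exactly the kind of duration statistics listed in the summary.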

Bubba Cow
Reply to  Capell
November 29, 2014 5:32 pm

I would be very interested in your power generation study. The pdf link didn’t work for me – I have crappy internet (satellite and in Vermont with snow). Is the link good? Sorry too, that I’ve been away from a machine for the day so I’m late into this.
Our runaway governor is obsessed with renewables – even though Vermont is 98% carbon emission clean – I realize that doesn’t matter except that constructing whirligigs produces 250 – 750 tons of CO2 from concrete/rebar bases through steel posts, and we have legal statutes prohibiting generating CO2, and I’ll happily use their stupidity against them.
I know I’m tilting here. I care about your study, but I know they won’t. I still prefer knowing.

November 29, 2014 1:14 am

It’s curious that Berkeley Earth included Marine Air Temperature data from buoys in a land surface air temperature dataset. I’ll second the “to my surprise”.

Rud Istvan
Reply to  Bob Tisdale
November 29, 2014 1:10 pm

Bob, they may think that near shore is ‘close enough’. That gets into the interesting RUTI project issues Frank Landser in Europe has been exploring. Similar to your ocean/ENSO investigations in several ways. Highly recommended reading for all.

November 29, 2014 1:26 am

Both my eyeballs say that most of the “Quality Control Fails” in the Berkeley Earth Surface Temperature dataset for the Bodega Bay buoy are below the trend line – fancy that, who would have thought that?

November 29, 2014 1:31 am

Dear Willis,
There’s some information about the measurement history of the buoy here:
In the left hand column is some information (not a lot sadly) about the buoy itself. The somewhat cryptic notation says something about the type of deployment. 10D, 6N and 3D are, I think, designations for 10m Discus buoy, 6m NOMAD buoy and 3m Discus buoy.
I don’t know what effect that would have on the air temperature measurements, but this NDBC page suggests that there would have been a change in measurement height associated with the switch from 10m to 6m/3m:
GSBP and VEEP are the sensor packages. Again there are some changes there:
Best regards,

November 29, 2014 1:53 am

“Explosive hydrogen gas can accumulate inside the hull of 3-meter-discus buoys.
This dangerous gas is caused by batteries corroding due to water intrusion. While a remedial plan is being developed, mariners are asked to give this, and all other 3-meter-discus buoys, a wide berth. The buoys are 3-meter discus shaped, typically with a yellow hull and a 5-meter tripod mast. Each buoy is identified by the letters “NOAA” and the station identifier number, such as “46050”. Each buoy has a group of (4) flashing 20-second, yellow lights.”
Maybe they adjusted for the hydrogen gas?

Reply to  Sera
November 29, 2014 2:00 am

Anyway, the USCG buoy tenders are responsible for the maintenance, so the missing data could be just that (I know that they repainted it back in 2010).

Mike Ozanne
Reply to  Sera
November 29, 2014 2:01 am

Perhaps the lift from the hydrogen is being interpreted as “Sea Level Rise”………:-P

November 29, 2014 2:24 am

Clearly, since the data does not reveal Global Warming and, worse than that, shows actual Global Cooling, it absolutely has to be adjusted with the usual algorithms. If this problem continues then we may well see the buoys being sunk by Naval Gunfire. Having such actual data available is completely contrary to the consensus.

November 29, 2014 2:27 am

It could be that you are seeing the Berkeley scalpel in action. Where they detect a discontinuity, they treat the record as separate stations. And the marked discontinuities are substantial. Why the other breaks did not invoke the scalpel, I don’t know.

Reply to  Nick Stokes
November 29, 2014 4:16 am

Nick Stokes; “…, I don’t know.” WOW Nick! If only other “experts” had the same level of integrity and honesty. I’d buy you a VB (If that is your tipple).

Doug Proctor
Reply to  Patrick
November 29, 2014 10:32 am

What you are suggesting is that the adjustments are algorithm-based, not human-error-recognized.
More of my Computational Reality instead of Representation Reality.

Doug Allen
Reply to  Patrick
November 29, 2014 6:22 pm

Doug Proctor
Algorithm based like the NASA and other reconstructions that show a record of continually warming the cold years 100 and 35 years ago?

Reply to  Nick Stokes
November 29, 2014 8:20 am

they treat as separate stations
and as a result deliver a misleading result. So many methods sound so good in theory, but fail utterly in practice.

Dave in Canmore
Reply to  ferdberple
November 29, 2014 8:39 am

Which makes me wonder why an algorithm is needed at all. Seems a better process would be to pick GOOD stations rather than torture ALL stations. It seems self-evident to me, but then you don’t get to use your fancy education, I guess!

Rud Istvan
Reply to  Nick Stokes
November 29, 2014 1:20 pm

Nick, this example by itself demonstrates two things. First, the BEST scalpel technique is inconsistently applied, as you point out. Second, the underlying ‘station move’ assumption can be faulty, as it appears this buoy has been there all along at the same place. Dr. Marohasy was able to show the same faulty justification for the Australian BOM homogenization of the rural station Rutherglen, which turned a flat-to-declining record into a marked warming post homogenization. For details, follow the footnote hyperlinks to the Rutherglen example in essay When Data Isn’t in Blowing Smoke. As you are from down under, you probably are already aware of this analogous kerfuffle. Perhaps many posting here are not.

Nick Stokes
Reply to  Rud Istvan
November 29, 2014 4:47 pm

Rud, it isn’t a station move assumption. It isn’t any kind of assumption. The assumption would be that the measuring conditions (instruments etc.) are the same after the break as before. That’s usually true, and I think discarding it leads to loss of information. But discarding is what the scalpel does.
As you’ve observed, I live not so far from Rutherglen. I think BoM’s treatment of that is OK.

November 29, 2014 2:48 am

It just shows that if you cherry-pick data you can have any conclusion that you want. This seems to be the ongoing theme of AGW.
PS What happened to all the heat that mysteriously (and contrary to the Laws of Thermodynamics) is alleged to have disappeared into the oceans?

Bill Illis
November 29, 2014 3:01 am

Berkeley’s algorithms are (by design or by accident) predisposed to find more downspike breakpoints than upspike breakpoints.
The downs are taken out, the ups are left in.
And the algorithms are finding down breakpoints in high-quality, trusted station data where none should be found at all. Amundsen-Scott station at the South Pole has the same problem as this buoy. The station is staffed by dozens of highly qualified scientists using the best equipment and best methods possible. Yet Berkeley finds more than 20 down breakpoints in this dataset and removes them. But they find not a single breakpoint on the high side. Sorry, Berkeley should not be touching this data at all. Scientists working in −70°C temperatures would be very disheartened to know someone is just throwing out their hard work. I pointed out this particular station to Berkeley (Zeke) several months ago, and they noted something was wrong and said they would look at it. Nothing was fixed, of course.
Berkeley is predisposed to adjust the temperature record up.
I vote we throw out all of the adjustments and homogenizations from Berkeley and the NCDC and the sea level satellites and all the others and just go back to the Raw temperatures. The Raw temperatures might not be perfect, but they are actually more likely to be closer to the real trends than the adjusted records are.

Reply to  Bill Illis
November 29, 2014 6:02 am

Again, here is a problem that Steven Mosher can shed some light on, perhaps.

Rud Istvan
Reply to  Bill Illis
November 29, 2014 9:54 am

Exactly. It is station 166900. The 26 months of quality control fails are relative to the ‘regional expectation field’ – according to a rather nasty argument I had with Mosher about the BEST treatment. Well, the next nearest station is McMurdo base, which is 1300 km away and 2700 meters lower, on the coast.
See footnote 24 to essay When Data Isn’t in Blowing Smoke.

Doug Allen
Reply to  Bill Illis
November 29, 2014 6:25 pm

I vote we have two records, one without adjustments and one with, so that it’s easy to see the trend of the adjustments!

November 29, 2014 3:35 am

The idea that Berkeley should adjust anything without an independent review is laughable on its face.

Reply to  Oatley
November 29, 2014 8:38 am

The simplest test for temperature adjustments is to count the number of high and low adjustments. Statistically they should even out due to chance, except for adjustments for population trends. If the pattern of adjustments does not match expectations, then likely the algorithms are wrong.
When you look at these Berkeley results, as well as the published adjustments for the other major temperature series, the adjustments are all contrary to expectations. This strongly suggests that the methodology currently being used to adjust the major temperature series is likely based on a faulty shared algorithm.
Why does Berkeley find more down breakpoints than up? Statistically this should not happen. Why do GISS adjustments show a positive trend, when adjustments for population should result in a negative trend? Statistically this should not happen.
This is really very basic quality control testing. If your adjustments don’t match expectations, then it is no good arguing they are correct because they conform to such and such theoretical method.
If your results work in theory but not in practice, then the theory is wrong. No amount of peer review can change this.
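The sign-counting check described above is easy to formalize as a binomial (sign) test. A minimal sketch in Python, using made-up counts rather than actual Berkeley breakpoint tallies:

```python
import math

def sign_test_p(n_down, n_up):
    """Two-sided binomial test: how likely is a split at least this
    lopsided if up and down breakpoints were equally probable (p = 0.5)?"""
    n = n_down + n_up
    k = max(n_down, n_up)
    # P(X >= k) for X ~ Binomial(n, 0.5), doubled for the two-sided test
    tail = sum(math.comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical tally: 20 down breakpoints versus 5 up
p = sign_test_p(20, 5)  # small p means the imbalance is unlikely to be chance
```

A balanced tally such as `sign_test_p(10, 10)` returns 1.0, while the hypothetical 20:5 split gives a p-value below 1%, which is the sense in which adjustments "should even out due to chance."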

November 29, 2014 3:58 am

[quote]As you can see, once they “adjust” the station for their so-called “Estimated Station Mean Bias”, instead of a gradual cooling, there’s no trend in the data at all … shocking, I know.[/quote]
I can’t see that. I see that there is no trend _drawn_, just three means supposedly matching three different device setups used to measure temps. Whether the setup has changed, and where that would be documented, I don’t know. It is also quite unclear to a layman how that estimated mean bias is calculated.
If there is a long gap in measurements, it is probably because the buoy broke and was fixed much later. That may affect temperature readings. Without metainfo on why data is missing, it is not possible to do reliable long-term analysis. IMO.

Reply to  Hugh
November 29, 2014 8:49 am

I can’t see that.
See Nick’s reply above: “Where they detect a discontinuity, they treat as separate stations”.
Because they treat the data as three separate stations, what appears convincingly to the eye as a trend when they are combined as a single station disappears when they are treated as separate stations.
Combine that with an algorithm that finds more/bigger down-spikes than up-spikes, and you will end up splitting down-trends into separate stations with no trend, while leaving the up-trends as single stations.
After averaging, this will have the effect of creating an uptrend where there was no uptrend in the raw data. And since the adjusted result matches expectations of rising temps due to rising CO2, the researchers don’t suspect their algorithm is faulty and don’t bother to check.
Basic quality control is skipped because they get the answer they think is correct.
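The splitting effect described above can be seen with a toy series: a record that cools in steps has a clear downward trend as one station, but each segment taken alone has no trend at all. A sketch with illustrative numbers only:

```python
def trend(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

years = list(range(30))
# Three flat decade-long segments, each one degree cooler than the last
temps = [15.0] * 10 + [14.0] * 10 + [13.0] * 10

full_trend = trend(years, temps)                    # clearly negative
segment_trends = [trend(years[i:i + 10], temps[i:i + 10])
                  for i in (0, 10, 20)]             # each exactly zero
```

Treated as one station, the toy record cools at roughly 0.09 per year; treated as three stations, every segment is flat, so the cooling vanishes from any average of segment trends.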

Reply to  ferdberple
November 29, 2014 8:54 am

Statistically, if the data errors are random, there should be no difference between up-spike (up breakpoint) and down-spike (down breakpoint) detection in the Berkeley adjustments. If there is a statistical difference, then either the errors are not random or the adjustments are faulty. However, if the errors are not random, then the Berkeley algorithm will deliver faulty results. So in either case, if there is a statistical imbalance in the spike detection, the adjusted results will be faulty.

Reply to  Willis Eschenbach
November 30, 2014 2:26 am

Isn’t the motivation for breaking the series that the measuring instruments do change in the course of time? There is a half-life for any setup…
But do the breakpoints in this case really come from an algorithm rather than from some external metadata? How could we know if they don’t tell?
On the other hand, breaking at gaps randomly should not statistically cause any systematic trend. By choosing which gaps to break at, you can fiddle the trends whichever way you prefer. Are you afraid that happened?
As long as the procedure is not open and public, it is a little bit difficult to reproduce it and thus also difficult to trust it.

Bruce Cobb
November 29, 2014 3:59 am

All in a day’s work for “science” in the service of an ideology. Lysenkoism, on a grand scale.

Rud Istvan
Reply to  Bruce Cobb
November 29, 2014 9:59 am

Lysenkoism practiced by warmunists. H/t to former Czech president Vaclav Klaus and his book Blue Planet in Green Chains.

sleepingbear dunes
November 29, 2014 4:28 am

Last week I looked at BEST data for trends in several Asian and European countries. Some numbers at first glance just didn’t make sense. I compared the trend since 1990 in Ukraine against Germany, and there was a massive difference between the 2 countries, even though they are separated only by Poland. I wouldn’t expect a huge difference between, say, Pittsburgh and Kansas City. I think the holiday season is a good time to look more deeply into BEST.
Where is Mosher when we need him? This post showed me more clearly than any other what might be amiss in all that we depend on.

November 29, 2014 4:33 am

Good work Willis. I am shocked. It seems Berkeley have gone so far down the rabbit hole of developing solutions to problems in pursuit of scientific excellence that common sense checks have been forgotten.

November 29, 2014 4:33 am

Why do the 3 data sets/graphs shown look like they are from different Buoys? They don’t correlate to each other at all if you overlap them.
The highest temperature for graph 1 is 1983, graph 2 is 1992.5, graph 3 is 1998.
The lowest temperature for graph 1 is 1989, graph 2 is 2013, graph 3 is 2012.

November 29, 2014 4:41 am

There is a long way from Hamburg to Kiev. Btw, I’m located exactly north from Kiev, and we have much milder climate, thanks to the Baltic sea and Atlantic influence. But we lack the heat of the Crimean summer. Hopefully we lack the war of Crimea as well.

Reply to  Hugh
November 29, 2014 4:43 am

Comment meant to sleepingbear dunes, but misdirected thanks to tablet.

Reply to  Hugh
November 29, 2014 5:52 am

Thanks for your reply. It may all be explained away, but it just struck me that a difference of over 7°F per century in the upward trends of Ukraine and Germany since 1990 seemed large. But there may be very reasonable explanations.

Steve Case
November 29, 2014 4:47 am

Of all the metrics associated with climate change, the only one that seems to not be adjusted is the PSMSL tide gauges. Everything else seems to be suspect.

November 29, 2014 4:53 am

Looking at data from nearby buoys is probably the only way to check the validity of any adjustment.

John L.
November 29, 2014 4:55 am

Oh Mann! Haven’t we seen this “adjustment” thingy somewhere before?

November 29, 2014 5:12 am

… and again we see the climate “scientists” trying to make a weather monitoring and warning system into a climate data system. These buoys are designed to get real-time data for marine weather warnings and forecasts and were the primary source of marine data before satellites. I’m sure their products have been modified over the years to match the more sophisticated mariners’ weather systems, whose purpose is to keep marine traffic and coastal areas apprised of dangerous conditions. Absolutely consistent data formatting is not too important, and long-term climate archiving would be lower on the list than the format requirements for warnings and forecasts, but maybe now that we’re in the “information age” they can get data formats to some cleaned-up standard.
These buoys, like anything else man drops into the ocean, are NOT 100% reliable. I occasionally look at their data during hurricanes (I’m in Florida, so it’s been a while) and it is not uncommon for them to lose data for one or more sensors, drop off the air or break loose from their moorings and go walkabout. (I think that’s where they got the idea for ARGO.) Some of these things are hundreds of miles offshore and, unlike NOAA’s best stations, are not “ideally sited” in a densely populated city at the junction of several 6 lane highways with a convenient asphalt parking lot around the station for maintenance vehicles. It can sometimes take months to repair these buoys, depending on the frequency of the maintenance boat’s service schedule. The same holds true for calibrations and long term accuracy.
I’m sure there’s very good data to be had from these buoys, but it was never their primary mission and one needs to remember that when wading into the data they’ve gathered.

November 29, 2014 5:13 am

Thank you, Willis. I live on the Gulf Coast, and one of the things that always drives me a little crazy is that the figures given for tropical systems by NOAA are ALWAYS higher than are reported by the buoys. That is, when the location and wind speed of a tropical system are given, the wind speed measured at any of the buoys, even when they are virtually planted in the northeast quadrant of the system, is routinely lower by 20% to 30% or more, measured in mph. This is especially true of marginal systems, which seems to me to indicate an exaggeration of the number and strength of tropical systems.

Reply to  JR
November 29, 2014 5:43 am

I’ve noticed that as well during tropical events.

Pillage Idiot
Reply to  JR
November 29, 2014 9:17 am

Not an expert – but I don’t believe there is a conspiracy on this particular item. The hurricane hunter aircraft cannot fly on the deck. They always measure wind speed at altitude, where the wind speed is higher than at the sea surface where the buoys are located.
They also use dropsondes to gather information from their altitude down to the sea surface. In my recollection, I believe I have even seen maps where the wind speed at the buoys is corrected upwards to reflect the maximum wind speed at altitude when the eyewall passed over the location of the buoy.
[This response is from a hurricane non-expert.]

Reply to  JR
November 29, 2014 11:44 am

Wind speeds reported by the National Hurricane Center are estimates of the highest 1-minute-average windspeed anywhere in the storm at 10 meters above the surface. And the windiest part of a hurricane is usually small, may miss every station, and the duration of the windiest part of a direct hit often, maybe usually, lasts less than half an hour. So, I consider it expected for the actual maximum 1-minute wind to be significantly greater than the highest hourly reading anywhere.

Reply to  Donald L. Klipstein
November 29, 2014 12:50 pm

While what you say about the NHC is correct, the buoys in the Gulf have continuous wind readings, graphs and also measure the highest gusts in the measurement period. Consistently, the highest gust is significantly lower than the reported max 1-minute wind speed reported by the NHC.

November 29, 2014 5:30 am

Willis writes “It is hourly data … so while the odd hourly record might be wrong, how could a whole month fail quality control?”
If you lose a single hourly reading in a day’s recording, then you lose the ability to be sure of the maximum or minimum for that day. Sure, you can analyse the readings you do have and make a best guess but to be certain, a single lost hour “breaks” a day. So I expect you wouldn’t need to lose much data to put a month in question.
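The point above, that a single lost hour makes the day's true extremes unknowable, can be sketched with toy data and a deliberately strict rule:

```python
def daily_extremes(hourly):
    """Return (min, max) for 24 hourly readings, or None if any hour is
    missing; with a gap you can no longer be sure of the true extremes."""
    if len(hourly) != 24 or any(v is None for v in hourly):
        return None
    return min(hourly), max(hourly)

full_day = [10 + (h % 12) * 0.5 for h in range(24)]   # smooth toy cycle
gappy_day = full_day[:13] + [None] + full_day[14:]    # one lost hour
```

By this strict rule, one missing hour voids the whole day, and a month with scattered gaps could lose many days; that is one plausible route to a whole month failing quality control.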

Steve from Rockwood
Reply to  TimTheToolMan
November 29, 2014 5:39 am

I would expect temperature change to be gradual over a 24-hour period. Interpolation of one hour within that day would make no difference. On the other hand, if you assumed something went wrong with the sensor and sought to shift the mean of several months to years of data in order to correct for something whose source is not known – well, that could produce some serious errors.
Reminds me of the movie “The Adjustment Bureau” where men run around trying to change the world to fit their idea of success.
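The gentler alternative suggested here, interpolating an isolated missing hour rather than discarding the day, could look like this (a minimal sketch, assuming gaps are single hours flanked by good readings):

```python
def fill_gaps_linear(hourly):
    """Fill isolated None gaps with the average of the two neighbours;
    runs of two or more missing hours are left untouched."""
    out = list(hourly)
    for i in range(1, len(out) - 1):
        if out[i] is None and out[i - 1] is not None and out[i + 1] is not None:
            out[i] = (out[i - 1] + out[i + 1]) / 2
    return out

day = [10.0, 10.5, 11.0, None, 12.0, 12.5]   # one missing hour
filled = fill_gaps_linear(day)
```

When temperature really is gradual over the day, the interpolated value sits close to the truth; shifting months of data to correct an unknown fault is a far bigger intervention.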

Reply to  Steve from Rockwood
November 29, 2014 5:51 am

I would expect temperature change to be gradual over a 24-hour period.

What do they do with the data when a front goes by? T-storm cells can easily drop air temps 15°F in 10 or 15 minutes around their edges, outside the precipitation area… and that is perfectly accurate data. Do they toss it? It underscores the difficulty of trying to monitor “climate” with data from a bunch of discrete sensors designed to monitor weather.

Reply to  TimTheToolMan
November 29, 2014 9:01 am

So I expect you wouldn’t need to lose much data to put a month in question
However, if data losses are random, then no adjustments are required, as the positive and negative errors will balance over time.
In attempting to correct random errors you could actually reduce data quality, because it is very difficult to design algorithms whose corrections are truly random. In effect, adjustments add a non-random error to a time series that had only random error, which means the errors can no longer be expected to average out to zero. Instead, the adjustment errors will introduce bias.
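Whether missing data biases a record depends on whether the losses are random, as noted above. A small simulation (a synthetic seasonal series with made-up dropout rules, not real buoy data) makes the distinction concrete:

```python
import math
import random

random.seed(0)
# A year of daily temps: 12 degree mean with an 8 degree seasonal swing
temps = [12 + 8 * math.sin(2 * math.pi * d / 365) for d in range(365)]
true_mean = sum(temps) / len(temps)

def mean_after_dropout(series, drop):
    """Mean of the readings that survive the dropout rule."""
    kept = [t for t in series if not drop(t)]
    return sum(kept) / len(kept)

# Random losses: the surviving mean barely moves
rand_mean = mean_after_dropout(temps, lambda t: random.random() < 0.2)

# Losses concentrated in the cold season (think dead winter batteries):
# the surviving mean is biased warm
cold_mean = mean_after_dropout(temps, lambda t: t < 8 and random.random() < 0.8)
```

Random losses need no correction because they average out; losses with a seasonal pattern, or an adjustment scheme that "corrects" random gaps anyway, are what introduce bias.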

Reply to  ferdberple
November 29, 2014 9:04 am

The classic example is online casinos that use random number generators. Because these are typically pseudo-random, they have been exploited to generate casino losses.

Reply to  TimTheToolMan
November 29, 2014 1:13 pm

I’ve seen buoy air temperature data in the Arctic obviously drop a minus sign, causing a 40° jump and fall over a 15-minute interval. It might be interesting to know how much it takes to knock out a whole day. The impression I got was that the problem was a transmission problem rather than a sensor problem.

Steve from Rockwood
November 29, 2014 5:32 am

If the Pacific Ocean was cooling, would we even know?

November 29, 2014 5:34 am

Which part of “the heat is being sucked into the DEEP ocean” did you not understand?

Reply to  probono
November 29, 2014 5:52 am

Don’t you mean teleported?

Mike M
Reply to  probono
November 29, 2014 6:39 am

Yep, heat has been getting sucked into the deep ocean for millions of years and if it ever ‘decides’ to come back out – we’re all toast!

November 29, 2014 5:42 am

Willis writes “It is hourly data … so while the odd hourly record might be wrong, how could a whole month fail quality control?”
Another thought occurred to me… if there is a “hydrogen hazard” associated with the buoys, then they probably have a lead-acid based battery, which would need replacing periodically, and that could take weeks to happen. Especially in winter, which is why eyeballing the graph appears to show more “cold” data lost than warm… it’d be winter and even harder to change batteries. That would itself introduce a bias, I would think.

Steve from Rockwood
Reply to  TimTheToolMan
November 29, 2014 9:27 am

I once helped design and build a calibrated temperature sensor (mainly on the software side). The device operated over a voltage range of 24–28 VDC. As the voltage of the main battery dropped, the current draw increased slightly to keep the operating voltage within its correct range. The device would operate down to 17 VDC, but much below 24 you could see a change in temperature that correlated to the voltage drop. We also measured and recorded voltage and warned the users not to rely on temperatures acquired outside the operating range of the device. This is trivial with today’s electronics. Even a simple 16-bit A/D can measure temperature and voltage to a few decimal places and store years’ worth of measurements in RAM.
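The practice described here, logging supply voltage alongside temperature and distrusting readings taken out of range, amounts to a simple flag. A sketch with invented numbers (the low-voltage readings are fabricated to drift warm, mimicking the voltage-correlated error mentioned above):

```python
def qc_by_voltage(records, v_min=24.0, v_max=28.0):
    """Keep only readings logged while the supply voltage was inside the
    sensor's rated 24-28 VDC operating range."""
    return [(t, v) for t, v in records if v_min <= v <= v_max]

# (temperature, supply_voltage) pairs; the last two fall below the 24 V floor
log = [(12.1, 27.5), (12.0, 26.0), (11.8, 24.2), (13.9, 21.0), (14.6, 18.5)]
good = qc_by_voltage(log)
```

Recording the housekeeping channel makes the flagging rule trivial; without it, the warm drift at low voltage would be indistinguishable from weather.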

November 29, 2014 6:52 am

Reblogged this on Centinel2012 and commented:
I will take your take on the data set over theirs any day, even though I don’t know you. I have seen so much data tampering from the various government agencies that I can’t believe much of anything they publish in climate work or economics, which I also follow.

November 29, 2014 7:02 am

Moored buoys do have a hard life.
Here: http://www.ndbc.noaa.gov/mooredbuoy.shtml NDBC describes the buoy types, and
here: http://www.ndbc.noaa.gov/wstat.shtml they document faults. You can see when sensors etc. fail.
The quality control rules for data from these buoys are contained in: http://www.ndbc.noaa.gov/NDBCHandbookofAutomatedDataQualityControl2009.pdf
and they do have a section on how they allow for changes when a front passes over but still flag values significantly out of range.
The QA manual above does not help in the discussion of why BEST then trash the data once received.
One concern they have noted:
“Air temperature measurements (ATMP1, ATMP2) are generally very reliable; however, it is important to note that the physical position of temperature sensors can adversely affect measurements. Air temperature housings can lead to non-representative readings in low wind conditions. Air temperature is sampled at a rate of 1Hz during the sampling period.”

November 29, 2014 7:06 am

Simple Willis.
The station is treated as if it were a land station. Which means it’s going to be very wrong.
That’s why when you do regional or local work with the data you don’t use these stations.
Finally one last time.
The data isn’t adjusted.
It’s a regression. The model creates fitted values.
Fitted values differ from the actual data. Durrr
Lots of people like to call these fitted values adjusted data.
Think through it

Reply to  Steven Mosher
November 29, 2014 8:49 am

“Fitted values”
And the tailor always has “a certain flair”

Reply to  Steven Mosher
November 29, 2014 8:51 am

“It’s a regression. The model creates fitted values.
Fitted values differ from the actual data. Durrr”
I don’t know, Steven… “fitting” properly-collected and accurate data because it lies outside of some pre-determined range… it causes discomfort to many honest observers. It doesn’t matter if the variation was due to a passing thunderstorm or katabatic winds… it was legitimate, it existed, and it reflected a piece of the atmospheric energy balance puzzle at a given moment. A lot of folks will always have valid arguments for not smoothing these kinds of observations.

Steve from Rockwood
Reply to  Steven Mosher
November 29, 2014 9:31 am

When you have a near continuous time series you don’t need regression fitting. Simple interpolation works just fine.

Reply to  Steven Mosher
November 29, 2014 12:14 pm

Fitted values differ from the actual data. Durrr
Isn’t “differ from the actual data” the definition of adjusted?
Or are you arguing that changing a value by “fitting”, so that it differs from the actual data, in no way involves any adjustment?
a rose by any other name…

Reply to  Steven Mosher
November 29, 2014 1:48 pm

Steven Mosher
Let me first express that I appreciate that you follow this weblog and put in your comments. I regard it as very important that you, as a representative for the BEST temperature data product, participate in the discussions.
I am also happy to see that you write full sentences in your reply. I would wish however that you make some effort to put forward more complete arguments.
The intention of this reply is to try to explain why I think that your comments cannot be regarded as containing complete or proper arguments:
“The station is treated as if it were a land station. Which means it’s going to be very wrong.”
The first sentence does not say who is treating the station as a land station. Is it the BEST data product or is it Willis Eschenbach? As far as I can tell, Willis Eschenbach is not treating the data series as anything more than a temperature data series. Is it the BEST model that treats the station as a land station?
The second sentence does not give me any clue why “it” is going to be very wrong. It does not say what “it” is. How can a series of valid temperature measurements be wrong?
“That’s why when you do regional or local work with the Data you don’t use these stations.”
Does this sentence mean that BEST does not use this station? It is very clear from your record, as presented by Willis, that BEST makes adjustments to this data series. Why do you perform adjustments to the data series if you do not use it? This really does not make any sense to me.
“Finally one last time. The data isn’t adjusted. It’s a regression. ”
This is what Wikipedia has to say about regression:
“In statistics, regression analysis is a statistical process for estimating the relationships among variables. It includes many techniques for modeling and analyzing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables. More specifically, regression analysis helps one understand how the typical value of the dependent variable (or ‘criterion variable’) changes when any one of the independent variables is varied, while the other independent variables are held fixed.”
Hence, it seems reasonable to say that in regression analysis you do not perform any adjustment to your measurements. A regression is supposed to find a relationship between independent variables and a dependent variable. Willis put forward an example that shows that you perform adjustments to the data.
“The model creates fitted values. Fitted values differ from the actual data.”
Finally, I find a sentence that makes sense. The sentence seems to mean that the BEST data product creates fitted values, and that fitted values differ from actual data. This seems to be exactly what Willis has pointed out. Your fitted data differ from the measured data and, as I understand it, he can see no reason why real data should be replaced by an estimate unless you have identified a specific error and can provide a better estimate for the measurand than the measurement.
“Lots of people like to call these fitted values adjusted data.”
To me, this seems to be a correct observation. I will regard anything other than a measured value as an adjusted value.
“Think through it”
You can regard me as one who has thought about it, and I am not able to make any sense of what you write.
In “About us” at your web site I find the following:
“Berkeley Earth systematically addressed the five major concerns that global warming skeptics had identified, and did so in a systematic and objective manner. The first four were potential biases from data selection, data adjustment, poor station quality, and the urban heat island effect.”
Willis put forward a proper example where it seems that BEST performs data adjustments which do not seem to be justified. To me, this is a serious observation that deserves a serious reply.
I wish that you could put some more effort into your reply, and make sure that you use full sentences and put forward proper arguments. I think the BEST temperature data product would be better represented if you took the time to formulate proper and complete arguments.

Reply to  DHF
November 29, 2014 3:32 pm

Yours is a thoughtful and very worthwhile comment. I hope that Mosher sees it.

Reply to  Steven Mosher
November 29, 2014 3:24 pm

Well in my opinion these are precisely the kinds of stations that SHOULD be used for regional work.

Don K
Reply to  Terry
November 30, 2014 7:12 am

I’m not remotely an expert in this, but I’m GUESSING that temperature data from a buoy is going to be heavily influenced by water surface temperatures and thus is likely to be cooler in the afternoon, warmer at night and less influenced by cloud cover than a nearby station on land. If someone tells you different, you should probably believe them, not me.

Reply to  Terry
November 30, 2014 12:42 pm

To give you a shorthand explanation:
T = C + W
That is, in the Berkeley approach we don’t average temperatures. We create a model that PREDICTS temperatures: T = C + W.
People continue to misunderstand this because they don’t look at the math.
T = C + W. We decompose the temperature into a climate portion (C) and a weather portion (W). The climate portion is estimated by creating a regression equation C = f(lat, elevation); in other words, the climate for all locations is estimated as a function of that location’s latitude and elevation. On a global basis this regression explains over 80% of the variation. That is, 80% of a place’s temperature is determined by its latitude and altitude. Now of course you will always be able to find local places where 80% is not explained. Here are some other factors: distance from a body of water, geography conducive to cold air drainage, land cover. These are not present in the global regression, BUT if your goal was REGIONAL ACCURACY in the tens-of-kilometers range, then you might add these factors to the regression.
Continuing. With 80% of the variation explained by latitude and elevation, the remaining 20% is assigned to weather. So you have a climate field which is determined solely by the latitude and elevation (by season, of course), and you have a RESIDUAL that is assigned to the weather field, or W. Now in actuality, because the regression isn’t perfect, we know that the W field can contain some factors that should be in the climate; for example, cold-air-draining areas will have structure in their residuals. The regression model will have more error in these locations than others. In addition, the residuals near coasts will have more error in the climate field. So, if you want the local detail (at a scale below 50 km, say) expressed more perfectly, then you need to add variables to the climate regression. We are currently working on several improvements to handle the pathological cases: cases near water, cases in geographical areas that experience cold air drainage, and adding land cover regressors, although those have to be indexed over time. On a global scale we know that improving the local detail doesn’t change the global answer.
In other words, the local detail can be wrong, but fixing it doesn’t change the global answer. This is clear if you look at distance from coast. When you add that to the regression, some local detail will change but the overall regression (r^2) doesn’t change. Some places in the climate get a little warmer and others get a little cooler.
Once folks understand that the approach is a prediction at its heart, a regression-based prediction, then a few things become clear. 1) The actual local data is always going to vary from the predicted local value. The predicted local values (the fitted values of the regression) are not ‘adjusted data’. As the documentation explains, these values are what we would expect to have measured if the site behaved as the regression predicted. 2) If you are really interested in a small area, then DON’T use data predicted from a global model. Take the raw data we provide and do a local approach. For examples, you can look at various states that use kriging to estimate their temperatures. At smaller scales you have the option of going down to 1 km or the higher resolutions required to get cold air drainage right. Further, you can actually look at individual sites and decide how to treat buoys, for example. We treat them as land stations. That means they will be horribly wrong in some cases when you look at the fitted values. Why? Because the fitting equation assumes they are over land! If you are doing a local data set then you would decide how you wanted to handle them. In the future I would hope to improve the treatment of buoys by adding a land class to the regression, and if that doesn’t add anything then they would get dropped from land stations and put into a marine air temp database.
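The T = C + W decomposition described above is, at its core, an ordinary least-squares fit of temperature on latitude and elevation, with the residual labelled weather. A toy sketch with five invented stations, nothing like the real BEST fitting procedure in scale or detail:

```python
import numpy as np

# Invented stations: latitude (deg), elevation (m), observed temperature
lat  = np.array([60.0, 45.0, 30.0, 10.0, 50.0])
elev = np.array([100.0, 500.0, 10.0, 200.0, 1500.0])
T    = np.array([2.0, 8.0, 22.0, 26.0, -1.0])

# Climate model C = b0 + b1*lat + b2*elev, fitted by least squares
X = np.column_stack([np.ones_like(lat), lat, elev])
beta, *_ = np.linalg.lstsq(X, T, rcond=None)

C = X @ beta    # the fitted "climate" field
W = T - C       # the residual, assigned to "weather"
```

The fitted values C are what the regression predicts a station "should" read given only its latitude and elevation, which is why a buoy fitted with land coefficients can sit far from its own measurements.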

Bill Illis
Reply to  Steven Mosher
November 29, 2014 3:31 pm

Steven Mosher, for about the 8th time now, I am asking for a distribution of the detected breakpoints:
– the number of breakpoints that are detected as higher than the regional expectation, and how many are detected as lower;
– more importantly, the distribution of the same through time (by month or year): the higher-than-expected breakpoints versus the lower-than-expected ones;
– it would be nice to know how much each affected the trend over time as well, but maybe that is asking for too much computing resource.
The point being: are there more breakpoints detected on the downside than on the upside, and has that changed through time? We should expect very close to 50:50 every single month throughout the entire record if the algorithm were working properly – it is described as something that should be very close to completely random. (To answer Willis' question about a citation for my statements: it would sure be nice, and many would have expected, that there would be data available showing this that could be cited. I don't know how you present temperature data the way BEST has done without showing this important point – I've asked about 7 times for this information before today.)
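For what it's worth, the tally being requested is cheap to compute once a breakpoint list exists. A sketch on a hypothetical list (the years and step sizes below are invented, not BEST's actual output):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical breakpoint list, invented for illustration: each
# breakpoint has a detection year and a signed step size relative
# to the regional expectation (mean 0 here, i.e. an unbiased detector)
years = rng.integers(1900, 2014, 300)
steps = rng.normal(0.0, 0.5, 300)

up = int(np.sum(steps > 0))     # detected higher than regional expectation
down = int(np.sum(steps < 0))   # detected lower
print("up:", up, "down:", down)

# The same distribution through time: up/down counts per decade
decade = (years // 10) * 10
for d in np.unique(decade):
    sel = steps[decade == d]
    print(d, "up =", int(np.sum(sel > 0)), "down =", int(np.sum(sel < 0)))
```

A persistent skew in these counts, or a drift in them over time, would be exactly the kind of diagnostic being asked for.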

Steve Fitzpatrick
November 29, 2014 7:17 am

Steve Mosher,
The question is if this (and other) ocean station data are included in the land historical trend or not. Clearly (as you say) they are very wrong for land data, and so should not be included in land trends… but are they?

Steve from Rockwood
Reply to  Willis Eschenbach
November 29, 2014 9:34 am

Could be something as simple as formatting errors in the original data set with no human looking at the processed data records.

Reply to  Willis Eschenbach
November 29, 2014 12:59 pm

Like a lot of Climate Science, they overestimate their ability to interpret the data.

November 29, 2014 8:51 am

Willis: As usual… I'm afraid I had a "bias" myself when this Berkeley effort was proposed: i.e., that somehow, magically, they'd get the "result" they wanted.
As Dilbert always says to Dogbert, "You are an evil little dog!" (Image: tail WAG!). Using raw data, not manipulated at all… and "voilà": 34 years from several stations which not only do not show the alleged upward (land-based) trend, but almost completely the opposite.
Sun, cloud cover, weather patterns… NORMAL VARIATIONS account for everything. The thinning at the North Pole, MATCHED by ice growth at the South Pole (shelf, and THICK!).
Balance is maintained.
And the AGW claim is again unraveling as either "narrow vision" (i.e. select years, or manipulated data)… or
anecdotal, with no consideration of "world wide" scoping.

John Peter
November 29, 2014 9:06 am

“Steven Mosher November 29, 2014 at 7:06 am”
If that is all he can say about this post, I have gone even further OFF Berkeley Earth.
This one “made my day” “Finally one last time.
The data isn’t adjusted.
It’s a regression. The model creates fitted values.
Fitted values differ from the actual data. Durrr
Lots of people like to call these fitted values adjusted data.
Think through it”.

November 29, 2014 9:13 am

Any temperature record that shows cooling must be adjusted. The adjustment process is "spring loaded" to adjust any sudden downward change. That "sudden" change might be the natural result of a few days of missing data when the season is naturally cooling; it might be due to a change in wind direction or a change in ocean current; it could be anything. But the process is built to seek out any sudden downward change and "adjust" it upward. The idea that there could be a NATURAL sudden downward change is completely alien to them. Only upward changes are "natural", apparently.

Billy Liar
November 29, 2014 9:21 am

The Bezerkeley ‘scalpel’ paper:
Extract from the summary:
Iterative weighting is used to reduce the influence of statistical outliers. Statistical uncertainties are calculated by subdividing the data and comparing the results from statistically independent subsamples using the Jackknife method. Spatial uncertainties from periods with sparse geographical sampling are estimated by calculating the error made when we analyze post-1960 data using similarly sparse spatial sampling.
Trying to pretend the data isn’t ‘adjusted’ by calling the adjustments ‘fitted data’ borders on the bizarre.
The full paper is available (for free).

Steve from Rockwood
Reply to  Billy Liar
November 29, 2014 9:39 am

If you use iterative weighting to reduce outliers, the resulting "fitted" data converge to the mean rather than eliminating a trend. The only way to eliminate a trend is to adjust the mean over a time period that is long enough to be insensitive to outliers but much shorter than the time series (e.g. several months in a decadal time series sampled daily).
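The convergence-to-the-mean behaviour described above can be sketched with a Huber-style reweighted mean on invented numbers:

```python
import numpy as np

# A short series with one genuine cold excursion that an outlier test
# would flag (values invented for illustration)
x = np.array([10.0, 10.2, 9.8, 10.1, 4.0, 9.9, 10.0])

# Iteratively reweighted mean: on each pass, readings far from the
# current estimate get less weight, so the estimate converges toward
# the bulk of the data and the cold reading is effectively written out
est = x.mean()
for _ in range(10):
    r = np.abs(x - est)
    w = np.where(r < 1.0, 1.0, 1.0 / r)   # downweight large residuals
    est = np.average(x, weights=w)

print(round(est, 2))   # close to 10, versus a plain mean of ~9.14
```

Whether that genuine cold excursion *should* be written out is, of course, the entire argument.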

Billy Liar
Reply to  Steve from Rockwood
November 29, 2014 9:50 am

What their algorithm does, I think, is turn a station with a falling trend (say) into 3 stations (say) with no trend. Any ‘statistical’ outliers are removed whether or not there is any good reason to do so.

November 29, 2014 9:23 am

Did you ask BE for comment before posting? There is that rhetorical (?) question at the bottom, so I am wondering whether someone at BE is expected to read here and act upon your invitation, or whether they didn’t/wouldn’t grace you with a reply anyway?

Rick K
November 29, 2014 10:06 am

Just a simple “Thank you.” I always learn something from your posts.
And I LOVE how the mighty fall with just some simple investigative questions!

Gary Pearse
November 29, 2014 10:16 am

I think Berkeley must have chosen only the two hours of normal temp collection times as if it were a land thermometer and used that for data.

Pete in Cumbria UK
Reply to  Willis Eschenbach
November 29, 2014 12:23 pm

A little further to missing data..
I’ve got myself three of the little USB temperature-and-humidity loggers. One is dangling off my washing line in my garden; another is near the middle of a 125-acre patch of permanent grassland cow-pasture (the nearest building is a holiday home, empty 11 months a year and 500 metres away). The third is a temperature logger only and keeps company with the stopcock on my farm’s water supply, ~30” underground.
I did have them taking readings every 5 minutes so as to make a good comparison with my local (3 mile away) Wunderground station which broadcasts a new reading every 5 minutes. After a couple of years, Excel on my 7yo lappy was ‘feeling the strain’ of 288 data points for every day.
Out of curiosity, as you do, I compared the daily average of the 288 points to the average of the maximum and minimum temperatures I’d recorded (just the two data values) whenever they occurred between midnight and midnight on whatever day.
To one decimal place (the loggers only record to ±0.5°C anyway), there was no difference if I used 288 points or just two data points to get the daily average. The answer was the same. It was really really surprising – 286 readings were redundant. And believe it or not, the same applies to the data coming from the Wunderground station.
It kinda makes the whole business of TOBS adjustment redundant as well dunnit and reveals what a fraud it is.
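Pete's finding is what you'd expect for a roughly symmetric diurnal cycle. A quick synthetic check (an idealised sine-wave day, not his logger data) shows when the 288-point mean and the two-point mean agree, and when they don't:

```python
import numpy as np

t = np.arange(288) / 288.0   # one day of 5-minute readings

# A symmetric (sinusoidal) diurnal cycle: the mean of all 288 readings
# and the (min + max) / 2 "two reading" mean agree exactly
temp = 10.0 + 5.0 * np.sin(2 * np.pi * t)
full_mean = temp.mean()
minmax_mean = (temp.min() + temp.max()) / 2

# Add an asymmetric half-day harmonic and the two estimates diverge
skewed = temp + 2.0 * np.cos(4 * np.pi * t)
diff = skewed.mean() - (skewed.min() + skewed.max()) / 2

print(round(full_mean - minmax_mean, 3), round(diff, 2))
```

So the agreement Pete found says something about the shape of his local diurnal cycle; it isn't guaranteed everywhere.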

Reply to  Pete in Cumbria UK
November 29, 2014 4:42 pm

Pete writes “there was no difference if I used 288 points or just two data points to get the daily average.”
Now that IS interesting. It does make you wonder just how justified the TOBS adjustment is. I mean, the TOBS adjustment makes sense conceptually, but I wonder whether its justification was tested against enough varied data. It’d be very agenda-driven “Climate Science” if they found the effect a few times and extrapolated it to every station regardless.
Of course the reverse could be true and your experience is the exception rather than the rule…

Reply to  Pete in Cumbria UK
November 29, 2014 4:49 pm

Actually, having thought about that a little more, I’m not convinced simply taking the average is the right answer. You probably need to take the average of 24 hours’ worth of readings starting at, say, 10am, and compare that to the average of 24 hours of readings starting at 10pm, to get the “TOB” bit in there.

Reply to  Pete in Cumbria UK
November 29, 2014 4:51 pm

Oh and, for example, the min of any 24 hours can’t be less than the min at the start point.

Reply to  Pete in Cumbria UK
November 29, 2014 4:52 pm

Of course I meant
The min of any 24 hours can’t be less than the temperature at the start point.

Reply to  Pete in Cumbria UK
November 29, 2014 5:17 pm

Bah, I’ll get the statement right eventually :-S
The min of any 24 hours can only be equal to or less than the temperature at the start point. It can’t be greater (and consequently a single min may be counted twice for consecutive “days”). A similar argument applies to the max.

November 29, 2014 10:52 am

Common sense time.
Someone with a bit of gravitas in the field of meteorology should compile two lists: the first would show the common ways that temperature can suddenly drop, for example a cold front passage, thunderstorm, Santa Ana, mistral, katabatic wind, etc. The second would show the common ways the temperature might suddenly spike upward, such as a sirocco. Follow up with a discussion characterizing the relative frequencies of each “disturbance” and the likelihood that “fitting” would alter valid observations due to these conditions.
I would expect that such a conversation would reveal that many more below-expected observations would need to be “fitted” than higher-than-expected observations.

Reply to  Sciguy54
November 30, 2014 9:04 am

Well said. Thinking back over my almost 6 decade life I can think of very few times that it has suddenly gotten “hotter” but many times when it has suddenly gotten “cooler.” The hotter episodes are limited to small short wind gusts in desert gullies or washes. The cooler episodes happen a lot here in Florida when T-storms wind through. We’d get the weird, dry cold chunks of air during tornado season in the Midwest. Cold blasts coming down off a mountain etc. Sudden drops seem to be more the norm than sudden rises in temps from a purely subjective pov.

Kent Gatewood
November 29, 2014 12:11 pm

Would a land station or a buoy station be appropriate to smear (another word?) across large stretches of ocean without stations?

sleepingbear dunes
November 29, 2014 12:30 pm

After reading this post and all these comments, I thought Willis did an exceptional job.
As for Mosher defending the BEST system? Just a “durr”. That says it all: Mosher being Mosher without enlightening us. It is not adjustment; it is some other gyration that by any other name is still a rose.

November 29, 2014 12:33 pm

Willis writes “So I’m sorry, but you are very wrong when you say that “you wouldn’t need to lose much data to put a month in question”. In fact, knocking out a full quarter of the monthly data leads to a MAXIMUM error in 1000 trials of seven hundredths of a degree …”
There is a difference between “knowing” and having an acceptable estimate. You’re talking about the latter, but in the same breath wonder how Berkeley Earth “lose” a whole month. Perhaps you have a better speculative answer?

Reply to  Willis Eschenbach
November 29, 2014 3:49 pm

Willis writes “Not sure what you mean by “knowing” in quotes, or what your point is.”
And the point is that if they choose to drop data rather than estimate it, then a relatively small amount of data might make a month unusable.
Above, you said “Actually, the Berkeley folks lost an entire year”
But earlier you wondered “so while the odd hourly record might be wrong, how could a whole month fail quality control?”
I speculated on your monthly statement, not on how a whole year might be missing.

Pat Frank
Reply to  Willis Eschenbach
November 29, 2014 2:09 pm

Willis, “Now, according to the data you reference, the errors are in fact symmetrical, as they are given as ± 1°C (as opposed to say +0.5/-1.5°C).”
That’s just the way accuracy is written, Willis. It’s just the empirical 1-sigma standard deviation of the difference between a calibration known and the measurement. It doesn’t imply that the errors in accuracy are in fact symmetrical about their mean.
Any distribution of error, as unsymmetrical as one likes, will produce an empirical (+/-)x 1-sigma accuracy metric.
An accuracy of (+/-)1 C transmits that the true temperature may be anywhere within that range. But one does not know where, and the distribution of error is not necessarily random (symmetrical and normal).
Here’s a NDBC page with links at the bottom to jpeg pictures of the various buoys. On the coastal buoys in particular, one can make out the gill shield of a standard air temperature sensor — similar to those used in land stations — mounted up on the instrument rack. It’s especially evident in this retrieval picture.
Those sensors are naturally ventilated, meaning they need wind of >=5 m/sec., to remove the internal heating produced by solar irradiance.
In land station tests of naturally ventilated shields, under low wind conditions, solar heating can cause 1 C errors in temperature readings, with average long-term biases of ~0.2 C and 1-sigma SDs of ~(+/-)0.3 C. None of the distributions of day-time error were symmetrical about their means. Night-time errors tended to be more symmetrical and smaller. Average error was strongly dominated by day-time errors.
So, there isn’t any reason to assume the (+/-)1 C buoy accuracy metric is the standard deviation of a random error.
Thanks for discussing buoy temperatures, Willis. A careful and dispassionate appraisal of accuracy in marine temperatures is long overdue.
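Pat Frank's point, that a symmetric "(+/-)x" accuracy figure can come from a thoroughly asymmetric error distribution, is easy to demonstrate with an invented error model (the proportions and magnitudes below are illustrative only, not the actual sensor behaviour):

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented error model: quiet conditions give small, roughly symmetric
# errors; low-wind solar heating events add a one-sided warm tail
n = 100_000
solar = rng.random(n) < 0.3
errors = np.where(solar,
                  rng.gamma(2.0, 0.6, n),      # warm-biased heating events
                  rng.normal(0.0, 0.3, n))     # quiet conditions

sigma = errors.std()     # the symmetric "(+/-) x" accuracy figure
bias = errors.mean()
skew = ((errors - bias) ** 3).mean() / sigma ** 3

print(f"accuracy +/-{sigma:.2f} C, mean bias {bias:+.2f} C, skew {skew:.2f}")
```

The quoted sigma is a single symmetric number, yet the underlying errors here are biased warm and strongly skewed, which is exactly why a "(+/-)1 C" spec by itself says nothing about the error distribution.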

KRJ Pietersen
November 29, 2014 2:12 pm

Anything that self-aggrandisingly gives itself the acronym ‘BEST’ had honestly better make sure of itself prior to sticking its neck above the parapet. Because if it’s not the ‘BEST’, then it’s going to get found out sooner or later and isn’t that a fact.
Mr Mosher’s arrival upon and very rapid about-turn and departure from the battlefield tell their own story. He did three things in his post:
He called Mr Eschenbach “Simple Willis” for his own amusement hoping that nobody would pick him up on it.
He said “The station is treated as if it were a land station. Which means it’s going to be very wrong”.
Which kind of fouls up the BEST idea.
And he said “Finally one last time. The data isn’t adjusted. It’s a regression. The model creates fitted values. Fitted values differ from the actual data. Durrr”
Regressions and fitted values are kinds of adjustments to data. Are they not?
Overall, to mix my sporting metaphors, I’d call this game, set and match to Willis Eschenbach by a knockout.

November 29, 2014 2:14 pm

Willis writes “In fact, errors are generally not uniformly distributed, but instead have something related to a normal distribution.”
“something related to a normal distribution” seems right, but not necessarily normally distributed around the mean. The sensor itself might behave like that, but it’s the whole measurement process that you need to consider, and that includes errors introduced by varying voltages from the battery, its condition, and how/when it charges.

Catherine Ronconi
November 29, 2014 2:22 pm

The Team’s Torquemadas torture ocean data until they confess because without such manipulation, surface “data” sets would show cooling, since that’s what’s actually happening.
Phil Jones admits that after adjusting all the land records upward (even when adjusting for UHIs!), then the ocean data need to be upped even more so that they won’t be out of line with the cooked land books.
All the surface series, HadCRUT4, GISTEMP & BEST, are literally worse than useless, except as boogiemen to raise money for their perpetrators.

Rud Istvan
Reply to  Catherine Ronconi
November 29, 2014 3:41 pm

Catherine, the oceans are 71% of the planet’s surface. Until Argo, they were grossly undersampled. Some maintain they still are. Other than seafarers, we don’t experience these surface temperatures.
No matter how ‘adjusted’/fitted/homogenized, land temperatures can say nothing useful about global averages. There is a reason Earth is called the Blue Planet. Regards.

Reply to  Catherine Ronconi
November 29, 2014 10:03 pm

Thanks for reminding readers of this Catherine –

November 29, 2014 2:25 pm

I was taught that:
“Assumption is the mother of all mess-ups”
Now, I start thinking that:
“Adjustment is the mother of all mess-ups”

Reply to  DHF
November 29, 2014 2:52 pm

Or rather:
“Adjustment is the father of all mess-ups”

David Norman
Reply to  DHF
November 30, 2014 5:25 am

And, when an Assumption is coupled with an Adjustment they frequently give birth to an Ad hominem.

November 29, 2014 3:03 pm

The data needed to be adjusted because it is contrary to the GCM outputs.

November 29, 2014 5:12 pm

But it seems clear that there is severe yellow diamond pollution near the California coast.

Reply to  RoHa
November 30, 2014 10:31 am

They’re going to cover them up with wind turbines so no one will notice them… especially the gulls and pelicans and condors and falcons and eagles…

November 29, 2014 5:47 pm

Steve Mosher,
Come on, we are waiting for a reasoned answer. Clearly, ocean measurements should not be treated as land measurements, but on the face of it, they have been so treated. What say you (and Berkeley Earth)?

Doug Allen
November 29, 2014 6:40 pm

Thanks Willis. Perhaps you have done for buoy data what Watts did for the land station data, not in the siting, but in the confidence we should have in the accuracy of the data. If you have the time, I would like to see if other buoy data are also similarly adjusted, oh sorry, fitted.

November 30, 2014 1:38 am

Boy oh Buoy! Looks like things are getting cooler.

Philip Bradley
November 30, 2014 2:34 am

Note, these are hourly measurements, not predominantly min/max as all the land datasets are, and they are therefore more valid measures of temperature changes over time.
In addition, they are free of the numerous local to regional scale effects on measured temperatures that exist on land.
Being fixed they are also free of any drift biases that exist with Argo.
IMO, perhaps the best air temperature trend dataset we have.
No real surprise to me they show a cooling trend.

November 30, 2014 3:45 am

It is a catastrophe that the IPCC (i.e. Dr Pachauri and his Panel mates), pro-global warming politicians and bureaucrats, most of the media, academia, the environmental movement, and other global warming alarmist supporters, do not give a damn about real world observational data on climate. They are only interested in climate propaganda driven by the UN.
There is now an overwhelming amount of data and research that demonstrates the IPCC’s supposition of catastrophic man-made global warming is wrong. Yet the grand deception goes on and on.
It no longer matters what the weather and temperature do anymore because, whichever way they go, the climate change charlatans just blame it all on “climate change”… global warming or cooling… droughts or floods… hurricanes or no hurricanes… winter blizzards or no winter blizzards… sea level rise or sea level decline… it matters not, anymore.

November 30, 2014 10:32 am

Pro-CAGW data isn’t data until it’s made a pass through the Gruber engine. Thus refined, it is fit to publish.

November 30, 2014 12:49 pm

Willis, I have a question regarding short term ocean temperature changes. With arctic air upon our area for the first time this year, decided to look at a buoy off the Washington coast to see how cold the water was. NDBC 46087 (Neah Bay) at 11/29/14, 1720 hours, Air temp of 36.3F and Water temp of 52.3F. At 11/30/14, 0720 hours, Air temp of 34.3F and Water temp of 50.0F. The two air temp readings seemed logical, but the larger difference of the two water temp readings surprised me.
Is it unusual to have that much change in the water along the coast? Or is the effect of tides and currents moving the water temperature in larger swings than is normal for the air?
Also, I was curious whether all the buoys report temperatures in Fahrenheit.

November 30, 2014 2:49 pm

So, no one anywhere knows how the planet’s temp is measured, across decades as measurement has changed (supposedly for the better), or what the planet’s temp is, but we have all been certain for decades that the planet has been warming. OK, I got it.

November 30, 2014 4:22 pm

Great job on getting the data and presenting it in a form that tells the story.
As an engineer I appreciate your use of raw data. If engineers corrupted data the way the “climate scientists” do, we would have structural failures, bridges falling down, and processes melting the containment equipment.
In evaluating equipment failures and operating performance, engineers would never accept the data modification employed by the “team”.

Sun Spot
Reply to  Catcracking
November 30, 2014 7:06 pm

Mosher doesn’t know anything about engineering, so he can’t comprehend first principles of data integrity; ergo BEST data is suspect. Yes, cAGW suffers from integrity failure just as a badly designed bridge fails structural integrity.

Reply to  Willis Eschenbach
December 1, 2014 7:01 am

Sort of cuts thru it, doesn’t it. Well done.

December 1, 2014 5:13 am

Thanks, Willis. Food for thought about BEST; not good.

jim hogg
December 1, 2014 8:07 am

I wonder the trend would like with the climate fails included?

jim hogg
December 1, 2014 8:08 am

That like would look better if it was actually a look.

jim hogg
December 1, 2014 8:10 am

OMG – last try, promise! I wonder how that trend would look with the climate fails included . . . there!

The Old Bloke
December 1, 2014 1:42 pm

Hi everyone. I’ve been lurking these pages for quite a few years and this is my first post. Yay! Some of you might know me as The Old Bloke from the BBC bias web site; if you do, hello also. I’ve been “interested” in meteorology for 53 years now and have “seen it all” here in the U.K. I am also a pilot. Concerning the data sets from buoys, please have a look at this:

December 1, 2014 2:32 pm

Mosher confirms here what many of us have long known about BEST’s algorithm. It is not an objective statistical treatment of actual measurements, but the purposeful creation of a pseudo-scientific fiction, whose rationale is couched in Orwellian double-speak.

Reply to  Willis Eschenbach
December 1, 2014 4:27 pm

You think it’s decent & honest to blame skeptics for Obama’s executive orders?
More evidence that Lalaland includes Berkeley.

Reply to  Willis Eschenbach
December 1, 2014 7:38 pm

I guess you missed the recent comments in which Steven jumped the shark, even by his own high standard of cartilaginous fish vaulting.
If in your bubble he’s a great guy, who am I to pop it?

Reply to  1sky1
December 1, 2014 4:26 pm

Good grief, Willis, you have no idea what an actual “ad hominem attack” is. What I’m attacking is BEST’s patently tendentious algorithm and Mosher’s Orwellian defense of it here.

Reply to  1sky1
December 2, 2014 4:56 am

I’m not so sure Willis. Is it an ad hominem attack to say the IPCC are tailoring their reports for self preservation?
I do think the people at Berkeley Earth are genuinely trying to do better science but I also think they underestimate the problems with their scalpel approach and probably overestimate their ability to correctly process source data according to their stated rules.

Reply to  1sky1
December 2, 2014 4:31 pm

BEST’s deliberate fiction, which creates the illusion that more is known than is possible from available data, relies upon two unwarranted assumptions:
1) That a global regression of observed temperature upon latitude and elevation provides a realistic criterion for evaluating and adjusting actual station data throughout the globe. The claimed R^2 of ~0.8 simply doesn’t stand up, however, when the periodic seasonal component is removed to isolate the aperiodic climate signal, rendering the projections of the regression model unfit for the purpose.
2) That decade-scale “scalpeling” can preserve bona fide low-frequency (multidecadal) climate signal components, and that “kriging” can establish reliable estimates of entire time series where no measurements at all have been made. While Monte Carlo testing of “break-point” detection routines on AR(1) processes may not show a low-frequency bias, the power spectra of actual climate signals are very different from the monotonically decaying structure of such processes. Likewise, successful kriging is entirely dependent upon spatial homogeneity of temporal variation, which is seldom encountered in nature over scales greater than a few hundred miles. Yet BEST produces time series even in locations more than 1000 miles away from the nearest station.
What is Orwellian about Mosher’s defense of BEST’s methods is not just their justification, but the characterization of actual measurements as being “wrong.” And, of course, Muller has presented the “results” of BEST’s “findings” to Congress and the media as if they were purely the product of diligent analysis of hard empirical data, rather than of the over-reach of academic presumption.
Only someone sitting on a branch that he himself is sawing would pretend that my verifiable observations constitute ad hominem argumentation.
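1sky1's first objection can be illustrated with toy numbers (everything below is invented; this is not BEST's actual model or data): a latitude-plus-season regression scores a high R^2 on raw monthly temperatures simply because the shared seasonal cycle dominates the variance, yet it explains essentially none of the deseasonalized anomalies:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic monthly station data: a latitude-driven mean, a large
# shared seasonal cycle, and an aperiodic local signal (the part
# that actually matters for climate trends)
n_st, n_mo = 200, 120
lat = rng.uniform(25, 60, n_st)
month = np.arange(n_mo) % 12
temp = ((25 - 0.5 * lat)[:, None]
        + (10 * np.cos(2 * np.pi * month / 12))[None, :]
        + rng.normal(0, 2.0, (n_st, n_mo)))

def r2(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

# Regression "by season": month dummies plus latitude
X = np.column_stack([np.eye(12)[np.tile(month, n_st)], np.repeat(lat, n_mo)])

raw = r2(temp.ravel(), X)                      # seasonal cycle included
clim = np.stack([temp[:, month == m].mean(axis=1) for m in range(12)], axis=1)
anom = r2((temp - clim[:, month]).ravel(), X)  # seasonal cycle removed

print(f"R^2 raw: {raw:.2f}, R^2 anomalies: {anom:.2f}")
```

Whether BEST's actual regression degrades this way on real data is the empirical question; the toy only shows that a high raw R^2 by itself does not settle it.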

Reply to  1sky1
December 3, 2014 2:39 am

Willis writes “He is trying to discredit the ALGORITHM by attacking the supposed motives of its creators. This is an ad hominem attack, that is to say trying to throw doubt on scientific results by throwing doubt on the scientists involved.”
Am I not doing the same when I say the IPCC is tailoring their reports for self preservation?

Reply to  1sky1
December 3, 2014 4:08 pm

To attack personally one’s character or motives in order to distract attention away from the substantive ISSUES of a debate is indeed ad hominem argumentation. To observe the function of someone’s public stance RELATIVE to the issue is not. Nowhere in my critique of BEST’s methodology do I impute motive.

Reply to  1sky1
December 3, 2014 4:23 pm

It wasn’t an ad hominem attack, but a statement of fact.
I guess you missed this comment by Mosher & the subsequent exchange:
Steven Mosher
November 14, 2014 at 9:54 am
When the pause officially ends, folks will go back to some other nonsense to deny what they don’t need to deny:
CO2 warms the planet. The question is how much.
Now, skeptics who want to make an impact (like Nic Lewis) focus on the real question. Imagine what would happen if all skeptics learned from his example?
Instead they clown around denying basic physics. They clown around chasing the orbit of Jupiter.
They clown around complaining about anomalies and the colors of charts. Faced with clowns like this, Obama pulls out his phone and pen.
In short, some of the craziness spouted by fringe skeptics gets used to paint the whole tribe. And that picture gets used to justify executive action. By denying basic physics, fringe skeptics enabled the likes of Lewandowski. They give cover for an imperial president.
Pete Ross
November 14, 2014 at 10:46 am
This comment is beyond Orwellian, blaming sceptics for Obama’s craziness.

December 3, 2014 3:07 am

1sky1 writes “That decade-scale “scalpeling” can preserve bona fide low-frequency (multidecadal) climate signal components”
It seems to me that a fundamental assumption of climate science is that there can be no multidecadal-scale regional climate change, at least not without some other part of the planet compensating. Archaeology tells us regional climate change is real, and compensation at multidecadal scales is an assumption based on the naive view that energy can’t be retained or lost at different rates over time.