Buoy Temperatures, First Cut

Guest Post by Willis Eschenbach

As many folks know, I’m a fan of good, clear, detailed data. I’ve been eyeing the buoy data from the National Data Buoy Center (NDBC) for a while. This is the data collected by a large number of buoys moored offshore all around the coast of the US. I like it because it is unaffected by location changes, time-of-observation changes, or the Urban Heat Island effect, so there’s no need to “adjust” it. However, I haven’t had the patience to download and process it, because a preliminary investigation a while back revealed a number of problems with the dataset. Here’s a photo of the buoy nearest to where I live. I’ve often seen it when I’ve been commercial fishing off the coast here out of Bodega Bay or San Francisco … but that’s another story.

[Figure: the Bodega Bay buoy]

And here’s the location of the buoy; it’s the large yellow diamond at the upper left:

[Figure: Bodega Bay buoy location map]

The problems with the Bodega Bay buoy dataset, in no particular order (a sketch of the resulting cleanup follows the list):

• One file for each year.

• Duplicated lines in a number of the years.

• The number of variables changes in the middle of the dataset, in the middle of a year, adding a column to the record.

• Time units change from hours to hours-and-minutes in the middle of the dataset, adding yet another column to the record.
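
Here’s a minimal R sketch of the kind of cleanup this takes. To be clear, this is just the shape of it, not the working code in the zipped file linked at the end of the post, and the URL pattern, column names, and missing-value codes are my reading of the NDBC archive:

    station <- "46013"

    read_year <- function(yr) {
      # NDBC historical standard meteorological files, e.g. 46013h1990.txt.gz
      # (this URL pattern is my understanding of the archive layout)
      f <- sprintf("%sh%d.txt.gz", station, yr)
      u <- paste0("http://www.ndbc.noaa.gov/data/historical/stdmet/", f)
      ok <- try(download.file(u, f, quiet = TRUE), silent = TRUE)
      if (inherits(ok, "try-error") || ok != 0) return(NULL)  # year not on server
      df <- read.table(gzfile(f), header = TRUE, comment.char = "",
                       na.strings = c("999", "999.0"))  # 999.0 marks missing ATMP
      names(df)[1] <- "YYYY"               # raw header is "YY", "YYYY" or "#YY"
      if (max(df$YYYY) < 100) df$YYYY <- df$YYYY + 1900  # two-digit years early on
      if (!"mm" %in% names(df)) df$mm <- 0 # minutes column appears mid-dataset
      # keep a fixed set of columns so the changing record format can't bite us
      # (files from 2007 on also carry a units row that the real code must skip)
      df[, c("YYYY", "MM", "DD", "hh", "mm", "ATMP")]
    }

    raw <- do.call(rbind, lapply(1981:2014, read_year))  # adjust span as needed
    raw <- unique(raw)   # remove the duplicated lines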

But as the I Ching says, “Perseverance furthers.” I’ve finally been able to beat my way through all of the garbage and I’ve gotten a clean time series of the air temperatures at the Bodega Bay Buoy … here’s that record:

[Figure: air temperature record, Bodega Bay buoy]

Must be some of that global warming I’ve been hearing about …

Note that there are several gaps in the data. Missing months, by year:

Year    1986  1987  1988  1992  1997  1998  2002  2003  2011
Months     7     1     2     2     8     2     1     1     4
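
For the record, here’s roughly how those missing months can be counted, continuing from the cleaned-up “raw” data frame in the sketch above:

    # count calendar months with no air temperature data at all
    raw$yrmo <- sprintf("%04d-%02d", raw$YYYY, raw$MM)
    have     <- unique(raw$yrmo[!is.na(raw$ATMP)])
    all_mos  <- sprintf("%04d-%02d",
                        rep(min(raw$YYYY):max(raw$YYYY), each = 12), 1:12)
    missing  <- setdiff(all_mos, have)
    table(substr(missing, 1, 4))   # missing months per year, as tabulated above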

Now, after writing all of that, and putting it up in draft form and almost ready to hit the “Publish” button … I got to wondering if the Berkeley Earth folks used the buoy data. So I took a look, and to my surprise, they have data from no less than 145 of these buoys, including the Bodega Bay buoy … here is the Berkeley Earth Surface Temperature dataset for the Bodega Bay buoy:

[Figure: Berkeley Earth raw temperature record, Bodega Bay buoy]

Now, there are some oddities about this record … first, although it is superficially quite similar to my analysis, a closer look reveals a variety of differences. Could be my error, wouldn’t be the first time … or perhaps they didn’t do as diligent a job as I did of removing duplicates and such. I don’t know the answer.

Next, they list a number of monthly results as being “Quality Control Fail” … I fear I don’t understand that, for a couple of reasons. First, the underlying dataset is not monthly data, or even daily data. It is hourly data … so while the odd hourly record might be wrong, how could a whole month fail quality control? And second, the data is already checked and quality controlled by the NDBC. So what is the basis for the Berkeley Earth claim of multiple failures of quality control on a monthly basis?

Moving on, below is what they say is the appropriate way to adjust the data … let me start by saying, whaa?!? Why on earth would they think that this data needs adjusting? I can find no indication that there has been any change in how the observations are taken, or the like. I see no conceivable reason to adjust it … but nooo, here’s their brilliant plan:

[Figure: Berkeley Earth adjusted temperature record, Bodega Bay buoy]

As you can see, once they “adjust” the station for their so-called “Estimated Station Mean Bias”, instead of a gradual cooling, there’s no trend in the data at all … shocking, I know.

One other oddity. There is a gap in their records in 1986-7, as well as in 2011 (see above), but they didn’t indicate a “record gap” (green triangle) as they did elsewhere … why not?

To me, all of this indicates a real problem with the Berkeley Earth computer program used to “adjust” the buoy data … which I assume is the same program used to “adjust” the land stations. Perhaps one of the Berkeley Earth folks would be kind enough to explain all of this …

w.

AS ALWAYS: If you disagree with someone, please QUOTE THE EXACT WORDS YOU DISAGREE WITH. That way, we can all understand your objection.

R DATA AND CODE: In a zipped file here. I’ve provided the data as an R “save” file. The code contains the lines to download the individual data files, but they’re remarked out since I’ve provided the cleaned-up data in R format.

BODEGA BAY BUOY NDBC DATA: The main page for the Bodega Bay buoy, station number 46013, is here. See the “Historical Data” link at the bottom for the data.

NDBC DATA DESCRIPTION: The NDBC description file is here.

 

228 Comments
jim
November 28, 2014 11:28 pm

Perhaps Berkeley just fed the buoy data to their standard program, which treated it as if it were land data?

Truthseeker
Reply to  jim
November 28, 2014 11:32 pm

It’s data Jim, but not as we know it.

Reply to  Truthseeker
November 29, 2014 9:20 am

From Stewie: [embedded image]

george e. smith
Reply to  Willis Eschenbach
November 29, 2014 12:16 pm

So Willis, I noted in YOUR graph, it is specifically labeled “AIR” Temperature.
Seems to me that buoys are conveniently sitting on a lot of water. How convenient; so one could also measure the WATER temperature at say -1 metre, and record both water and air temps.
When John Christy et al. did this for about 20 years of dual data from some oceanic buoys, they found that (a) they aren’t the same; and (b) they aren’t correlated.
Why would they be, when air current speeds might be up to two orders of magnitude faster than water currents, so the two move relative to each other?
So why no water temps for Bodega Buoy ??
But you seem to have found another number mine to dig.

milodonharlani
Reply to  Willis Eschenbach
November 29, 2014 2:24 pm

Good work in exposing yet more shameless behavior by CACA scamsters.
Thanks!

george e. smith
Reply to  Willis Eschenbach
November 30, 2014 7:41 pm

Thanx Willis.
And yes I did notice that you warned us this was the first cut. I like the water scatter plot. It looks like it is heading off to the higher air temps at the same water temp, like a comet tail.

Reply to  jim
November 29, 2014 11:52 am

Here’s the NDBC platform accuracy page. Notice for marine air temperatures, the stated resolution is (+/-)0.1 C while the stated accuracy is (+/-)1.0 C. That’s for every single listed type of deployed buoy.
Those accuracies are not to be seen as statistical standard deviations. They do not represent normal distributions of random error (i.e., precision) and do not average away with repeated observations.
Honestly, it is so very refreshing to see such a forthright official declaration of temperature sensor accuracy in a climate science context. All honor to the NDBC staff, scientists, engineers, technicians and everyone else.
Notice, by the way, that the SST limit of accuracy is (+/-)1 C, as well.
But anyway, let’s track that accuracy through the preparation of an air temperature anomaly.
For creating an anomaly, the average temperature over a standard 30-year interval is taken, say 1951-1980 if you’re GISS. The accuracy of that 30-year mean is (+/-)sigma = sqrt[ sum(error^2)/(N-1) ] = ~(+/-)1 C, where N is the number of temperature measurements entering the average.
To find the anomaly, monthly or annual means are subtracted from the 30-year average. The accuracy of a monthly or annual mean is calculated the same way as the 30-year mean, and it works out to pretty much the same uncertainty: ~(+/-)1 C.
The annual temperature anomaly = [(annual mean) minus (30-year average)]. The accuracy of the anomaly is (+/-)sigma = sqrt[(annual accuracy)^2 + (30-year accuracy)^2] = sqrt[1^2 + 1^2] = sqrt[2] = (+/-)1.4 C.
There it is, the uncertainty in any buoy marine air temperature anomaly is (+/-)1.4 C. That should be the width of the error bars around every BEST, GISS, and UEA buoy marine air temperature anomaly.
Anyone see those error bars in the BEST representation?
In any field of physical science except climate science, error bars like that are standard. Such error bars put boundaries on what can be said, because they indicate what is actually known.
The (+/-)1.4 C is the 1-sigma uncertainty. Those error bars would obscure the entire average trend, leaving nothing to be said at all. At the 95% confidence interval, (+/-)2.8 C, pretty much the entire set of temperature anomalies would be submerged.
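(To make the propagation explicit, the same arithmetic in a few lines of R; a sketch of the calculation above, nothing more:)

    acc_annual <- 1.0   # +/- 1 C stated NDBC air temperature accuracy
    acc_normal <- 1.0   # the 30-year mean inherits essentially the same +/- 1 C
    sqrt(acc_annual^2 + acc_normal^2)       # 1.414..., i.e. +/- 1.4 C at 1 sigma
    2 * sqrt(acc_annual^2 + acc_normal^2)   # +/- 2.8 C at the ~95% level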
So it goes in climate science. The occulted is far more important than the displayed.

Reply to  Pat Frank
November 29, 2014 12:53 pm

A simple, clean, precise illustration of the general point RGBatDuke makes. Well done.
For the equivalent for sea level rise determined by Jason-2 (or by tide gauges), see essay Pseudo Precision in Blowing Smoke.

Paul mackey
Reply to  Pat Frank
December 1, 2014 1:39 am

Excellent. Good question – I have wondered why there are never any error bars. Climate Science or Art of Noise?

Zel202
November 28, 2014 11:29 pm

Quite a stable temp at that spot – always a wear-your-coat day.
What purpose does that buoy serve?
What depth is it moored at?
Is that yellow plate at the upper right corner a wind vane?
Does it measure water and air temp?

Kit
November 28, 2014 11:32 pm

Berkeley was correcting for UHI; after all, it’s only about 70 miles away, and that type of heat also travels upwind.
/sarc

Reply to  Kit
November 28, 2014 11:38 pm

To me, all of this indicates a real problem with the Berkeley Earth computer program used to “adjust” the buoy data … which I assume is the same program used to “adjust” the land stations.

If this assumption is correct then the sarc tag may not be required.

noaaprogrammer
Reply to  MCourtney
November 30, 2014 9:09 pm

They’re compensating for changes in elevation – the oceans are rising!

Peter Miller
November 28, 2014 11:35 pm

This once again helps illustrate the question: “Without raw data adjustments, homogenisation, manipulation or torturing, would there be any man made global warming/climate change?”
The answer is: “Maybe a little, but not enough to be of any concern, and certainly no reason for a massive switch from cheap reliable energy sources to expensive unreliable ones, as advocated by so many western leaders today.”
Anyhow, well spotted, but I doubt if the Berkeley Earth people will deign to provide you with an answer to your question on ‘Estimated Mean Station Bias’, and if they do, it will not make much sense.

Reply to  Peter Miller
November 29, 2014 1:17 am

I agree, human interference with data probably serves a plan.

lemiere jacques
Reply to  Peter Miller
November 29, 2014 5:03 am

Adjustments are OK as long as good reasons to do them are given, some kind of verification is made afterwards, and caveats are added for those who want to look at global means afterwards.

Ursus Augustus
November 28, 2014 11:35 pm

I just cracked up as I read this article. They fiddled the data and hey presto!!! the contrarian trend disappears.
Sub prime science in its basic form. Now you see some reality – now you don’t.
LOL I just love it.
Wait till the MSM catch on. (Don’t hold your breath – it could damage your health.)

Ursus Augustus
November 28, 2014 11:40 pm

PS
I am recommending the widespread use of the term “sub prime science” in reference to the sort of schlock we all are aware of. I think it captures the essence of CAGW perfectly in terms that everybody understands at a fairly visceral level.
It is not as deliberately vicious a term, say, as “denier”, but it nonetheless uses the same associative connotation that naturally resonates.
Can I recommend it to the blogosphere?

Alan Bates
Reply to  Ursus Augustus
November 29, 2014 1:25 am

There is already a term: Cargo Cult Science

Ursus Augustus
Reply to  Alan Bates
November 29, 2014 1:43 am

Just trying a bit of subtlety Alan.
“Cargo cult” is probably accurate, certainly when referring to the hard-core ‘team’ and the boondoggle beneficiaries, but it has overtones of utter ignorance that are comparable to “denier”.
A softer term may actually penetrate the mindset of the MSM, which is probably the best way to demolish the CAGW freakshow.

Tis Knobsdale
Reply to  Alan Bates
November 29, 2014 10:27 am

“As many folks know, I’m a fan of good clear detailed data.”
My mind was just blown … I TOO love clear detailed data!!! I didn’t know there were others out there … Wild.
I also love my fruit fresh, as opposed to a bit overripe.
Further, I like to be comfortable. I tend to prefer garments that offer up a fair bit of protection from the elements, without sacrificing much in the way of skin feel. But hey, I like to stay new age, know what I mean?

rogerknights
Reply to  Ursus Augustus
November 29, 2014 3:01 am

I suggest IGPOCC science.
(Get it?)

mwhite
Reply to  Ursus Augustus
November 29, 2014 4:02 am

“Sub Prime” will be understood by everyone.
As “Denier” is identified with the Nazis, “Sub Prime” will be identified with dodgy bankers.

Paul mackey
Reply to  mwhite
December 1, 2014 1:43 am

Bankers who shamelessly manipulate the data – LIBOR, Forex etc. Quite appropriate!

Coldish
Reply to  mwhite
December 1, 2014 2:45 am

Yes! ‘Subprime science’

sleepingbear dunes
Reply to  Ursus Augustus
November 29, 2014 4:12 am

Subprime science! Perfecto!

Quinn the Eskimo
Reply to  Ursus Augustus
November 29, 2014 7:02 am

Marc Morano has been calling it sub-prime science for quite a while now. It’s a good line. Great minds think alike, etc.

Ray Kuntz
Reply to  Ursus Augustus
November 29, 2014 8:14 am

Perfect terminology, I’m adopting for personal use. Thanks.

Sal Minella
Reply to  Ursus Augustus
November 29, 2014 9:35 am

As I understand it, sub-prime refers to loans made at a rate below the prime interest rate. That seems like a good thing to me as a borrower. Sub-par makes more sense, but both seem so weak. “Denier”, as a charge, has weight and an ignominious history, so I would suggest something with more impact to counter it.

Reply to  Sal Minella
November 29, 2014 2:34 pm

Sub-prime means that the borrower isn’t a very good risk and the loans are at a higher interest rate.
Of course that was before the QE’s and Fed interventions.

Sal Minella
Reply to  Ursus Augustus
November 29, 2014 9:40 am

How about “fluffer”?

Auto
Reply to  Ursus Augustus
November 29, 2014 12:16 pm

What about ‘Fraudster’?
Clear, and to the point.
Punchy – but may involve visits to local courts (of course, completely incorruptible and uninfluenced), so not recommended. A number of folk – Menn – may be a touch litigious …
Maybe SOPS – Sub Optimal Pseudo Science?
Auto

James Allison
Reply to  Ursus Augustus
November 29, 2014 12:39 pm

YES! Imagine the MSM press release: “Here is another example of Sub Prime climate science from [fill in the name].” LOL

Antagonista
Reply to  Ursus Augustus
November 29, 2014 12:49 pm

Can I recommend it to the blogosphere?
Certainly Ursus, I will be pleased to insert it into one of my inflammatory comments on The Guardian.

Ursus Augustus
Reply to  Ursus Augustus
November 29, 2014 2:11 pm

Thanks for the positive feedback. It just sounded so right I had to put it out there and if Marc Morano is onto it then I think we have lift off!

Jaakko Kateenkorva
Reply to  Ursus Augustus
November 29, 2014 9:40 pm

Ursus Augustus. Thank you for that idea. ‘Sub prime science’ fits like a glove.

Leonard Lane
Reply to  Ursus Augustus
November 29, 2014 10:25 pm

Why not be more explicit and call it “sub-standard science”?

Admin
November 28, 2014 11:41 pm

I had no idea they were temperature-monitoring buoys. I almost smacked into one once, blazing home at 30 knots after dark on my Sunseeker. I’d accidentally wandered to the edge of the channel, because I was a little tipsy after an evening in a pub in Cowes 🙂

Patrick
Reply to  Eric Worrall
November 29, 2014 2:59 am

You’re in Aus, right? You can be arrested for DUI on a boat.

Reply to  Patrick
November 29, 2014 9:42 am

Same here.

Admin
Reply to  Patrick
November 29, 2014 8:40 pm

Hey I was totally sober after I saw a buoy leap out of the dark and almost hit the boat 🙂

Bill_W
Reply to  Eric Worrall
November 29, 2014 5:40 am

Well, that explains one of the gaps in the data! Thanks. Berkeley Earth software just used that to adjust the data. 🙂

Claude Harvey
November 28, 2014 11:45 pm

“As you can see, once they “adjust” the station for their so-called “Estimated Station Mean Bias”, instead of a gradual cooling, there’s no trend in the data at all … shocking, I know.”
It’s a sophisticated statistical tool named “slice-and-dice”. When you get a trend line you just KNOW is wrong, you may slice-and-dice it into disconnected, horizontal lines with a note to ignore the “step functions”. If you insist on going further and REVERSING the bad trend, you may hold the graph up to a strong light and view it from the backside. My stock broker (a real whiz-bang) employs this technique when we review my portfolio performance.

Editor
November 28, 2014 11:52 pm

The mind boggles. As w says so correctly, “Why on earth would they think that this data needs adjusting?”. The regional average temperature is the (weighted) average of all the temperature measurements in the region. This buoy’s temperature is one of those temperature measurements. So the regional average temperature is derived from this buoy’s temperature. It is surely utterly illogical to adjust data using something that is derived from that data. To my mind, mathematically and scientifically you just can’t do that.

Reply to  Mike Jonas
November 29, 2014 12:53 am

Exactly, Mike. It’s ‘adjusting’, from the general dataset to one particular buoy.

Reply to  Mike Jonas
November 29, 2014 4:24 am

It’s a Peer Reviewed Recursive Adjustment of Temperatures or PRRAT where the “suspect” data is averaged into a set of other “pristine” stations within 1200km, which have the “correct” trend based on the current models. This procedure is repeated until the “problem” data no longer shows the troubling anomaly. It falls under “best practices” as all good climate “scientists” know that positive feedback is how climate works.

ferdberple
Reply to  nielszoo
November 29, 2014 8:10 am

Prat Reviewed Science.

Bill_W
Reply to  Mike Jonas
November 29, 2014 5:42 am

The software may treat the gap/break in the data, and the associated decrease in temperature after each gap, as a station move to a new location. Do we know that the buoy has not moved? Although in the ocean, if moved only a small distance it should not matter much, unless it gets moved in or out of a current with a different temperature.

E.M.Smith
Editor
Reply to  Bill_W
November 29, 2014 9:21 am

That whole Pacific coastal water is Darned Cold. All the time. It has not warmed up, based on my Mark I toes … I’ve swum in it on and off for a few decades. It’s awful cold all the time. Remember the arguing over folks not being able to survive a swim out of Alcatraz? That’s the warmer water in the S.F. Bay … It may well have cooled in the last decade. About a decade ago I stopped swimming in it. (Florida water is much nicer 😉)
So they could move that thing a few miles and it would read the same. Just don’t drag it to shore.

mpainter
November 29, 2014 12:06 am

Good work, Willis.
” Why on earth would they think that this data needs adjusting?”
####
Maybe Steven Mosher can explain that. He works a lot with their data, so he should be familiar with their procedures.

Stephen Richards
Reply to  mpainter
November 29, 2014 1:01 am

Steven is their defence counsel. He’ll be along soon.

The Ghost Of Big Jim Cooley
November 29, 2014 12:15 am

Some questions spring to mind:
Are these buoys dotted about the globe? Eric says above that he almost smacked into one in the Solent (off the coast of England).
Where is the raw data for ALL of them?
Has anyone compiled it into a chart?

dp
November 29, 2014 12:17 am

Gaps don’t affect a trend. Brrrr – getting colder.

November 29, 2014 12:20 am

As some further steps in the analysis, I would suggest that you try taking a look at the N. Pacific Ocean (PDO) temperatures and the local land station temperatures.
The ocean air temperature usually stays close to the ocean surface temperature. The ocean waters come down the coast from the Gulf of Alaska as part of the California current and the N. Pacific Gyre. The local buoy temperatures should follow the PDO/N. Pacific Ocean temperatures.
Over land, the minimum (nighttime) temperatures should follow the ocean temperatures as the ocean air moves inland. The daytime maximum temperatures indicate the solar heating produced by the convective mixing of the warm surface air at the station thermometer level.
The climate all the way across California is determined by the Pacific Ocean temperatures.
http://scienceandpublicpolicy.org/originals/pacific_decadal.html
Joe D’Aleo showed that the US average temperatures are mainly a combination of the AMO and PDO.
http://www.intellicast.com/Community/Content.aspx?a=127

Mike McMillan
Reply to  Roy Clark
November 29, 2014 12:33 am

Over land, the minimum (nighttime) temperatures should follow the ocean temperatures as the ocean air moves inland.
Doesn’t the land air head out to sea at night? Or are you referring to the general west to east flow?

Mike McMillan
November 29, 2014 12:27 am

Maybe they’re homogenizing the data with nearby (land) stations, a time-tested and honored practice that teases previously hidden warming from the raw data.

November 29, 2014 12:32 am

The problems with the Bodega Bay buoy dataset, in no particular order
Willis, your description of the buoy dataset sounds more like a log book than a dataset. Thoughts of “HARRY_READ_ME.txt” fill my mind along with my own experiences in both the financial and Network Management industries. It is difficult to take an alleged ‘climate crisis’ seriously when the basic data is collected, manipulated and archived in such a haphazard manner.
In the financial community we would back up after each run and daily send mag tape to be put on microfiche, which would be diligently verified, as we were constantly subjected to serious outside audits.
The idea that this alleged ‘climate crisis’ is still based on Keystone Cop investigative competence after several decades tells us the actual importance of this ‘climate crisis’.
Geez … why the heck didn’t all the US government climate-related agencies outsource all data collection, archiving and data distribution more than ten years ago to IBM or some other entity that knows what data management is all about?

CodeTech
Reply to  Paul in Sweden
November 29, 2014 4:52 am

Without a doubt, it’s because anyone who actually knows how to manage raw data would not find what they wanted to be found. That’s a large number of people; I’m under the impression the climate “scientists” had to search far and wide to find people so ignorant of data management and proper math and statistics that they could find warming in the last 3 decades.

Leonard Lane
Reply to  CodeTech
November 29, 2014 10:35 pm

They do not have to be ignorant of data management. They can be crooks and liars as well.

November 29, 2014 12:40 am

Next, they list a number of monthly results as being “Quality Control Fail” … I fear I don’t understand that, for a couple of reasons. First, the underlying dataset is not monthly data, or even daily data. It is hourly data … so while the odd hourly record might be wrong, how could a whole month fail quality control?

They explain it further down on the same page:

Quality Control Summary:
Months missing 10 or more days: 26
Serially repeated daily or monthly values: 16
Extreme local outliers: 0
Regional climatology outliers: 73

Whether these quality requirements can be justified is another question.
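(The first criterion, at least, is easy to check against the hourly record. A sketch in R, assuming the cleaned-up “raw” data frame from the head post:)

    # days in a given month: first day of the next month, minus one day
    dim_month <- function(y, m)
      as.integer(format(as.Date(sprintf("%d-%02d-01",
                                        y + (m == 12), m %% 12 + 1)) - 1, "%d"))

    # distinct days actually present in each year-month of the hourly record
    days_present <- aggregate(DD ~ YYYY + MM, data = raw,
                              FUN = function(d) length(unique(d)))
    days_present$absent <- mapply(dim_month, days_present$YYYY, days_present$MM) -
                           days_present$DD
    sum(days_present$absent >= 10)  # months failing the "10 or more days" test
                                    # (wholly empty months would need adding in)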
/Jan

Reply to  Willis Eschenbach
November 29, 2014 1:12 am

For one thing, I couldn’t figure out the meaning of the repeated “daily or monthly values”, when the data is neither monthly nor daily, but hourly … add it to the many mysteries.

The explanation is that they use the same wording for all stations, and most stations seem to have daily or monthly values. They should have written “hourly, daily or monthly values” to cover all situations.
Concerning the other datasets in the region, I think they use the same methodology everywhere. They say:

Regional filter: For each record, the 21 nearest neighbors having at least 5 years of record were located. These were used to estimate a normal pattern of seasonal climate variation. After adjusting for changes in latitude and altitude, each record was compared to its local normal pattern and 99.9% outliers were flagged. Simultaneously, a test was conducted to detect long runs of data that had apparently been miscoded as Fahrenheit when reporting Celsius. Such values, which might include entire records, would be expected to match regional norms after the appropriate unit conversion but not before

http://berkeleyearth.org/about-data-set
I suppose they have to go quite far to find the nearest 21 stations to this one, though.
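(The Fahrenheit test they describe is simple enough to sketch in R. The names “monthly”, for a station’s monthly means, and “normal”, for the regional norms for the same months, are hypothetical inputs for illustration:)

    # is a run of station values more consistent with the regional norm after
    # conversion from Fahrenheit than before? (per the BEST description above)
    f_miscoded <- function(monthly, normal) {
      as_c <- (monthly - 32) * 5 / 9   # the values re-read as Fahrenheit
      mean(abs(as_c - normal), na.rm = TRUE) <
        mean(abs(monthly - normal), na.rm = TRUE)
    }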
/Jan

Jit
Reply to  Jan Kjetil Andersen
November 29, 2014 1:55 am

If the nearest neighbours are all or mostly on the land side, it wouldn’t be so surprising that most of the buoy data looks like outliers. I dunno what they do, but you can’t interpolate over a discontinuity like a shoreline.

Bernd Palmer
Reply to  Jan Kjetil Andersen
November 29, 2014 4:48 am

“Changes in latitude and altitude”, “miscoded as Fahrenheit when reporting Celsius”?? I don’t see how this could apply to a fixed buoy where the temps are recorded electronically. Must be boilerplate text.
Where is Mosher when you need him?

Reply to  Jan Kjetil Andersen
November 29, 2014 10:10 am

I believe Jimmy said “Changes in Latitudes, Changes in Attitudes”.
Not sure how Best messed that up.
It’s a good song.

Reply to  Jan Kjetil Andersen
November 29, 2014 1:04 pm

Jan and Willis, see Bill Illis below and my comment thereto. In the case of station 166900, they go at least 1300 km horizontally and 2300 meters vertically.

gary turner
Reply to  Jan Kjetil Andersen
November 30, 2014 10:57 am

I suppose the health department, using climate sub-prime scientific methods, will need to adjust the temp of the walk-in freezer to bring it more in line with the kitchen and dining room temps. “Sorry, your freezer’s adjusted and homogenized temperature doesn’t meet code requirements. We’re shutting you down.”
That may seem a stretch, but that’s what the climate pseudo-scientists do when they make comparisons and adjustments across boundaries or differing environments.

knr
November 29, 2014 1:10 am

‘To me, all of this indicates a real problem with the Berkeley Earth computer program’
One person’s problem is another’s opportunity. Now work out how such ‘adjustments’ give an ‘opportunity’, and to whom, and you have got there.

Auto
Reply to  knr
November 29, 2014 12:24 pm

Cui bono?
Auto

Capell
November 29, 2014 1:12 am

Paul in Sweden makes a very good point.
I’ve just completed and published a study of wind speeds (and thus power generation) for the UK and northern Europe spanning the years 2005-13:
http://www.adamsmith.org/wp-content/uploads/2014/10/Assessment7.pdf
Where did I get the data for this? The UK MET Office? (No, they charge – a great deal!) I got it from aviation METAR reports – I just happen to know about these because I had a PPL.
By the way, the results for wind generation variability and intermittency make alarming reading.

The Ghost Of Big Jim Cooley
Reply to  Capell
November 29, 2014 1:49 am

Alarming, as…?

tonyb
Editor
Reply to  Capell
November 29, 2014 2:04 am

Capell
Tell us something we don’t know. Here on the South Coast of England yesterday it was blowing a gale; today there is no wind. I would surmise that yesterday any wind turbines would have had to shut down, and today there’s no wind to power them. Variability and intermittency in action (or non-action), which becomes ever more serious as wind forms an increasingly large percentage of the UK’s energy supply.
tonyb

Capell
Reply to  tonyb
November 29, 2014 4:50 am

Ghost and tonyb
Dipping into the summary of my paper. For the UK, the model reveals that power output has the following pattern over a year:
i. Power exceeds 90% of available power for only 17 hours
ii. Power exceeds 80% of available power for 163 hours
iii. Power is below 20% of available power for 3,448 hours (20 weeks)
iv. Power is below 10% of available power for 1,519 hours (9 weeks)
Although it is claimed that the wind is always blowing somewhere in the UK, the model reveals this ‘guaranteed’ output is only sufficient to generate something under 2% of nominal output. The most common power output of this 10 GW model wind fleet is approximately 800 MW. The probability that the wind fleet will produce full output is vanishingly small.
Long gaps in significant wind production occur in all seasons. Each winter of the study shows prolonged spells of low wind generation, which will have to be covered either by significant energy storage (equivalent to building at least 15 plants the size of Dinorwig) or by maintaining fossil plant as reserve.
And for the European fleet: unifying all three fleets by installation of European interconnectors does little or nothing to mitigate the intermittency of these wind fleets. For the combined system, which has an available power output of 48.8 GW:
• Power exceeds 90% of available power for 4 hours per annum,
• Power exceeds 80% of available power for 65 hours per annum,
• Power is below 20% of available power for 4,596 hours (27 weeks) per annum,
• Power is below 10% of available power for 2,164 hours (13 weeks) per annum.
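(For anyone who wants to check figures like these, the duration statistics are straightforward once you have an hourly output series. A sketch in R, where “power” and “avail” are hypothetical inputs, not data from the paper:)

    # duration statistics for a wind fleet: hours per year in each output band
    duration_stats <- function(power, avail) {
      frac <- power / avail   # hourly output as a fraction of available power
      c(above_90 = sum(frac > 0.9, na.rm = TRUE),
        above_80 = sum(frac > 0.8, na.rm = TRUE),
        below_20 = sum(frac < 0.2, na.rm = TRUE),
        below_10 = sum(frac < 0.1, na.rm = TRUE))
    }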

Reply to  Capell
November 29, 2014 5:32 pm

I would be very interested in your power generation study. The pdf link didn’t work for me – I have crappy internet (satellite, in Vermont, with snow). Is the link good? Sorry, too, that I’ve been away from a machine for the day, so I’m late into this.
Our runaway governor is obsessed with renewables – even though Vermont is 98% carbon-emission clean. I realize that doesn’t matter, except that constructing whirligigs produces 250–750 tons of CO2 from the concrete/rebar bases through the steel posts, we have legal statutes prohibiting generating CO2, and I’ll happily use their stupidity against them.
I know I’m tilting here. I care about your study, but I know they won’t. I still prefer knowing.
Thanks,
Jim

Editor
November 29, 2014 1:14 am

It’s curious that Berkeley Earth included Marine Air Temperature data from buoys in a land surface air temperature dataset. I’ll second the “to my surprise”.

Reply to  Bob Tisdale
November 29, 2014 1:10 pm

Bob, they may think that near shore is ‘close enough’. That gets into the interesting RUTI project issues Frank Lansner in Europe has been exploring. Similar to your ocean/ENSO investigations in several ways. Highly recommended reading for all.

Parakoch
November 29, 2014 1:26 am

Both my eyeballs say that most of the “Quality Control Fails” in the Berkeley Earth Surface Temperature dataset for the Bodega Bay buoy are below the trend line – fancy that, who would have thought it?

November 29, 2014 1:31 am

Dear Willis,
There’s some information about the measurement history of the buoy here:
http://www.ndbc.noaa.gov/data_availability/data_avail.php?station=46013
In the left-hand column is some information (not a lot, sadly) about the buoy itself. The somewhat cryptic notation says something about the type of deployment. 10D, 6N and 3D are, I think, designations for 10m Discus buoy, 6m NOMAD buoy and 3m Discus buoy.
http://www.ndbc.noaa.gov/mooredbuoy.shtml
I don’t know what effect that would have on the air temperature measurements, but this NDBC page suggests that there would have been a change in measurement height associated with the switch from 10m to 6m/3m:
http://www.ndbc.noaa.gov/bht.shtml
GSBP and VEEP are the sensor packages. Again there are some changes there:
http://www.ndbc.noaa.gov/rsa.shtml
Best regards,
John

Sera
November 29, 2014 1:53 am

“Explosive hydrogen gas can accumulate inside the hull of 3-meter-discus buoys.
This dangerous gas is caused by batteries corroding due to water intrusion. While a remedial plan is being developed, mariners are asked to give this, and all other 3-meter-discus buoys, a wide berth. The buoys are 3-meter discus shaped, typically with a yellow hull and a 5-meter tripod mast. Each buoy is identified by the letters “NOAA” and the station identifier number, such as “46050”. Each buoy has a group of (4) flashing 20-second, yellow lights.”
http://www.ndbc.noaa.gov/station_page.php?station=46013
Maybe they adjusted for the hydrogen gas?
/sarc

Sera
Reply to  Sera
November 29, 2014 2:00 am

Anyway, the USCG buoy tenders are responsible for the maintenance, so that could account for some of the missing data (I know that they repainted it back in 2010).

Mike Ozanne
Reply to  Sera
November 29, 2014 2:01 am

Perhaps the lift from the hydrogen is being interpreted as “Sea Level Rise” … 😛

November 29, 2014 2:24 am

Clearly, as the data does not reveal Global Warming and, worse than that, shows actual Global Cooling, it absolutely has to be adjusted with the usual algorithms. If this problem continues, then we may well see the buoys being sunk by Naval Gunfire. Having such actual data available is completely contrary to the consensus.

November 29, 2014 2:27 am

Willis,
It could be that you are seeing the Berkeley scalpel in action. Where they detect a discontinuity, they treat the segments as separate stations. And the marked discontinuities are substantial. Why the other breaks did not invoke the scalpel, I don’t know.

Patrick
Reply to  Nick Stokes
November 29, 2014 4:16 am

Nick Stokes: “… I don’t know.” WOW Nick! If only other “experts” had the same level of integrity and honesty. I’d buy you a VB (if that is your tipple).

Reply to  Patrick
November 29, 2014 10:32 am

What you are suggesting is that the adjustments are algorithm-based, not human-error-recognized.
More of my Computational Reality instead of Representation Reality.

Reply to  Patrick
November 29, 2014 6:22 pm

Doug Proctor
Algorithm based like the NASA and other reconstructions that show a record of continually warming the cold years 100 and 35 years ago?

ferdberple
Reply to  Nick Stokes
November 29, 2014 8:20 am

they treat as separate stations
============
and as a result deliver a misleading result. So many methods sound so good in theory, but fail utterly in practice.

Dave in Canmore
Reply to  ferdberple
November 29, 2014 8:39 am

Which makes me wonder why an algorithm is needed at all. Seems a better process would be to pick GOOD stations, not torture ALL stations. It seems self-evident to me, but you don’t get to use your fancy education, I guess!

Reply to  Nick Stokes
November 29, 2014 1:20 pm

Nick, this example by itself demonstrates two things. First, the BEST scalpel technique is inconsistently applied, as you point out. Second, the underlying ‘station move’ assumption can be faulty, as it appears this buoy has been there all along at the same place. Dr. Marohasy was able to prove the same faulty justification for the Australian BOM homogenization of rural station Rutherglen, which turned a flat-to-declining record into marked warming post homogenization. For details, follow the footnote hyperlinks to the Rutherglen example in essay When Data Isn’t in Blowing Smoke. As you are from down under, you probably are already aware of this analogous kerfuffle. Perhaps many posting here are not.

Nick Stokes
Reply to  Rud Istvan
November 29, 2014 4:47 pm

Rud, it isn’t a station-move assumption. It isn’t any kind of assumption. The assumption would be that the measuring conditions (instruments etc.) are the same after the break as before; that’s usually true, but I think discarding it leads to loss of information, and discarding is what the scalpel does.
As you’ve observed, I live not so far from Rutherglen. I think BoM’s treatment of that is OK.
