Berkeley Earth finally makes peer review – in a never-before-seen journal

After almost two years and some false starts, BEST now has one paper that has finally passed peer review. The text below is from the email release sent late Saturday. According to their July 8th draft last year, the paper was previously submitted to JGR Atmospheres, but it appears to have been rejected, as they now indicate it has been published in Geoinformatics and Geostatistics, a journal I had not heard of until now.

(Added note: commenter Michael D. Smith points out it is Volume 1, Issue 1, so this appears to be a brand-new journal. Also troubling, on the GIGS journal home page, the link to the PDF of their Journal Flier gives only a single page, the cover art. Download Journal Flier. With so little description front and center, one wonders how good this journal is.)

Also notable, Dr. Judith Curry’s name is not on this paper, though she gets a mention in the acknowledgements (along with Mosher and Zeke). I have not done any detailed analysis yet of this paper, as this is simply an announcement of its existence. – Anthony

===============================================================

Berkeley Earth has today released a new set of materials, including gridded and more recent data, new analysis in the form of a series of short “memos”, and new and updated video animations of global warming. We are also pleased that the Berkeley Earth results paper, “A New Estimate of the Average Earth Surface Land Temperature Spanning 1753 to 2011”, has now been published by GIGS and is publicly available here: http://berkeleyearth.org/papers/.

The data update includes more recent data (through August 2012), gridded data, and data for States and Provinces.  You can access the data here: http://berkeleyearth.org/data/.

The set of memos includes:

  • Two analyses of Hansen’s recent paper “Perception of Climate Change”
  • A comparison of Berkeley Earth, NASA GISS, and Hadley CRU averaging techniques on ideal synthetic data
  • A visualization of Berkeley Earth, NASA GISS, and Hadley CRU averaging techniques

They are available here: http://berkeleyearth.org/available-resources/

==============================================================

A New Estimate of the Average Earth Surface Land Temperature Spanning 1753 to 2011

Abstract

We report an estimate of the Earth’s average land surface temperature for the period 1753 to 2011. To address issues of potential station selection bias, we used a larger sampling of stations than had prior studies. For the period post-1880, our estimate is similar to those previously reported by other groups, although we report smaller uncertainties. The land temperature rise from the 1950s decade to the 2000s decade is 0.90 ± 0.05°C (95% confidence). Both maximum and minimum daily temperatures have increased during the last century. Diurnal variations decreased from 1900 to 1987, and then increased; this increase is significant but not understood. The period of 1753 to 1850 is marked by sudden drops in land surface temperature that are coincident with known volcanism; the response function is approximately 1.5 ± 0.5°C per 100 Tg of atmospheric sulfate. This volcanism, combined with a simple proxy for anthropogenic effects (logarithm of the CO2 concentration), reproduces much of the variation in the land surface temperature record; the fit is not improved by the addition of a solar forcing term. Thus, for this very simple model, solar forcing does not appear to contribute to the observed global warming of the past 250 years; the entire change can be modeled by a sum of volcanism and a single anthropogenic proxy. The residual variations include interannual and multi-decadal variability very similar to that of the Atlantic Multidecadal Oscillation (AMO).

Full paper here: http://www.scitechnol.com/GIGS/GIGS-1-101.pdf
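For readers who want to see the shape of the fit the abstract describes, here is a minimal sketch, assuming made-up annual series for land temperature, CO2 concentration, and volcanic sulfate loading (none of these numbers are Berkeley Earth data; the example only illustrates fitting a log(CO2) term plus a volcanic term by least squares):

```python
import numpy as np

# Made-up illustrative inputs (NOT the Berkeley Earth data): annual land
# temperature anomaly (degC), CO2 concentration (ppm), stratospheric
# sulfate loading (Tg).
years = np.arange(1850, 2012)
co2 = 285.0 + 0.000012 * (years - 1850) ** 3          # toy CO2 growth curve
sulfate = np.zeros(years.size)
sulfate[years == 1883] = 50.0                          # a Krakatau-like spike
sulfate[years == 1991] = 30.0                          # a Pinatubo-like spike
rng = np.random.default_rng(0)
temp = 0.9 * np.log(co2 / 285.0) - 0.015 * sulfate + rng.normal(0.0, 0.1, years.size)

# Fit T = a*log(CO2) + b*sulfate + c by ordinary least squares; in the paper's
# terms, b plays the role of the quoted volcanic response function and the
# residual is what gets compared against the AMO (here it is noise by construction).
X = np.column_stack([np.log(co2), sulfate, np.ones(years.size)])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
residual = temp - X @ coef
print("fitted (a, b, c):", np.round(coef, 3))
print("residual standard deviation (degC):", round(residual.std(), 3))
```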

247 Comments
January 20, 2013 12:30 pm

errata
“. It says to believe who believe in AGW– yup, this data is consistent with the theory. Nothing more.”
. It says to THOSE who believe in AGW– yup, this data is consistent with the theory. Nothing more.

January 20, 2013 12:34 pm

Steven Mosher says:
January 19, 2013 at 10:12 pm
Steve, there is no doubt that some of the data is ‘bad’. I don’t have a problem with that, it is perfectly normal in science to ignore bad data if it can be shown to be bad/suspect, etc.
What is NOT normal is to include/exclude data, kind of, at will – to suit one’s agenda or anticipated findings. To avoid accusation of this, you must provide the full monty of data, used/unused/adjusted, etc. – do you not agree?
###############
Yes. If you go to the data portion you will find:
A) a link to all sources;
B) a datafile containing all the source data reformatted to our format (multi-series data);
C) a datafile after the removal of duplicate stations;
D) a datafile prior to QA;
E) a datafile after QA.
The data that I hope to have up in due course is the scalpeled data, that is, showing where all the cuts are made. In due course the data for every station will be online, with charts showing where the scalpel was applied and why. Or you can download the code and see for yourself. This is what I did with GISS, for example.
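For readers unfamiliar with the “scalpel”, the idea described in BEST’s methods material is to cut a station record at documented moves or detected discontinuities and let the averaging treat each piece as a separate record, rather than adjusting the data. A minimal sketch of that kind of split, with hypothetical break dates (illustrative only, not the actual Berkeley Earth code):

```python
import numpy as np

def scalpel_split(times, temps, breakpoints):
    """Cut one station record into independent segments at the given break
    times (e.g. documented station moves or detected step changes)."""
    times = np.asarray(times, dtype=float)
    temps = np.asarray(temps, dtype=float)
    edges = [times.min()] + sorted(breakpoints) + [times.max() + 1.0]
    segments = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (times >= lo) & (times < hi)
        if mask.any():
            segments.append((times[mask], temps[mask]))
    return segments

# Example: a 40-year monthly record cut at two hypothetical station moves.
t = np.arange(1970, 2010, 1.0 / 12.0)
series = 0.02 * (t - 1970) + np.sin(2.0 * np.pi * t)   # toy trend + seasonal cycle
pieces = scalpel_split(t, series, breakpoints=[1985.0, 1998.5])
print("segment lengths:", [len(seg[0]) for seg in pieces])
```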

January 20, 2013 12:36 pm

“There is no indication in the article itself (note: It is the ONLY article in Volume 1 Issue 1) that it has been peer-reviewed. No “Thanks to Reviewer #1…” no notation of peer-review, no nothing in the published piece.”
The article was reviewed by three anonymous reviewers.
do you believe we landed on the moon?

January 20, 2013 12:38 pm

DirkH says:
January 20, 2013 at 9:15 am
And I’d like to direct attention again to this razor sharp demolition of the BEST “scalpel” method.
##############
The scalpel method (as endorsed by Willis) works. See the AGU poster for a double-blind test of the method.

mike ozanne
January 20, 2013 12:41 pm

In the dim and distant past of the UK we had a “glam rock” group called Slade. In the natural course of events, they toured America, following the ever-present UK pop group dream of ‘breaking the States’. During the course of the tour Noddy Holder, the lead singer, helped two women escape a fire that broke out at their hotel. This led the rest of the group to remark that his record with the fair sex must be pretty poor if he was reduced to setting fire to hotels and then offering to “rescue” anything attractive that was milling around in the chaos. In a similar vein, it seems that BEST’s luck with peer review is so poor they are having to start a journal………

January 20, 2013 12:41 pm

“They didn’t and they can not. In statistics if you had thirty thermometers reading simultaneously the same area at the same time you could do the statistics on the data to get a better estimate of the true value and get better precision than you had with one reading.”
Well, this is wrong. You can see the methods memo, or you can take some stats from Brillinger.
The temperature field is predicted to ±1.6°C with a nugget of 0.46°C on monthly temps.

January 20, 2013 12:46 pm

“Just because a journal has published bad papers doesn’t seem to matter much. We’ve seen the dreck published in Nature and Science, splashed on the front cover, no less.”
There was a time when WUWT stood against the nonsense of worshipping peer review. Partly because there were papers with no code and no data and you could not check for yourself. Also, because folks here were well aware of the politics involved in peer review. You all saw what happened to O’Donnell’s paper when one reviewer was determined to hold it up. Ask yourself…
Do you think it was a skeptical reviewer who objected to us taking the record back beyond 1850? Really?

January 20, 2013 12:51 pm

jim2 says:
January 20, 2013 at 5:33 am
It appears, at least, that Mosher is carrying a lot of water for these guys but produces little in the way of links to the “raw” data, or, in other cases, anything to back up his various assertions.
Data:
http://berkeleyearth.org/data/
Code
http://berkeleyearth.org/our-code/
Memos
http://berkeleyearth.org/available-resources/
On Hansens claims about extremes
http://berkeleyearth.org/pdf/hausfather-hansen-memo.pdf
http://berkeleyearth.org/pdf/wickenburg-hansen-memo.pdf
Methods comparisons
http://berkeleyearth.org/pdf/robert-rohde-memo.pdf
( for the non math types )
http://berkeleyearth.org/pdf/visualizing-the-average-robert-rohde.pdf
Scalpel
http://berkeleyearth.org/images/agu-2012-poster.png

Don Monfort
January 20, 2013 12:56 pm

“The article was reviewed by three anonymous reviewers.
do you believe we landed on the moon?”
The moon landing thing is wearing thin. The so-called journal, GIGS, has zero credibility. And you know it.

January 20, 2013 12:57 pm

Jimbo says:
January 20, 2013 at 7:52 am
Why didn’t they publish with some of the well-known journals? Imagine if Anthony Watts published his new paper in this journal? There would be howls of protest and much gnashing of teeth.
#############
howls of protest?
Not from me. In fact, I would suggest that Anthony submit his work to this journal. The reviewers are knowledgeable and helpful, there is no paywall, and the turnaround time was very good.
As long as people provide data and code, you can do what you should do ANYWAY: check for yourself. For example, Ross did a paper on UHI. It passed peer review. Nobody caught the data errors, but one day I looked at his data and, blam, data error.
But if I read a paper published in Nature that doesn’t have the data and code, then I’m not even considering it.

January 20, 2013 1:03 pm

“The supplementary pdf isn’t all that helpful either. I think they will need to produce very detailed methodology, e.g. how they have dealt with station dropouts and outliers – with actual demonstrated ‘outlier’ procedures, code, etc; along with the datasets, and if necessary all the little ‘notes’ describing how/why stuff was adjusted – UHI anyone?. As it stands, I am suspicious and I am sure Anthony will remain so until proven otherwise.”
The detailed methodology is here
http://berkeleyearth.org/pdf/methods-paper.pdf
and here
http://berkeleyearth.org/pdf/methods-paper-supplement
the code is here
http://berkeleyearth.org/our-code/
The easiest way to understand how the method works is to look at this
http://berkeleyearth.org/pdf/visualizing-the-average-robert-rohde.pdf
Then if you want to see which method is best, you can look at how various methods perform using synthetic data. This is the first time various averaging methods have been evaluated using a ground truth dataset
http://berkeleyearth.org/pdf/robert-rohde-memo.pdf
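The general idea behind a ground-truth test is simple: build a synthetic field whose true mean is known exactly, sample it at “station” locations, and see how close each averaging method comes to the truth. A minimal sketch of that concept (this is not the procedure in the Rohde memo, just an illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "truth": a latitude-dependent temperature field on a coarse grid.
lats = np.linspace(-89.5, 89.5, 36)
field = 30.0 * np.cos(np.deg2rad(lats))[:, None] + rng.normal(0.0, 2.0, (36, 72))
area_w = np.cos(np.deg2rad(lats))[:, None] * np.ones((36, 72))
true_mean = np.average(field, weights=area_w)

# A sparse "station network" biased toward mid-northern latitudes.
p = np.exp(-((lats - 45.0) / 30.0) ** 2)
i = rng.choice(36, size=300, p=p / p.sum())
j = rng.choice(72, size=300)

# Method 1: naive mean of stations; Method 2: area-weighted mean of the
# sampled cells. Comparing both against the known truth is the test.
naive = field[i, j].mean()
weighted = np.average(field[i, j], weights=np.cos(np.deg2rad(lats[i])))
print(f"truth {true_mean:.2f}  naive {naive:.2f}  area-weighted {weighted:.2f}")
```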

Kev-in-Uk
January 20, 2013 1:12 pm

Steven Mosher says:
January 20, 2013 at 12:46 pm
You’re right – peer review is what it is, and varies according to journal/purpose, etc. In this case, obviously, the peer review needs to be done by folks of the specialist statistical-analysis and computing variety, as well as bog-standard climate boys.
I am aware of the availability of the BEST data, but sadly, being only computer literate from the age of Fortran IV, and basically a software ‘user’ since, I am not familiar with the code used to make the analysis. I don’t doubt that if I were unemployed I could sit and learn it – but it’s out of reach from a time perspective at the moment! So, in terms of peer review, I rely on someone who has those skills at hand to do the review for me, so to speak. In practice, that, of course, is the actual purpose of peer review – it means the data and methods are checked (properly), so that others don’t have to. The beef with the AGW-meme peer review process is that it is clearly mostly pal review, and certainly without much public availability of data – just ask Mann for his!
To return to my earlier post regarding the datasets – BEST uses others’ data, and as far as I can tell this is not the raw data? So, as far as I can deduce, BEST have taken averaged data, from homogenised and adjusted datasets, and re-averaged it with some spatially weighted formula, etc. So, we are basically saying that BEST used potentially flawed data, and averaged it? – and everyone wonders why it still shows the same trend, without proper UHI adjustment, etc…
So – let me make a reasonable presumption. The BEST dataset is simply a better average/combination of other ‘source’ datasets in a combined fashion, and has nothing to do with special checking and quality control of each and every dataset and their subsets themselves? Is that a fair presumption, is that right? If so, IIRC, that was not the initial intention of the BEST project – in fact, I’m sure I recall them saying they wanted to sort out the UHI issue, etc. (but I may be wrong, or perhaps they simply moved the objective after the initial PR?)

Auto
January 20, 2013 1:20 pm

From the paper in question [page 6]: –
“Many of the changes in land-surface temperature follow a
simple linear combination of volcanic forcing (based on estimates
of stratospheric sulfate injection) and an anthropogenic term
represented here by the logarithm of the CO2 concentration.”
Yet their own Figure One shows their ‘Land Average’ falling from 1800 or so to a low at about 1815-1818 [by eye – as noted somewhere above, the resolution of the figures is not good in the .pdf, even at about 300%].
The ‘Land Average’ also seems to have been falling sharply from about 1775 to a low at about 1786 – again by eye.
Yet these lows seem to be attributed to volcanism – page 4 of the paper: –
“Most of the eruptions with significant stratospheric sulfate
emissions can be qualitatively associated with periods of low
temperature. For most of these events, the volcano and year of
emission is historically known. These include the eruptions of Laki in
1783, Tambora in 1815, and Cosiguina in 1835. The sulfate spike at
1809 does not have an associated historical event, and it might consist
of two separate events [22,23]. The famous explosion of Krakatau in
1883 is smaller (measured in sulfates) than these other events.”
So it looks as if – on their figures – temperature is a very good leading indicator for volcanism.
[As well as hemlines? Or do hemlines lead?]
And their figures – from pages 2 and 3: –
“To perform the average, the surface of the Earth was divided
into 15,984 elements of equal area and weighted by the percentage
of land at each spot; 5326 of them had >10% land. For each month,
Berkeley Average creates an estimated temperature field for the entire
land surface of the Earth using Kriging to interpolate the available
temperature data.”
So almost two thirds of the globe was not modelled.
Areas with slightly over 10% land [of the 31,900 sq km – or 12,300 square miles – or so of each ‘element’] seem to be treated as equal to those of 100% land.
And the average is taken in monthly chunks.
They candidly say their results are based on a handful of stations – page 3
“The Berkeley Average procedure
allows us to use the sparse network of observations from the very
longest monitoring stations (10, 25, 46, 101, and 186 sites in the years
1755, 1775, 1800, 1825, and 1850 respectively) to place limited bounds
on the yearly average.” (so even in 1850, each station must act as proxy, proportionately, for over a million square miles), but their Figure 1 seems to show changes of global temperature – up and down – of two degrees or so [C] in a decade [again by eye] in the Eighteenth Century, and large parts of a full degree even into the 1870s – in a decade or so, and again – up and down.
The 1930s seem to have – pretty much – vanished.
The maths – scalpel effects and so on – seems to have been debunked already
[DirkH says:
January 20, 2013 at 9:15 am
Thanks, Dirk].
I won’t comment further.
It just seems to be – how can I phrase this politely? – the sort of paper that a journal of this sort – brand new, seeking to establish itself, and none too careful about its sphere of interest – would reasonably be guessed to publish early in its existence.
And, anyway – hasn’t the temperature ‘now’ been seen as the same as the temperature 13, 15 or more years ago? There has been variation – ‘weather’ we call it here in snowy England – but despite the heady trigonometry of The Team, x = 0.
Auto
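The gridding passage Auto quotes (15,984 equal-area cells, cells with more than 10% land kept, each weighted by its land fraction) can be illustrated with a short sketch, assuming made-up land fractions and cell temperatures (in the real analysis the per-cell temperatures come from the Kriged field):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical equal-area cells: a land fraction and an interpolated monthly
# temperature per cell (numbers are made up; only the weighting step is real).
n_cells = 15984
land_fraction = rng.uniform(0.0, 1.0, n_cells)
cell_temp = rng.normal(10.0, 5.0, n_cells)

# Keep cells with more than 10% land, then weight each by its land fraction,
# so a 15%-land cell contributes far less than a 100%-land cell.
mask = land_fraction > 0.10
land_average = np.average(cell_temp[mask], weights=land_fraction[mask])
print(f"{mask.sum()} cells used, land average {land_average:.2f} degC")
```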

January 20, 2013 1:20 pm

“And just to keep it simple in respect of wondering how good the ‘current’ dataset record is. I would like to know ONE, yes, only ONE – temperature dataset that has been maintained and recorded for a decent period of time, with each and every RAW recorded reading, still in its ‘pristine’ condition, with a description of each and every recorded adjustment and the reason for such adjustment from day one, such that the TRACEABILITY of the currently used ‘value’ can be worked all the way back, without break, to the raw data. In effect, after the folks crowing on about the 1780′s thermometer readings for Sydney not being traceable to known/validated calibrations, etc – can you, or anyone else demonstrate an adequate (read any, IMHO) level of traceability for the current longer term temp datasets?
Now, I know you have been asked this before – but I am asking once again, DO YOU (or anyone else) KNOW OF SUCH A DATASET? Where is it – and is it publicly available? If not, why not?”
These are good questions, but the notion that there is something called raw data bears examination. There is no such thing. There is a first known report. So, for example, even if you have a log book, you don’t know that the log is actually a “raw” report. What you know is that this is the first known report. For example: I read a thermometer. I scribble down the temperature. I transfer that writing to my log book. Which report is “raw”? You can’t tell from the document. You have to trust the document; it doesn’t self-validate. So I distinguish between records that are known first reports and those that explicitly claim that they are the result of an adjustment process. In all cases where there are first reports, we use first reports. There are a few cases (a data paper is coming along) where the only record that exists is a record known to be adjusted. These are exclusively monthly stations. So, one thing I suggest people do is work with daily data ONLY, because typically adjustments are made on monthly products and not daily products. The best source here is GHCN Daily, which is “raw”, or a first report.
You can tell it’s “raw” because it’s full of impossible measures like 15000C or -600C, or the same value stuck for 30 days. This raw data includes a QC flag for every measure. You can use the data “raw” or apply the QC flag and remove the data.
Funny story: I was doing a global average using only GHCN Daily. The answer came out matching others, except it was a bit hot. Oops, I forgot to apply the QC flags, so I had all this clearly wrong “raw” data. Once I applied the flags (i.e., got rid of values like 15000C) the series cooled a bit and matched better.
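A minimal sketch of the QC step Mosher describes, with made-up daily values and flags (the flag convention here is illustrative, not the actual GHCN-Daily codes):

```python
import numpy as np

# Made-up daily values (degC) and QC flags; an empty flag means "passed QC".
temps = np.array([12.3, 11.8, 15000.0, -600.0, 13.1, 13.1, 13.1])
flags = np.array(["", "", "X", "X", "", "", ""])

# Option 1: use the data "raw", impossible values and all.
raw_mean = temps.mean()

# Option 2: drop flagged values and anything outside a physical sanity range.
ok = (flags == "") & (temps > -90.0) & (temps < 60.0)
qc_mean = temps[ok].mean()

print(f"raw mean {raw_mean:.1f} degC, QC'd mean {qc_mean:.1f} degC")
```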

DirkH
January 20, 2013 1:25 pm

Steven Mosher says:
January 20, 2013 at 12:38 pm
“DirkH says:
January 20, 2013 at 9:15 am
And I’d like to direct attention again to this razor sharp demolition of the BEST “scalpel” method.
##############
The scalpel method (as endorsed by Willis) works. See the AGU poster for a double-blind test of the method.”
Well, when I come across an AGU poster I’ll make sure to check whether it is about the scalpel method.
I’m sure you found SOME low-frequency component after the cutting and stitching that you could then interpret as the signal of climate change. There is nothing in the world that prevents a low-frequency signal from popping into existence after one stitches together two signals.

Auto
January 20, 2013 1:31 pm

I wrote –
The maths – scalpel effects and so on – seems to have been debunked already
[DirkH says:
January 20, 2013 at 9:15 am
Thanks ,Dirk].
I won’t comment further.
=======
I see Steven Mosher has some doubts about the debunking, posted whilst I was scribing.
I’m not qualified to discuss – he may be right.
Auto

DirkH
January 20, 2013 1:36 pm

Steven Mosher says:
January 20, 2013 at 11:36 am


Dirk
“As Muller is a front for the geo-engineering NOVIM Group and his daughter Elizabeth Muller peddles a “product” called “GreenGov” the assumption that Muller does more shady dealings is not at all absurd. He has already proven himself to be a rent seeker of the first degree; a worthy equivalent to Pachauri. I know, that’s all pretty standard in warmist circles. We know why they do it.”
except Muller had nothing whatsoever to do with the selection of the journal. Zero. Zip. Nada.
So much for your conspiracy theory.”

So you are saying that it is a conspiracy theory that Muller is a front for the NOVIM group that tries to sell geo-engineering, and that Muller’s daughter tries to sell a “product” called “GreenGov”; in other words, you say that Muller is not a rent-seeker who tries to feed on the CO2AGW gravy train wherever he can and with whatever means possible?
Because that is all that I said – I did not say he founded his own journal – I only said that given what we know about his business activities that it would be perfectly reasonable to assume he did such a trick.
We also know that other warmists use every opportunity to use the CO2AGW theory to make money hand over fist; see Mann’s lecture fees or Hansen’s awards. We know that they are without exception rent-seekers. I would call this the null hypothesis.
Do you need more examples? Ask and ye shall receive.

Editor
January 20, 2013 1:47 pm

Steven Mosher says:
January 19, 2013 at 10:22 pm

“You probably haven’t heard of it because it is volume 1 issue 1… Must be his own journal.”

Do you think we landed on the moon?

Steven, after all the sh*t that people have given me in particular and skeptics in general for not publishing in the “proper” journals, and after all the bullshit that Richard Muller put out in his pre-press press release about how this was going to be published in the scientific journals and it was in review … after all that, surely you can’t be surprised that people are calling you out for BEING UNABLE TO GET YOUR BRILLIANT WORK PUBLISHED IN A HIGH-IMPACT JOURNAL.
I’m sorry, my friend, but you guys brought this on yourselves. Your whining that people are conspiracy theorists is a pathetic response. Admit you couldn’t convince “Science” or “Nature” or a single high-impact journal to publish what you are so proud of, and move on. It’s hilarious that you couldn’t, after Muller made all the claims about how his work was going to be in the journals so he could justify publishing results without data or code …
You’re embarrassing yourself by trying to defend the indefensible, Steven, particularly since at the end of the day it is meaningless where it is published, other than proving that Muller was just blowing smoke when he said that the paper was in review at JGR.
The only valid question is, is it true? It may well be true, I take no position on that … but you’re not helping people come to that conclusion.
w.
… Muller publishes in Volume 1, Number 1 of a brand-new journal, and you get your knickers in a twist because people are pointing that out? Steven, his publishing in Vol. 1 No. 1 is too funny, and if you don’t get the joke, well, then, I guess the joke is on you …

Jeff Alberts
January 20, 2013 1:50 pm

Steven Mosher says:
January 20, 2013 at 12:46 pm
“Just because a journal has published bad papers doesn’t seem to matter much. We’ve seen the dreck published in Nature and Science, splashed on the front cover, no less.”
There was a time when WUWT stood against the nonsense of worshipping peer review. Partly because there were papers with no code and no data and you could not check for yourself. Also, because folks here were well aware of the politics involved in peer review. You all saw what happened to O’Donnell’s paper when one reviewer was determined to hold it up. Ask yourself…
Do you think it was a skeptical reviewer who objected to us taking the record back beyond 1850? Really?

You quoted me in your reply, but you’re assuming a lot of things I didn’t say. I don’t speak for WUWT, only myself. I said nothing about peer review or specific reviewers of specific papers. Bad papers which receive a lot of attention need to be rebutted, no matter which journal, no matter who the reviewers are.

Manfred
January 20, 2013 1:56 pm

Steven Mosher says:
January 20, 2013 at 12:28 pm
It is not presented as THE answer. The argument is entirely different. It goes like this. It starts with givens or assumptions.
1. Given: CO2 causes warming
2. Given: Volcanoes cause cooling
If you take those two givens you can explain the temperature rise with a residual that looks like the AMO.
Pretty simple. Now, you can object to #1 or object to #2, or both.
It doesn’t “prove” global warming. It says to those who believe in AGW – yup, this data is consistent with the theory. Nothing more. It says to people who don’t believe…
———————————————–
Starting with other assumptions gets opposite results:
Regression of only natural forcings explains at least half of the warming (even though the ENSO process is not correctly represented by the ENSO index)
http://wattsupwiththat.com/2012/10/17/new-paper-cuts-recent-anthropogenic-warming-trend-in-half/
Bob Tisdale can explain ALL of the recent global warming if he only considers ENSO.
The bottom line: different assumptions have to be compared and tested on longer time scales. The BEST model already fails with the Medieval Warm Period, or in about 95% of the time since the last ice age. It cannot explain variation before the Little Ice Age.
Judith Curry’s comment:
“Maybe the climate system is simpler than I think it is, but I suspect not. I do know that it is not as simple as portrayed by the Rhode, Muller et al. analysis.”
http://judithcurry.com/2012/07/30/observation-based-attribution/
Depressing to see Mosher defend this part of the analysis.

Editor
January 20, 2013 1:57 pm

Steven Mosher says:
January 20, 2013 at 1:20 pm

… These are good questions, but the notion that there is something called raw data bears examination. There is no such thing. There is a first known report. So, for example, even if you have a log book, you don’t know that the log is actually a “raw” report. What you know is that this is the first known report. For example: I read a thermometer. I scribble down the temperature. I transfer that writing to my log book. Which report is “raw”? You can’t tell from the document. You have to trust the document; it doesn’t self-validate. So I distinguish between records that are known first reports and those that explicitly claim that they are the result of an adjustment process.

Seems to me like all you are saying is that what Kev-in-UK calls “raw data” you call the “first report” … so what? That’s just terminology. But you used the excuse of terminology, that you didn’t use the exact same terms he uses, to avoid answering Kev’s question. Instead, you spun it off into a semantic wasteland, and ignored his question entirely.
Steven, I know you already know this, but what Kev calls “raw data” is the first written record of the data, basically the logbook. You call that the “first report”. SO WHAT. Answer his damn question.
w.

Editor
January 20, 2013 2:06 pm

Steven Mosher says:
January 20, 2013 at 12:41 pm

… The temperature field is predicted to +- 1.6C with a nugget of .46C on monthly temps.

“Nugget”? Surely you must be aware that people won’t understand this kind of insider jargon. I know I don’t. Is that “nugget” of 0.46C the error estimate of the temperature field?
And if so, is said “nugget” of error a one-sigma nugget, a two-sigma nugget, or a 95% CI nugget? I’m sure you see the problem – that usage of “nugget” has no agreed-upon statistical meaning outside of your head, and perhaps that of a few of your friends. As a communication tool, “nugget” no workee.
w.
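For what it is worth, “nugget” is standard kriging/geostatistics jargon rather than a confidence interval: it is the value the semivariogram takes as the separation between observations goes to zero, i.e. the variance that remains even for co-located measurements (instrument error plus very small-scale variability). A minimal sketch of an exponential variogram with a nugget term, with made-up numbers (not the BEST kriging code):

```python
import numpy as np

def exponential_variogram(h, nugget, sill, length_scale):
    """Semivariance versus separation distance h. As h -> 0 the curve tends to
    the nugget (variance that never averages away); as h grows it levels off
    at the sill."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-h / length_scale))

# Made-up parameters: nugget and sill in degC^2, distances in km.
distances = np.array([0.0, 50.0, 200.0, 500.0, 2000.0])
print(exponential_variogram(distances, nugget=0.2, sill=2.5, length_scale=500.0))
```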

DirkH
January 20, 2013 2:18 pm

I just noticed that they made up virtual temperature measurements across the globe in the 1800s, when there were only a few real thermometers in the US and Europe.
They used a GCM. So whatever they made up there is GCM reality. In the end they find that the temperatures are perfectly in line with what one would expect compared to a GCM.
So BEST is a tautological exercise to give GCMs credibility by comparing to a make-believe past constructed by a GCM.
I see a credibility problem there.

Jimbo
January 20, 2013 2:25 pm

Mr. Mosher,
I know you won’t admit it, but it must be utterly embarrassing not to be able to get your groundbreaking research published in any of, say, the top 5 relevant journals. To be in Volume 1, Issue 1, and to be the only paper published, does not look good at all. Hey, I can understand that after all that work you guys must have been desperate. It happens to everyone. ;O)

Jimbo
January 20, 2013 2:42 pm

Mosher,
Another thing I consider about this paper is where it was REJECTED and why. This is not a trivial point.