Berkeley Earth finally makes peer review – in a never-before-seen journal

After almost two years and some false starts, BEST now has one paper that has finally passed peer review. The text below is from the email release sent late Saturday. The paper was previously submitted to JGR Atmospheres, according to their July 8th draft last year, but appears to have been rejected, as they now indicate it has been published in Geoinformatics and Geostatistics, a journal I had not heard of until now.

(Added note: commenter Michael D. Smith points out it is Volume 1, Issue 1, so this appears to be a brand new journal. Also troubling: on their GIGS journal home page, the link to the PDF of their Journal Flier gives only a single page, the cover art. Download Journal Flier. With so little description front and center, one wonders how good this journal is.)

Also notable, Dr. Judith Curry’s name is not on this paper, though she gets a mention in the acknowledgements (along with Mosher and Zeke). I have not yet done any detailed analysis of this paper, as this is simply an announcement of its existence. – Anthony

===============================================================

Berkeley Earth has today released a new set of materials, including gridded and more recent data, new analysis in the form of a series of short “memos”, and new and updated video animations of global warming.  We are also pleased that the Berkeley Earth Results paper, “A New Estimate of the Average Earth Surface Land Temperature Spanning 1753 to 2011”, has now been published by GIGS and is publicly available here: http://berkeleyearth.org/papers/.

The data update includes more recent data (through August 2012), gridded data, and data for States and Provinces.  You can access the data here: http://berkeleyearth.org/data/.

The memos include:

  • Two analyses of Hansen’s recent paper “Perception of Climate Change”
  • A comparison of Berkeley Earth, NASA GISS, and Hadley CRU averaging techniques on ideal synthetic data
  • Visualizations of Berkeley Earth, NASA GISS, and Hadley CRU averaging techniques

and are available here: http://berkeleyearth.org/available-resources/

==============================================================

A New Estimate of the Average Earth Surface Land Temperature Spanning 1753 to 2011

Abstract

We report an estimate of the Earth’s average land surface temperature for the period 1753 to 2011. To address issues of potential station selection bias, we used a larger sampling of stations than had prior studies. For the period post 1880, our estimate is similar to those previously reported by other groups, although we report smaller uncertainties. The land temperature rise from the 1950s decade to the 2000s decade is 0.90 ± 0.05°C (95% confidence). Both maximum and minimum daily temperatures have increased during the last century. Diurnal variations decreased from 1900 to 1987, and then increased; this increase is significant but not understood. The period of 1753 to 1850 is marked by sudden drops in land surface temperature that are coincident with known volcanism; the response function is approximately 1.5 ± 0.5°C per 100 Tg of atmospheric sulfate. This volcanism, combined with a simple proxy for anthropogenic effects (logarithm of the CO2 concentration), reproduces much of the variation in the land surface temperature record; the fit is not improved by the addition of a solar forcing term. Thus, for this very simple model, solar forcing does not appear to contribute to the observed global warming of the past 250 years; the entire change can be modeled by a sum of volcanism and a single anthropogenic proxy. The residual variations include interannual and multi-decadal variability very similar to that of the Atlantic Multidecadal Oscillation (AMO).

Full paper here: http://www.scitechnol.com/GIGS/GIGS-1-101.pdf
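
As a rough illustration of the “very simple model” described in the abstract – land temperature fitted as the sum of a volcanic-sulfate term and a log(CO2) anthropogenic proxy – here is a minimal least-squares sketch in Python. Every series and coefficient in it is a synthetic placeholder chosen for illustration, not the paper’s actual data or code:

```python
import numpy as np

years = np.arange(1753, 2012)
co2 = 280.0 + 100.0 / (1.0 + np.exp(-(years - 1960) / 30.0))  # toy CO2 curve, ppm

sulfate = np.zeros(years.size)         # atmospheric sulfate injections, Tg
sulfate[years == 1815] = 100.0         # a Tambora-scale eruption (illustrative)
sulfate[years == 1883] = 50.0          # a Krakatoa-scale eruption (illustrative)

# Synthetic "observed" temperature with planted, arbitrary coefficients:
# -0.015 degC/Tg is -1.5 degC per 100 Tg, matching the scale the abstract quotes.
rng = np.random.default_rng(0)
temp = 0.9 * np.log(co2 / 280.0) - 0.015 * sulfate + rng.normal(0.0, 0.1, years.size)

# Two-term least-squares fit: temp ~ a*log(CO2) + b*sulfate + c
X = np.column_stack([np.log(co2), sulfate, np.ones(years.size)])
(a, b, c), *_ = np.linalg.lstsq(X, temp, rcond=None)
print(f"log-CO2 coefficient: {a:.2f}")
print(f"volcanic response: {100 * b:.2f} degC per 100 Tg sulfate")
```

Because the coefficients were planted, the fit recovers them almost exactly; the point is only to show the shape of such a two-term fit, not to reproduce the paper’s result.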

January 19, 2013 6:07 pm

Round and around we go – where is the raw data, so there can be a VALID PEER REVIEW? I [may be] old, but we were required to provide ALL data so others could duplicate our conclusions before we were granted peer-reviewed status.
This is all just so much water vapor . .

u.k.(us)
January 19, 2013 6:16 pm

Simple, who knew?

Reply to  u.k.(us)
January 19, 2013 6:24 pm

They knew it was all smoke and mirrors from their own data sets . . .

Michael D. Smith
January 19, 2013 6:32 pm

You probably haven’t heard of it because it is volume 1 issue 1… Must be his own journal.

John F. Hultquist
January 19, 2013 6:35 pm

Here’s a short memo:
Arctic Sea ice extent 30% or greater (DMI) shows a drop of 2 million km² over a day or two.

Reply to  John F. Hultquist
January 19, 2013 6:46 pm

Oh no, AL GORE DAYS ARE COMING TO AN END – WHAT WILL WE EVER DO?

David Davidovics
January 19, 2013 6:45 pm

Anthony,
If the following link is true, it would certainly explain why you haven’t heard of this scientific journal before. It was launched in 2012:
http://scholarlyoa.com/2012/05/05/omics-publishing-launches-new-brand-with-53-journal-titles/
Quote from link:
“India-based OMICS Publishing Group has just launched a new brand of scholarly journals called “SciTechnol.” This new OMICS brand lists 53 new journals, though none has any content yet.
We learned of this new launch because the company is currently spamming tens of thousands of academics, hoping to recruit some of them for the new journals’ editorial boards.”

January 19, 2013 6:55 pm

Oh dearie me, and I thought they’d long ago published something. It was probably that those blizzards of press releases and interviews somehow left me with the impression that Muller had finally got something into a journal. Interesting though whose names are not on it …
http://thepointman.wordpress.com/2012/06/22/mullering-the-data/
Pointman

Ed MacAulay
January 19, 2013 6:55 pm

“..the entire change can be modeled by a sum of volcanism and a single anthropogenic proxy.”
They have skirted around the big question: so how have they concluded that the rise in CO2 is anthropogenic?

john robertson
January 19, 2013 6:58 pm

Correct me if I’m out to lunch, but did Phil Jones not lose the CRU raw data and the MET was still promising to reconstruct that record? Or did that get done?

George McFly
January 19, 2013 7:17 pm

“the entire change can be modeled by a sum of volcanism and a single anthropogenic proxy”
Funny, I don’t remember a big volcano in 1998…..must have missed it

John West
January 19, 2013 7:17 pm

“the entire change can be modeled by a sum of volcanism and a single anthropogenic proxy.”
LOL! How scientific /not.
So, if the entire change can be modeled with hemlines that would mean hemlines determine temperature. /sarc

Greg House
January 19, 2013 7:18 pm

“Berkeley Earth finally makes peer review…”
==================================================
So what, peer review seems to be a joke to me, anyway in climate science.

janama
January 19, 2013 7:24 pm

This is such a load of Cr*p – the graphs are of such low resolution they can hardly be interpreted, and there are even spelling mistakes in the charts. It shows nothing, proves nothing, and is just a waste of time and space.
No wonder Judith Curry stepped aside from it.

Leif Svalgaard
January 19, 2013 7:29 pm

Ed MacAulay says:
January 19, 2013 at 6:55 pm
So how have they concluded that the rise in CO2 is anthropogenic?
presumably because the rise matches that which we know we have produced.

Editor
January 19, 2013 7:30 pm

Um, many of the journals are Volume 1, Issue 1 on this website. Like:
Journal of Genital System & Disorders
http://www.scitechnol.com/ArchiveJGSD/currentissueJGSD.php
Must be a brand new outfit. Looking forward to reading a lot of bleeding edge climate science from this new source.

corio37
January 19, 2013 7:50 pm

Anyone want to suggest what OMICS stands for?
‘One Meagre Issue? Clearly Sufficient’
‘Old Mainstream Ideas Covered Sloppily’

bw
January 19, 2013 7:52 pm

Didn’t Muller say they “looked at” the UHI claims and dismissed them? How can temps from thermometers located in the path of air conditioner exhaust and over concrete runways be no different from pristine rural thermometers??? That doesn’t pass the sanity test.
Did they use “homogenized” and “gridded” data from GHCN/NOAA ?? Or do we have to lock down their computers and attempt to determine what they actually did?
Why does the GHCN v3 data change from month to month? With each new data set showing the temps from the 1900s to 1930s becoming cooler while the data from the 1970s to now get warmer than ever. http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt
Why do independent observers follow the data over periods of time and see the same pattern of changing data from month to month? http://chiefio.wordpress.com/gistemp/
and
http://notrickszone.com/2012/03/01/data-tamperin-giss-caught-red-handed-manipulaing-data-to-produce-arctic-climate-history-revision/
Why did the data at Hansen’s GISS site show a massive alteration of data around December 2012 where about 10 percent of station data were altered to show much cooler temps in the early records. Meanwhile many other stations had recent years (2007 to 2012) of data deleted?? Interesting that those years had been showing an obvious cooling trend. And now we see the GISS data website has been down for weeks with “technical” problems???

Australis
January 19, 2013 7:54 pm

“the entire change can be modeled by a sum of volcanism and a single anthropogenic proxy.”
Interesting that aerosols, land use, methane, black carbon, etc. make no significant impact. Or perhaps they cancel each other out.
Have Ross McKitrick’s published concerns been met?

Andrejs Vanags
January 19, 2013 8:03 pm

Why was it rejected from JGR Atmospheres?

wayne
January 19, 2013 8:03 pm

Berkeley Earth has today released a new set of materials — with updated video animations of global warming. And a bit to better know the man behind this work of art…

davidmhoffer
January 19, 2013 8:10 pm

Volume 1 Issue 1 has a total of….two articles. The other one is an editorial. So the sum total of all papers in press by this journal is exactly…. one.
I read the editorial, and it is quite good. Talks about the convergence of data sources and the need to build computer systems that can consume data from diverse sources such as census data, red light cameras and twitter feeds to better understand impacts of (for example) changing traffic light sequences. This is a legitimate area of research which requires not just bleeding edge data processing constructs but physical infrastructures of massive scale. This appears to be the focus of the journal, and if they do a good job of it I might even follow them.
Of course, all they have so far is the one article on the temperature record, which nobody else would publish and which is only tangentially related to the journal focus, so they aren’t off to a great start.

John F. Hultquist
January 19, 2013 8:12 pm

They claim to have used this:
. . . the solar forcing history specified for use in the IPCC fifth assessment report global climate models.
I suppose this saved a lot of time and thought.

John F. Hultquist
January 19, 2013 8:26 pm

Please excuse an O/T to —
davidmhoffer says:
“. . . systems that can consume data from diverse sources . . .”

Railroads are way ahead on this. Saw an article in a doctor’s office but have no link. This is older.
http://www.omaha.com/article/20111113/MONEY/711139918

SteveB
January 19, 2013 8:26 pm

Hmm. The OMICS Publishing Group has been accused, in the past, of being “a predatory Open Access publisher” and “of tacitly saying it will publish anything”.
http://poynder.blogspot.co.uk/2011/12/open-access-interviews-omics-publishing.html

kim
January 19, 2013 8:42 pm

Attribution, she’s a bitch;
Don’t know how just scratch that itch.
Muller claims it’s men
Who’ve warmed us up since then.
Thank us then, we’ve made ourselves rich.
==================

MattS
January 19, 2013 8:58 pm

John West.
“So, if the entire change can be modeled with hemlines that would mean hemlines determine temperature.”
You intended this as sarcasm but don’t be so quick to dismiss the idea. I would be willing to bet that there is a strong relationship between skirt lengths and temperature. (Though the causality probably runs the other way).
Does anyone have a cup of grant money I can borrow to study this issue? 🙂

mct
January 19, 2013 9:10 pm

I’ve not checked every one of the “journals” they publish, but a randomly selected 6 were ALL “Volume 1 Issue 1”. Some with articles, some not.
This smells.

Werner Brozek
January 19, 2013 9:20 pm

bw says:
January 19, 2013 at 7:52 pm
And now we see the GISS data website has been down for weeks with “technical” problems???
This site, which WFT uses, did have the November anomaly at 0.68, then it went down. When it came back up, it just went to the previous month’s value (October).
http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt
However the next two sites do work and they are up to date and they show the anomalies for each hemisphere. The bottom line is that the anomaly in November was 0.68 and it dropped to 0.44 in December.
http://data.giss.nasa.gov/gistemp/tabledata_v3/SH.Ts+dSST.txt
http://data.giss.nasa.gov/gistemp/tabledata_v3/NH.Ts+dSST.txt

JJ
January 19, 2013 9:33 pm

“Berkeley Earth finally makes peer review…”
Are you sure? Does this cOMICS outfit have any peers?

noaaprogrammer
January 19, 2013 9:54 pm

Berkeley? … I think it’s all due to acid rain … if you get my drift.

Steven Mosher
January 19, 2013 10:07 pm

“Correct me if I’m out to lunch, but did Phil Jones not lose the CRU raw data and the MET was still promising to reconstruct that record? Or did that get done?
#################
not lost. never been lost. It still exists at NWS. the good news is you can take every station in CRU, delete it, and you still have 32,000 stations. And of course the answer doesnt change.
facts. hard to deal with. but thems the facts.

Steven Mosher
January 19, 2013 10:12 pm

“Did they use “homogenized” and “gridded” data from GHCN/NOAA ?? Or do we have to lock down their computers and attempt to determine what they actually did?”
no homogenized data from GHCN was not used. But if you dont like GHCN monthly data you can delete those 7000 stations and you are left with 29000 stations. And the answer doesnt change. hard to deal with I know. But if your theory is that GHCN monthly is bad you can test that theory by not using the data. When you get the same answer with different data sources, then your theory needs some looking at.

Steven Mosher
January 19, 2013 10:14 pm

“have Ross McKitrick’s published concerns been met?”
Ross was not a reviewer on this paper. He was a reviewer on the UHI paper.

Steven Mosher
January 19, 2013 10:22 pm

“You probably haven’t heard of it because it is volume 1 issue 1… Must be his own journal.”
Do you think we landed on the moon?

January 19, 2013 10:51 pm

Are there any cardinal/ordinal integer tricks?

dalyplanet
January 19, 2013 11:04 pm

This analysis will have to stand up to intense scrutiny from all sides. Who cares where it was published? Formal peer review is only somewhat useful in climate science. The nice workup on Hansen alone is worth the look.

Glenn
January 19, 2013 11:10 pm

Who paid to publish this garbage in a scam journal?

dalyplanet
January 19, 2013 11:36 pm

In the political climate it is almost unfathomable how wedded to the consensus influential lobbyists are. And wedded to the narrative…
It has been suggested here in a post that graciousness and compromise may be beneficial. Perhaps there is merit in this Berkeley approach.

dalyplanet
January 19, 2013 11:41 pm

LOL kim !!!

thisisnotgoodtogo
January 20, 2013 12:37 am

“Landed on the moon?”
Yes, but this smells cheezy

tallbloke
January 20, 2013 12:51 am

MattS says:
January 19, 2013 at 8:58 pm
I would be willing to bet that there is a strong relationship between skirt lengths and temperature. (Though the causality probably runs the other way).

As we have seen, this is no impediment to running multi-billion dollar scams for decades.
Leif Svalgaard says:
January 19, 2013 at 7:29 pm
Ed MacAulay says:
January 19, 2013 at 6:55 pm
So how have they concluded that the rise in CO2 is anthropogenic?
presumably because the rise matches that which we know we have produced.

And yet CO2’s rate of change (not its atmospheric concentration) is clearly driven by temperature change.
http://tallbloke.wordpress.com/2012/09/12/is-the-airborne-co2-fraction-temperature-dependent/
OMICS publishing: Is this missing a letter ‘C’ at the start?

Andrew
January 20, 2013 1:10 am

I’m sensing the beginnings of what will end in blind panic amongst the Eco-Taliban. By now, even that lot must accept that correlation does not prove causation. This is evidenced by the divergence of recorded temperature from steadily increasing CO2 since ~1998 (see link).
The paper, therefore, is just another example of SISO (GIGO, to be more polite).
http://www.clipular.com/c?1272633=uxyQDogrVLwqXRm229v3kq9OCds

Manfred
January 20, 2013 1:11 am

Werner Brozek says:
January 19, 2013 at 9:20 pm
http://data.giss.nasa.gov/gistemp/tabledata_v3/NH.Ts+dSST.txt
———————————
Dec 2012 second lowest NH temperature of the millennium after Jan 2008.

Sleepalot
January 20, 2013 1:25 am

So presumably it’ll make it into AR5 then – after the review process, of course.

Mr Green Genes
January 20, 2013 1:44 am

Steven Mosher says:
January 19, 2013 at 10:22 pm
Do you think we landed on the moon?

No, “we” didn’t. Neil Armstrong et al, on the other hand …

MikeB
January 20, 2013 1:54 am

Steven Mosher says: January 19, 2013 at 10:07 pm
Steve, the Met Office did indeed destroy most of its raw data. They say that they did this to ‘save space’ (very professional). Unfortunately, they are now in the position of compiling one of the world’s major temperature records by relying on data whose provenance is unknown. They say on their website…

Some of these data are the original underlying observations and some are observations adjusted to account for non climatic influences….. The Met Office do not hold information as to adjustments that were applied and, so, cannot advise as to which stations are underlying data only and which contain adjustments.

Do you have a link to the raw data held by NWS in a form that allows it to be compared to ‘homogenised’ data? It seems very difficult to track down which data have actually been used for surface-based temperatures. Reasons for, and the extent of, adjustments are even more difficult to locate. Generalisations such as “a database is constructed from 19,000 stations” etc. do not really get down to the detail. It gives the impression of being all smoke and mirrors.

ibbo
January 20, 2013 1:57 am

Hi @ Steve Mosher
http://www.guardian.co.uk/environment/2010/feb/15/phil-jones-lost-weather-data
Doesn’t hold the original data.

John R. Walker
January 20, 2013 1:57 am

“the entire change can be modeled by a sum of volcanism and a single anthropogenic proxy.”
This is a danger to human health – I’m likely to injure myself from falling about laughing!

Peter Maddock
January 20, 2013 2:05 am

Mosher,
Everybody’s first thought, after hearing that they could not get published and then appear in Volume 1, Issue 1, is that this is some form of self-publishing. It may not be correct, but it is still everybody’s first thought.
Given that this is what everyone is thinking – they should make some form of statement regarding their relationship with the publishers.
If they make no such statement we are probably correct in our first thoughts.
Peter

oldfossil
January 20, 2013 2:12 am

Dear Anthony Watts
On March 6, 2011 you reported on your visit to BEST and wrote:
And, I’m prepared to accept whatever result they produce, even if it proves my premise wrong. I’m taking this bold step because the method has promise. So let’s not pay attention to the little yippers who want to tear it down before they even see the results. I haven’t seen the global result, nobody has, not even the home team, but the method isn’t the madness that we’ve seen from NOAA, NCDC, GISS, and CRU, and, there aren’t any monetary strings attached to the result that I can tell. If the project was terminated tomorrow, nobody loses jobs, no large government programs get shut down, and no dependent programs crash either. That lack of strings attached to funding, plus the broad mix of people involved especially those who have previous experience in handling large data sets gives me greater confidence in the result being closer to a bona fide ground truth than anything we’ve seen yet.
http://wattsupwiththat.com/2011/03/06/briggs-on-berkeleys-best-plus-my-thoughts-from-my-visit-there/
Absolutely no progress will be made in the climate debate until all parties abandon their adversarial attitudes. We’re not in a courtroom or a gladiatorial arena now. We’re going to have to settle the question by talking to each other like grown adults. The key to a successful dispute resolution through negotiation is to find agreement. In your earlier blog post cited here, you seemed to have found some people on “the other side” who were amenable to a rational discussion. What a pity that you now find it necessary to destroy that relationship.

Paul Dennis
January 20, 2013 2:14 am

I am not really interested in the average surface temperature as a metric for climate change, but am surprised that after all the hype associated with BEST, its first publication is in a new journal without a citation index. The review policies of these journals are somewhat odd. I quote from their website:
“The Review process for articles being published in SciTechnol Journals is carried out in an easy and quick manner. The submitted manuscript is assigned to one of the Editorial Board Members based on their area of interest. If the Editor agrees to accept the assignment, he can choose any of the three ways:
Review the manuscript himself without assigning it to reviewers; or
Assign atleast 3 potential reviewers for the review process; or
Ask the Associate Managing Editor of the Journal to assign reviewers on his behalf.”
This hardly inspires confidence in the rigour of the peer review process. Can someone associated with the BEST project clarify which peer review procedure was followed for this paper?

Nigel S
January 20, 2013 2:20 am

Wait till Laden sees you’ve endorsed this paper.

January 20, 2013 2:22 am

Peer review or beer review?
When I couldn’t find a publisher, I also created my own blog. Of course, that doesn’t mean anyone will read it.

Me
January 20, 2013 2:30 am

WOW, Mosher has outdone himself. Nice work thar. 😆

NikFromNYC
January 20, 2013 2:32 am

Now we are geoinformatics denialists.

January 20, 2013 2:35 am

http://poynder.blogspot.co.uk/2011/12/open-access-interviews-omics-publishing.html
Scitechnol is registered to OMICS, which appears to be a vanity press for ‘peer’ review.

January 20, 2013 2:40 am

Steven Mosher [January 19, 2013 at 10:22 pm] says:

“You probably haven’t heard of it because it is volume 1 issue 1… Must be his own journal.”

Do you think we landed on the moon?

No Steve, those cranks are on your side, they are your allies. They gave us the grassy knoll nonsense, tried to debunk Apollo, and tell us Jews didn’t show up for work on 9/11 to avoid the controlled demolition of WTC. Naturally they would line up to participate in the AGW hoax. Now what is your excuse?
Mosher thinks he is being clever there. But it is actually an insult to everyone here at WUWT. Wallowing in the sewer with leftist liberal psychopaths has harmed him greatly.
[Reply: Can we cut back on insults? I know a lot of nice “leftist liberals” and hold some such attitudes myself. Calling them ‘psychopaths’ is, er, problematic. -ModE ]

January 20, 2013 2:45 am

Past, present and pending pre-science.
Parallels.
Berkeley Earth, NASA GISS, and Hadley CRU.
The presumptive parallels found in them all in the blogs bind them to mere perceptions of AGW’s alluring pall.
The back story tells the new story, to repeat the same inconclusive old story, again.
. . . and some things are as they were before as we see that Mosher’s gone on about the moon, again.
It’s dialog on time, again.
John

Mike McMillan
January 20, 2013 2:49 am

Steven Mosher says: January 19, 2013 at 10:22 pm
… Do you think we landed on the moon?

And we still have the sound stage here in Houston if we ever wanna go back, too. Freshen up the grey dust, touch up the paint on the LEM, and we’re in business.
SteveB says: January 19, 2013 at 8:26 pm
Hmm. The OMICS Publishing Group has been accused, in the past, of being “a predatory Open Access publisher” and “of tacitly saying it will publish anything”.

Anything, like, maybe a science fiction novel from a budding author?
Not that we really need another take on all the adjusted, homogenized, and value-added raw data, but anyone doing so much PR and puffery deserves to get something out of it. Like tenure, or a reserved parking space.

Bill Illis
January 20, 2013 2:51 am

I would sure like to see the before and after of how the “scalpel” method, creating 179,000 new stations, changed the overall trend over time.
What is the temporal distribution of the changes brought on by this scalpel method?
If it works out to be a simple 0.0C for the most part starting in 1753 and going into 1763 and in 1944 and in 2010, it would be more believable. But I suspect it’s not, since it is not shown or even described in the paper.
One would think this should be more fully outlined. For all I know it could be -0.5C in the beginning of the record and +0.5C towards the end.

Chuck Nolan
January 20, 2013 3:12 am

Steven Mosher says:
January 19, 2013 at 10:07 pm
“Correct me if I’m out to lunch, but did Phil Jones not lose the CRU raw data and the MET was still promising to reconstruct that record? Or did that get done?
#################
not lost. never been lost. It still exists at NWS. the good news is you can take every station in CRU, delete it, and you still have 32,000 stations. And of course the answer doesnt change.
facts. hard to deal with. but thems the facts.
————————–
Then what was Harry (of the “read me file” fame) working on?
I thought that was the original CRU data.
Am I wrong?
cn

Anoneumouse
January 20, 2013 3:33 am

Here in the UK there is a series of journals with the prefix ‘PRACTICAL’; they are known by us oldies as ‘Camm’s Comics’.
This series of electronic journals may just come to be known as ‘Sham Omics’.

FerdinandAkin
January 20, 2013 3:35 am

In order to be science, the results of the Berkeley Earth Surface Temperature paper must be reproducible. This would imply the data used be available to anyone wanting to recreate the results. The question arises as to where the source data reside.
Assuming they used Dr. James Hansen’s GISS data, then the current published database, which has been recently adjusted, will not match the data used by BEST, invalidating an accurate reproduction. On the other hand, if BEST archived the GISS data, one could compare the archive to the current database and derive the adjustments.

knr
January 20, 2013 3:43 am

oldfossil, do you understand what something has to be to be considered, in scientific terms, ‘bona fide’?
Although some have claimed so, Watts did not say that BEST could produce ANYTHING and he would accept it as VALID.

knr
January 20, 2013 3:48 am

FerdinandAkin, it still strikes me as amazing that in what is described by its advocates as the most important thing ever, the data has been handled so very badly, with raw data gone altogether and adjustments guessed at. Once again the ‘professionals’ in climate science operate at a level unacceptable for an undergraduate student handing in an essay. And the worst part is the wall of silence from their fellows over this, which means that when AGW theory falls it will take much more with it.

Kev-in-Uk
January 20, 2013 3:51 am

oldfossil says:
January 20, 2013 at 2:12 am
That seems a crazy statement, Sir!
The supposed method as originally outlined, as Anthony surmised, did indeed have promise – but on a cursory glance at the paper this morning I would strongly suspect that the actual applied method is somewhat suspicious – station weighting, averaging, then averaging of averages of averages, etc. (which makes Mosher’s statement about removing stations seem a bit silly!). I’m busy today, but I will be thoroughly reading this paper in due course, very slowly, as it seems to be a bit smelly on first preview and not very detailed about certain aspects!
The supplementary pdf isn’t all that helpful either. I think they will need to produce very detailed methodology, e.g. how they have dealt with station dropouts and outliers – with actual demonstrated ‘outlier’ procedures, code, etc.; along with the datasets, and if necessary all the little ‘notes’ describing how/why stuff was adjusted – UHI, anyone? As it stands, I am suspicious and I am sure Anthony will remain so until proven otherwise.
Your calling Anthony out seems crass in the extreme, as you seem to be suggesting that Anthony should automatically accept this paper’s findings and chat nicely to the warmista? Just because he is being cautious and ‘wondering’ how the peer review process has taken place (and took so bloody long!) is hardly ‘thumbing his nose’ at the other side – it is indeed a perfectly valid stance!

Kev-in-Uk
January 20, 2013 3:55 am

Bill Illis says:
January 20, 2013 at 2:51 am
absofeckinglutely Bill!

John Trigge (in Oz)
January 20, 2013 3:57 am

The land temperature rise from the 1950s decade to the 2000s decade is 0.90 ± 0.05°C (95% confidence).

How do they get 95% confidence in figures to 5/100ths of a degree from measurements that are nowhere near this accuracy?

janama
January 20, 2013 3:59 am

“And of course the answer doesnt change.”
yes it does!
raw Sydney data
http://users.tpg.com.au/johnsay1/Stuff/Sydney.png
nothing like Best!

January 20, 2013 4:01 am

Mosher, USHCN is considered best of class and it has demonstrably abominable quality issues. Remove that and you’re using stations that are worse.

January 20, 2013 4:04 am

From the first public announcement of the BEST project, I think its public face has been managed in an unprofessional manner. Now we can look at what the actual project product looks like and judge its scientific professionalism.
The dialog is in.
John

Roger Clague
January 20, 2013 4:07 am

The publisher is called SciTechnol
http://www.scitechnol.com/aboutus.php
On-line, right-on, but not well written.
Its Facebook page has a link to the Union of Concerned Scientists.
On the paper itself, CO2 concentration is not a proxy for anthropogenic effects. It is a proxy for solar effects.

January 20, 2013 4:11 am

ATTENTION: MODS
Ironically, my last comment got sucked into the black hole. Can you save it?
*** PLEASE DELETE THIS ONE ***
Thanks!
Reply: I’d rather delete the black hole one as it is largely a personal communication to me (and your prior comment is ‘up’ already). I have no ‘ax’ here. Just wish to request a more ‘polite and professional’ tone. -ModE ]

Kev-in-Uk
January 20, 2013 4:17 am

Steven Mosher says:
January 19, 2013 at 10:12 pm
Steve, there is no doubt that some of the data is ‘bad’. I don’t have a problem with that; it is perfectly normal in science to ignore bad data if it can be shown to be bad/suspect, etc.
What is NOT normal is to include/exclude data, kind of, at will – to suit one’s agenda or anticipated findings. To avoid accusations of this, you must provide the full monty of data – used/unused/adjusted, etc. Do you not agree?
Similarly, when someone does an experiment, takes readings, etc. – those readings are the ‘holy grail’ of data – and should be carefully kept in PRISTINE condition – in effect, untouched by human hand. Ok, ok – we all know that data today is held on computer, but even so, the raw data should be ‘visible’ and carefully held in its pristine condition.
In the case of all the various temperature datasets, the raw data is ‘entered’, saved, adjusted, averaged, readjusted, homogenised, etc., etc. No problemo – but when I want to know WHY something was adjusted, where do I find this out? Where, in the records, is the reason/method of adjustment recorded?
And just to keep it simple in respect of wondering how good the ‘current’ dataset record is: I would like to know of ONE, yes, only ONE temperature dataset that has been maintained and recorded for a decent period of time, with each and every RAW recorded reading still in its ‘pristine’ condition, and with a description of each and every recorded adjustment and the reason for such adjustment from day one, such that the TRACEABILITY of the currently used ‘value’ can be worked all the way back, without break, to the raw data. In effect, after the folks crowing on about the 1780s thermometer readings for Sydney not being traceable to known/validated calibrations, etc. – can you, or anyone else, demonstrate an adequate (read: any, IMHO) level of traceability for the current longer-term temp datasets?
Now, I know you have been asked this before – but I am asking once again, DO YOU (or anyone else) KNOW OF SUCH A DATASET? Where is it – and is it publicly available? If not, why not?

James
January 20, 2013 4:21 am

Welcome Paul Dennis,
are you the same Paul Dennis who works at the University of East Anglia?
http://www.uea.ac.uk/environmental-sciences/people/facstaff/dennisp
I look forward to reading your further input.
James

MangoChutney
January 20, 2013 4:25 am

An interview with OMICS Publishing Group’s Srinu Babu Gedela
http://www.richardpoynder.co.uk/OMICSb.pdf
Nevertheless, OMICS has published at least one article that even OMICS itself accepts should never have appeared in a peer-reviewed journal.
So we can now make that 2 articles.
Yes, OMICS operates an author-pays business model and authors are invoiced in relation to the funding available to them. In practice, this means that we provide complete waivers, or discounts of up to 90%, for some articles — depending on the request/research, and the effort the author has put into the respective article.
Right now out of every ten articles, two will get a waiver, and another four will get a discount.
Perhaps this is the answer to the question of who paid, and how it passed “peer” review having failed peer review in a real science journal.
I wonder what cAGWers will make of this publication, when they constantly claim sceptics can only publish in crap journals.

Kev-in-Uk
January 20, 2013 4:26 am

janama says:
January 20, 2013 at 3:59 am
excellent – but why does it stop in the 1990s – did the station move/cease?
But this is exactly why I’m calling Mosher on this issue – I would want to know – in detail – how the BEST process dealt with that data, and why – and I would want it demonstrated to such a level that I can accept it. Not by some specky-eyed spotty teenage computer data geek saying, ‘well, it didn’t look right, so I altered it!’ – yeah, in the AGW world, that’s called ‘data quality checking’!
Data quality – my ar$e!

MangoChutney
January 20, 2013 4:28 am

sorry about the formatting of my comment 🙁
Reply: I prettied it up a bit for you 😉 -ModE]

January 20, 2013 4:28 am

If no journal of repute will publish you, then why not publish your own 😕
Let the paper stand or fall on its merits.

Tony Mach
January 20, 2013 4:41 am

Can we cut back on insults? I know a lot of nice “leftist liberals” and hold some such attitudes myself. Calling them ‘psychopaths’ is, er, problematic. -ModE
As a Card-Carrying Communist* from Old Europe, who seriously doubts that CO2 will have catastrophic effects on the climate, I am used to receiving insults both from the Green Alarmist side as well as from the Conservative/Free-Market/Libertarian/Neocon/Right-Wing/Reactionary/Whatever side.
Sure, I don’t like being insulted (and sometimes it just makes me angry), but by now I simply ignore these people and everything they say as best I can.
I ignore any argument that starts with “Because of CO2 …” and I ignore anybody whose main line of argument is “It’s because the [insert insult] lefties …”. It just makes it so much easier to ignore everything he says, if someone makes such stupid assertions. (And usually, if one takes away the CO2 crap from the alarmist side, or the “lefties are evil” crap from the “Libertarian” side, nothing worth arguing remains anyway.)
If someone wants to preach only to “his” side: go ahead and insult the “other” side. However be ready to be ignored. And while we are at it: I don’t mind if you get snipped for your insults, as I wouldn’t read your crap anyway. Now go cry “Censorship!”
My two Euro-Cents
&
Have a nice day
*And I want to say: please stop lumping together as “liberals” everybody who disagrees with “the conservative” side. Liberalism is a political direction that is much more distinct than “everybody who is not conservative”. I am not a liberal, though I like some parts of liberalism. Same goes for the conservative side (or the libertarian, or any other political direction). It is simply not true that there are only two sets of political opinions, and you either have one or the other on any topic. It is like saying there are only two types of food: Hot and Cold. What utter BS.
And BTW forcing any discussion into a false dichotomy of “Conservatives vs. Liberals” or “Alarmists vs. Deniers” (or any other dichotomy) will hurt any discussion, as it mutes any differential appraisal of arguments, which we need to get a better understanding of reality.

MangoChutney
January 20, 2013 4:45 am

More on OMICS owner of the journal:
http://www.jfdp.org/forum/forum_docs/1013jfdp1040_1_032912094346.pdf
OMICS offer a 21-day turnaround, so it appears Muller used this journal to ensure publication would be in time for final submissions to the IPCC.
“The quality of work in the OMICS journals appears to vary widely. The company says that it rejects 30 percent of submissions due to poor quality and that each article is reviewed by a minimum of two reviewers, except for “rare cases” in which only one person reviews an article.
But in some cases, that peer-review process does not appear to have happened. Last year, for example, the company’s Journal of Earth Science & Climatic Change published a paper that suggested a causal link between Stonehenge and global climate change. The paper was written by Otis D. Williams, a Detroit man with a bachelor’s degree in criminal justice who says he taught himself physics and biology in the past 10 years. In the published paper, Mr. Williams posits that Earth is literally a living organism and that Stonehenge is evidence of an infection on the European continent. Global climate change, he argues, is Earth’s immune system responding to the infection with “fever and chills.””

OMICS also publishes papers without permission.

Bob
January 20, 2013 4:50 am

No publications in well-known journals and now a publication in a brand new journal suggests very strongly that BEST is nothing more than a publicity or grant getting stunt. Wayne’s video link is laughable because it wasn’t a lecture on how to build your own greenhouse. I didn’t watch past covering the aquarium with saran wrap to show how CO2 works and beginning an explanation of how to control the climate. I’m pretty sure the climate has changed, is changing and will continue to change. My skepticism is significantly higher when folks like this tell me they can control the climate.

Gail Combs
January 20, 2013 4:56 am

Ed MacAulay says:
January 19, 2013 at 6:55 pm
“..the entire change can be modeled by a sum of volcanism and a single anthropogenic proxy.”….
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
The key word is MODELED.
You can model the changes in the stock market based on the lengths of ladies’ skirts too….
What Skirt Lengths Tell You About The Stock Market

…the link between women’s hem lengths and stock market conditions was established in 1926 by Wharton economist George Taylor….

You could probably get a good correlation between climate and the lengths of ladies’ skirts, or at least the number of days a year women bundle up in coats.
Back to CO2 as an indicator. The length of ladies’ skirts is an indicator of “an element of human behavior” that affects both the length of the skirts AND the stock market. CO2 is such an indicator, since the increase of TSI LEADS CO2 graph and TSI leads the temperature graph. See Bob Tisdale: The Natural Warming of the Global Oceans – Videos – Parts 1 & 2 for how the TSI at the surface varies and changes the amount of energy entering the ocean.
CO2 reflects the temperature of the oceans link, which reflects the amount of energy from the sun heating the ocean, because CO2 outgasses as the temperature of the oceans rises.

….cold water dissolves more CO2 (e.g. Segalstad, 1990). Hence, if the water temperature increases, the water cannot keep as much CO2 in solution, resulting in CO2 degassing from the water to the atmosphere. According to Takahashi (1961), heating of sea water by 1 degree C will increase the partial pressure of atmospheric CO2 by 12.5 ppmv during upwelling of deep water. For example 12 degrees C warming of the Benguela Current should increase the atmospheric CO2 concentration by 150 ppmv…..
http://www.co2web.info/esef4.htm

The oceans are not only 70% of the earth’s surface but they also directly affect the temperatures on land.
EXAMPLES:
http://wattsupwiththat.com/2011/11/11/ocean-temperatures-can-predict-amazon-fire-season-severity/
http://wattsupwiththat.com/2009/07/23/surge-in-global-temperatures-since-1977-can-be-attributed-to-a-1976-climate-shift-in-the-pacific-ocean/
By leaving out the link between solar energy input to ocean temperature and the link from the ocean temperature to the amount of CO2 in the atmosphere, the entire Climastrologist Team™ misleads the public into thinking the minor addition of CO2 from human activity has some sort of effect on temperature.
This is why the assumptions:
1) TSI changes little and therefore the sun has no impact on climate change link
2) CO2 is well mixed in the atmosphere link and graph
3) C12/C13 ratio shows mankind is the criminal putting more CO2 in the atmosphere link
are defended to the death here on WUWT.

MangoChutney
January 20, 2013 5:09 am

I think Berkeley have been duped by OMICS, but then again, surely they carried out due diligence on the publication before paying? You know, the same as when they carried out due diligence before their PR stunts.

MangoChutney
January 20, 2013 5:22 am

Ed MacAulay says:
January 19, 2013 at 6:55 pm
“..the entire change can be modeled by a sum of volcanism and a single anthropogenic proxy.”….
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Joanne Nova showed how global warming was attributable to price rises in American Stamps:
http://joannenova.com.au/2009/05/shock-global-temperatures-driven-by-us-postal-charges/

Lars P.
January 20, 2013 5:23 am

bw says:
January 19, 2013 at 7:52 pm
Didn’t Muller say they “looked at” the UHI claims and dismissed them? How can temps from thermometers located in the path of air conditioner exhaust and over concrete runways be no different from pristine rural thermometers??? That doesn’t pass the sanity test.
……………
Why does the GHCN v3 data change from month to month? With each new data set showing the temps from the 1900s to 1930s becoming cooler while the data from the 1970s to now get warmer than ever. http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt
………
Why did the data at Hansen’s GISS site show a massive alteration of data around December 2012 where about 10 percent of station data were altered to show much cooler temps in the early records. Meanwhile many other stations had recent years (2007 to 2012) of data deleted?? Interesting that those years had been showing an obvious cooling trend…..

I tend to very much agree with bw.
The way historical data is being changed is far from scientific. The adjustments – remove the TOBS adjustment and the results would be almost completely cancelled – together with the huge variances seen in the 19th century which disappeared in the 20th, and the UHI being ignored, make me doubt very strongly the very validity of such a reconstruction.
The explanation is not satisfactory.
I find it absurd to think that thermometers placed around centers of human activity are not influenced by them – that the 7-fold increase in human population and 100-fold increase in energy production, concentrated in the 1-3% of the planet’s surface exactly where those thermometers are, do not influence them – when any normal human can see this influence day by day with a simple car thermometer. Was the thermometer in Sydney in 1790 influenced the same way as now by the city? Take any other world location. Absurd.
Has this not changed the same as the temperature curve? (Human population increase, land usage, asphalt streets and concrete areas, etc.?)
To me the Berkeley temperature curve tells about the average temperature in the 1% human-locations area and very little about the global climate itself. The smaller variances in the 20th century may be explained by the fact that human locations ensure a more stable area, with a bigger impact on the environment reducing the temperature variances within them.
The confidence interval is absurdly narrow, and very often we see changes to past temperature values which move them out of previous confidence intervals.
Obviously the models have a systematic flaw in the way they address the CO2 influence. Retrofitting the data to model results, and overreliance on models for calculating the input data, seem to be a major issue in the progress of science nowadays.
I find that Berkeley have missed an opportunity to add value to the climate discussion, pity. On the contrary, I feel they have managed to increase suspicion and division.
As skeptics, for the last 30 years we can rely on satellite measurements – RSS, UAH, ocean surface – which show small variations. These observations need to be continued and kept away from adjusters, the raw data archived and analysed; we may learn something about climate from there.
As the climate does not cooperate, the longer the time goes by, the weaker the warmista position is. The numbers are revised and revised down, and I am confident that this will continue.
The past adjustments continue, but fewer and fewer people take this as a serious indication of past temperature.

Bruce Cobb
January 20, 2013 5:28 am

“Many of the changes in land-surface temperature can be explained by a combination of volcanoes and a proxy for human greenhouse gas emissions. Solar variation does not seem to impact the temperature trend.”
Oh really? If this is the Best they have to offer, I’d hate to see their Worst. This is simply pseudo-science, fit for the garbage bin only.

jim2
January 20, 2013 5:33 am

It appears that Mosher is carrying a lot of water for these guys, but produces little in the way of links to the “raw” data, or backs up his various assertions in any other way.

January 20, 2013 5:35 am

oldfossil ^up thread^
I sympathize with your desire to hold a civilized dialogue with the plague victims. If I remember correctly you came to the game recently.
I felt like this for nearly the whole of my first year of being called the most despicable names and having comments deleted/edited for daring to question the central tenets of the rigid dogma of the religion that ‘believes’ CO2 is bad or will be bad for the planet and its inhabitants (in the amounts that we will ever be capable of liberating), whilst ignoring the hi-jack of the entirety of environmentalism, the diversion of funding from truly worthy causes, and widespread scientific malfeasance on the same watch.
For the last 11 years I have been of a different mind.
I want all of them to face criminal charges. All of them. They all get their day in court to explain why I/we should be hung/drawn/quartered/incarcerated/put on a list/spike etc for questioning their utter lack of evidential support for their smug conjecturing. Or their data-torture. Or their unmitigated sophistry and elitism. Or their abuse of peer-review. Or their ‘hiding’ … well, anything inconvenient. Or their pushing technologies out of the development phase into production decades early. Or their ignorance. Or their blind faith. I could go on.
No turning of the other cheek for me. A decade+ of abuse changes a person.

John West
January 20, 2013 5:52 am

@ oldfossil
When “Best” first started we (at least I) thought they were setting out to better quantify the warming since the LIA; instead they jumped to the conclusion that it’s anthropogenic, with much ado, fireworks, and fanfare.
Let’s say we have 3 equations with 4 unknowns; we can try different combinations of the variables and find one that works (“a” answer) – in fact, we could find several that work. The problem I have is that they are presenting “a” answer as if it were “the” answer. This is so far from acceptable that it is unfathomable that anyone with the credentials of the “Best” crew would present it as scientific.
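
(The “3 equations with 4 unknowns” point can be made concrete with a short numpy sketch – a hedged toy, with an arbitrary matrix chosen only for illustration: a consistent underdetermined system has infinitely many exact solutions, so exhibiting one solution that “works” proves very little.)

```python
import numpy as np

# 3 equations, 4 unknowns: a consistent underdetermined system (arbitrary numbers).
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 3.0],
              [2.0, 0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])

x0, *_ = np.linalg.lstsq(A, b, rcond=None)  # one particular exact solution

# The SVD gives a basis vector for A's null space (rank 3, so dimension 1);
# adding any multiple of it to x0 leaves A @ x unchanged.
_, _, vt = np.linalg.svd(A)
null_vec = vt[-1]
for t in (0.0, 1.0, -2.5):
    x = x0 + t * null_vec
    print(t, np.allclose(A @ x, b))  # prints True for every t
```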

Gail Combs
January 20, 2013 6:00 am

Peter Maddock says:
January 20, 2013 at 2:05 am
…If they make no such statement we are probably correct in our first thoughts.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Do not forget that a Shell Oil president, Marlan Downey, “Former President of the international subsidiary of Shell Oil”, is now part of Muller’s privately held consulting firm….
Muller now claims in print to have been a “Climate Skeptic” who has been ‘reformed’ by the evidence but he never was a “Climate Skeptic” as his 2003 published comments show.
At this point I would not trust Muller or anything he touches. The fact that Judith Curry has no use for his work is also indicative. Observation-based (?) attribution by Judith Curry

Latimer Alder
January 20, 2013 6:02 am

I’m underimpressed that the journal is embedding advertising for its own services actually in the copy for the referenced paper. Here’s what it says:
‘Submit your next manuscript and get advantages of SciTechnol submissions
  • 50 Journals
  • 21 Day rapid review process
  • 1000 Editorial team
  • 2 Million readers
  • More than 5000 Facebook
  • Publication immediately after acceptance
  • Quality and quick editorial, review processing
Submit your next manuscript at http://www.scitechnol.com/submission’
Not that I’ve a huge amount of faith that traditional peer-review adds very much apart from gate-keeping, but surely this is scraping along as pretty much ‘vanity publishing’?
And I wonder if the journal will ever have a second edition? Perhaps ‘Gergis et al’ is still looking for a final resting place…?

Hot under the collar
January 20, 2013 6:04 am

If OMICS has a history of charging the authors after publication http://scholarlyoa.com/2012/05/05/omics-publishing-launches-new-brand-with-53-journal-titles/
have Mosher, Zeke and Judith Curry received / paid the charge? : > )

Latimer Alder
January 20, 2013 6:12 am

I liked this observation from the editor of the new journal (Mark Birkin, School of Geography, Leeds, UK)

And yet it remains the case to an alarming degree that the most gifted exponents of the new generation of modellers are content to experiment with idealised toy systems. Still it seems that for every conference paper or journal article that actually attempts to validate a model there are ten which are content to bemoan the difficulties in doing so, or to identify further development as a headline for future research

And although he was writing about social science, I’m sure his sentiment will find much resonance here.
‘Idealised toy systems’ will be a lasting image, I think.

Gail Combs
January 20, 2013 6:14 am

Bill Illis says:
January 20, 2013 at 2:51 am
I would sure like to see the before and after of how the “scalpel” method, creating 179,000 new stations, changed the overall trend over time….
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Check out DiggingInTheClay; there are several posts on the WUWT “march of the thermometers” there.
Look around this article: The ‘Station drop out’ problem

Richard M
January 20, 2013 6:14 am

It wouldn’t surprise me that BEST had trouble publishing in standard journals for the same reasons skeptics have trouble. Remember, he POed Mann and Mann is likely one of the gatekeepers.
It would be interesting to know if any skeptical papers have been submitted to this journal and what happened.

John West
January 20, 2013 6:23 am

@ oldfossil
Let me put this another way; note this from the abstract:
“the entire change can be modeled by a sum of volcanism and a single anthropogenic proxy.”
Given enough time and resources I’m confident I could show:
The entire change can be modeled by a sum of volcanism and a single solar proxy.
The entire change can be modeled by a sum of radio transmissions and a single anthropogenic proxy.
The entire change can be modeled by a sum of international trade and a single cosmological proxy.
The entire change can be modeled by a sum of volcanism and a single witchcraft proxy.
This doesn’t even come close to “proving” anything; in fact it’s barely supporting evidence.
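
John West’s point is easy to demonstrate in miniature: fit the same synthetic warming trend against two unrelated rising proxies and both score comparably well. The sketch below is illustrative only – the proxies and numbers are invented, not real forcing series:

```python
import numpy as np

years = np.arange(1900, 2011)
rng = np.random.default_rng(1)
temp = 0.007 * (years - 1900) + rng.normal(0.0, 0.1, years.size)  # toy trend, degC

def r_squared(proxy, temp):
    """Least-squares fit temp ~ a*proxy + b; return the R^2 of the fit."""
    X = np.column_stack([proxy, np.ones(proxy.size)])
    beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
    resid = temp - X @ beta
    return 1.0 - resid.var() / temp.var()

log_co2 = np.log(280.0 + 1.0 * (years - 1900))   # stand-in "anthropogenic" proxy
arbitrary = 0.02 * (years - 1900) ** 1.1          # any smooth rising series
print("R^2, log-CO2 proxy:  ", round(r_squared(log_co2, temp), 3))
print("R^2, arbitrary proxy:", round(r_squared(arbitrary, temp), 3))  # similar
```

Goodness of fit alone cannot distinguish the two regressors; extra information (physics, out-of-sample behavior) has to do that work.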

January 20, 2013 6:29 am

GIGS Editors & Editorial Board
Craig ZumBrunnen, PhD University of Washington, USA
Clifford J. Mugnier Louisiana State University, USA
Jeremy Dunning, PhD Indiana University Bloomington, USA
Daniel W. Goldberg, PhD University of Southern California, USA
Christopher Badurek, PhD Appalachian State University, USA
Yong Gang Li, PhD University of Southern California, USA
Yao-Yi Chiang, PhD University of Southern California, USA
Jixiang Wu, PhD South Dakota State University, USA
Darren M. Scott, PhD McMaster University, Canada
Dongmei Chen, PhD Queen’s University, Canada
Kip Jeffrey University of Leicester, UK
Nicholas Tate, PhD University of Leicester, UK
Mark Birkin, PhD University of Leeds, UK
Chris Brunsdon, PhD University of Liverpool, UK
Martin Kappas, PhD Georg-August-Universität Göttingen, Germany
Sandra de Iaco, PhD University of Salento, Italy
Jose-Maria Montero, PhD Castile-La Mancha University, Spain
S.M. de Jong, PhD Utrecht University, Netherlands
Fung Tung, PhD Chinese University of Hong Kong, Hong Kong
Abdulwahab A. Abokhodair, PhD King Fahd University, Saudi Arabia
Itzhak Benenson, PhD Tel Aviv University, Israel
Itzhak Omer, PhD Tel Aviv University, Israel

Gail Combs
January 20, 2013 6:30 am

knr says:
January 20, 2013 at 3:48 am
…… And the worst part is the wall of silence from their fellows over this, which means that when AGW theory falls it will take much more with it.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
You have got that right. I had little respect for the American Chemical Society (ACS) and dropped my membership after thirty years when they endorsed CAGW. What little respect I had left went down the tubes when I read:

Mr. Brad Smith: ACS Office of Public Affairs
Since 1998, Brad has worked for the American Chemical Society’s (ACS) Office of Public Affairs. At the ACS he is actively bridging the gap between practicing chemists and policymakers through advocating ACS policy positions to federal and state policymakers and directing the Society’s grassroots programs. Under Brad’s direction the ACS’ flagship grassroots program (the Legislative Action Network) grew from a few hundred individuals to more than 16,000 active participants. In addition, Brad manages the ACS Local Section Government Affairs Committee and ACS Public Policy Fellowship programs. Brad received his M.A. in U.S. History from Bowling Green State University and his B.A. from Muskingum College.
The Blue Ridge Chemist
Official Local Section Publication of the Virginia Blue Ridge Section, American Chemical Society

Science, WHAT science? ACS is now a lobbying group led by someone who does not even have a degree in science!
ACS is just one of the scientific societies that gave up the scientific method and critical thinking to climb onto the CAGW bandwagon. With luck they will ALL crash and burn with the CAGW bandwagon, and true scientists will move on to establish different societies.

Gail Combs
January 20, 2013 6:42 am

John Trigge (in Oz) says:
January 20, 2013 at 3:57 am
How do they get 95% confidence in figures to 5/100ths of a degree from measurements that are nowhere near this accuracy?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
They didn’t and they cannot. In statistics, if you had thirty thermometers reading the same area simultaneously, you could do statistics on the data to get a better estimate of the true value, with better precision than you had with one reading.
However, this was not done. Instead they fake it by using anomalies. See link and link
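
For illustration, the √N point is easy to check by simulation: averaging N independent, simultaneous readings of the same true value shrinks the spread of the mean like 1/√N – which is exactly the condition noted above as not met by the historical record. A minimal sketch, assuming a 0.5°C per-thermometer error (an invented figure, not a property of any real network):

```python
import numpy as np

rng = np.random.default_rng(42)
true_temp = 15.0   # degC, arbitrary
sigma = 0.5        # assumed independent error of each thermometer, degC

for n in (1, 30, 1000):
    # 100,000 trials of averaging n simultaneous readings of the same value
    readings = rng.normal(true_temp, sigma, size=(100_000, n))
    means = readings.mean(axis=1)
    print(f"N={n:4d}  empirical std of mean={means.std():.3f}  "
          f"theory sigma/sqrt(N)={sigma / np.sqrt(n):.3f}")
```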

Gail Combs
January 20, 2013 6:56 am

Tony Mach says: @ January 20, 2013 at 4:41 am
……*And I want to say: please stop lumping together as “liberals” everybody who disagrees with “the conservative” side. Liberalism is a political direction that is much more distinct than “everybody who is not conservative”. I am not a liberal, though I like some parts of liberalism. The same goes for the conservative side (or the libertarian, or any other political direction). It is simply not true that there are only two sets of political opinions and that you either have one or the other on any topic. It is like saying there are only two types of food: hot and cold. What utter BS….
>>>>>>>>>>>>>>>>>>>>>>>>>>>
Now ain’t that the truth! The real divide is the politicians vs. us, and they want to keep us fighting each other so we do not realize it.
In the USA the ‘left’ gets the ‘blame’ for ‘growing’ the federal government, but if you actually look, it is BOTH parties that grow the government and reward their buddies from the public treasury.
I am somewhat a capitalist (Pro small business and limited government) but consider some socialism the mark of a civilized being. Drives people who want to ‘label me’ nuts.

Joe
January 20, 2013 7:01 am

Steven Mosher says:
January 19, 2013 at 10:22 pm
Do you think we landed on the moon?
———————————————————
Of course we did!
In fact, some of the AGW supporters still seem to be there – they’re certainly not on THIS planet 😉

John West
January 20, 2013 7:03 am

Steven Mosher says:
“And of course the answer doesnt change.
facts. hard to deal with. but thems the facts.”

The answer, that there’s been warming since the LIA, is hardly “hard to deal with”. What’s “hard to deal with” is the continual insistence from people who should know better that warming = anthropogenic warming.
That’s what’s so [self-snip] disappointing about this whole “BEST” episode. I thought we’d finally get a reliable (or at least as reliable as possible) dataset to work with; instead we get another “attribution study” from ignorance that’s touted to be some sort of preeminent treatise from the masters of all knowledge and wisdom. [SPIT]

Joe
January 20, 2013 7:26 am

MangoChutney says:
January 20, 2013 at 4:45 am
http://www.jfdp.org/forum/forum_docs/1013jfdp1040_1_032912094346.pdf
“[…] Last year, for example, the company’s Journal of Earth Science & Climatic Change published a paper that suggested a causal link between Stonehenge and global climate change […]”
————————————————————————————
Still available from scribd.com:
http://www.scribd.com/doc/74678923/The-Stonehenge
Publication of that does make you wonder quite how reliable OMICS’ peer review might be – do reviewers ever have a sense of humour and just let stuff through “for a laugh”?

Bruce Cobb
January 20, 2013 7:49 am

“The period of 1753 to 1850 is marked by sudden drops in land surface temperature that are coincident with known volcanism;”
Ah, how convenient. So, how would they “explain” previous cool periods such as the Dark Ages, or indeed the warm period, the Medieval Warming (or Climate Optimum)?
Are volcanoes now to be the planet’s new natural thermostat? Funny how they seem to coincide with solar activity.

Mike Alexander
January 20, 2013 7:49 am

I’m going to be a voice of dissent here.
I’m glad there is a new Climate journal, especially since this one appears to be open for public viewing and not paywalled. Even if this specific paper doesn’t pass muster down the line (I have no idea about its validity and don’t have time to read it), let’s wait and see what other papers get through the P Rev process before we pass judgement on this new journal.

Jimbo
January 20, 2013 7:52 am

Why didn’t they publish with one of the well-known journals? Imagine if Anthony Watts published his new paper in this journal? There would be howls of protest and much gnashing of teeth.

Doug
January 20, 2013 8:11 am

I’m sure PBS will report on the difficulties BEST has had getting published, and that the legions of outraged viewers will now agree it was proper to interview Anthony.

January 20, 2013 8:22 am

RE: Mosher thinks he is being clever there. But it is actually an insult to everyone here at WUWT. Wallowing in the sewer with leftist liberal psychopaths has harmed him greatly.
[1] I am clearly responding to one commenter’s own insult, which is obviously an insult to anyone who understands conspiracy theorists and does not appreciate being compared to them.
[2] It is obvious that saying ‘Wallowing in the sewer with leftist liberal psychopaths…’ does not mean all liberals are psychopaths or all psychopaths are liberals, in exactly the same way that saying “green Ferraris” does not mean all green things are Ferraris or all Ferraris are green (and thank God for that). Almost every statement someone makes has an exception, so the paradigm of ‘knowing liberals that are not psychopaths’ as somehow invalidating someone else’s opinion can only lead to no opinions being expressed, because there will always be exceptions. Besides, I guarantee that from growing up in the swamp with them I know far more leftist liberals than most here, and the more of them you know, the more like me you will ultimately become.

Jeff Alberts
January 20, 2013 8:22 am

Joe says:
January 20, 2013 at 7:26 am
=========
Odd that they call it “the Stonehenge”. Proper names usually don’t use “the” in front of them, unless you’re trying to emphasize or differentiate, like “the Michael Jackson”.
REPLY: Or “the ‘former skeptic’ known as Muller” – Anthony

Jeff Alberts
January 20, 2013 8:29 am

Mike Alexander says:
January 20, 2013 at 7:49 am
I’m going to be a voice of dissent here.
I’m glad there is a new Climate journal, especially since this one appears to be open for public viewing and not paywalled. Even if this specific paper doesn’t pass muster down the line (I have no idea about its validity and don’t have time to read it), let’s wait and see what other papers get through the P Rev process before we pass judgement on this new journal.

Just because a journal has published bad papers doesn’t seem to matter much. We’ve seen the dreck published in Nature and Science, splashed on the front cover, no less.

Jeff Alberts
January 20, 2013 8:29 am

“REPLY: Or “the ‘former skeptic’ known as Muller” – Anthony”
Indeed 😉

TomRude
January 20, 2013 8:33 am

LOL just in time for AR5… Worldwide media coverage to follow?

Bill H
January 20, 2013 8:41 am

David Davidovics says:
January 19, 2013 at 6:45 pm
Anthony,
If the following link is true, it would certainly explain why you haven’t heard of this scientific journal before. It was launched in 2012:
http://scholarlyoa.com/2012/05/05/omics-publishing-launches-new-brand-with-53-journal-titles/
Quote from link:
“India-based OMICS Publishing Group has just launched a new brand of scholarly journals called “SciTechnol.” This new OMICS brand lists 53 new journals, though none has any content yet.
We learned of this new launch because the company is currently spamming tens of thousands of academics, hoping to recruit some of them for the new journals’ editorial boards.”

=============================
OMICS Publishing is a government-run propaganda group. Ties to the UN and IPCC, with funding from the same and other shadowy folks… I hate it when you follow the money and it ends up in dark places…
I’m going to go out on a limb here, but my guess is no one else will touch the BEST article and this was their ONLY avenue to some form of “relevancy” or “credibility”. Given they followed the IPCC meme to the letter and threw out OTHER possibilities for Global Warming or Climate Change, it’s just a continuation of trying to control the message by controlling journals… and if you can’t control them, make your own…
Why are these people so predictable?

Bill H
January 20, 2013 8:51 am

Bruce Cobb says:
January 20, 2013 at 7:49 am
“The period of 1753 to 1850 is marked by sudden drops in land surface temperature that are coincident with known volcanism;”
Ah, how convenient. So, how would they “explain” previous cool periods such as the Dark Ages, or indeed the warm period, the Medieval Warming (or Climate Optimum)?
Are volcanoes now to be the planet’s new natural thermostat? Funny how they seem to coincide with solar activity.
==============================================
Do you ever get the feeling that, with the early release of AR5 and the fact that the sun is identified as the primary driver, this just might be damage control via the IPCC and UN? This damaged their control agenda badly… Then along comes BEST to set the record straight, but no reputable journal would touch it… so create one…

Joe
January 20, 2013 8:53 am

Jeff Alberts says:
January 20, 2013 at 8:22 am
Joe says:
January 20, 2013 at 7:26 am
=========
Odd that they call it “the Stonehenge”. Proper names usually don’t use “the” in front of them, unless you’re trying to emphasize or differentiate, like “the Michael Jackson”.
REPLY: Or “the ‘former skeptic’ known as Muller” – Anthony
——————————————————————–
“the artist formerly known as…..”
“the Great Gonzo”
Correlation anyone?

DirkH
January 20, 2013 9:00 am

Steven Mosher says:
January 19, 2013 at 10:22 pm
““You probably haven’t heard of it because it is volume 1 issue 1… Must be his own journal.”
Do you think we landed on the moon?”
As Muller is a front for the geo-engineering NOVIM Group and his daughter Elizabeth Muller peddles a “product” called “GreenGov” the assumption that Muller does more shady dealings is not at all absurd. He has already proven himself to be a rent seeker of the first degree; a worthy equivalent to Pachauri. I know, that’s all pretty standard in warmist circles. We know why they do it.
http://wattsupwiththat.com/2011/11/13/the-waxman-markey-circus-is-coming-to-town-dr-richard-muller-to-showcase-best-under-the-bigtop/#comment-796111
http://jer-skepticscorner.blogspot.com/2011/04/best-novim-and-other-solution.html

DirkH
January 20, 2013 9:07 am

Andrew says:
January 20, 2013 at 1:10 am
“I’m sensing the beginnings of what will end in blind panic amongst the Eco-Taliban. By now, even that lot must accept that correlation does not prove causation.”
They are starting to devour their own.
Michael Mann attacks Nate Silver
http://www.huffingtonpost.com/michael-e-mann/nate-silver-climate-change_b_1909482.html
because Silver thinks Armstrong makes better predictions than the GCMs.
About Armstrong, see also here:
http://wattsupwiththat.com/2013/01/19/so-far-al-gore-appears-to-be-losing-the-climate-bet/#comment-1203534

January 20, 2013 9:07 am

Something different but worth seeing

Andrejs Vanags
January 20, 2013 9:12 am

Interesting how they ignore the entire ‘Little Ice Age’, start the temp record from its lowest point, and give the impression it can all be explained by ‘volcanism’.
For once I would like to see a temp reconstruction from the beginning of the Little Ice Age, somewhere about 1400, and let’s see how they explain it.

DirkH
January 20, 2013 9:15 am

And I’d like to direct attention again to this razor sharp demolition of the BEST “scalpel” method.
Stephen Rasey says:
December 13, 2012 at 11:00 am
http://wattsupwiththat.com/2012/12/13/circular-logic-not-worth-a-millikelvin/#comment-1172277
(As climate change or AGW is a low frequency component, “splicing” temperature series makes it impossible to find – splicing destroys exactly the low frequency signal and replaces it with an artificial one)

Jimbo
January 20, 2013 9:17 am

So to summarize the information about this very fine journal we have the following:
1) It was launched in May, 2012
2) It is currently on volume 1 issue 1
3) It has published 1 paper
4) The editor can “Review the manuscript himself without assigning it to reviewers; or”
5) Its parent group has been accused of “tacitly saying it will publish anything”
This journal can now join other find journals like the Journal of Cryptozoology
References
http://scholarlyoa.com/2012/05/05/omics-publishing-launches-new-brand-with-53-journal-titles/
http://www.scitechnol.com/ArchiveJGSD/currentissueJGSD.php
http://poynder.blogspot.co.uk/2011/12/open-access-interviews-omics-publishing.html
http://scitechnol.com/reviewer-guidelines.php

John M
January 20, 2013 9:17 am

Gail Combs says:
January 20, 2013 at 6:30 am

ACS is now a lobbying group lead by someone who does not even have a degree in science!

Minor correction Gail. Brad Smith’s not the leader.
To be completely correct, ACS is a “non-profit” lobbying group/soapbox led by someone who has parlayed a BS in Chemistry and a series of member- and tax-funded writing and PR jobs into close to $1 million in annual compensation.
http://www.chemheritage.org/discover/collections/oral-histories/details/jacobs-madeleine.aspx
http://pipeline.corante.com/archives/2012/09/19/the_american_chemical_societys_lawsuit_problem.php#1084983
Looks like Brad is merely working on demonstrating his bona fides for being a logical successor.
He’ll have to work on building up his “Marie Antoinette” core competencies though.
http://chemjobber.blogspot.com/2012/07/ms-madeleine-jacobs-sympathetic-but.html

Jimbo
January 20, 2013 9:24 am

Meant to say
“This journal can now join other FINE journals like the Journal of Cryptozoology”

thisisnotgoodtogo
January 20, 2013 9:33 am

Lew/Mosh
sinking like a moonstone

thisisnotgoodtogo
January 20, 2013 9:37 am

Lew/Mosh
Will the Editor now resign with a letter of apology to Kevin Trenberth?

TomRude
January 20, 2013 9:39 am

“We thank David Brillinger for important guidance in statistical analysis, Zeke Hausfather, Steven Mosher, and Judith Curry for helpful discussions and suggestions. This work was done as part of the Berkeley Earth project, organized under the auspices of the Novim Group (www.Novim.org). We thank many organizations for their support, including the Lee and Juliet Folger Fund, the Lawrence Berkeley National Laboratory, the William K. Bowes Jr. Foundation, the Fund for Innovative Climate and Energy Research (created by Bill Gates), the Ann and Gordon Getty Foundation, the Charles G. Koch Charitable Foundation, and three private individuals (M.D., N.G. and M.D.). (…)
So THEY are on the payroll of the Koch brothers? /sarc off
Oh and one has to love the name dropping (“created by Bill Gates” wink, wink, ya the Microsoft guy, a great guy, a wonderful individual… LOL)

Peter Miller
January 20, 2013 9:51 am

A circa 0.9 degrees C global temperature increase over the past 60 years is the highest figure I have yet seen.
So presumably a new improved method of data ‘homogenisation’?

Mindert Eiting
January 20, 2013 9:55 am

Lars P.: ‘in addition the huge variances seen in the 19th century which disappeared in the 20th’. This can be explained because the error-variance part of the total variance decreases as the number of stations increases. Much more interesting is a further decrease of variance in the late twentieth century, during the great dying of the thermometers. This is the result of carefully weeding out stations whose time series correlated lowest with those in their environment. That does not reduce error variance (the station number decreased) but natural (local) variance, resulting in an artificial signal. After analyzing GHCN data for about two years, I stopped because the data set is in a hopeless state, apart from ongoing adjustments. I do not trust the BEST results for the same reasons.
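A minimal simulation of the variance bookkeeping described above, with every number invented: each station is modelled as a shared regional signal plus station-specific local variability plus measurement error, and averaging more stations shrinks both station-level terms toward the shared signal, which is one reason a sparse 19th-century network looks noisier than a dense 20th-century one. Weeding stations by inter-correlation, by contrast, removes local variance by selection rather than by averaging, which is the artificial-signal risk being flagged:

import numpy as np

rng = np.random.default_rng(1)
T = 600                                     # months of record
regional = rng.normal(0.0, 1.0, T)          # signal shared by every station

def network_average(n_stations):
    local = rng.normal(0.0, 0.8, (n_stations, T))   # genuine local variability
    error = rng.normal(0.0, 0.4, (n_stations, T))   # measurement error
    return (regional + local + error).mean(axis=0)

for n in (5, 50, 500):
    print(n, round(network_average(n).var(), 3))    # falls toward var(regional)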

January 20, 2013 9:59 am

Latimer Alder says:
January 20, 2013 at 6:12 am
“‘Idealised toy systems’ will be a lasting image, I think.”
Latimer, for an expansion of that image regarding climate models, see the free sceptical cli-fi / sci-fi story here: http://wearenarrative.files.wordpress.com/2012/11/truth.pdf , which features a short visual description of the ‘toy system’.
The story appeared in the WUWT post here:
http://wattsupwiththat.com/2012/12/15/wuwt-spawns-a-free-to-read-climate-sci-fi-novel/
Hope this image fulfills your expectations 🙂

January 20, 2013 10:09 am

Yes, I will analyze this paper by Muller et al.
For me, though, the quintessential story is the review comments and the detailed background qualifications (the CVs) of all reviewers and editors of the journals that did not accept the Muller et al. paper, as well as of those who did accept it. That is valuable scientific content, process and context that is hidden. I am fundamentally disappointed in the scientific structure and process we see in this supposedly modern era.
John

Glenn
January 20, 2013 10:31 am

This “OMICS Group” have been *BUSY* beavers. You name it, there’s a “journal” or “conference” for it:
http://www.google.com/#q=%225716+Corsa+Ave.,+Suite+110+Westlake,+Los+Angeles%22&hl=en&tbo=d&ei=XDb8ULuTC6S8iwLzmIDwBw&start=10&sa=N&bav=on.2,or.r_gc.r_pw.r_qf.&fp=94e32a4cea89f529&biw=1280&bih=879

January 20, 2013 11:02 am

I have taken a look at the publisher of the new journal, and it appears to be very much a lower-tier academic publisher.
http://newzealandclimatechange.wordpress.com/2013/01/21/the-best-article-finally-published-sort-of/
It all looks very dubious.

Steven Mosher
January 20, 2013 11:34 am

“So presumably a new improved method of data ‘homogenisation’?”
nope. just a method proposed and endorsed by skeptics before they saw the result.

Steven Mosher
January 20, 2013 11:36 am

Dirk
“As Muller is a front for the geo-engineering NOVIM Group and his daughter Elizabeth Muller peddles a “product” called “GreenGov” the assumption that Muller does more shady dealings is not at all absurd. He has already proven himself to be a rent seeker of the first degree; a worthy equivalent to Pachauri. I know, that’s all pretty standard in warmist circles. We know why they do it.”
except Muller had nothing whatsoever to do with the selection of the journal. Zero. zip nada.
so much for your conspiracy theory.

Sam the First
January 20, 2013 11:42 am

Gail Combs wrote: “I am somewhat a capitalist (Pro small business and limited government) but consider some socialism the mark of a civilized being. Drives people who want to ‘label me’ nuts.”
I sympathise, being in the same position. Sadly it means old friends from both sides of the divide tend to alienate themselves, as they cannot bear dissenting opinions – much like with the climate debate, which is a greatly divisive topic.
As a matter of record regarding the usage “the Stonehenge”, and I’m speaking as a UK reader who has co-authored a book on the place: it would be correct to write “the stone henge” referring to any henge comprised of, e.g., monoliths. It is never, however, correct to refer to “the Stonehenge” – all one word – when referencing the specific site of Stonehenge in Wiltshire. The poster who queried this is correct – it’s just never done here.
So far as this journal is concerned, my antennae went up the moment I saw it was one of a raft of new journals based in India; and I wasn’t a bit surprised to read, further down the thread, that it’s backed by the Indian govt and the UN. I wonder what the links are to the IPCC! (and a certain famous Indian)

Steven Mosher
January 20, 2013 11:47 am

“Mosher thinks he is being clever there. But it is actually an insult to everyone here at WUWT. Wallowing in the sewer with leftist liberal psychopaths has harmed him greatly.”
It’s not an insult to everyone. It’s a question. He can simply say that he believes we landed on the moon. The point is rather simple. With no evidence whatsoever the commenter speculated that Muller has something to do with the journal. Muller has nothing to do with the journal. Muller didn’t even know about the journal until it was presented as an option. The commenter had a theory, developed out of whole cloth. Much like Mann’s theory that McIntyre is an oil shill. This kind of thinking is shallow, predictable, and not the kind of speculation that should be encouraged on either side. I didn’t like it when Mann accused McIntyre of being an oil shill, and since I am consistent I don’t like it when others engage in speculation without any basis whatsoever.
I will say that if you guys read the reviews from JGR you’d be outraged about some of the tangential issues raised.

Manfred
January 20, 2013 11:49 am

A bit disappointing that the paper mixes good efforts with that silly 2-variable regression, which cannot explain any warm climate prior to the Little Ice Age. At least they try to publish the outdated UHI results elsewhere. Of course, it would be better to try to verify Watts’s new classification first.

Kev-in-Uk
January 20, 2013 11:50 am

Steven Mosher says:
January 20, 2013 at 11:34 am
“So presumably a new improved method of data ‘homogenisation’?”
nope. just a method proposed and endorsed by skeptics before they saw the result.
That’s fine Steve, but aren’t you also jumping the gun? We haven’t seen the workings yet! We must await all the data and code being released before proper assessment in due course, eh?
I agree and accept that the conspiracy fanatics will jump on the publication issue, perhaps unfairly – but you have to admit that it looks awfully odd that a paper ‘lauded’ as the be-all and end-all cannot get reviewed in ‘standard’ journals, but instead appears in a brand-spanking-new journal with dubious review criteria. FFS – we have waited this long for the paper; we could have waited longer for a proper peer-review process!

TomRude
January 20, 2013 11:50 am

“I will say that if you guys read the reviews from JGR you’d be outraged about some of the tangential issues raised.”
Go for it Mosher, let us see them…

Editor
January 20, 2013 11:52 am

The full entry for the Review process at Geoinformatics and Geostatistics is as follows:
“Review Process
The Review process for articles being published in SciTechnol Journals is carried out in an easy and quick manner. The submitted manuscript is assigned to one of the Editorial Board Members based on their area of interest. If the Editor agrees to accept the assignment, he can choose any of the three ways:
Review the manuscript himself without assigning it to reviewers; or
Assign atleast 3 potential reviewers for the review process; or
Ask the Associate Managing Editor of the Journal to assign reviewers on his behalf.
The Assigned Reviewers have to submit their review comments within a period of two weeks either to the Assigned Editor or submit it directly to the Editorial Office of the Journal.
The Reviewer has to submit his/her comments in the Electronic Review Form that is sent alongwith the Manuscript whereby he/she can:
Reject the manuscript; or
Re-review after a thorough revision; or
Accept the manuscript with Major Revisions; or
Accept the manuscript with Minor Revisions; or
Accept the manuscript without any changes.
The review comments are then submitted to the Editor who will make a final decision whether to accept, reject or revise a manuscript. The author is notified at the same time with the Editor’s decision and the manuscript is preceded further to publication (if accepted).
The submitted manuscript is published after 7 days from the date of acceptance. ”
There is no indication in the article itself (note: It is the ONLY article in Volume 1 Issue 1) that it has been peer-reviewed. No “Thanks to Reviewer #1…” no notation of peer-review, no nothing in the published piece.
There is no list of editors on the journal site. The associated “Science Blogs” section contains nothing but short throw-away (three or four paragraph) UNSIGNED boiler-plates.
The About Us page has no information about them. The Contact Us page has a web form but no email addresses or names.
So far, for all we are able to know, SciTechnol created this “journal” for the sole purpose of giving print to the BEST paper — which may have been reviewed by a single editor — the same one who agreed to set up the journal for it.
This does not feel right to me….
BEST had lots of important support — financial, social, and scientific — so why is its premier paper only being brought to light in this way?

John West
January 20, 2013 11:53 am

The paper is mostly [self-snip] in its conclusions, but the data seems at least somewhat defensible. Basically, it says if I make the same assumptions as the IPCC crowd I get about the same results as the IPCC crowd. Big surprise there; I hope the Nobel nomination committee is paying attention.
From the paper:
”Our analysis does not rule out long-term trends due to natural causes; however, since all of the long-term (century scale) trend in temperature can be explained by a simple response to greenhouse gas changes, there is no need to assume other sources of long-term variation are present.”
Since combustion can be explained by phlogiston there’s no need to assume phlogiston doesn’t exist.
Since our analysis does not rule out natural causes, it must mean I can proclaim to the MSM there’s no reason to be skeptical anymore.
Perhaps I’m a little bitter, paint me naïve, but when the “BEST” project was first announced I had anticipated them coming out with something like: after extensive analysis of available data we’ve determined there’s been “X” warming (±“Y”) over “Z” period (here’s the graph). What I did not expect was another conclusion-jumping, headline-seeking, a-priori-riddled, variable-ignoring, unjustifiably-assuming, Zohneristic, disturbingly inflated, overhyped propaganda piece.
Ok, now that that’s out of the way, the section on Diurnal range could prove useful.
”The physics is that greenhouse gases have more impact at night when they absorb infrared and reduce the cooling, and that this effect is larger than the additional daytime warming. This predicted change is sometimes cited as one of the “fingerprints” that separates greenhouse warming from other effects such as solar variability. “
Agreed!
”The rise takes place during a period when, according to the IPCC report, the anthropogenic effect of global warming is evident above the background variations from natural causes.”
Oh? Now why could that be? Perhaps they’ve made some incorrect assumptions. Perhaps the CO2 effect peaked in the late ’80s. Perhaps it has nothing to do with CO2 but with solar activity, which also peaked in the late ’80s. Perhaps you’re all just blowing smoke.
”We are not aware of any global climate models that predicted the reversal of slope that we observe. “
But the models are produced from settled science, the observations must be wrong. /sarc

Editor
January 20, 2013 11:54 am

eGads! It has the same sort of stink the ‘diatoms from space’ paper had — written and published by a guy in his own personally controlled ‘journal’.

Manfred
January 20, 2013 11:58 am

Steven Mosher wrote, out of thin air, about the hundreds of peer-reviewed studies showing high solar/climate correlation:
December 16, 2012 at 7:53 pm
“…Here you see a common error that gets repeated over and over again in solar papers. There are an infinite number of climate variables and combinations thereof. They select (who knows how) looking at temperatures in Norway, and Europe. They start to play with solar cycle length data. They canvas various ways others have looked for correlations and failed to find them, various ways of smoothing the data, not smoothing; all of these are bites at the statistical apple. Through a variety of decisions (all untested) they then happen upon a relationship between one particular manipulation of a solar parameter (cycle length) and another selection of climate parameter. That is neither good faith nor bad faith. That is hunting for a relationship until you find one. …”
lsvalgaard wrote at December 16, 2012 at 3:29 pm about the opposite:
“And there are not that many proxies of solar activity. Everybody uses the same ones or obsolete [and perhaps carefully picked] versions of same.”
——————————–
Steven Mosher, do you think we landed on the moon?

Don Monfort
January 20, 2013 12:11 pm

“Muller didn’t even know about the journal until it was presented as an option.”
That is believable. Virtually nobody had heard of it, until yesterday. Were there any other options, Steven?

Steven Mosher
January 20, 2013 12:18 pm

“Then along comes BEST to set the record straight but no reputable journal would touch it… so create one…”
so have we landed on the moon?

thisisnotgoodtogo
January 20, 2013 12:19 pm

“Much like Mann theory that McIntyre is an oil shill.”
So not so much like No Moon Landing?

Editor
January 20, 2013 12:22 pm

Dear Mosher –> You seem to have inside information, as in:
” Muller has nothing to do with the journal. Muller didn’t even know about the journal until it was presented as an option. “

So tell us all please — Why was this paper published in this shockingly obscure, brand-new journal? Was it actually Peer-Reviewed (notice the initial caps, please) — was it really sent out in its entirety to at least three world-class, respected experts in the necessary fields, let’s say climate, statistics, and computer modelling, and thoroughly vetted, revised, etc., before publication?
Please only reply with what you know for certain yourself, from your actual personal experience. If you are going to relate “what you’ve been told” — please tell us what you have been told and by whom… supply quotes, please.

thisisnotgoodtogo
January 20, 2013 12:25 pm

“Muller didn’t even know about the journal until it was presented as an option”.
Demonstrating Muller has the better spam filter?
Or that he didn’t know of this opportunity until he saw the spam?

Steven Mosher
January 20, 2013 12:28 pm

John West says:
January 20, 2013 at 5:52 am
@ oldfossil
When “Best” first started we (at least I) thought they were setting out to better quantify the warming since the LIA, instead they jump to the conclusion that it’s anthropogenic with much ado, fireworks, and fanfare.
Let’s say we have 3 equations with 4 unknowns, we can try different combinations of the variables and find one that works (“a” answer), in fact we could find several that work. The problem I have is they are presenting “a” answer as if it is “the” answer. This is so far from acceptable it is unfathomable that anyone with the credentials of the “Best” crew would present it as scientific.
###############
it is not presented as THE answer. The argument is entirely different. It goes like this. It starts with givens or assumptions.
1. Given: CO2 causes warming
2. Given: Volcanos cause cooling
If you take those two givens you can explain the temperature rise with a residual that looks like AMO.
pretty simple. Now, you can object to #1 or object to #2 or both.
It doesn’t “prove” global warming. It says to believe who believe in AGW– yup, this data is consistent with the theory. Nothing more.
It says to people who dont believe..
“Our analysis does not rule out long-term trends due to natural causes; however, since all of the long-term (century scale) trend in temperature can be explained by a simple response to greenhouse gas changes, there is no need to assume other sources of long-term variation are present. If all of the residual evolution during the last 150 years is assumed to be natural, then it places an upper 95% confidence bound on the scale of decadal natural variability at ± 0.17°C. Though non-trivial, this number is small compared to what our correlation analysis suggests may be anthropogenic changes that occurred during the last century.”
with natural variability at .17C per decade, you of course expect to see periods of “pause” or retreat in an otherwise increasing trend.
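A minimal sketch of the two-given fit described above, with every series invented for illustration (the paper fits its own decadally smoothed forcing data): regress temperature on log CO2 and a volcanic sulfate series, and whatever the two regressors cannot absorb is the residual said to resemble the AMO:

import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1850, 2011)
co2 = 280.0 + 0.25 * (years - 1850) ** 1.25          # invented CO2 path, ppm
volc = np.zeros(years.size)
volc[years == 1883] = 100.0                          # invented Krakatau-like spike, Tg
temp = 0.9 * np.log(co2 / 280.0) - 0.01 * volc + rng.normal(0.0, 0.1, years.size)

X = np.column_stack([np.ones(years.size), np.log(co2), volc])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)      # ordinary least squares
residual = temp - X @ coef                           # the unexplained part
print(coef)                                          # recovers roughly [0, 0.9, -0.01]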

Steven Mosher
January 20, 2013 12:30 pm

errata
“. It says to believe who believe in AGW– yup, this data is consistent with the theory. Nothing more.”
. It says to THOSE who believe in AGW– yup, this data is consistent with the theory. Nothing more.

Steven Mosher
January 20, 2013 12:34 pm

Steven Mosher says:
January 19, 2013 at 10:12 pm
Steve, there is no doubt that some of the data is ‘bad’. I don’t have a problem with that, it is perfectly normal in science to ignore bad data if it can be shown to be bad/suspect, etc.
What is NOT normal is to include/exclude data, kind of, at will – to suit one’s agenda or anticipated findings. To avoid accusation of this, you must provide the full monty of data, used/unused/adjusted, etc – do you not agree?
###############
yes. If you go to the data portion you will find:
A) a link to all sources.
B) a datafile containing all the source data reformatted to our format (multi-series data)
C) a datafile after the removal of duplicate stations
D) a datafile prior to QA
E) a datafile after QA.
The data that I hope to have up in due course would be the scalpeled data. That is, showing where all the cuts are made. In due course the data for every station will be online with charts showing where the scalpel was applied and why it was applied. Or you can download the code and see for yourself. This is what I did with GISS for example.

Steven Mosher
January 20, 2013 12:36 pm

“There is no indication in the article itself (note: It is the ONLY article in Volume 1 Issue 1) that it has been peer-reviewed. No “Thanks to Reviewer #1…” no notation of peer-review, no nothing in the published piece.”
The article was reviewed by three anonymous reviewers.
do you believe we landed on the moon?

Steven Mosher
January 20, 2013 12:38 pm

DirkH says:
January 20, 2013 at 9:15 am
And I’d like to direct attention again to this razor sharp demolition of the BEST “scalpel” method.
##############
The scalpel method (as endorsed by Willis) works. See the AGU poster for a double-blind test of the method.

mike ozanne
January 20, 2013 12:41 pm

In the dim and distant past of the UK we had a “glam rock” group called Slade. In the natural course of events, they toured America, following the ever-present UK pop group dream of ‘breaking the States’. During the course of the tour Noddy Holder, the lead singer, helped two women escape a fire that broke out at their hotel. This led the rest of the group to remark that his record with the fair sex must be pretty poor if he was reduced to setting fire to hotels and then offering to “rescue” anything attractive that was milling around in the chaos. In similar vein, it seems that BEST’s luck with peer review is so poor they are having to start a journal………

Steven Mosher
January 20, 2013 12:41 pm

“They didn’t and they can not. In statistics if you had thirty thermometers reading simultaneously the same area at the same time you could do the statistics on the data to get a better estimate of the true value and get better precision than you had with one reading.”
well this is wrong. you can see the methods memo. or you can take some stats from Brillinger.
The temperature field is predicted to +- 1.6C with a nugget of .46C on monthly temps.
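Since the term surfaces again further down the thread: in kriging, the “nugget” is the semivariogram’s jump at zero separation, the variance that remains even between co-located measurements (instrument error plus unresolved micro-scale variability). A minimal sketch with an exponential variogram model; the range is invented, and reading the two quoted numbers as standard deviations (hence the squaring) is an assumption:

import numpy as np

def semivariogram(h_km, nugget=0.46**2, sill=1.6**2, range_km=1000.0):
    # gamma(h) = nugget + (sill - nugget) * (1 - exp(-h/range))
    return nugget + (sill - nugget) * (1.0 - np.exp(-h_km / range_km))

print(semivariogram(0.001))    # ~nugget: the variance floor at (almost) zero distance
print(semivariogram(5000.0))   # ~sill: the variance between far-apart points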

Steven Mosher
January 20, 2013 12:46 pm

“Just because a journal has published bad papers doesn’t seem to matter much. We’ve seen the dreck published in Nature and Science, splashed on the front cover, no less.”
There was a time when WUWT stood against the nonsense of worshipping peer review. Partly because there were papers with no code and no data and you could not check for yourself. Also, because folks here were well aware of the politics involved in peer review. You all saw what happened to O’Donnell’s paper when one reviewer was determined to hold it up. Ask yourself…
Do you think it was a skeptical reviewer who objected to us taking the record back beyond 1850? really?

Steven Mosher
January 20, 2013 12:51 pm

jim2 says:
January 20, 2013 at 5:33 am
It appears at least that Mosher is carrying a lot of water for these guys but produces little in the way of links to the “raw” data, nor in other cases does he back his various assertions in any other way.
Data:
http://berkeleyearth.org/data/
Code
http://berkeleyearth.org/our-code/
Memos
http://berkeleyearth.org/available-resources/
On Hansens claims about extremes
http://berkeleyearth.org/pdf/hausfather-hansen-memo.pdf
http://berkeleyearth.org/pdf/wickenburg-hansen-memo.pdf
Methods comparisons
http://berkeleyearth.org/pdf/robert-rohde-memo.pdf
( for the non math types )
http://berkeleyearth.org/pdf/visualizing-the-average-robert-rohde.pdf
Scalpel
http://berkeleyearth.org/images/agu-2012-poster.png

Don Monfort
January 20, 2013 12:56 pm

“The article was reviewed by three anonymous reviewers.
do you believe we landed on the moon?”
The moon landing thing is wearing thin. The so-called journal, GIGS, has zero credibility. And you know it.

Steven Mosher
January 20, 2013 12:57 pm

Jimbo says:
January 20, 2013 at 7:52 am
Why didn’t they publish with some of the well known journal? Imagine if Anthony Watts publishes his new paper at this journal? There would be howls of protest and much gnashing of teeth.
#############
howls of protest?
not from me. in fact I would suggest that Anthony submit his work to this journal. The reviewers are knowledgeable and helpful. There is no paywall. and the turnaround time was very good.
As long as people provide data and code, you can do what you should do ANYWAY: check for yourself. For example, Ross did a paper on UHI. It passed peer review. Nobody caught the data errors, but one day I looked at his data and “blam” data error.
But if I read a paper published in Nature that doesn’t have the data and code, then I’m not even considering it.

Steven Mosher
January 20, 2013 1:03 pm

“The supplementary pdf isn’t all that helpful either. I think they will need to produce very detailed methodology, e.g. how they have dealt with station dropouts and outliers – with actual demonstrated ‘outlier’ procedures, code, etc; along with the datasets, and if necessary all the little ‘notes’ describing how/why stuff was adjusted – UHI anyone?. As it stands, I am suspicious and I am sure Anthony will remain so until proven otherwise.”
The detailed methodology is here
http://berkeleyearth.org/pdf/methods-paper.pdf
and here
http://berkeleyearth.org/pdf/methods-paper-supplement
the code is here
http://berkeleyearth.org/our-code/
The easiest way to understand how the method works is to look at this
http://berkeleyearth.org/pdf/visualizing-the-average-robert-rohde.pdf
Then if you want to see which method is best, you can look at how various methods perform using synthetic data. This is the first time various averaging methods have been evaluated against a ground-truth dataset:
http://berkeleyearth.org/pdf/robert-rohde-memo.pdf

Kev-in-Uk
January 20, 2013 1:12 pm

Steven Mosher says:
January 20, 2013 at 12:46 pm
you’re right – peer review is what it is, and varies according to journal/purpose, etc. In this case, obviously, the peer review needs to be done by folks of the specialist statistical-analysis and computing variety, as well as bog-standard climate boys.
I am aware of the availability of the BEST data, but sadly, being only computer literate from the age of Fortran IV, and basically a software ‘user’ since, I am not familiar with the code used to make the analysis. I don’t doubt that if I were unemployed I could sit and learn it – but it’s out of reach from a time perspective at the moment! So, in terms of peer review, I rely on someone who has those skills at hand to do the review for me, so to speak. In practice, that, of course, is the actual purpose of peer review – it means the data and methods are checked (properly), so that others don’t have to. The beef with the AGW-meme peer-review process is that it is clearly evident to be mostly pal review, and certainly without much public availability of data – just ask Mann for his!
To return to my earlier post regarding the datasets – BEST uses others’ data – and as far as I can tell this is not the raw data? So, as far as I can deduce, BEST have taken averaged data, from homogenised and adjusted datasets, and re-averaged it with some spatially weighted formula, etc. So, we are basically saying that BEST used potentially flawed data, and averaged it? – and everyone wonders why it still shows the same trend? – without proper UHI adjustment, etc…
So – let me make a reasonable presumption. The BEST dataset is simply a better average/combination of other ‘source’ datasets in a combined fashion – and has nothing to do with special checking and quality control of each and every dataset and their subsets themselves? Is that a fair presumption; is that right? If so, IIRC, that was not the initial intention of the BEST project – in fact, I’m sure I recall them saying they wanted to sort out the UHI issue, etc. (but I may be wrong, or perhaps they simply moved the objective after the initial PR?)

Auto
January 20, 2013 1:20 pm

From the paper in question [page 6]:
“Many of the changes in land-surface temperature follow a simple linear combination of volcanic forcing (based on estimates of stratospheric sulfate injection) and an anthropogenic term represented here by the logarithm of the CO2 concentration.”
Yet their own Figure One shows their ‘Land Average’ falling from 1800 or so, to a low about 1815-1818 [by eye – as noted somewhere above the resolution of the figures is not good in the .pdf, even at about 300%].
The ‘Land Average’ also seems to have been falling sharply from about 1775 to a low about 1786 – again by eye.
Yet these lows seem to be attributed to vulcanism – page 4 of the paper:
“Most of the eruptions with significant stratospheric sulfate emissions can be qualitatively associated with periods of low temperature. For most of these events, the volcano and year of emission is historically known. These include the eruptions of Laki in 1783, Tambora in 1815, and Cosiguina in 1835. The sulfate spike at 1809 does not have an associated historical event, and it might consist of two separate events [22,23]. The famous explosion of Krakatau in 1883 is smaller (measured in sulfates) than these other events.”
So it looks as if – on their figures – temperature is a very good leading indicator for vulcanism.
[As well as hemlines? Or do hemlines lead?]
And their figures – from pages 2 and 3:
“To perform the average, the surface of the Earth was divided into 15,984 elements of equal area and weighted by the percentage of land at each spot; 5326 of them had >10% land. For each month, Berkeley Average creates an estimated temperature field for the entire land surface of the Earth using Kriging to interpolate the available temperature data.”
So almost two thirds of the globe was not modelled.
Areas with slightly over 10% land [of the 31,900 sq km – or 12,300 square miles – or so of each ‘element’] seem to be treated as equal to those of 100% land.
And the average is taken in monthly chunks.
They candidly say their results are based on a handful of stations – page 3:
“The Berkeley Average procedure allows us to use the sparse network of observations from the very longest monitoring stations (10, 25, 46, 101, and 186 sites in the years 1755, 1775, 1800, 1825, and 1850 respectively) to place limited bounds on the yearly average.” (So even in 1850, each station must do proxy, proportionately, for over a million square miles.) But their Figure 1 seems to show changes of global temperature – up and down – of two degrees or so [C] in a decade [again by eye] in the Eighteenth Century, and large parts of a full degree even into the 1870s – in a decade or so, and again – up and down.
The 1930s seem to have – pretty much – vanished.
The maths – scalpel effects and so on – seems to have been debunked already
[DirkH says:
January 20, 2013 at 9:15 am
Thanks, Dirk].
I won’t comment further.
It just seems to be – how can I phrase this politely? – the sort of paper that a journal of this sort – brand new, seeking to establish itself, and none too careful about its sphere of interest – would reasonably be guessed to publish early in its existence.
And, anyway – hasn’t the temperature ‘now’ been seen to be the same as the temperature 13, 15 or more years ago? There has been variation – ‘weather’ we call it here in snowy England – but despite the heady trigonometry of The Team, x = 0.
Auto
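On the >10% weighting point above: a minimal sketch of what “weighted by the percentage of land” would mean if taken literally, with invented cell values. On this reading a 15%-land cell enters the average at weight 0.15 rather than counting the same as a 100%-land cell, although the quoted text leaves the question open:

import numpy as np

land_frac = np.array([0.00, 0.15, 0.60, 1.00])   # invented equal-area cells
cell_temp = np.array([0.0, 1.2, 0.8, 1.1])       # invented cell anomalies, deg C
used = land_frac > 0.10                          # the ">10% land" cells
weights = land_frac[used]                        # weight by land percentage
print(np.average(cell_temp[used], weights=weights))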

Steven Mosher
January 20, 2013 1:20 pm

“And just to keep it simple in respect of wondering how good the ‘current’ dataset record is. I would like to know ONE, yes, only ONE – temperature dataset that has been maintained and recorded for a decent period of time, with each and every RAW recorded reading, still in its ‘pristine’ condition, with a description of each and every recorded adjustment and the reason for such adjustment from day one, such that the TRACEABILITY of the currently used ‘value’ can be worked all the way back, without break, to the raw data. In effect, after the folks crowing on about the 1780′s thermometer readings for Sydney not being traceable to known/validated calibrations, etc – can you, or anyone else demonstrate an adequate (read any, IMHO) level of traceability for the current longer term temp datasets?
Now, I know you have been asked this before – but I am asking once again, DO YOU (or anyone else) KNOW OF SUCH A DATASET? Where is it – and is it publicly available? If not, why not?”
These are good questions, but the notion that there is something called raw data bears examination. There is no such thing. There is a first known report. So, for example, even if you have a log book, you don’t know that the log is actually a “raw” report. What you know is that this is the first known report. For example: I read a thermometer. I scribble down the temperature. I transfer that writing to my log book. Which report is “raw”? You can’t tell from the document. You have to trust the document; it doesn’t self-validate. So I distinguish between records that are known first reports and those that explicitly claim that they are the result of an adjustment process. In all cases where there are first reports we use first reports. There are a few cases (a data paper is coming along) where the only record that exists is a record known to be adjusted. These are exclusively monthly stations. So, one thing I suggest people do is work with daily data ONLY, because typically adjustments are made on monthly products and not daily products. The best source here is GHCN daily, which is “raw”, or a first report.
You can tell it’s “raw” because it’s full of impossible measures like 15000C or -600C, or the same value stuck for 30 days. This raw data includes a QC flag for every measure. You can use the data “raw”, or apply the QC flag and remove the data.
Funny story: I was doing a global average using only GHCN daily. The answer came out matching others, except it was a bit hot. Oops, I forgot to apply the QC flags, so I had all this clearly wrong “raw” data. Once I applied the flags (i.e. got rid of values like 15000C) the series cooled a bit and matched better.
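A minimal sketch of the QC step described in that comment, on an invented GHCN-Daily-like table (in the real files temperatures are stored in tenths of a degree and a non-blank quality flag marks a failed check):

import pandas as pd

# invented daily records: tmax in tenths of deg C; non-blank qflag = failed QC
df = pd.DataFrame({
    "tmax": [215, 15000, -600, 220],
    "qflag": ["", "X", "X", ""],
})
raw_mean = df["tmax"].mean() / 10.0                      # polluted by impossible values
clean_mean = df.loc[df["qflag"] == "", "tmax"].mean() / 10.0
print(raw_mean, clean_mean)                              # ~370.9 vs 21.75 deg C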

DirkH
January 20, 2013 1:25 pm

Steven Mosher says:
January 20, 2013 at 12:38 pm
“DirkH says:
January 20, 2013 at 9:15 am (Edit)
And I’d like to direct attention again to this razor sharp demolition of the BEST “scalpel” method.
##############
The scalpel method (as endorsed by Willis) works. See the AGU poster for a double-blind test of the method.”
Well, when I come across an AGU poster I’ll make sure to check whether it is about the scalpel method.
I’m sure you found SOME low frequency component after the cutting and stitching that you could then interpret as the signal of climate change. There is nothing in the world that prevents a low frequency signal from popping into existence after one stitches together two signals.
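A minimal illustration of the claim, with an invented series: feed in a pure linear trend (the low-frequency signal), cut it in two, and align each fragment to its own mean, the naive splice. Most of the trend disappears. Whether BEST’s joint-offset solution behaves this way is exactly what is in dispute; this only shows that naive splicing can do it:

import numpy as np

t = np.arange(200)
x = 0.01 * t                                     # pure low-frequency signal: a trend
a, b = x[:100], x[100:]                          # "scalpel": cut the record in two
spliced = np.concatenate([a - a.mean(), b - b.mean()])   # align each piece to zero
print(np.polyfit(t, x, 1)[0])                    # 0.01, the true slope
print(np.polyfit(t, spliced, 1)[0])              # ~0.0025: most of the trend is gone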

Auto
January 20, 2013 1:31 pm

I wrote –
The maths – scalpel effects and so on – seems to have been debunked already
[DirkH says:
January 20, 2013 at 9:15 am
Thanks, Dirk].
I won’t comment further.
=======
I see Steven Mosher has some doubts about the debunking, posted whilst I was scribing.
I’m not qualified to discuss – he may be right.
Auto

DirkH
January 20, 2013 1:36 pm

Steven Mosher says:
January 20, 2013 at 11:36 am


Dirk
“As Muller is a front for the geo-engineering NOVIM Group and his daughter Elizabeth Muller peddles a “product” called “GreenGov” the assumption that Muller does more shady dealings is not at all absurd. He has already proven himself to be a rent seeker of the first degree; a worthy equivalent to Pachauri. I know, that’s all pretty standard in warmist circles. We know why they do it.”
except Muller had nothing whatsoever to do with the selection of the journal. Zero. zip nada.
so much for your conspiracy theory.”

So you are saying that it is a conspiracy theory that Muller is a front of the NOVIM group that tries to sell geo-engineering, and that Muller’s daughter tries to sell a “product” called “GreenGov”; in other words, you say that Muller is not a rent-seeker who tries to feed on the CO2AGW gravy train wherever he can and with whatever means possible?
Because that is all that I said – I did not say he founded his own journal – I only said that given what we know about his business activities that it would be perfectly reasonable to assume he did such a trick.
We also know that other warmists use every opportunity to use the CO2AGW theory to make money hand over fist; see Mann’s lecture fees or Hansen’s awards. We know that they are without exception rent-seekers. I would call this the null hypothesis.
Do you need more examples? Ask and ye shall receive.

Editor
January 20, 2013 1:47 pm

Steven Mosher says:
January 19, 2013 at 10:22 pm

“You probably haven’t heard of it because it is volume 1 issue 1… Must be his own journal.”

Do you think we landed on the moon?

Steven, after all the sh*t that people have given me in particular and skeptics in general for not publishing in the “proper” journals, and after all the bullshit that Richard Muller put out in his pre-press press release about how this was going to be published in the scientific journals and it was in review … after all that, surely you can’t be surprised that people are calling you out for BEING UNABLE TO GET YOUR BRILLIANT WORK PUBLISHED IN A HIGH IMPACT JOURNAL.
I’m sorry, my friend, but you guys brought this on yourself. Your whining that people are conspiracy theorists is a pathetic response. Admit you couldn’t convince “Science” or “Nature” or a single high-impact journal to publish what you are so proud of, and move on. It’s hilarious that you couldn’t, after Muller made all the claims about how his work was going to be in the journals so he could justify publishing results without data or code …
You’re embarrassing yourself by trying to defend the indefensible, Steven, particularly since at the end of the day it is meaningless where it is published, other than proving that Muller was just blowing smoke when he said that the paper was in review at JGR.
The only valid question is, is it true? It may well be true, I take no position on that … but you’re not helping people come to that conclusion.
w.
… Muller publishes in Volume 1, Number 1 of a brand new journal, and you get your knickers in a twist because people are pointing that out? Steven, his publishing in Vol. 1 No. 1 is too funny, and if you don’t get the joke, well, then, I guess the joke is on you …

Jeff Alberts
January 20, 2013 1:50 pm

Steven Mosher says:
January 20, 2013 at 12:46 pm
“Just because a journal has published bad papers doesn’t seem to matter much. We’ve seen the dreck published in Nature and Science, splashed on the front cover, no less.”
There was a time when WUWT stood against the nonsense of worshipping peer review. Partly because there were papers with no code and no data and you could not check for yourself. Also, because folks here were well aware of the politics involved in peer review. You all saw what happened to O’Donnell’s paper when one reviewer was determined to hold it up. Ask yourself…
Do you think it was a skeptical reviewer who objected to us taking the record back beyond 1850? really?

You quoted me in your reply, but you’re assuming a lot of things I didn’t say. I don’t speak for WUWT, only myself. I said nothing about Peer review or specific reviewers of specific papers. Bad papers which receive a lot of attention need to be rebutted, no matter which journal, no matter who the reviewers are.

Manfred
January 20, 2013 1:56 pm

Steven Mosher says:
January 20, 2013 at 12:28 pm
it is not presented as THE answer. The argument is entirely different. It goes like this. It starts with givens or assumptions.
1. Given: CO2 causes warming
2. Given: Volcanos cause cooling
If you take those two givens you can explain the temperature rise with a residual that looks like AMO.
pretty simple. Now, you can object to #1 or object to #2 or both.
It doesn’t “prove” global warming. It says to believe who believe in AGW– yup, this data is consistent with the theory. Nothing more. It says to people who don’t believe…
———————————————–
Starting with other assumptions gets opposite results:
Regression of only natural forcings explains at least half of the warming (even though the ENSO process is not correctly represented by the ENSO index)
http://wattsupwiththat.com/2012/10/17/new-paper-cuts-recent-anthropogenic-warming-trend-in-half/
Bob Tisdale can explain ALL of the recent global warming if he only considers ENSO.
The bottom line: different assumptions have to be compared and tested on longer time scales. The BEST model already fails on the Medieval Warm Period, and indeed on about 95% of the time since the last ice age. It cannot explain variation before the Little Ice Age.
Judith Curry’s comment:
“Maybe the climate system is simpler than I think it is, but I suspect not. I do know that it is not as simple as portrayed by the Rohde, Muller et al. analysis.”
http://judithcurry.com/2012/07/30/observation-based-attribution/
Depressing to see Mosher defend this part of the analysis.

Editor
January 20, 2013 1:57 pm

Steven Mosher says:
January 20, 2013 at 1:20 pm

… These are good questions, but the notion that there is something called raw data bears examination. There is no such thing. There is a first known report. So, for example, even if you have a log book, you don’t know that the log is actually a “raw” report. What you know is that this is the first known report. For example: I read a thermometer. I scribble down the temperature. I transfer that writing to my log book. Which report is “raw”? You can’t tell from the document. You have to trust the document; it doesn’t self-validate. So I distinguish between records that are known first reports and those that explicitly claim that they are the result of an adjustment process.

Seems to me like all you are saying is that what Kev-in-UK calls “raw data” you call the “first report” … so what? That’s just terminology. But you used the excuse of terminology, that you didn’t use the exact same terms he uses, to avoid answering Kev’s question. Instead, you spun it off into a semantic wasteland, and ignored his question entirely.
Steven, I know you already know this, but what Kev calls “raw data” is the first written record of the data, basically the logbook. You call that the “first report”. SO WHAT. Answer his damn question.
w.

Editor
January 20, 2013 2:06 pm

Steven Mosher says:
January 20, 2013 at 12:41 pm

… The temperature field is predicted to +- 1.6C with a nugget of .46C on monthly temps.

“Nugget”? Surely you must be aware that people won’t understand this kind of insider jargon. I know I don’t. Is that “nugget” of 0.46C the error estimate of the temperature field?
And if so, is said “nugget” of error a one sigma nugget, a two sigma nugget, or a 95% CI nugget? I’m sure you see the problem—that usage of “nugget” has no agreed-upon statistical meaning outside of your head, and perhaps that of a few of your friends. As a communication tool, “nugget” no workee.
w.

DirkH
January 20, 2013 2:18 pm

I just noticed that they made up virtual temperature measurements across the globe in the 1800s, when there were only a few real thermometers in the US and Europe.
They used a GCM. So whatever they made up there is GCM reality. In the end they find that the temperatures are perfectly in line with what one would expect compared to a GCM.
So BEST is a tautological exercise to give GCM’s credibility by comparing to a make believe past constructed by a GCM.
I see a credibility problem there.

Jimbo
January 20, 2013 2:25 pm

Mr. Mosher,
I know you won’t admit it, but it must be utterly embarrassing not to be able to get your groundbreaking research published in any of, say, the top 5 relevant journals. To be in volume 1, issue 1, and to be the only paper published, does not look good at all. Hey, I can understand that after all that work you guys must have been desperate. It happens to everyone. ;O)

Jimbo
January 20, 2013 2:42 pm

Mosher,
Another thing I consider about this paper is where it was REJECTED and why. This is not a trivial point.

Latitude
January 20, 2013 2:53 pm

Steven Mosher says:
January 20, 2013 at 11:47 am
Muller didn’t even know about the journal until it was presented as an option.
==========================
ROTFLMAO……you mean he didn’t know of something that didn’t even exist
..then ditto everything Willis said much better than I would have

Rhoda R
January 20, 2013 2:57 pm

Thank you, Latitude.

John West
January 20, 2013 3:11 pm

Steven Mosher says:
“…it is not presented as THE answer… this data is consistent with the theory. Nothing more.”
That’s not how I (or probably anyone who speaks English) interpret(s) (paraphrasing): “There’s no reason to be skeptical anymore.” That’s saying THE answer has been found and (if you look at the context) it is anthropogenic. Do I really need to go find the quote? The gist of it is burned into my memory.
Also, the paper states that there’s no reason to assume there’s any other cause, but fails to mention there’s no reason not to either, and actually finds a glaring bit of data that is inconsistent with the AGW supposition.
On top of that, I’m about up to 3 feet over my head with “consistent with”. A half-eaten cookie on Christmas morning is consistent with a visit from Santa Claus, but that doesn’t make it even reasonably so.

Jimbo
January 20, 2013 3:23 pm

Mosher says:…….
“Muller didn’t even know about the journal until it was presented as an option.”

Mosher, who presented it as an option? Was it the journal or its group which has been accused of spamming scientists? Please do not skip my questions.

Kev-in-Uk
January 20, 2013 3:37 pm

Mosher
and
@Willis
Now, now, guys – let’s stay calm!
I accept Steve’s point that we have to accept the original document as ‘validated’ in itself, just as we have to accept eyewitness information or any other form of historical recording from a single source (the only one available) as being ‘it’.
My problem arises from the subsequent treatment of that data. If somebody – CRU/GISS/whoever – takes a shitload of paper documents/logs and enters the data into a computer ‘as read’ – this would essentially be the raw data – with of course a few typos/number transpositions, etc.! (Which is why – at the very, very first quality-control stage – ANY and ALL queries must be cross-checked against the original ‘paper’ record, yes?)
Anyway, let us presume that the first QA check passes off as ‘ok’.
We then have the likes of Mr Hansen, sitting in his office, looking at the data and thinking, ‘hmm, this data looks a bit off – let’s adjust it because I think this is wrong with it, etc, etc’
But this doesn’t happen just once, but many times – with each adjustment to the dataset being recorded as a new ‘version’.
I have no problem with the adjustment, so long as it is valid, and MORE IMPORTANTLY – it is recorded and both pre and post adjusted data are preserved in toto!
Now, on the basis that we know full well that Jones et al (or the CRU/Met Office, as you prefer) have ‘lost’ original data – what does any proper real scientist think about this? This is like, man, the worst ever FAIL possible in science!
So, roll forward a few decades, or whatever, and some chap decides he wants to use the data (without traceability) and it passes peer review, etc – then somebody uses his data, etc, etc, etfeckingcetera!
I just want someone, somewhere to tell me – nay, ‘PROVE’ to me – that the data CURRENTLY being used is VALID, and not validated by the likes of Hansen, Jones or Mann, but by some process of traceability and independent data storage/verifiable sources and documented methods for changes – when, why, where and how, etc.!
I’m afraid, I just don’t see this as being possible (but I’m happy if someone shows my gut feeling to be wrong!)
Ergo, as I said earlier, the BEST findings are, at best, a rehash of the old data, with all its inherent faults, yes? I know it sounds crazy, but until someone proves otherwise, that is what I think! And FFS don’t bother quoting the fact that BEST (or any of its input datasets) passed peer review at me! Show me the data, show me the workings ON ALL that data, and show me it is goddamned correct!
Otherwise, shut the feck up about global temperature datasets completely – because if you ain’t got traceability and scientific standards, you ain’t got Sh1t. Period. (And yeah, Steve, I know it may be all we have to work with – but I don’t see the warmistas advertising the fact that their data isn’t perfect along with all the headlines!)

michael hart
January 20, 2013 4:15 pm

Hell, if the authors wish to pay me, then I’ll set up a new journal myself and publish it again.
For an additional fee I’ll even name the journal “Proceedings of The Masters of The Universe” or some similar catchy title.

Andrejs Vanags
January 20, 2013 4:17 pm

My problem is this statement: “Our analysis does not rule out long-term trends due to natural causes; however, since all of the long-term (century scale) trend in temperature can be explained by a simple response to greenhouse gas changes, there is no need to assume other sources of long-term variation are present.”
We are pretty sure that all of the long-term century trend in temperature CANNOT be explained by a simple response to greenhouse gas changes.
As far as I understand it, climate scientists have no clue as to what caused the Little Ice Age, no clue as to what caused the Medieval Warm Period, no clue as to what caused the cold and famine of the middle ages, no clue what caused the hot Roman optimum, no clue as to why the whole of northern Africa suffered drought, creating deserts and toppling the Egyptian civilization, and in general no clue why temperatures shoot way up at the beginning of an interglacial and slowly decrease until suddenly dropping in the next ice age. No explanation as to why the very long-term temperatures have been declining since 10,000 years ago.
So there is EVERY reason to assume that “other sources of long-term variation are present”; it’s unscientific not to do so.

Don Monfort
January 20, 2013 4:19 pm

I wonder if Mosher, Muller and the BEST team expected their work to gain credibility by being published in the Premier/Grand Opening! issue of G&G (aka ‘journal of last resort’).
Willis’ gloating is justified, Steven.

jim2
January 20, 2013 4:21 pm

WRT RAW data:
Berkeley Data Sets:
Colonial (from the read me)
The original data files are not publicly available.
GHCN Data
GHCN-Daily is comprised of daily climate records from numerous sources that have been integrated and subjected to a common suite of quality assurance reviews.
Looks like there might not be a lot of RAW data to be found, although I didn’t take time to look at all the read-me files.

Catcracking
January 20, 2013 4:23 pm

How do they get 95% confidence in figures to 5/100ths of a degree from measurements that are nowhere near this accuracy?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
They didn’t, and they cannot. In statistics, if you had thirty thermometers reading the same area at the same time, you could do the statistics on the data to get a better estimate of the true value, and better precision than you had with one reading.
Thank you Gail et al.
While I am not versed in statistics, the ±0.05 degree claim does not seem credible to me. Why would they make such a “stretched” claim? It only seems to detract from the credibility of the entire paper.
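PS: As best I can reconstruct the standard argument (and I repeat that I am not versed in statistics, so treat these numbers as purely illustrative), it is the sigma-over-root-N rule:

\[
\mathrm{SE}(\bar{T}) \;=\; \frac{\sigma}{\sqrt{N_{\mathrm{eff}}}}\,, \qquad \text{e.g.} \quad \frac{0.5\ ^{\circ}\mathrm{C}}{\sqrt{100}} \;=\; 0.05\ ^{\circ}\mathrm{C},
\]

where N_eff is the number of effectively independent readings. The whole dispute, as I understand it, is over N_eff: if station errors are correlated (shared instruments, shared adjustments, UHI), N_eff is far smaller than the raw station count, and the claimed ±0.05 grows accordingly.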

Editor
January 20, 2013 5:21 pm

Mosher answer questions about the choice of journal etc in the comment string at Climate Etc. starting at:
http://judithcurry.com/2013/01/20/berkeley-earth-update/#comment-287652

Editor
January 20, 2013 5:28 pm

Oops: Mosher answers questions,….

Eugene WR Gallun
January 20, 2013 5:58 pm

Following the link David Davidovics (Jan. 19, 6:45 pm) offers leads me to this conclusion: it seems that vanity publishing has found a new literary area to exploit. Now, like poets and novelists obsessed with publication, “scientists” can also pay to see their work in print.
BEST published in a “vanity journal”. This is like something out of Monty Python.
Eugene WR Gallun

January 20, 2013 7:26 pm

Mosher answer questions about the choice of journal etc in the comment string at Climate Etc.

Let’s see. Sentences and Paragraphs are capitalized. Punctuation nearly perfect. Adequate linefeeds. Are we sure it’s Mosher?
😉
NB: For the terminally humorless this is just a friendly joke, possibly an inside one for those unaware of Steve Mosher’s sleuthing out Peter ‘Principle’ Gleick in the Heartland phishing affair. Having to explain it does sort of ruin it, but I’ve been misunderstood once already in this thread.

DirkH
January 20, 2013 7:59 pm

Andrejs Vanags says:
January 20, 2013 at 4:17 pm

“My problem is this statement: ““Our analysis does not rule
out long-term trends due to natural causes; however, since all of the
long-term (century scale) trend in temperature can be explained by
a simple response to greenhouse gas changes, there is no need to
assume other sources of long-term variation are present.””

What Mosher and Muller are doing here is they are using the Schneider trap; insinuating that CO2AGW is now the Null hypothesis, not some arbitrary theory with GCM predictions as its hypothesis.
Switching the null hypothesis. An Orwellian, revisionist strategy. Stephen Schneider suggested this Null hypothesis switch first.

January 20, 2013 8:22 pm

I’m prepared to accept whatever result they produce, even if it proves my premise wrong.
REPLY: and I did, read here: http://wattsupwiththat.com/2012/07/29/press-release-2/ – A

Glenn
January 20, 2013 8:32 pm

michael hart says:
January 20, 2013 at 4:15 pm
“Hell, if the authors wish to pay me, then I’ll set up a new journal myself and publish it again.
For an additional fee I’ll even name the journal “Proceedings of The Masters of The Universe” or some similar catchy title.”
Better check with OMICS first, they likely already have that title in use.

mpainter
January 20, 2013 8:52 pm

DirkH says: January 20, 2013 at 7:59 pm
“Our analysis does not rule
out long-term trends due to natural causes; however, since all of the
long-term (century scale) trend in temperature can be explained by
a simple response to greenhouse gas changes, there is no need to
assume other sources of long-term variation are present.””
What Mosher and Muller are doing here is they are using the Schneider trap; insinuating that CO2AGW is now the Null hypothesis, not some arbitrary theory with GCM predictions as its hypothesis.
Switching the null hypothesis. An Orwellian, revisionist strategy. Stephen Schneider suggested this Null hypothesis switch first.
=============================
And that does not sell in skeptic land.

Manfred
January 20, 2013 10:35 pm

DirkH says: January 20, 2013 at 7:59 pm
What Mosher and Muller are doing here is they are using the Schneider trap; insinuating that CO2AGW is now the Null hypothesis, not some arbitrary theory with GCM predictions as its hypothesis.
——————————
Mosher does not seem to be too convinced about that bit either.
http://judithcurry.com/2013/01/20/berkeley-earth-update/#comment-287810
Nevertheless he blames previous peer review for not waving it through, and praises this new review process. Weird.

Kev-in-Uk
January 20, 2013 11:43 pm

jim2 says:
January 20, 2013 at 4:21 pm
That is my view also. Various corrections/adjustments are mentioned, usually as quality-control checks, but AFAIK the majority of temperature adjustments are likely to have been done by computer algorithms. Fine, so long as it is a valid adjustment – but it is not strictly valid to assume that station X, in the middle (spatially) of half a dozen other stations, but showing a significantly different temperature on a given day, must be wrong. Automatically weighting ‘against’ that station (without reason), just because it is different, is potentially wrong. But my beef is that once any valid adjustment has been made – that should really be ‘it’ – so how come we have ‘had’ to have different versions of datasets, with ‘continual’ adjustments of the old past data? And the biggest query of all is: are these later adjustments simply cheeky little adjustments on top of the original adjustments? Because to my mind, you can easily lose sight of the ‘real’ or ‘raw’ data!
Put it this way: does anyone here think that a current dataset exists where they can print out a list of adjustments for any given station, in chronological order, and with the reason for each adjustment? I dunno, something like:
Station X – May 1885 to March 1889; temps adjusted -0.5C due to thermometer error
Station X – April 1901; new thermometer installed, +0.1C added to all previous data
…continuing up to more recent times…
Station X – June 1988; whole dataset shifted +0.2C due to estimated UHI effect
Station X – Jan 1995; data from 1930–1940 adjusted by -0.1C due to loss of UHI because of the financial depression
etc, etc.
These kinds of things are exactly what needs to be recorded to keep the dataset ‘intact’ – but I don’t believe they have done it in such a fashion – if at all!

Scarface
January 21, 2013 12:24 am

OMICS Publishing Group…
Now that they are into CAGW, also known as COMICS

Jimbo
January 21, 2013 2:58 am

I read one commenter who complained about his paper being published without permission while getting repeated requests to pay OMICS Group their fee. The founder of the group, Dr Srinu Babu Gedela, replies offering a discount. These are the people BEST have published with. Sad.

Hello.
I had a serious problem with one of the journals of OMICS Group. After receiving a lot of emails offering to publish my work in their journals, I asked them about the possibility of publishing a research paper. They asked me to send the paper, and I made the mistake of sending it. I did not hear anything from this editor until, three months later, they told me that they had accepted the paper and would publish it if I paid them $2,700. The manuscript had not been published at that point, and I told them that publishing in their journal did not interest me. I did not receive any review of the manuscript, and I saw that the data on the journal’s website about its impact index were false. I only asked for information, and I never authorized the publication of my work. Two months later, they published it without my permission. The published paper is full of errors. Since then I have sent a dozen emails urging the withdrawal of my work from their site. However, they have not withdrawn it, and they still demand payment of $2,700. What do you recommend I do? No doubt this is a fraud, and I do not know how to get them to withdraw the work and stop sending payment demands.
http://poynder.blogspot.co.uk/2011/12/open-access-interviews-omics-publishing.html

Prof. Natarajan Muthusamy, Associate Professor of Internal Medicine at the Ohio State University Medical Center, has been named as the Editor-in-Chief of a journal from the OMICS Publishing Group, the Journal of Postgenomics: Drug & Biomarker Development. “I am not aware that I am Editor-in-Chief [of this journal]. I do not recall having committed to this job,” he told The Hindu in an email.
http://www.thehindu.com/sci-tech/technology/on-the-net-a-scam-of-a-most-scholarly-kind/article3939161.ece

January 21, 2013 3:03 am

On page 2

This empirical approach implicitly assumes that the spatial relationships between different climate regions remain largely unchanged, …….. in the period 1750 to 1850 when our evidence shows a strong influence from large volcanic eruptions, a phenomenon for which there are only weak analogs in the 20th century ……. our results are accurate only to the extent that the spatial structure of temperature does not change significantly with time.

So they modeled on the assumption that the spatial structures were the same, stated that volcanoes had a strong influence then but don’t now. Presumably volcanoes would alter the spatial temperature structures, yet they modeled on them being the same.
About volcanoes……. one of the premises Mosh states BEST operates under.
“2. Given: Volcanos cause cooling”.
No they don’t. At least, not in the last century for any significant period of time. Looking at all the volcanic eruptions in the last century with a VEI of 5 or 6, I don’t see much that would indicate substantial cooling caused by volcanoes. Temps after Mount Agung did drop about 0.5C – that one was the most noticeable – but we saw temps increase by about 0.6C after Bezymianny. 1991 saw Mount Pinatubo and Mount Hudson erupt! What was the temp response? Well, it cooled by about 0.2C, but that’s just what the temps did; there’s no real attribution to the volcanoes, and it’s well within the normal temp swings we see from year to year. Just like nearly all of the 12 VEI 5–6 volcanic eruptions last century. ENSO seems to have a much stronger impact.
I think the paper would be easier to take seriously if they’d left the 10- and 20-thermometer periods out of the work, and hadn’t worried about attribution. The early history period is nothing but circular logic and tautology (as Dirk points out).
Volcanoes made the earth cool in the 1750–1850 time period. We don’t have good analogs for it in the 20th century, but we used 20th-century temp spacing for the earlier time period in which volcanoes wreaked havoc on the earth. How stupid is that? I can’t get to discussing scalpels and jackknives when this madness is offending my eyes! They even stated this approach was “empirical”!!
As to the other premise about CO2 causing warming……. well, things are modeled that way, aren’t they?

Eliza
January 21, 2013 5:38 am

Wow, maybe OT, but Arctic extent is now the highest since records began 5 years ago, LOL
http://ocean.dmi.dk/arctic/icecover.uk.php
In fact I dare predict that NH ice may enter the average values quite soon, and maybe stay there for the whole year – this time, we can hope, it really will terminate the farce forever!

jim2
January 21, 2013 6:00 am

Kev-in-Uk says:
January 20, 2013 at 11:43 pm
Kev – If you visited the grocer in search of raw carrots and you were handed a plate of cooked carrots, you would know the difference. But apparently, some climate scientists believe the carrots are raw if they were just handed the plate. There are legitimate exceptions, like AMSU data that require processing to produce the temperature number, but when it comes to thermometers, there should be a truly raw number.
The situation with modeling is similar in that some climate scientists confuse model output with measurements taken in the real world in the field.
And they want us to trust them? Hah!

Bob Kutz
January 21, 2013 12:58 pm

Interesting that Daniel Goldber, Yong Gang Li and Yao Yi Chiang are not listed in USC’s faculty/staff directory on its website.
And Jixiang Wu, though on staff at SDSU as an assistant professor, does not mention sitting on the GIGS board, either on his directory web page or in his curriculum vitae. This is odd, because his vitae lists 9 journals and a US government agency for which he is a reviewer. I guess I find it odd that he sits on an editorial board for an academic journal and it doesn’t appear anywhere on his otherwise extensive and detailed list.
The document I am looking at appears to be almost 2 years old though, so it is possible he just hasn’t updated it with his most recent accomplishment.
Interesting about the three USC board members though. Not being listed as faculty at USC and all.

Hot under the collar
January 21, 2013 1:05 pm

Provenance of any data set is everything.
You would take it with a “pinch of salt” if someone said a painting was by Rembrandt because some ancestor who knew about paintings had said it was.
If the original raw data was (shamefully) lost or deleted – unless it turns up and there is proof of provenance – then any data set produced based on the original “lost” data set will be, and should be, treated with the same “pinch of salt” as the “Rembrandt”; in fact even more so, as the question needs to be asked: how can scientists publish and then lose or delete their raw data?
Provenance of the data is everything.

January 21, 2013 1:54 pm

From Phil Jones To: Michael Mann (Pennsylvania State University). July 8, 2004 (Climategate emails)
“I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow — even if we have to redefine what the peer-review literature is!”
telegraph.uk UEA contentious quotes
Looks to me like BEST just did its part in redefining “peer-review literature.”

January 21, 2013 2:29 pm

There is some info that maybe I missed.
Did the journal JGR Atmospheres reject the subject paper?
Or, did Muller et al withdraw it from JGR Atmospheres? If they withdrew it then why? If they withdrew it from JGR Atmospheres was it because the review period was becoming too long for their paper to make AR5? Did they need to get a quicker acceptance of their paper at a journal like the GIGS journal in order to have a chance at inclusion in AR5?
Did I miss that back story?
John

Rational Db8
January 21, 2013 3:04 pm

Anthony, it would be very interesting to know exactly who and what organization(s) are supporting this new journal… e.g., whether there is any conflict of interest with the BEST folks, or whether they perhaps even had something to do with getting a new journal started just to be able to publish their paper(s) in what’s ostensibly a ‘peer-reviewed reputable academic journal.’

January 21, 2013 5:46 pm

@DirkH 1/20 9:15 am: And I’d like to direct attention again to this razor-sharp demolition of the BEST “scalpel” method. (Referring to the Rasey 12/13/12 11:00 comment specifying “wholesale decimation and counterfeiting of low frequency information happening within the BEST process.”)
Thank you for the approving plug, DirkH.
By the theorems of Fourier analysis, the scalpel destroys the climate-science-critical lowest frequencies in the original data by shortening the surviving temperature records. BEST throws away the signal, then homogenizes the noise. Through the use of the “suture”, short segments are spliced into long ones; low frequencies appear – but where did they come from? Until I see otherwise, I must conclude that the low-frequency components (i.e. the climate “signal”) of the 100+ year record are artifacts of the suture process. The long-term trend is counterfeit – artificial, illegitimate, with a look of reality.
I have yet to see any answer of substance to the Fourier-domain low-cut filter argument. Have you? I didn’t find it in the Dec. 2012 and Jan. 2013 links Mosher provided.

John M
January 21, 2013 5:49 pm

Now this is what consensus looks like.
James Annan and Anthony Watts on the same page regarding OMICS.
http://julesandjames.blogspot.com/2013/01/best-laugh-of-day.html

January 21, 2013 5:55 pm

@Mosher 1/20 11:34 am: “So presumably a new improved method of data ‘homogenization’?”
nope. just a method proposed and endorsed by skeptics before they saw the result.

Are you speaking for all skeptics? I hope not. You don’t speak for me.
Endorsed by (some few) skeptics, perhaps. Who? How many? You are referring to the AGU Dec 2012 poster, right? The one that compares two potentially dodgy methods of homogenization against each other?

In both the paper and this project the NCDC and Berkley groups were blinded to the true nature of each world

January 21, 2013 5:58 pm

@Mosher 1/20 12:34 pm: The data that I hope to have up in due course would be the scalpeled data. That is, showing where all the cuts are made. In due course the data for every station will be online with charts showing where the scapel was applied and why it was applied.
So it is pretty clear that the post-scalpel data and the temporal distribution of cuts are not yet available for review. We are to trust the sausage makers that their product is safe to consume. As both a geophysicist and a taxpayer, this turns my stomach. I smell a rat, and it is probably coming from inside the sausage.
@Mosher 1/20 12:38 pm: “The scalpel method (as endorsed by Willis …”
THAT I’d like to see. I wonder if Willis agrees he “endorsed” it? Link, please.
“… works. See the AGU poster for a double-blind test of the method.”
From 12:51 pm: link to poster: http://berkeleyearth.org/images/agu-2012-poster.png. The poster does NOT assuage my concerns. It reinforces that I have not misunderstood the BEST process. “Results” amounts to comparing two untrustworthy methods with similar assumptions against each other. But thanks for the link. (My guess is that this is from the Dec. 2012 AGU… it’s only been 21+ months coming.)
The Rohde 2013 paper uses synthetic, error-free data. The scalpel is not mentioned. My concern is the use of the scalpel on real, error-riddled data.

January 21, 2013 6:04 pm

1/20 1:20 pm: “The maths – scalpel effects and so on – seems to have been debunked already.”
I wouldn’t go that far. But potentially fatal flaws in the post-scalpel frequency content have yet to be addressed, judging by the AGU poster (Dec. 2012) and 1/15/2013 PDFs.
@James Sexton: 1/21 3:03 am: “We don’t have good analogs for it in the 20th century, but we used 20th-century temp spacing for the earlier time period in which volcanoes wreaked havoc on the earth. How stupid is that? I can’t get to discussing scalpels and jackknives when this madness is offending my eyes!”
Agreed. There is a dearth of temperature records pre-1900, but that doesn’t stop BEST from making a global temperature profile back to 1750. “Caveat emptor” is no defense here. No amount of error bars is going to cover that sin. I don’t believe things just because it comes out of a computer, but there are too many that do. And too many willing to use what is convenient.

Bill Illis
January 21, 2013 7:12 pm

jim2 says:
If you visited the grocer in search of raw carrots and you were handed a plate of cooked carrots instead – would you …
————
That is a great analogy. Thanks jim2.
I wouldn’t accept the carrots, of course, without knowing how the carrots were cooked.
I wouldn’t want a database with Matlab code showing how each individual carrot was cooked, such that I would have to download the files of 44,000 individual carrots and then spend the next two weeks finding out that the 44,000 carrots were cut into 180,000 different pieces, and that each piece was cooked at 145F for 10,000 of the carrots, at 155F for another 10,000, etc.
I would just want to know “how the carrots were cooked”. A nice simple explanation with perhaps a video showing how the carrot/temperatures were cooked.
There are good ways to cook carrots and there are ways that make them too soft or too hard or don’t convert enough of the carbohydrates into sugars, that “over-cook” the carrots/temperatures.
I wouldn’t want to buy cooked carrots without an endorsement from someone saying they taste really good and were cooked to perfection. I’d rather just have the RAW carrots and cook them myself the best way to make the right kind of carrots.

January 21, 2013 7:28 pm

Stephen Rasey says:
January 21, 2013 at 6:04 pm
Agreed. There is a dearth of temperature records pre-1900, but that doesn’t stop BEST from making a global temperature profile back to 1750. “Caveat emptor” is no defense here. No amount of error bars is going to cover that sin. I don’t believe things just because it comes out of a computer, but there are too many that do. And too many willing to use what is convenient.
===================================================
Thanks Stephen, it’s been my experience that the ones believing what comes out of computers are the ones who know the least about what computers do and how they work. And, like you, I think coloring huge error bars is simply a form of dishonesty. Likely to themselves, but maybe to others.

Phil
January 21, 2013 8:36 pm

Re: the Stephen Rasey comments regarding filtering of low frequency information by the scalpel process (following links to comments at WUWT and CA)
Here is my attempt at rephrasing these arguments in layman’s terms with some of my thoughts added:
The filtering of frequencies may perhaps be better understood by reference to wavelengths. High frequencies have short wavelengths and low frequencies have long wavelengths. Therefore, the scalpel method filters out all frequencies that have a wavelength longer than the longest fragment. The only frequencies that the scalpel does not filter at all are those with a wavelength shorter than the shortest fragment. The frequencies with wavelengths longer than the shortest fragment and shorter than the longest fragment are partially filtered. Given that the average fragment length is about 12 years (IIRC), trends longer than that are effectively filtered out. Any trend longer than that in the reconstruction is apparently a result of modeling and does not come directly from the data (no matter how voluminous), since the low frequencies are filtered out.
High frequencies are also filtered out. The use of monthly averages (actually smooths) effectively filters out all frequencies with wavelengths shorter than one month. Deseasonalizing the data (using models) partially filters out all frequencies with wavelengths shorter than a year and longer than a month. Given that some fragments may be only a few years long, it would seem that almost all frequencies are at least partially filtered out. That would leave precious few frequencies that haven’t been filtered at all to validate all the models used by BEST.
How is BEST not effectively a sort of statistical homeopathy? Homeopathy, as I understand it, is basically taking a supposedly active ingredient and diluting it multiple times until almost none of it is left and then marketing it as a cure for various ailments. Doesn’t BEST dilute the information in the mountain of data that they process by filtering, at least partially, almost all of the frequencies inherent in the data and replacing them with modeled data? Isn’t BEST essentially models most of the way down, as most of the information in the data is filtered out?
P.S. I believe that various different individuals post comments at WUWT under the handle Phil
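P.P.S. A quick numerical check of the wavelength argument, for anyone who wants to kick the tires (my own toy calculation, nothing from BEST):

import numpy as np

# A 30-year cycle, sampled monthly. A 150-year record has an FFT bin at
# 1/30 cycles/yr, so the cycle shows up as a clean peak. A 12-year
# fragment's lowest nonzero bin is 1/12 cycles/yr, so every period longer
# than 12 years gets smeared into the DC and first bins instead.
t = np.arange(150 * 12) / 12.0                   # time in years
full = np.sin(2 * np.pi * t / 30.0)
frag = full[: 12 * 12]                           # one 12-year fragment

for name, x in [("full", full), ("frag", frag)]:
    freqs = np.fft.rfftfreq(x.size, d=1.0 / 12)  # cycles per year
    spec = np.abs(np.fft.rfft(x))
    print(name, "lowest nonzero freq:", freqs[1],
          "strongest nonzero freq:", freqs[np.argmax(spec[1:]) + 1])

The full record puts the peak at 1/30 ≈ 0.033 cycles/yr; the fragment cannot, because its frequency grid starts at 1/12 ≈ 0.083 cycles/yr.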

amirlach
January 21, 2013 8:42 pm

Dear oldfossil. RE: “I’m prepared to accept whatever result they produce, even if it proves my premise wrong.”
“Blind acceptance” of any result before all of the necessary peer review is done is a mistake, even if it’s a result you expect. The whole reason for peer review is so others can try to duplicate and verify, or disprove, the author’s methods and theory.

climatebeagle
January 21, 2013 9:20 pm

Steven Mosher said:
“The good news is you can take every station in CRU, delete it, and you still have 32,000 stations. And of course the answer doesn’t change.”
“But if you don’t like GHCN monthly data you can delete those 7000 stations and you are left with 29000 stations. And the answer doesn’t change.”
I’ve seen Steven Mosher say similar things before, and it struck me as somewhat strange, or at least it would give me pause for thought. If the answer doesn’t change with different inputs, then could the algorithm be somewhat insensitive to the input data? Almost as though the answer is being driven by the smoothing algorithms.

Venter
January 21, 2013 10:29 pm

If the answer doesn’t change with inputs, it’s obvious that the algorithms are driving the answer to show “man-made” global warming.

Editor
January 22, 2013 12:45 am

Phil says:
January 21, 2013 at 8:36 pm

Re: the Stephen Rasey comments regarding filtering of low frequency information by the scalpel process (following links to comments at WUWT and CA)
Here is my attempt at rephrasing these arguments in layman’s terms with some of my thoughts added:
The filtering of frequencies may perhaps be better understood by reference to wavelengths. High frequencies have short wavelengths and low frequencies have long wavelengths. Therefore, the scalpel method filters out all frequencies that have a wavelength longer than the longest fragment. The only frequencies that the scalpel does not filter at all are those with a wavelength shorter than the shortest fragment. The frequencies with wavelengths longer than the shortest fragment and shorter than the longest fragment are partially filtered. Given that the average fragment length is about 12 years (IIRC), trends longer than that are effectively filtered out. Any trend longer than that in the reconstruction is apparently a result of modeling and does not come directly from the data (no matter how voluminous), since the low frequencies are filtered out.

Phil, that’s an interesting claim. I’ve given it some thought, and I’m not sure it’s true. Here’s my thought experiment on the matter. Suppose you have a regular cyclical signal with a period of, let’s say, 30 years. Suppose further that you have 150 years of that data.
Now, let’s subject it to the scalpel by copying chunks of it of random lengths, with random starting points. Some will be longer, some will be shorter; let’s assume an average fragment length of 12 years and a maximum length of 20 years. We make a number of such random copies of the original signal, say a thousand of them.
Here’s the question. From those chopped up fragments, could you reconstruct the original signal?
For me the answer is sure, no problem. As long as there is some overlap between the fragments, we can reconstruct the original signal exactly, 100% correctly.
I bring this up to show that the mere fact that we cut the data into short, 12-year fragments does NOT, as you claim, mean that “trends of longer than that are effectively filtered out”. It also does NOT mean that “the scalpel method filters out all frequencies that have a wavelength longer than the longest fragment”. They are not filtered out in the slightest. Provided there is overlap, we can reconstruct all of the variations from the fragments … and in the real world with the number of temperature station records, there is always an overlap between stations.
Now, how well we can reconstruct the full signal from the individual fragments, and the best method to do that, that’s a separate question.
But your claim, that “Any trend longer than [12 years] in the reconstruction is apparently a result of modeling”, that’s not true. Long-period trends have noise added to them by the scalpel technique, but the scalpel technique does not lose the long-period information as you claim. The long-term trends stay in the data, they are not removed as you think.
w.
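PS – Here’s the thought experiment in runnable form, with invented fragment statistics (my toy, not the BEST code – their actual recombination uses kriging, not this naive overlap matching):

import numpy as np

rng = np.random.default_rng(42)
t = np.arange(150 * 12) / 12.0                   # 150 years, monthly
truth = np.sin(2 * np.pi * t / 30.0)             # a 30-year cycle

# Fragments of random length and position; each gets a random level shift,
# i.e. we deliberately destroy the absolute level and keep only the shape.
frags = []
for s in np.sort(rng.integers(0, t.size - 240, size=60)):
    n = int(rng.integers(60, 240))               # 5 to 20 years long
    frags.append((int(s), truth[s:s + n] + rng.normal(0, 2)))

# Rebuild: place the first fragment, then re-level each new fragment so it
# agrees with the running reconstruction over whatever overlap exists.
recon = np.full(t.size, np.nan)
s0, seg0 = frags[0]
recon[s0:s0 + seg0.size] = seg0
for s, seg in frags[1:]:
    idx = np.arange(s, s + seg.size)
    have = ~np.isnan(recon[idx])
    if have.any():
        seg = seg - np.mean(seg[have] - recon[idx][have])
    recon[idx] = np.where(np.isnan(recon[idx]), seg, recon[idx])

ok = ~np.isnan(recon)
print(np.corrcoef(recon[ok], truth[ok])[0, 1])   # close to 1 when overlaps chain

Up to one overall constant, the 30-year cycle comes back out of 5-to-20-year fragments. How much noise the splices add with real, error-riddled data is, as I said, a separate question.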

Phil
January 22, 2013 3:04 am

Willis,
I agree with the logic in your thought experiment, but it is predicated on an important assumption (which I have highlighted in bold):

Here’s the question. From those chopped up fragments, could you reconstruct the original signal?
For me the answer is sure, no problem. As long as there is some overlap between the fragments, we can reconstruct the original signal exactly, 100% correctly.

Two issues. First, there has to be some credible basis on which to reconstruct the original signal, such as the overlap you mention. The problem is that, as I understand the scalpel (and I may not understand it correctly), by definition there isn’t going to be an overlap between fragments of the record of a given station. IIRC, the scalpel is applied after combining multiple station records (where such exist), so there would be no overlap to help stitch the fragments back together. Consequently, you would have to go back to using neighboring stations (or something similar) that, hopefully, aren’t also chopped across the discontinuity that the scalpel has found, to help stitch the fragments back together. That implies a model, and you are back again to the same issue of just how far you can go to help you stitch the fragments back together. Are you going to use stations as far as 1200 km away from the station in question to do so?
In short, after chopping up the data into 180,000 or so fragments, these fragments need to be stitched back together again using some sort of mathematical technique that should properly be called a model and that should ideally be individually validated. After you stitch these fragments back together, one has to ask whether it is the data talking or the seamstress.
Second, it is easier to imagine stitching fragments back together when considering idealized examples, such as pristine synthetic data. When using real-life, error-riddled data, I would submit that putting the fragments back together is not trivial.
As for evidence of the lower frequencies being filtered out, I would point you to this comment over at CA (and it is worthwhile reading the whole thread):

P. Solar
Posted Nov 4, 2011 at 6:34 PM | Permalink
OK, Stephen may be right about the longer frequencies. Here is a comparison of the FFT of Hadcrut3 and Berkeley-est.
http://tinypic.com/r/24qu049/5
Accepting that Hadcrut is land and sea, it still seems like the longer frequencies have been decimated as Stephen suggested.
For apples to apples this should be done with CRUtemp land only but it’s getting late.

Stephen Rasey replies:

Stephen Rasey
Posted Nov 5, 2011 at 7:09 PM | Permalink
Thanks for the followup. I do not doubt that the long term reconstruction LOOKS credible. After all, it is just what a lot of people were expecting.
My point is that the process as I see it effectively destroys the data we seek. Proof that it is authentically recreated in the suturing process is scant. There is a low frequency returned in the final product, but it is not original. It is counterfeit.
Based upon what must be happening to the data in the Fourier Domain, I am having the same reaction to the BEST process as most of you would have to a process that seems to violate the 2nd Law of Thermodynamics.

Cheers.

D. B. Stealey
January 22, 2013 9:17 am

Maybe this is the reason that BEST could not get into a legitimate journal.

Rational Db8
January 22, 2013 9:24 am

re: amirlach says: January 21, 2013 at 8:42 pm
While I agree with your warning about the problems with blind acceptance – last I knew, peer review was strictly to catch flaws in the scientific method used in each individual experiment (along with minor issues such as typos, ambiguous phrases, etc. that need correction). Peer review has essentially nothing to do with replication, verification, and validation (other than ensuring the scientific method was followed, e.g., that the methods are clearly spelled out such that other scientists can replicate if desired, and that there aren’t any gross errors in the paper).
Replication is done in separate experiments by other scientists with no conflict of interest, by identically repeating the exact experiment initially conducted. Replication is done far too infrequently, and we see the resulting problems with retracted papers, fraud, etc. that later come to light. But what scientist is able to get funding to replicate another scientist’s work – and wants to replicate someone else’s work rather than try to come up with their own new findings? It’s a problem, and the lack of sufficient follow-up of this nature has been seen time and again. Validation is when other scientists attack an initial experiment’s hypothesis using different experimental methods to see if the hypothesis either holds up or fails. Verification is testing the experiment to ensure that all possible confounding factors are controlled for, that the statistics are sound, etc.

D. B. Stealey
January 22, 2013 10:02 am

Steven Mosher,
I write this in all sincerity: you need to distance yourself from this BEST “peer review” scam, like Judith Curry already has. It is as phony as a three dollar bill.

Editor
January 22, 2013 10:32 am

Thanks, Phil. You are correct that the scalpel technique leaves no overlap between adjoining sections of the same station. And you are right that the variations in other stations are used to fill in the overlap.
However, you seem to think that we are trying to reconstruct an individual station. We’re not. We’re looking for larger averages … and those larger averages perforce contain the overlaps we need. However, the method doesn’t use “neighboring stations” to put the fragments back together. Instead, it uses kriging to reconstruct the original temperature field.
Now, as you point out, unavoidably there will be noise added by the process. And, as you point out, there is a mathematical model involved … but this is true no matter how you combine individual stations into any kind of overall average.
I looked at the two Fourier analyses you posted, one of the BEST land data, and the other of HadCRUT data. Since these are totally different datasets, I’m not surprised in the least that they have different Fourier analyses … were you actually expecting different datasets, one covering two and a half times the area of the other, one of which doesn’t contain the ocean, to have the same Fourier transform?
Because I sure don’t …
Here’s the thing, Phil. There’s no “right” way to do the task of averaging the planetary temperatures. Every way that you do it will have pluses and minuses. Every way that you do it involves some kind of mathematical model. But if you wish to show that the “scalpel” method is inferior to the others, you’ll have to do more than claim it. Kriging can be demonstrated mathematically to be the best technique for joining up spatially and temporally disparate data, so you are fighting an uphill battle.
Look, Phil, the scalpel method has problems, like every other method you might use. But that doesn’t make it inferior to the others, as you seem to think.
w.
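PS – Since “kriging” keeps coming up, here’s a bare-bones one-dimensional version so folks can see what it actually computes (my own toy, not the BEST implementation; the variogram numbers are invented):

import numpy as np

def krige(x_obs, y_obs, x0, nugget=0.1, sill=1.0, corr_range=5.0):
    # Ordinary kriging: estimate the field at x0 as a weighted sum of the
    # observations, with weights derived from a variogram model of how fast
    # the field decorrelates with distance. The extra Lagrange row forces
    # the weights to sum to 1 (unbiasedness).
    def gamma(h):
        h = np.abs(h)
        return np.where(h == 0, 0.0,
                        nugget + (sill - nugget) * (1 - np.exp(-h / corr_range)))
    n = x_obs.size
    G = np.empty((n + 1, n + 1))
    G[:n, :n] = gamma(x_obs[:, None] - x_obs[None, :])
    G[n, :] = 1.0
    G[:, n] = 1.0
    G[n, n] = 0.0
    g = np.append(gamma(x_obs - x0), 1.0)
    w = np.linalg.solve(G, g)[:n]                # kriging weights
    return float(w @ y_obs)

x = np.array([0.0, 1.0, 3.0, 7.0])               # station positions
y = np.array([10.2, 10.5, 11.1, 12.0])           # observed values
print(krige(x, y, 2.0))                          # estimate between stations

The point being, kriging is interpolation with weights taken from the data’s own spatial covariance; under its assumptions it is the best linear unbiased estimator. Whether those assumptions hold for 250 years of station data is the fair question to argue about.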

January 22, 2013 10:46 am

Sir, you have a way with words. Nice concepts, these:
8:36 pm: How is BEST not effectively a sort of statistical homeopathy? Homeopathy, as I understand it, is basically taking a supposedly active ingredient and diluting it multiple times until almost none of it is left and then marketing it as a cure for various ailments.
3:04 am: In short, after chopping up the data into 180,000 or so fragments, these fragments need to be stitched back together again using some sort of mathematical technique that should properly be called a model and that should ideally be individually validated. After you stitch these fragments back together, one has to ask whether it is the data talking or the seamstress.
I want to modify a key statement I made above at 5:46 pm.
By the theorems of Fourier analysis, cutting long temperature records into shorter lengths destroys the climate-science-critical lowest frequencies in the original data. BEST attenuates and throws away the climate signal, then spatially homogenizes and kriges the remaining weather noise to deduce a climate signal.
From a climate-signal “chain-of-custody” frame of reference, the process is daft.

Editor
January 22, 2013 11:48 am

Stephen Rasey says:
January 22, 2013 at 10:46 am

Sir, you have a way with words. Nice concepts, these:

8:36 pm: How is BEST not effectively a sort of statistical homeopathy? Homeopathy, as I understand it, is basically taking a supposedly active ingredient and diluting it multiple times until almost none of it is left and then marketing it as a cure for various ailments.
3:04 am: In short, after chopping up the data into 180,000 or so fragments, these fragments need to be stitched back together again using some sort of mathematical technique that should properly be called a model and that should ideally be individually validated. After you stitch these fragments back together, one has to ask whether it is the data talking or the seamstress.

I want to modify a key statement I made above at 5:46 pm.
By the theorems of Fourier analysis, cutting long temperature records into shorter lengths destroys the climate-science-critical lowest frequencies in the original data. BEST attenuates and throws away the climate signal, then spatially homogenizes and kriges the remaining weather noise to deduce a climate signal.
From a climate-signal “chain-of-custody” frame of reference, the process is daft.

Thanks, guys. The problem is that simply claiming that the scalpel method “throws away the climate signal” doesn’t establish anything. Nor does describing it as “statistical homeopathy”, although indeed it is a lovely turn of phrase. The scalpel method is described here. It would be useful if you could quote some part of that exposition and demonstrate why it is wrong, rather than simply claiming it is wrong.
Finally, in the end all methods of averaging a bunch of temporally and spatially scattered temperature records are “wrong”, in that there is no agreed-upon “right” way to do it. Even if you don’t do anything to the data, you still have to combine 38,000-odd records somehow. Rather than asking “is it wrong?”, it is preferable to ask “is it better than the other methods?” The BEST folks have done their homework in that regard, as reported here. It outperforms both the CRU and the GISS methods. So if you think the scalpel method is wrong, you’ll have to point us to something better. It is indeed wrong – all global temperature averages are wrong – but the scalpel method is less wrong than the other methods we’ve invented for the purpose.
Look, it’s no secret that I’m absolutely no fan of Richard Muller; I don’t like a number of things he’s done. And Steven Mosher and I, despite being friends, often butt heads on a wide range of topics. And the journal it finally was published in is totally unknown.
But all of that is scientifically meaningless. It is totally separate and distinct from their math and logic and methods. Those stand or fall on their own, and to date, as near as I can tell, they stand up better than the competition. Always more to learn, anything could be falsified at any time, when the facts change I change my opinion … but that’s how I see it today.
All the best,
w.

January 22, 2013 12:05 pm

@Willis 10:32 am: “There’s no ‘right’ way to do the task of averaging the planetary temperatures.”
True. But there are certainly ways more wrong than other ways.
The goal is not to take the planetary temperature.
The goal is to determine the long term change in the planetary temperature.
If BEST honored the actual values of the recorded temperatures, I wouldn’t be raising this issue. BEST, through the use of the scalpel, shorter record lengths, homogenization and kriging, is honoring the fitted slope of the segments – the relative changes – more than the actual temperatures. By doing that, BEST is turning low-pass temperature records into band-pass relative-temperature segments.
With band-pass signals, you necessarily get instrument drift over time without some other data to provide the low-frequency control. I suspect BEST has instrument drift as a direct consequence of throwing away the low frequencies and giving low priority to the actual temperatures.
In the petroleum seismic processing (CGG PDF 1 MB link) realm, the recorded signal is a band-pass time-sampled series of sound energy ground accelerations or water pressure. In the seismic model, the Signal = Convolution ( Source, Reflectivity profile). Once you deconvolve the source from the signal, you are left with a band-pass Reflectivity profile. Reflectivity is the difference in Impedance (velocity * density) of layers of the earth. It is possible to integrate the reflectivity profile to get an Impedance profile, but because the original signal is band-limited, there is great drift, accumulating error, in that integrated profile. The seismic industry gets around that drift problem by superimposing a separate low frequency, low resolution information source, the stacking or migration velocity profile estimated in the course of removing Source-Receiver Offset differences and migrating events into place.
In summary, exploration seismic processing will integrate the band-pass acceleration-difference signals to get high-frequency impedance differences, but they use a separate low-frequency data source to control drift to get usable inverted impedance (velocity with an assumed density) profiles. Of course, the quality of the low frequency control governs the quality of the final product.
In a similar vein, BEST integrates scalpeled band-pass short term temperature difference profiles, to estimate total temperature differences over a time-span. Unless BEST has a separate source to provide low-frequency data to control drift, then BEST’s integrated temperature profile will contain drift indistinguishable from a climate signal.
Low frequency content is the “whole ball game” when it comes to a climate signal. High frequency content amounts to weather and seasons. From what I have seen since April 2011, BEST is decimating the low frequency content of the temperature records via the scalpel and not returning that information back into the final product. The low frequency content in the final product must be treated as drift until the low frequency control is defined.
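To make the drift argument concrete, here is a toy calculation (all parameters invented for illustration; this is an analogy, not a reconstruction of BEST’s actual pipeline):

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1200)                      # 100 years, monthly
truth = 0.0005 * t                       # a slow "climate" trend: 0.6/century

seg_len = 144                            # 12-year segments
recon, level = [], 0.0
for k in range(t.size // seg_len):
    seg = truth[k * seg_len:(k + 1) * seg_len]
    if k > 0:
        # each joint between segments is estimated with a small error
        level += (seg[0] - prev_end) + rng.normal(0.0, 0.05)
    recon.append(level + (seg - seg[0]))  # a segment carries only relative change
    prev_end = seg[-1]
recon = np.concatenate(recon)

# The joint errors accumulate like a random walk: after j joints the drift
# is roughly 0.05 * sqrt(j), growing with the number of splices.
print(np.ptp(recon - truth))

With only eight segments per century-long record, a 0.05 suture error per joint already wanders by a sizeable fraction of the 0.6 signal being sought. Unless there is a separate low-frequency control, that drift is indistinguishable from climate.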

January 22, 2013 12:25 pm

@Willis 10:46 am: “It would be useful if you could quote some part of that exposition and demonstrate why it is wrong, rather than simply claiming it is wrong.”
I cannot quote what is not there.
The problem with the BEST process is what is missing: the low frequency control.

January 22, 2013 12:57 pm

from the Rohde paper: http://berkeleyearth.org/pdf/robert-rohde-memo.pdf
Bottom of page 1: By limiting the present discussion to “error-free” data, we can examine the efficacy of the different averaging techniques separate from the consideration of quality control and homogenization issues.
Page 2: The dataset includes 7280 weather stations and will provide a set of times and locations at which the climate model field can be sampled in order to produce synthetic data with a realistic spatial and temporal structure.
Page 4: As the simulated data is intrinsically free from any noise or bias, we have omitted any parts of the respective algorithms associated with quality control or homogenization….. To further reduce differences, we used the true seasonality in the GCM field [something not known in reality] as the basis for removing seasonality from each simulated time series so that slight differences in the handling of seasonality in the three algorithms would not affect our conclusions.
From the above, I have to conclude that the scalpel was not used on any time series in this test. From my Fourier point of view, this was a successful test of the kriging on the full spectrum of the data available, with the low frequencies preserved.

Kev-in-Uk
January 22, 2013 1:07 pm

Stephen Rasey says:
January 22, 2013 at 12:57 pm
Damn – I’ll have to try and re-read that. But I think I can see your point: the test method did not employ the scalpel, so it did not test whether re-integration of the data actually retains the LF signal?
But also, did I read that right – that the test did not include any data that was adjusted/homogenised? In which case, as error-free data, what was the point?

Editor
January 22, 2013 1:27 pm

Stephen Rasey says:
January 22, 2013 at 12:05 pm

@Willis 10:32 am

There’s no “right” way to do the task of averaging the planetary temperatures.

True. But there are certainly ways more wrong than other ways.
The goal is not to take the planetary temperature.
The goal is to determine the long term change in the planetary temperature.
If BEST honored the actual values of the recorded temperatures, I wouldn’t be raising this issue. BEST, through the use of the scalpel, shorter record lengths, homogenization and kriging, is honoring the fitted slope of the segments – the relative changes – more than the actual temperatures. By doing that, BEST is turning low-pass temperature records into band-pass relative-temperature segments.

Thanks, Stephen. You keep making that same claim over and over, that low frequency information is lost. As I said above, you need to demonstrate it rather than claiming it. Repeating your claim does nothing.
w.

Editor
January 22, 2013 1:59 pm

Stephen, thanks for hanging in there. Let me see if I can explain the issue. At some points in some temperature records, there are abrupt discontinuities. The record moseys along for thirty years, then jumps a degree or so … but not one of its neighbors does the same thing.
Typically these jumps are caused by things like a change in thermometer location, a change in thermometers, or a change in surroundings. That’s why the neighbors don’t show the jumps.
Now, there’s a case for just leaving those jumps in the record. And it is always worth noting what the raw data looks like before you touch it. The problem arises because more of those abrupt jumps in the record are upwards jumps than downwards jumps (although both exist). So if you just use raw data, you end up with an artificial long-term trend of unknown size and sign. No bueno.
Now, there are a couple of ways to deal with that challenge. One is to see if you can calculate, from looking at the nearest neighbors, how big the erroneous jump was, and then remove that amount from the subsequent part of that particular temperature record (or add it to the earlier part). That’s how GHCN does it.
Me, I think it is preferable to just acknowledge that the data before and the data after the jump are actually two different datasets. If the thermometer moves across the airport, or if they change to an electronic sensor in a new location, you are not measuring what you were measuring before. It is a brand new record, starting with when it moved across the airport.
To me, that is the underlying logic behind cutting the records—you are just acknowledging the reality on the ground, which is that the measurements pre- and post-jump are not measuring the same thing.
Does this lose low-frequency information as you say? Yes and no. The real answer is, the low-frequency information was never there, since the two records (pre- and post-cut) were measuring different things … and not only that, because they were incorrectly combined, they have a bogus jump in the middle. Think about what that does to your analysis of long-term cycles.
So there is loss of information, but it is information that we have determined to be suspect. Since e.g. the underlying trend is not correct in the raw data, what exactly are you losing by cutting the records other than the incorrect part of the trend?
As to the issue of recombining them, remember that they are not being recombined, because they are different records. Instead, each is simply one more record added into the global average, and that is no different whether you cut or don’t cut the records into shorter sections.
I’m happy to answer questions, although I’m likely not the best person to do so …
All the best,
w.
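PS – In code, the cut-at-the-jump logic looks something like this. It’s a deliberate simplification (the real scalpel also cuts on station metadata such as documented moves, and the statistics are more careful), but it shows how the neighbors expose a jump:

import numpy as np

def find_break(station, neighbors_mean, min_seg=24, threshold=5.0):
    # Index of the largest mean shift in (station - neighbors), or None.
    # Differencing against the neighbors removes the shared climate signal,
    # so a genuine station move stands out as a step in the residual.
    d = station - neighbors_mean
    best_i, best_t = None, 0.0
    for i in range(min_seg, d.size - min_seg):
        a, b = d[:i], d[i:]
        se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
        tstat = abs(a.mean() - b.mean()) / se
        if tstat > best_t:
            best_i, best_t = i, tstat
    return best_i if best_t > threshold else None

rng = np.random.default_rng(1)
regional = np.cumsum(rng.normal(0, 0.1, 600))   # shared signal, 50 years
station = regional + rng.normal(0, 0.3, 600)    # one station's version of it
station[350:] += 1.0                            # thermometer moved: +1C step
print(find_break(station, regional))            # ~350: cut into two records

After the cut you simply have two records instead of one, and they go into the spatial average like any other records; no value is ever adjusted.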

Editor
January 22, 2013 2:04 pm

Oh, yeah. Steven Mosher, please feel free to step in and explain if you think I’ve got something wrong.
w.

January 22, 2013 2:30 pm

@Willis: 1:59 pm: If the thermometer moves across the airport, or if they change to an electronic sensor in a new location, you are not measuring what you were measuring before. It is a brand new record, starting with when it moved across the airport.
To me, that is the underlying logic behind cutting the records—you are just acknowledging the reality on the ground, which is that the measurements pre- and post-jump are not measuring the same thing.

First, you are cherry-picking with your example. For the most part, we don’t have the metadata to identify why there was a change.
Second, do we get a truer sense of the climate if we slice the climate record every time we paint the Stevenson screen, or if we preserve the entire record? I argue the latter – one record – is FAR CLOSER to the real trend.

If the low frequency is important – and it sure is – why cut it out of the frequency spectrum at all? I feel the only way to evaluate the GW signal is to do the analysis on completely uncut temperature records. Yes, there will be no adjustment for station movements. Yes, UHI will be guaranteed to exist in the signal, but it can be in there only once! The danger of the scalpel-suture process is that UHI can be applied fractionally multiple times, from each tooth in the saw-tooth signal. Was UHI removed? Or was half a UHI signal applied six times in six splices of five station moves? From Rasey Nov 1, 2011 12:03 pm CA: Best, Menne Slices

Here is what I think is a realistic demonstration:
The Rohde paper seems to prove that the BEST process can work with error-free synthetic data and do a best-in-class job of homogenization. Fine. Do the analysis once with REAL data, UN-sliced. If you get the same answer as with tens of thousands of cuts, then that is a valid demonstration that slicing doesn’t matter… (but then why do it?)
If you get a significantly different long-term trend with the uncut data… WHY might that be? And wouldn’t that be interesting? Is it because of real changes to the temp record that must be corrected? Or is it because we capture UHI effects and Stevenson screen paintings multiple times in saw-toothed spliced records? At least let us capture that uncertainty.

January 22, 2013 2:54 pm

@Willis 1:27 pm: “you need to demonstrate it rather than claiming it. Repeating your claim does nothing.”
How can I practically demonstrate it other than through established mathematical principles and theorems? I think I have done so, to the best of my ability, with what’s available.

The key theorem is that the frequency resolution dw/2pi = 1/(N*dt) Hz (where dw is the angular frequency resolution and dt is the time sample interval). Usually we are interested in the high-frequency stuff and aliasing. But this time, we are acutely interested in the very lowest frequency, 1*dw (in cycles per unit time).
Let’s invert. We want resolution time per cycle (2pi/dw) = N*dt. And we want a LONG time per cycle, like a cycle time > 100 years, to confirm or disprove the GW hypothesis. Well, it is staring us in the face: N*dt is the total length of the temperature record.
Suppose now that we take temperature records and, using a scalpel of any kind, we take N*dt and make it into n1*dt and n2*dt, where n1 + n2 = N and n1 < N, n2 < N. Each of the parts now has a LARGER dw, a higher minimum frequency, which means a SHORTER resolution time per cycle than the original. The lowest dw from the original series is now in the bit bucket. Rasey 11:58 am April 2, 2011 in “Expect the BEST….”

Where have I blundered? Where did I get the math wrong? Are lowest frequencies important? Are the lowest original frequencies in the bit bucket after you slice a record? If so, has BEST shown where they come back or are preserved elsewhere? If so, where and how?
If you were confronted with a claim of a perpetuum mobile, would you go build a demonstration, or would you argue from thermodynamics that something is not right?
(I’m done for the day – I’ll reply 1/23. Thank you for your attention.)
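A quick numerical check of the resolution theorem cited above. This is only an illustrative sketch in Python; the monthly sampling interval, the 100-year record length, and the variable names are my assumptions, not anything from the thread:

import numpy as np

dt = 1.0 / 12.0                       # sample interval: one month, in years
N = 1200                              # a 100-year monthly record

# The lowest nonzero DFT frequency is 1/(N*dt): one cycle per record length.
f_full = np.fft.rfftfreq(N, d=dt)[1]  # cycles per year
print(1.0 / f_full)                   # -> 100.0 years per cycle

# Split the record in two: each half's lowest frequency doubles, so the
# longest resolvable period halves, exactly as the theorem says.
f_half = np.fft.rfftfreq(N // 2, d=dt)[1]
print(1.0 / f_half)                   # -> 50.0 years per cycle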

Editor
January 22, 2013 3:21 pm

Stephen Rasey says:
January 22, 2013 at 2:54 pm

@Willis 1:27pm,

you need to demonstrate it rather than claiming it. Repeating your claim does nothing.

How can I practically demonstrate it other than through established mathematical principles and theorems? I think I have done so to the best of my ability with what’s available.

Well, allow me to assist you, then. What you do is generate, say, 1,000 synthetic temperature series with autocorrelation equal to that of temperature data.
Then you put artificial jumps into them, at various places, use the scalpel to chop them up, and see how closely you can reconstruct the underlying signal.
The problem is, we know that there are jumps in the data. Unless you think we should ignore them (in which case we have nothing to discuss), how do you plan to rectify that?
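That experiment is straightforward to sketch. The following is a hedged illustration only; the AR(1) noise model, the shared trend, the 1 deg C jump size, and every name in it are my own assumptions, not BEST’s code or anyone’s actual method:

import numpy as np

rng = np.random.default_rng(0)
n_series, n_months = 1000, 600
phi, jump = 0.6, 1.0                    # AR(1) coefficient; step size, deg C

truth = 0.005 * np.arange(n_months)     # shared underlying trend, deg C/month
eps = rng.normal(scale=0.3, size=(n_series, n_months))
noise = np.zeros((n_series, n_months))
for t in range(1, n_months):            # build AR(1) autocorrelated noise
    noise[:, t] = phi * noise[:, t - 1] + eps[:, t]

cut_at = rng.integers(100, 500, n_series)            # one artificial jump each
series = truth + noise
series += jump * (np.arange(n_months) >= cut_at[:, None])

# No scalpel: average the raw first differences, jumps and all.
d_raw = np.diff(series, axis=1).mean(axis=0)

# Scalpel: all that cutting does to a first-difference estimator is drop
# the single difference that straddles each cut.
d = np.diff(series, axis=1)
d[np.arange(n_series), cut_at - 1] = np.nan
d_cut = np.nanmean(d, axis=0)

print("end-to-end error, no scalpel:", d_raw.sum() - truth[-1])  # ~ +1.0
print("end-to-end error, scalpel:   ", d_cut.sum() - truth[-1])  # ~ 0.0

Left uncut, every series carries its spurious 1 deg C jump straight into the reconstructed trend; cut, the reconstruction lands back near the true trend.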

The key theorem is that the frequency resolution dw/2pi = 1/(N*dt) Hz (where dw is the angular frequency resolution and dt is the time sample interval). Usually we are interested in the high-frequency stuff and aliasing. But this time, we are acutely interested in the very lowest frequency, 1*dw (in cycles per unit time).
Let’s invert. We want resolution time per cycle (2pi/dw) = N*dt. And we want a LONG time per cycle, like a cycle time > 100 years, to confirm or disprove the GW hypothesis. Well, it is staring us in the face: N*dt is the total length of the temperature record.
Suppose now that we take temperature records and, using a scalpel of any kind, we take N*dt and make it into n1*dt and n2*dt, where n1 + n2 = N and n1 < N, n2 < N. Each of the parts now has a LARGER dw, a higher minimum frequency, which means a SHORTER resolution time per cycle than the original. The lowest dw from the original series is now in the bit bucket. Rasey 11:58 am April 2, 2011 in “Expect the BEST….”

Where have I blundered? Where did I get the math wrong? Are lowest frequencies important? Are the lowest original frequencies in the bit bucket after you slice a record? If so, has BEST shown where they come back or are preserved elsewhere? If so, where and how?

Where you have blundered is in confusing the longest period resolvable in an individual record with the longest period resolvable in a group of records. Let’s take one of the simplest ways of finding the average anomaly: first differences. If we have 38,000 overlapping records of a host of various lengths, we can take the first differences (the monthly changes in the case of temperature records), average them, and then reconstruct the average signal.
Now, the longest-period signal that we can resolve by that method is not limited by the average length of the individual segments, or even by the longest segment. I showed that above, and I believe you agreed. Instead, it is limited by the total length of the dataset, with an associated error estimate at each step.
When you average by that method, using the scalpel on the data has very little effect on the average. It just reduces the N for that timestep by one station. On the next timestep you pick up the new station, and the beat goes on. Unless the number of cuts in that particular month is large compared to N, the error will be small.
Think about it with sine waves. Suppose you took 38,000 sine waves, and averaged them by first differences. Then you cut each one at random somewhere in the data. Could you reconstruct the original average by averaging the cut data in the same way? Sure, with 38,000 sine waves, the cuts won’t be much noticed.
Experiment with some synthetic data, and it will be clearer. Like I say, you can use sine waves for simplicity.
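For concreteness, a minimal sketch of that sine-wave experiment; the 10-year cycle, the random amplitudes, and the record length are illustrative choices of mine:

import numpy as np

rng = np.random.default_rng(1)
n, length = 38_000, 240                    # 38,000 records, 20 years monthly
t = np.arange(length)
amp = rng.uniform(0.5, 1.5, n)
waves = amp[:, None] * np.sin(2 * np.pi * t / 120)   # shared 10-year cycle

d_full = np.diff(waves, axis=1)
d_cut = d_full.copy()
cuts = rng.integers(1, length - 1, n)      # one random cut per record
d_cut[np.arange(n), cuts - 1] = np.nan     # each cut destroys one difference

full = np.cumsum(d_full.mean(axis=0))          # average from the uncut data
recon = np.cumsum(np.nanmean(d_cut, axis=0))   # average from the cut data
print(np.max(np.abs(recon - full)))        # tiny relative to the amplitude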
Finally, you ask above, should we call it a new record every time we paint the Stevenson screen? I say if it makes a statistically significant difference, so that it is mathematically detectable as spurious, then sure. Otherwise, if you paint the screen every twenty years, you artificially introduce a spurious twenty-year cycle into the data … which is the exact problem I’m pointing at. Not cutting the data leaves spurious cycles in it, long-term cycles. If a sixty-year record has a spurious one-degree jump in the middle, it appears to contain a sixty-year cycle plus a trend … but in reality no such cycle or trend exists.
So yes, we are removing information by cutting, but it is spurious information, incorrect information. If a thermometer is moved, the difference between the last of the old record and the first of the new record is WRONG. That’s the problem. I suggest removing it, because leaving it in creates spurious trends and cycles, particularly at longer time periods. YMMV.
w.
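Willis’s sixty-year example is easy to put numbers on. A minimal sketch (the step size and timing follow his example; everything else is my choice):

import numpy as np

t = np.arange(720) / 12.0                 # 60 years of monthly data, in years
x = np.where(t < 30.0, 0.0, 1.0)          # a 1 deg C jump at year 30, no trend
slope = np.polyfit(t, x, 1)[0]            # least-squares "trend", deg C/year
print(f"apparent trend: {slope * 10:.2f} deg C per decade")   # ~0.25

A step with no underlying trend at all fits as roughly 0.25 deg C per decade of “warming” over the whole record.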

Editor
January 22, 2013 3:43 pm

Stephen Rasey says:
January 22, 2013 at 2:30 pm

@Willis: 1:59 pm

If the thermometer moves across the airport, or if they change to an electronic sensor in a new location, you are not measuring what you were measuring before. It is a brand new record, starting with when it moved across the airport.
To me, that is the underlying logic behind cutting the records—you are just acknowledging the reality on the ground, which is that the measurements pre- and post-jump are not measuring the same thing.

First, you are cherry picking with your example. For the most part, we don’t have the metadata to identify why there was a change.

Why do we need the metadata? That is what the comparison with adjacent stations is for, to see if there is an anomalous jump. We don’t need to know what the jump is from. We just need to be able to determine that in fact it is spurious.
Also, I gave three examples (change in location, change in instrumentation, change in surroundings), not one, and they are the three most likely reasons for spurious jumps in the temperature data. How is that “cherry picking”?
w.
PS—You may not be aware of this, but an accusation of cherry picking is an accusation of deliberate misrepresentation of the data, and I don’t brook a man calling me a liar, no matter how politely. It’s a serious accusation, not one to toss around.

Kev-in-Uk
January 22, 2013 4:35 pm

Willis Eschenbach says:
January 22, 2013 at 3:43 pm
You say we need to know if a shift/jump is spurious – which is of course what you see from a long term series – it should ‘jump’ out at you from a simple plot. Contrast that, say, with something like slow electronic sensor drift – which wouldn’t jump out at you without the long term plot and some curious ‘human’ intervention or double checking. At this moment in time, I fail to grasp how either would/could be ‘caught’ by the BEST method of cutting and splicing. Indeed, if a fault in the data is present, it appears to me that it could be missed altogether, or artificially amplified by the procedure – depending on the data treatment. For example, if you have five stations, and 3 show warming but 2 show cooling, does the algorithm assign the weighting in favour of the 3 over the 2? Or does it take the next adjacent five stations to those to make a decision, etc, etc.
This is all way above my level of statistical understanding – as an engineer and geologist, I need to grasp the workings and see if they make sense. As a former (half decent) chess player, that’s how I’m thinking about it – kind of forward move planning, the consequences of consequences, etc – so, to my mind, if we weight some station, how is that treated with respect to the ‘next’ adjacent station’s data treatment? Hope that makes sense … does it create, in effect, some kind of feedback loop?

Editor
January 22, 2013 5:40 pm

Kev-in-Uk says:
January 22, 2013 at 4:35 pm

You say we need to know if a shift/jump is spurious – which is of course what you see from a long term series – it should ‘jump’ out at you from a simple plot. Contrast that, say, with something like slow electronic sensor drift – which wouldn’t jump out at you without the long term plot and some curious ‘human’ intervention or double checking. At this moment in time, I fail to grasp how either would/could be ‘caught’ by the BEST method of cutting and splicing.

It wouldn’t be caught by the scalpel method, or by the GHCN method either, if the change is slow … but then I know of no method to identify it if it is slow.
That kind of slow change, however, is a separate confounding problem. You can only do as much as you can do.
w.

Editor
January 22, 2013 6:54 pm

Kev-in-Uk says:
January 22, 2013 at 4:35 pm

… At this moment in time, I fail to grasp how either would/could be ‘caught’ by the BEST method of cutting and splicing. Indeed, if a fault in the data is present, it appears to me that it could be missed altogether, or artificially amplified by the procedure – depending on the data treatment. For example, if you have five stations, and 3 show warming but 2 show cooling, does the algorithm assign the weighting in favour of the 3 over the 2? Or does it take the next adjacent five stations to those to make a decision, etc, etc.

The process runs like this. From the nearest stations, you construct an average temperature anomaly. Then you compare the Station X anomaly to that local average temperature anomaly – but you are not comparing trends.
Instead, you first subtract the local average temperature anomaly from Station X anomaly to give a series of differences. Then you use a mathematical algorithm that sweeps from one end of the difference data to the other, and compares the variance of the left hand side to the variance of the right hand side. If there is a discontinuity, it shows up as a jump in the graph when the algorithm sweeps past the jump. This procedure has nothing to do with the trends, it just reveals discontinuities. What you do with them is then your choice.
w.
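To make that sweep concrete, here is a guess at a minimal implementation; it is not BEST’s actual code. Willis speaks of comparing the variances of the two sides; the common variant sketched here compares the two sides’ means normalized by their pooled variance, which peaks sharply at a spurious step:

import numpy as np

rng = np.random.default_rng(2)
n = 360
diffs = rng.normal(scale=0.2, size=n)   # Station X minus the local average
diffs[200:] += 0.8                      # a spurious 0.8 deg C jump at month 200

def sweep(x, margin=12):
    """t-like statistic at every candidate split point of the series."""
    stats = np.zeros(len(x))
    for k in range(margin, len(x) - margin):    # skip tiny end segments
        left, right = x[:k], x[k:]
        pooled = np.sqrt(left.var(ddof=1) / len(left) +
                         right.var(ddof=1) / len(right))
        stats[k] = abs(left.mean() - right.mean()) / pooled
    return stats

s = sweep(diffs)
print("most likely breakpoint:", s.argmax())    # close to 200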

January 23, 2013 7:56 am

@Willis 3:43 pm. an accusation of cherry picking is an accusation of deliberate misrepresentation of the data, and I don’t brook a man calling me a liar, no matter how politely.
No disrespect intended. I didn’t call you a liar, nor use any synonym for one. In my writings on WUWT I think I have shown you great respect.
In my clumsy way, I was pointing out that you gave three examples of station changes from a much larger population of possibilities. No fair-minded person would ever expect conversational examples to be random samples of the whole.
I cherry-pick all the time in the testing of theories and processes. When you test boundary conditions, by design you do not pick the data fairly or randomly.
Once again, no disrespect was intended. I apologize.

Gail Combs
January 23, 2013 8:45 am

I was curious.

OMICS Publishing Group
http://www.omicsonline.org
Los Angeles, California
500 – 1,000 Employees
Management Level: C-Level (4), Director (2), Manager (3), Non-Manager (7)
Job Function: Engineering & Technical (7), Human Resources (1), Medical & Health (18), Operations (1), Sales (1), Scientists (6)

Spine Journal seems to be the owner link
Predatory publishers are corrupting open access
Is Omicsonline a scientific scam? Yes it is.
Nowhere can I find any information on who the owners are, who the CEO is, or who sits on the Board of Directors.

Editor
January 23, 2013 8:49 am

Stephen Rasey says:
January 23, 2013 at 7:56 am

@Willis 3:43 pm.

an accusation of cherry picking is an accusation of deliberate misrepresentation of the data, and I don’t brook a man calling me a liar, no matter how politely.

No disrespect intended. I didn’t call you a liar, nor use any synonym for one. In my writings on WUWT I think I have shown you great respect.
In my clumsy way, I was pointing out that you gave three examples of station changes from a much larger population of possibilities. No fair-minded person would ever expect conversational examples to be random samples of the whole.
I cherry-pick all the time in the testing of theories and processes. When you test boundary conditions, by design you do not pick the data fairly or randomly.
Once again, no disrespect was intended. I apologize.

Thank you kindly, sir, you are a gentleman, the issue is forgotten.
For the future, I would caution you about accusing people of “cherry-picking”. Perhaps you use it to mean selecting data for a particular purpose. However, that is far from the common meaning. Usually, it is intended as a slur, certainly with huge negative connotations. It implies that you are carefully selecting data, not to test limits as you suggest, but to fool people about the results of a particular analysis.
For example, “cherry picking” is listed among Wikipedia’s list of logical fallacies. Here’s a definition:

Cherry picking is the act of pointing at individual cases or data that seem to confirm a particular position, while ignoring a significant portion of related cases or data that may contradict that position.

A person who is cherry picking is lying about what is going on on the ground: they are specially selecting their data to support their case while claiming (or implying) that they have taken a fair selection of data. In other words, it is a lie; they have lied (by commission or omission) about the data selection process.
That’s why I took offense. Cherry picking is definitely not something an honest scientist would do. So you accused me of not being an honest scientist.
However, that’s all for the future, as I said, you have cleared the slate entirely, we can move forwards.
I apologize for the interruption, and we now return you to your usual programming.
w.
PS—Let me point out again that
1) I gave, not just three examples of reasons for jumps in the data, but the three most common reasons. Together, they probably account for 95% of the jumps in temperature data.
2) I did not say that that was all of the possibilities, just the common ones.
So no, even by your definition I was not “cherry picking” in any sense of the word.

January 23, 2013 11:30 am

@Willis 3:21 pm:

Finally, you ask above, should we call it a new record every time we paint the Stevenson screen? I say if it makes a statistically significant difference, so that it is mathematically detectable as spurious, then sure. Otherwise, if you paint the screen every twenty years, you artificially introduce a spurious twenty-year cycle into the data … which is the exact problem I’m pointing at. Not cutting the data leaves spurious cycles in it, long-term cycles.

This gets to the heart of the matter. I believe that many times, perhaps most, we should not create a new record even if the jump is obvious. I will follow up with an Excel chart treatment, but first I’ll describe what I think matters most.
Let me nominate the occasional “painting of a Stevenson screen” as a member of a class of events called recalibration of the temperature sensor. Other members of the class might be: weeding around the enclosure, replacement of degrading sensors, trimming of nearby trees, removal of a bird’s nest, and other actions that might fall under the name “maintenance”.
A property of this “recalibration class” is that there is a slow buildup of instrument drift, then a quick, discontinuous offset to restore calibration. At time t=A0 the sensor is set up for use at a quality satisfactory to someone who signs the log. The station operates with some degree of human oversight. At time t=A9, a human schedules some maintenance (painting, weeding, trimming, sensor replacement, whatever). The maintenance is performed, and by the time the tools are packed up the station is ready to take measurements again at time t=B0. A recalibration event happened between A9 and B0. The station operates until time t=B9, when the human sees the need for more work. Tools up, work performed, tools down: t=C0, and we take measurements again. The intervals A0-A9 and B0-B9 are wide, likely many years; the A9-B0 and B9-C0 recalibration events are very short, probably within a sample period. My key point is that A0-A9 and B0-B9 contain instrument drift as well as the temperature record, while A9-B0 and B9-C0 relate to drift estimation and correction.
At what points in the record are the temperatures most trustworthy? How can they be any other than the “tools down” points A0, B0, C0? We go back to look at the temperature record and let BEST slice and dice with the scalpel. What if the scalpel detects a discontinuity at B0 and/or C0? Should it make a cut there? That all depends upon what happens next.
1. From everything I have read about the BEST process, it would slice the record into an A0-A9 segment and a B0-B9 segment, treat the A9-B0 and B9-C0 displacements as discontinuities, and discard them. BEST will honor the A0-A9 and B0-B9 trends and codify two episodes of instrument drift into real temperature trends. Not only will instrument drift and climate signal be inseparable, we have multiplied the drift in the overall record by discarding the correcting recalibration at the discontinuities.
There are at least two alternatives.
2. Don’t cut it at all. Treat A0-A9-B0-B9-C0-C9-D0… as one full-life segment. Look at the saw and not the saw-teeth. The low frequency signal is still trustworthy. It is a good long term record with recalibration points and temporary drift error. Yes, there is an intermediate frequency that is spurious, from the recalibration events. But the low frequency, the longest term trend, is still solid: A0-B0-C0-D0… is as good as it gets. Intermediate points, B1, B2 … B9, C1, contain some unknown degree of drift, but we have not baked it into the trend of each segment. We do not duplicate the drift. Over several segments the drift contribution to the trend will diminish instead of grow.
3. Gradually adjust each slice by the discontinuity estimated at the end of each slice. Cut the slices at A9-B0 and B9-C0. Using a trend algorithm, determine the absolute offset between the trends of A0-A9 and B0-B9; that will be the estimated instrument drift at A9 and B9. We should remove the A9 instrument drift from all along the A0-A9 segment. A reasonable approach is to assume the drift grew in proportion to time: subtract an amount of drift proportional to the time distance from the previous recalibration point. Adjust A0-A9 and B0-B9 such that A0′-A9′-B0′-B9′ no longer shows the recalibration discontinuity, and maintain that one long record and its lowest frequency. (A sketch of this follows below.)
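A minimal Python sketch of option 3, under my own simplifying assumptions: the function name is hypothetical, the offset at each break is estimated from straight-line fits on either side (one reading of “a trend algorithm”), and the drift is assumed to grow linearly between recalibrations:

import numpy as np

def remove_sawtooth_drift(x, breaks):
    """Return a continuous series with per-segment linear drift bled out.

    x      : 1-D array, the raw station record
    breaks : sorted sample indices of detected recalibration discontinuities
    """
    out = x.astype(float).copy()
    edges = [0] + list(breaks) + [len(x)]
    for i, b in enumerate(breaks):
        start, end = edges[i], edges[i + 2]
        # Fit trends on each side of the break and evaluate both at the break.
        p_left = np.polyfit(np.arange(start, b), out[start:b], 1)
        p_right = np.polyfit(np.arange(b, end), out[b:end], 1)
        offset = np.polyval(p_right, b) - np.polyval(p_left, b)
        drift = -offset                          # drift built up; the reset undid it
        ramp = np.linspace(0.0, 1.0, b - start)  # zero at the last recalibration
        out[start:b] -= drift * ramp             # subtract drift in proportion to time
    return out

# Toy check: a 0.002 deg/month true trend plus +0.01 deg/month of drift
# that is reset at sample 120. The pre-reset segment comes back near truth;
# the final segment keeps its drift, since no later recalibration reveals it.
t = np.arange(240, dtype=float)
raw = 0.002 * t + 0.01 * np.where(t < 120, t, t - 120)
fixed = remove_sawtooth_drift(raw, [120])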
When it comes to spurious events associated with recalibration, I maintain that #3 is a superior method to #2, which is superior to #1.
Here I focused on a subset of all discontinuities: events associated with recalibration of the site. I argue that recalibration should not slice records, for the recalibration is a return to the true trend that we must not lose. I have a saw-tooth case in mind, which implies the drift is uni-directional. Reality will be much more confused. So please forgive my picking a boundary case for discussion.
Certainly major station moves require a new record. Moving a station from Point X to Point Y within an airport’s grounds? A tougher call. Why move it? Was it because Point X had become thermally contaminated and you wanted to restore Class 1 status (recalibration) at Point Y? By the way, when you move it, a maintenance event happens, too. If so, that argues against slicing the record and discarding the offset at the identified discontinuity.

Editor
January 23, 2013 12:01 pm

Stephen, thanks for your thoughts.
The overwhelming majority of “recalibrations”, as you call them, will never reach statistical significance. As you point out they are things like “weeding around the enclosure, replacement of degrading sensors, trimming of nearby trees, removal of a bird’s nest”. By and large, none of those will cause a statistically significant jump in the data, including painting the screen on a regular basis. So they are a non-issue for this discussion, they just do what they do, make a very tiny “sawtooth” in the data with little effect.
As a result, I fear that what you are alluding to is a difference that makes no difference.
I am talking about the much larger events, events that are large enough to be statistically detectable as an anomaly in the record. These do not happen frequently enough to result in a “saw-toothed” signal, as you suggest. If they did, it wouldn’t be such a problem. Instead, they occur very occasionally and randomly, one or a few per record.
Now, there are thousands and thousands of records in the temperature dataset with one or two such anomalous jumps in them. As I pointed out, such anomalous jumps will be indistinguishable from a long-term trend or a long-term cycle or both. You are interested in long-term cycles … yet you focus on differences that don’t make a difference, and you gloss over the damage that leaving in the bogus data does to the very long-term issues that are of interest to you.
You think leaving in that bogus data preserves the long-term trends, where far too often they are spurious trends created by the bogus data.
w.

January 23, 2013 12:55 pm

I thought that in the past year there was a WUWT article about the temperature effects of painting a weather-beaten screen with a fresh coat of paint. The changes were not insignificant in comparison to measured warming rates of 0.1-0.3 deg C/decade. I have not found what I was looking for.
Related links of interest
The Metrology of Thermometers
Posted on January 22, 2011 by Anthony Watts
http://wattsupwiththat.com/2011/01/22/the-metrology-of-thermometers/

typical drift of a -100C to +100C electronic thermometer is about 1C per year! and the sensor must be recalibrated annually to fix this error.

Hard to believe it is that bad. Is it?
A typical day in the Stevenson Screen Paint Test
Posted on January 14, 2008 by Anthony Watts
http://wattsupwiththat.com/2008/01/14/a-typical-day-in-the-stevenson-screen-paint-test/
It shows a daily plot of temperatures for bare wood, whitewash, latex, and air.
The curves are quite close together, but the difference in the maximum is most important.
Is there a later compilation of the data?
The Smoking Gun At Darwin Zero
Posted on December 8, 2009 by Willis Eschenbach
http://wattsupwiththat.com/2009/12/08/the-smoking-gun-at-darwin-zero/

And with the Latin saying “Falsus in uno, falsus in omnibus” (false in one, false in all) as our guide, until all of the station “adjustments” are examined, adjustments of CRU, GHCN, and GISS alike, we can’t trust anyone using homogenized numbers.
Regards to all, keep fighting the good fight,
w.

“Smoking Gun” is what brought me to WUWT.