A courtesy note ahead of publication for Risbey et al. 2014

People send me stuff. In this case I have received an embargoed Nature paper and press release from another member of the news media who wanted me to look at it.

The new paper is scheduled to be published in Nature and is embargoed until 10AM PDT Sunday morning, July 20th. That said, Bob Tisdale and I have been examining the paper, which oddly includes co-authors Dr. Stephan Lewandowsky and Dr. Naomi Oreskes and is on the topic of ENSO and “the pause” in global warming. I say oddly because neither Lewandowsky nor Oreskes concentrates on physical science; they direct their work toward psychology and science history respectively.

Tisdale found a glaring, potentially fatal oversight, which I verified, and as a professional courtesy I have notified two people who are listed as authors on the paper. It has been 24 hours, and I have had no response from either. Since it is possible that they have not received these emails, I thought it would be useful to post my emails to them here.

It is also possible they are simply ignoring the email. I just don’t know. As we’ve seen previously in attempts at communication with Dr. Lewandowsky, he often turns valid criticisms into puzzles and taunts, so anything could be happening behind the scenes here if they have read my email. It would seem to me that they’d be monitoring their emails ahead of publication to field questions from the many journalists who have been given this press release, so I find it puzzling there has been no response.

Note: for those who would criticize my action as “breaking the embargo”, I have not even named the paper title, its DOI, or used any language from the paper itself. If I were an author, and somebody spotted what could be a fatal blunder that made it past peer review, I’d certainly want to know about it before the press release goes out. It is about 24 hours to publication, so they still have time to respond, and hopefully this message on WUWT will make it to them.

Here is what I sent (email addresses have been link disabled to prevent them from being spambot harvested):

===============================================================

From: Anthony

Sent: Friday, July 18, 2014 9:01 AM

To: james.risbey at csiro.au

Subject: Fw: Questions on Risbey et al. (2014)

Hello Dr. Risbey,

At first I had trouble finding your email, which is why I sent the message to Ms. Oreskes first. I dare not send it to Professor Lewandowsky, since, as we have seen by example, all he does is taunt people who have legitimate questions.

Can you answer the question below?

Thank you for your consideration.

Anthony Watts

—–Original Message—–

From: Anthony

Sent: Friday, July 18, 2014 8:48 AM

To: oreskes at fas.harvard.edu

Subject: Questions on Risbey et al. (2014)

Dear Dr. Oreskes,

As a climate journalist running the most viewed blog on climate, I have been graciously provided an advance copy of the press release and paper Risbey et al. (2014), which is being held under embargo until Sunday, July 20th. I am in the process of helping to co-author a rebuttal to Risbey et al. (2014). I think we’ve spotted a major blunder, but I want to check with a team member first.

One of the key points of Risbey et al. is the claim that the selected 4 “best” climate models could simulate the spatial patterns of the warming and cooling trends in sea surface temperatures during the hiatus period.

But reading and re-reading the paper we cannot determine where it actually identifies the models selected as the “best” 4 and “worst” 4 climate models.

Risbey et al. identifies the 18 original models, but not the 8 singled out as “best” or “worst”.

Risbey et al. presented histograms of the modeled and observed trends for the 15-year warming period (1984-1998) before the 15-year hiatus period in cell b of their Figure 1. So, obviously, that period was important. Yet Risbey et al. did not present how well or poorly the 4 “best” models simulated the spatial trends in sea surface temperatures for the important period of 1984-1998.

Is there some identification of the “best” and “worst” referenced in the paper that we have overlooked, or is there a reason for this oversight?

Thank you for your consideration.

Anthony Watts

WUWT

============================================================

UPDATE: as of 10:15AM PDT July 20th, the paper has been published online here:

http://www.nature.com/nclimate/journal/vaop/ncurrent/full/nclimate2310.html

Well-estimated global surface warming in climate projections selected for ENSO phase

Abstract

The question of how climate model projections have tracked the actual evolution of global mean surface air temperature is important in establishing the credibility of their projections. Some studies and the IPCC Fifth Assessment Report suggest that the recent 15-year period (1998–2012) provides evidence that models are overestimating current temperature evolution. Such comparisons are not evidence against model trends because they represent only one realization where the decadal natural variability component of the model climate is generally not in phase with observations. We present a more appropriate test of models where only those models with natural variability (represented by El Niño/Southern Oscillation) largely in phase with observations are selected from multi-model ensembles for comparison with observations. These tests show that climate models have provided good estimates of 15-year trends, including for recent periods and for Pacific spatial trend patterns.

Of interest is this:

Contributions

J.S.R. and S.L. conceived the study and initial experimental design. All authors contributed to experiment design and interpretation. S.L. provided analysis of models and observations. C.L. and D.P.M. analysed Niño3.4 in models. J.S.R. wrote the paper and all authors edited the text.

The rebuttal will be posted here shortly.

UPDATE2: rebuttal has been posted

Lewandowsky and Oreskes Are Co-Authors of a Paper about ENSO, Climate Models and Sea Surface Temperature Trends (Go Figure!)

336 Comments
July 19, 2014 9:58 am

Anthony,
Very well written!! Nothing “extra” added, simply asking a question.
It will be interesting to see if you get a response, or the release gets pushed back.

Jeff D.
July 19, 2014 10:00 am

Friends helping friends.

Bloke down the pub
July 19, 2014 10:01 am

Always with the negative waves. I’m sure that it couldn’t possibly make any difference to the results.

July 19, 2014 10:03 am

Best? Worst? They’re all as bad as each other, so does it really matter? With climate models it’s more about “artistic impression” than reality. Think ice skating versus ice hockey…

MattN
July 19, 2014 10:04 am

I have 4 best guesses as to the response.

Crispin in Waterloo
July 19, 2014 10:06 am

It will of course be of interest to me to know how close the best and worst are to the actual temperatures as far as they are known.
Nothing could be better for us all than a validated model in the field of climate science.

Editor
July 19, 2014 10:06 am

Anthony,
I was going to ask if you were sent the supplementary data that so often accompanies papers published in Nature, but it is unusual for papers relying on separate supplements to refer the reader to them, so I am supposing this is not an oversight of the sender in this case. Very well handled.
REPLY: I asked the journalist if an SI was included, and none was listed. Still, such an important label of the best and worst models, central to the claim of the paper, surely would not be relegated to the depths of an SI. – Anthony

Justthinkin
July 19, 2014 10:12 am

So we have a shrink, and a history teacher, pretending to be climate “scientists”? Just how does one get in on this scam?

Mark Bofill
July 19, 2014 10:14 am

Lew again huh. He’s probably only doing this so he can write some stupid study about the reception the paper receives.

john robertson
July 19, 2014 10:18 am

I guess Loo is out of paper again, perhaps he could be deterred by using high gloss instead of newsprint.

Jon
July 19, 2014 10:23 am

According to
http://www.mdpi.com/2076-0787/3/3/299/pdf
this actually is a debate around politically/ideologically motivated use of science as a tool to promote political ideology and solutions, and science resisting this?
“2. ‘The Plan’
For more than 25 years the conventional view has been that an international political solution to climate change can be negotiated if driven by the engine of science. That is, if a strong enough scientific consensus on the causes and consequences of anthropogenic climate change could be forged and sustained, then the compelling force of such rationality would over-ride the differences in worldviews, beliefs, values and ideologies which characterise the human world. Such a scientific consensus would bring about the needed policy solutions. This is the “If-then” logic of computer programming, the conviction that the right way to tackle climate change is through what Dan Sarewitz at Arizona State University has called “The Plan” [8]. And there are those who still believe in this project. They excoriate others who obstruct and obscure this pure guiding light of rationality—a position adopted, for example, by Naomi Oreskes and Erik Conway in their recent book Merchants of Doubt [9].”

Eliza
July 19, 2014 10:32 am

WUWT mistake again..Throwing your pearls to the pigs. I would have kept the info, allowed it to be published and THEN only then nailed both the authors and The Journal. Huge mistake once again by WUWT re SG
REPLY: your opinion is being given all the due consideration it deserves, thank you – Anthony

Non Nomen
July 19, 2014 10:33 am

You caught them napping, I suppose.
It might be helpful to find out the names of the peer reviewers…

July 19, 2014 10:34 am

Omission is a better word than blunder

Eliza
July 19, 2014 10:34 am

If it has been published I retract above LOL

pokerguy
July 19, 2014 10:42 am

“It is also possible they are simply ignoring the email.”
Well let’s put it this way. Had your email contained effusive praise for their brilliant work, they’d have answered you in a New York minute.

wobble
July 19, 2014 10:44 am

Would it make sense to also send your questions to your contacts at Nature that wanted you to look at it? Or were they simply attempting to generate media interest in the article rather than trying to improve the quality?
REPLY: to be clear, this was sent to me from another journalist, not the Nature editors or PR department – Anthony

George Steiner
July 19, 2014 10:46 am

Eliza says:
July 19, 2014 at 10:32 am
Mr. Watts is interested in collecting more brownie points towards sainthood. He is not interested in effective opposition to the CO2 scam.
REPLY: your ridiculous opinion is noted, and wrong – just watch and see what happens. – Anthony

DontGetOutMuch
July 19, 2014 10:47 am

Anthony, the best models are secret, as you would only try to poke holes in them. We should just take Lewandowsky’s word for it, after all he is a doctor.
PS. I hope you did not rupture anything important snickering at my obvious sarcasm…
Hmmm… Snicker Snark beware the Jabberydork!
Oh looky, time for me meds!

G. E. Pease
July 19, 2014 10:52 am

Anthony,
My guess is that your notifications went into the two individuals’ junk/spam mail, and they do not check it daily (or ever?).
REPLY: I check my spam folders daily, but noting it here almost certainly ensures they will see it, even if my emails are relegated to spam. – Anthony

July 19, 2014 10:56 am

It would appear I am susceptible to conspiracy theories as I can’t help wondering what contribution Oreskes and Lew could have made to this paper.
Is it possible that the choice of “best” and “worst” is not calculated by comparison with the real world but rather with socially constructed viewpoints? They could contribute to a subjective choice of models.
In which case, the whole thing becomes as circular as the flight of the oozlum bird.
But I might be a conspiracy theorist

Peter Miller
July 19, 2014 11:00 am

Lew must suffer from that embarrassing syndrome where individuals suffer an overwhelming urge to have their opinions shot down in flames.
I think psychologists call it ROOFOFF – Recursive Overwhelming Obsessive Fury Over Fanciful Facts.

July 19, 2014 11:01 am

By the way:
New Scientist reported on Lewandowsky’s Recursive Fury paper in its Feedback section this week.
New Scientist found no fault in the paper and reported that it proved sceptics are all nutters and the complaints could be ignored as it was proven that sceptics are all nutters and that the complaints are actually more proof that sceptics are all nutters…
They didn’t mention that the paper was debunked.
Presumably next week Feedback will include “Buzz Aldrin believes the Moon Landings were faked” as apparently they believe he does.

Joe G
July 19, 2014 11:03 am

“Pause? Dat ain’t no steenkin’ pause! Dat is the climate engine getting a tune up and revving it’s freakin’ motor to run right over you steenkin’ denialists!”
Remember- The cold Antarctic glacial runoff is feeding the expanding Antarctic sea ice extent. The oceans are reaching their max capacity for storing CO2 without causing mass extinctions. We can’t predict the weather for 10 days out yet we sure as heck can model the climate for decades in the future because hey climate is not weather. 🙂

Cheshirered
July 19, 2014 11:05 am

Yet *another* explanation for the Pause – is that 14 now? Amazing, considering the science was ‘settled’.

David L. Hagen
July 19, 2014 11:09 am

Excellent questions that the reviewers should have caught.

July 19, 2014 11:12 am

Is this the same James Risbey who wrote this paper in 2011: http://www.marine.csiro.au/~ris009/pubfiles/cc_know_ign_clires.pdf ? If so, it is hard for me to square what seems to be the thrust of the current paper with “The ability of CGCMs to simulate changes in the 3d flow in the atmosphere is severely hampered by the lack of resolution in the ocean component of current CGCMs. The ocean models in CGCMs used for climate projections do not resolve mesoscale eddies. This means that they don’t resolve the main source of dynamic instability of the flow in these models and only very crudely parameterize some of the components of that instability (Section 3.2).” If it is the same author, did he make a breakthrough, or do CGCMs at the global level not suffer from these same limitations?

ferdberple
July 19, 2014 11:13 am

the selected 4 “best” climate models
============
the obvious mechanism is that they checked all the models and cherry picked the 4 that accidentally happened to have the best fit with observations.
as has been shown repeatedly, when you cherry pick a sample from a larger population because they happen to match observations, this does not demonstrate the sample has any skill at predicting the observations. the laws of probability tell us that some members of a population will match the observations simply by chance.
thus, for example, the hockey stick, and similar results. selection on the dependent variable leads to spurious correlations.
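[Editor's note: the selection effect described above is easy to demonstrate numerically. The sketch below is purely illustrative (my own toy numbers; nothing from the paper itself): it generates random-walk "models" with no skill by construction, picks the 4 whose trend best matches the "observations" over one window, and then checks them over a second window, where the apparent advantage typically evaporates.]

```python
import random

random.seed(1)

def random_walk(n, step=0.1):
    """A purely random series: no predictive skill, by construction."""
    x, out = 0.0, []
    for _ in range(n):
        x += random.gauss(0.0, step)
        out.append(x)
    return out

def trend(seg):
    """Simple end-minus-start trend over a segment."""
    return (seg[-1] - seg[0]) / (len(seg) - 1)

obs = random_walk(60)                          # stand-in "observations"
models = [random_walk(60) for _ in range(18)]  # stand-in "models"

A, B = slice(0, 30), slice(30, 60)             # selection window, test window

def err(m, w):
    """Absolute trend mismatch between a model and obs in window w."""
    return abs(trend(m[w]) - trend(obs[w]))

# Cherry-pick the 4 models that best match the observed trend in window A.
ranked = sorted(models, key=lambda m: err(m, A))
best4, rest = ranked[:4], ranked[4:]

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

# In window A the "best 4" fit well -- they were chosen for exactly that.
# In window B the advantage generally disappears: it was chance, not skill.
print("window A:", mean(err(m, A) for m in best4), "vs", mean(err(m, A) for m in rest))
print("window B:", mean(err(m, B) for m in best4), "vs", mean(err(m, B) for m in rest))
```

By construction the selected models beat the rest in the selection window; whether they do in the test window varies run to run, which is the point: selection on the dependent variable manufactures in-sample agreement.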

July 19, 2014 11:14 am

What a discouraging start to a lovely summer weekend; an invitation to review the latest weeping excrescence from the anti-science Púca worshippers. Needless to say, given the authors, it’s a very dubious trap.
Now because I commented that, I must be a conspiracy nut. Not that Oreskes and Lewnydowsky are capable of truly being conspiracists; because they’re blind oblivious parasitic fleas (Xenopsylla cheopis) chock full of enterobacteria (Yersinia pestis) infesting the most disgusting rats. A proper conspiracist must be capable of maintaining multiple layers of deceit; whereas the CAGW believers tend to stick with outbursts of opprobrium and weird application of statistics to poorly kept data.
Speaking of poorly kept data. Anyone else suspect that the tango twins mentioned above are actually waiting for skeptics to thresh the models looking for a so-called best four?
What factors truly make any model best? Just because one accidentally seems to replicate a chosen period? Running what entry positions? Does the model return the same results every time?
Will all data be posted?
Will all model code be published?
Anthony: Your response is well made and spoken. You are treating them as proper scientists. As you’ve demonstrated so often, you are responding as a gentleman would respond to gentlefolk.
Be careful with any next steps. Remember a previous Lewnydoodoo involved a deliberate deception on who sent what. The lack of a response is anomalous or perhaps intentional.
Good Luck!

Chris B
July 19, 2014 11:16 am

Eliza says:
July 19, 2014 at 10:32 am
WUWT mistake again..Throwing your pearls to the pigs. I would have kept the info, allowed it to be published and THEN only then nailed both the authors and The Journal. Huge mistake once again by WUWT re SG
————————————————
Perhaps that’s the difference between an honest skeptic and dishonest ideologues.

Mick
July 19, 2014 11:17 am

You do realize the presence of non physical scientists Oreskes and Lewandowsky on the list of authors is probably so the BBC can treat them as valid “climate experts”. I guess we have to look forward to a period of their “views” being paraded by the Beeb as consensus climate science.

July 19, 2014 11:19 am

Eliza says:
July 19, 2014 at 10:32 am
WUWT mistake again..Throwing your pearls to the pigs. I would have kept the info, allowed it to be published and THEN only then nailed both the authors and The Journal. Huge mistake once again by WUWT re SG

=================================================================
A mistake? Not if the goal is accuracy and honesty in the field.

Björn from Sweden
July 19, 2014 11:21 am

Oreskes on climate science???
Small world, not many rats left onboard the sinking AGW-vessel.
This can only be an act of desperation.
Anyway, dont expect a more helpful response than:
“Why should I make the data available to you, when your aim is to try and find something wrong with it…”

July 19, 2014 11:26 am

One of the key points of Risbey et al. is the claim that the selected 4 “best” climate models could simulate the spatial patterns of the warming and cooling trends in sea surface temperatures during the hiatus period.
Well I suppose I will have to wait for the paper, but the obvious follow up question would be how well did they simulate the spatial patterns prior to the hiatus period? Further, how well did they simulate spatial patterns other than sea surface both before and after the hiatus period? Four models getting one part of the problem right for one part of the time = FAIL.
What might be equally interesting is if this provokes a couple of other possible reactions:
1. The modelling groups fingered as “the worst” defending their position and in so doing, attacking this paper’s credibility.
2. If the paper holds up, and the four worst are really that much different and that bad, then what is the excuse for continuing to use them as part of the ensemble mean? If the paper holds up, these models should be dropped from the ensemble mean for their inaccuracy, the side effect of which would be to lower the sensitivity calculation of the ensemble.

July 19, 2014 11:28 am

Let’s see.
We know there are 4 best and 4 worst.
It might not be an oversight to not name them.
Hint. Modelers and those who evaluate models generally don’t identify the best versus the worst.
Leads to model wars.
Mine is bigger. No mine is.. Blah blah blah
There are some authors who do name names however.

July 19, 2014 11:31 am

Hoffer gets at one reason for not naming names.
But one can not simply throw out the worst
The issue is the four worst on this test will be the best on
Some other test

July 19, 2014 11:33 am

Anthony, I think your approach was the best of all the options, and I would also agree with Mr. Mosher that omission is the better word, but the problem with Mr. Mosher’s comment was that you used it in regard to one of your papers and not that of Risbey. You wrote “oversight” to that effect.

Björn from Sweden
July 19, 2014 11:40 am

” Mick says:
July 19, 2014 at 11:17 am
You do realize the presence of non physical scientists Oreskes and Lewandowsky on the list of authors is probably so the BBC can treat them as valid “climate experts”. I guess we have to look forward to a period of their “views” being paraded by the Beeb as consensus climate science.”
Now thats a brilliant observation, very good Mick!
You nailed it!

July 19, 2014 11:55 am

I seem to recall during the climategate thing that there was some controversy over which data were used for a particular analysis; that the selection of stations — several out of very many, if I recall correctly, was neither published nor furnished upon request. The reply was “we provided you with all the data”.
We could be seeing the early days of a similar kind of reply here.

A Generalist
July 19, 2014 11:55 am

Hmm. I’ve got an undergraduate degree in political theory, so it seems that qualifies me to co-author a paper on climate change? I can certainly pitch in lessons learned from Machiavelli. Whoops! Seems they’ve already read the Cliff Notes! Anthony, I hope they don’t pursue legal action against you regarding the embargo. But I wouldn’t be at all surprised if they did.

July 19, 2014 11:56 am

Steven Mosher;
Steven Mosher says:
July 19, 2014 at 11:31 am
Hoffer gets at one reason for not naming names.
But one can not simply throw out the worst
The issue is the four worst on this test will be the best on
Some other test
>>>>>>>>>>>>>>>>>>>>
Ah yes. If the model got something right, we should keep it’s results across the board and average the parts known to be wrong into the ensemble anyway. Pffft.

Stephen Richards
July 19, 2014 12:08 pm

Steven Mosher says:
July 19, 2014 at 11:31 am
Hoffer gets at one reason for not naming names.
But one can not simply throw out the worst
The issue is the four worst on this test will be the best on
Some other test
Steven, you been eating your weetabix again. That is a really important point you make. Good on yer. Only thing I would change is “will” to might or could or maybe. 🙂

ferdberple
July 19, 2014 12:17 pm

But one can not simply throw out the worst
The issue is the four worst on this test will be the best on
Some other test
================
thus demonstrating that it is chance, not skill that is determining the model results.
think of it this way. you are back in school. every day there is a math quiz. the worst 4 students one day are not going to be the 4 best students another day, no matter how many times you give a quiz. that is because the scores on the math quiz reflect skill.

richard verney
July 19, 2014 12:35 pm

It is very difficult to properly understand what is going on without sight of the paper.
It does surprise me that, if reliance is being placed upon the ‘best’ 4 models, full details of those models (how they are tested and validated, and what they say over their entire range) are not set out in the paper, along with the reason for selecting those particular models. Are they superior in some way, or is it just by chance that their outputs better reflect observational data?
Whilst I can see both the pros and the cons of contacting the authors with your enquiry prior to publication of the paper, and I can therefore see why you considered that to contact them is the best approach (although others may disagree, I myself consider it is the best approach), I am not sure why you would wish to share this with us, the readers of your blog, prior to the publication of the paper.
When the paper is published, you could have provided a copy of the paper on this blog, and at the same time set out your (and Bob’s) comments, and explain that you had contacted the authors in advance of publication but they had not responded. That might have been the most ‘saintly’ approach, since it is possible that people will not like the fact that you have referred to an embargoed paper, in advance of publication, and in future you may not be given copies of such papers in advance. Not a criticism, just a thought.
Also, I am unsure, from a tactical perspective, why in your message you would say “.. I am in the process of helping to co-author a rebuttal…” since this may cause the shutters to go up, whereas a more neutral response not mentioning this fact, but merely raising your enquiry regarding the models might be more likely to elicit a constructive response from the authors. As soon as you mention rebuttal, the authors no doubt jump into defence mode. That could explain their lack of response.
Again, not a criticism per se, just my thoughts.

July 19, 2014 12:35 pm

“ferdberple says:
July 19, 2014 at 12:17 pm
But one can not simply throw out the worst
The issue is the four worst on this test will be the best on
Some other test
================
thus demonstrating that it is chance, not skill that is determining the model results.
################
not really. clearly you havent looked at the matter

July 19, 2014 12:38 pm

“Steven, you been eating your weetabix again. That is a really important point you make. Good on yer. Only thing I would change is “will” to might or could or maybe. 🙂
Gavin and others have made the same point. Its a known problem cause the democracy of the models.
not really headline news

July 19, 2014 12:41 pm

hooofer
“Ah yes. If the model got something right, we should keep it’s results across the board and average the parts known to be wrong into the ensemble anyway. Pffft.”
I wish that willis were here to tell you to quote my words
Simple fact is that the avergae of models is a better tool than any given one.
deal with it.
Same with hurricane prediction in some cases.
does it make sense to average models? probably not. But you get a better answer that way
so until someone devises a test to score models.. that is what you have,
pragmatics
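[Editor's note: the standard statistical argument behind the "average of models" claim can be sketched with made-up numbers (my illustration, not anything from the paper): if each model's error is independent and roughly zero-mean, averaging shrinks the error by about the square root of the ensemble size. The catch, which is the crux of the objections in this thread, is that biases shared by all the models do not cancel.]

```python
import random

random.seed(0)

TRUTH = 1.0            # the quantity each toy "model" estimates
N_MODELS, N_TRIALS = 18, 2000

single_err = ensemble_err = 0.0
for _ in range(N_TRIALS):
    # Each model = truth + an independent, zero-mean random error.
    estimates = [TRUTH + random.gauss(0.0, 0.5) for _ in range(N_MODELS)]
    ensemble_mean = sum(estimates) / N_MODELS
    single_err += abs(estimates[0] - TRUTH)     # one model on its own
    ensemble_err += abs(ensemble_mean - TRUTH)  # the multi-model mean

print("mean |error|, single model :", round(single_err / N_TRIALS, 3))
print("mean |error|, ensemble mean:", round(ensemble_err / N_TRIALS, 3))
# A shared bias (e.g. TRUTH + 0.3 added to every model) would pass through
# to the ensemble mean unchanged: averaging only cancels independent noise.
```

Under these assumptions the ensemble mean's error is roughly 1/sqrt(18) of a single model's; the sketch says nothing about whether real climate model errors are actually independent.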

July 19, 2014 12:48 pm

I agree with others that it will be useful to know which the best are for this particular feature of climate, because they can then be tested against other features. If they aren’t the best for other features then it would seem more likely they match by chance than by skill. Surely the AGW community has somehow to narrow down the vast range of models to a handful that they consider to be most skillful? Discarding the obvious outliers would demonstrate progress.

July 19, 2014 12:48 pm

Colluded before so here is a context for a trio. Note the ages and probable experience.
Whack javascript off first
http://www.scientificamerican.com/author/stephan-lewandowsky-james-risbey-and-naomi-oreskes/

July 19, 2014 12:49 pm

so until someone devises a test to score models…
We have a test score for models: accurate, repeated predictions.
Reality = 1
Models = 0

Jimbo
July 19, 2014 12:51 pm

Steven Mosher says:
July 19, 2014 at 11:28 am
Let’s see.
We know there are 4 best and 4 worst.
It might not be an oversight to not name them.
Hint. Modelers and those who evaluate models generally don’t identify the best versus the worst….

But if this is science then how can you know your replication of the model ‘experiment’ matches theirs?
Anyway, here are some other climate models. This is like a man with 2 watches showing different times, he’s never sure of the time. Modelling in the dark.

The Key Role of Heavy Precipitation Events in Climate Model Disagreements of Future Annual Precipitation Changes in California

Abstract
Climate model simulations disagree on whether future precipitation will increase or decrease over California, which has impeded efforts to anticipate and adapt to human-induced climate change……..Between these conflicting tendencies, 12 projections show drier annual conditions by the 2060s and 13 show wetter. These results are obtained from 16 global general circulation models downscaled with different combinations of dynamical methods…
http://dx.doi.org/10.1175/JCLI-D-12-00766.1

Bill Illis
July 19, 2014 12:55 pm

The most accurate climate models are the ones that have huge decadal oscillations and they just happen to be oscillating on the down-side right now meeting the flat hiatus temperatures.
In other words, the accurate global warming models are the ones that project no global warming.

Harry Passfield
July 19, 2014 12:56 pm

Steven Mosher says:
July 19, 2014 at 12:35 pm

“ferdberple says:
July 19, 2014 at 12:17 pm
But one can not simply throw out the worst
The issue is the four worst on this test will be the best on
Some other test
================
thus demonstrating that it is chance, not skill that is determining the model results.
################
not really. clearly you havent looked at the matter

Nope. Steve, you’re wrong there.
[See, this kind of debate is easy. I learnt it at kindergarten.]
The rest of this scientific debate goes like this:
‘Tis
‘Tisn’t
‘Tis
‘Tisn’t
[ad infinitum]
Steve, for an intelligent man you really do cause readers to waste a load of time reading your [kindergarten] remarks (and me a load of time responding to them!).

Jimbo
July 19, 2014 12:58 pm

Risbey et al. identifies the 18 originals, but not the other 8 that are “best” or “worst”.

This is garbage. Anyone can carry out MULTIPLE model runs and point to 4 of the best matchers for, say, precipitation over a region. This doesn’t tell me anything. Just look at the models the IPCC uses for its global surface temperature projections. You could pick the 4 best performers and publish a paper. Yet the vast majority failed miserably.

July 19, 2014 1:02 pm

Steven Mosher;
I wish that willis were here to tell you to quote my words
I did quote your words. It is right there upthread, go read it again.
does it make sense to average models? probably not. But you get a better answer that way
so until someone devises a test to score models.. that is what you have,

In one breath you say averaging models probably doesn’t make sense and in the next you say you get a better answer. Mosh, you can’t have it both ways.
But there’s really no way to justify averaging of models. What if 10 new models appeared tomorrow showing even higher sensitivity than the current crop? Would you just add them in and say, hey, 28 is better than 18? What if they were all lower? Would that make it better? What if 10 of the 18 current models were discontinued for some reason, all of them high sensitivity? Would you argue that the remaining 8 should continue to be averaged? Would you then apply an “adjustment” to the average to gloss over the resulting negative discontinuity?
Presuming that the errors in artificial constructs cancel each other out by being averaged together and thus give you a “better” answer is ridiculous.

Harry Passfield
July 19, 2014 1:02 pm

Steven Mosher says:
July 19, 2014 at 12:38 pm

“Gavin and others have made the same point. Its a known problem cause the democracy of the models.”

“..the democracy of the models?” Say what? The models have a vote???

Bloke down the pub
July 19, 2014 1:06 pm

Steven Mosher says:
July 19, 2014 at 12:41 pm
‘Simple fact is that the avergae of models is a better tool than any given one.
deal with it.’
So the average of twelve wrong clocks will tell you the right time, or not as is more likely. The truth is that models only become a ‘better tool’ once they have proved a reasonable power of prediction.
As a smart person once said, ‘all models are wrong, but some can be useful’ or something like that.

Henry Galt
July 19, 2014 1:07 pm

Sometime, in our (still dark) future:
“Please stop your attempts to extract the urine. I’m a published climate scientist doncha know.” Sincerely, S Lewandowsky/N Oreskes

jorgekafkazar
July 19, 2014 1:09 pm

Steven Mosher says: “Simple fact is that the avergae of models is a better tool than any given one. deal with it.”
You mean, the avergae of models is a less worse tool than any given one. Dealt with.

cedarhill
July 19, 2014 1:11 pm

The latest effort seems to be to just ignore those who don’t agree with the warmists, à la the BBC excluding opposing views. Expect trumpeting of the big “4 best” around the world for the masses and only the whimper of the internet for the inquisitive.

Randy
July 19, 2014 1:16 pm

On a related note: I find it utterly hilarious that, to not be a “science denier”, you must deny the pause. From reading various blogs, one could assume that to stop being a science denier, I need to take random variables from the papers attempting to explain the lack of warming and fuse them together. Clearly only science deniers would fail to do so!! LOL

Justthinkin
July 19, 2014 1:18 pm

“Anyway, dont expect a more helpful response than:
“Why should I make the data available to you, when your aim is to try and find something wrong with it…”
BINGO. I’m just a lowly QA manager, however that is my job: looking for errors. BUT, if you find something wrong with my work, I want you to tell me it is wrong.
Why are some people so scared to just own up and say I screwed up?

Matt L.
July 19, 2014 1:20 pm

If the four best (forget about the worst) are named, it won’t take a genius PR strategist to parlay that adulation and recognition and morph it into jealousy and strife.
(If there’s one thing I’ve learned this year, it’s that you scientists are a creative, intelligent and punishingly contemptuous lot.)
It could only help the science behind modeling if we had more public dissonance between the various modeler camps. I would like to see the models become accurate. One way to do that is to let them compete — iron sharpening iron and all that.
Will it happen? Nah. There’s no money in it save that which spills from the government’s purse. Climate models are sort of like artists in that way. And everyone knows you can’t judge art.

Jimbo
July 19, 2014 1:21 pm

Steven Mosher says:
“Simple fact is that the avergae of models is a better tool than any given one. deal with it.”

Doesn’t the IPCC go with the central (average) temperature projection or thereabouts? That failed badly, while a couple of the models did come closest to observations. That makes your assertion a bit off the mark. An average can still be wrong; ask the IPCC.

DC Cowboy
Editor
July 19, 2014 1:34 pm

Steven Mosher says: “Simple fact is that the avergae of models is a better tool than any given one. deal with it.”
==============
Please explain how an ‘average’ of unvalidated models is a better ‘tool’ than a simple (unvalidated) guess?

July 19, 2014 1:34 pm

Why are some people so scared to just own up and say I screwed up?
This issue is about politics, domination, and control. It is not about science. Those who want to drive mankind back into a pre-industrial state of being are not going to be forthcoming about their mistakes and errors now are they?
There may be a few honest men and women scientists who have been deluded into thinking that a tiny addition of anthropogenic CO2 into the atmosphere will lead to the destruction of life as we know it — but I really do find that difficult to believe. The evidence is overwhelming that increasing levels of CO2 do not produce warming. The last 17 plus years should be clear enough to any honest person. (and remember that mankind’s portion of the increase in CO2 was tiny) http://hockeyschtick.blogspot.com/2014/07/new-paper-finds-only-375-of-atmospheric.html
However, let us remember what Upton Sinclair once wrote: “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”

DC Cowboy
Editor
July 19, 2014 1:37 pm

Steven,
In reference to my previous comment. Have the current set of Climate Models been validated, in the scientific sense? As far as I know (and of course my knowledge is limited) they have not. Given that being true, what value would we gain from an average of models we don’t know are a valid representation of reality?

July 19, 2014 1:44 pm

Steven Mosher says:

“Simple fact is that the average of models is a better tool than any given one. deal with it.”

That does seem to be true. But is it useful?
I argue No.
Models each represent an opinion of the relative significance of the factors that affect the climate. They are all hoped to be reasonable (no-one includes the effect of the morning star entering the House of Sagittarius); they are all hoped to be sciency.
All models make a judgement call as to what is not sciency. They all share the same bias to include only realistic factors. But they all make mistakes (to err is human) and include factors that are almost or entirely insignificant and undervalue the big ones. Not the same mistakes but still mistakes.
Now read Tolstoy’s Anna Karenina: not the lot, just until “All happy families are alike; each unhappy family is unhappy in its own way.” Only one answer is right; we are one planet.
The potential errors are infinite but the right bits are all there in every model. So we average and end up with the Wisdom of Crowds.
But we Do not have the Knowledge of Crowds as we don’t know which bits are rightest from the aggregate.

Ian W
July 19, 2014 1:51 pm

dccowboy says:
July 19, 2014 at 1:37 pm
Steven,
In reference to my previous comment. Have the current set of Climate Models been validated, in the scientific sense? As far as I know (and of course my knowledge is limited) they have not. Given that being true, what value would we gain from an average of models we don’t know are a valid representation of reality?

Verification and validation testing and publication of the tests and results are something that is not done in academia. At best you see the equivalent of ‘HarryReadMe’ files. This lack of validation extends to the entire realm of climate ‘science’ including NCDC and NASA GISS. Or perhaps someone can point to the suite of validation tests and results that have been published? The models are expensive electronic handwaving as they have not been validation tested, yet the entire world economy is expected to be crippled due to the ‘results’ from these random number generators. Now with ‘the pause’ it is blatantly obvious that the models are junk and do not do what they are claimed to do. Is there any other area of science where continually getting the wrong answer from unvalidated software would obtain funding?

July 19, 2014 1:51 pm

Don’t overlook the conditional language “4 “best” climate models could simulate the spatial patterns.” Why phrase it as “could simulate”? Why not be definite with “simulated”?
Other than that the best model is the one supported by the most grant money. You have to make the customer happy.

Jordan
July 19, 2014 1:56 pm

“Simple fact is that the avergae of models is a better tool than any given one”
Only if the models are unbiased estimators for the variables of interest. As such, the statistical “expected value” of model error for each such variable would be zero.
Demonstration of unbiased estimation would be key to validating part of methodology of this paper and should be mentioned.
Has anybody demonstrated that the GCM’s are unbiased estimators? Doubtful that anybody has if the model temperature forecasts are “running hot”. As such, the expected value of the model temperature estimates would be equal to their bias.
Further, if the models are unbiased estimators, it is not clear why the methodology would select and average only 4 of them. Surely the average of all 18 unbiased estimates would have the smallest standard error: why not use all 18? The selection of 4 makes no sense.
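Jordan’s standard-error point can be checked numerically. A sketch under the assumption (hypothetical, for illustration only) that the models really were unbiased estimators of some true trend: the 18-model mean then beats the 4-model mean on average, so restricting the average to 4 models would need some other justification.

```python
import random

random.seed(1)
TRUTH = 0.2     # hypothetical true value being estimated
SIGMA = 0.1     # spread of each model's error (illustrative)
TRIALS = 20000

def mean_abs_error(n_models):
    """Average |error| of the n-model ensemble mean over many trials,
    assuming each model is an unbiased, independent estimator."""
    total = 0.0
    for _ in range(TRIALS):
        ensemble = [TRUTH + random.gauss(0, SIGMA) for _ in range(n_models)]
        total += abs(sum(ensemble) / n_models - TRUTH)
    return total / TRIALS

e4, e18 = mean_abs_error(4), mean_abs_error(18)
print(e4, e18)  # the 18-model mean has the smaller typical error
```

The standard error of an n-model mean shrinks like 1/sqrt(n), which is exactly why, under the unbiasedness assumption, throwing away 14 of 18 models can only hurt.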

norah4you
July 19, 2014 1:56 pm

NONE of the existing so-called models lives up to the criteria they are said to be written to. Had the so-called scholars known how to write a sound system program, had they used at least 43 of the most important factors, and had they had elementary knowledge of mathematical statistics and geology, they would have been much better off.
Sad to say, they haven’t lived up to their own promises.
I have tried to present Archimedes’ principle here more than once. Still, many people, scholars or strawmen, don’t seem to understand the simple fact that land under melting glaciers rises, and that ice melting in water never results in a rising water level. Well, I guess it’s time to present proof of that:
During the ice-melting period after the last Ice Age, the land rose where the ice over it had melted. The uplift is strictly according to Archimedes’ principle. Too many people haven’t had teachers educated enough in geography, history and/or physics, and thus they haven’t learnt this basic knowledge of our Earth’s history.
While working on my C-essay in History in 1993 (the written D-essay, the so-called Master’s essay, came later), I had to know the water levels of the Baltic Sea as exactly as possible, and thus I had to know the sea levels along ocean coasts around the world. My primary exam was in computer science (originally trained as a systems programmer, 1971). I wrote a program using 43 necessary factors for analysing sea levels, mainly from the Stone Age up to 1000 AD.
First I had to determine the sea level, i.e. the normal water levels along the world’s coasts. To reach as correct an algorithm as possible, I compared genuine actual levels with known deposits, sludge and archaeological reports. The 43 necessary factors taken into account include straits, land rise, erosion, grounds, tectonic plates and their movements, meandering of prehistoric and historic rivers, biotopes (including seeds and weeds found in C-14-analysed layers in coastal areas during excavations), known volcanic eruptions, etc. That is significantly more than the 7 to 9 factors the so-called CO2 scientists usually use in their models.
The Baltic Sea in older ages
Please look at the maps at the bottom of the page. When I put them up in 1993, it was said that they were significant proof of land rise. In today’s CO2 discussion they can be used to disprove the assumption of rising water levels when glaciers and ice in water melt.
That’s only one of many parts of the so-called computer models I might present. I haven’t found any of the so-called models reaching the standard needed to show that the theory of science has been applied at all.

Jeff Alberts
July 19, 2014 2:02 pm

Justthinkin says:
July 19, 2014 at 10:12 am
So we have a shrink,and a history teacher pretending to be climate “scientists”? Just how does one get in on this scam?

One could say the same of McIntyre and McKitrick. A person’s title or background is irrelevant. The paper should stand or fall on its own merits or shortcomings.

schitzree
July 19, 2014 2:02 pm

I wouldn’t have posted this before the end of the embargo. It just leaves you open to criticism for no real benefit. Either they read your e-mail and take steps to check out any problems you point out, or they don’t. If they don’t, then you’ve got something worth writing about AFTER the embargo is lifted.

hum
July 19, 2014 2:10 pm

Mosher, “average of the models” what an ignorant statement. Why not just take all the models code and compile it all together and run a single result. Yeah that will work. You must not know what a GCM is.

July 19, 2014 2:11 pm

As has been noted, different aspects of models can be looked at. However if the four that are deemed “best” are the ones that show the smallest rise in global temperature over the last 18 years, would they also not rule out the C in CAGW? If so, they truly are the “best”.

July 19, 2014 2:21 pm

Werner Brozek;
However if the four that are deemed “best” are the ones that show the smallest rise in global temperature over the last 18 years, would they also not rule out the C in CAGW?
>>>>>>>>>>>>>>>
Since we have no information as to what those specific models say going forward, I wouldn’t make that assumption. In fact, my guess is that this is a one-two punch. Here’s four models that got the pause right… well, over the oceans anyway… skip that whole land thing… and ignore how accurate they were before the pause… just ignore all those factors… and look at what they predict for the future… it’s worse than we thought!

Jordan
July 19, 2014 2:26 pm

Jeff Alberts says: “One could say the same of McIntyre and McKitrick. A person’s title or background is irrelevant. The paper should stand or fall on its own merits or shortcomings.”
It depends on what the authors have contributed to the analysis. If the above is a paper focused on physical climate processes, the question would stand: what are the material contributions of Oreskes and Lewandowsky to the physical analysis? If the answer is “nothing”, it would devalue journal publication as a basis for researchers to assert their credentials.
I’m sure both M&M can give a satisfactory account of their respective contributions to their papers.

Moru H.
July 19, 2014 2:28 pm

I’d probably have paid real money to see Anthony’s face if someone had told him a few days ago that he would write an email to Dr. Oreskes regarding a paper on ENSO/models + the pause™.
You can’t make that $#!^ up.
I wonder if/how the authors have addressed the issues discussed in this paper.

Louis Hooffstetter
July 19, 2014 2:28 pm

Hallelujah for Risbey, et al! I can’t tell you how much I thank God for this paper! Many years ago at a Grateful Dead concert, I had an incredible drug-induced epiphany revealing how particle physics and the time-space continuum could be harnessed to make deep fried Twinkies taste seven orders of magnitude more delicious. I’ve kept this secret to myself for decades, never dreaming I could publish such an idea in a prestigious scientific journal like ‘Nature’. (Truth be told, I’m just a lowly geologist who doesn’t know squat about particle physics, the time-space continuum, or deep fried Twinkies.) But apparently that’s irrelevant. Now that ‘The Journal Nature’ has published Oreskes’ and Lewandowsky’s ENSO hallucinations, they can’t possibly deny publishing mine.

July 19, 2014 2:29 pm

schitzree says:
July 19, 2014 at 2:02 pm
I wouldn’t have posted this before the end of the embargo. It just leaves you open to criticism for no real benefit. Either they read your e-mail and take steps to check out any problems you point out, or they don’t. If they don’t, then you’ve got something worth writing about AFTER the embargo is lifted.

Well, except by posting this, now, we don’t have to take anyone’s word that there was an attempt to discuss the matter before the end of the embargo.

July 19, 2014 2:40 pm

Steven Mosher says:
July 19, 2014 at 12:41 pm
hooofer
Simple fact is that the avergae of models is a better tool than any given one.
deal with it.
================
Do you mean that as a general observation, or is the scope of that remark confined to the 18 climate models in question here?
By “better tool” do you mean more consistent with observations? How do you judge performance? Do you account for differences in inflection points in your measurement?
Is not an average of a bunch of models simply another model? Does that imply that some kind of averaging process internal to a model makes it a better model? How so? Is it always the case that increasing the number of models in the “average” increases the accuracy? Is it a linear improvement or something else?
To make it a “better tool”, do you have to apply weights (non-unit)? How are these weights derived? What kind of average is it? Arithmetic? Geometric? Harmonic?
I’d be interested to know on what theory you base your assertion, because, for the life of me, I can’t see it.
NB: I’m not attempting to debate, as I’m just a dumb kid. I really want to learn.

July 19, 2014 2:47 pm

Steven Mosher says:
July 19, 2014 at 11:28 am
Let’s see.
We know there are 4 best and 4 worst.
It might not be an oversight to not name them.
Hint. Modelers and those who evaluate models generally don’t identify the best versus the worst.
Leads to model wars.
Mine is bigger. No mine is.. Blah blah blah
There are some authors who do name names however.

===========================================================
True, I’m just a layman here but, if the models aren’t identified, then “4 best and 4 worst” is a matter of subjective rather than objective evaluation.
The 4 projections that are closest to observations are the 4 best. The 4 that diverge the most from observations are the 4 worst. That seems pretty simple.
I haven’t read all the comments but has anyone asked just how long ago the models’ projections were made versus the real-time observations?
If I’m shooting a rifle but my aim is off a little bit, I might still get a bulls-eye if the target is only 5 feet away. If it’s a 100 yards away…..?

HAS
July 19, 2014 2:53 pm

Be inclined to include one of Risbey’s bosses at CSIRO in the communications. Unlike the others he earns the Queen’s shilling for doing directed research and is accountable internally and to those funders for the quality of what he produces.

dp
July 19, 2014 2:57 pm

Steven Mosher says:
July 19, 2014 at 10:34 am
Omission is a better word than blunder

I’m stunned you didn’t say “Mannian blunder” or “Phil Jones-like blunder”.

pouncer
July 19, 2014 2:58 pm

Suppose we have a trend line, and we attempt to compare it to a “drunkard’s walk”. We model the drunkard’s walk in three implementations: one with the heads/tails toss of a coin, one with red/black on a roulette wheel, and one with odd/even spots on a thrown die. The points of the “walk” zig-zag up and down: heads red odd, heads black even, tails red even, …
At some point, we stop. We get to choose when to stop. If the model looks close to our target line, we can stop earlier. If not, we can keep modeling…
One of the three models will — very likely –be closer to the target trend than the other two. It’s not likely all three will be close to the trend, or each other. But given the choice to decide which model most closely matches the target, we can identify a winner. (If not, we can keep tossing coins, spinning the wheel, and throwing the dice.)
Now, having modeled a random walk process, and found at least one such model that better matches the measured trend than others, what have we proved about the target trend of interest? Have we in fact provided evidence that the trend IS a drunkard’s (random) walk, or are we at least more sure it’s a random walk now, than before we ran our models?
And does it advance our knowledge of the drunkard’s future path to specify a throw of dice is a better model of the past trend than a toss of a coin?
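pouncer’s thought experiment is easy to simulate. A sketch (my illustration; the three “models” are just three independent random walks standing in for the coin, wheel and dice): over repeated trials, each walk is “closest to the target” about a third of the time, i.e. which model wins is pure luck and tells us nothing about the drunkard’s future path.

```python
import random

random.seed(2)
STEPS, TRIALS = 50, 3000

def walk(step):
    """A symmetric +/-step random walk: the drunkard."""
    pos, path = 0.0, []
    for _ in range(STEPS):
        pos += step if random.random() < 0.5 else -step
        path.append(pos)
    return path

def rmse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

wins = [0, 0, 0]
for _ in range(TRIALS):
    target = walk(1.0)                          # the trend being "explained"
    models = [walk(1.0), walk(1.0), walk(1.0)]  # coin, wheel, dice
    errs = [rmse(m, target) for m in models]
    wins[errs.index(min(errs))] += 1

print(wins)  # roughly equal counts: the "best model" is chance
```

Finding that one random process matched the past better than two others is exactly what we would expect even if none of them captures anything real.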

July 19, 2014 3:07 pm

“dp says:
July 19, 2014 at 2:57 pm
Steven Mosher says:
July 19, 2014 at 10:34 am
Omission is a better word than blunder
I’m stunned you didn’t say “Mannian blunder” or “Phil Jones-like blunder”.
#####################
measured language is better

July 19, 2014 3:12 pm

Lewandowsky is a social psychologist. The behavioral sciences now push the idea that it is beliefs about reality that guide future behavior. This paper is also designed to influence and confirm those beliefs. Very naughty to actually read carefully and peruse those footnotes and discover this omission.
I got last week’s FDEUF award. Footnote Diving and Extraction of Useful Facts Award. Looks like this will be next week’s. Good job.

July 19, 2014 3:12 pm

Do you mean that as a general observation, or is the scope of that remark confined to the 18 climate models in question here?
1. general observation about all the models
By “better tool” do you mean more consistent with observations? How do you judge performance? Do you account for differences in inflection points in your measurement?
1. pick your skill metric.. but more consistent yes.
Is not an average of a bunch of models simply another model?
1. A+ answer
Does that imply that some kind of averaging process internal to a model makes it a better model?
1. no
How so? Is it always the case that increasing the number of models in the “average” increases the accuracy? Is it a linear improvement or something else?
1. Not always the case. I never looked at the improvement stats
To make it a “better tool”, do you have to apply weights (non-unit)? How are these weights derived? What kind of average is it? Arithmetic? Geometric? Harmonic?
1. weights are a big debate. currently no weights
I’d be interested to know on what theory you base your assertion, because, for the life of me, I can’t see it.
1. No theory. pure fact. If you take the mean of the models you get a better fit. why? dunno.
just a fact.
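For what it’s worth, there is a standard statistical reason the multi-model mean scores better on squared-error metrics, and it has nothing to do with the physics being right: by Jensen’s inequality (equivalently, the variance decomposition), the squared error of the mean of several predictions never exceeds the mean of their squared errors. A sketch with purely synthetic “models” (my illustration, not the paper’s method):

```python
import random

random.seed(3)
truth = [random.gauss(0, 1) for _ in range(100)]            # pseudo-observations
# 18 synthetic "models": truth plus a shared bias and individual noise
models = [[t + random.gauss(0.3, 0.8) for t in truth] for _ in range(18)]

def mse(pred):
    return sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)

ens_mean = [sum(col) / len(col) for col in zip(*models)]    # multi-model mean

# (mean_i x_i - t)^2 <= mean_i (x_i - t)^2 holds at every point,
# because averaging cancels the models' mutually independent noise.
avg_individual_mse = sum(mse(m) for m in models) / len(models)
print(mse(ens_mean), avg_individual_mse)
```

So “the mean fits better” is guaranteed by arithmetic whenever model errors are not perfectly correlated; it is not evidence that the ensemble is a physically meaningful object.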

Mark T
July 19, 2014 3:14 pm

Jordan wins the thread. Thanks for pointing out the ignorance of expecting an average to be better “just because” it is an average. I also applaud you noting that if the estimators are all unbiased, then they should all be used in the average. Picking only “the best” implies there was no rigor in the selection process, merely an eyeball match. This is also a tacit admission the models are not unbiased, nor do they constitute an ensemble (which means their average is physically meaningless).
For that matter, how is “best” defined? This word is akin to “optimal,” which is meaningless without context. For example, “best with respect to minimum mean square error” actually sets forth the criteria by which “best” was determined.
Mosher, seriously, invest in a book on statistical signal processing. Then read it. Then ask questions.
Mark

Francois GM
July 19, 2014 3:15 pm

Any model that “predicted” the pause must be insensitive to CO2. Looking forward to finding out which input parameters were used and how much they were weighted.

July 19, 2014 3:16 pm

“Jordan says:
July 19, 2014 at 1:56 pm
“Simple fact is that the avergae of models is a better tool than any given one”
Only if the models are unbiased estimators for the variables of interests.
Not really. in fact they are biased and weirdly averaging them gives you the best answer. just fact.

Michael D
July 19, 2014 3:16 pm

Steiner said Mr. Watts is interested in collecting more brownie points towards sainthood..
Wrong: Anthony achieved climate sainthood long ago.

July 19, 2014 3:23 pm

Musher says (hey, he called me hoofer first!)
I never looked at the improvement stats
Followed by:
If you take the mean of the models you get a better fit. why? dunno.
just a fact.

You’ve never looked at the stats yet consider it a fact? LOL.

Ali Bertarian
July 19, 2014 3:23 pm

I am 5′ 10″ tall, can’t jump, can’t dribble, but I beat my wife at basketball. I am the best b’ball player in this house. Hey Lakers, when can I sign the contract?

mouruanh
July 19, 2014 3:29 pm

Link didn’t show up. this paper was meant.

richardscourtney
July 19, 2014 3:34 pm

Steven Mosher:
At July 19, 2014 at 12:41 pm you say

Simple fact is that the avergae of models is a better tool than any given one.
deal with it.

Simple fact is that average wrong is wrong. Face it and live with it.
Richard

Mark T
July 19, 2014 3:41 pm

Sorry, my stupid tablet seems to think it knows how to auto-correct my block quotes. Here is the correct version:
I notice Mosher avoids the statistical challenges. That is because he knows, deep down, that he is full of sh*t.

No theory.

No kidding. You are quite blind to any theory regarding statistics – that much we can all be sure of.

pure fact.

Of course, without any theory, this phrase is simply nonsense. Let us all just make our own facts and … Wait a minute, we already have enough climate scientists doing just that.

If you take the mean of the models you get a better fit

Except when you don’t. That is almost what Mann does with his reconstructions, hence we have divergence. Furthermore, “better” with respect to what? Eyeball wiggle matching?

why? dunno.

Of course you don’t; you have no idea what you are doing, yet you seem unhindered by that truth when commenting on statistical processing methods (yes, an average is a statistical processing method). Guess what, I bet I DO know why, and it is identical to the reason Mann can find wiggles that match anything he wants in even ordinary tea leaves: spurious relationships.
Mark

Editor
July 19, 2014 3:42 pm

Steven Mosher: “Simple fact is that the [average] of models is a better tool than any given one.“.
Odd that it’s a technique that isn’t used for sunspot cycle prediction, or, AFAIK, for anything else. Generally, the range of predictions is used as an indication of uncertainty, i.e. it is used as … the range of predictions.

Skiphil
July 19, 2014 3:42 pm

The reason it is more likely to be a “blunder” than an “oversight” is that the authors likely did not, and would not, intend to tell the reader the actual 4 best and 4 worst models by this test.
Thus, their position amounts to “trust us” — as we have seen so often in CliSci pseudo-science.
Only the authors can tell us whether the omission is accidental or intentional, although either way it is indefensible. How did the reviewers miss this?? oh right, the paper was given the usual lightweight pal review, it seems.

July 19, 2014 3:43 pm

davidmhoffer says:
July 19, 2014 at 2:21 pm
Since we have no information as to what those specific models say going forward, I wouldn’t make that assumption.
Good point. However check out the following. The best so far are also more or less the lowest in the future.
http://wattsupwiththat.com/2014/02/10/95-of-climate-models-agree-the-observations-must-be-wrong/

Mark T
July 19, 2014 3:47 pm

The point being that while you may be able to find some sort of better fit (whatever that actually means) NOW, unless your estimators are all unbiased (as noted by Jordan), and they constitute an ensemble, any relationship you see NOW, cannot be guaranteed to hold TOMORROW.
This is why there is divergence in the reconstructions Mann keeps shoving down our throats. He is simply too blinded by ideology, or likely, so completely ignorant of the statistics he is employing, that he cannot come to grips with this fact. Phil Plait (another statistical ignoramus) can blather on all he wants about climate statistics and how much climate scientists know about statistics, but at the end of the day, not one of these buffoons really understands the concept of a spurious relationship. And, if they do, they are liars for not saying so.
Mark

Jordan
July 19, 2014 3:47 pm

Robustness tests for the above paper:
> How do the researchers justify selection of 4 models? Why not use only the “best” model?
> Are the conclusions (assertions) sustained as averaging rises from using only the “best” model to averaging over the top-two, top-three, etc and until all 18 are included in the averaging?
> If the conclusions are not robust by the previous test, what proportion of all possible model combinations would confirm the conclusions?
Kate Forney – great comment with excellent questions and testing of reasoning.
Mosh – “general observation about all the models”. Cannot possibly apply to a biased estimator. We absolutely must demonstrate the expected value of model error is zero as a most basic test of its value.
Mosh: ” If you take the mean of the models you get a better fit. why? dunno. just a fact.”. Declaration of faith in the GCMs. Until/unless you can demonstrate the GCMs are unbiased estimators.

Skiphil
July 19, 2014 4:06 pm

A note on terms: I did not mean to imply above that the accidental/intentional distinction is mirrored precisely by the oversight/blunder distinction.
Under the category of “omission” we would often call an accidental omission an “oversight”; however, if the omission is sufficiently serious and/or significant, it can also be a “blunder”.
i.e., a blunder can be accidental or intentional. If the omission is not too serious and/or there is at least a plausible argument for the omission, then it might be termed only an “omission” or “oversight” which are less loaded terms. However, this issue above seems serious enough that it may well deserve to be termed a blunder. More definite judgment waits upon seeing any response and justification the authors may offer.
Of course, with noted non-scientist charlatans like Lewandowsky and Oreskes in the author list, nothing said by the authors can be relied upon.
Don’t trust, only verify or falsify!

Jordan
July 19, 2014 4:10 pm

Mark T: “This is also a tacit admission the models are *not* unbiased”
Yes, with one proviso. Even for unbiased estimators there could be loss of certain signals due to averaging of a set of statistically independent observations of the system.
However I do not see this as justification of the methodology used for this paper. Quite the contrary as follows …
If we understand the system to the extent that we know certain signals could be lost by averaging, we would be able to create a single model which produces those signals.
The researchers’ methodology (collecting different model results and averaging) contains a tacit admission that we do not understand the climate system sufficiently well to support their conclusions.
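Jordan’s signal-loss proviso can be illustrated directly. If each model produces the same internal oscillation (think ENSO-like variability) but with its own random phase, since free-running models need not be synchronised with each other, the oscillation largely cancels in the ensemble mean. A sketch with synthetic series (my illustration):

```python
import math, random

random.seed(4)
N_MODELS, N_T = 18, 240  # 18 "models", 240 monthly time steps

# Each "model" outputs a unit-amplitude oscillation with period 4 years,
# but with its own random phase offset.
t = [i / 12.0 for i in range(N_T)]
runs = []
for _ in range(N_MODELS):
    phase = random.uniform(0, 2 * math.pi)
    runs.append([math.sin(2 * math.pi * x / 4.0 + phase) for x in t])

ensemble_mean = [sum(col) / N_MODELS for col in zip(*runs)]

amp = lambda series: max(abs(v) for v in series)
# Every individual run oscillates with amplitude ~1; the random phases
# largely cancel, so the ensemble mean is much flatter.
print(amp(runs[0]), amp(ensemble_mean))
```

Averaging thus flattens exactly the kind of internally generated variability (like the pause) that the paper is about, which is presumably why an ensemble mean cannot be expected to reproduce it.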

Mark T
July 19, 2014 4:20 pm

Yes, with one proviso. Even for unbiased estimators there could be loss of certain signals due to averaging of a set of statistically independent observations of the system.

I think only if they are not completely capturing the true physics of the system OR if the observation/sample noise is such that it overwhelms the signals you refer to. If they were completely capturing the physics, then all that *should* be left is random error and parameter variation (since it turns into an initial conditions exercise once all the physics are captured properly). I suppose the latter could include spurious cancellations, which seems to be what you are implying…?
I did not think you were justifying the methodology, btw. Quite frankly, none of us really know what it is except that it is based on models that have not had any rigorous verification applied.
Mark

Brute
July 19, 2014 4:21 pm

Oreskes and Lew are political additions to the paper meant to help along in case there were any “bumps” on the review process.

charles nelson
July 19, 2014 4:28 pm

Allowing Steven Mosher to make his confused and confusing comments here is a good thing.
In his opinion, which echoes the opinion of most climate ‘s’cientists, the models do not need to work, i.e. be useful for prediction, nor can they be compared or ranked qualitatively. From the point of view of warmists these are indeed quite useful attributes.

Truthseeker
July 19, 2014 4:32 pm

So, according to Steven Mosher, the best way to find the bullseye on a dart board is to throw a lot of darts at it and see where the most concentrated cluster of darts is.
Most of us would just examine the dart board itself to get the answer …

charles nelson
July 19, 2014 4:33 pm

Steven M. Mosher, B.A. English, Northwestern University (1981); Teaching Assistant, English Department, UCLA (1981-1985); Director of Operations Research/Foreign Military Sales & Marketing, Northrop Corporation [Grumman] (1985-1990); Vice President of Engineering [Simulation], Eidetics International (1990-1993); Director of Marketing, Kubota Graphics Corporation (1993-1994); Vice President of Sales & Marketing, Criterion Software (1994-1995); Vice President of Personal Digital Entertainment, Creative Labs (1995-2006); Vice President of Marketing, Openmoko (2007-2009); Founder and CEO, Qi Hardware Inc. (2009); Marketing Consultant (2010-2012); Vice President of Sales and Marketing, VizzEco Inc. (2010-2011); [Marketing] Advisor, RedZu Online Dating Service (2012-2013); Advisory Board, urSpin (n.d.); Team Member, Berkeley Earth 501C(3) Non-Profit Organization unaffiliated with UC Berkeley (2013-Present)

Editor
July 19, 2014 4:37 pm

I hate embargoed papers.

Editor
July 19, 2014 4:39 pm

And the reason I hate embargoed papers is, I can’t reply to comments or answer questions until tomorrow at 1PM Eastern (US) time.

u.k.(us)
July 19, 2014 4:52 pm

charles nelson says:
July 19, 2014 at 4:33 pm
Steven M. Mosher, B.A. English, Northwestern University (1981); Teaching Assistant, English Department, UCLA (1981-1985); Director of Operations Research/Foreign Military Sales & Marketing, Northrop Corporation [Grumman] (1985-1990); Vice President of Engineering [Simulation], Eidetics International (1990-1993); Director of Marketing, Kubota Graphics Corporation (1993-1994); Vice President of Sales & Marketing, Criterion Software (1994-1995); Vice President of Personal Digital Entertainment, Creative Labs (1995-2006); Vice President of Marketing, Openmoko (2007-2009); Founder and CEO, Qi Hardware Inc. (2009); Marketing Consultant (2010-2012); Vice President of Sales and Marketing, VizzEco Inc. (2010-2011); [Marketing] Advisor, RedZu Online Dating Service (2012-2013); Advisory Board, urSpin (n.d.); Team Member, Berkeley Earth 501C(3) Non-Profit Organization unaffiliated with UC Berkeley (2013-Present)
==============
Yep, and the NSA and IRS didn’t glom on to that comment 🙂

hunter
July 19, 2014 4:53 pm

So now psychologists and historians are writing climate papers on the climate.
lol.

July 19, 2014 4:57 pm

Bob Tisdale: “And the reason I hate embargoed papers is, I can’t reply to comments or answer questions until tomorrow at 1PM Eastern (US) time.”
Well Bob, now that the World Cup is over we have all the time in the world tomorrow to read your comments and answers. 🙂
Of course, at my age I may have forgotten the darn questions by then! 🙁

Crowbar of Daintree
July 19, 2014 5:03 pm

Guys, this is “Climate Science” TM. You need to think inside the box.
What they have obviously done is splice the best parts of the best 4 models to create one modelled result that hides the decline of agreement with real-life observations.

hunter
July 19, 2014 5:14 pm

By the way, the name calling on Steve Mosher is completely low class and uncalled for. Sort of a cringe worthy example of ad hom. And I do disagree with him on issues frequently.
For those posting his CV, I suggest that you re-read it very carefully between the lines for content. We have regular columnists here who are quite bright and even more self-educated. He has played in a highly technical league for a long time. Cryptic and caustic? Can be. Some internet self-declared expert who is actually a kook? No. Some of the pile on in this blog thread is unworthy and is not building skeptical critical skills or credibility.

hunter
July 19, 2014 5:18 pm

Steve,
I do have a question on the models and averaging them:
Is it not true that error tends to multiply, and, as Dr. Pielke, Sr. has pointed out more than once, that the models individually and in ensemble (I paraphrase) show no meaningful predictive skill?
If that is the case, why should this sort of study be done before models are constructed that are in fact useful?

NikFromNYC
July 19, 2014 5:24 pm

There are no real climate models, since there happens to be so little historical climate data to base those models on. Recent variation in the high-emissions postwar era has a near-exact precedent in the low-emissions era before it, yet in the former era the warming is simply unexplained, and the postwar cooling has only hand-waving excuses such as aerosols; yet as that pollution has been reduced we have yet another end of warming, unexplained. If the several major fluctuations in temperature are basically unexplained, with no continuous data going back far enough to enter into computer models, then there is obviously no valid data being used, just modeled input data too!
So what caused the initial decades of warming? And exactly what data series is input into climate models to reproduce it? Chaotic ocean cycles likely have a massive influence, but there is no data for them other than sea surface temperature, which is a result; any model that uses the result as *input* isn’t a model at all, just a faithful mirror of already-known results. Yet note how fundamental criticism is ignored while the focus is put on lawyerly details by model enthusiasts, including the bizarre Frankenstein mixing of models together as if there were any input data to support them. That’s a classic smoke screen meant to get you all upset about post-processing details until the thread peters out in obscurity.
http://www.woodfortrees.org/plot/hadcrut4gl/from:1955/to:2013/plot/hadcrut4gl/from:1895/to:1954

Eugene WR Gallun
July 19, 2014 5:26 pm

WHEN THE STANDARD IS NOT PERFORMANCE.
If the average is best then the climate model nearest the average must be the best model.
So if you are betting on a horse race, averaging the times of all those horses when they last ran a similar race and betting on the horse nearest that average would make you a winner, right?
Eugene WR Gallun

Alcheson
July 19, 2014 5:31 pm

Well, applying Mosher’s logic, it seems that if the climate modelers would just gin up about 500 more models to throw into the mix and average them all together, they should be able to make predictions accurate to about 4 or 5 decimal places. After all, by his reasoning, the more models you average, the more accurate the prediction.

July 19, 2014 5:33 pm

I think what Mosher is really saying is that the LAST thing the climate team wants is for infighting to start amongst the modelers when some models get called junk. It would devastate the claim that the science is settled and WOW… what a field day the skeptics would have.

Jean Parisot
July 19, 2014 5:33 pm

So if you are betting on a horse race, averaging the times of all those horses when they last ran a similar race and betting on the horse nearest that average would make you a winner, right?
Eugene WR Gallun
That works when you’re getting paid to bet other people’s money.

NikFromNYC
July 19, 2014 5:42 pm

Remember too that the biggest slander of all that these model enthusiasts have very much played along with is how:
(A) All climate alarm is based on a highly speculative amplification of the old school greenhouse effect.
(B) Climate model skeptics are said to in the main deny the old school greenhouse effect.
Yet another massive smoke screen operation going on here to this day to pretend that it’s all just basic physics you see, and denial of that basic physics by the usual creationists and tobacco industry shills even though Al Gore is the tobacco farmer and Michael Mann has hired a tobacco industry lawyer and Phil Jones now uses a Saudi Arabian university as his affiliation and RealClimate.org is site registered to the same notorious PR firm that promoted both the breast implant scare and the vaccine scare.

Editor
July 19, 2014 5:52 pm

Crowbar of Daintree says: “Guys, this is “Climate Science” TM. You need to think inside the box.”
Thanks. That made me laugh.

MJW
July 19, 2014 5:55 pm

Steven Mosher seems to have a rather odd understanding of statistics and averaging. Recently on Judith Curry’s site he claimed that if you use a scale which measures weight to the nearest pound to weigh a rock ten times, and the weight shows as 1 four times and 2 the other six times, the “best estimate” of the weight is 1.6 pounds. That’s false and rather silly. It assumes, without justification, that the scale randomly selects between the lower and higher values with probabilities proportional to how close the true weight is to each. By his reasoning, if the rock measures 2 nine times out of ten, the “best estimate” of the weight is 1.9. Assume the scale actually behaves as follows (which is, I’d bet, much more like an actual scale): objects weighing less than 1.49 pounds always show as 1; objects weighing more than 1.51 pounds always show as 2; objects between 1.49 and 1.51 pounds show as either 1 or 2, with probability depending linearly on where the weight falls between those two thresholds. Under that assumption, the weight of any object that gives both 1 and 2 over multiple weighings would be well estimated as 1.5.
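The threshold scale described in the comment above is easy to simulate. A minimal Python sketch (the 1.49/1.51 thresholds and the test weights are the hypothetical numbers from the comment, nothing more) shows why averaging the readings is a poor estimator for such a scale:

```python
import random

random.seed(0)

def threshold_scale(weight):
    """Hypothetical scale: below 1.49 lb it always reads 1, above 1.51 lb
    it always reads 2, and in the narrow 1.49-1.51 lb band the chance of
    reading 2 rises linearly from 0 to 1."""
    if weight < 1.49:
        return 1
    if weight > 1.51:
        return 2
    p_two = (weight - 1.49) / 0.02
    return 2 if random.random() < p_two else 1

def mean_reading(weight, n=10_000):
    """Average of n repeated readings of the same object."""
    return sum(threshold_scale(weight) for _ in range(n)) / n

# A 1.80 lb rock reads 2 every single time, so the average of readings
# is exactly 2.0, not 1.8: the "average the readings" estimator is biased.
assert mean_reading(1.80) == 2.0

# Only objects inside the 1.49-1.51 lb band ever produce mixed readings,
# so ANY mix of 1s and 2s pins the true weight near 1.5 lb, regardless of
# the observed proportion of 1s to 2s.
assert 1.4 < mean_reading(1.50) < 1.6
```

Under this scale model, a 60/40 or 90/10 split of readings carries almost no information beyond "the weight is in the narrow band", which is the comment's point.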

charles nelson
July 19, 2014 5:59 pm

In Jan 2011 over twenty people lost their lives in the Brisbane floods….. this is from the Sydney Morning Herald (a mostly Green/Left publication)
“Releases from Wivenhoe Dam raised water levels in the Brisbane River by up to 10 metres during January’s flood, a panel of independent hydrologists has found.
The hydrology report, commissioned by the Insurance Council of Australia and published yesterday, ruled the Brisbane flood to be a “dam release flood”.”
The Wivenhoe dam was built in 1974 for the purposes of flood mitigation… it couldn’t do its job in 2011 because it was full… why was it full?
Because ‘Climate Models’ and loonies like Tim Flannery predicted that long-term rainfall was in decline and the authorities were hoarding water!!!!
Climate Models are not simply failed academic projects… life-and-death, economic, and policy decisions are being made every day based on their worthless output.

Bill Illis
July 19, 2014 6:04 pm

Just take 18 climate models and program them to project anything from an Ice Age to the Cretaceous Hothouse and all the bases are covered.
And then one can conclude since 1 or 4 models got it right, all of them are accurate or the average of them is accurate (as Mosher concludes).
Sounds a little illogical but that has been done 50 times in climate science already and appears to get to 51 times when this paper is published (in Nature no less which is turning into a prostitute).

mouruanh
July 19, 2014 6:05 pm

Just finished reading an article by J. Risbey where the host of this website gets a personal mention. To Risbey’s credit, he strictly adheres to the use of the c-word, instead of the favorite term of his two prominent co-authors. That’s nice.
But apparently, all the skeptics’ arguments have been refuted. Already in 2010.
The contrarian critique is mostly devoid of new content and lacks the usual quality control procedures that help produce substantive arguments. Their critique has very little implication for understanding of climate change science.
So far it has uncovered a handful of disputed studies and sloppy citations in a vast sea of literature on climate change. The rest of the contrarian critique is, in the main, a mix of old or weak arguments and non-sequiturs that have long been examined or refuted.

The Straw Men of Climatology
When the contrarian du jour tells you about the latest errors in climate science and their radical implications, think about the vision of the science they are selling. It’s not what we do.
“How much for that vision, Mister?”

Jeff Alberts
July 19, 2014 6:08 pm

Jordan says:
July 19, 2014 at 2:26 pm
It depends on what the authors have contributed to the analysis. If the above is a paper focused on physical climate processes, the question would stand: what are the material contributions of Oreskes and Lewandowsky to the physical analysis? If the answer is “nothing”, it would devalue journal publication as a basis for researchers to assert their credentials.
I’m sure both M&M can give a satisfactory account of their respective contributions to their papers.

That’s my point. Folks here are condemning Lew’s and Ore’s roles in the paper without knowing what those roles are. I’m sure there are a few logical fallacies involved.

Mark
July 19, 2014 6:10 pm

When they say the models could simulate ocean temperatures, they can, just not correctly…

Latitude
July 19, 2014 6:13 pm

1. No theory. pure fact. If you take the mean of the models you get a better fit. why? dunno.
just a fact.
=====
because they are all so bad/wrong/worthless………….even the averaged “fit” is so wrong it’s embarrassing

catweazle666
July 19, 2014 6:18 pm

Steven Mosher says: “Simple fact is that the avergae of models is a better tool than any given one. deal with it.”
Strewth!
I hope you never do anything mission critical, like work on bridges or airliners.
Or even mouse cages, come to that.
And then you wonder why climate scientists are rapidly becoming a laughing stock out here in the real world, where we are held accountable for our work.
So YOU deal with THAT.

justsomeguy31167
July 19, 2014 6:23 pm

If this flaw is real, the paper could not have been properly peer reviewed and thus should be pulled immediately. If true, all reviewers should be banned from doing reviews going forward.

Mark T
July 19, 2014 6:37 pm

Actually, hunter, most of the criticism of Mosher is directed towards his absolutely inadequate understanding of statistics, which is particularly vexing given the sway he seems to hold over many in the blogosphere. I agree with him frequently as well, however, his repeated misuse of statistics needs to be emphasized to prevent the spread of further misunderstanding. Finally, for someone that spends so much time preaching scientific principles, it is troubling that he never bothers to actually respond to pointed refutations of his statements.
There is no argumentum ad hominem in that, or do you likewise need instruction on logic?
Mark

hunter
July 19, 2014 6:51 pm

Those dismissing ensemble testing out of hand should consider thinking carefully:
http://www.cfd-online.com/Wiki/Introduction_to_turbulence/Statistical_analysis/Ensemble_average
And William Briggs posted this on ensemble forecasting in 2013 referring to a WUWT post, of all things:
http://wmbriggs.com/blog/?p=8394
And if I had to choose between a Math book used at Stanford and posters here…….
http://www.google.com/webhp?nord=1#nord=1&q=ensemble+averaging+failures
With this excerpt:
“An ensemble average is a convenient theoretical concept since it is directly related
to the probability density functions, which can be generally obtained by the theoretical
analysis of a given physical system.”
Now there are conditions of when and when not to use ensembles, and that is worth exploring. But dismissing the study simply because it is an ensemble is not useful.
Dismissing it because it turns out to be more Lew-style cherry picked garbage dressed up as science is quite another reason.
Let’s see how it turns out.

July 19, 2014 6:51 pm

July 19, 2014 at 12:38 pm | Steven Mosher says:

“Gavin and others have made the same point. Its a known problem cause the democracy of the models.”

July 19, 2014 at 1:02 pm | Harry Passfield says:

“..the democracy of the models?” Say what? The models have a vote???


Harry … this is “democracy” in the socialist vein … like the old German Democratic Republic where you were free to do and say, and vote, as you liked except that you were provided with the approved script. Funny, isn’t it, how the socialists always used the term “democratic” to avert attention from the restrictive intent of the regime.

July 19, 2014 6:54 pm

July 19, 2014 at 6:18 pm | catweazle666 says

Cat, you can appreciate how low the job description “scientist” has fallen … I know politicians with more intelligence.

hunter
July 19, 2014 6:54 pm

Mark T,
Mosher was being renamed “Kosher” and other inflammatory names up thread.
As to his lack of statistical skill, hmmmmm……not sure if I am with you on that one.
He seems to be in alignment with McIntyre more times than not, and I seriously doubt if anyone is going to credibly deconstruct him as a stats lightweight.
And, if you read the links in my post just above you will see that Steve’s assertion on ensembles being useful is accurate, in context.
[Note: I’ve been gone all day, and I think that was accidental, as M and K are near each other on the keyboard, I’ve done similarly stupid fat-fingered things, so I’ve fixed that spelling – Anthony]

July 19, 2014 7:03 pm

Steven Mosher says:
Simple fact is that the avergae of models is a better tool than any given one.
deal with it.
Same with hurricane prediction in some cases.
ROFLMAO! The computer illiterate Mr. Mosher makes more ridiculous comments on subjects he does not understand and has no background in. These sort of comments is what happens when English majors try to understand computer systems without a proper education.

July 19, 2014 7:05 pm

Why the obsession with averages? It’s because they think they can average out chaos. The simple fact is that weather is an instance of climate, weather is chaotic, therefore climate is chaotic, yet climatologists treat it deterministically and they are failing because of it.

July 19, 2014 7:10 pm

[snip – you don’t like Mosher, we get it, no need to put your dislike in bold. Dial it back please – Anthony]

Mark T
July 19, 2014 7:16 pm

Dude, are you incapable of reading? My tablet was auto-correcting so I reposted with errors corrected. Don’t be stupid when you’re pretending to be smart.
Also, regardless of what you may think you know, if the models are a representative sample of the actual physical system they are modeling, they will fulfill the ensemble requirement. In other words, it is necessary, though not sufficient, to show they are an ensemble. If the models are not, then you cannot know whether the mean is located within the space spanned by the models.
Mark

July 19, 2014 7:19 pm

So this comment had to be snipped? Seriously?
Mr. Mosher’s computer illiterate logic, averaging wrong answers is more accurate than a single wrong answer.

Joe Goodacre
July 19, 2014 7:22 pm

Anthony,
Yes, you run a successful blog. Yes, people send you stuff. Yes, there are people within the scientific community who treat you poorly. Why this grandstanding, though?
A prior example: when there were questions regarding temperature adjustments you arrogantly dismissed the claims of Stephen, then got on board and proclaimed to everyone that you would be one of the first to know what their response would be. You weren’t. There are a few recent examples that suggest you might be getting too big for your boots.

July 19, 2014 7:22 pm

If the science were settled there would be only one model and it would be 100% accurate to observations. But this is next to impossible with a chaotic system as complex as the planet Earth.

Mark T
July 19, 2014 7:24 pm

Either way, the more important point being made is what Jordan pointed out regarding bias. Can you honestly defend Mosher’s statement in light of that? If not, why did you not make note of that? Curious…
Others hinted at how that might be a problem with choosing the “best” models, but Jordan was the first to elicit the fact.
Mark

Mark T
July 19, 2014 7:33 pm

Paul Jackson: this particular complaint regarding an average is actually unrelated to the actual content of the signal (other than whether the models are actually representative). In fact, it does not matter if the climate is chaotic, deterministic, or stochastic; IF the models accurately represent the physics of the climate, the average should improve signal to noise ratio.
Mark
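The idealized version of that claim is easy to check: if each model output is the same true signal plus independent zero-mean noise, the N-model mean cuts the RMS error by roughly sqrt(N). A minimal sketch (the "true" value 0.7, the unit noise level, and the 25-model count are all assumed for illustration):

```python
import math
import random

random.seed(1)

TRUE_SIGNAL = 0.7   # hypothetical "true" value the models try to reproduce
N_MODELS = 25
N_TRIALS = 20_000

def rms_error(n_models):
    """RMS error of the n-model mean, when every model output is the
    true signal plus independent zero-mean Gaussian noise (sigma = 1)."""
    total = 0.0
    for _ in range(N_TRIALS):
        mean = sum(TRUE_SIGNAL + random.gauss(0, 1)
                   for _ in range(n_models)) / n_models
        total += (mean - TRUE_SIGNAL) ** 2
    return math.sqrt(total / N_TRIALS)

# With independent zero-mean errors, the 25-model mean is about
# sqrt(25) = 5x more accurate than a single model.
single, ensemble = rms_error(1), rms_error(N_MODELS)
assert ensemble < single / 3
```

The sqrt(N) gain evaporates if the per-model errors are correlated or share a common bias, which is exactly the "representative sample" caveat in the comment above.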

u.k.(us)
July 19, 2014 7:34 pm

Poptech says:
July 19, 2014 at 7:19 pm
So this comment had to be snipped? Seriously?
Mr. Mosher’s computer illiterate logic, averaging wrong answers is more accurate than a single wrong answer.
==================
Do you have the right answer ?

kadaka (KD Knoebel)
July 19, 2014 7:40 pm

From Poptech on July 19, 2014 at 7:03 pm:

ROFLMAO! The computer illiterate Mr. Mosher makes more ridiculous comments on subjects he does not understand and has no background in. These sort of comments is what happens when English majors try to understand computer systems without a proper education.

These sort of comments are what happens when internet arrogant bullies try to post accusations without performing a simple Google search.
From Steven Mosher’s (neglected and abandoned) Blog:

Recent Posts
* Modis QC Bits
* Modis R: Package tutorial
* Terrain effects on SUHI estimates
* Pilot Study: Small Town Land Surface Temperature
* Sample Input Data.

Sure looks like he has significant programming chops right there. Anyone can look and see Mosh is far from “computer illiterate”. And note you are an absolute asshat.

kadaka (KD Knoebel)
July 19, 2014 7:45 pm

Sorry! Didn’t refresh beforehand, didn’t know the comment I replied to was snipped. My fault.

July 19, 2014 7:48 pm

Kadaka, my team and I have reviewed various comments he has made about programming here and at his blog and without a hint of hesitation can say he does not know what he is talking about. He lacks elementary knowledge in basic programming concepts and bullshits himself through the rest.
Those posts mostly relate to himself trying to learn how to program in R for data analysis. None of his code is remotely complex and in various instances amateurish and lacking knowledge in proper methods. But that is what happens when people try to find information using Google and do not comprehend the results.

mouruanh
July 19, 2014 7:53 pm

I’ve finished reading a couple more of Risbey’s articles. Rhetorically, he’s in the same camp as Oreskes and Lewandowsky. The same ol’.
For a moment there I thought this could turn out to be (albeit bizarrely) interesting. Now I feel we won’t learn anything radically new about ENSO from this embargoed paper.
It’s a stunt, or maybe good material for a future case study in collective psychosis. The usual average.

MJW
July 19, 2014 8:01 pm

hunter:

Those dismissing ensemble testing out of hand should consider thinking carefully:
http://www.cfd-online.com/Wiki/Introduction_to_turbulence/Statistical_analysis/Ensemble_average
And William Briggs posted this on ensemble forecasting in 2013 referring to a WUWT post, of all things: . . .

Recall that Mosher said:

Simple fact is that the average of models is a better tool than any given one.
deal with it.

You seem to suggest Briggs supports Mosher’s claim. He doesn’t. Briggs only says the averaging is a sensible thing to do, and the ensemble model may be better than a single model:

There is nothing wrong, statistically or practically, with using “ensemble” forecasts (averages or functions of forecasts as new forecasts). They are often in weather forecasts better than “plain” or lone-model predictions.

(Note the Briggs is speaking of weather forecasts, not long-range climate forecasts, when he says the ensemble forecasts are often better.)

MJW
July 19, 2014 8:09 pm

Twice in my previous comment, “that” came out as “the”: that averaging; that Briggs. I have relatively thin fingers, but for purposes of typing, they’re exceedingly fat.

July 19, 2014 8:12 pm

Averaging an ensemble of independent, identically distributed (IID) random variables (rv) gives a reasonable estimate of the ensemble average under reasonable assumptions. Averaging rv from different ensembles of differing statistical properties is snake oil — especially if the different ensembles haven’t been statistically characterized.
To apply to climate models, replace ensemble with climate model above. I’d be willing to guess that nobody has characterized the statistics of even one of the climate models used for the averaging, much less all of them.
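A minimal sketch of the distinction (the per-model bias values are hypothetical): averaging IID draws centered on the truth converges on the truth, while averaging draws from "models" that each carry a fixed, uncharacterized bias converges on the mean of the biases instead.

```python
import random

random.seed(2)

TRUTH = 0.0
N = 100_000

# Case 1: IID draws around the truth -- the average converges on it.
iid_mean = sum(TRUTH + random.gauss(0, 1) for _ in range(N)) / N

# Case 2: draws from "models" with fixed, uncharacterized biases --
# the average converges on the mean bias (here 1.4), not on the truth.
biases = [0.8, 1.2, 1.5, 2.1]   # hypothetical per-model offsets
biased_mean = sum(random.choice(biases) + random.gauss(0, 1)
                  for _ in range(N)) / N

assert abs(iid_mean - TRUTH) < 0.05
assert abs(biased_mean - sum(biases) / len(biases)) < 0.05
assert biased_mean > 1.0   # nowhere near the truth
```

No amount of further averaging fixes case 2; without characterizing each model's error distribution there is no way to know what the ensemble mean is converging to.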

July 19, 2014 8:12 pm

Four of their cleanest dirty shirts…

Mark T
July 19, 2014 8:19 pm

MJW: my input device self-adjusts for finger size to produce random words unrelated to the context of my sentences. I had to correct Mosher’s name from Mother’s, Kosher, and a few other oddities numerous times. I just power-cycled to get it to forget an auto-insert of “Dolph Lundgren killing machine.” Don’t ask, because I don’t have an answer.
Mark

NikFromNYC
July 19, 2014 8:21 pm

Computer programmers = antisocial = sociopaths = amoral = alarmists = successful = hired.
Skeptics = moral = polite = accommodating = accepting = stereotyped = slandered = struggling = mocked = fired.

Brute
July 19, 2014 8:24 pm

I like Mosher’s hypothesis that one arrives at truth by averaging wrongs. Say, like averaging the comments on this thread leads to verity itself even if every comment is individually wrong.

clipe
July 19, 2014 8:33 pm

If a man says something in a forest and Mosher doesn’t hear him, is he still wrong?

hunter
July 19, 2014 8:34 pm

Poptech,
When I read stuff like your ad hom ignorance regarding Mosher, I wonder if maybe Doug Cotton has morphed into an angry spittle flecked rage phase.
It sure as heck shows you don’t know squat about programming.
Heck, it shows you never even bothered to go to his website, which is handily posted on the right side of this blog page.
Averages of averages do work well, under the right circumstances.
It may be wrong to apply it the way the article this blog post is based on does.
And if Lew and Oreskes are running cover on the article, you can bet it is a bit of deceptive, manipulative, cherry-picked garbage, but attacking Mosher for being computer ignorant puts you, not him, in a bad light.
Do you even know he co-authored one of the few books about the climategate leaks?
To borrow from Dirty Harry, a man has to know his limits.
The point is this: self-declared internet geniuses make those of us who know we don’t know stuff look bad. Not to mention how bad they make themselves look.
Skeptics are winning and can push back the social lunacy of the likes of Oreskes, Lewandowsky, Obama, Gore, etc. into the margins of history with failed manias like eugenics, if we don’t distract with our own stupidity and stunts. Canada, Australia, Germany, Japan and others are moving away in varying degrees from the mindless reaction of climate obsession. Let’s focus on that, and not on whether someone is too inscrutable in his comments.
You want vague and weird?
Go to http://www.solvingtornadoes.org/ and see what a real drooling lunatic faux scientist writes like. His argument style, by the way, is amazingly similar to some big AGW promoters.

Mark T
July 19, 2014 8:38 pm

Nik: technically, I’m a programmer… 😉
Mark

u.k.(us)
July 19, 2014 8:39 pm

Poptech says:
July 19, 2014 at 7:48 pm
“…..He lacks elementary knowledge in basic programming concepts and bullshits himself through the rest.”
================
Please define your term of “bullshit”, lest any meaning of your comment be lost to future generations.

ossqss
July 19, 2014 8:40 pm

So, my take away is that it is OK to produce papers in reference to models and yet keep them anonymous?
Am I missing something?
How many policies have been produced by the same methodology?
Just sayin’.

dp
July 19, 2014 8:49 pm

Steven Mosher says:
July 19, 2014 at 3:07 pm
measured language is better

That must be a new direction for you as just recently in the “Mending Fences” thread you said:

Willis and I have been asking for the same thing and Dr. Evans refuses, in Mannian manner, to refuse the release of the material.

So is it a new direction or selective snark?

lee
July 19, 2014 8:52 pm

Climate models have multiple underlying assumptions. Only one model can be correct. No one has proclaimed a Eureka moment on models. Models with underlying assumptions may be right for the wrong reasons.
An average of incorrect models will be incorrect. Whether it is close to reality or not will depend on model selection. But if the underlying assumptions of the close models are significantly different, they will be close for the wrong reasons.

Peter Newnam
July 19, 2014 8:59 pm

In “The Role of Quantitative Models in Science” – http://classes.soe.ucsc.edu/ams290/Fall2008/Oreskes%202003.pdf – Naomi Oreskes had this to say:
—————-
“Why should we think that the role of models in prediction is obvious? Simply because people do something does not make its value obvious; humans do many worthless and even damaging things. To answer the question of the utility of models for prediction, it may help to step back and think about the role of prediction in science in general. When we do so, we find that our conventional understanding of prediction in science doesn’t work for quantitative models of complex natural systems precisely because they are complex. The very factors that lead us to modeling—the desire to integrate and synthesize large amounts of data in order to understand the interplay of various influences in a system—mitigate against accurate quantitative prediction.
Moreover, successful prediction in science is much less common than most of us think. It has generally been limited to short-duration, repetitive systems, characterized by small numbers of measurable variables. Even then, success has typically been achieved only after adjustments were made based on earlier failed predictions. Predictive success in science, as in other areas of life, usually ends up being a matter of learning from past mistakes.”
—————
And in “Evaluation (not Validation) of Quantitative Models” – http://www.nssl.noaa.gov/users/brooks/public_html/feda/papers/Oreskes2.pdf she identifies deception:
————-
“Why did the world modelers make what is in retrospect such an obvious mistake? One reason is revealed by the post hoc comments of Aurelio Peccei, one of the founders of the Club of Rome. The goal of the world model, Peccei explained in 1977, was to “put a message across,” to build a vehicle to move the hearts and minds of men (59,21). The answer was predetermined by the belief systems of the modelers. They believed that natural resources were being taxed beyond the earth’s capacity and their goal was to alert people to this state of affairs. The result was established before the model was ever built. In their sequel, Beyond the Limits, Meadows et al. (60) explicitly state that their goal is not to pose questions about economic systems, not to use their model in a question-driven framework, but to demonstrate the necessity of social change. “The ideas of limits, sustainability [and] sufficiency,” they write, “are guides to a new world.” (60)
21. Shakley S. Trust in models? The mediating and transformative role of computer models in environmental discourse. In: International Handbook of Environmental Sociology (Redclift M, Woodgate G, eds). (Forthcoming). Cheltnham, UK: Edward Elgar, 1997; 237-260.
59. Peccei A. The Human Quality. Oxford:Pergamon Press, 1977.
60. Meadows DH, Meadows DL, Randers J. Beyond the Limits: Confronting Global Collapse, Envisioning a Sustainable Future. White River Junction, VT:Chelsea Green Publishing Company, 1992.
————-
and so there is no misunderstanding that what she is exposing is not science but at best, noble cause corruption, she continues:
————
One need not engage in an argument for or against social change to see the problem with this kind of approach if applied in a regulatory framework. The purpose of scientific work is not to demonstrate the need for social change (no matter how needed such change may be) but to answer questions about the natural world. The purpose of modeling is to pose and delineate the range of likely answers to “What if?” questions. The purpose of lead models should not be to demonstrate how bad lead ingestion is or how good U.S. EPA standards are but to try to find out what is most likely to happen if given standards are applied. The language of validation undermines this goal. It presupposes an affirmative result and implies that the model is on track. To outsiders, it raises the specter that the answer was pre-established.
———————
So it seems like she has had a change of heart re the value of environmental models for prediction somewhere along the way.

kadaka (KD Knoebel)
July 19, 2014 9:07 pm

From Poptech on July 19, 2014 at 7:48 pm (quotes out of sequence):

Those posts mostly relate to himself trying to learn how to program in R for data analysis. None of his code is remotely complex and in various instances amateurish and lacking knowledge in proper methods.

Like his Beginners Guide: Using MODIS in R which starts: “This tutorial is going to assume that you are a beginner in R and Windows and working with MODIS.”
Or Ten Steps to Building an R package under Windows: “What I’ll try to do on these pages is document that process step by step for the raw beginner.”
You have examined tutorials written for teaching amateurs, and concluded the code is not remotely complex, amateurish, and the tutorials read like he is teaching himself how to program in R, for data analysis which is essentially all that R is used for.

Kadaka, my team and I have reviewed various comments he has made about programming here and at his blog and without a hint of hesitation can say he does not know what he is talking about.

The competency of you and your team at evaluating educational materials is noted.

bobby b
July 19, 2014 9:10 pm

The complicating factor here is that you’re asking for specific choices to be made (in filling out the four spots for “best” and the four spots for “worst”), and you’re asking this of people who seem to rate the actual direct temperature record as being less worthy of regard than their proxy-based models.
Is the “best” choice the one that comes closest to generating a result that matches recent trends, or is it the one that gives the “most obviously correct” response that conforms to the “settled science”?
Definitions. They’ll kill you every time . . .

July 19, 2014 9:15 pm

It sure as heck shows you don’t know squat about programming.

ROFLMAO.
Hunter, you seem massively confused by claiming I never went to his website when I link directly to it in my article. You are also massively ignorant as he co-authored exactly one book on Climategate which was not very good.
It is not possible to look bad stating facts.

July 19, 2014 9:27 pm

kadaka, you obviously know nothing about programming if you think sounding technical means you know what you are talking about. Just as Mr. Mosher has a habit of name-dropping, he also has a habit of dropping technical terms without understanding them. Sorry to break this to you, but you are not going to learn the right way to do anything in R by following his “tutorials”.

Editor
July 19, 2014 9:34 pm

Steven Mosher says:
July 19, 2014 at 3:16 pm

“Jordan says:
July 19, 2014 at 1:56 pm

“Simple fact is that the average of models is a better tool than any given one”

Only if the models are unbiased estimators for the variables of interests.

Not really. in fact they are biased and weirdly averaging them gives you the best answer. just fact.

Mosh, always good to hear from you. I fear I don’t understand this claim. Suppose the correct answer is 7, and the answers of the models give us 1,2,3,4,5. The average is 3.
Let’s assume that this is a monetary model, so the metric in question is the distance of the model answer from the true answer (money gained or lost). It seems to me that the average will do better than two of the models, worse than two of the models, and the same as one of the models.
In other words, the average does no better than picking any model at random.
What am I missing here? This is an example where the average is NOT a better tool than any given model.
Another example. Correct answer is 7. The models give us 6, 6, 6, 6, and 2. If I pick a model at random, I have an 80% chance of losing $1, and a 20% chance of losing $5. Thus, my mathematical expectation of loss is $1 * 0.8 + $5 * 0.2 = $1.80. Again, this is exactly the same as my mathematical expectation of the average, which is 5.2, or a loss of $1.80.
Again, what am I missing here? Once more, the average is NOT better than any given one.
Final example. The answer is 7. The models give 6, 6, 6, 6, and 6, with an average of 6. The average is no better than picking any given model … but you say that the average gives you a BETTER answer than any given model …
So I truly don’t understand the basis of your claim. As near as I can tell, the average of the models just gives you the average of the individual model errors, not less error as you seem to be saying.
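For anyone who wants to check the arithmetic, here is a minimal sketch of the three examples above (the toy numbers from this comment, nothing from the paper). With an absolute-distance loss and all models on the same side of the truth, the loss of the average exactly equals the expected loss of picking a model at random:

```python
# Toy check of the examples above: truth is 7, and the "loss" is the
# absolute distance of an answer from the truth.

def expected_loss_random_pick(models, truth):
    """Expected loss when one model is picked uniformly at random."""
    return sum(abs(m - truth) for m in models) / len(models)

def loss_of_average(models, truth):
    """Loss of the ensemble average."""
    return abs(sum(models) / len(models) - truth)

truth = 7
for models in ([1, 2, 3, 4, 5], [6, 6, 6, 6, 2], [6, 6, 6, 6, 6]):
    print(models,
          expected_loss_random_pick(models, truth),
          loss_of_average(models, truth))
```

The equality in every case is a property of one-sided ensembles: the error cancellation that is the usual argument for averaging requires models scattered on both sides of the truth.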
w.

hunter
July 19, 2014 10:13 pm

Not to speak out of turn, but here are some links for objective ideas about ensembles:
http://en.wikipedia.org/wiki/Ensemble_forecasting
http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=38733
abstract:
“In the course of data modelling, many models could be created. Much work has been done on formulating guidelines for model selection. However, by and large, these guidelines are conservative or too specific. Instead of using general guidelines, models could be selected for a particular task based on statistical tests. When selecting one model, others are discarded. Instead of losing potential sources of information, models could be combined to yield better performance. We review the basics of model selection and combination and discuss their differences. Two examples of opportunistic and principled combinations are presented. The first demonstrates that mediocre quality models could be combined to yield significantly better performance. The latter is the main contribution of the paper; it describes and illustrates a novel heuristic approach called the SG(k-NN) ensemble for the generation of good-quality and diverse models that can even improve excellent quality models.”
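As a toy illustration of the “combine mediocre models” idea in that abstract (entirely made-up numbers, and not the SG(k-NN) method itself): when the members’ errors are unbiased and straddle the truth, the plain average can beat every individual member, because the errors partly cancel:

```python
# Toy illustration (not the SG(k-NN) ensemble from the paper): three
# mediocre predictors of the same quantity whose errors straddle the
# truth. Because the errors roughly cancel, the plain average of the
# predictions is closer to the truth than any single predictor.

truth = 10.0
predictions = [11.0, 9.4, 9.6]      # individual errors: +1.0, -0.6, -0.4

individual_errors = [abs(p - truth) for p in predictions]
ensemble = sum(predictions) / len(predictions)
ensemble_error = abs(ensemble - truth)

print(individual_errors, ensemble_error)  # 1.0 / 0.6 / 0.4 vs ~0 for the mean
```

When every member errs on the same side, as several commenters here argue the climate models do, that cancellation cannot occur.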
As a skeptic, far be it from me to rely on a consensus opinion on something… but I think the quality of tone could be raised. Some are doing it and that is a good thing.

July 19, 2014 10:24 pm

After reading through the snarky rebuttals to rebuttals, and after careful consideration of where we stand vis-à-vis climate models that look like random number generators, an unexplained hiatus in the temperature rise, and cowardly climate scientists (hiding their concerns for fear of landing on some blackball grant list), I am left with only one thought of which I am reasonably certain.
That thought is: the climate modelers and their champions must realize that TIME is their enemy. Time, with its relentless arrow-like flow, is an Occam’s razor scythe of real-world data that will cut down the model projections like straw, along with the professional reputations of the model adherents.
Time will cure the CAGW insanity of the current era.

Niff
July 19, 2014 10:27 pm

I’m just hanging out to see what their concept of ‘best’ and ‘worst’ is….. very scientific, I am sure…

ren
July 19, 2014 10:34 pm

ScienceCasts: Solar Mini-Max.

Raving
July 19, 2014 11:16 pm

Steven Mosher says:
July 19, 2014 at 11:28 am
Let’s see.
We know there are 4 best and 4 worst.
It might not be an oversight to not name them.

If it were figure skating they would throw out the two top and bottom scores :/

Brute
July 19, 2014 11:38 pm

Two wrongs don’t make a right… unless you average them, in which case it depends on who is doing the averaging.

Brandon C
July 20, 2014 12:05 am

Perhaps someone pointed this out earlier, but I guess I will say it again for it needs to be said.
Averaging the model runs does not make them more accurate in this case. Averaging only works when the averaged data is generally split both higher and lower than verification. In the case of climate models vs real world temps, the models DO NOT do this. They all run hot, and the best you can say is that the coldest of them are close to reality. You could use this reasoning 15 years ago when the model mean was not far off observations, but they have been diverging for too long to keep pretending it’s valid statistics.
Therefore, the only effect from averaging is to bring the most extreme failures closer to verification data. But it also draws the closer ones away from the verification data. But the model mean is not more accurate than at least 1/2 the models since they all started higher than the reality line. Seriously, does this need to be pointed out?
The only reason to do the model averaging is keep the most extreme predictions as part of the climate science pantheon. It serves no other purpose when we have real world data that all falls below the models. This is a political choice to keep the highest models funded and available for people to use to keep the highest end of the predicted range higher. 1.5 – 6 or 1.5 to 7, sounds better than 1.5 to 2.5, if your purpose is to convince people that they should be frightened.
Any statistician would know that averaging only works when you can reasonably assume the models are evenly distributed about the actual mean. Since none of the real world data bears this out, it is quite simply baffling to keep defending this. There is no point pretending the averaging of the current models is anything but a political choice to give the most outrageous models a measure of credibility.
To summarize, they are sacrificing the models that are closest to real world data, to prop up the ones that are farthest away. Just another in a long list of questionable things that should be making any scientist more sceptical, not more certain.
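A minimal sketch of that point, with invented trend numbers purely for illustration: when every model errs on the same side, the ensemble mean can only land between the best and worst members, never beat the best:

```python
# Toy numbers (for illustration only): an observed trend of 0.10 C/decade
# and five model trends that all run hot. With a one-sided ensemble like
# this, averaging drags the worst failures toward the observations while
# dragging the best members away from them; the mean's error sits strictly
# between the best and worst individual errors.

observed = 0.10
model_trends = [0.14, 0.18, 0.22, 0.26, 0.30]    # all above observed

errors = [abs(t - observed) for t in model_trends]
mean_trend = sum(model_trends) / len(model_trends)
mean_error = abs(mean_trend - observed)

print(min(errors), mean_error, max(errors))
```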
As far as this paper goes: if they are just going to try to validate models by showing how a few of them got close on one of many parameters, that proves little. If the models don’t closely match most of the variables (preferably all of them), they have still failed. If a model of the cardiovascular system closely models blood flow in the legs, but not in the rest of the body, it is garbage. Again, does this really need to be pointed out?
Lew and Oreskes have made careers of trying to find novel new methods of spin to cheerlead for CAGW. It’s always ‘look at this’, to try to distract from all those problems over there. When something is not given in a paper, it is almost certainly absent for a reason. Carefully censored data and views have become the norm in climate circles, with open, honest science in decline. With both of them together on this, we already know the paper is being prepared to spearhead another media blitz (obviously true given the media material already in the works). Simply put, if a new climate paper is given extra PR and a media blitz, it’s already suspect.
If it turns out to be good science, then I will accept it and absorb it into my views. But it already looks like an obvious spin paper that was supposed to be plastered across the media before anyone got a chance to point out its flaws. And once the internet climate warriors have read a story about it, it will be quoted endlessly forever into the future, and none of them will ever bother to check whether it was challenged or debunked. I routinely see retracted papers thrown out as proofs.
Again, as always, this is a black eye for science. Sceptics are not anti-science; the climate crowd has done far more damage than any sceptic.

Matt L.
July 20, 2014 12:20 am

A real world example of the averages of model projections not doing much to increase their individual validity:
95% of Climate Models Agree: The Observations Must be Wrong
However, the averages do show a warming trend. So from that angle, they match reality.
(In defense of English/Journalism majors, they are some of the most intelligent, creative, loving, caring, intelligent, thoughtful, rational, logical, erudite, autodidactic, well-rounded, understanding, intelligent, passionate, curious and intelligent people on Earth. And many of them are rather intelligent.)

kadaka (KD Knoebel)
July 20, 2014 12:26 am

From Poptech on July 19, 2014 at 9:27 pm:

Sorry to break this to you but you are not going to learn the right way to do anything in R by following his “tutorials”.

Have you ever looked at Mosher’s Linkedin profile?

Scientist
Berkeley Earth Surface Temperature

March 2013 – Present (1 year 5 months) Berkeley California
I am currently writing and maintaining R code devoted to the Berkeley Earth Surface Temperature Project, supporting researchers using our data, and writing papers.
Business Data Specialist
1-800 Radiator

Privately Held; 501-1000 employees; Automotive industry
December 2013 – Present (8 months) Benicia California
Data Science and statistical analysis of sales, cost and failure data.
Data mining CRM data, sales data, and field failure data
Marketing Consultant
Self

June 2009 – December 2013 (4 years 7 months)
Working as an author, R software developer, and marketing consultant.

He’s making a living sifting data while writing and using R. It’s safe to conclude he has a greater proficiency with computers and R than your pride will allow you to admit. Your loss.

Andy_E
July 20, 2014 12:39 am

The conspiracy minded might think the omission deliberate, in the hope that skeptics will criticise the paper’s conclusions, whereupon the authors can say: well, we didn’t name specific models, so your complaining about results when you have no idea how we reached them proves you are all a bunch of conspiracy-minded nutters.
By pointing out their omission you have potentially spoilt all their fun
😉

Steve Jones
July 20, 2014 12:43 am

Sorry for being a bit off topic, but here is what is really happening this side of the pond.
http://www.telegraph.co.uk/news/politics/10978678/Owen-Paterson-Im-proud-of-standing-up-to-the-green-lobby.html
I have no doubt that those of you in the US and elsewhere will have similar examples from your own countries.

David A
July 20, 2014 12:45 am

Sorry to be off topic but I have a question. In my memory I remember the acronym CAGW being commonly used by proponents and skeptics alike. I know there were, and are currently, countless proclamations of catastrophe by the media and scientists.
However, currently the warmists say that CAGW is a term used by the skeptics. They point to the IPCC using the term CC, for Climate Change, since its inception. I know that most scholarly publications most commonly used the term AGW, or GW. Yet I remember many uses of the term CAGW by proponents.
Am I wrong?
Did skeptics create that term?
If you have any linked evidence I would appreciate it.
Clearly the term CAGW is more accurate and pertinent, but I still need the history of the acronym.
Thanks in advance.
David A

kadaka (KD Knoebel)
July 20, 2014 12:53 am

From Matt L. on July 20, 2014 at 12:20 am:

(In defense of English/Journalism majors, they are some of the most intelligent, creative, loving, caring, intelligent, thoughtful, rational, logical, erudite, autodidactic, well-rounded, understanding, intelligent, passionate, curious and intelligent people on Earth. And many of them are rather intelligent.)

And highly qualified upon graduation for specialized employment in the modern job market. They belong to a small subset of career employees suitable for select establishments where they will be repeatedly called upon to correctly inquire if a client would like French fries OR curly fries with that.
It is said a few can also aptly handle steak fries as well and even options like gravy or chili or cheese sauce, but that may require a doctorate.

Chris Schoneveld
July 20, 2014 1:08 am

Actually some of the models with the lowest warming are very close to the actual temperature trend. It would be of interest to analyse those and establish whether they are right for the wrong reasons (assuming that we know – or like to believe – what the right reasons are) or why they are so different from the ones with the higher climate sensitivity.

Angech
July 20, 2014 1:13 am

Anthony, do you understand these to be the best four models in that they show a pause, or are they the best four models in showing a pause that will go away as they predict further into the future?
The best model IPCC wise is the one that assumes full action on climate change with massive carbon dioxide reduction .
If this is the case are they not shooting themselves in the foot?
The worst model is the one that assumes conditions as usual in carbon dioxide production, ie increasing levels with a hockey stick upwards.
Surely they cannot be throwing the most accurate input model out?
It is great that Nature is publishing a paper with Lewandowsky as an author. No one else has so successfully undermined other published papers by his mere presence. When he gets to actually commenting on it, the repercussions will wreck Nature for years.

Angech
July 20, 2014 1:26 am

Can we have a competition please for this article called “guess the Reviewers”
I might win with Gergis, Cook, Turney and the PhD student who reviewed Gergis’s last work.

Clovis Marcus
July 20, 2014 1:34 am

If the models have not been identified to protect the sensitivities of the modellers as suggested they must be a very defensive bunch.
There are better ways of saying it than ‘best’ and ‘worst’, which are subjective and judgemental terms. “Most/least supportive of the arguments posited by this paper” would be more descriptive and protect the sensitivities of the modellers. Perhaps the authors need to engage a wordsmith. I’m normally as cheap as chips but I’d up my rates if I had to try to make sense of this stuff.
Is there enough information for an expert in the field to identify the models without explicitly naming them?
If not, I don’t see how you can use them to objectively support an argument. If you are not going to do an objective correlation with the results you predict in your theory (which would mean exposing the model outputs, allowing them to be identified), all you can say is “I’ve looked at the four models that are most supportive of my theory and they support my theory better than the other 14”.
Science has got itself into a bit of a pickle hasn’t it?

richardscourtney
July 20, 2014 1:36 am

u.k.(us):
In your post at July 19, 2014 at 7:34 pm you ask Poptech concerning GCM performance

Do you have the right answer ?

And, of course, the “right answer” depends on the question asked.
If the question is,
‘Which if any climate models emulate the climate system of the real Earth?’
then the answer is
‘At most only one, and if there is one then which one is not known: all the others emulate a climate system which the Earth does not possess.’
So, averaging climate model results is averaging wrong results.
I again provide the following explanation of this reality.
None of the models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al.
http://www.pnas.org/content/early/2011/07/25/1018526108.full.pdf?with-ds=yes )
would make every climate model provide a mismatch of the global warming it hindcasts and the observed global warming for the twentieth century.
This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on
1.
the assumed degree of forcings resulting from human activity that produce warming
and
2.
the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.
More than a decade ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.
The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which was greater than was observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.
And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
(ref. Courtney RS An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).
More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
(ref. Kiehl JT, Twentieth century climate model response and climate sensitivity. GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.
He says in his paper:

One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 deg C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.
The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy.
Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available at http://www.nature.com/reports/climatechange, 2007) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.

And, importantly, Kiehl’s paper says:

These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.

And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.
Kiehl’s Figure 2 can be seen at
http://img36.imageshack.us/img36/8167/kiehl2007figure2.png
Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:

Figure 2. Total anthropogenic forcing (Wm2) versus aerosol forcing (Wm2) from nine fully coupled climate models and two energy balance models used to simulate the 20th century.

It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m^2 to 2.02 W/m^2
but
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^2 to -0.60 W/m^2.
In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.
So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.
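The compensation Kiehl describes can be sketched with a single linear approximation (toy numbers of my own, chosen to fall roughly inside the ranges quoted above; real GCMs are of course not this simple):

```python
# Illustrative arithmetic only: hindcast warming is approximated here as
#   dT = sensitivity * (anthropogenic_forcing + aerosol_forcing)
# Two "models" with sensitivities differing by a factor of two can both
# reproduce the same 20th-century warming if each uses its own aerosol
# offset -- the compensation described in the quoted passage.

observed_warming = 0.6                       # K over the 20th century (rough)

models = {
    "low-sensitivity model":  {"sensitivity": 0.5, "anthro": 2.0},  # K per W/m^2, W/m^2
    "high-sensitivity model": {"sensitivity": 1.0, "anthro": 2.0},
}

for name, m in models.items():
    # choose the aerosol "fix" that forces agreement with the record
    aerosol = observed_warming / m["sensitivity"] - m["anthro"]
    hindcast = m["sensitivity"] * (m["anthro"] + aerosol)
    print(name, round(aerosol, 2), round(hindcast, 2))
```

Both toy models hindcast 0.6 K, one with an aerosol offset of -0.8 W/m^2 and the other with -1.4 W/m^2, values that sit inside the -1.42 to -0.60 W/m^2 range quoted above, yet they would project very different future warming.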
Richard

July 20, 2014 1:45 am

Any statistician would know that averaging only works when you can reasonably assume the models are evenly distributed about the actual mean. Since none of the real world data bears this out, it is quite simply baffling to keep defending this. There is no point pretending the averaging of the current models is anything but a political choice to give the most outrageous models a measure of credibility.

The idea that the average of all the models would always yield the best answer is one of the most deluded things I have read here in a long time. If that were so, then many people would average the outputs of the thousands of models of the stock market and become wealthy with little effort or risk. I have never read of such a strategy working though.
In fact, I have read that most models of the stock market are “tuned” using historical data and they then work fairly well for a time as long as the stock market’s behavior matches the recent past pretty well. When the market changes, those relying on the model get whacked, or so I have read.
If there is a model of the climate that is correct, we are no closer to building it than we were 30 years ago. That, my friends, is a sad state of affairs.

LewSkannen
July 20, 2014 2:12 am

I suspect this will be another good deed that does not go unpunished…

LewSkannen
July 20, 2014 2:15 am

Mark Storval
I totally agree about averaging models. One of our regular contributors R.G.Batduke wrote an excellent piece a few months back about the absurdity of the practice….. but it continues. Scientific rigour was abandoned a long time ago in this field.

ren
July 20, 2014 2:26 am

The political situation in Europe shows that climate policy is highly detrimental.

lgl
July 20, 2014 2:49 am

Mosher
“The issue is the four worst on this test will be the best on
Some other test”
Right, like the best on 1984-1998?
Is it any better to be best on 1999-2013 than on 1984-1998?

Chris Wright
July 20, 2014 3:12 am

Steve Jones says:
July 20, 2014 at 12:43 am
Sorry for being a bit off topic, but here is what is really happening this side of the pond.
http://www.telegraph.co.uk/news/politics/10978678/Owen-Paterson-Im-proud-of-standing-up-to-the-green-lobby.html
Owen Paterson was probably one of Cameron’s most effective ministers and I’m very sorry to see him go. His piece is excellent and very true.
I stopped voting Conservative a few years ago and one major reason is the way the government is squandering vast sums of money on wind farms that destroy the environment and don’t work most of the time. I’m now proud to be a UKIP voter and I’ll probably never vote Conservative as long as Cameron is leader.
But if Paterson becomes leader there’s a good chance I’d return to the fold.
Chris

July 20, 2014 3:29 am

Simple fact is that the average of models is a better tool than any given one.
deal with it – Mosher
===================================
But if they are all way out, as it seems they are, the average is still useless, isn’t it?

David Chappell
July 20, 2014 3:40 am

Mr Mosher:
does it make sense to average models? probably not. But you get a better answer that way
No you don’t. The average of a pile of excrement is still excrement

Jordan
July 20, 2014 3:41 am

“Not really. in fact they are biased and weirdly averaging them gives you the best answer. just fact.”
It’s a profound assertion, but it runs into a logical contradiction.
If it is true, we’d never throw out old model predictions (regardless of bias and other issues). Any new model results would be inferior, and we could only use them by adding them into the superior “grand ensemble average”. And each addition of inferior results would improve the “grand ensemble average”.
This would need to be confirmed by a robust validation methodology (there is no escaping this requirement).
But the same methodology that confirms one set of results is inferior cannot also confirm that blending it into a superior set of results improves the superior set (both claims would rest on the same methodology and tests).
In other words, the best thing to do with inferior model results is to throw them away if we have superior results to hand.
It appears that the missing ingredient is the rigorous validation of the models. Until we have this, assertions that the average of model results is better than individual results are not supportable.
It leaves the same questions hanging over the above paper: why does adding three “also-rans” improve their analysis compared to just using the “winner”?

Angech
July 20, 2014 3:42 am

My reading of the above comments: if you average models and there is one halfway-right model in there, it will track better on its own than any ensemble of anonymous incorrect models. Still not a very good model, though.

b4llzofsteel
July 20, 2014 4:04 am

“In defense of English/Journalism majors, they are some of the most intelligent, creative, loving, caring, intelligent, thoughtful, rational, logical, erudite, autodidactic, well-rounded, understanding, intelligent, passionate, curious and intelligent people on Earth. And many of them are rather intelligent.”

…and you find these in Mosher??

hunter
July 20, 2014 4:38 am

poptech,
You assert it is not possible to look bad while stating facts, yet you manage to do just that.
And going to your website is to tour an example that supports Willis’ argument against anonymity.
You shred someone while hiding behind your anonymity. You make those of us who support anonymity on the internet look bad. You actually posted Steve’s picture along with a questionable interpretation of his CV. But why stop there? His home address is a “fact”. His car is a “fact”. His kid’s names and pictures are “facts”. Why don’t you do like the climate thugs here in Houston and put on a mask and go stand in front of his house and tell him how bad he is?
You are demonstrating that the climate-obsessed true believers are not the only ones capable of boorish, low-class, extremist behavior.

July 20, 2014 5:04 am

If they don’t name the 4 best then it becomes harder to check their work.

Non Nomen
Reply to  kcrucible
July 20, 2014 7:09 am

kcrucible commented on A courtesy note ahead of publication for Risbey et al. 2014.
>>If they don’t name the 4 best then it becomes harder to check their work<<
_________________________________
It is a matter of belief and climate religion, hence withstanding all checks and logical thinking. And the alarwarmists don't want to be checked, or to have their formidable prejudices destroyed by hard facts: 17 years + 10 months…

July 20, 2014 5:12 am

“It leaves the same questions hanging over the above paper: why does adding three “also-rans” improve their analysis compared to just using the “winner”?”
The obvious answer is that even “the winner” has problems, which are obscured by the outputs of the others.
However, given that they’re not naming the 4 best that they’re citing (how could that possibly have gotten through peer review? That’s not even science, just obvious writing practice), it could be that increasing the number of elements increases the validation complexity… much like password length increases the brute-force hacking time required.

Bruce Cobb
July 20, 2014 5:23 am

I await with bated breath to see how they further convulse and tie themselves in knots trying to explain away the halt in global warming. I expect we’ll see more emphasis on the phrases “climate change” and “unusual weather”, as if the CO2 has somehow (by magic, one can only presume) morphed into those other, undefinable qualities.
Even the “best” climate models have a fundamental, fatal flaw; they simply assume that CO2 is a major driver of climate. They can tweak and fiddle with the knobs until kingdom come, and they will still be totally wrong.

July 20, 2014 5:28 am

Steven Mosher says:
July 19, 2014 at 3:12 pm
Do you mean that as a general observation, or is the scope of that remark confined to the 18 climate models in question here?
1. general observation about all the models
By “better tool” do you mean more consistent with observations? How do you judge performance? Do you account for differences in inflection points in your measurement?
1. pick your skill metric.. but more consistent yes.
Is not an average of a bunch of models simply another model?
1. A+ answer
Does that imply that some kind of averaging process internal to a model makes it a better model?
1. no
How so? Is it always the case that increasing the number of models in the “average” increases the accuracy? Is it a linear improvement or something else?
1. Not always the case. I never looked at the improvement stats
To make it a “better tool”, do you have to apply weights (non-unit)? How are these weights derived? What kind of average is it? Arithmetic? Geometric? Harmonic?
1. weights are a big debate. currently no weights
I’d be interested to know on what theory you base your assertion, because, for the life of me, I can’t see it.
1. No theory. pure fact. If you take the mean of the models you get a better fit. why? dunno.
just a fact.
=================================
You never looked at any data on how much “better” the average is than the individual model prediction, but somehow you just know the average is “better”?
Well, I admit I don’t know very much, but this sounds a little sub-scientific to me. I can somewhat understand someone who has a solid theory of operation being overconfident to the point where they don’t feel they need to look at the data. But you’re telling me you have no theory as to why it works, and you haven’t looked at the data to see if, in fact, it does work. Yet you confidently assert that the average being “better” than a single model is fact. You’re having me on, right?
“I don’t know why that beetle in the matchbox wiggles when it’s about to rain, but it’s a fact…”

July 20, 2014 6:07 am

Mosher writes “The issue is the four worst on this test will be the best on Some other test”
The issue is that none of the models do well on all of the tests, and therefore they can’t be modelling the way the climate actually changes.
Defences like “based on physics” are hilarious when they’re all based on physics yet all get very different results. A fit can be obtained with any series of inputs. That a few of them test well means nothing when the others don’t, and can’t, even at their optimum settings.

jim2
July 20, 2014 6:16 am

Maybe what they meant was they did a BEST splice of output of four different climate models and found correlation with something or another.

July 20, 2014 6:28 am

Mosher also writes a bit later “Not really. in fact they are biased and weirdly averaging them gives you the best answer. just fact.”
Rubbish. Taking the “best” 4 and averaging them apparently gives a better result. That’s another fact from this paper. In fact if you have a bunch of random results and take the “best” of them you’ll always get a better result. And if you average the lot then that average will be better than half of them.

NikFromNYC
July 20, 2014 6:47 am

Climate models must deny century scale chaos or they have no predictive ability.
Yet climate is long term chaotic from first principles of massive century scale ocean fluid dynamics.
Climate models have few real data inputs: merely solar output, which is too steady to matter; the greenhouse effect, which is useless since equivalent warming occurred at the beginning of the global average plot; and pollution, which can’t explain mid-century cooling, since we now have another multidecade pause after the pollution cleared up.
The simplest act of real scientists would be to use the *measured* climate sensitivity to now recalibrate their positive feedbacks into more neutral ones. Then nearly all the models would show the pause as just another bit of noise in a much less warm future.
But where did they get their climate sensitivity via positive water vapor feedback in the first place? They made it up! It’s a constant added to their software.
Just plug in Richard Lindzen’s updated feedback estimate of nearly no positive feedback and you are done doing proper science. Alarm is then called off. Another recent paper estimates feedback as near null as well:
http://www.worldscientific.com/doi/abs/10.1142/S0217979214500957
The alarmists keep lying about how dangerous future warming is locked in due to the physics of the standard greenhouse effect but it’s really their amplification of it instead that adds degrees to it and that amplification is now two decades falsified. They use willful slander to label all skeptics greenhouse effect denying Sky Dragons, and if they are that desperately dishonest, that is quite telling.
What does Mosher’s splitting of hairs here accomplish in the face of that? It distracts from news of the basic falsification of high climate sensitivity. It distracts from the lie of how the mellow and thus beneficial raw greenhouse effect has been turned into Godzilla by a single line in a computer program. It distracts from laypersons finding out that the government and its scientific enablers have become Enron. Don’t let these guys distract you from loudly exposing their refusal to simply empirically downgrade their climate sensitivity now that it is the only rational and moral thing to do.
-=NikFromNYC=-, Ph.D. in carbon chemistry (Columbia/Harvard)

Mark Bofill
July 20, 2014 6:48 am

Bob Tisdale says:

July 19, 2014 at 4:39 pm
And the reason I hate embargoed papers is, I can’t reply to comments or answer questions until tomorrow at 1PM Eastern (US) time.

I’m looking forward to hearing your remarks. I hope the discussion on that thread doesn’t get hijacked by a discussion of how Steven Mosher dresses. :/

July 20, 2014 7:06 am

If I recall correctly &/or as I understand it, “their” fundamental premise behind AGW/C^3 (anthropogenic global warming/cataclysmic climate change) is that prior to industrialized man (i.e. coal-fired power plants) the atmospheric CO2 concentration was in natural balance, sources and sinks in perfect harmony, at 268.36490 ppm by molecular volume in molar carbon equivalents.
The rapid increase in atmospheric CO2 concentrations as measured by the Keeling curve at Mauna Loa (data which must be “adjusted” to account for nearby volcanic outgassing) could only be due to mankind’s industrial activity (CFPPs). The Keeling curve and the global temperature hockey stick were then combined into sufficient coincidence to equal cause for concern.
Now “they” are offering an explanation for the 17-year hiatus in global warming while atmospheric CO2 concentrations continue to climb, zipping past 350 ppm and, several years ago, past 400 ppm at NOAA’s inland tall towers (you never hear about them). The ocean, “they” now admit, is more of a CO2/temperature sink than “they” previously understood. Well, that pretty much trashes “their” fundamental premise. If “they” don’t really understand the sinks, it stands to reason “they” also don’t understand the sources. IPCC AR5 pretty much admits the same in TS.6, Key Uncertainties.
The Keeling curve atmospheric CO2 concentrations and industrialized mankind’s contributions (CFPPS) when considered on a geologic time scale (at least 10,000 years) are completely lost in the data cloud of natural variations.
No melting ice caps, no rising sea levels, no extreme weather, no rising temperature. “They” were, are, and continue to be wrong. Get over it, the sooner the better.

RokShox
July 20, 2014 7:12 am

Mosher writes “The issue is the four worst on this test will be the best on Some other test”
Here we have 18 climate models.
These 4 here reproduce the pause, but show accelerated CAGW in the future.
What criteria can we come up with, post hoc, to justify calling these 4 models the “best”?
OK, write it up.

Editor
July 20, 2014 7:23 am

Steve Mosher, so far your name appears 77 times on this thread, and looking through the comments, it doesn’t appear that many persons agree with you, rightly or wrongly.
Note to the others: If I may suggest, please drop the ad homs with respect to Steve. You’re not adding anything relevant to the discussion.

Bill_W
July 20, 2014 7:27 am

The reason the average of many runs of a single model is better is that the individual model runs are all over the place, so the odd excursions cancel out. A possible reason averaging multiple runs from multiple models MAY give you better answers for some questions is that since the models all have some differences (else they would not be different models), some may capture some effects while others capture different effects. For many projections, the averaged models do not give very good results (IMO).

There has been some discussion of “throwing out” the worst-performing, most highly warming models, but this has not occurred yet. What “democracy of the models” means, IMO, is that no one wants to put themselves on the record criticizing anyone else’s model. Eventually, people may realize their model is too far off and begin to change it, and of course will get more publications from doing so. In many fields other than climate, scientists would be more critical and more open about which models performed poorly.

If it turns out to be true that this paper does not “name names”, then that would be a sad statement about the state of climate science. It reminds me of the Harry Potter novels and “He Who Must Not Be Named”, and with the same implications: people are scared of offending the powerful and connected. But rather than fearing the “Avada Kedavra” curse, they fear losing grant funding and the scientific ostracization and harassment so recently experienced by Dr. Lennart Bengtsson.

Editor
July 20, 2014 7:38 am

Kate Forney, sounds like you’re new here. Welcome. With respect to model outputs, you wrote, “You never looked at any data on how much “better” the average is than the individual model prediction…”
Not to be nitpicky, but the outputs of climate models are not data. Full definition (1) from Merriam-Webster’s:
“factual information (as measurements or statistics) used as a basis for reasoning, discussion, or calculation <the data is plentiful and easily available>”
http://www.merriam-webster.com/dictionary/data
Climate model outputs are definitely not "factual information."
More generally, it's best not to use the term data when talking about climate model outputs so that readers can differentiate between observations (data) and model outputs (computer-aided conjecture).

July 20, 2014 7:51 am

Dear Watts et al:
Your criticism appears to beg the question, because you assume, as they assert, that the models are meaningful. Suppose they told you, with supporting evidence, which models are best/worst; would this improve the paper?
The right answer is No, because the models are constructed to hindcast, and the data used to calibrate them is highly suspect. Look inside one of these things and what you find is some 1960s Fortran and a great many (thousands, in the one I took apart) encrusted adjustments designed to add or modify the model’s behavior – all of it parametrized to fit some data set.
Unfortunately the data is suspect – I am now quite sure that there may be a decline, but there is no pause: early data has been adjusted downward, later data upward – and that limits the predictive power of these models to coincidental concordance arising from the narrowness of the predictive band.

Editor
July 20, 2014 7:55 am

Paul Murphy says: “Your criticism appears to beg the question, because you assume, as they assert, that the models are meaningful. Suppose they told you, with supporting evidence, which models are best/worst; would this improve the paper?”
The findings of their paper can not be reproduced unless the models they selected are known.

July 20, 2014 7:56 am

Why is Mosher given free rein to troll in the comments? Because that’s all he ever contributes here.
[Because he contributes and doesn’t contravene the site rules … mod]

July 20, 2014 8:13 am

Bob Tisdale says:
July 20, 2014 at 7:38 am
========================
Thank you Bob. I’ll bear that in mind.
How better might I have phrased the question, the point of which was to interrogate Mr. Mosher regarding how he could know an “average of the models” was more informative than any single model?
He admits he hasn’t looked at any performance measures, nor does he have any plausible theory as to how his assertion could be true, so I can’t comprehend the basis for his confidence.

RACookPE1978
Editor
July 20, 2014 8:24 am

Bill_W says:
July 20, 2014 at 7:27 am
The reason the average of many runs of a single model is better is that the individual model runs are all over the place and so the odd excursions cancel out. A possible reason averaging multiple runs from multiple models MAY give you better answers for some questions is that since the models all have some differences (else they would not be different models), some may capture some effects while others capture different effects. For many projections, the averaged models do not give very good results (IMO). There has been some discussion of “throwing out” the worst performing, most highly warming models, but this has not occurred yet. What “democracy of the models” means IMO is that no one wants to put themselves on the record criticizing anyone else’s model.

Thank you for the pleasure of your replies.
Now, let me reverse your “averages are more accurate” summary – though I know the total answer is more than just that.
We have “one list of data” – that of temperatures recorded to various degrees of accuracy at very specific locations over the past years, and a much longer set of proxy temperatures of varying degrees of accuracy (inaccurate temperatures, and inaccurate dates for each inaccurate temperature) over a much longer period of time.
Now, has ANY single run of ANY model at ANY time reproduced today’s actual record of temperatures over the past 150 years of measured temperature data across the continental US?
The past 100 years across India?
The past 250 years of measured temperature data across the northeast US and Canada?
The past 350 years of measured data across central England?
That is, has any climate model at any time actually reproduced any temperature record at specific regions over a long period of time?
Supposedly, a “climate model” duplicates the earth’s “average” climate by numerically breaking the earth up into zones for boundary-value “exchanges” of each box with the boxes above, below, and right-left-north-south of it. The results are then grouped together to define that date-time-group’s “average” total earth anomaly; then everything is reset, and everything is run again.
So => ALL “boxes” are known; therefore, you can get a list of temperatures for any length of time for any region on earth. Each computer run is a unique calculation, so you can’t pretend that the results of the tens of thousands of model runs on each of the 18 or 21 or 23 climate models are “not available”.
Has any model actually worked over any lengthy period of time – outside of the “forced” programming times of varying input forcings (deliberately modifying cloud, solar, particles, etc) designed to yield results that mimic the temperature record?
Now, separately, Paul Murphy very correctly adds a critique similar to mine:
July 20, 2014 at 7:51 am
Dear Watts et al:

Your criticisism appears to be beg the question because you assume, as they assert, that the models are meaningful. Suppose they tell you, with supporting evidence, which models are best/worst, would this improve the paper?
The right answer is No, because the models are constructed to hindcast and the data used to calibrate them is highly suspect. Look inside one of these things and what you find is some 1960s fortran and a great many (thousands in the one I took apart) of encrusted adjustments designed to add or modify the model’s behavior – and all of it parametrized to fit some data set.

That is, if the model is “calibrated” by artificially changing past forcings so past calculated temperatures are “correct” and “do” match the temperature record,
… (2) is the temperature record they are trying to match actually corrected, or actually corrupted, by your fellow bureaucrats’ constant work as they change the past recorded temperatures?
… (3) Do the model runs (even with artificially padded and subtracted forcings) duplicate the past temperature records over long periods of time? Or are they really nothing more than “if this year is 1915, then the average global temperature = 24.5 degrees after the model run”?
2. After a 15 year run, what is the actual result of a single model run?
Show us the winds, temperatures, humidities, aerosols, the box-by-box sizes and shapes, ice coverage, cloud coverage, and the hourly pressures and temperatures after “32 years of model run 07-16-2014” … All that is ever reported is a final temperature difference at a mythical date in a mythical future free of future changes except CO2 levels.

kadaka (KD Knoebel)
July 20, 2014 8:25 am

From Bob Tisdale on July 20, 2014 at 7:23 am:

Steve Mosher, so far your name appears 77 times on this thread, and looking through the comments, it doesn’t appear that many persons agree with you, rightly or wrongly.

FWIW, I’ve been defending the person against libel, not agreeing with what he said which I was only peripherally aware of from other comments.
How much of an internet arrogant bully and elitist snob must one be to call Mosher computer illiterate? That’s like saying someone who regularly converses and corresponds in English is illiterate because they lack an English degree. It should be pretty clear that having said degree ain’t no guarantee you can always speak English good.

July 20, 2014 8:35 am

Bob Tisdale says:
July 20, 2014 at 7:38 am
Kate Forney, sounds like you’re new here. Welcome. With respect to model outputs, you wrote, “You never looked at any data on how much “better” the average is than the individual model prediction…”
Not to be nitpicky, but the outputs of climate models are not data.

Full definition (1) from Merriam-Webster’s:
“factual information (as measurements or statistics) used as a basis for reasoning, discussion, or calculation <the data is plentiful and easily available>”
http://www.merriam-webster.com/dictionary/data

Climate model outputs are definitely not “factual information.”
More generally, it’s best not to use the term data when talking about climate model outputs so that readers can differentiate between observations (data) and model outputs (computer-aided conjecture).

I could not agree more with the above post/comment by Bob Tisdale. Well put.
However, the “data sets” put out by the government agencies are now so “adjusted” by incompetence, bias, half-assed computer algorithm, “in-filling”, zombie stations, and so on that I don’t think the word “data” fits there either.
For just one example, this very morning I read: “TOBS Update: Something Seriously Wrong At USHCN” http://stevengoddard.wordpress.com/2014/07/20/something-seriously-wrong-at-ushcn/
We need a good word for that stuff that should be data but is not data.

ren
July 20, 2014 8:42 am

Here you have the effect of increased GCR ionization: the blockade of the vortex at the southern magnetic pole is stronger.
http://www.cpc.ncep.noaa.gov/products/intraseasonal/temp50anim.gif
http://arctic.atmos.uiuc.edu/cryosphere/antarctic.sea.ice.interactive.html

RACookPE1978
Editor
July 20, 2014 8:42 am

weather4trading says:
July 20, 2014 at 7:56 am (complaining/commenting about Mosher)
Why is Mosher given free rein to troll in the comments? Because that’s all he ever contributes here.

And the mod’s reply

[Because he contributes and doesn’t contravene the site rules.. . mod]

Even more important, no one can learn or expand past their own mind and their own prejudged conclusions UNLESS they are exposed to logical criticism and comment from a person who does not share their opinion. (Note: I did not say “correct” criticism and I did not say “correct” conclusions…) If I only wanted to hear things I agreed with, I would speak loudly and passionately in an empty room.

July 20, 2014 8:52 am

“How better might I have phrased the question, the point of which was to interrogate Mr. Mosher regarding how he could know an “average of the models” was more informative than any single model?”
simple.
1. read the literature
2. compare all the models to observations
3. compare the average of all models.
let’s see
http://berkeleyearth.org/graphics/model-performance-against-berkeley-earth-data-set
it’s pretty simple. you can use any performance metric you like.
here is what you see.
1. models that score well on one metric score poorly on others.
2. the average of all models wins.
It really isn’t that hard.
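For readers who want to see the statistical effect being claimed here, the following is a minimal synthetic sketch – not CMIP output, just made-up series with hypothetical noise levels – of why an ensemble mean can score a lower RMSE than every individual member when the members’ errors are independent:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_models = 100, 18

# synthetic "observations": a small trend plus natural variability
truth = 0.01 * np.arange(n_years) + 0.1 * rng.standard_normal(n_years)

# each synthetic "model" tracks the truth but adds its own independent error
models = truth + 0.3 * rng.standard_normal((n_models, n_years))

def rmse(x):
    return np.sqrt(np.mean((x - truth) ** 2))

member_rmse = [rmse(m) for m in models]
ensemble_rmse = rmse(models.mean(axis=0))

# independent errors shrink by roughly 1/sqrt(18) in the mean,
# so the ensemble beats even the luckiest single member
print(f"best member RMSE: {min(member_rmse):.3f}")
print(f"ensemble RMSE:    {ensemble_rmse:.3f}")
```

The caveat, raised repeatedly in this thread, is that the trick only works on the independent part of the error; any bias shared by all the members survives the averaging untouched.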

RACookPE1978
Editor
July 20, 2014 8:57 am

Angech says:
July 20, 2014 at 3:42 am
My reading of the above comments is: if you average models and there is one halfway-right model in there, it will track better than any ensemble of anonymous incorrect models. Still not a very good model, but?

No.
If you average different models together, you HIDE the one (?) good model with garbage from the 3, 4, or 21 “bad” models. Sometimes. And sometimes you “hide” that one “almost good enough” model’s errors with garbage from the rest.
To exaggerate: 2 + 2 = 2 × 2 = 2² = 4, so several different “models” agree perfectly on the one case you happen to test, yet each gives a different answer for every other input. Each “model” is “wrong” under different initial conditions.

July 20, 2014 8:58 am

now, go do the work
start with the literature.
http://journals.ametsoc.org/doi/pdf/10.1175/2011JCLI3873.1

July 20, 2014 9:06 am

“You never looked at any data on how much “better” the average is than the individual model prediction, but somehow you just know the average is “better”?
Yes. it’s pretty simple.
Noting that the average is better and CALCULATING how much better are two different things.
basically the work we did looking at the issue confirmed what has already been published.
so, nothing too interesting there.
Still, there might be some interesting work to be done. folks here can get the data and see for themselves. It’s an active area of research, so you have to pick the metrics you want to look at,
and then pick a performance or skill metric. RMSE is a good start, but there are others.
when you find the model that outperforms all others and the mean of all the models, then publish.
or.. you can avoid reading the literature, avoid looking at data. That works for blogs.

Matt
July 20, 2014 9:08 am

@Truthseeker
Regarding your dart board analogy, it seems that looking at the actual board to see where the bull’s-eye is translates to checking what the PRESENT temperature is. Guess what, I do that every day. The purpose of the exercise is to learn something about the FUTURE, though, and looking at the actual board does not help in that case, now does it?

kadaka (KD Knoebel)
July 20, 2014 9:08 am

From Kate Forney on July 20, 2014 at 8:13 am:

How better might I have phrased the question, the point of which was to interrogate Mr. Mosher regarding how he could know an “average of the models” was more informative than any single model?

It’s a common fallacy about accuracy that nevertheless often works out. All the models are aiming at the same target, so if you average all the hits together you’ll be close to the bullseye.
But the models have a high degree of inbreeding, built on shared concepts that are incomplete, inaccurate, and possibly flat-out wrong. It’s as if there were a common school of thought in gunsmithing that the front sight of a rifle should be mounted several hundredths of an inch to the right of the barrel axis while the rear sight sits directly over it. From there it doesn’t matter how many different rifles you use or how close together the holes are (how precise): the average of the holes will still be to the left of the bullseye (will lack accuracy).
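The rifle-sight analogy above is the standard bias/variance point. A tiny synthetic sketch (the offset and scatter numbers are invented for illustration, not taken from any real model) makes it concrete: averaging cancels independent scatter but leaves a shared bias untouched.

```python
import numpy as np

rng = np.random.default_rng(1)
bullseye = 0.0
shared_bias = 0.5   # the mis-mounted front sight every "rifle" shares
n_shots = 1000

# each shot = target + common offset + independent scatter
shots = bullseye + shared_bias + 0.2 * rng.standard_normal(n_shots)

avg = shots.mean()
# the average of many shots is very precise (tiny spread) ...
print(f"spread of the average: ~{0.2 / np.sqrt(n_shots):.4f}")
# ... but still about 0.5 off the bullseye (no gain in accuracy)
print(f"average miss: {avg - bullseye:.3f}")
```

More shots (or more rifles) shrink the scatter term without limit, but the 0.5 offset never budges; that is the precise sense in which an ensemble mean cannot fix an error the whole ensemble shares.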

Jim Cripwell
July 20, 2014 9:14 am

You have to forgive Steven Mosher. He thinks that there is no categorical difference between an estimate and a measurement.

Bruce Cobb
July 20, 2014 9:20 am

Mark Stoval (@MarkStoval) says:
July 20, 2014 at 8:35 am
We need a good word for that stuff that should be data but is not data.
“Doodoo ” comes to mind.

July 20, 2014 9:28 am

Steven Mosher says:

2. the average of all models wins.

What is the average of these models?

Roy UK
July 20, 2014 9:42 am

dbstealey poses the best question I have seen. So I wait for the answer from Steven Mosher.
(BTW the mean of those models seem to be running hot to me!)

Admin
July 20, 2014 9:50 am

When he said “the average of all models wins.” I think Mosher meant funding, not the goodness of fit with reality.

July 20, 2014 9:52 am

My apologies, I didn’t read it that way at first.

Harry Passfield
July 20, 2014 9:56 am

Surely, the average of the models is as accurate as the watch that has stopped: It is spot on twice a day.

NikFromNYC
July 20, 2014 9:59 am

Mosher here helps point out quite strongly that the models only match well against his own outlier global average temperature data set, which fails to show any pause in warming at all. This is important since these same models fail when the much more comprehensive Space Age satellite data is used in place of the rickety old thermometer record. The two independent satellite products falsify his result, as do the oldest continuous thermometer records, which indicate recent warming forming not a hockey stick but fuzzy toothpicks, in utter defiance of claims of a super water-vapor-enhanced greenhouse effect:
http://s6.postimg.org/uv8srv94h/id_AOo_E.gif
There is simply no trend change in the bulk of the oldest records. Nor is there any trend change in similarly linear tide gauge records in which the full volume of the oceans acts as a liquid expansion thermometer. There is only a sudden upturn in his own and to a lesser extent Jim Hansen’s product that also only uses satellites to estimate urban heating while ignoring NASA satellites for direct temperature readings. All the while Hansen’s replacement Gavin Schmidt publishes a rationale for the pause as being just a crazy coincidence of little factors adding up, a publication that admits to the pause that falsifies BEST.
Mosher’s skyward plot:
http://static.berkeleyearth.org/graphics/figure9.pdf
Note strongly how his product also nearly erases the global cooling that led to a new ice age scare which would have been impossible with such a lack of mid-century cooling as his product claims. Note also that no plots have ever been offered despite years of requests of his algorithm toned down to not slice and dice so much, so ridiculously much, but only for truly abrupt step changes so we have no idea how sensitive to parameterization his black box is.
These guys are just shamefully tweaking parameters and adjustments and rationales towards an alarmist result rather than simply accepting a lower climate sensitivity in objective fashion. That Mosher’s boss at BEST was exposed as a brazen liar about being a newly converted skeptic means he has been exposed as being a dishonest man. So we know that only the temperature product of an unapologetic liar matches climate models. This fact alone now falsifies those models.

kadaka (KD Knoebel)
July 20, 2014 10:01 am

Jim Cripwell said on July 20, 2014 at 9:14 am:

you have to forgive steven mosher. he thinks that there is no categorical difference between an estimate and a measurement.

But the temperature numbers we get from the satellites are not measurements; they come from taking measurements of other things and running them through models that use assumptions (best known values, i.e. educated guesses) to generate estimates we normally refer to as data (aka measurements) – starting from the optical sensors of the observing instrument, etc.

July 20, 2014 10:02 am

10:01 AM. Where is it?

July 20, 2014 10:05 am

As someone barely and tangentially related to anything scientific, it would seem to me that the average of many piles of garbage would still be, indeed, more garbage. Even the “best” 4 piles of garbage. And driving ahead at breakneck speed while looking out the rear of the car will never be a good idea.

July 20, 2014 10:13 am

Steven Mosher;
Yes. its pretty simple.
Noting that the average is better and CALCULATING how much better are two different things.
basically the work we did looking at the issue confirmed what has already been published.
so, nothing too interesting there.
>>>>>>>>>>>>>>>>>>
You are, in the end, fooling yourself. You’ve taken a bunch of models and averaged them, and noted that they get closer to observations as a consequence. As you yourself noted, no single model is correct. We can only assume, then, that all the models are wrong and that at this point in time the errors in each of the models offset each other to some extent. Since we know that the models are wrong, and for differing reasons, we have no way of knowing if averaging them will bring them closer to future observations, or farther away.
Averaging the output of models that are known to be incorrect is simply indefensible, and it matters not in the least that for the tiny portion of the earth’s history for which we have instrumental data, doing so brings results closer in line with observations. This doesn’t even aspire to “correlation does not equal causation”. It is even less scientific than that. That an average of a bunch of things known to be wrong happens to correlate, for a short period of time, with recent observations does not, I repeat NOT, make it a useful predictor of the future.
If I make predictions from chicken entrails and the average of my forecasts correctly predicts tomorrow’s weather, then I think you would agree that all I am presenting is a coincidence. That’s all you are presenting. It has no basis in science, no matter how many metrics you surround it with.

Go Home
July 20, 2014 10:20 am

While the missing model identification is glaring, it does not in its own right dismiss the results. That said, do they recommend to their climate-science pals that all the other models be eliminated going forward, now that they are calling them failures? You knew they needed to find an answer to the pause. We will see if it holds up in the court of public opinion. Go get ’em, guys and gals.

Chuck Nolan
July 20, 2014 10:21 am

I tried that averaging thing at the horse track.
In 100 races the average winning number was horse #4.235, so I rounded it off and bet on horse #4 in every race the next time I went to the track.
I lost.
What went wrong?
I had good data.

Tom J
July 20, 2014 10:25 am

Steven Mosher
July 19, 2014 at 11:28 am
says:
‘Let’s see.
We know there are 4 best and 4 worst.
It might not be an oversight to not name them.
Hint. Modelers and those who evaluate models generally don’t identify the best versus the worst.’
I figure if I don’t ask a stupid question at least once a month I’ll ruin my reputation, so here goes: Couldn’t the worst models be considered comparable to a control group in an evaluation of the validity of the conclusions reached through an analysis of what are considered the best models? Similar to the evaluation of, say, a pharmaceutical where a control group is not given the drug in question so as to determine the effectiveness of this drug in those to whom it is administered, wouldn’t the worst models function in a similar manner to the aforementioned pharmaceutical control group? Would it be a mere omission not to include them? And, should that not be standard practice?
(P.S. Judging from many of the replies you receive from your comments I’ve come to the conclusion, Mr. Steven Mosher, that you must have a thick skin. An admirable quality. I salute you.)

SIGINT EX
July 20, 2014 10:33 am

The anointed hour has come.
Checked the nature web site and preview abstract and reduced figures, reference and all.
So reassuring that Nature is printed on recyclable paper.
The modes of the distributions; interesting.
The “observations” per se I would not call observations, given what has gone into them, including various adjustments. Engineering bias (instrument drift and measurement offset) is one thing, but fudging (CRU and Hansen, for instance) to arrive at a preferred result is another.
Oh well. The authors paid the publishing fee and Nature accepted with glee.

July 20, 2014 10:53 am

Published, serious science must be replicable. Unless we are told which models were used, and which were good, and which were poor, there can be no replication. So there is no science in the latest Lew paper.

Alec aka Daffy Duck
July 20, 2014 11:02 am

Read the news story in the Sydney Morning Herald; because it does not mention any specific model, the story sounds lame…
http://m.smh.com.au/environment/climate-change/climate-models-on-the-mark-australianled-research-finds-20140720-zuuoe.html

Tilo
July 20, 2014 11:03 am

If we have models with large decadal oscillations, and if some of those models have their down oscillations synced with the current flat trend, then those models will look like the “best” models. And if we then claim that those particular models are the ones that are best at modeling ENSO, we can claim that they are the most skilled models and that the most skilled models both agree with the surface temperature trend and future warming predictions.
But since we do not know which models are being called the best, we can’t know if their oscillations are just coincidentally in sync or if they actually seem to know how to model ENSO correctly. It’s possible that the four best models were not named because closer inspection could show that their having a better approximation of the current flat trend has more to do with initial conditions and built in oscillations than with any real ability to make ENSO predictions.
Finally, the idea that an average of many models is better than any single model based on current performance is an irrational approach. Without a good physical explanation of why this should be the case, we are only talking about a coincidence.

July 20, 2014 11:13 am

This discussion is a good example of the house of cards metaphor. You all presume you can pull one card out and the whole AGW premise crashes down. You really underestimate the intelligence of a lot of smart people and overestimate your own. I look forward to the actual article to see how relevant this one card you all are obsessing on is to your anti AGW premise.
Your tactics are right out of the political playbook, where one tries to define one’s opponent negatively before they get to define themselves. Regarding Naomi Oreskes, I would put her resume up against anyone on this site as being qualified to talk climate science.
“Naomi Oreskes is Professor of History and Science Studies at the University of California, San Diego, Adjunct Professor of Geosciences at the Scripps Institution of Oceanography, and an internationally renowned historian of science and author. Having started her career as a geologist, she received her B.S. (1st class Honours) from the Royal School of Mines, Imperial College London, and then worked for three years as an exploration geologist in the Australian outback.
She returned to the United States to receive an inter-disciplinary Ph.D. in geological research and history of science from Stanford University, in 1990. Professor Oreskes has lectured widely in diverse venues ranging from the Madison, Wisconsin Civics Club to the Air Force Research Laboratory, and has won numerous prizes, including, most recently the 2011 Climate Change Communicator of the Year.
Professor Oreskes has a long-standing interest in understanding the establishment of scientific consensus and the role and character of scientific dissent. Her early work examined the 20th century transformation of earth science, in The Rejection of Continental Drift: Theory and Method in American Earth Science (Oxford, 1999) and Plate Tectonics: An Insider’s History of the Modern Theory of the Earth (Westview, 2001). She has also written on the under-acknowledged role of women in science, discussed in the prize-winning paper “Objectivity or heroism? On the invisibility of women in science” (OSIRIS 11 (1996): 87-113); and on the role of numerical simulation models in establishing knowledge about inaccessible natural phenomena (“Verification, validation, and confirmation of numerical models in the earth sciences,” Science 263 (1994): 641-646).
For the past decade, Professor Oreskes has primarily been interested in the problem of anthropogenic climate change. Her 2004 essay “The Scientific Consensus on Climate Change” (Science 306: 1686) has been widely cited, both in the United States and abroad, including in the Royal Society’s publication, “A Guide to Facts and Fictions about Climate Change,” in the Academy-award winning film, An Inconvenient Truth, and in Ian McEwan’s novel, Solar. Her opinion pieces have appeared in The Times (London), The Washington Post, The Los Angeles Times, Nature, Science, The New Statesman, Frankfurter Allgemeine, and elsewhere. Her 2010 book, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco to Global Warming, co-authored with Erik M. Conway, was shortlisted for the Los Angeles Times Book Prize and won the 2011 Watson-Davis Prize of the History of Science Society.
Her current research projects include completion of a book on the history of Cold War Oceanography, Science on a Mission: American Oceanography in the Cold War and Beyond (Chicago, forthcoming), and Assessing Assessments: A Historical and Philosophical Study of Scientific Assessments for Environmental Policy in the Late 20th Century, funded by the National Science Foundation. Professor Oreskes has joined the faculty at Harvard University as Professor of the History of Science and Affiliated Professor of Earth and Planetary Sciences at Harvard University.”
I think I will wait to judge her paper when I have actually seen it, and not let the pundits tell me what to think before I have seen it. Thanks for publicizing it to ensure an even wider reading than what Nature would provide. We will certainly see what “courteous” gentlemen you all are if the potential “fatal flaw” fails to materialize and you apologize for all the slander here. But you all will probably have dusted yourselves off by then and moved on to your next straw man.

July 20, 2014 11:15 am

Bill Illis says:
July 19, 2014 at 12:55 pm
…”, the accurate global warming models are the ones that project no global warming.
Yes Bill, it seems their “worst” models have become their “best” models.
The worm turns.

July 20, 2014 11:17 am

davidmhoffer says:
July 20, 2014 at 10:13 am
==============
Yes, in other words, Mr. Mosher has no reasonable explanation or hypothesis for why averaging the models works. “It just does” (or, perhaps, more correctly “it did” for the current observations, and a set of models selected by unknown criteria).
I can see no reason whatsoever to believe that the new model (the average of a fixed selection of other models) could be expected to yield a useful prediction beyond the last measurement. In particular, this whole average-of-the-models procedure has a whiff of overfitting about it. Time, of course, will tell, but I don’t think the smart money is on the average-of-the-models.
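The overfitting worry can be made concrete with a toy simulation. Everything below is invented for illustration (the counts and noise levels are arbitrary, and this is emphatically not the paper’s method): generate trendless random-walk “models”, select the few whose hindcast best matches flat observations, and check whether that skill survives into the forecast period.

```python
import numpy as np

def experiment(rng, n_models=38, n_hind=15, n_fore=15, n_select=4):
    """One toy trial: select the models that best match a flat hindcast,
    then measure their ensemble-mean error in and out of sample."""
    obs = np.zeros(n_hind + n_fore)  # toy "observations": flat everywhere
    # Toy "models": trendless random walks, so any hindcast match is luck.
    models = rng.normal(0.0, 0.2, size=(n_models, n_hind + n_fore)).cumsum(axis=1)
    hind_err = np.mean((models[:, :n_hind] - obs[:n_hind]) ** 2, axis=1)
    ens = models[np.argsort(hind_err)[:n_select]].mean(axis=0)
    return (np.mean((ens[:n_hind] - obs[:n_hind]) ** 2),
            np.mean((ens[n_hind:] - obs[n_hind:]) ** 2))

rng = np.random.default_rng(0)
trials = [experiment(rng) for _ in range(500)]
in_sample = np.mean([t[0] for t in trials])
out_sample = np.mean([t[1] for t in trials])
print(f"mean in-sample MSE:     {in_sample:.3f}")
print(f"mean out-of-sample MSE: {out_sample:.3f}")
```

Because the selected walks matched the hindcast by chance, the ensemble-mean error typically jumps once the selection window ends: in-sample agreement bought by selection says nothing about forecast skill.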

hunter
July 20, 2014 11:44 am

Avery,
If Naomi were defined only by her CV, she could be a saint of science history.
Unfortunately, she is also defined by her actions.
But you are correct to this extent: the average-of-averages/ensemble issue is a dubious card.
As to the smarts of AGW promoters: career success and winning can sharpen the mind. The question is, to what end?

Björn from Sweden
July 20, 2014 11:46 am

Avery Harden, the only reason Oreskes’s name is on that paper is so that the media can claim she is a climate expert because she has published papers on climate change.
That’s what this is about: promoting political pawns to climate queens.
It is irrelevant what the paper is about, only that it gets published and it somehow is about climate change.

Latitude
July 20, 2014 11:47 am

you mean to say the models are so bad…when you average them you get a better answer!
…well of course you do!………ROTFL
Bob, waiting on what you have to say…………

July 20, 2014 11:47 am

So is this a “game changer”? And speaking of which, when does Watts 201X get published?

Chuck Nolan
July 20, 2014 12:02 pm

Monckton of Brenchley says:
July 20, 2014 at 10:53 am
Published, serious science must be replicable. Unless we are told which models were used, and which were good, and which were poor, there can be no replication. So there is no science in the latest Lew paper.
———————————-
Nothing new from Lew.

July 20, 2014 12:24 pm

Avery Harden says:
July 20, 2014 at 11:13 am
I have no idea what contribution Naomi Oreskes has given in the upcoming work, but I have seen what she has written in earlier work. That was not really impressive.
Take e.g. “The Scientific Consensus on Climate Change”, where she read a lot of abstracts selected on the words ‘climate change’ and concluded that a large number of these abstracts endorsed the “consensus”. But while a few of these abstracts did do what she said, most of the abstracts were simply neutral and neither endorsed nor refuted the “consensus”. And I have read a few which endorsed the “consensus” in the abstract, but whose main article cast serious doubt on the real impact of more CO2…
See also the letter of Benny Peiser on the work of Oreskes:
http://www.abc.net.au/mediawatch/transcripts/ep38peiser.pdf

Reply to  Ferdinand Engelbeen
July 21, 2014 10:07 am

Certainly nothing wrong with everyone keeping an open mind and staying skeptical. That is what scientists do. But if that lack of absolute certainty is purposed to ensure perpetual delay of action from policy makers, then it is a fool’s errand. Just because we don’t know everything doesn’t mean we don’t know a lot. We can move on from what we know and focus on what we don’t know, while taking action now based on what we know.
Shipping companies, resource developers and their insurers are making plans for exploiting the warming Arctic. While you are arguing over whether the temperature data has been tampered with or the scientists dealing with it are dumb, ships are beginning to move in the newly opening Arctic. The Pentagon sees the security threats of a warming world and is planning for it. CEOs revise their business plans to account for it. Insurance companies are reformulating to deal with increased risk. The rate of investment in renewable energy is rapidly increasing. Some states are already ahead of the EPA in reducing reliance on coal, and many others were already in the process and are ahead of the curve.
We can have our little distracting discussion about the flaws of AGW science, but policy makers around the world are already concluding, and are voting with their hands and feet. The Australia aberration won’t stand for long.

Tom J
July 20, 2014 12:39 pm

Avery Harden
July 20, 2014 at 11:13 am
says:
‘This discussion is a good example of the house of cards metaphor. You all presume you can pull one card out and the whole AGW premise crashes down.’
A bridge instructor I learned from, who is a retired physicist, is also an avid poker player, but he considers bridge to be the superior game. I will concede that your house of cards metaphor is correct. Just not quite in the way you think. It was Deng Xiaoping who modernized China after Chairman Mao did the world a favor by traveling on to another one. Deng Xiaoping was an avid bridge player. Could that be because it was a useful game through which to discover the turns of both fate and strategy in life? It’s an interesting experience to have your opponent destroy your King of Spades with their Two of Clubs and thereby take that trick in a No Trump contract. You consider the card presented here to be nothing but a lowly Two of Clubs. And it very well may be. But, if this were bridge, and I were you, I’d be wary of it, since it’s already had help from the other cards in the hand, same suit or otherwise. And I suspect you’ve failed to count those cards. If this were poker, I know how I’d place my bet.
‘Your tactics are right out of the political playbook where one tries to define ones’ opponent negatively before they get to define themselves. Regarding Naomi Oreskes, I would put her resume up against anyone on this site as being qualified to talk climate science.’
Now, concerning the very first sentence in the comment immediately above, may I kindly advise you and yours to take at least a brief glance in the mirror first thing in the morning, difficult though that may be, so you can properly tidy yourselves up prior to writing sentences such as that one. And be prepared if the mirror not only reflects, but also somehow echoes quite a bit. And, as far as Naomi Oreskes is concerned, may I opine that she may very well be the reason that employers, throughout the centuries, while they may wish to initially see a resume, will not hire an employee in the absence of a face-to-face interview, impressive though that resume may be.

Reply to  Tom J
July 21, 2014 10:24 am

Tom J, first, regarding Ms. Oreskes: her resume mentioned her various prestigious employers, so it was not just a delusional self-impression; she has worked for some of the best. I took a class from her, and my opinion is based on that.
Regarding house of cards vs. bridge, I recall my dear grandmother played bridge with the same group of friends for over 60 years. They paired up and faced off against each other even after obvious dementia had set in. An amusing sight. Rather than a house of cards that crashes down when one card is removed, AGW science is more like a deck of cards, where you may remove one card and it has little effect on the overall deck.

NikFromNYC
July 20, 2014 12:43 pm

Avery taunts: “This discussion is a good example of the house of cards metaphor. You all presume you can pull one card out and the whole AGW premise crashes down.”
That’s how science works, which is why in normal sciences like physics billions are spent desperately looking for even tiny cracks in the standard model and various cosmological theories, whereas in climate “science” this isn’t done at all, whatsoever, except by a few skeptics. Therefore climate claims are indeed a house of cards. It has no rigorously tested and contested foundation. They aren’t even pretending to do normal science, and since Oreskes of all people is most aware of this, that will make her a notorious figure in the future history of science. She has already been responsible for the biggest fraud of all in the public climate debate, by using extremely soft survey questions to create a false 97% consensus claim that 97% of skeptics would also fit into, had she bothered to ask them too, since they too mostly agree with the perfectly mild warming caused by the greenhouse effect, minus the hidden amplification of it tucked into supercomputer models. Merely spotting ahead of time yet another irreproducible claim isn’t some nefarious strategy but just another exasperated act of whistleblowing. This paper is in top journal Nature. Last year, in top journal Science, the latest hockey stick sensation appeared, by Marcott. Hours afterwards, skeptics alone exposed that there was no damn blade in any of the input data of this “super hockey stick.” Yet you come here accusing skeptics instead of alarmists of bad science?! How can anybody be so righteous about being so nakedly wrong? Just where *are* those cards that stand up in even a mild skeptical breeze?
“No amount of experimentation can ever prove me right; a single experiment can prove me wrong.” – Albert Einstein

Reply to  NikFromNYC
July 21, 2014 10:41 am

NikFromNYC, I was reminded of Rush Limbaugh reading your piece. Rush is good at listening to liberals make their case, then taking the framework of the liberal case and applying the dialectic to it. He then throws his mirrored reconstruction back at them, along with a little poop.
I responded to several others in this thread addressing some of your points, so I won’t repeat myself.

kadaka (KD Knoebel)
July 20, 2014 2:26 pm

From Avery Harden on July 20, 2014 at 11:13 am:

(…) You really underestimate the intelligence of a lot of smart people and overestimate your own. I look forward to the actual article to see how relevant this one card you all are obsessing on is to your anti AGW premise.

Did you post that on the right blog? That obviously should be something for SkepSci or ReputedlyClimate, as it’s pretty hard to find people here who are “anti AGW”. All of it that has potentially happened has been remarkably unremarkable, far less severe than prophesied by the High Elders of Climate Science, and actually beneficial on the whole. It’s far easier to suffer and die when cold and hungry than warm and surrounded by bountiful CO2-fed crops.
We have two major groupings here, those who have studied the evidence and are ready to welcome and adapt to whatever small amount of AGW may yet come, and those still studying.
Perhaps in your obsession you have overestimated the “No AGW!” sentiment among educated intelligent people. Here at WUWT we are far from “anti AGW”. Indeed, by an overwhelming consensus we are AGW inclusive.
Why you want to hate?
Don’t discriminate!
When the warming is real,
We love the way it feels.
Greet the warmth that we create,
‘Cause we know it will be great!
Peace out, man.

Reply to  kadaka (KD Knoebel)
July 21, 2014 11:00 am

No hate here, I appreciate and enjoy the good discussion we have been having. It is natural to be a little defensive here because I am so conditioned to personal attack from “skeptic” sites, notably not this one; so far.
If the climate changes we are having were natural, I would be like you and not be concerned. But if humans are causing it, then I’m worried. I already see massive species extinction, deforestation, seas being raked clean of fish, toxic dumps and a lot more. Do you have any concern with the quality of the environment you are leaving your grandchildren? We see 7 billion people today on the planet voraciously consuming resources; your grandchildren will see 9 billion competing for the same declining resources. Seafood won’t come from the ocean but from poop-filled ponds. And what about today: you don’t see what is happening around the Arctic? You just look away from the facts on the ground and say the papers with numbers and stuff are all fraud?

July 20, 2014 3:42 pm

kadaka, I am well aware of his Linkedin profile and he has never been professionally employed as a software developer. You apparently are unable to interpret self-appointed titles from real ones. Try reading my article, you might learn something. Unlike you I have extensive experience hiring IT personnel and cannot spot BS on resumes immediately. Mr. Mosher is not an R software developer and the code he has worked on demonstrates a lack of professional training in software development. Only someone absolutely incompetent in computer science would hire an amateur like him for software development.
hunter, name one fact about his CV that I posted that is not true. Mr. Mosher refuses to engage in tough questions about his background and instead has to rely on his mom (Judith Curry) and dad (Steve McIntyre) to protect him by censoring my comments at their sites. Why is that? If what I was saying was not true then you could prove me wrong but he can’t. He runs and hides on all the websites I can respond without be censored.

July 20, 2014 3:43 pm

Correction:
kadaka, I am well aware of his Linkedin profile (I linked to it!) and he has never been professionally employed as a software developer. You apparently are unable to interpret self-appointed titles from real ones. Try reading my article, you might learn something. Unlike you I have extensive experience hiring IT personnel and can spot BS on resumes immediately. Mr. Mosher is not an R software developer and the code he has worked on demonstrates a lack of professional training in software development. Only someone absolutely incompetent in computer science would hire an amateur like him for software development.
hunter, name one fact about his CV that I posted that is not true. Mr. Mosher refuses to engage in tough questions about his background and instead has to rely on his mom (Judith Curry) and dad (Steve McIntyre) to protect him by censoring my comments at their sites. Why is that? If what I was saying was not true then you could prove me wrong but he can’t. He runs and hides on all the websites I can respond without be censored.

July 20, 2014 3:48 pm

Hunter, I see you tried to post a rant to my website falsely claiming I inaccurately posted his resume. Sorry but you failed to provide a single piece of evidence about what I posted that is inaccurate. Instead of being emotional, lets try stating a fact you can verify is inaccurate.

July 20, 2014 3:57 pm

Steven Mosher says:
July 20, 2014 at 8:58 am
now, go do the work
start with the literature.

[Calm down, tone it down. .mod]

July 20, 2014 4:48 pm

If CO2 has a negligible effect on warming, yes, that one card brings it all down.

NikFromNYC
July 20, 2014 4:49 pm

Correction: Oreskes did an early version of the recent Cook 97% consensus claim based on a literature survey rather than a soft-category questionnaire survey, which was actually done by others. And here’s the shocker I wasn’t aware of, the one that makes Cook’s claim more brazenly deceptive: when Richard Tol early on, via a Twitter storm, tore Cook’s result apart by pointing out the use of the bizarre boutique search term “global climate change,” it turned out that phrase was an innovation of the original Oreskes 97% claim too, one she failed to reveal until Benny Peiser discovered how it pulled only affect papers into the survey but left out most core “global warming” or “climate change” papers that would actually address attribution, because it’s such a weird phrase. So she prearranged to include only papers from the likes of economists, psychologists, and ecologists, who are all expected to include boilerplate homage to the big emergency whose bandwagon they are jumping onto. Just as skeptics found only a tiny percent of real climate science papers promoting the IPCC category of half of recent warming being anthropogenic, so too did Peiser earlier debunk Oreskes, meaning Cook was just cooking up leftovers.

Latitude
July 20, 2014 5:01 pm

Avery….”This discussion is a good example of the house of cards metaphor.”
Not at all….the article is so over the top ludicrous….no one can help making fun of it

kadaka (KD Knoebel)
July 20, 2014 7:51 pm

Anonymous internet arrogant bullies are so cute when they’re left sputtering after the ones who slapped them down have already moved on. Besides, loudly proclaiming “I’m not done yet! You haven’t beaten me!” to a deserted room is a well-respected sign of emotional maturity. Or not.

pete
July 20, 2014 8:59 pm

Firstly, my understanding is that Mosher is referring to his particular BEST result, not to more generalised statistical theory, when he claims that averaging the models produces a “better” outcome. As pointed out in prior posts, there is no statistical basis for Mosher’s claim.
However, that statement doesn’t exactly mean that the average of the ensemble is any ‘better’ than any individual model. What you have, basically, is an ensemble of models that, while supposedly based on the same physical theory, are so flawed that their individual hindcasting errors are all over the place and just happen to average out. While you may produce a ‘better’ hindcast result from this procedure, you will not produce a better forecast as the underlying physical mechanism is not being modeled.
In other words, it is a meaningless fluke. The forecast remains the average of a bunch of poor models that do not accord with the underlying physical processes.
Regarding the paper, should we find the identity of the 4 ‘good’ models, I expect that we will see that outside the period of study their hindcasts are quite poor, and there is no reason to expect any kind of predictive power from them. Much like any other climate model. If there were a single model that could hindcast and forecast with any kind of accuracy, then they would use that single model. Of course, layered on top of this is that the climatologists are trying to forecast a temperature model with a physics model, given that the global temperature is a statistical construct and not a physical quantity.
This paper will end up being a serious own-goal for the Team (SMH and other badge-wearing alarmists notwithstanding), and will provide endless amusement given that Lewandowsky and Oreskes are listed authors.

lee
July 20, 2014 9:00 pm

Avery Harden says:
July 20, 2014 at 11:13 am
Please get back to us with your thoughts once you have read the subject paper.
You can tell us if the comments were too harsh, whether they were substantiated, etc..

Editor
July 20, 2014 9:36 pm

hunter says:
July 19, 2014 at 10:13 pm

Not to speak out of turn, but here are some links for objective ideas about ensembles:
http://en.wikipedia.org/wiki/Ensemble_forecasting
http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=38733

Thanks for the links, Hunter. The Wikipedia article states:

Ensemble forecasting is a form of Monte Carlo analysis: multiple numerical predictions are conducted using slightly different initial conditions that are all plausible given the past and current set of observations, or measurements.

Note that this has absolutely nothing to do with what is called “ensemble forecasting” in climate science. Instead of using one model and “slightly different initial conditions”, climate ensembles use big groups of untested models using whatever initial conditions and forcings and internal parameters their authors prefer …
Also, unless you can provide us with an un-paywalled version of the Cambridge paper, their abstract is nothing but claims which may or may not be true, and which may or may not be relevant to taking the average of a bunch of crummy Tinkertoy models …
Best regards,
w.
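For contrast, the Wikipedia sense of the term can be sketched in a few lines: one model, many slightly perturbed initial conditions. The chaotic logistic map below is my own stand-in for “one model”; nothing here is taken from the paper.

```python
import numpy as np

def logistic_run(x0, steps=50, r=3.9):
    """Iterate the chaotic logistic map from initial condition x0."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

rng = np.random.default_rng(1)
# ONE model, many nearly identical starting states -- the Monte Carlo /
# perturbed-initial-condition sense of "ensemble forecasting".
finals = [logistic_run(0.5 + rng.normal(0.0, 1e-6)) for _ in range(20)]
print(f"spread after 50 steps: {np.std(finals):.3f}")
```

Tiny (1e-6) perturbations grow into a wide spread of outcomes, which is the point of the technique. A multi-model climate ensemble instead mixes structurally different models with different forcings and parameters, which is a different procedure entirely.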

July 20, 2014 9:50 pm

kadaka, so you want to be tested? Since you are as much a computer illiterate as Mr. Mosher, let’s see if you can spot the idiocy in this statement:
http://stevemosher.wordpress.com/modis-reprojection-tool/
“Unless you are in fact running Win2000 or NT then #1 will be the choice you want to make. If you are running Windows2000 then the install is going to make a change to autoexec.bat. If you are running NT, XP or anything later than XP ( Vista, 7 etc) Then there is no autoexec.bat to change and the installer will be modifying other files to do the install.”
This is elementary knowledge about operating systems that any competent programmer should know. Your failure to recognize something as simple as this means you should retire from commenting on anything computer-related on the Internet ever again.

Mark T
July 20, 2014 9:58 pm

Exactly, Willis.
Mark

Matt L.
July 20, 2014 10:12 pm

“Climate models on the mark, Australian-led research finds” – Sydney Morning Herald, Peter Hannam
Just once I’d like to see a headline with more info in it:
“4 of 18 climate models pretty dang close to the mark”
“22% of climate models on the mark … sort of”
“Some climate models qualified successes at showing the world is warming”

lee
July 21, 2014 1:11 am

Matt L. says:
July 20, 2014 at 10:12 pm
4 of 38 CMIP5 models are ‘pretty dang close’. About 10.5%

kadaka (KD Knoebel)
July 21, 2014 1:33 am

From Poptech on July 20, 2014 at 9:50 pm:

This is elementary knowledge about operating systems that any competent programmer should know.

I run Linux. I don’t have to know Windoze minutiae.
In case you missed the last decade or so of computer science advances, competent qualified programmers can now go their entire careers without knowing the OS fiddly bits. They may not even need to know what OS, for example Java has long been theoretically OS independent.
But I do know what Mosher means. I fired up an old WinXP partition on another machine to confirm. C:\AUTOEXEC.BAT is there, but not there. It’s a zero-byte file, just a directory entry, kept for legacy purposes for programs that may look for it, just like CONFIG.SYS.
Thus Mosher is right (although grammatically clunky), “…there is no autoexec.bat to change…” because that file shouldn’t be changed, but there are other files that are available to be changed that the installer will be modifying.

July 21, 2014 5:43 am

kadaka, I run Windows and Linux, so unlike you I understand both. You fail hard: why would it be changing autoexec.bat in Windows 2000, let alone any Windows OS since 3.11? Thanks for proving you are a computer illiterate like Mosher. He is confusing Windows 2000 with Windows Millennium, which is even worse. Every competent Windows programmer I have ever worked with knows these elementary things. Mr. Mosher is an amateur who has no business giving tech advice.
Thanks for failing. Now please stop talking about subjects you clearly have no knowledge of.

Resourceguy
July 21, 2014 6:15 am

This episode says a lot about the publication source– Nature.

Resourceguy
July 21, 2014 6:16 am

It could be worse, they could be publishing research on new brain surgery techniques.

July 21, 2014 7:28 am

Away from urban heat islands:
63 °F down in the Sabine River bottoms early today in Northeast Texas.
Add that to the rest of the data, Michael Mann et al.

Gonzo
July 21, 2014 1:15 pm

@avery harden [ I already see massive species extinction, deforestation, seas being raked clean of fish, toxic dumps and a lot more. Do you have any concern with the quality of the environment you are leaving your grandchildren? ]
And just what does CO2 have to do with any of the problems you list? Maybe if we weren’t wasting billions tilting at the CO2 windmill, we’d be able to clean up our physical environment.

July 21, 2014 1:23 pm

Avery Harden says:
July 21, 2014 at 10:07 am
Certainly nothing wrong with everyone keeping an open mind and staying skeptical. That is what scientists do. But if that lack of absolute certainty is purposed to ensure perpetual delay of action from policy makers, then it is a fool’s errand.
=====================
I don’t think anybody is looking for absolute certainty. The debate is that the evidence does not support what some think they “know” now.
Avery Harden says:
July 21, 2014 at 11:00 am
You just look away from the facts on the ground and say the papers with numbers and stuff is all fraud?
====================
Well, if you’re looking at the “facts on the ground” through the distorting lens of incorrect science, then you’re not seeing what you think you’re seeing. There was a time in the not too distant past when mass starvation was predicted at 4 billion people. It didn’t happen. There were some local famines, and some starvation caused by power-hungry dictators, but nothing caused by the exhaustion of the earth’s capacity to produce food, or man’s capacity to innovate with respect to that food production.
There’s nothing to suggest that the rate of extinction of animal and plant species is any different than it ever was. Extinction is a part of evolution. Should evolution just stop because we’re observing it?
You should read Willis’ articles on what happens in places where cheap energy is not available — it’s much worse for the “environment” because people chop down trees for wood, and burn toxic stuff in their little huts. The human misery, and “environmental damage” caused by lack of cheap, abundant, energy is far worse than a little CO2 in the atmosphere.
The real problem with the “carbon fraud” is it is diverting trillions of dollars from where it may do some good into the pockets of the politically connected. I fail to see how that’s good for anyone (unless you’re one of the cronies, at least until such time as the people wise up and come looking for you…).

Reply to  Kate Forney
July 22, 2014 11:03 am

Kate Forney, you said “The real problem with the “carbon fraud” is it is diverting trillions of dollars from where it may do some good into the pockets of the politically connected. I fail to see how that’s good for anyone (unless you’re one of the cronies, at least until such time as the people wise up and come looking for you…).”
This makes me think of the solar panels on my roof. My electric usage bill from the power company has been averaging between $10 and $15 a month. My system will fully pay for itself in less than 5 years. I think your alarmism about mitigating AGW is overblown.

Reply to  Avery Harden
July 22, 2014 12:31 pm

Avery,
Where do you live, and what was your utility, and federal, subsidy for your system?

Reply to  Brad
July 22, 2014 2:30 pm

Brad, I live near Baltimore, my federal taxable income was reduced $3,200. The state reimbursed me $1,000. The utility’s contribution is that I have a meter that goes one way when I am generating and the the opposite way when I am using when not generating. What I generate goes on the grid, no batteries needed.
Now, if you don’t like the idea of the government providing encouragement in any way to do this, remember that oil, natural gas, coal and roads also get subsidies. The biggest subsidy of all that we are just beginning to pay for is to cover the cost of having used our skies as a free sewer.
I would have done my photovoltaic system even without the subsidy. I mostly just wanted to learn how it all works. As time goes on and I see that actual electrical usage number staying consistently low, I’m beginning to believe this really will save me some significant money over time.

Reply to  Avery Harden
July 22, 2014 2:50 pm

Can you provide us with a spreadsheet showing the life cycle costs for your system?
Most residential systems cost around $25,000 if I remember right?
Anthony had a post where he showed his home installation a while back.
http://wattsupwiththat.com/2013/03/23/an-update-on-my-solar-power-project-results-show-why-i-got-solar-power-for-my-home-hint-climate-change-is-not-a-reason/

Reply to  Brad
July 23, 2014 11:04 am

Brad, you said, “Can you provide us with a spreadsheet showing the life cycle costs for your system? Most residential systems cost around $25,000 if I remember right? Anthony had a post where he showed his home installation a while back.”
First, thanks for the link to Anthony’s post, that was very interesting and educational. I wish I was more tech savvy and could give a more detailed breakdown of what I have. I really don’t know what my “life cycle costs” will be. I have had my system for two years. I guess I will learn the hard way.
Mine is a difficult system in that I am in a townhouse, inside-of-group, with a small roof to deal with. I only have 4 panels, but they seem to produce close to what I consume. My electric usage was low to begin with, about $1,000 a year. Though my electric cost has been $10 to $15 a month with the system, BGE charges me a “delivery” charge that varies up to $20 a month, usually more than my actual usage. (That is an answer to those saying we mooch off the public system.)
I feel I paid too much for such a small system at $10,200, but it was a difficult installation considering it is a townhouse. It took a 5 man crew 3 days to install it, so $10,200 is a fair price for them. The system seems rugged and dependable, and maintenance free so far. The infrastructure of the system is the major cost so I can add panels later if I need them, already having covered the big cost of the project.
My panels are American made with a 25 year warranty. The installation kept a competent crew gainfully employed for 3 days. It was a turnkey operation. The county inspector did raise the specter of the firefighters ability to fight a fire with the system taking up half the roof.
All things considered, it is obvious that photovoltaic electricity generation is a great technology. Sure, there are lots of problems to be ironed out and nothing is ever perfect. I like to think I am contributing to the learning curve on this. Independence from the grid someday would be nice; batteries don’t seem cost effective just yet. Imagine: someday folks may be able to plug in their cars off these systems. The potential is that someday the clean energy produced will more than mitigate the pollution it took to build them. And I might just save some money as well.
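For what it’s worth, the figures Avery gives above are enough for a back-of-the-envelope payback estimate. A minimal sketch in Python, using only the numbers stated in this thread; the 25% marginal tax rate is an assumption (the $3,200 is a taxable-income reduction, not a credit, so its cash value depends on the owner’s bracket), and the delivery charge, panel degradation, and electricity-price changes are ignored:

```python
# Back-of-the-envelope solar payback, using the figures from the comments above.
system_cost = 10_200          # installed price of the 4-panel system
state_rebate = 1_000          # Maryland reimbursement
federal_deduction = 3_200     # reduction in federal taxable income (not a credit)
marginal_rate = 0.25          # ASSUMED tax bracket; not stated in the thread

net_cost = system_cost - state_rebate - federal_deduction * marginal_rate

old_annual_bill = 1_000       # usage was "low to begin with, about $1,000 a year"
new_monthly_usage = 12.50     # midpoint of the $10-$15/month figure
annual_saving = old_annual_bill - 12 * new_monthly_usage

payback_years = net_cost / annual_saving
print(f"net cost ${net_cost:,.0f}, saving ${annual_saving:,.0f}/yr, "
      f"payback {payback_years:.1f} years")
```

Under these assumptions the simple payback works out to roughly ten years; the result is quite sensitive to the assumed tax rate and to the charges the sketch leaves out.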

kadaka (KD Knoebel)
July 21, 2014 2:01 pm

From Poptech on July 21, 2014 at 5:43 am:

kadaka, I run Windows and Linux so unlike you, I understand both. You fail hard, why would it be changing it in Windows 2000 let alone any Windows OS since 3.11?

Since I was running Win95 in ancient times, when altering autoexec.bat to resolve conflicts and find the correct loading sequence was part of the fun, and since Win95 came out in 1995 while Win3.11 was released in 1993 (thus Win95 is an OS since 3.11), you are certainly showing your knowledge.
https://support.microsoft.com/kb/232558
M$ officially provides directions for altering autoexec.bat and config.sys for Win95 and 98 (Standard and Second Editions), thus they recognize the possible need to alter those files in a Windows OS since 3.11.
Ah! Once again Mosher is right, which I do find annoying. Unlike you, I’ve now looked at the manual for the software mentioned and checked the install instructions. Page 15:
https://lpdaac.usgs.gov/sites/default/files/public/MRTSwath_Users_Manual_2.2_Dec2010.pdf

Windows 95/98/2000 users must edit the AUTOEXEC.BAT file to add the path information and set the MRTSWATH_DATA_DIR variable. Using Notepad or some other text editor, add the following two lines to the end of the AUTOEXEC.BAT file:

PATH %PATH%;”c:\Program Files\MRTSwath\Tool\bin”
set MRTSWATH_DATA_DIR=”c:\Program Files\MRTSwath\Tool\data”
set MRTSWATH_HOME=”c:\Program Files\MRTSwath\Tool”

where c:\Program Files\MRTSwath\Tool was the directory chosen for MRTSwath installation. If the MRTSwath was installed in some other directory, then change these directory paths accordingly.

SOP, first do it by the manual, then call tech support if it doesn’t work.
Manual says for Win2000 you will change autoexec.bat for install, “…then the install is going to make a change to autoexec.bat.” Mosher is correct, again.
I took the time to RTFM, you did not RTFM and you got it wrong. Competent programmers know to RTFM.

July 21, 2014 2:31 pm

kadaka, please stop perpetually demonstrating you are a computer illiterate. The USGS documentation is wrong and anyone knowledgeable would have noticed this,
The second section is relevant to Windows systems and includes specifics for 95/98/2000 and NT/ME/XP.
ROFLMAO!
They are confusing Windows Millennium based on Windows 98SE with Windows 2000 which is based on NT. You do not make changes to the path in Windows 2000 using the autoexec.bat file.
Mosher, the computer illiterate clown just repeats it, not knowing it was wrong.
Kadaka, how bad do you want me to embarrass you right now?

July 21, 2014 2:53 pm

Since I was running Win95 in ancient times and altering autoexec.bat to resolve conflicts and find the correct loading sequence was part of the fun.

WTF are you talking about? What conflicts? If you wanted no conflicts you did not use legacy hardware that did not have Windows 95 compatible drivers.

M$ officially provides directions for altering autoexec.bat and config.sys for Win95 and 98 (Standard and Second Editions), thus they recognize the possible need to alter those files in a Windows OS since 3.11.

Yes, only for legacy compatibility, which almost always meant system instability. Microsoft included all sorts of left over crap to make sure as many things that were written improperly still worked as possible. No version of Windows since 3.11 required the autoexec.bat file and properly written Windows 95 or higher applications should never need to edit them.

kadaka (KD Knoebel)
July 21, 2014 4:02 pm

From Poptech on July 21, 2014 at 2:53 pm:

No version of Windows since 3.11 required the autoexec.bat file and properly written Windows 95 or higher applications should never need to edit them.

http://www.computerhope.com/ac.htm

Because Microsoft is trying to steer away from MS-DOS, these files are not required for Windows 95, Windows 98, Windows NT, Windows ME, Windows 2000, Windows XP, or later operating systems. However, in some cases it may still be necessary for users to edit or configure these files.

You also have conveniently forgotten about the large wealth of DOS programs available. “Properly written” Win95 programs won’t need autoexec.bat and config.sys as Win95 didn’t normally need them, but there were LOTS of DOS programs that needed those files and needed them properly set up.
Then there was the fun of properly setting them up on a DOS boot disk…

July 21, 2014 4:55 pm

I’ve got it: If you turn the “worst” models upside down, you’ll get the “best” models.

kadaka (KD Knoebel)
July 21, 2014 5:06 pm

From Poptech on July 21, 2014 at 2:31 pm:

kadaka, please stop perpetually demonstrating you are a computer illiterate.

But it’s fun! Both Mosher and I show how we have well above average levels of computer literacy, you keep saying each of us is “a computer illiterate” as if computer literacy had no range and was only yes or no, and you keep demonstrating you’re an anonymous internet arrogant elitist jerkwad! What’s not to like?

The USGS documentation is wrong and anyone knowledgeable would have noticed this,

Doesn’t matter. You want tech support, you do it by the manual. You don’t tell the tech “But I thought it was wrong so I did what I thought was right, but it didn’t work, so now you have to fix it!”
Besides, what happens when you change things in autoexec.bat under 2000?
https://support.microsoft.com/kb/124551

Windows parses the AUTOEXEC.BAT file during startup by default, which results in the appending of the path statement in the AUTOEXEC.BAT file to the system path created by Windows.

Put PATH and SET commands in autoexec.bat under 2000, they get added to the relevant 2000 startup files. MRTSwath adds PATH and SET commands to autoexec.bat. The method may seem clunky, but it is not wrong.

You do not make changes to the path in Windows 2000 using the autoexec.bat file.

And yet you can. M$ confirmed it.

July 21, 2014 10:27 pm

kadaka (KD Knoebel) says:
July 21, 2014 at 5:06 pm
But it’s fun! Both Mosher and I show how we have well above average levels of computer literacy, you keep saying each of us is “a computer illiterate” as if computer literacy had no range and was only yes or no, and you keep demonstrating you’re an anonymous internet arrogant elitist jerkwad! What’s not to like?

No you have both demonstrated to be computer illiterates. I have never seen someone continue to defend something that is irrefutably wrong.
https://lpdaac.usgs.gov/sites/default/files/public/MRTSwath_Users_Manual_2.2_Dec2010.pdf
“The second section is relevant to Windows systems and includes specifics for 95/98/2000 and NT/ME/XP.”
“Windows 95/98/2000 users must edit the AUTOEXEC.BAT file to add the path information”

“Windows NT/ME/XP users must edit their user keys to add the MRTSwath PATH”
Like I said before, they are confusing Windows Millennium with Windows 2000. This is not up for debate, no matter how ignorant you are on this subject. As I stated before, I am well aware that Windows continued to use those files for compatibility with poorly coded legacy applications (they are overridden by system and user environment variables); that has nothing to do with the fact that the information provided by the USGS and Mosher is 100% WRONG.
Doesn’t matter. You want tech support, you do it by the manual.
ROFLMAO, you sir are one of the dumbest people on the Internet. The USGS documentation is 100% wrong and I am going to make an example out of you now.

Put PATH and SET commands in autoexec.bat under 2000, they get added to the relevant 2000 startup files. MRTSwath adds PATH and SET commands to autoexec.bat. The method may seem clunky, but it is not wrong.

They get overridden by the system and user environmental variables. It is not clunky, it is 100% WRONG.
http://www.computerhope.com/issues/ch000549.htm
The path is now managed by Windows 2000 and Windows XP and not the autoexec.bat or autoexec.nt files as was done with earlier versions of Windows.
This is a good day I get to embarrass Mosher, you and the USGS all at the same time.
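Setting the invective aside, the override behavior being argued about can be sketched from the layering Microsoft KB 100843 describes: variables are applied in the order system, then AUTOEXEC.BAT, then user, with a later layer replacing a like-named earlier entry. A minimal illustration in plain Python (not the Windows API; the variable values are hypothetical):

```python
# Sketch of environment-variable layering per MS KB 100843 (not the Windows API).
# Order: system variables, then AUTOEXEC.BAT variables, then user variables;
# a later layer replaces a like-named entry from an earlier one.
def effective_env(system_vars, autoexec_vars, user_vars):
    env = {}
    for layer in (system_vars, autoexec_vars, user_vars):
        env.update(layer)
    return env

# Hypothetical values (the MRTSwath path is taken from the manual quoted above):
env = effective_env(
    system_vars={"TEMP": r"C:\WINNT\Temp"},
    autoexec_vars={"MRTSWATH_HOME": r"C:\Program Files\MRTSwath\Tool"},
    user_vars={"MRTSWATH_HOME": r"D:\MRTSwath"},  # the user layer wins
)
print(env["MRTSWATH_HOME"])  # → D:\MRTSwath
```

So a user-level variable does replace one set in AUTOEXEC.BAT, though an AUTOEXEC.BAT variable in turn replaces a like-named system variable; PATH is the exception, being appended rather than replaced.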

July 21, 2014 11:35 pm

Kadaka, I really feel bad embarrassing you like this,
http://msdn.microsoft.com/en-us/library/ms954375.aspx

Chapter 1. Windows Fundamentals
Summary of Windows Fundamental Requirements
Rationale: Passing these requirements will help ensure that your application runs in a stable, reliable manner on Windows operating systems.
Customer benefits: Customers can be confident that a compliant product will not adversely affect the reliability of the operating system.
Requirements
5. Do not read from or write to Win.ini, System.ini, Autoexec.bat or Config.sys
5. Do Not Read from or Write to Win.ini, System.ini, Autoexec.bat, or Config.sys
Your application must not read from or write to Win.ini, System.ini, Autoexec.bat, or Config.sys. These file are not used by Windows 2000 systems

The USGS is wrong, you are wrong and Mosher is more certainly wrong.
Thanks for playing computer illiterates.

kadaka (KD Knoebel)
July 22, 2014 5:56 am

From Poptech on July 21, 2014 at 11:35 pm:

Kadaka, I really feel bad embarrassing you like this,

Certainly someone here should be embarrassed by now, but by being anonymous and internet arrogant they clearly have no shame.

http://msdn.microsoft.com/en-us/library/ms954375.aspx

Savvy readers will note that page’s Reference to the “Certified for Microsoft Windows Logo” program. If you wanted the privilege of that logo then you followed the requirements on that page. If you didn’t care about that logo, for example you’re a government agency releasing a free tool presumably for those sufficiently computer literate to install and use it, you didn’t have to follow those requirements.
You may also notice of the References the two M$ links are dead, VeriTest goes to their main page (bad link), and there’s no link for the SDK.
From the doc:

5. Do Not Read from or Write to Win.ini, System.ini, Autoexec.bat, or Config.sys
Your application must not read from or write to Win.ini, System.ini, Autoexec.bat, or Config.sys. These file are not used by Windows 2000 systems, and some users remove them.

Technically true that Win2000 does not use them, except as noted in the previous reference I provided, whereby on start-up Win2000 will check autoexec.bat for paths. Win2000 does not use those files when up and running.
That Win2000 does check autoexec.bat for paths is also documented at another M$ reference:
https://support.microsoft.com/kb/100843

There are three levels of environment variables in Microsoft Windows NT; the system environment variables, the user environment variables, and the environment variables that are set in the AUTOEXEC.BAT file. (…)

AUTOEXEC.BAT environment variables
All environment variables and the paths set in the AUTOEXEC.BAT file are used to create the Windows NT environment. Any paths in the AUTOEXEC.BAT file are appended to the system path.
How environment variables are set
Environment variables are set in the following order:
* System variables
* AUTOEXEC.BAT variables
* User variables
How the path is built
The Path is constructed from the system path, which can be viewed in the System Environment Variables field in the System dialog box. The User path is appended to the system path. Then the path from the AUTOEXEC.BAT file is appended.
Note: The environment variables LibPath and Os2LibPath are built the same way (system path + user path + AUTOEXEC.BAT path).

APPLIES TO
* Microsoft Windows 2000 Server
* Microsoft Windows 2000 Advanced Server
* Microsoft Windows 2000 Professional Edition

There it is, again. Win2000 checks autoexec.bat at start-up for path info.
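The PATH assembly the KB passage above describes (system path, then user path, then the AUTOEXEC.BAT path appended last) can be sketched in a few lines. This is an illustration of the documented ordering, not Windows code, and the user-path entry is hypothetical:

```python
# Sketch of PATH assembly per MS KB 100843: system path + user path +
# AUTOEXEC.BAT path, appended in that order (illustrative paths only).
def build_path(system_path, user_path, autoexec_path):
    # Empty components are skipped, mirroring a user with no path additions.
    parts = [p for p in (system_path, user_path, autoexec_path) if p]
    return ";".join(parts)

path = build_path(
    r"C:\WINNT\system32;C:\WINNT",
    r"C:\Tools",                            # hypothetical user addition
    r"C:\Program Files\MRTSwath\Tool\bin",  # the line MRTSwath's install adds
)
print(path)
```

Which is the point at issue: whatever autoexec.bat appends still ends up on the effective PATH under Win2000, even though editing the registry is the more conventional route.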

The USGS is wrong, you are wrong and Mosher is more certainly wrong.

That’s the way, sweetie, just close your eyes and keep repeating it and maybe you’ll wake up in a new magical realm where that is true.
From Poptech on July 21, 2014 at 10:27 pm:

No you have both demonstrated to be computer illiterates. I have never seen someone continue to defend something that is irrefutably wrong.

ROFLMAO, you sir are one of the dumbest people on the Internet. The USGS documentation is 100% wrong and I am going to make an example out of you now.

They get overridden by the system and user environmental variables. It is not clunky, it is 100% WRONG.

This is a good day I get to embarrass Mosher, you and the USGS all at the same time.

That’s the way, darling. Ignore the facts, say it over and over again, louder and louder, declare your grand victory, and maybe possibly someday the world will rewrite itself and you really will be correct!

July 22, 2014 6:45 am

Savvy readers will note that page’s Reference to the “Certified for Microsoft Windows Logo” program. If you wanted the privilege of that logo then you followed the requirements on that page. If you didn’t care about that logo, for example you’re a government agency releasing a free tool presumably for those sufficiently computer literate to install and use it, you didn’t have to follow those requirements.

Wrong, those are the proper ways to write applications to run in Windows 2000. Not caring means you are an incompetent idiot who has no business writing code. The difference between incompetent hacks like you and myself, is I know the right way to do things that does not cause other problems.
It is an irrefutable fact that the computer illiterates at NASA/USGS and Mosher do not know the difference between Windows ME (Millennium) and Windows 2000 or how to properly set the Path environment variable in Windows 2000.
Thanks for helping me write my article embarrassing you. I’ll make sure everyone reads it.

kadaka (KD Knoebel)
July 22, 2014 7:22 am

Poptech said on July 22, 2014 at 4:31 am:

For future reference,
http://www.populartechnology.net/2014/07/nasa-and-usgs-does-not-know-difference.html

Nah, it ain’t.
http://web.archive.org/web/*/http://www.populartechnology.net/
“This URL has been excluded from the Wayback Machine.”
Since you’re keeping your site from being archived, you can change posts, change dates, and pretend that’s how things always were, even if you make the changes months and years later. Just like SkepSci does, except you do it smarter as SkepSci allows themselves to be archived so the changes might be traceable, and John Cook is not anonymous.

kadaka (KD Knoebel)
July 22, 2014 7:54 am

From Poptech on July 22, 2014 at 6:45 am:

Wrong, those are the proper ways to write applications to run in Windows 2000.

The “proper way” is the only way? How many of your climate science postings have you submitted for peer review?

The difference between incompetent hacks like you and myself, is I know the right way to do things that does not cause other problems.

Yet setting environment variables in autoexec.bat with Win2000 is endorsed by M$. If they really didn’t want it to happen, they only had to remove the capability.
Instead, as I have shown, they have documented how autoexec.bat is used in setting paths. If you don’t want people doing it, why detail how it works?

It is an irrefutable fact that the computer illiterates at NASA/USGS and Mosher do not know the difference between Windows ME (Millennium) and Windows 2000 or how to properly set the Path environment variable in Windows 2000.

It is an irrefutable fact the method will work, as is confirmed by M$.
And now everyone at NASA/USGS is a “computer illiterate”? Your high standards have already slagged off 99.998% of the WUWT readership as “computer illiterates”. Why not add in NWS and IRS and even the VA to your list, and all other government agencies?

Thanks for helping me write my article embarrassing you. I’ll make sure everyone reads it.

With “everyone” loosely defined as you and the rest of your team of mutual back-patters.

kadaka (KD Knoebel)
July 22, 2014 8:12 am

Oh, PS:

The difference between incompetent hacks like you and myself, is…

Perhaps if you were an English major, you might have noticed with your construct you called yourself an incompetent hack, by using the plural. For example, with “The difference between coffee beans like Arabica and Robusta, is…” it is clear that Robusta is also a coffee bean. The correct form uses the singular, “…between an incompetent hack like you…”
I will charitably write that off as a simple grammatical mistake, one an English major shouldn’t make, rather than a Freudian slip.
And as you had selected “myself” you should have used “yourself” as well, or the “you and me” pairing instead.

July 22, 2014 8:17 am

kadaka (KD Knoebel) says:
July 22, 2014 at 7:54 am
The “proper way” is the only way? How many of your climate science postings have you submitted for peer review?

If you care about your application running in a stable and reliable way in Windows and not breaking anything else then yes. Articles from my website have been cited in four peer-reviewed papers. I have submitted none because my articles were never meant to be published in a journal.

Yet setting environment variables in autoexec.bat with Win2000 is endorsed by M$. If they really didn’t want it to happen, they only had to remove the capability.

Quote, where they tell developers to do this. Many things in windows are still around for backwards compatibility and to get crappy code to run, that does not mean you should do it that way. I suggest a proper education in Windows application development.
You continue to miss the main point,
1. Did the authors of the MODIS Reprojection Tool Swath confuse Windows ME (Millennium) with Windows 2000?
2. Did Mosher fail to recognize this elementary fact that any competent and professionally trained programmer would have?

Your high standards have already slagged off 99.998% of the WUWT readership as “computer illiterates”.

No, only people like yourself who argue about subjects they know nothing about. I respect people who do not get involved and do not make fools of themselves like you have.

kadaka (KD Knoebel)
July 22, 2014 9:04 am

From Poptech on July 22, 2014 at 8:17 am:

Articles from my website have been cited in four peer-reviewed papers.

Given the metric tonnes of bovine-processed vegetable matter that we regularly find in peer-reviewed papers around here, and that includes other peer-reviewed papers that are given as references, that’s hardly a sterling endorsement. Besides, a sociology paper might cite a tampon commercial.

You continue to miss the main point,
1. Did the authors of the MODIS Reprojection Tool Swath confuse Windows ME (Millennium) with Windows 2000?
2. Did Mosher fail to recognize this elementary fact that any competent and professionally trained programmer would have?

That’s two points. Which are you now calling the main one?
The point is you say it’s wrong, it wouldn’t work, and I have shown it does work.
You may now return to your anonymous blog, where you can write more nasty anonymous posts, which may be cited in a peer-reviewed sociology paper as perfect examples of the small-minded pettiness and meanness prevalent among the anonymous oil-funded well-organized climate “skeptic” community.

No, only people like yourself who argue about subjects they know nothing about. I respect people who do not get involved and do not make fools of themselves like you have.

Ah, so people who shut up and never say a computer-related word around you are the computer literates, but those who dare to say anything computer-related that you will object to are clearly the computer illiterates. Got it. Thanks for the clarification.

July 22, 2014 9:12 am

None of the citations were in a sociology journal.
You forgot to quote where Microsoft “endorses” telling developers to make changes to the Path Environment Variable in the autoexec.bat file in Windows 2000.
You are dodging the questions:
1. Did the authors of the MODIS Reprojection Tool Swath confuse Windows ME (Millennium) with Windows 2000?
2. Did Mosher fail to recognize this elementary fact that any competent and professionally trained programmer would have?

July 22, 2014 1:29 pm

Avery Harden says:
July 22, 2014 at 11:03 am
=========================
The little subsidy problem is certainly one issue. You took money from someone else to have solar panels.
The environmental damage caused by building solar panels is another: http://voiceofsandiego.org/2009/02/16/the-not-so-sunny-side-of-solar-panels/
http://sinosphere.blogs.nytimes.com/2014/06/02/chinas-solar-panel-production-comes-at-a-dirty-cost/?_php=true&_type=blogs&_r=0
The economics of solar panels on your roof vs. the economics of supplying solar and wind power for industry are like chalk and cheese. You see, if you don’t want industry to stop on a cloudy day or when the wind isn’t blowing, you need backup generation on-line, because starting coal or nuclear (and, I believe, gas) plants takes quite a bit of time. They can’t just be shut down, to be started when the wind speed/sunshine drops below what is needed; they must stay running.
You can make out OK with the solar on your roof, because, as you rightly observe, you have the backup of the utility company. Try cutting that little umbilical and see how you like your solar power.
The engineering doesn’t really add up.
Maybe my realism about “renewable” energy seems alarmist to you because, with all due respect, you don’t seem particularly well informed.

July 22, 2014 3:54 pm

Avery Harden says:
July 22, 2014 at 2:30 pm
Now, if you don’t like the idea of the government providing encouragement in any way to do this, remember that oil, natural gas, coal and roads also get subsidies.
=========================
More propaganda. The state builds roads. Oil, natural gas and coal get the same deductions as any other business — they receive few if any cash subsidies.

kadaka (KD Knoebel)
July 22, 2014 11:34 pm

From Poptech on July 22, 2014 at 9:12 am:

None of the citations were in a sociology journal.

And, as I alluded to, those could be negative citations, not positive ones. You have to look beyond Google-generated metrics. Those could be four citations from using your articles as examples of poorly-sourced biased ignorant dreck.

You forgot to quote where Microsoft “endorses” telling developers to make changes to the Path Environment Variable in the autoexec.bat file in Windows 2000.

Does it matter if it is implicit or explicit? That last reference never did say to not do it. The capability is there, and M$ documented how it works. Programmers naturally follow the path of least effort, and M$ knows it. They knew the capability would be used if offered. That’s endorsement of the method.
You’re really asking for a written recommendation, which M$ wouldn’t give, as per that ancient “best practices” document you dredged up: they preferred no one use them.

1. Did the authors of the MODIS Reprojection Tool Swath confuse Windows ME (Millennium) with Windows 2000?

You cited as “evidence”:
Windows 95/98/2000 users must edit the AUTOEXEC.BAT file to add the path information and set the MRTSWATH_DATA_DIR variable.
and
Windows NT/ME/XP users must edit their user keys to add the MRTSwath PATH, MRTSWATH_HOME, and MRTSWATH_DATA_DIR to the system variables.
Yet that is what works. Win 95/98/2000 users can set those environment variables in autoexec.bat, NT/ME/XP can play with regedit.
The possible fault is “must” as path changes in 2000 can also be done with regedit.
So you were presented with legitimate directions, did not recognize them as legitimate, and proceeded to conclude the fault must be with NASA/USGS, namely getting ME and 2000 confused.
You even went so far as to write up an anonymous smear piece on your anonymous blog to loudly proclaim NASA/USGS and Mosher had all screwed up, you were the one who discovered this obvious truth.
So now for as long as your ego will allow that post to remain up and unchanged, knowledgeable people who read it will know NASA/USGS were right, Mosher was right, and it was all just another case of PEBKAC. Again.

July 23, 2014 1:40 pm

[snip – way off topic this thread is not about Microsoft or software competence. no further responses. -mod]

July 23, 2014 1:50 pm

[snip – off topic – not a thread on software -mod]

July 23, 2014 1:51 pm

[snip – way off topic this thread is not about Microsoft or software competence. no further responses. -mod]

Bullshit, stop preventing me from responding.

Editor
July 23, 2014 4:03 pm

Poptech says:
July 23, 2014 at 1:51 pm

[snip – way off topic this thread is not about Microsoft or software competence. no further responses. -mod]

Bullshit, stop preventing me from responding.

Poptech, we all understand that by your lights, only the brilliant Poptech and his team know about software, Microsoft, programming, and all the rest.
We got your message. None of us measure up. None of us pass the Poptech test. You are the ultimate judge of whether someone is a programmer or not. And in your world, I’m not. Mosher is not. Never mind that I’ve put thousands of lines of code up in support of my work, and you’ve never commented on my code or found a single flaw in my work. Never mind that I programmed my first computer in 1963, when you were likely wearing diapers. Never mind that Mosher has written an entire suite of tools in R so that people can follow the work of Berkeley Earth, and you haven’t found flaws in those either … or perhaps you just can’t program in R, I don’t know. But clearly, on your planet, we’re just bumbling fools pretending to be programmers. Ok, we got it, enough already.
So could you move to a more interesting topic? Seriously, Poptech, it’s gotten really, really old. We know you don’t think we can program. We don’t care. We just continue to program, and you continue to complain … who are the programmers here?
So a change of subject would be in your best interest. You’ve convinced everyone you are going to convince, and at this point the rest of us are just pointing and laughing. Talk about something else for a while, OK? You know, like say … the science?
Thanks in advance,
w.

July 23, 2014 7:05 pm

Poptech, we all understand that by your lights, only the brilliant Poptech and his team know about software, Microsoft, programming, and all the rest. We got your message. None of us measure up. None of us pass the Poptech test. You are the ultimate judge of whether someone is a programmer or not. And in your world, I’m not. Mosher is not.

Nope, neither one of you has ever been professionally trained in computer science, and neither has been professionally employed as a programmer. Mosher does not even know the difference between Windows Millennium and Windows 2000; that is embarrassing, and everyone here who is technically competent knows it. So I don’t have to show any more to prove this.

Never mind that I’ve put thousands of lines of code up in support of my work, and you’ve never commented on my code or found a single flaw in my work. Never mind that I programmed my first computer in 1963, when you were likely wearing diapers.

More BS. I programmed a computer when I was a kid, but I did go around proclaiming to be a professional programmer and misleading everyone. You don’t even know elementary things like how to properly format your code.

Never mind that Mosher has written an entire suite of tools in R so that people can follow the work of Berkeley Earth, and you haven’t found flaws in those either … or perhaps you just can’t program in R, I don’t know. But clearly, on your planet, we’re just bumbling fools pretending to be programmers. Ok, we got it, enough already.

Yawn, I don’t help hacks who do not know what they are talking about. The last thing I am going to do is help make his code better, which is why I have specifically avoided criticizing what is wrong. Any competent programmer can review his work and see he does not know what he is doing. I did a test and asked other professional programmers to review his stuff for competency and they came to the same conclusion I did. Take that however you like. Just remember we are laughing at you.

kadaka (KD Knoebel)
July 24, 2014 12:31 am

From Poptech on July 23, 2014 at 7:05 pm (bold added):

I programmed a computer when I was a kid, but I did go around proclaiming to be a professional programmer and misleading everyone.

And never grew out of it.
Somebody pour some tranny fluid into Poptech, when his mouth goes on automatic there’s Freudian slippage. But not ATF, that’s also full of computer illiterates.

You don’t even know elementary things like how to properly format your code.

If it passes the syntax and other checks, compiles and assembles as applicable, runs as you want and doesn’t misbehave, then it was formatted correctly. Anything else is window dressing. All programmers know that.

Yawn, I don’t help hacks who do not know what they are talking about. The last thing I am going to do is help make his code better, which is why I have specifically avoided criticizing what is wrong.

Excellent strategy, by never venturing forth possible improvements you avoid revealing your own inadequacies, thus never have to suffer programmers showing you how you failed to comprehend how their code works or even failed to understand how the language works.
You have shown you have truly learned from the great pretenders of history and adhere tightly to the “lest you remove all doubt” principle. Oh wait, in this thread you didn’t. Sadness. Oh well, maybe you can find a new blog to baffle with… your professionally educated opinions.

Any competent programmer can review his work and see he does not know what he is doing.

Do you know any?

I did a test and asked other professional programmers to review his stuff for competency and they came to the same conclusion I did.

You had said it was your team that reviewed Mosher’s stuff. This is all boiling down to marketing. You and your team are the only ones you will admit are competent, or even computer literates, as saying otherwise means equivalent expertise could be found elsewhere.
Thus you are a salesman, promoting you and your team above anyone else, and thus a lying sniveling salesman as sufficient computer competency for all but rare niches and/or ancient equipment is easily found throughout the industry. Thus you would deceive your customers into believing your offerings are invaluable, to keep them from finding cheaper and better alternatives.

Just remember we are laughing at you.

Which is a good reason to remain anonymous forever. If your clients would learn you consider all of them to be computer illiterates, complete morons no matter what skills they have, and have repeatedly oversold the quality of you and your team, they would make certain you and your team stop laughing at them and anyone else.

kadaka (KD Knoebel)
July 24, 2014 4:02 am

Found the secret name, perhaps, you be the judge. Much other info to consider.
Poptech uses “Andrew” at his site, lists himself as a “computer analyst”.
Here at this site, which appears to contain potheads, that was expanded to “Andrew K”. From Feb 2014. Reputed spoofing, you’re adults and can judge for yourselves actual identity and possible state of inebriation. Wording on pg 2 of comments is NSFW. I’m torn about including it, could be considered smearing, but it was the first part of the evidence trail I found.
http://www.limboclub.com/forum/threads/populartechnology-net.83268/
There was found a 2007 thread where there was a spirited discussion about “Mastertech” with a “Firefox Myths” site, “Andrew K” with his anti-Firefox anti-Open Source postings at Popular Technology, and how both claimed to be someone else but it was revealed they weren’t. And similarly “Andrew K” was “Andrew”. And “Andrew K” has been banned from forums for spamming, trolling, and also he was using sock puppets as found out by him using the same IP addresses.
http://blog.matthewmiller.net/2007/09/debunking-firefox-myths-page.html?showComment=1191099780000#c5404806598880480514
And that link goes to where he rebuts as Andrew Khan while complaining about potheads, then quickly posts an ‘Oops, typo’ comment as “Andrew K” a minute later.
The link comes from this well-researched post with more about “Andrew Khan”:
http://ipka.wordpress.com/2011/05/11/populartechnology-net-by-andrew-khan/
At the Avast! forum link, said forum reported as having banned Mastertech/Andrew/Andrew K, the following is apparently quoted from Mastertech’s profile, to verify you’d have to register. This is from 2007, I’d say it pretty well matches what we know about “Poptech”.

Andrew K. has been using computers for over 25 years starting with the TI-99/4A back in 1981. For over 15 years he has been helping people solve their PC problems. Over the years he has held various IT level positions including Helpdesk Support, Technician, Technical Service Manager and OEM Branch Manager which included other duties such as Sales and Marketing. He has an extensive knowledge of DOS, Windows 3.x, 95, 98, ME, NT, 2000, XP and Vista. Being A+ and Dell certified he has supported thousands of clients over the years including end users, educational institutions, governmental organizations and small to medium sized businesses. At last estimate he has taken 15,000+ support calls and worked on and assembled over 5000+ systems. His extensive technical knowledge and personal customer related experience has allowed him to seamlessly transfer his knowledge online in a clear and concise way. Computers are not Andrew K’s hobby, they are his job.

Sure sounds like someone who’d lay claim to unparalleled vast computer knowledge with tons of Windoze minutiae, looks down on virtually everyone else as “computer illiterates” who haven’t a clue, yet won’t even show they themselves can program “Hello, world!”
This is what I’ve found. You’re all intelligent rational people, decide for yourselves what you want to accept, including if Poptech is Andrew Khan, or perhaps the “accidental” reveal was merely planned misdirection. Search for yourself.
Although I’m finding Poptech being burned-out tech support who sneers at everyone else for being incompetent computer illiterates to be completely believable.

July 24, 2014 5:46 am

kadaka (KD Knoebel) says:
July 22, 2014 at 11:34 pm
From Poptech on July 22, 2014 at 9:12 am:
None of the citations were in a sociology journal.
[snip . . OT . . mod]

July 24, 2014 6:02 am

Kadaka why do you have to hide behind the moderators?

kadaka (KD Knoebel) says:
July 24, 2014 at 12:31 am
[snip . . OT . . mod]

July 24, 2014 6:44 am

Kadaka this is old and you are not even original.

kadaka (KD Knoebel) says:
July 24, 2014 at 4:02 am
Found the secret name, perhaps, you be the judge. Much other info to consider.
Poptech uses “Andrew” at his site, lists himself as a “computer analyst”.

Did you just learn how to read?

Here at this site, which appears to contain potheads, that was expanded to “Andrew K”. From Feb 2014. Reputed spoofing, you’re adults and can judge for yourselves actual identity and possible state of inebriation. Wording on pg 2 of comments is NSFW. I’m torn about including it, could be considered smearing, but it was the first part of the evidence trail I found.
http://www.limboclub.com/forum/threads/populartechnology-net.83268/
That is libel and you are one sick puppy. A demented user named “MrCharisma” from the Big Footy (Aussie Rugby) forums, after losing a debate with me, made a fake account on that site using my screen name. I commented here but nothing was done about it.

There was found a 2007 thread where there was a spirited discussion about “Mastertech” with a “Firefox Myths” site, “Andrew K” with his anti-Firefox anti-Open Source postings at Popular Technology, and how both claimed to be someone else but it was revealed they weren’t. And similarly “Andrew K” was “Andrew”. And “Andrew K” has been banned from forums for spamming, trolling, and also he was using sock puppets as found out by him using the same IP addresses.
http://blog.matthewmiller.net/2007/09/debunking-firefox-myths-page.html?showComment=1191099780000#c5404806598880480514

LMAO you are truly incompetent. More like, after I made a post about Firefox Fanboys the fanboys got mad, http://www.populartechnology.net/2005/01/firefox-new-religion.html
I have never used sock puppets. I dare you to find a single reputable site and prove me wrong. I always comment under Poptech, PT or some obvious variation of it.

And that link goes to where he rebuts as Andrew Khan while complaining about potheads, then quickly posts an ‘Oops, typo’ comment as “Andrew K” a minute later.
The link comes from this well-researched post with more about “Andrew Khan”:
http://ipka.wordpress.com/2011/05/11/populartechnology-net-by-andrew-khan/

You are now my puppet.
IPKA is a blog for an admitted Internet stalker that was started after he was banned from the Ron Paul forums for being, “a useless, annoying troll”.
“Andrew can shut up if he wishes not to be …followed or stalked.” – Bud [IPKA]
“I’m a real life stalker too, you just think I’m an internet stalker because you only see my online.” – Bud [IPKA]
“…can’t stalk you [Poptech] if you shut the f#ck up, so as long as you speak, you’ll be followed.” – Bud [IPKA]
“Bud” is a sockpuppet for “WaltM” and his blog IPKA. “WaltM” [IPKA] was so much of a lunatic he was banned from the Ron Paul forums.
“The guy [WaltM] is a useless, annoying troll, whether he realizes it or not.” – Ron Paul Forums
He has had a problem with me after I suggested he get a lobotomy.
This lunatic has compared me to a cop killer, http://ipka.wordpress.com/2014/07/02/james-sapp-was-a-global-warming-denier-just-like-andrew-of-populartechnology-net/
and a child molester, http://ipka.wordpress.com/2012/09/16/populartechnologys-latest-smear-against-skepticalscience-sks-al-jazeera-what-next-homosexual-womanizer-child-molester/
But this is where you go to get your “well researched information”. You are a reprehensible disgrace.

At the Avast! forum link, said forum reported as having banned Mastertech/Andrew/Andrew K, the following is apparently quoted from Mastertech’s profile, to verify you’d have to register. This is from 2007, I’d say it pretty well matches what we know about “Poptech”.
Andrew K. has been using computers for over 25 years starting with the TI-99/4A back in 1981. For over 15 years he has been helping people solve their PC problems. Over the years he has held various IT level positions including Helpdesk Support, Technician, Technical Service Manager and OEM Branch Manager which included other duties such as Sales and Marketing. He has an extensive knowledge of DOS, Windows 3.x, 95, 98, ME, NT, 2000, XP and Vista. Being A+ and Dell certified he has supported thousands of clients over the years including end users, educational institutions, governmental organizations and small to medium sized businesses. At last estimate he has taken 15,000+ support calls and worked on and assembled over 5000+ systems. His extensive technical knowledge and personal customer related experience has allowed him to seamlessly transfer his knowledge online in a clear and concise way. Computers are not Andrew K’s hobby, they are his job.
Sure sounds like someone who’d lay claim to unparalleled vast computer knowledge with tons of Windoze minutiae, looks down on virtually everyone else as “computer illiterates” who haven’t a clue, yet won’t even show they themselves can program “Hello, world!”
This is what I’ve found. You’re all intelligent rational people, decide for yourselves what you want to accept, including if Poptech is Andrew Khan, or perhaps the “accidental” reveal was merely planned misdirection. Search for yourself.
Although I’m finding Poptech being burned-out tech support who sneers at everyone else for being incompetent computer illiterates to be completely believable.

ROFLMAO, not even close. Popular Technology.net has been up since 2004, you computer illiterate. Do you need an education in how to use Google to do proper research?
Hello World, LMAO. Unlike you, I was the lead coder on a game that was featured in things like PC Gamer Magazine.

July 24, 2014 6:47 am

Kadaka why do you have to hide behind the moderators?

kadaka (KD Knoebel) says:
July 24, 2014 at 12:31 am
Somebody pour some tranny fluid into Poptech, when his mouth goes on automatic there’s Freudian slippage. But not ATF, that’s also full of computer illiterates.

This is why I only use real commenting software that has preview and editing features. The WordPress commenting system is garbage and always has been. It is the equivalent of running dial-up Internet today.

If it passes the syntax and other checks, compiles and assembles as applicable, runs as you want and doesn’t misbehave, then it was formatted correctly. Anything else is window dressing. All programmers know that.

Says all hacks who are not professionally employed as software developers.

Excellent strategy, by never venturing forth possible improvements you avoid revealing your own inadequacies, thus never have to suffer programmers showing you how you failed to comprehend how their code works or even failed to understand how the language works.

I don’t teach hacks either. R is essentially a glorified scripting language with just enough non-scripted elements to fool people into thinking they are really programming.

You had said it was your team that reviewed Mosher’s stuff. This is all boiling down to marketing. You and your team are the only ones you will admit are competent, or even computer literates, as saying otherwise means equivalent expertise could be found elsewhere.

There are a few other competent people that comment here but none of them are you.

Thus you are a salesman, promoting you and your team above anyone else, and thus a lying sniveling salesman as sufficient computer competency for all but rare niches and/or ancient equipment is easily found throughout the industry. Thus you would deceive your customers into believing your offerings are invaluable, to keep them from finding cheaper and better alternatives.

Keep this in mind, you have no idea who I am or what I do. Yes, we are still laughing at you.

Which is a good reason to remain anonymous forever. If your clients would learn you consider all of them to be computer illiterates, complete morons no matter what skills they have, and have repeatedly oversold the quality of you and your team, they would make certain you and your team stop laughing at them and anyone else.

I only consider hacks and bullshit artists like yourself computer illiterates. Those wise enough not to embarrass themselves by pretending to be someone they are not, I have no problem with.

July 24, 2014 6:47 am

kadaka (KD Knoebel) says:
July 22, 2014 at 11:34 pm
From Poptech on July 22, 2014 at 9:12 am:
None of the citations were in a sociology journal.
And, as I alluded to, those could be negative citations, not positive ones. You have to look beyond Google-generated metrics. Those could be four citations from using your articles as examples of poorly-sourced biased ignorant dreck.

Wrong, none of them were negative, every one was positive, and nothing had to do with a “Google-generated metric,” whatever that is.

Does it matter if it is implicit or explicit? That last reference never did say to not do it. The capability is there, and M$ documented how it works. Programmers naturally follow the path of least effort, and M$ knows it. They knew the capability would be used if offered. That’s endorsement of the method.

Quote where Microsoft “endorses” telling developers to make changes to the Path Environment Variable in the autoexec.bat file in Windows 2000. Put up or shut up.

You cited as “evidence”:
Windows 95/98/2000 users must edit the AUTOEXEC.BAT file to add the path information and set the MRTSWATH_DATA_DIR variable.
and
Windows NT/ME/XP users must edit their user keys to add the MRTSwath PATH, MRTSWATH_HOME, and MRTSWATH_DATA_DIR to the system variables.
Yet that is what works. Win 95/98/2000 users can set those environment variables in autoexec.bat, NT/ME/XP can play with regedit.
The possible fault is “must” as path changes in 2000 can also be done with regedit.

So you are in abject denial of irrefutable evidence that they confused Windows ME with Windows 2000? I have never seen someone lie like this before. Windows ME, NT and XP all parse the autoexec.bat file too, for legacy compatibility reasons, genius; that does not make it the correct way to set the Path environment variable in any of them. It is irrefutable these computer illiterate hacks confused Windows ME with Windows 2000.

So you were presented with legitimate directions, did not recognize them as legitimate, and proceeded to conclude the fault must be with NASA/USGS, namely getting ME and 2000 confused.

They were not legitimate directions but a computer illiterate hack screw up that is embarrassingly bad. Legitimate directions would be to set the Path environment variable the correct way in Windows 2000.

You even went so far as to write up an anonymous smear piece on your anonymous blog to loudly proclaim NASA/USGS and Mosher had all screwed up, you were the one who discovered this obvious truth.
So now for as long as your ego will allow that post to remain up and unchanged, knowledgeable people who read it will know NASA/USGS were right, Mosher was right, and it was all just another case of PEBKAC. Again.

They did and it is embarrassingly bad. Keep lying you computer illiterate hack,
http://msdn.microsoft.com/en-us/library/ms954115.aspx
Chapter 1. Windows 2000 Fundamentals
Requirements
5. Do not read from or write to Win.ini, System.ini, Autoexec.bat or Config.sys
Your application must not read from or write to Win.ini, System.ini, Autoexec.bat, or Config.sys. These files are not used by Windows 2000 systems.
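For readers trying to follow the underlying technical dispute, the two mechanisms being argued over look roughly like this. The MRTSwath paths below are illustrative only, not taken from the actual install guide:

```bat
REM Legacy approach (DOS / Windows 9x): append lines like these to C:\AUTOEXEC.BAT
REM (directory names here are hypothetical examples)
SET PATH=%PATH%;C:\MRTSwath\bin
SET MRTSWATH_DATA_DIR=C:\MRTSwath\data
```

On the NT line (NT/2000/XP), environment variables are instead stored in the registry, under HKEY_CURRENT_USER\Environment for user variables and HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Environment for system variables, and are normally edited through System Properties → Advanced → Environment Variables rather than by hand, which is the crux of the disagreement over where Windows 2000 belongs.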

July 24, 2014 6:48 am

I highly suggest the moderators let me respond.

July 24, 2014 6:50 am

I will do whatever is necessary to make sure my responses are seen here.

July 24, 2014 6:53 am

Kadaka how does it feel knowing you cannot debate me without the bitch moderators deleting my responses? Why are you such a coward tough guy?

kadaka (KD Knoebel)
July 24, 2014 7:00 am

From Poptech on July 24, 2014 at 6:02 am:

Kadaka why do you have to hide behind the moderators?

I am in no way whatsoever a moderator, manager, or owner of this blog. You might as well be the Sun asking me why I hide beneath the clouds.

July 24, 2014 7:00 am

If my comment at July 24, 2014 at 6:45 am is snipped I go to Defcon 2.

Reply to  Poptech
July 24, 2014 7:53 am

This thread is closed as it has turned into a Poptech thread about software.

kadaka (KD Knoebel)
July 24, 2014 8:56 am

From Avery Harden on July 23, 2014 at 11:04 am:

All things considered, it is obvious that photovoltaic electricity generation is a great technology. Sure there are lots of problems to be ironed out and nothing is ever perfect.

Many people are waiting for plug-and-play panels. The electronics are good enough, with micro-inverters sized for a panel, that you could prop one up on your deck or yard and just plug it into an outdoor wall socket.
http://plugandplaysolarkits.com/product/
However, they may be illegal as all hell. There’s the very real possibility of backfeeding from home power generation. A utility crew shuts down the high voltage line for work, but some idiot has a generator improperly hooked directly to his house wiring. Lights go out, the generator automatically kicks in, the house voltage goes out to the transformer, and a lineman gets fried by the resulting high voltage.
While modern grid-tie inverters have automatic cutouts that shut them down when no line voltage is coming in, so they avoid sending out current, that’s not good enough. The utility wants shut-off boxes outside, with locks, so crews can go around and cut individual sources off from the grid if they need to.
That site I linked to mentioned the 30% federal tax credit. But to get it you need an above-board installation with whatever permits and inspections the utility and anyone else wants. Which will be a pricey hassle.
So you can see the great wisdom in naming their product “Plug & Play Stealth 2.0” like you can sneak them onto your property, make your own current, and no one official has to know. It’s just a temporary installation, right?
They say the unit can make up to 216W in the example on the main page, which is why this Amazon listing says 250W, and also no contractors and permits. The site’s “Buy now” page clarifies it’s a 250W panel with a 240W inverter.
First unit is $1,147.95, which includes a wireless digital monitor. It’ll take up to five add-on units at $50 less for each, sans monitor as the first can handle the add-ons.
Meanwhile Home Depot sells a 250W panel for $375, which is about $100 more than the high-volume online stores. I’m pretty sure I could build a wood frame, grab a micro-inverter and any other wiring needed, and put together a “plug and play” system for under $600, possibly $500.
Now if an acceptable shut-off would be them unplugging a unit and locking a box around the plug (good enough for OSHA) and all the legality involved was that the utility knew where you plugged in and could do that quickly, wouldn’t solar look a whole lot better to a lot more people, even without tax credits?
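The price figures quoted above can be turned into a rough cost-per-watt comparison. A minimal sketch, using only the numbers in this comment (the $600 DIY figure is the commenter’s own estimate, not a vendor price):

```python
# Rough cost-per-watt comparison using the figures quoted in the comment.
# All prices are the comment's own numbers, not official vendor quotes.

def cost_per_watt(price_usd, watts):
    """Dollars per rated watt of panel capacity."""
    return price_usd / watts

# Plug & Play Stealth 2.0: first unit $1,147.95, 250 W panel / 240 W inverter.
kit_first = cost_per_watt(1147.95, 250)

# Add-on units are $50 less each (no monitor included).
kit_addon = cost_per_watt(1147.95 - 50.00, 250)

# DIY estimate from the comment: ~$600 for a 250 W panel plus frame,
# micro-inverter, and wiring.
diy = cost_per_watt(600.00, 250)

print(f"kit (first unit): ${kit_first:.2f}/W")
print(f"kit (add-on):     ${kit_addon:.2f}/W")
print(f"DIY estimate:     ${diy:.2f}/W")
```

On these numbers the first kit unit works out to about $4.59 per rated watt versus roughly $2.40 for the DIY estimate, which is the gap the comment is pointing at.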