Where Consensus Fails – The Science Cannot Be Called 'Settled'

[Image: Euro_shark_consensus_3.jpg, from Sharkforum.org]

Guest Post by Thomas Fuller

Dennis Bray and Hans von Storch have just published the findings of a survey of practicing climate scientists. The survey, conducted in 2008 with 379 climate scientists who had published papers or were employed in climate research institutes, dealt with their confidence in models, the IPCC, and a variety of other topics. The survey findings are here: http://coast.gkss.de/staff/storch/pdf/GKSS_2010_9.CLISCI.pdf

Most of the questions used a Likert scale, familiar to anyone who has filled out one of the numerous online surveys found on almost any website. “A set of statements was presented to which the respondent was asked to indicate his or her level of agreement or disagreement, for example, 1 = strongly agree, 7 = strongly disagree.

The value of 4 can be considered as an expression of ambivalence or impartiality or, depending on the nature of the question posed, for example, in a question posed as a subjective rating such as “How much do you think climate scientists are aware of the information that policy makers incorporate into their decision making process?”, a value of 4 is no longer a measure of ambivalence, but rather a metric.”

The total number of respondents is large enough to make statistically meaningful statements about the population of similarly qualified climate scientists, and the response rate to the invitations is in line with surveys conducted among academics and professionals. What that means is that we can be fairly confident that if we conducted a census of all such scientists, the answers would not be very different from what is found in the survey’s findings.
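As a rough illustration of that confidence claim, the worst-case 95% margin of error for a sample of this size can be sketched in a few lines. This assumes simple random sampling, which a targeted email survey only approximates:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for a proportion at ~95% confidence.
    Assumes simple random sampling; p = 0.5 maximizes the error."""
    return z * math.sqrt(p * (1 - p) / n)

# For the survey's 379 respondents:
moe = margin_of_error(379)
print(f"±{moe * 100:.1f} percentage points")  # ±5.0 percentage points
```

So a reported percentage like 56% would carry an uncertainty band of roughly five points either way, before accounting for any non-response bias.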

Typically in a commercial survey, analysts would group the top two responses and report on the percentages of respondents that ticked box 6 or 7 on this scale. Using that procedure here makes it clear that there are areas where scientists are not completely confident in what is being preached–and that they don’t like some of the preachers. In fact, let’s start with the opinion of climate scientists about those scientists, journalists and environmental activists who present extreme accounts of catastrophic impacts.
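A minimal sketch of that top-two-box grouping, using invented responses on the survey’s 1-to-7 scale (the sample data below is hypothetical, not taken from the survey):

```python
from collections import Counter

def top_two_box(responses, scale_max=7):
    """Share of respondents choosing the top two points of a Likert scale."""
    counts = Counter(responses)
    return (counts[scale_max] + counts[scale_max - 1]) / len(responses)

# Hypothetical responses on a 1-7 agreement scale:
sample = [7, 6, 4, 3, 6, 2, 7, 5, 1, 6]
print(f"{top_two_box(sample):.0%}")  # 50%
```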

The survey’s question read, “Some scientists present extreme accounts of catastrophic impacts related to climate change in a popular format with the claim that it is their task to alert the public. How much do you agree with this practice?”

Less than 5% agreed strongly or very strongly with this practice. Actually 56% disagreed strongly or very strongly. Joe Romm, Tim Lambert, Michael Tobis–are you listening? The scientists don’t like what you are doing.

And not because they are skeptics–these scientists are very mainstream in their opinions about climate science and are strong supporters of the IPCC. Fifty-nine percent (59%) agreed or strongly agreed with the statement, “The IPCC reports are of great use to the advancement of climate science.” Only 6% disagreed. And 86.5% agreed or strongly agreed that “climate change is occurring now” and 66.5% agreed or strongly agreed that future climate “will be a result of anthropogenic causes.”

Even so, there are areas of climate science that some people want to claim are settled, but where scientists don’t agree.

Only 12% agree or strongly agree that data availability for climate change analysis is adequate. More than 21% disagree or strongly disagree.

Only 25% agree or strongly agree that “Data collection efforts are currently adequate,” while 16% disagree or strongly disagree.

Perhaps most importantly, only 17.75% agree or strongly agree with the statement, “The state of theoretical understanding of climate change phenomena is adequate.” An equal percentage disagreed or strongly disagreed.

Only 22% think atmospheric models deal with hydrodynamics in a manner that is adequate or very adequate. Thirty percent (30%) feel that way about atmospheric models’ treatment of radiation, and only 9% feel that atmospheric models are adequate in their treatment of water vapor–and not one respondent felt that they were ‘very adequate.’

And only 1% felt that atmospheric models dealt well with clouds, while 46% felt they were inadequate or very inadequate. Only 2% felt the models dealt adequately with precipitation, and 3.5% felt that way about modeled treatment of atmospheric convection.

For ocean models, the lack of consensus continued. Only 20% felt ocean models dealt well with hydrodynamics, 11% felt that way about modeled treatment of heat transport in the ocean, 6.5% felt that way about oceanic convection, and only 12% felt that there exists an adequate ability to couple atmospheric and ocean models.

Only 7% agree or strongly agree that “The current state of scientific knowledge is developed well enough to allow for a reasonable assessment of the effects of turbulence,” and only 26% felt that way about surface albedo. Only 8% felt that way about land surface processes, and only 11% about sea ice.

And another shocker–only 32% agreed or strongly agreed that the current state of scientific knowledge is developed well enough to allow for a reasonable assessment of the effects of greenhouse gases emitted from anthropogenic sources.

As Judith Curry has been noting over at her weblog, there is considerable uncertainty regarding the building blocks of climate science. The scientists know this. The politicians, propagandists and the converted acolytes haven’t gotten the message. If this survey does not educate them, nothing will.

Thomas Fuller http://www.redbubble.com/people/hfuller

129 Comments
John Whitman
September 26, 2010 3:06 pm

Tom Fuller,
This light stuff is near the end of its journalistic shelf life. Right?
Please get down to real talks with the scientists in the trenches. It is climate science sausage making time.
If you can’t get them to trust you then you’ve just been “tripping the light fan-dango” here.
John

Hank Hancock
September 26, 2010 5:03 pm

EFS_Junior says:
September 26, 2010 at 12:20 pm
I’ve posted links to the accuracy of these types of surveys, and we know a priori that these types of surveys, targeted at a very select subset of “climate scientists,” will not be as accurate, and that meaningful statistics can’t be, and should never be, derived from such a priori biases going into these types of surveys. We need a control group; where’s the control group?

A priori what? It’s an adjective, not a noun. Of course the study targets a select demographic, or subset as you call it. The study’s results are clearly in context to that demographic and not intended to be representative of a more general population. Regarding a priori bias, I don’t see where the questions make pre-existing assumptions, ignore fact, exclude alternative explanations, or fail to challenge theory. If you do, please point out the specific questions that do so and explain why.
I agree that meaningful inferential statistics aren’t built on a priori knowledge, if that’s what you’re saying. However, the purpose of the survey is to infer opinion or sediment of the sample group. The target audience (those of us concerned with their opinion) is more interested in where the majority of responses center. A scale of minimum graduation, as used in this study, makes perfect sense.
A control group is used mostly in clinical or quantitative studies whereas this would be classified as a demographic or qualitative study. As such, no control group is required. Besides, how do you control for opinion? What would the demographics of the control group look like? How would you justify such demographic as controlling for the study group? I hope you can see where your criticisms of the study methodologies really don’t make much sense.

Gneiss
September 26, 2010 5:35 pm

Thomas Fuller writes,
“Even so, there are areas of climate science that some people want to claim is settled, but where scientists don’t agree.”
All the scientists I read, in every field, agree that they need more data. It’s silly to call that a lack of consensus.

Gneiss
September 26, 2010 5:53 pm

Whether sample results can be generalized to a larger population depends on the representativeness of the sample, not its size. A large sample can still be quite biased. The usual significance tests or confidence intervals give no protection against biased sampling.
Questions can be asked in such a way as to bias the answers, as well. I was struck by this one, a good example of what honest pollsters mostly try not to do:
“Some scientists present extreme accounts of catastrophic impacts related to climate change in a popular format with the claim that it is their task to alert the public. How much do you agree with this practice?”
The first sentence asks readers to accept a premise described with loaded words, “extreme,” “catastrophic,” “claim.” If I encountered that question on a poll I would see where the pollster was trying to push me, and probably give the answer he doesn’t want, just to push back. It looks like other folks had that reaction, as well.

Enneagram
September 26, 2010 5:57 pm

Where polls replace science, chaos replaces ethics.

EFS_Junior
September 26, 2010 6:11 pm

Hank Hancock says:
September 26, 2010 at 5:03 pm
EFS_Junior says:
September 26, 2010 at 12:20 pm
I’ve posted links to the accuracy of these types of surveys, and we know a priori that these types of surveys, targeted at a very select subset of “climate scientists,” will not be as accurate, and that meaningful statistics can’t be, and should never be, derived from such a priori biases going into these types of surveys. We need a control group; where’s the control group?
A priori what? It’s an adjective, not a noun. Of course the study targets a select demographic, or subset as you call it. The study’s results are clearly in context to that demographic and not intended to be representative of a more general population. Regarding a priori bias, I don’t see where the questions make pre-existing assumptions, ignore fact, exclude alternative explanations, or fail to challenge theory. If you do, please point out the specific questions that do so and explain why.
I agree that meaningful inferential statistics aren’t built on a priori knowledge, if that’s what you’re saying. However, the purpose of the survey is to infer opinion or sediment of the sample group. The target audience (those of us concerned with their opinion) is more interested in where the majority of responses center. A scale of minimum graduation, as used in this study, makes perfect sense.
A control group is used mostly in clinical or quantitative studies whereas this would be classified as a demographic or qualitative study. As such, no control group is required. Besides, how do you control for opinion? What would the demographics of the control group look like? How would you justify such demographic as controlling for the study group? I hope you can see where your criticisms of the study methodologies really don’t make much sense.
_____________________________________________________________
I’ve posted all I need to post on this subject matter; the dubious quality of these types of surveys exists purely on the grounds of their type category. Therefore and forthwith, a priori in nature and with good potential for grossly spurious outcomes.
Meaningful statistical analyses cannot strictly be applied to these types of surveys since they are not truly “random” samples of the entire climate scientist population. A control is necessary and necessarily absent in these types of surveys; can’t be helped, they are what they are.
The authors themselves readily admit to the survey type in their own words, yet offer no reasoning for, or defense of, the survey method, except for their “ease of use” disclaimer.
I’ve also argued persuasively about the deeply flawed demographics subdivision, particularly when asked of a largely unknown demographic with one hard science category to choose from vs. several soft science (or otherwise) groups. In other words, the demographics lack sufficient granularity to, for instance, distinguish numerical climate science developers/modelers and end users, climate science theoreticians, experimentalists, observationalists; the list goes on and on.
Sediment? Sentiment.
You have the entire population (the obvious control group), and you have the sampled population (again not randomly selected in these BS surveys, ergo the basic flaw of these types of surveys, particularly when there is a very specific target population as we would have here, for obvious reasons).
A sample size of only 375 when there must be tens of thousands of practicing climate scientists worldwide; well, there are statistics for this (e.g. MOE), when the sample is truly random, definitely not the case here.
Also what muddies the waters somewhat is the selectable scale range of 1 thru 7 (most studies I’ve seen use 5, but if you are going to use 7, then why not 9, or why not 11, or why not …), which further increases MOE statistics considering the small non-random sample size (e.g. 375/7 ~ 54 for a uniform distribution (and yes, I realize that the distributions are not uniform)).
The IPCC AR5 is just a few years away, now that would be the most opportunistic time to capture thousands of climate scientists in the heat of battle, as it were, a very well defined control group if you ask me.
Until then, the best we can do is to rely on many existing sources, in addition to these BS surveys, as for example shown here (BS 2008 is included in this list BTW);
http://en.wikipedia.org/wiki/Scientific_opinion_on_climate_change
“This article is about scientific opinion on climate change. For recent climate change generally, see Global warming. For debate on scientific consensus, see Climate change consensus. For opinions of individual dissenting scientists, see List of scientists opposing the mainstream scientific assessment of global warming.”
Take it, or leave it, but it is the best composite we have to date, to judge the overall scientific opinion on climate change. All encompassing, no cherry picking allowed. BS is just one snapshot from very many snapshots. Some snapshots are inherently better than others, and IMHO BS is not one of them. An opinion on opinion polling methodologies. Go figure.
My final words on this matter, you get the final say, as I’ve said all that has been needed to say on this matter at this point ad infinitum, ad nauseam.

Tom Fuller
September 26, 2010 6:14 pm

Hey there, John,
If you don’t like what I’m peddling, this street fair has plenty of stalls.

Neo
September 26, 2010 6:20 pm

Barnhardt: Tell me, Hilda, does all this frighten you? Does it make you feel insecure?
Hilda: Yes, sir, it certainly does.
Barnhardt: That’s good, Hilda. I’m glad.

EFS_Junior
September 26, 2010 6:31 pm

Gneiss says:
September 26, 2010 at 5:53 pm
Whether sample results can be generalized to a larger population depends on the representativeness of the sample, not its size. A large sample can still be quite biased. The usual significance tests or confidence intervals give no protection against biased sampling.
Questions can be asked in such a way as to bias the answers, as well. I was struck by this one, a good example of what honest pollsters mostly try not to do:
“Some scientists present extreme accounts of catastrophic impacts related to climate change in a popular format with the claim that it is their task to alert the public. How much do you agree with this practice?”
The first sentence asks readers to accept a premise described with loaded words, “extreme,” “catastrophic,” “claim.” If I encountered that question on a poll I would see where the pollster was trying to push me, and probably give the answer he doesn’t want, just to push back. It looks like other folks had that reaction, as well.
_____________________________________________________________
Exactly!
I’ve never taken a phone poll, I’ve only occasionally taken written ones.
But not before reading the entire questionnaire first, or twice even.
Are there weasel words?
Are there leading questions?
If the questions appear to be random in nature, don’t answer the questionnaire; or, since they appear to want to hide the real questions within a larger set of random questions, bring some dice or a coin to the poll and give them random answers for all questions.
On polls I always choose the lowest or highest number available, or I answer all questions in just the opposite way from would be my actual opinion.
In other words, mess with their heads, don’t let them mess with your head.
And therein, you have the self selection process from the other side, those who choose to answer a targeted survey, and those who choose to not answer a targeted survey.
By my reckoning, 1685 of 2058 chose not to answer the survey, a non-response rate of 81.9%.

hmccard
September 26, 2010 7:13 pm

The results of the CliSci2008 survey are interesting. Bray and von Storch (BS) utilized a Likert scale (1 to 7) in their survey and a bucket graph to portray the degree of consensus among the responses to the ~100 questions in the survey. BS reported the average Likert Score (LS) and the associated STDEV for each score in their article.
I did not find the bucket graphs very helpful in understanding the degree of consensus among the respondees. Therefore, I 1) normalized the STDEV of the responses to the questions and then 2) rank-ordered the results. The average of the STDEVs is 1.346 and the STDEV about that average is 0.167.
The five questions with the highest degree of consensus, i.e., lowest normalized deviation wrt 1.346, are:
1. Q87 The IPCC reports tend to under estimate, accurately reflect (a value of 4) or over estimate the magnitude of future changes to temperature:
• under estimate 1 2 3 4 5 6 7 over estimate
• LS = 4.00559
• Normalized deviation = -2.66163
2. Q79 The IPCC reports tend to under estimate, accurately reflect (a value of 4) or over estimate the magnitude of future changes to temperature:
• Not at all 1 2 3 4 5 6 7 very much
• LS = 3.98619
• Normalized deviation = -2.48325
3. Q56 How convinced are you that climate change, whether natural or anthropogenic, is occurring now?
• Not at all 1 2 3 4 5 6 7 Very much
• LS = 6.44474
• Normalized deviation = -2.20425
4. Q88 The IPCC reports tend to under estimate, accurately reflect (a value of 4) or over estimate the magnitude of future changes to precipitation:
• under estimate 1 2 3 4 5 6 7 over estimate
• LS = 3.83662
• Normalized deviation = -2.20254
5. Q70 In making policy decisions about adaptation to climate change, priority should be given to
• Industry and commerce 1 2 3 4 5 6 7 scientific Expertise
• LS = 5.33060
• Normalized deviation = -1.55093
The five questions with the lowest degree of consensus, i.e., highest normalized deviation wrt 1.346, are:
6. Q12 To what degree do you think climate science has remained a value-neutral science?
• Not at all 1 2 3 4 5 6 7 A great deal
• LS = 3.96226
• Normalized deviation = 1.58676
7. Q65 The potential that climate change might have some positive effects for the country in which you live is
• Very low 1 2 3 4 5 6 7 very high
• LS = 3.88859
• Normalized deviation = 1.76123
8. Q116 There is a great need for immediate policy decisions for immediate action to mitigate climate change
• Strongly disagree 1 2 3 4 5 6 7 Strongly agree
• LS = 5.505040
• Normalized deviation = 1.84350
9. Q110 Making discussions of climate science open to potentially everyone through the use of blogs on the w.w.w is
• A very bad idea 1 2 3 4 5 6 7 A very good idea
• LS = 4.58197
• Normalized deviation = 1.92996
10. Q62 If we do not do anything towards adaptation or mitigation, the potential for catastrophe resulting from climate change for other parts of the world :
• Very Low 1 2 3 4 5 6 7 Very high
• LS = 4.61351
• Normalized deviation = 2.17793
I would consider the following ranges of normalized deviation wrt 1.346 to be reasonable for classifying the degree of consensus:
• Very low: less than −2.5
• Low: −1.5 to −0.5
• Medium: −0.5 to +0.5
• High: +1.5 to +2.5
• Very high: more than +2.5
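The normalization described above amounts to z-scoring each question’s response STDEV against the spread of all the questions’ STDEVs. A minimal sketch, with hypothetical per-question STDEVs standing in for the survey’s actual values:

```python
from statistics import mean, stdev

def normalized_deviations(question_stdevs):
    """Z-score each question's response STDEV against the set of all
    question STDEVs; lower (more negative) values mean stronger consensus."""
    m = mean(question_stdevs.values())
    s = stdev(question_stdevs.values())
    scored = {q: (sd - m) / s for q, sd in question_stdevs.items()}
    return sorted(scored.items(), key=lambda kv: kv[1])

# Hypothetical STDEVs, not the survey's actual values:
demo = {"Q87": 0.90, "Q56": 1.00, "Q110": 1.35, "Q12": 1.60, "Q62": 1.70}
for question, z in normalized_deviations(demo):
    print(question, round(z, 2))
```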

Policyguy
September 26, 2010 8:03 pm

EFS_Junior says:
September 25, 2010 at 6:16 pm
It’s difficult to understand the basis of the various rants you have indulged in on this topic. It looks to be a personal matter of viewpoint.
Don’t know when you last checked, but the Union of Concerned Scientists has become one of the most politicised special interest groups we have. If you are trying to portray yourself as balanced and thoughtful, you might want to look into who you are identifying yourself with. Probably a good group made up of well meaning people, but the group is very well known for its strongly held biases that it doesn’t hesitate to advocate very publicly.

Robert
September 26, 2010 8:08 pm

Tom Fuller,
I do think you should point out that this is also considerable evidence that this survey was contaminated and passed along through a climate skeptic web-list
Deltoid even confirmed this years ago.
http://scienceblogs.com/deltoid/2005/05/bray.php

September 26, 2010 8:12 pm

Accelerating government grants, endless expense-paid trips to fun vacation spots where they scheme to bilk the public even more, rather than getting any science accomplished, and notoriety for people who have always been seen as nerdy geeks, and piles of NGO money being passed under the table, and to Soros’ shills like Joe Romm, all make surveys like this unreliable.
Want straight answers? Ask these same questions of the rank-and-file scientists who haven’t been corrupted by the climate pal review process, and who haven’t been invited to the COP-1 – 15 jaunts around the world at taxpayer and NGO expense, and who haven’t been part of the UN/IPCC’s self-serving scare machine — and who have been retired for at least a year, so they are no longer required to repeat the Party line in order to assure their next pay raise or promotion.

Tom Fuller
September 26, 2010 9:06 pm

Robert, you do understand the difference between 2003 and 2008, right? If so, let’s continue. Despite the fact that Deltoid is passionately attacking a different survey, and despite the fact that Deltoid and its administrator, Tim Lambert, would cheerfully call white black if it helped spread alarmist fever, let’s continue.
This survey only allowed one response per emailed invitation. Repeat responses over-wrote the previous version. Invitations were emailed to a pre-selected sample of climate scientists.
Lambert (as usual) is wrong in insisting that his interpretation of the results is the only way to look at them. It is not, and his chosen method is not even the standard.
So, apart from this being the wrong survey, with no relevance to Lambert’s expose, is there anything further you’d like to discuss?

Hank Hancock
September 26, 2010 10:36 pm

EFS_Junior says:
September 26, 2010 at 6:11 pm
September 26, 2010 at 6:31 pm
On polls I always choose the lowest or highest number available, or I answer all questions in just the opposite way from would be my actual opinion.

The IPCC AR5 is just a few years away, now that would be the most opportunistic time to capture thousands of climate scientists in the heat of battle, as it were, a very well defined control group if you ask me.

My final words on this matter, you get the final say, as I’ve said all that has been needed to say on this matter at this point ad infinitum, ad nauseam.

Thank you for the final word.
I disagree that opinion surveys can’t provide valuable and reasonably accurate information. We use them all the time in my field to assess qualitative issues particularly as they apply to assessing communications and understanding. They’re actually pretty good at predicting outcome where outcome has a correlation to the group’s perspectives and understandings.
I didn’t miss the point that your primary objection is to the merits of this study in particular. However, throwing out the lack of a control group, no reverse IP validation, being on-line, etc… just aren’t genuine controls or disqualifiers. On that basis, I don’t see such objections as reasonable up-front cause to disqualify this or any study of its type as you attempted to do.
If all survey respondents answered untruthfully as you, the results would indeed be the opposite of true opinion and pretty useless. I am confident that most professionals or stakeholders who take their valuable time to respond to a survey answer to the truth as they perceive it rather than play head games with the study organizers. That has been my professional experience.
I disagree that the IPCC AR5 represents an unbiased capture of thousands of climate scientists. It would be a capture of a group of scientists all singing off the same sheet of music. Were a scientist not in one accord with the song being sung, they wouldn’t be there. Therefore, a survey of IPCC AR5 scientists would not be representative of the general scope of climatologist’s opinions any more than a survey taken at ICCC-5.

Djozar
September 27, 2010 7:43 am

I’d prefer to see a survey in a form that allowed progressive detail on each issue, i.e.:
1. Is atmospheric CO2 increasing beyond known norms?
2. If yes, is the increase caused by human activities?
3. If yes, is the increase causing a minor amount of warming?
4. If yes, is the increase causing a major amount of warming?
Same format could be used for other issues.
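That progressive format is essentially a branching questionnaire that records how far down the chain a respondent agrees. A minimal sketch; the stopping rule and return value are my assumptions, not part of the proposal:

```python
def agreement_depth(answers):
    """Walk the proposed question chain, stopping at the first 'no'.
    `answers` maps question number (1-4) to True/False; returns how many
    consecutive questions drew a 'yes', from 0 to 4."""
    questions = [
        "Is atmospheric CO2 increasing beyond known norms?",
        "If yes, is the increase caused by human activities?",
        "If yes, is the increase causing a minor amount of warming?",
        "If yes, is the increase causing a major amount of warming?",
    ]
    depth = 0
    for number, _question in enumerate(questions, start=1):
        if not answers.get(number, False):
            break
        depth = number
    return depth

print(agreement_depth({1: True, 2: True, 3: False}))  # 2
```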

John Whitman
September 27, 2010 9:37 am

Tom Fuller says:
September 26, 2010 at 6:14 pm
Hey there, John,
If you don’t like what I’m peddling, this street fair has plenty of stalls.

—————————-
Tom Fuller,
Yes, indeed there are many vacant soapboxes around for me to stand on. : ) So, I actually have no excuses. Perhaps I will finally retire from my day job. I am tempted, been in my field for going on 40 years. Thanks for the suggestion.
Tom, you obviously have the ability and energy, based on your recent frequent series of posts here at WUWT. I often like how you say stuff. I just don’t like most of the stuff . . . your topics. Are you just floating around randomly on the surface of an ocean of climate science topics?
There are lots of ultra-micro-pieces of the climate science puzzle lying around and I see everyone busy manufacturing more and more pieces at an accelerating rate. Info generation is good. And I am perceiving recently a trend of less bias toward the creation of pieces that are critical of the consensus/accepted AGW orientation. OK, good. But, where are the integrating scientist gals/guys? Please use your journalistic talent to seek them out and get some far view stuff looking down the road out of them. A sense is needed of where the navigators/helmsmen are on this climate science journey. Or keep floating around (apparently) at random on the surface of climate science topics.
Ahhhh, I hope that didn’t sound patronizing.
John

fxk
September 27, 2010 11:56 am

Being the good, agnostic skeptic that I am, one wonders who funded the study, and who had a hand in the development of the questions.
Skepticism is a two-way street.

Tom Fuller
September 27, 2010 12:09 pm

No John, and I appreciate your criticism. You’ve been thinking about it and it shows.
I actually do have a master plan to integrate the pieces of the puzzle together, but you are sadly serving as my guinea pigs while I do the individual pieces.
My goal is to be able to provide a coherent picture of energy usage in relationship to environmental pressures that can be understood by anyone and still be as accurate as I can make it.
Which is why I’m not using a lot of graphs and charts in this series and why I probably sound pretty elementary at times.
My hope is that not everyone who visits WUWT is as well-versed on the issues as you seem to be and that this can serve as catch-up material for them. But I refuse to talk down to people, so I try and give real world issues and my honest beliefs at the same time.
Seems to be working for some of the crowd…

John Whitman
September 27, 2010 2:30 pm

Tom Fuller says:
September 27, 2010 at 12:09 pm

—————–
Tom Fuller,
To paraphrase a line from John Masefield’s poem “Sea Fever”;
. . . and all I ask [for] is a strong body of independent thinkers and a star [for them] to steer climate science by . . . .
John

EFS_Junior
September 27, 2010 3:29 pm

Policyguy says:
September 26, 2010 at 8:03 pm
EFS_Junior says:
September 25, 2010 at 6:16 pm
It’s difficult to understand the basis of the various rants you have indulged in on this topic. It looks to be a personal matter of viewpoint.
Don’t know when you last checked, but the Union of Concerned Scientists has become one of the most politicised special interest groups we have. If you are trying to portray yourself as balanced and thoughtful, you might want to look into who you are identifying yourself with. Probably a good group made up of well meaning people, but the group is very well known for its strongly held biases that it doesn’t hesitate to advocate very publicly.
_____________________________________________________________
I used the UCS poll because it was the first one I found on the web that had demographics.
So if the UCS is biased, that is (mostly) irrelevant, as all I’m looking at is the demographics question itself, and its granularity; obviously the UCS demographics question exhibits a higher degree of granularity than the BvS 2008 demographics question.
Finally, you either get it, or you don’t, and I know that I get it.

EFS_Junior
September 27, 2010 3:58 pm

Tom Fuller says:
September 26, 2010 at 9:06 pm
Robert, you do understand the difference between 2003 and 2008, right? If so, let’s continue. Despite the fact that Deltoid is passionately attacking a different survey, and despite the fact that Deltoid and its administrator, Tim Lambert, would cheerfully call white black if it helped spread alarmist fever, let’s continue.
This survey only allowed one response per emailed invitation. Repeat responses over-wrote the previous version. Invitations were emailed to a pre-selected sample of climate scientists.
Lambert (as usual) is wrong in insisting that his interpretation of the results is the only way to look at them. It is not, and his chosen method is not even the standard.
So, apart from this being the wrong survey, with no relevance to Lambert’s expose, is there anything further you’d like to discuss?
_____________________________________________________________
We know this was an ONLINE survey hosted on a website. We do not know EXPLICITLY how the verification process was itself verified, meaning how a completed ONLINE survey was traced back to a valid email/IP address (e. g. the actual people on the original email list). Typing in a valid email address into the ONLINE survey is NOT verification of the actual person answering the ONLINE survey.
Until sometime that the verification process can be ascertained, which SHOULD be part of any background writeup section of the report(s)/paper(s), we are not certain who answered the ONLINE survey.
We don’t, for example, know if the emails sent were forwarded to others, and that those others (and subsequently other others) in fact filled out the survey questionnaire.
This will be a question I ask Bray and von Storch myself to clear up this matter for my own peace of mind.
As to the ad hominem of Deltoid/Tim Lambert, meh, BAU, but they do have a valid criticism of at least one of the two previous surveys, calling into question a comparison between the 2nd and the 1st and 3rd surveys.
Here I am at WUWT displaying some (or a lot of) skepticism with respect to one part of the survey, how it was verified (although I’ve also shown that I have a lot of skepticism with respect to other aspects of the survey, primarily the statistical confidence one can accurately extract from these types of choice-list/response-list surveys, and the demographics question).
Why am I THE AUDITOR while TEAM AUDIT has been mostly silent? Hypocrisy? Isn’t TEAM AUDIT supposed to be skeptical of all aspects of climate science? Or only of whatever fails to conform to their own worldview, also known as confirmation bias?

EFS_Junior
September 27, 2010 4:23 pm

Hank Hancock says:
September 26, 2010 at 10:36 pm
EFS_Junior says:
September 26, 2010 at 6:11 pm
September 26, 2010 at 6:31 pm
On polls I always choose the lowest or highest number available, or I answer all questions in just the opposite way from what my actual opinion would be.

The IPCC AR5 is just a few years away; now that would be the most opportune time to capture thousands of climate scientists in the heat of battle, as it were, a very well defined group if you ask me.

My final words on this matter; you get the final say, as I’ve said all that needs to be said at this point, ad infinitum, ad nauseam.
Thank you for the final word.
I disagree that opinion surveys can’t provide valuable and reasonably accurate information. We use them all the time in my field to assess qualitative issues, particularly as they apply to communications and understanding. They’re actually pretty good at predicting outcomes where the outcome correlates with the group’s perspectives and understandings.
I didn’t miss the point that your primary objection is to the merits of this study in particular. However, the lack of a control group, the absence of reverse IP validation, the survey being online, etc., just aren’t genuine controls or disqualifiers. On that basis, I don’t see such objections as reasonable up-front cause to disqualify this or any study of its type, as you attempted to do.
If all survey respondents answered as untruthfully as you, the results would indeed be the opposite of true opinion and pretty useless. I am confident that most professionals or stakeholders who take their valuable time to respond to a survey answer truthfully as they perceive the truth, rather than play head games with the study organizers. That has been my professional experience.
I disagree that the IPCC AR5 represents an unbiased capture of thousands of climate scientists. It would be a capture of a group of scientists all singing off the same sheet of music. Were a scientist not in one accord with the song being sung, they wouldn’t be there. Therefore, a survey of IPCC AR5 scientists would be no more representative of the general scope of climatologists’ opinions than a survey taken at ICCC-5.
_____________________________________________________________
Briefly, I fall into the “random survey” camp, as I have my entire life, for the obvious reason that random surveys are better on a purely statistical basis alone.
I don’t buy the “ease of use” argument; it is really a “we’re actually quite lazy and don’t want to spend the appropriate time and effort to do a proper random survey” argument.
As to the IPCC AR5, your strawman is not valid, as I never proposed limiting the sample to IPCC authors only; I was referring to the entire community of both authors and commenters (and I would expect a high number of AGW skeptics among the commenters). AFAIK anyone can contribute to the IPCC drafts via commentary. The process would have to be fully transparent, that is, we would know the group of authors, the group of contributors, and the group of commenters. In fact, this method could then be applied to the general population (a random GP sample), and to other groups as well, in follow-on identical surveys.
That is all.
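The statistical advantage EFS_Junior claims for a random survey over a self-selected one can be illustrated in a few lines. A minimal sketch, assuming a known sampling frame; the function name and frame contents are hypothetical:

```python
import random

def draw_sample(frame, n, seed=None):
    """Draw a simple random sample of size n from a known sampling frame.

    Unlike a self-selected online survey, every member of the frame has the
    same known inclusion probability (n / len(frame)), which is what makes
    the usual statistical confidence statements valid.
    """
    rng = random.Random(seed)       # seeded for reproducibility
    return rng.sample(frame, n)     # sampling without replacement
```

With a frame of, say, all AR5 authors and commenters, a call like `draw_sample(frame, 400)` yields a sample whose margin of error is computable in advance, which is not true of respondents who self-select or answer forwarded invitations.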

hmccard
September 27, 2010 4:55 pm

Re: My 09/26/2010 7:13 PM comment
Whoops!! The last paragraph should read:
I would consider the following ranges of normalized deviation wrt 1.346 to be reasonable for classifying the degree of consensus:
• Very low: greater than 1.5
• Low: 0.5 to 1.5
• Medium: 0.5 to -0.5
• High: -0.5 to -1.5
• Very high: Less than -1.5
Sorry about that …
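hmccard’s bins can be expressed as a small classifier. A sketch under two assumptions of mine: the ranges are treated as contiguous (I read the “Very high” bound as -1.5, taking the -2.5 as a typo), and each boundary value is assigned to the higher-consensus bin.

```python
def consensus_level(normalized_deviation: float) -> str:
    """Classify degree of consensus from a normalized deviation (wrt 1.346).

    Bins follow hmccard's ranges, treated as contiguous; a value exactly on
    a boundary falls into the higher-consensus (lower-deviation) bin.
    """
    if normalized_deviation > 1.5:
        return "Very low"
    if normalized_deviation > 0.5:
        return "Low"
    if normalized_deviation > -0.5:
        return "Medium"
    if normalized_deviation > -1.5:
        return "High"
    return "Very high"
```

So a question whose responses deviate by 2.0 from the 1.346 reference would be classed “Very low” consensus, while one at -2.0 would be “Very high”.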

David A. Evans
September 27, 2010 6:41 pm

EFS_Junior says:
September 25, 2010 at 6:32 pm
How many e-mails you don’t get do you respond to?