Shades of Lewandowsky and Cook: When psychological science isn't so sciency

Sound familiar? A study revealing a stunning lack of reproducibility in psychological science triggers another instance of reluctance to share data with any but friends, and an “adjustment” of data to fit a theory.

Frank Lee MeiDere writes:

By now most have probably heard about the paper published in the August 28 edition of Science magazine entitled “Estimating the reproducibility of psychological science.” Coordinated by the Center for Open Science, and headed by its executive director, Dr. Brian Nosek, the project examined 100 psychological studies mostly from three sources: Psychological Science, the Journal of Personality and Social Psychology, and the Journal of Experimental Psychology: Learning, Memory, and Cognition.

According to The New York Times article, “Many psychology findings not as strong as claimed, study says,” written by Benedict Carey: “The vetted studies were considered part of the core knowledge by which scientists understand the dynamics of personality, relationships, learning and memory. Therapists and educators rely on such findings to help guide decisions.”

In the end, 60 of the 100 studies did not hold up well when reproduced.

This, of course, is disturbing in itself. As Carey points out, “the fact that so many of the studies were called into question could sow doubt in the scientific underpinnings of their work.”

Of more concern, however, is a quote from Dr. Norbert Schwarz, a professor of psychology at the University of Southern California: “There’s no doubt replication is important, but it’s often just an attack, a vigilante exercise” (italics added).

The problem is, that’s exactly what a replication is supposed to be: “an attack, a vigilante exercise.” It’s not a waltz with both partners in perfect step with each other; it’s a battle in which the invaders try every trick in the book to break through the castle walls. Schwarz’s attitude would seem to suggest a rephrasing of that famous justification for withholding data: “Why should I make the data available to you, when your aim is to try and find something wrong with it?”

Has this become the cry of all science? Hardly. In the physics world, for instance, new results are flung into the air for target practice like so many skeet. But then, the reproducibility rate in physics is pretty high — in fact, it makes up our entire technological world, virtually all of which is simply practical reproductions of laboratory experiments.

Other scientific foundations, however, are much shakier, and all too often their response is to ask everyone to sit quietly rather than examining the cause.

Psychology is especially prone to this shakiness. No sooner does one theory get established than it’s uprooted for another. In fact, there are numerous theories in play at any one time, each backed by its own body of “scientific” evidence. As Jelte Wicherts, associate professor of methodology and statistics at Tilburg University, Netherlands, said, “I think we knew or suspected that the literature had problems, but to see it so clearly, on such a large scale — it’s unprecedented.”

There are, of course, those who disagree with the findings of this study. The New York Times reports an email from Paola Bressan, a psychologist at the University of Padua who criticized the project for its reproduction of her study “Female preference for single versus attached males depends on conception risk.” Her complaint was that they used female psychology students as subjects whereas she had used female Italians. She is quoted as saying “I show that, with some theory-required adjustments, my original findings were in fact replicated.”

Any time the phrase “theory-required adjustments” is uttered there should be cause for alarm. This isn’t to say that it can’t be valid, but in this case we have to remember that Bressan’s original study was assumed to hold true universally (it wasn’t titled “Italian female preference” after all), and any adjustments made to “correct” the new study’s results in order to match her own could well be open to bias.

All scientific research should be considered “guilty until proven innocent” and requires an extremely strong defense to stand up to a justifiably hostile prosecutor. The idea of “settled science” is wrong in virtually every field, although there might be vast areas of strong replication and, therefore, confidence.

But at no point should we be under the old army edict of “Don’t ask, don’t tell.”


Related stories where Stephan Lewandowsky, Naomi Oreskes, and John Cook produce some questionable, and perhaps irreproducible psychological “science”:

Lewandowsky and Cook – back from the dead with another smear paper

A disturbance in the farce: Another hateful and pointless paper from Stephan Lewandowsky and Naomi Oreskes

And then there’s Rasmus Benestad’s recent laughable paper that five journals rejected, before he and his psyops crew found a journal that would publish what the other five had rejected. Yeah, that’s what you might call “robusted” science.

PaulH
August 30, 2015 5:08 pm

Speaking of Lewandowsky et al., Dana Nuccitelli writes in the Guardian that those papers outside of the official 97% consensus have major problems:
“This new study was authored by Rasmus Benestad, myself (Dana Nuccitelli), Stephan Lewandowsky, Katharine Hayhoe, Hans Olav Hygen, Rob van Dorland, and John Cook.”
http://www.theguardian.com/environment/climate-consensus-97-per-cent/2015/aug/25/heres-what-happens-when-you-try-to-replicate-climate-contrarian-papers
“Instead, as our paper shows, the contrarians have presented a variety of contradictory alternatives based on methodological flaws, which therefore have failed to convince scientific experts.”
Hold on to your hats when reading that article, it’s disturbing and amusing at the same time.

Pat Ch
August 30, 2015 5:11 pm

Green energy, climate, sociology, anthropology, psychology. What do they all have in common? They are controlled by third rate scientists who are really left wing ideologues.

commieBob
Reply to  Pat Ch
August 30, 2015 6:55 pm

They didn’t start out as left wing ideologues. It was inculcated over a period of about ten years.
Obama remarked that a trades education might produce better financial results than a four year degree in Art History. Professor Ann Collins Johns took him to task and he quickly apologized.
I heard the good professor interviewed on CBC radio. She was really proud that her students could produce a reasoned argument based on scanty evidence. That’s nothing to be proud of. It’s mere pedantry.

The pedant and the priest have always been the most expert of logicians—and the most diligent disseminators of nonsense and worse. — H. L. Mencken

In Voltaire’s Bastards, John Ralston Saul makes the point that this rationality, divorced from reality, is having a seriously pernicious effect on our society. In The Master and His Emissary, Iain McGilchrist provides the neurological basis to explain the science behind Saul’s observations.
We’re in deep serious doodoo thanks to the efforts of folks like Prof. Johns.

asybot
Reply to  commieBob
August 30, 2015 8:02 pm

Why are you still listening to CBC? Is it just the “know thy enemy” thing?

commieBob
Reply to  commieBob
August 31, 2015 12:54 am

asybot says:
August 30, 2015 at 8:02 pm
… is it just the “know thy enemy” thing?

There is still some excellent stuff, which is amazing given how their budget has been hacked. To get me by the painful crap, I have Scarlatti on autorotate on the headphones.
The Saturday and Sunday morning shows usually keep me from my chores. A large part of the over-night programming has shows from foreign broadcasters: ABC, BBC, Deutsche Welle. The ABC science programs are well worth listening to, as opposed to Quirks and Quarks which is mostly a snooze fest.
The good stuff is very good. The painful stuff is easily avoided.

ZombieSymmetry
August 30, 2015 6:01 pm

“According to the The New York Times’ article, “Many psychology findings …””
I found that NYT article with a Google search, but should there not have been a direct link to it?
http://www.nytimes.com/2015/08/28/science/many-social-science-findings-not-as-strong-as-claimed-study-says.html?_r=0

Pamela Gray
Reply to  ZombieSymmetry
August 30, 2015 6:45 pm

I agree. Post authors: Please put a link to the article (if there is one) instead of making your readers do that for you. Yes, the search is not as easy as it used to be just a year ago, but do it anyway.

Frank Lee MeiDere
Reply to  ZombieSymmetry
August 30, 2015 6:59 pm

There was when I submitted it. I guess it got lost somehow. (I also didn’t connect this to Lewandowsky et al.)

RD
August 30, 2015 6:03 pm

Unhappily, it’s not only psychology and the social sciences, but also medicine, cancer, genetics, etc.

asybot
Reply to  RD
August 30, 2015 8:04 pm

@RD, you are right, just look at what happened to “dr” Suzuki.

RD
Reply to  asybot
August 31, 2015 8:17 am

It’s really widespread across multiple disciplines. Not to mention exaggerated press releases that are not backed by the papers they publicize.

Grant
August 30, 2015 7:23 pm

Judging by the papers on climate change, and the idiot findings of all kinds one hears on the radio today, it seems that universities are churning the stuff out at breakneck speed. I suspect most will never be read or heard about again. They’ll become expensive door stops for graduates buried under student loans.
Most folks take the easy road when it’s available and scientists are no different. It takes a special, dedicated person to find truth.

Rhett Butler
Reply to  Grant
August 31, 2015 3:41 am

Have you found the truth about Frank Lee MeiDere? Or don’t you give a damn? 😀

thallstd
August 30, 2015 7:52 pm

Other scientific foundations, however, are much shakier [than physics], and all too often their response is to ask everyone to sit quietly rather than examining the cause.”

Examining the cause is exactly what Jonah Lehrer details in “The Truth Wears Off,” a New Yorker piece from December 2010: http://www.newyorker.com/magazine/2010/12/13/the-truth-wears-off
It chronicles the efforts of Jonathan Schooler to explain “the decline effect” in his own studies, which he first noticed in 1995 when attempting to replicate a 1990 study he did on “verbal overshadowing.” The effect of “verbal overshadowing” was 30% less in 1995 than in 1990. In 1996 the effect shrank another 30%.
Lehrer provides an easy read that isn’t limited to Schooler’s efforts or psychological studies but includes the waning effectiveness of second-generation antipsychotics as well as irreproducible results in other branches of science.

But the data presented at the Brussels meeting made it clear that something strange was happening: the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.

And so begins an interesting story that reveals problems not so much in “The Scientific Method” as in “The Scientific Process,” a process distorted by tenure, funding, peer review, consensus and more but mostly by human nature. And while it doesn’t once mention the climate, it goes a long way to explaining the rise of the failed CAGW hypothesis. Encouragingly, it also explains how and why we may be on the threshold of seeing it supplanted with a different hypothesis.

August 30, 2015 11:32 pm

Lewandowsky has finally realised that he’s a charlatan – he’s giving a talk on Suspect Science in Cambridge at the end of September. http://www.crassh.cam.ac.uk
Oh, you mean he isn’t applying that label to his own drivel? He’s giving real psychologists a bad name!

jorgekafkazar
August 31, 2015 12:03 am

“The Romans built great aqueducts and the church produced grand cathedrals in the Middle Ages before materials science was developed.” — RalphDaveWestfall
A concrete formulary IS science. Architecture that doesn’t fall down IS science. Finding what works and what doesn’t IS science. Finding why it works is also science, but we don’t have to know why concrete works to be able to use the underlying knowledge.
“The Wright Brothers ran a bicycle shop before producing their airplane.” –RalphDaveWestfall
No, they ran a bicycle manufacturing company, including development of leading edge improvements to the craft. But the Wright Brothers were among the world’s foremost aerodynamic scientists when they designed their first successful plane. They were not a couple of bicycle mechanics who one day threw random parts together and found that they flew. They knew what they were doing and went about it in a scientific manner.

Zeke
Reply to  jorgekafkazar
August 31, 2015 9:16 pm

“Education: Both were good students in school and their favorite subjects were math and science. The both excelled in math. Each attended high school but neither received a diploma. Wilbur had enough credit to graduate but the family moved to Dayton from Indiana. High school was not challenging for Orville so he dropped out in the 11th grade to start his own printing business.”
You might enjoy the story of the US government’s plan to invent the airplane using a hand-picked expert. It is not in many history books but Burt Folsom includes it in his lectures and books.
Uncle Sam Can’t Count (Burt Folsom – Acton Institute)
https://youtu.be/i5fONzEwmfU?t=27m57s

Reply to  jorgekafkazar
September 1, 2015 12:23 am

“A concrete formulary IS science.”
How about a recipe for pineapple upside-down cake?
Were the prehistoric peoples who developed stone chipping techniques to create arrowheads scientists? And those who learned to use fire for cooking and heating? And who first used the wheel in wheelbarrows and oxcarts?
You might want to consider the implications of the prominent role of Francis Bacon, who lived from 1561 to 1626, in the development of the scientific method. And also look at the discussion of the differences between science and technology at http://www.diffen.com/difference/Science_vs_Technology

Leo Smith
August 31, 2015 2:58 am

Karl Popper wrote his treatise on the philosophy of science largely because he was concerned with the irrefutability of the ‘soft’ sciences — and indeed specifically mentions psychology.
So soft sciences are riddled with what a friend of mine calls ‘physics envy’ — the desire to achieve a concrete and irrefutable, or at least very strong, view on a given subject, where none is possible.

Zeke
Reply to  Leo Smith
August 31, 2015 8:53 pm

Leo Smith says, “soft sciences are riddled with what a friend of mind calls ‘physics envy’”
Karl Popper objected to what he called “the aping of the physical sciences by the social sciences.” Physics envy works too.
He also was not a positivist and had to write a whole book to correct that mistaken reputation he was given by positivist colleagues. He also was inspired to write because of the obscurity of the language and jargon in science. Something about Frankfurt School. (:
He fortunately also wrote a book which debunked Kuhn, called “The Myth of the Framework.” His name today is virtually synonymous with “falsifiability” as a test of a good theory, but he was a very, very busy man.

Anna Keppa
August 31, 2015 6:49 am

I suggest pseudo-scientific psychologists should publish their papers in the “Journal of Irreproducible Results” and be done with it.

KTM
August 31, 2015 11:54 am

Science has some fundamental problems. On the one hand, you have a strong push from one side at minimizing waste and redundancy by forcing scientists to predict the minimal number of replicates needed to test their hypothesis, including power calculations, and then to try to use those minimal numbers. This is especially true in biomedical studies involving animals. You literally cannot tell the IACUC committee that you plan to replicate a previous study, because it will be denied for being wasteful and unnecessary.
On the other side you have a herd mentality in science where the next logical step is obvious to most people, so you end up with a dozen laboratories all running the same experiments in parallel, and only one gets the credit by publishing first. I was at a big scientific conference where someone was receiving a lifetime achievement award. As part of her remarks she talked about how she was one of 15 labs trying to make a particular knock-out mouse for the same gene at the same time. Most of them succeeded, so there were a dozen different variants of the same mouse. Very wasteful, yet a symptom of the nature of modern science. To get funded you need to propose something that is of interest to your peers but also very likely to succeed. More unusual or risky projects don’t get funded. It almost guarantees redundancy and herd mentality science by the way it’s set up.
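
For readers unfamiliar with the power calculations KTM mentions, here is a minimal sketch of the standard sample-size arithmetic for comparing two group means, using the usual normal approximation; the effect size, alpha, and power targets below are conventional textbook values, not numbers from any study discussed in this thread.

```python
# Minimal sketch of a sample-size ("power") calculation for a two-sample
# comparison of means, via the normal approximation. The inputs are
# illustrative defaults, not values from any study mentioned here.
from scipy.stats import norm

d = 0.5        # assumed effect size, in standard-deviation units (Cohen's d)
alpha = 0.05   # two-sided false-positive rate
power = 0.80   # desired probability of detecting a true effect of size d

z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96
z_power = norm.ppf(power)           # ~0.84

# Subjects required per group so that a true effect of size d is detected
# with the desired power.
n_per_group = 2 * ((z_alpha + z_power) / d) ** 2
print(f"about {n_per_group:.0f} subjects per group")   # ~63
```

A review committee seeing a request for many more subjects than a calculation like this justifies will tend to call it wasteful, which is exactly the pressure KTM describes.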

Alx
August 31, 2015 11:59 am

“Why should I make the data available to you, when your aim is to try and find something wrong with it?”

This quote by Schwarz classically illustrates the lack of the most basic understanding of the scientific method. To add further to this tragedy of ignorance, criticism of such disastrously poor understanding of the scientific method is responded to by accusations of being “anti-science”. Not even Shakespeare could write such a tragic comedy.

RD
Reply to  Alx
August 31, 2015 1:18 pm

It’s worse than being proved wrong. Recall Mike’s nature trick.

Editor
Reply to  Alx
August 31, 2015 1:39 pm

Reply to Alx ==> That is not a quote from Professor Schwarz. It is a line from a now-famous ClimateGate email, if I recall correctly. Someone more familiar with the Climate Wars can fill in the who said it when and where.
Frank (the author of this column today) is likening Schwarz’s quote to that famous one…..

Bellator Deus
August 31, 2015 12:45 pm

I have a master’s degree in computer science from a reputable department at a reputable university (one of the top 10 in the world at the time I received my degree decades ago) — and I can tell you absolutely that computer science is not science. It is a mixture of mathematics, engineering, logic, and craftsman techniques. Not that there’s anything wrong with that, but it ain’t science.

Resourceguy
August 31, 2015 3:11 pm
Frank Lee MeiDere
Reply to  Resourceguy
August 31, 2015 3:32 pm

Speaking as someone who has spent time in that industry, you’re damned right they are! I’ve been disgusted for a long time by how little real research journalists do and how much they rely purely on self-serving press releases for their articles. In a discussion a couple of months ago with a friend of mine who is a CBC news editor, I was complaining about how journalists seem to get most of their information purely from the media itself, and I gave a couple of examples of actual research. His reply was (and I’m not making this up), “Well, that’s not what I’ve been reading in the newspapers.”
Yeah — exactly.

Rod
September 1, 2015 7:24 am

It seems to me that this study ended up with what I would consider the expected results. It didn’t purport to find fraud, but rather it found that the original studies’ impacts were reduced somewhat in, what, 60% of the cases.
I would expect something like that for the simple reason that it’s the outliers that tend to get published. Do 20 studies, and if one shows an unusual result outside the expected range, it gets published because it’s unique. Of course, if one did 20 similar studies altogether, the one outlier would be recognized as the outlier, i.e., in the tail of the distribution of expected study results.
But we don’t do 20 at once, we do one here and one there instead. But even so, if we do 20 here and there, one of them is, statistically speaking, going to yield results in the tail of the distribution of expected results. And that’s exactly the one that gets published.
This study demonstrated that outliers tend to get published and that replication is important, but it also demonstrated that most of the published studies are showing a real effect, just perhaps not as strong as the original study indicated.
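
Rod’s point can be made concrete with a small simulation (a sketch of my own, not anything from the Nosek project): give many identical studies a modest true effect, “publish” only the ones that clear statistical significance, and the published estimates will on average overstate the effect, while honest replications drift back toward the truth. All the numbers below are made up for illustration.

```python
# Illustration of publication bias: only "significant" results get published,
# so published effect sizes overstate the true effect and replications shrink.
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.2      # a real but modest effect (in standard-deviation units)
n_per_study = 30       # subjects per study
n_studies = 10000      # many labs running the same kind of study

# Each study estimates the effect with sampling noise.
se = 1.0 / np.sqrt(n_per_study)
estimates = rng.normal(true_effect, se, n_studies)

# "Publication": only studies whose estimate is significantly above zero.
published = estimates[estimates / se > 1.96]

# Independent replications of the published studies (same design, fresh noise).
replications = rng.normal(true_effect, se, published.size)

print(f"true effect:               {true_effect:.2f}")
print(f"mean published estimate:   {published.mean():.2f}")
print(f"mean replication estimate: {replications.mean():.2f}")
print(f"share of published studies that look weaker on replication: "
      f"{(replications < published).mean():.0%}")
```

Under these made-up numbers the published studies report roughly twice the true effect, and the large majority look weaker when replicated — shrinkage without any fraud, just as Rod suggests.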