Sound familiar? A study revealing a stunning lack of reproducibility in psychological science triggers another instance of reluctance to share data with any but friends, and an “adjustment” of data to fit a theory.
Frank Lee MeiDere writes:
By now most have probably heard about the paper published in the August 28 edition of Science magazine entitled “Estimating the reproducibility of psychological science.” Coordinated by the Center for Open Science, and headed by its executive director, Dr. Brian Nosek, the project examined 100 psychological studies mostly from three sources: Psychological Science, the Journal of Personality and Social Psychology, and the Journal of Experimental Psychology: Learning, Memory, and Cognition.
According to The New York Times article, “Many psychology findings not as strong as claimed, study says,” written by Benedict Carey: “The vetted studies were considered part of the core knowledge by which scientists understand the dynamics of personality, relationships, learning and memory. Therapists and educators rely on such findings to help guide decisions.”
In the end, 60 of the 100 studies did not hold up well when reproduced.
This, of course, is disturbing in itself. As Carey points out, “the fact that so many of the studies were called into question could sow doubt in the scientific underpinnings of their work.”
Of more concern, however, is a quote from Dr. Norbert Schwarz, a professor of psychology at the University of Southern California: “There’s no doubt replication is important, but it’s often just an attack, a vigilante exercise” (italics added).
The problem is, that’s exactly what a replication is supposed to be: “an attack, a vigilante exercise.” It’s not a waltz with both partners in perfect step with each other; it’s a battle in which the invaders try every trick in the book to break through the castle walls. Schwarz’s attitude would seem to suggest a rephrasing of that famous justification for withholding data: “Why should I make the data available to you, when your aim is to try and find something wrong with it?”
Has this become the cry of all science? Hardly. In the physics world, for instance, new results are flung into the air for target practice like so many skeet. But then, the reproducibility rate in physics is pretty high — indeed, virtually our entire technological world consists of practical reproductions of laboratory experiments.
Other scientific foundations, however, are much shakier, and all too often their response is to ask everyone to sit quietly rather than examining the cause.
Psychology is especially prone to this shakiness. No sooner does one theory get established than it’s uprooted for another. In fact, there are numerous theories in play at any one time, each backed by its own body of “scientific” evidence. As Jelte Wicherts, associate professor of methodology and statistics at Tilburg University in the Netherlands, said, “I think we knew or suspected that the literature had problems, but to see it so clearly, on such a large scale — it’s unprecedented.”
There are, of course, those who disagree with the findings of this study. The New York Times reports an email from Paola Bressan, a psychologist at the University of Padua, who criticized the project for its reproduction of her study “Female preference for single versus attached males depends on conception risk.” Her complaint was that the replication used female psychology students as subjects, whereas she had used female Italians. She is quoted as saying, “I show that, with some theory-required adjustments, my original findings were in fact replicated.”
Any time the phrase “theory-required adjustments” is uttered there should be cause for alarm. This isn’t to say that it can’t be valid, but in this case we have to remember that Bressan’s original study was assumed to hold true universally (it wasn’t titled “Italian female preference” after all), and any adjustments made to “correct” the new study’s results in order to match her own could well be open to bias.
All scientific research should be considered “guilty until proven innocent” and requires an extremely strong defense to stand up to a justifiably hostile prosecutor. The idea of “settled science” is wrong in virtually every field, although there might be vast areas of strong replication and, therefore, confidence.
But at no point should we be under the old army edict of “don’t ask, don’t tell.”
Related stories where Stephan Lewandowsky, Naomi Oreskes, and John Cook produce some questionable, and perhaps irreproducible psychological “science”:
And then there’s Rasmus Benestad’s recent laughable paper, rejected by five journals before he and his psyops crew found one that would publish what those five had turned down. Yeah, that’s what you might call “robusted” science.