Big Trouble with Spiders

Guest Essay by Kip Hansen — 6 February 2020

How deeply have you considered the social life of spiders?  Are they social animals or solitary animals?  Do they work together?  Do they form social networks?  Does their behavior change, as in the “adaptive evolution of individual differences in behavior”?

In yet another blow to the sanctity of peer-reviewed science, and a simultaneous win for personal integrity and the self-correcting nature of science, there is an ongoing tsunami of retractions in a field of study of which most of us have never even heard.

Science magazine online covers part of the story in “Spider biologist denies suspicions of widespread data fraud in his animal personality research”:

“It’s been a bad couple of weeks for behavioral ecologist Jonathan Pruitt—the holder of one of the prestigious Canada 150 Research Chairs—and it may get a lot worse. What began with questions about data in one of Pruitt’s papers has flared into a social media–fueled scandal in the small field of animal personality research, with dozens of papers on spiders and other invertebrates being scrutinized by scores of students, postdocs, and other co-authors for problematic data.

Already, two papers co-authored by Pruitt, now at McMaster University, have been retracted for data anomalies; Biology Letters is expected to expunge a third within days. And the more Pruitt’s co-authors look, the more potential data problems they find. All papers using data collected or curated by Pruitt, a highly productive researcher who specialized in social spiders, are coming under scrutiny and those in his field predict there will be many retractions.”

The story is both a cautionary tale and an inspiring lesson of courage in the face of professional setbacks — one of each for the different players in this drama.

I’ll start with Jonathan Pruitt, who is described as “a highly productive researcher who specialized in social spiders”. Pruitt was a rising star in his field, and his success led to his being offered “one of the prestigious Canada 150 Research Chairs”. He is now established at McMaster University in Hamilton, Ontario, Canada, in the psychology department, where he is listed as the Principal Investigator of “The Pruitt Lab”.  The Pruitt Lab’s home page tells us:

“The Pruitt Lab is interested in the interactions between individual traits and the collective attributes of animal societies and biological communities. We explore how the behaviors of individual group members contribute to collective phenotypes, and how these collective phenotypes in turn influence the persistence and stability of collective units (social groups, communities, etc.). Our most recent research explores the factors that lead to the collapse of biological systems, and which factors may promote systems ability to bounce back from deleterious alternative persistent states.”

This field of study is often referred to as behavioral ecology. In terms of research methodology, it is a difficult field — one cannot, after all, simply administer a series of personality tests to various groups of spiders or fish or birds or amphibians.  Experimental design is difficult and not standardized within the field; observations are in many cases, by necessity, quite subjective.

We have seen a recent example in the Ocean Acidification (OA) papers concerning fish behavior, in which a three-year effort failed to replicate the alarming findings about the effects of ocean acidification on fish behavior.  The team attempting the replication took care to record and preserve all the data. As Science reports: “It’s an exceptionally thorough replication effort,” says Tim Parker, a biologist and an advocate for replication studies at Whitman College in Walla Walla, Washington.  Unlike the original authors, the team released video of each experiment, for example, as well as the bootstrap analysis code. “That level of transparency certainly increases my confidence in this replication,” Parker says.
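For readers unfamiliar with the term, a bootstrap analysis estimates the uncertainty of a statistic by resampling the observed data, with replacement, many times over. The sketch below is purely illustrative: a minimal Python example using made-up behavioral scores, not the replication team’s actual code.

```python
# Minimal illustration of a bootstrap analysis.
# The scores are made up; this is not the replication team's code.
import numpy as np

rng = np.random.default_rng(2020)
scores = np.array([0.61, 0.55, 0.72, 0.48, 0.66, 0.59, 0.70, 0.52])  # hypothetical behavioral scores

n_boot = 10_000
boot_means = np.array([
    rng.choice(scores, size=scores.size, replace=True).mean()  # resample with replacement
    for _ in range(n_boot)
])

# A 95% bootstrap confidence interval for the mean score
low, high = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {scores.mean():.3f}, 95% CI = [{low:.3f}, {high:.3f}]")
```

Publishing code like this alongside the raw data and video is what lets others re-run the analysis and check the conclusions for themselves.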

The fish behavior study is of the same nature as the Pruitt studies involving social spiders.  Someone has to watch the spiders under the varied conditions, make decisions about perceived differences in behavior, record those differences, and in some cases time behavioral responses to stimuli.  The results of these types of studies are in some cases entirely subjective — thus, in the OA replication, we see the care and effort taken to video the behaviors so that others would be able to make their own subjective evaluations.

The trouble for Pruitt came about when one of his co-authors was alerted to possible problems with data in a paper she wrote with Pruitt in 2013 (published in the Proceedings of the Royal Society B in January 2014) titled “Evidence of social niche construction: persistent and repeated social interactions generate stronger personalities in a social spider”.

That co-author is Dr. Kate Laskowski, who now runs her own lab at the University of California, Davis.  She was, at the time the paper was written, a PhD candidate.  I’ll let you read her story — it is inspiring to me — as she tells it in a blog post titled “What to do when you don’t trust your data anymore”.  Read the whole thing; it might restore your faith in science and scientists.

Here’s her introduction:

“Science is built on trust. Trust that your experiments will work. Trust in your collaborators to pull their weight. But most importantly, trust that the data we so painstakingly collect are accurate and as representative of the real world as they can be.”

“And so when I realized that I could no longer trust the data that I had reported in some of my papers, I did what I think is the only correct course of action. I retracted them.”

“Retractions are seen as a comparatively rare event in science, and this is no different for my particular field (evolutionary and behavioral ecology), so I know that there is probably some interest in understanding the story behind it. This is my attempt to explain how and why I came to the conclusion that these papers needed to be removed from the scientific record.”

How did this happen?  The short story is that, after meeting and talking with Laskowski at a conference in Europe, Pruitt sent her “a datafile containing the behavioral data he collected on the colonies of spiders testing the social niche hypothesis.”  Laskowski relates how good the data looked and that it offered what appeared to be “strong support for the social niche hypothesis”.  With such clear data, she easily wrote a paper.

“The paper was published in Proceedings of the Royal Society B (Laskowski & Pruitt 2014). This then led to a follow-up study published in The American Naturalist showing how these social niches actually conferred benefits on the colonies that had them (Laskowski, Montiglio & Pruitt 2016). As a now newly minted PhD, I felt like I had successfully established a productive collaboration completely of my own volition. I was very proud.”

The situation was a dream come true for a young researcher — and her subsequent excellent work brought her to UC Davis, where she established her own lab.  Then….

“Flash forward now to late 2019. I received an email from a colleague who had some questions about the publicly available data in the 2016 paper published in Am Nat. In this paper we had measured boldness 5 times prior to putting the spiders in their familiarity treatment and then 5 times after the treatment.

The colleague noticed that there were duplicate values in these boldness measures. I already knew that the observations were stopped at ten minutes, so lots of 600 values were expected (the max latency). However, the colleague was pointing out a different pattern – these latencies were measured to the hundredth of a second (e.g. 100.11) and many exact duplicate values down to two decimal places existed. How exactly could multiple spiders do the exact same thing at the exact same time?”

Laskowski performed a forensic deep-dive into the data and discovered problems such as these (highlights indicate unlikely duplications of exact values; see Laskowski’s blog post for larger images and more information):

[Image from Laskowski’s blog post: spreadsheet excerpts with highlighted duplications of exact values]
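For readers curious how such anomalies are flagged in practice, the basic check is simple: set aside the 600-second ceiling values, where ties are expected, and look for latencies that repeat exactly, down to the hundredth of a second. Below is a minimal sketch in Python, with hypothetical column names and made-up numbers; it is not Laskowski’s actual analysis.

```python
# A minimal duplicate-latency check (hypothetical column names, made-up data;
# not Laskowski's actual analysis).
import pandas as pd

def flag_duplicate_latencies(df: pd.DataFrame, ceiling: float = 600.0) -> pd.DataFrame:
    """Return rows whose latency (rounded to 0.01 s) appears more than once,
    ignoring values at the trial ceiling, where ties are expected."""
    measured = df[df["latency_s"] < ceiling].copy()        # drop the 600 s cap
    measured["latency_r"] = measured["latency_s"].round(2)
    counts = measured["latency_r"].value_counts()
    repeated = counts[counts > 1].index                     # values seen more than once
    return measured[measured["latency_r"].isin(repeated)]

# Made-up example: two different spiders share the exact value 100.11 s
data = pd.DataFrame({
    "spider":    ["A1", "A2", "B1", "B2", "C1", "C2"],
    "latency_s": [100.11, 254.37, 100.11, 600.00, 600.00, 87.42],
})
print(flag_duplicate_latencies(data))   # flags both 100.11 rows
```

A handful of such coincidences might be chance; many of them, in measurements recorded to two decimal places, is what prompted the closer look.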

Remember, Laskowski’s paper was not based on data that she had collected herself, but on data provided to her by a respected senior scientist in the field, Jonathan Pruitt.  The data had been collected by Pruitt personally, working alone rather than as part of a research team.  And that point turns out to be pivotal in this story.

Let me be clear: I am not accusing Jonathan Pruitt of falsifying or manufacturing the data contained in the data file sent to Laskowski — I have not investigated the data closely myself.  Pruitt is reported to be doing field work in northern Australia and Micronesia at present, and communications with him have been sketchy, inhibiting full investigations by the journals involved.  Despite his absence, there are serious efforts underway to look into all the papers that involve data from Pruitt. Science magazine reports: “All papers using data collected or curated by Pruitt, a highly productive researcher who specialized in social spiders, are coming under scrutiny and those in his field predict there will be many retractions.” [ source ]

A blog that covers this field of science, Eco-Evo Evo-Eco, has posted a two-part series related to data integrity: Part 1 and Part 2.  In addition, there are two specific posts on the “Pruitt retraction storm” [ here and here ], both written by Dan Bolnick, editor-in-chief of The American Naturalist.  That journal has already retracted one paper based on data supplied by Pruitt, at Laskowski’s request.

In one of the discussions this situation has spawned, Steven J. Cooke, of the Institute of Environmental and Interdisciplinary Science, Carleton University, Ottawa, Canada, opined:

“As I reflect on recent events, I am left wondering how this could happen.  A common thread is that data were collected alone.  This concept is somewhat alien to me and has been throughout my training and career.  I can’t think of a SINGLE empirically-based paper among those that I have authored or that has been done by my team members for which the data were collected by a single individual without help from others.  To some this may seem odd, but I consider my type of research to be a team sport.  As a fish ecologist (who incorporates behavioural and physiological concepts and tools), I need to catch fish, move them about, handle them, care for them, maintain environmental conditions, process samples, record data, etc – nothing that can be handled by one person without fish welfare or data quality being compromised.” 

It wasn’t long ago that we saw this same element in another retraction story — that of Oona Lönnstedt, who was found to have “fabricated data for the paper, purportedly collected at the Ar Research Station on Gotland, an island in the Baltic Sea.”  Science Magazine quotes Peter Eklöv, Lönnstedt’s supervisor and co-author, in this Q & A:

Q: The most important finding in the new report is that Lönnstedt didn’t carry out the experiments as described in the paper; the data were fabricated. How could that have happened?

A: It is very strange. The history is that I trusted Oona very much. When she came here she had a really good CV, and I got a very good recommendation letter—the best I had ever seen.

In the case of Jonathan Pruitt, the evidence is not yet all in.  Pruitt has not had a chance to fully give his side of the story or to explain exactly how data he collected alone could reasonably contain so many implausible duplications of overly exact measurements.  I have no wish to convict Jonathan Pruitt in this brief overview essay.

But the issue raised here is important and extends far beyond this one case.  It points to a real danger to the reliability of scientific findings and to the integrity of science in general.

When a single researcher works alone, without the interaction and support of a research team, there is a danger that shortcuts will be taken, with excuses made to oneself, leading to data that is inaccurate or even simply filled in with the expected results for convenience.  Richard Feynman’s warning about “fooling yourself”, with a twist.

Detailed research is not easy — and errors can be and are made.  Data files can become corrupted and confused.  The accidental slip of a finger on a keyboard can delete an hour’s careful spreadsheet reformatting or cast one’s carefully formatted data into oblivion.  And scientists can become lazy and fill in data where none was actually generated by experiment.  A harried researcher might find himself “forced” to “fix up” data that isn’t returning the results required by his research hypothesis, which he “knows” perfectly well is correct.  In other cases, we find researchers actively hiding data and methods from review and attempted validation by others, out of fear of criticism or failure to replicate.

There are major efforts afoot to reform the practice of scientific research in general.  Suggestions include requiring pre-registration of studies: their designs, methodologies, statistical methods, end points, and hypotheses to be tested, all posted to online repositories that peers can review even before any data is collected.  Searching the internet for “saving science”, “research reform” and the “reproducibility crisis” will get you started.  Judith Curry, at Climate Etc., has covered the issue over the years.

Bottom Line:

Scientists are not special and they are not gods — they are human just like the rest of us.  Some are good and honorable, some are mediocre, some are prone to ethical lapses.  Some are very careful with details, some are sloppy, all are capable of making mistakes.  This truth is contrary to what I was led to believe as a child in the 1950s, when scientists were portrayed as a breed apart — always honest and only interested in discovering the truth.  I have given up that fairy-tale version of reality.

The fact that some scientists make mistakes and that some scientists are unethical should not be used to discount or dismiss the value of Science as a human endeavor.  Despite these flaws, Science has made possible the advantages of modern society.

Those brave men and women of science who risk their careers and their reputations to call out and retract bad science, like Dr. Laskowski, have my unbounded admiration and appreciation.

# # # # #

Author’s  Comment:

I hope readers can avoid leaving an endless stream of comments about how this-that-and-the-other climate scientist has faked or fudged his data.  I don’t personally believe that we have had many proven cases of such behavior in the field.   Climate Science has its problems: data hiding and unexplained or unjustified data adjustments have been among those problems.

The desire to “improve the data” must be tremendously tempting for researchers who have spent their grant money on a lengthy project only to find the data barely adequate or inadequate to support their hypothesis.  I sympathize but do not condone acting on that temptation.

I would appreciate it if researchers and other professionals would share their stories and personal experiences that bear on the issue raised here.

Begin your comments with an indication of whom you are addressing.  Begin with “Kip…” if speaking to me.  Thanks.

# # # # #

Comments:
FranBC
February 6, 2020 11:09 am

I know just the sinking feeling she must have had. Many years ago, an MSc student collected data under my supervision. I cannot remember why I started doing some chart reviews, but I found she had ‘improved’ the results. I made her withdraw the submitted thesis and fix it. What was most upsetting was that most of her fellow students, and even some of my colleagues, believed I should have let the thesis go through as is and do any fixes at the publication stage. I got out of that department as soon as I could.

Nothing is more scary than the possibility of data falsification.

J Mac
February 6, 2020 12:28 pm

I rise today to both salute and laud Dr. Kate Laskowski, University of California at San Diego, for adherence to the principles of the scientific method and high ethical standards! After identifying apparently erroneous data supplied by a highly regarded senior scientist in her field of expertise, she retracted her own published research that had relied on the suspect data and publicly stated the reasons for her retractions.

Dr. Kate Laskowski – We SALUTE You! May your unwavering integrity and leadership inspire and inform all those who support the advancement of science and technology!

Jeff Alberts
Reply to  J Mac
February 9, 2020 9:56 am

Would that Keith Briffa were so principled. He sorta tried, but couldn’t get there.

tty
February 6, 2020 1:27 pm

It might come as a surprise to many that there are social spiders. It did to me, since I’ve always regarded them as loners and often cannibalistic to boot.

The first time I encountered them was in a swamp area in northeastern Argentina where spider colonies had built huge nets covering whole copses. If you as much as touched a single strand hundreds of big, hungry spiders immediately converged on you. Good stuff for a horror movie….

February 6, 2020 1:27 pm

“Fabricated Data” has become a staple of Climate Science. When the rewards are available for getting the “Politically Correct Right Answer” the attitude to science changes and people feel free to fudge, nudge or create observations that have the correct values. Many in the field now believe that this is how real science is done.

Reply to  Kip Hansen
February 6, 2020 8:25 pm

Fudging, as well as more subtle data massaging and selecting, can occur entirely at an unconscious level. Don’t underestimate the power of the Unconscious.

“In each of us there is another whom we do not know. He speaks to us in dreams and tells us how differently he sees us from the way we see ourselves. When, therefore, we find ourselves in a difficult situation to which there is no solution, he can sometimes kindle a light that radically alters our attitude . . . “–Carl Jung, as quoted in C. G. Jung: Psychological Reflections.

It is my hypothesis that this “other” has no conscience, and sometimes that kindled light is an evil one, a Shadow.

Reply to  Kip Hansen
February 6, 2020 11:13 pm

Kip,
Are you aware of the Australian BOM thermometers with “quality control” hardware that do not record temperatures when they drop below a built-in limit, thus claiming a daily average on the high side? This was reported by Jo Nova and probably elsewhere. Would that qualify?

How about the “the data is no good” NOAA group that reported ocean pH changes since 1900, and projected through to 2100, extrapolated backwards and forwards from a few current measurements? Millions of ocean pH recordings do not support their claims but their numbers are now widely accepted as gospel. This was reported in WUWT one to two years ago.

Herbert
February 6, 2020 1:29 pm

Kip,
Most Interesting.
This is reminiscent of Sir Peter Medawar’s famous essay, “The Strange Case of the Spotted Mice.”
That involved the uncovering of the fraud of Dr. William Summerlin, who in the 1970s claimed to have made skin and corneal grafts from members of the same, or even of a different, species acceptable to organisms that would otherwise have rejected them.
Another cautionary tale.

Reply to  Kip Hansen
February 6, 2020 11:16 pm

Kip,
If you don’t limit the question to proven climate science fraud, there are many examples.
for instance

I’m almost certain the report I read was right here, in WUWT, about the industry-funded study of neonicotinoid pesticides on bees. The money was given to a large, highly respected research organization. They carried out studies on many agricultural plots, each with a matched control plot, in several countries, over two years. The grant put no strings on how the study was constructed or carried out, nor on what was reported from it – except that the grantors were to be provided ongoing copies of all designs, procedures, data collection, etc.

Some of my numbers may be incorrect, coming purely from memory. I could not find the article I read (which is my normal result from using WUWT’s search facility). However, this article references the topic.
https://wattsupwiththat.com/2017/07/09/the-crisis-of-integrity-deficient-science/

The study reported a number of fairly severe results.

After the study was published, the grantors posted online, without comment, everything they had been provided over those two years, for anyone to study. Analysis by a number of people revealed that the research group had selected only the tiny percentage of data that supported their position of harm to bees, leaving out the rest of the data, perhaps 98%, that offered no support whatever. In addition, all of the plots showing harm had major confounding factors affecting bees, which made it impossible to tell if the neonicotinoids had any effect at all.

There have been more than a few published studies where something similar has eventually turned up. Carefully selecting only data which seems to support a hypothesis may not be exactly the same as fabricating data but it seems to me, as the old saying goes, a difference without a distinction.

Craig from Oz
February 6, 2020 7:06 pm

Spiders don’t need to be social. They can tell the future.

Wisdom aside, I regard the differences in professional mindsets with a mix of bemusement and caution. In my day job (as opposed to my night job that involves living under a bridge and tormenting people I disagree with… cough…) I am an engineering professional. In our profession the attitude to the ethical use of engineering data is extremely different. We are professionally responsible for the work we present and for any implications that arise from our designs post delivery. The concept of ‘This is peer reviewed, you cannot question it so shut up and accept it cause you didn’t go to the correct uni’ is not so much alien as something that would get you totally and utterly reamed should anything go wrong.

We just don’t think and act that way because if we do and get ‘caught’ it is a wonderful way to end your working career.

Scientists stuff up and they just call people names and smear.

Engineers stuff up and things fail and angry legal people get involved.

Reply to  Craig from Oz
February 6, 2020 10:02 pm

from Oz. Yes, I’m so tired of people acting as though ‘peer-reviewed’ means ‘true’!

Reply to  Craig from Oz
February 6, 2020 10:50 pm

I don’t “believe in” science. I believe in engineering, Craig, for the reasons you point out.

DaveW
February 6, 2020 7:14 pm

Kip – If you don’t know the Anders Pape Møller story, then you may find it informative. I find this one very reminiscent of it. Møller lost all credibility in his field, but then surfaced again making claims about the Chernobyl radiation effects on swallows – again with unlikely data.

I agree that Laskowski seems to be trying to do the right thing, but a bit more scrutiny of her collaborator’s data may have saved her this embarrassment. Still, it is too easy to trust a colleague. I feel sorry for Susan Riechert, whose work I have always respected, but who seems to have spawned Pruitt. I suppose he deserves the benefit of the doubt, but there does seem to be a strong pattern of high-fliers in vague fields (I’m thinking Cornell and food now) being charlatans.

DaveW
Reply to  Kip Hansen
February 7, 2020 7:05 pm

Kip ==> I don’t think I understand your comment? Wansink and Møller were very much mainstream until exposed and I didn’t suggest any ‘graphic’? Do you mean that the hypothesis that high-fliers may tend to be fabricators is tinfoil hat? Well maybe, but it seems a reasonable hypothesis to me. I know you think the system is just off track, but I think the system is broken.

I sent the Science and Retraction Watch links on this to a couple of spider researchers I know. One, although he does some work with ‘social’ huntsman spiders (my understanding is that no spiders are truly social, but some are subsocial), was not familiar with the personality research. The other had seen the Am Nat retraction, but didn’t follow it up because she had met Pruitt at a meeting, thought he was very charming and doing interesting work, and hoped the Am Nat was just a one-off.

February 6, 2020 8:10 pm

Kip – This was on the front page of the Globe and Mail today.

https://www.theglobeandmail.com/canada/article-mcmaster-university-researcher-under-fire-for-data-irregularities/

The Canadian government has awarded Dr. Pruitt C$2.45 million in grant funding. This is going to be a biggish deal in Canada.

February 6, 2020 9:58 pm

What might come as a surprise to some is the amount of money spent on spider ‘personality research’. How tremendously expensive it is to correctly obtain the tiniest bit of factual knowledge in that field! How wonderful that we copiously fund the brave explorers who show us the unity and wisdom of the Creator, etc. etc.

Mark Thompson
February 7, 2020 7:17 am

UC San Diego —> UC Davis, I think.

Editor
Reply to  Mark Thompson
February 7, 2020 11:23 am

Mark ==> Quite right — I have corrected the main body of the essay. The link was correct.

kim
February 9, 2020 2:35 pm

If you’d rather live and thrive,
Let the spiders run alive.
===================

kim
Reply to  kim
February 9, 2020 2:37 pm

Er, H/t Plum’s Robinson.
====================

Kevin Kinscherf
February 10, 2020 8:34 am

On Twitter …
“Jonathan Pruitt
@Agelenopsis
Behavioral ecologist, avid gaymer, and fast talker”

“Fast talker” should have warned the scientific community.

Johann Wundersamer
February 20, 2020 2:31 am

postscript to

How ESA-NASA’s Solar Orbiter Beats the Heat

____________________________________

Carl Friis-Hansen February 6, 2020 at 5:15 am

“black asphalt does indeed absorb a lot of heat during the day, but sheds heat to space with equal efficiency at night.”

Sounds technically right to me, but how come the average temperature is higher in cities than in rural areas?
____________________________________

Rural areas get their heat from the sun. Only.

Cities are heated by the sun + waste heat: the excess heat delivered to the cities by fuel suppliers.