The posting below looks at an important issue regarding coauthor responsibility for the integrity of their published work. It is by Catherine Offord and is from the May 1, 2017 Careers issue of The Scientist. http://www.the-scientist.com/.
© copyright 2008-2017, The Scientist. All rights reserved. Reprinted with permission.
Coming to Grips with Coauthor Responsibility
The scientific community struggles to define the duties of collaborators in assuring the integrity of published research.
When cancer researcher Ben Bonavida accepted a visiting graduate student from Japan into his lab at the University of California, Los Angeles (UCLA) just over a decade ago, he treated Eriko Suzuki like every other student he had supervised for the past 30 years. “I met with her regularly,” Bonavida recalls. “We went over her data, she showed me all the Westerns, all the experiments.” After months spent working on the cancer therapeutic rituximab’s mechanism of action, “she presented her findings to me and the other collaborators in the lab, and based on that we published a paper in Oncogene.”
Appearing in 2007, the paper accrued nearly 40 citations over the next seven years. But in April 2014, the study gained a less favorable mention on PubPeer, a website where users anonymously discuss research articles, often raising possible causes for concern. One user noted that some of the Western blots used to support the paper’s conclusions looked suspicious. In particular, one figure appeared to contain a duplicated and slightly modified part of another image.
PubPeer’s readers didn’t have to wait long to find out whether their suspicions were well founded. Within the week, Bonavida’s visiting student—by then an assistant professor at Tokyo University of Agriculture and Technology—had confessed to image manipulation, and the paper was eventually retracted in 2016, with a brief statement citing “data irregularities.” In UCLA’s ensuing investigation, Bonavida was cleared of wrongdoing; nevertheless, he says, he was left in shock. “It affected me very deeply,” he says. “I have trained over a hundred students through my career. Nobody has done something like that with my work before.”
These days, Bonavida’s experience is becoming all too familiar. Scientific retractions are on the rise—more than 650 papers were pulled last year alone—and, more often than not, they’re the result of misconduct, whether image duplication, plagiarism, or plain old fraud. The pressure is now on the scientific community to address the issue of research integrity—and the role of coauthors like Bonavida in maintaining the veracity of research that they contribute to and ultimately endorse for publication. Even when coauthors have no involvement in the misconduct itself, is there something they should have done differently to avoid publication of the research in the first place?
The answer depends on who you ask, says Hanne Andersen, a philosopher of science at Aarhus University in Denmark. While some papers containing misconduct are the work of serial fraudsters who have deliberately duped their coauthors, many cases are not so clear-cut, and there’s a whole spectrum of opinions as to the level of the collaborators’ responsibility to verify the authenticity of all elements of the research project, not just their own contributions. In short, Andersen says, “the scientific community doesn’t have a uniform view.”
Over the past century, the average number of coauthors on a paper has climbed from essentially zero to between two and seven—with one of the most rapid increases seen in the biomedical sciences (PLOS ONE, doi:10.1371/journal.pone.0149504, 2016). “Multiauthored papers, often with more than 10 authors, are becoming commonplace,” wrote David Goltzman, a professor of medicine and physiology at McGill University, in an email to The Scientist. “In many cases, it is a major advantage to bring the expertise of scientists who have different research focuses together. [It] facilitates tackling scientific problems which could otherwise not be addressed.”
But this rise in coauthorship also exposes a vulnerability inherent to scientific research—that collaborations are fundamentally based on trust. “Trust is needed in science,” says Andersen. “If we didn’t trust each other, we would need to check everything everyone else did. And if we needed to check everything everyone else did, why collaborate in the first place?”
Carlos Moraes, a neuroscientist and cell biologist at the University of Miami who found himself in a similar position to Bonavida when a colleague’s misconduct led to the retraction of multiple coauthored papers, agrees. “If you are the main author of a ‘several pieces’ type of work, you can do your best to understand the raw data and the analyses,” he wrote in an email to The Scientist. “Still, trust is a must when the technique or analysis is beyond your expertise.”
But trust between collaborators can be violated, and when papers turn out to contain errors or falsified data, the damage is not limited to the guilty party. While scientists who issue corrections quickly and transparently may be unscathed or even rewarded for doing the right thing (see “Self Correction,” The Scientist, December 2015), recent research suggests that coauthors’ careers can take a hit after retractions—particularly if misconduct is involved—even if they are cleared of wrongdoing (J Assoc Inf Sci Technol, doi:10.1002/asi.23421, 2015). In cases where one or a few researchers commit fraud, “other authors are in effect ‘victims’ of the scientific misconduct,” says Goltzman, who has had his own experience of retraction fallout after a colleague was found to have falsified large amounts of data.
Some see the issue as more nuanced, however. “It’s quite odd that you would consider authors of a fraudulent paper to have no responsibility,” says Daniele Fanelli, a Stanford University researcher who studies scientific misconduct. “But that’s because we’re in a system in which those authors would be getting undue credit for that paper if the problems hadn’t been discovered.” In Fanelli’s view, the issue boils down to ambiguity about what coauthorship entails, particularly when it comes to ensuring that the manuscript is accurate and complete. It’s a subject that has “almost willfully been ignored,” he says.
WEB OF RETRACTIONS: One author’s misconduct can have profound effects on the research community. The eight researchers with the highest individual retraction counts in the scientific literature—many of them for misconduct—have together coauthored problematic papers with more than 320 other researchers (circles, sized by retraction count and colored by continent of primary affiliation). The number of retraction-producing collaborations (black lines) between any two researchers varies, but in several cases, researchers produce multiple problematic papers with the same individuals or groups, leading to highly interconnected clusters of scientists linked by their retraction history.
Indeed, despite the growing abundance of collaborations in the global scientific community, the duties of individual researchers and their role in upholding a study’s integrity are rarely defined. During the UCLA investigation, for example, Bonavida says he and his colleagues realized that, even though Bonavida was not only a coauthor but the lab head, the university had no protocol outlining his responsibility for verifying the paper’s results. “They didn’t have any rules for the faculty that you need to keep documents and original data for so many years, and so forth,” he says. “They never made any such guidelines.”
A similar lack of procedure is also true of the journals that publish the research. Although some journals now require authors to itemize their contributions, there are no hard-and-fast standards about what coauthorship entails. “It’s dicey,” says Geri Pearson, co-vice chair of the Committee on Publication Ethics (COPE), a nonprofit organization that provides guidelines to journal editors on how to handle disputes in scientific publishing. “There’s a lot of fuzziness about authorship.”
Some journals have maintained that authors should accept equal responsibility for a paper—meaning both credit for its success and blame for its flaws. In 2007, an editorial in Nature suggested an alternative—journals should require at least one author to sign a statement vouching for the paper and claiming responsibility for any consequences should the study be found to contain “major problems.”
But such “solutions” are generally criticized as unrealistic. Nature’s proposal attracted dozens of responses on its site, almost all of them negative. “What does it even mean?” Ferric Fang, a microbiologist at the University of Washington who also studies scientific misconduct, tells The Scientist. “That there should be an individual who flies around to each person’s lab and does an inspection? Even then, how could you be sure that someone wasn’t doing something unethical? . . . To act as if we can declare that [one person is] fully responsible and that makes it so, I think it’s kind of ridiculous.”
Rather than making a single, broad definition of coauthor responsibility, then, some researchers instead argue for complete transparency when a paper is found to contain flaws. Retracted papers are notoriously persistent in the literature, continuing to accumulate citations long after their findings have been debunked. (See “The Zombie Literature,” The Scientist, May 2016.) The UCLA group’s Oncogene paper, for example, was cited at least 15 times between being flagged on PubPeer in 2014 and being retracted two years later. Moreover, retraction notices themselves are often opaque, making it unclear what exactly led to a paper’s retraction, or how authors behaved during the process.
To address this problem, some researchers have proposed standardized retraction forms (see “Explaining Retractions,” The Scientist, December 2015), and in 2015 the Center for Open Science and the Center for Scientific Integrity, the parent organization of Retraction Watch, announced their joint effort to create a retractions database, searchable by various classifiers, including all coauthors, journal of publication, and the reason for the retraction. The tool, a preliminary version of which went live at retractiondatabase.org in December 2016, could aid the monitoring of published research itself, as well as help identify labs or individuals who are continually linked to misconduct, notes Andersen. “If you’re associated with it once, it would be a pity if you are punished for what someone else did,” she says. “But if you’re repeatedly associated with it, maybe that’s not a great lab for training young scholars.”
Addressing the cause
Even without a solid definition of coauthor responsibility, most researchers agree that scientists themselves can help combat misconduct with a more prudent attitude towards collaboration. “You see reports afterwards where people say, ‘Well, this looked almost too good to be true,’” says Andersen. “But nobody intervened.” Individual researchers could be more vigilant, she adds, particularly in the supervision of junior researchers. Bonavida says that he now makes more of an effort to explain to graduate students how to correctly present their data. And Moraes says he has become “a lot more careful when scrutinizing the raw data.” His advice: get all the data, “including the so-called ‘unimportant controls,’ and not only the final bar graph.”
Researchers can also help combat misconduct by making adjustments to the way they organize their collaborations. Goltzman wrote that his group, part of a multicenter study on osteoporosis that uses considerable volumes of medical data, has now adopted procedures that encourage greater transparency. For example, “we previously allowed each investigator to mine all the data they deemed necessary for their study by access to a central database,” he explained. “We are now asking each investigator to request the data they need from a statistician . . . so that we know exactly what data is required and provided.”
Of course, while these measures may make getting away with misconduct more difficult, there’s only so much collaborators can do. Preventing misconduct altogether is a challenge that many argue requires a long hard look at the scientific community in general, including the pressures it places on researchers. Misconduct and retractions “are just symptoms of a process that’s not working at optimal efficiency,” says Fang. “What’s really needed is a more wholesale rethinking about how scientists are supported.” Solutions that don’t address the related problems of too little funding for too many researchers and the publish-or-perish mentality that still pervades the academic community are mere tweaks to a flawed system, he adds.
In the meantime, though, there’s a growing appreciation that research integrity is not black or white. “It’s not ‘Everything is well and good,’ or, ‘We’re moving into misconduct,’” says Andersen. In recognition of the gray areas of research conduct, there are now initiatives aimed at getting wayward scientists back on track. A National Institutes of Health-funded researcher rehab, for example, is currently working with scientists whose misconduct, or oversight of misconduct, has led to the publication of problematic papers. The organizers of the program, which includes a three-day workshop and follow-up contact over three months, claim that participants show tangible improvements in the way they manage their labs and conduct research.
Such efforts mark a move in the right direction, notes Andersen. “If we can catch [questionable conduct] early on, and train people, and make sure they realize that this is questionable, we can make them better scientists,” she says. “That would be far better than catching it late—so late that we end their careers.”
Correction (May 1): This story has been updated from its original version to correctly state that Aarhus University is in Denmark, not in the Netherlands. The Scientist regrets the error.