Light gray ethics
We are familiar with the “fraud in science” stories that make the headlines. Those are the most unacceptable, dramatic and scandalous ones. They usually emerge only after years of being buried under data, or by people. But the key word in that sentence is “emerge”. Because scientific fraud does not sit at “vantablack” on the grayscale. There is a whole range of shades that are problematic, yet usually overlooked. And those do not “emerge”; nor do they make the headlines. I am talking about Questionable Research Practices (or QRPs); and while we already covered some extra-sensational examples in the past, the lighter end of the scale is equally interesting.
The term “questionable” is already downplaying their severity. It seems like we can question them and leave it there, let it go and move on. Which is usually what happens with the ones that are very close to light gray.
You are probably thinking, “I most certainly did not witness or engage in QRPs”.
But it may surprise you that some seemingly insignificant things we skip over every now and then are actually considered QRPs. Things like referencing papers without thoroughly reading them, citing scientifically irrelevant articles out of obligation (looking at you, reviewer #2), selectively reporting findings… And more.
Let’s go over some scenarios.
But first, disclaimer: these are not real! They were inspired by case studies described in various publications (linked at the end of this post).
Greg is clueless (it’s not his fault)
Let’s imagine a regular research group. This group is working on a project designed by the PI, who also secured the funding. A senior scientist coordinates the major part of the research and is also in charge of supervising students.
Greg, our main character, is a master’s student. He joins the team and starts conducting experiments for this project. He develops a key technical component for the experiment. Let’s say he designs and 3D-prints a custom “device” that becomes essential to the experimental setup. Then he goes on to collect large amounts of data and shares it with the team for analysis. Greg successfully completes his part of the project, eventually graduates and moves to another university to start a PhD. Many (many) months later, he discovers that a paper describing the study he worked on was published in a high-impact (let’s make things worse) journal. But his name only appears in the acknowledgements. He did not even know that a manuscript had been submitted.
So, no data were fabricated and no results manipulated here, yet the situation raises an ethical question about authorship decisions and the unfair distribution of credit, particularly in environments where hierarchy determines whose contributions are visible.
Dan thinks he is smart
Greg (yes, it’s him again) is a final-year PhD student and his PI is away on sabbatical. In the meantime, Dan, a highly accomplished postdoctoral researcher, supervises him and offers very useful suggestions that help Greg improve his experimental design and data interpretations. At first, Greg is excited by this mentorship and the boost his project receives. But things are about to change.
Some of Greg’s new results appear to contradict findings from his PI’s earlier work. Dan advises him NOT to include these results in the preprint manuscript he is preparing, explaining that the PI currently has a grant proposal under review, and that mentioning conflicting data could jeopardize funding. He frames this as a practical reality of the research environment and even implies that many successful scientists operate this way.
Greg begins to feel worried (sorry Greg). Once again no data were fabricated, but now Greg is extremely confused about the boundary between “strategic” presentation and the deliberate omission of information that could go as far as to change how the results are interpreted.
Not you too, Greg
Greg (the one and only) is now a professor and has his own research group. He believes strongly in collaborative work and delegates responsibilities to his PhD students, believing that it gives them valuable academic experience. One day, he receives a request to review a manuscript for a journal and passes the task to Laura, a second-year PhD student in his group who recently published her first paper and is therefore familiar with the publication process.

Laura takes the time to carefully read the manuscript, evaluates it thoroughly, obsessively goes over every single sentence and figure panel and writes down her comments. But her impostor syndrome kicks in, and she hesitates to comment on the novelty of the work or the suitability of the methods. She definitely does not want to appear incompetent to her supervisor, and she does not even consider asking for help. Instead, she decides to upload the manuscript to ChatGPT. Our beloved Chat-y generates all kinds of recognizable bullsh*t to fill in the gaps in Laura’s review, hallucinates some references and spits out a shiny completed peer(!)-review report.

Laura sends the report to Greg and never mentions using AI. Greg loves the final report and submits it, as is, to the journal. He does not indicate in any way that his PhD student wrote the review.
And again, nothing here may appear malicious, yet there are two big ethical issues in this scenario:
1) The ⚠️confidential⚠️manuscript was uploaded to an LLM, a “system” full of unknowns, without the authors’ or the editor’s knowledge.
2) Greg completely delegated the evaluation of the manuscript to his student without giving the editor a heads-up, not even after the fact.
Don’t be like Greg
It is a useful exercise to talk about these hypothetical case studies at lab meetings, since they can be very relatable at times and place everyone involved on the thin, blurry, muddy line between right and wrong.
Discussing these in a group will surprise you, because opinions are usually diverse, and this leads to very interesting debates. Proceed with caution though, because you may end up seeing your labmates in a very different light afterwards.
After all, there are those who do what is “right”, and those who do what works.
Until the dead-end.
Case studies
https://ukrio.org/wp-content/uploads/UKRIO-Case-study-pack-No.-1.pdf
https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1002/9781119067825.app1
https://www.research-collection.ethz.ch/handle/20.500.11850/664648