Symptomatic treatment of a broken system

The “publish or perish” culture pushes researchers to publish papers (regardless of the state of their research) for career survival. Academic jobs are tied to people’s publication records, and although change is happening, it is happening at a glacial pace. Since the system is gamified in this way, people are tempted to play. And they play to “win”.

This leads to many side effects that distort the way science is done. These are also called questionable research practices or QRPs. Let’s go over a few of them.

Salami slicing

Salami slicing is dividing a single study that was not meant to be sliced up into several small papers. This yields MORE papers, potentially more citations, and a thicker publication record for the slicer. A concrete definition is hard to pin down, and perhaps it applies mostly to the social sciences, but the essence is this: it is not ethical to use the same dataset and the same methods to extract two slightly differently focused interpretations and then publish them in different journals.

As the Ethics Toolkit from Elsevier says (very dramatically): 

“The same "slice" should never be published more than once.”

Salami slicing is bad because it’s deceptive, leads to incomplete work, and wastes the time of readers (and editors and reviewers), who have to track down and review several slices of a study that should have been a whole salami from the beginning.

The key here is that the same hypothesis should not be the center of two different studies using the same dataset. 

And this brings us to…

HARKing = “Hypothesizing After the Results are Known”

This side effect is a tricky one, because scientific publishing (as it is today) forces researchers to write “exciting” papers, and null results are difficult (if not impossible) to publish. Researchers are therefore incentivized to write in a sensational way, even though the goal is to communicate scientific findings. The “story” has to captivate the audience.

Imagine you have a hypothesis and you do research to test it, but the results don’t quite fit. HARKing is modifying the hypothesis after the fact to “make everything fit”. The disconfirmed hypothesis goes unreported, and others may go on testing it over and over again, wasting time and money...

HARKing can also mean cherry-picking results that actually fit with your hypothesis and leaving out others that may contradict or blur it. 
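
To make the cherry-picking mechanism concrete, here is a minimal Python sketch (the outcome counts, group sizes, and seed are made up purely for illustration): on pure noise, testing enough outcomes almost guarantees a few chance “hits”, which a HARKer can then present as the hypothesis all along.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Each simulated "study" is pure noise: 20 unrelated outcomes measured
# in a treatment and a control group, with no real effect anywhere.
n_outcomes, n_per_group, n_studies = 20, 30, 500

hit_counts = []
for _ in range(n_studies):
    treatment = rng.normal(size=(n_outcomes, n_per_group))
    control = rng.normal(size=(n_outcomes, n_per_group))
    # Test every outcome, as an exploratory analysis might.
    p_values = [stats.ttest_ind(treatment[i], control[i]).pvalue
                for i in range(n_outcomes)]
    # The HARKer keeps only the chance "hits" and writes the paper
    # as if those outcomes had been the hypothesis all along.
    hit_counts.append(sum(p < 0.05 for p in p_values))

hit_counts = np.array(hit_counts)
print(f"Average 'significant' outcomes per study: {hit_counts.mean():.2f}")
print(f"Studies with at least one 'hit': {(hit_counts > 0).mean():.0%}")
# Expect roughly 1 false positive per study, and a "hit" in about
# two thirds of studies, despite there being no effect at all.
```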

One way to avoid it is to pre-register your study before actually doing the research. Even pre-registration cannot always prevent HARKing, though, as we have witnessed recently…

P-hacking

Once again, the quest for positive results leads to a (sometimes subconscious) obsession with novelty and “significance”. I will not get into the fact that p values and significance are based on imaginary/arbitrary thresholds, but you can find detailed information on it here.

The practice of p-hacking involves selectively reporting values or replicates, questionably excluding “outliers”, using inappropriate statistical tests, and otherwise exploiting the flexibility of data analysis in the hope of obtaining that magic p value below 0.05.
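
One flavor of p-hacking is easy to simulate. Below is a minimal sketch (assuming Python with NumPy and SciPy; the sample sizes and number of simulated studies are arbitrary) of “optional stopping”: there is no real effect in the data, yet peeking after every new pair of observations and stopping at the first p < 0.05 pushes the false positive rate far above the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def peeking_study(start_n=10, max_n=100, alpha=0.05):
    """Run one study on pure noise, re-testing after every new pair of
    observations and stopping at the first p < alpha (optional stopping)."""
    a = list(rng.normal(size=start_n))
    b = list(rng.normal(size=start_n))
    while len(a) < max_n:
        if stats.ttest_ind(a, b).pvalue < alpha:
            return True  # declare "significance" and stop collecting data
        a.append(rng.normal())
        b.append(rng.normal())
    return stats.ttest_ind(a, b).pvalue < alpha

# With no real effect, a single fixed-sample test comes out "significant"
# about 5% of the time; peeking inflates that rate several-fold.
n_studies = 1000
false_positive_rate = np.mean([peeking_study() for _ in range(n_studies)])
print(f"False positive rate with optional stopping: {false_positive_rate:.1%}")
```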

But statistical (in)significance does not always mean biological (in)significance.
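
To illustrate with a hypothetical simulation (again in Python; the effect size of 0.02 standard deviations and the sample size are invented for the example): given enough data, even a biologically trivial difference clears the p < 0.05 bar.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# A biologically negligible effect: the group means differ by only
# 0.02 standard deviations -- but the sample is enormous.
n = 200_000
a = rng.normal(loc=0.00, scale=1.0, size=n)
b = rng.normal(loc=0.02, scale=1.0, size=n)

result = stats.ttest_ind(a, b)
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
cohens_d = (b.mean() - a.mean()) / pooled_sd
print(f"p = {result.pvalue:.1e}, Cohen's d = {cohens_d:.3f}")
# p comes out tiny while the effect size stays trivial:
# statistical significance is not the same thing as importance.
```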

Getting rid of QRPs

Sometimes these QRPs happen without the researcher even realizing it, and with no ill intent behind them. These practices may be inherited from supervisors who inherited them from their own supervisors. That is why we must define and understand these side effects, then stop and think about how we do research.

Here is your chance to recalibrate your research practices. Stop chasing the magic p value. 

The goal of doing research is NOT to publish papers. 

Once we make our peace with that, the side effects will also disappear.