The cult of novelty

You know it's there.

I know you have been conditioned to think that whatever you are researching has to be novel.

But let’s think about it for a second. 

Is the constant push for novelty in scientific publications actually justified?

Novelty is every reviewer’s favorite word

I dived into my inbox and retrieved every mention of “novelty” (and similar terms) in my exchanges with journals. It was a painful exercise.

“The merits of the study are certainly noted, but the reservations about novelty and impact were important for the editorial board.”

“The authors should discuss more in detail the novelty of their study and what sets them apart from previous studies.”

“I am afraid we are not persuaded that your findings represent a sufficiently striking advance to justify publication in “Journal X”.” 

“This reviewer questions the overall impact, value, and novelty of the work.”

“This reviewer doesn't find the sufficient novelty in the current findings, and feel that the quality of the manuscript doesn't reach the standard for “Journal X” either.”

“This has significantly strengthened the data content and novelty of the work, and I have no further points to raise.” (Hey, it’s not all bad!)

I think it’s quite clear that we are faced with an incontestable requirement for novelty.

“In many cases, peer review today has become little more than an industrial process that helps safeguard journal status through notions such as novelty or impact rather than to enhance research. This focus is not helpful to science; it is helpful to publishing.”
Pattinson & Currie (Learned Publishing, 2025)

The replication crisis is a symptom of the obsession with novelty

Have you ever had an idea, gotten excited about potentially building a project on it, and then gone to check PubMed (or Sci-Hub, or wherever you get your daily research) only to find out it had already been done?

Ah. Someone already answered my question.

But wait. Would you have done the exact same experiments with the exact same model and used the exact same methods? 

Probably not.

In fact, I would argue that you would most likely have investigated that question in a different manner. Your results would have either confirmed the findings of that paper or challenged them. And both are good things. Both outcomes would have served the scientific community.

Researchers are pushed to look for the next new thing before making sure that the knowledge they are building upon is validated and reliable. Directly or indirectly, this leads to an accumulation of irreproducible studies: we are in a replication crisis. A fairly recent survey of more than 1,600 biomedical researchers revealed that the vast majority think the replication crisis is linked to the “publish or perish” culture.

Shouldn’t we make sure that a finding is solid before we base our next project on it? 

I believe it is our duty to prioritize accuracy over novelty.

However, most funding agencies will not fund replication studies, prioritizing brand-new directions instead.

There is no money for it. 

But if you think about the billions (yes, billions) of dollars wasted trying to build on the findings of a 2006 Nature paper on Alzheimer’s disease, cited over 2,000 times, that turned out to harbor manipulated images and was retracted almost 20 years later... the funding needed for replication studies sure sounds minimal (and obviously worth it).

So, instead of relying on the occasional replication grant here and there, researchers should be encouraged to pursue these types of studies, and funding agencies should invest serious money in them.

Novelty is not the norm

The elephant in the lab is that most results we get from experiments are either inconclusive or show no significant effect. That’s the reality of scientific inquiry. That’s also why the file drawer where you stash your negative results is overflowing.

Alas, today’s publication bias disincentivizes researchers from sharing their negative results. Journals often do not even want to see them, and if they do and you manage to publish, your paper will most likely appear in a low-impact-factor journal. Not that there's anything wrong with that (in my opinion, at least), but the reality of the system is that this will not help you much as you struggle to climb the career ladder.

Not having access to these negative results not only gives us the impression that they don’t exist; it also leads to repetitive experimentation that wastes precious human time and grant money.

So, why not share them? Surely, journals are not the only way to share your data.

Address the elephant.

There is nothing stopping you.