Wild citations and where to find them

Citations are crucial to academia. For me, they are the most direct way of saying, “I stand on the shoulders of giants.” They allow us to build on existing knowledge, floor by floor, forming an ever-growing tower. As a PhD student, I’ve just begun laying my own bricks in this structure. So when my work got cited for the first time, I was thrilled.
Firstly, because it felt like confirmation that the research I had published was deemed useful by the community. Secondly, not only had they found it useful, they valued it enough to build on it in their own work. And thirdly, it was nice to know someone had read my paper and that it had not been a “waste”.
That feeling didn’t last long.
My research was cited in support of an argument that was the opposite of what I had actually written. It had been used as padding, a citation for citation’s sake. The work we had proudly produced had clearly not been read, or if it had, it was only skimmed, misinterpreted, and used as a means to an end. Not as a way to critically advance research.
At first glance, mis-citations might seem like innocent mistakes—oversights in an otherwise well-intended process. But in reality, they are a symptom of something much larger: a system that prioritizes speed and quantity over accuracy [1].
Academia is built on the principle that research should be carefully evaluated, debated, and refined. But in today’s publish-or-perish environment, the pressure to produce papers at impossible speed has changed the way we engage with literature. Citations have become a box to check.
When you’re rushing to meet a deadline, overwhelmed by the sheer volume of new papers published daily, how do you decide which sources to cite? You scan abstracts. Maybe you skim a few introduction paragraphs. If the title sounds relevant and another paper has already cited it, that’s good enough. Right?
But this is where things go wrong, and where science becomes more fragile.
Citation Loopity-loop
Once a mis-citation enters the academic bloodstream, it doesn’t just sit there—it can spread. Researchers cite a paper without reading it in full, trusting that previous authors must have vetted it properly [2]. But they didn’t. And neither will the next researcher. Suddenly, incorrect claims take on a life of their own, not because they are correct, but because they have been cited enough times to seem correct [3]. This happens even if the original researchers have since corrected their mistake.
Some of the most well-known scientific myths—like the infamous spinach iron-content mistake or the alpha-male wolf-pack structure—emerged from exactly this cycle. One erroneous citation gets copied and repeated until sheer repetition legitimizes it.
The citation police have no time
Peer review is often held up as the great quality control mechanism of academia, but let’s be honest—reviewers do not have the time to verify every single reference. Their job is to assess the methodology, results, and overall validity of a paper, not to play citation detective. Many simply assume that if a referenced paper has already been published and peer-reviewed, it must be accurate.
This creates a dangerous blind spot: if citation errors aren’t caught at the submission stage, they likely won’t be caught at all.
Too long; didn’t read
Another major issue is the rise of abstract-only citations. When was the last time you read every paper you cited in full?
For most researchers, time constraints make this impossible. Instead, we rely on abstracts. But abstracts oversimplify results, highlight only specific aspects of a study, and sometimes even misrepresent findings. When researchers cite based on abstracts alone, they risk distorting the original message.
This is what happened with my work. The authors had either misread or not read my paper at all, using it to support an argument that was the opposite of our actual conclusions and message.
Where can we go from here?
Let me be honest: my situation is not dire. Thankfully, the mis-citation of my research will not end the world. Once I got past my frustration, however, I wondered how we could actually solve this issue:
How can we improve the review system to check whether citations actually support the arguments they are attached to? Should reviewers take time to check each and every citation? Should we expect them to know all the cited literature and confirm whether it supports the claims being made? Can AI help us here?
Fixing mis-citations requires systemic change at multiple levels. Academic evaluation should prioritize the quality of citations over sheer quantity, with journals taking responsibility for flagging misuse. Metrics need to change.
Peer review cannot realistically verify every reference; there simply is not enough time. Targeted AI tools, however, could help detect mis-citations or odd citation patterns, making published work more robust [4].
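To make that concrete, here is a minimal sketch of what such a tool might look like. It assumes an off-the-shelf natural-language-inference model (roberta-large-mnli and its label set are my assumptions here, not a recommendation): treat the cited paper’s abstract as the premise, the citing sentence as the hypothesis, and flag contradictions for a human to review.

```python
# A minimal sketch of NLI-based citation checking, not a production tool.
# Assumes the Hugging Face transformers library is installed; the model and
# its labels (CONTRADICTION / NEUTRAL / ENTAILMENT) come from roberta-large-mnli.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def check_citation(cited_abstract: str, citing_sentence: str) -> str:
    """Ask whether the cited abstract (premise) supports the citing
    sentence (hypothesis); return the model's verdict and confidence."""
    result = nli([{"text": cited_abstract, "text_pair": citing_sentence}])[0]
    return f"{result['label']} ({result['score']:.2f})"

# Hypothetical example: a citing sentence that inverts the source's
# conclusion should come back as CONTRADICTION rather than ENTAILMENT.
abstract = "We find that method X degrades accuracy on small datasets."
claim = "Prior work shows that method X improves accuracy on small datasets."
print(check_citation(abstract, claim))
```

A filter like this would only surface candidates, of course; the point is to hand reviewers a shortlist worth checking, not to replace their judgment.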
Journals should also offer a clear, low-friction process for post-publication corrections. A publication should not be set in stone after proofs. A formal reporting mechanism should allow mis-cited authors to challenge and fix citation errors without needing the citing author’s approval.
At its core, this is about academic integrity—citations build the foundation of knowledge, and misusing them weakens the entire structure. The system needs to slow down. It’s time to be more critical, more accountable, and more proactive in how we cite and how we are cited. Time to make the tower of knowledge more stable so that it can grow to even greater heights, even if it grows a little more slowly.
-----
P.S. Throughout this article, you may have noticed some hyperlinks that seem to support my claims. If you clicked on them, you will have noticed that none of them lead to any actual work. Instead, they range from a rick-roll to a potato company, a cute dog-attire website, and an NGO (which I highly recommend you donate to). If you fell for my trap, congrats! You just learned why you should always check citations. If you didn’t, maybe it’s time to start.