Experiments in Publishing: Rethinking How Knowledge Is Shared
The way we publish scientific findings has stayed surprisingly rigid for something so central to progress. The standard IMRaD format (Introduction, Methods, Results, and Discussion) has roots in the 19th century, with Louis Pasteur an early adopter of structured reporting, and since the 1950s it has been the standard structure for writing a scientific paper. When science moved online, it adopted the PDF as its standard file format, which remains dominant to this day. Scientific publishing has been tied to journals since 1665, and for most of that period journals were run by scientific societies. From the 1940s, however, commercial academic publishing expanded, driven in large part by Robert Maxwell. Since then, commercial publishers have dominated how we share science, who gets to share scientific findings, and what that sharing looks like.
Lately, though, there’s a wave of experimentation trying to rethink publishing from the ground up. Not just tweaking formats, but questioning core assumptions: What counts as a “publication”? Who gets to publish? When is something ready to share?
Let’s walk through some of the more interesting, and experimental, directions this is heading.
Arcadia Science
Arcadia Science is one of the more radical experiments in this space, not because of a single tool or format, but because it tries to rethink the entire research-to-publication pipeline as one integrated system. Arcadia was founded in 2021 by scientists and technologists with a deliberately long time horizon: the goal wasn’t just to do research, but to redesign how research works.
From the beginning, it positioned itself differently from both academia and traditional biotech. This led to a strange hybrid: part research lab, part startup incubator, part publishing experiment. Instead of separating discovery, commercialization, and communication, Arcadia tries to collapse them into a single loop. Scientists aren’t just producing papers; they’re building tools, launching companies, and publishing along the way.
Crucially, they’re also trying to align incentives differently. Contributors across the research lifecycle, from early experiments to writing and editing, share in the upside, reflecting the idea that science is a collective process, not just the final paper. Rather than standalone publications, Arcadia organizes work into evolving narratives. Each “pub” sits inside a broader story of a project:
- what question they’re exploring
- how their thinking is evolving
- how individual results connect
This starts to look less like a stack of papers and more like a public lab notebook with structure. Arcadia leans into public feedback rather than closed peer review. Anyone can comment on their work, and their own scientists actively review others’ work in public as well. In fact, they are one of the more prominent preprint peer review efforts.
DeSci
If Arcadia Science is about redesigning research from within an institution, DeSci (Decentralized Science) is trying something more ambitious and more chaotic: rebuilding the entire system without institutions as the central authority.
It’s less a single organization and more a loose movement, pulling ideas from crypto, open-source software, and internet-native coordination. The core instinct is simple: science is too important to be bottlenecked by a handful of gatekeepers (journals, funders, universities), so let’s replace those bottlenecks with open, programmable systems. In the early 2020s, communities began forming under the broader banner of DeSci, including groups like the DeSci Foundation. The movement gained momentum during the COVID-19 era, when the limitations of traditional publishing became painfully visible.
DeSci isn’t just about publishing, it’s about decomposing the entire “stack” of science into parts that can be rebuilt:
- Funding - community-driven DAOs (Decentralized Autonomous Organizations)
- Publishing - open, immutable records
- Peer review - transparent, reputation-based systems
- Ownership - tokenized intellectual property
- Collaboration - global, permissionless participation
Instead of institutions bundling all of these roles, DeSci treats them as modular layers that can be independently redesigned. It’s an internet-native philosophy: if the system is designed well enough, you don’t need gatekeepers.
DeSci is full of promise but also real challenges:
- Quality control: how do you prevent low-quality or misleading work?
- Governance: are token holders actually qualified to evaluate science?
- Incentives: does financialization distort research priorities?
- Adoption: can these systems integrate with existing institutions?
Right now, it’s much closer to a sandbox than a replacement system.
Micropublications
If traditional publishing is built around the idea of the complete story, a full paper with introduction, methods, results, and discussion, micropublications start from a different premise: What if a single, well-defined result were enough? Think of it as unbundling the paper into its smallest meaningful pieces.
The roots of micropublications go back to a long-standing issue in science: the “file drawer problem”. Researchers tend to publish positive results, novel findings and clean, compelling narratives. In doing so, they often do not share negative results, replications, partial findings or messy or inconclusive data. Over time, this creates a distorted picture of reality. Entire lines of inquiry can look more promising than they actually are, simply because the failures are invisible.
By the 2010s, as conversations around reproducibility intensified, people started seriously asking: what if we just published more of the small stuff? That question led to experiments in “minimal publishable units” and eventually to more structured efforts around micropublications. A micropublication is exactly what it sounds like: a single finding, clearly described, with enough context to understand and reuse it, but without the overhead of a full paper.
Of course, this approach raises some real questions:
- Signal vs noise: if everything is publishable, how do you find what matters?
- Incentives: will researchers get credit for smaller contributions?
- Fragmentation: does breaking work into tiny pieces make it harder to see the big picture?
- Adoption: will institutions and funders take these outputs seriously?
Micropublications did see use during the COVID-19 pandemic, particularly early on, when established knowledge was scarce and new findings were needed rapidly. This, perhaps, highlights where micropublications can serve a very specific need.
Octopus
Octopus takes modularity even further. Instead of publishing a single paper, researchers publish separate components. Octopus breaks research into eight distinct publication types:
- Problem
- Hypothesis/Rationale
- Method/Protocol
- Data/Results
- Analysis
- Interpretation
- Real-world application
- Review
So instead of writing one paper that does everything, researchers contribute individual pieces to a shared, evolving network of knowledge. Each piece can be independently reviewed, credited, and reused. This could fundamentally change incentives. Instead of rewarding polished narratives, it rewards contributions at every stage of the research process.
As elegant as the idea is, Octopus runs into some real friction:
- Cognitive load: it’s easier to read a paper than navigate a web of components
- Cultural inertia: careers, funding, and prestige are built around papers
- Adoption problem: the system only works if enough people use it
- Synthesis gap: if everything is modular, who connects the dots?
There’s also a human factor: scientists don’t just produce knowledge, they make sense of it. The narrative, while imperfect, does serve a purpose. Octopus sits at the far edge of publishing experiments, a kind of “endgame” for modular thinking. It’s a big leap from where we are now.
Interactive articles - Curvenote and MyST
If Octopus imagines science as a network and micropublications shrink the unit of knowledge, interactive articles ask a more visceral question: What if reading research felt less like reading and more like using software? This is where platforms like Curvenote and the MyST Markdown ecosystem come in. Together, they’re trying to turn the scientific article into something you can explore, not just consume.
For decades, the “paper” evolved around the constraints of print: fixed layouts, static figures, and linear narratives. There were page limits, colour limits, and space constraints. Even when publishing moved online, most articles remained digital replicas of paper, often carrying over the same strange limits.
Efforts like Jupyter notebooks and open-source scientific tooling began to blur the line between analysis and communication. MyST emerged from this ecosystem as a way to formalize that shift, combining Markdown, code, and scientific structure into a single authoring system. Curvenote builds on top of that idea, turning these documents into full publishing experiences.
Instead of a static document, an interactive article is modular (built from reusable blocks), computational (connected to code and data), and explorable (readers can interact with content directly). In Curvenote, articles are composed of “blocks” that can include:
- text
- figures
- equations
- live code outputs
- interactive charts or maps
These aren’t just embedded elements; they’re native to the document itself. One of the biggest shifts is that articles can include live computational elements: figures can be generated from real code, and datasets can be inspected or manipulated. Instead of saying “we ran this analysis,” the article can show it, and sometimes let you rerun it yourself. In traditional papers, figures are static images. Curvenote allows charts to behave more like mini-applications than illustrations, letting readers change variables, uncover new patterns, or reshape the data in different ways.
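To make this concrete, here is a minimal sketch of what the source of such an article can look like in MyST Markdown. The `{code-cell}` directive follows MyST-NB conventions for executable content; the frontmatter fields and the plotting code itself are illustrative, not taken from any particular Curvenote article:

````markdown
---
title: An interactive results section
kernelspec:
  name: python3
---

The prose, the analysis code, and the figure it produces
all live in one source file.

```{code-cell} python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative analysis: the figure below is generated
# at build time from this code, not pasted in as an image.
x = np.linspace(0, 10, 200)
plt.plot(x, np.sin(x))
```

```{note}
Directives like this one add structured, reusable blocks
(figures, equations, admonitions) to the same document.
```
````

When the document is built, the code cell is executed and its output is rendered in place, which is what lets a published figure stay tied to the code and data that produced it.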
Like all these experiments, there are real tradeoffs:
- Complexity: harder to author than a simple paper
- Longevity: will interactive elements still work in 10 years?
- Standardization: no universal format yet
- Cognitive load: not every reader wants to “interact”, sometimes you just want the takeaway
There’s also a subtle tension in that too much interactivity can overwhelm the narrative instead of enhancing it.
Where This Might Be Heading
None of these approaches have “won” yet. We’re in a phase of exploration, where different models coexist. Perhaps the interesting question isn’t which one replaces traditional publishing but rather how they might combine. This could lead us to an entirely new way of communicating scientific findings, one that is fit for the 21st century. And honestly, it feels overdue.