There’s a particular kind of energy you get in a conference room when something is about to matter. You can feel it before anything starts: chairs filling quickly, people sitting closer than usual, no polite gaps left between strangers.
This was the case for the Great Debate, ‘The future of scientific publishing: do we need scientific publishing?’, at the European Geosciences Union General Assembly 2026. Then came an eerie silence as a question appeared on screen: ‘Do we still need scientific journals?’ Participants were invited to respond with either a yes or a no.
Attendees reached for their phones to scan the QR code and answer. My intuition leaned toward a ‘yes’, and I wasn’t wrong: the tally soon settled at 153 to 51. However, 25% of the room voting ‘no’ was too large a share for me to dismiss as mere noise. The way the figures drifted and changed as the count finalised suggested an atmosphere of lingering indecision, as if the assembly hadn’t quite committed to its own choice.
What followed wasn’t so much a clear-cut answer as a series of tensions. Still, one point of consensus emerged quickly when panellists were asked if (and why) we need journals: gatekeeping and organisation. For all their flaws, journals offer a mechanism for peer review and retractions, as well as the ability to map out an ever-expanding world of data. I watched the room as these arguments were made, noting that not everyone was convinced; a few heads were already shaking in silent dissent.
In this post, I’m not trying to summarise every argument made on stage. Instead, I want to reflect on a few that lingered, particularly the gap between what scientific publishing is supposed to be and what it often is in practice. And, of course, how we move between the two.
In many ways, the ideal version of scientific publishing is not that difficult to describe. There was little disagreement on what journals should be, and on the fact that, if that were the reality, the initial vote would likely have been a more confident ‘yes’.
1) They should be communities, not businesses. Built by scientists, for scientists, with decisions shaped from the bottom up rather than imposed from above. Access to publishing (and to publications) should not depend on the ability to pay.
2) They should act as a layer of quality control that people trust. Not just in principle, but in practice. This means ensuring that results are reproducible, that data and methods are accessible, and that claims are held to a consistent standard. At the same time, they should curate, not simply host, research: organising it into something that readers can navigate without being overwhelmed.
3) And last but not least, they should rely on a culture where participating in the publishing ecosystem (editing, reviewing, typesetting even) is seen as a meaningful scientific contribution. Something done carefully, transparently where possible, and recognised as part of the collective effort to improve the work of others.
None of this felt particularly controversial. If anything, it felt like a shared baseline. At this level, agreement is easy. It is the implementation that fractures. The reality that emerged (both from the discussion and, particularly, from the Q&A) was far messier.
The first tension is structural.
Publishing is deeply embedded in how scientific careers are evaluated. Metrics, visibility, prestige. A publication in a prestigious journal (define that as you will) is often seen as preferable, even when that journal is not necessarily aligned with the values just outlined. So researchers continue to submit, even to systems they openly criticise. Not because they are unaware of the issues, but because stepping away from that system is not a neutral decision when it still defines success. At the same time, participating in that system — reviewing, editing, contributing to its upkeep — is rarely rewarded in the same way. Careers are built on publications, not on the voluntary work that sustains the system behind them.
Peer review, usually presented as the backbone of quality control, is itself more fragile than the ideal suggests. There is no clear agreement on what it should look like. Transparency can increase accountability, but anonymity can offer protection, especially for early-career researchers asked to review the work of more senior scientists. Remove anonymity, and you risk discouraging honest critique. Keep it, and you allow reviewers to remain invisible behind their words. In practice, both systems coexist. Neither fully resolves the tension.
At the same time, the quality of review is inconsistent. Many researchers will recognise the experience of receiving reports that are thoughtful and constructive, and others that are rushed, superficial, or add little to the science. New concerns are also emerging, including the possibility of generative AI being used to produce reviews, raising further questions about what the ‘peer’ in peer review actually means.
Even the mechanisms meant to ensure rigour do not always function as intended. Editors are expected to enforce standards around reproducibility, data availability, and transparency, but this depends on time, resources, and, ultimately, individual commitment. Most researchers can point to examples where these safeguards have fallen short, and to the familiar experience of going on a quiet hunt for a dataset that should have been readily available.
So, describing the publishing landscape as ‘broken’ may be going one step too far, but it is clear that it is very uneven. The principles are widely shared. Their implementation is not. When problems are widely recognised, the solutions can sound deceptively simple. Publish in journals you trust. Review carefully. Support systems that align with your values.
In principle, the path forward is clear: if enough researchers choose differently, the system will change, but that clarity fades quickly in practice because the ability to choose differently is not evenly distributed.
Throughout the discussion, early-career researchers were repeatedly invoked as agents of change, encouraged to review diligently, to push back on unfair decisions, to help shape a better publishing culture. Yet, they are also the ones navigating the most fragile stage of their careers, where hiring, funding, and progression remain closely tied to metrics and journal prestige. Choosing principles over perceived impact is not a neutral decision when it can influence where a career goes next.
More established researchers, by contrast, often have greater freedom to step away from those constraints, which raises an uncomfortable question: who is actually in a position to lead change?
According to this Great Debate’s panellists, responsibility is often framed as collective, but the ability to act on it is uneven.
At the same time, change is not entirely hypothetical. Across the geosciences, alternatives are emerging, such as the growing number of diamond open access journals, built and run by scientific communities rather than commercial publishers. Review models are also evolving, with more journals experimenting with transparency and open discussion. These shifts show that different ways of publishing are not only possible, but already in motion.
From the Great Debate, I was left with the sense that scientific journals are, at least for now, here to stay. While we have become used to sharing our science in a multitude of ways (you name it: preprints, conferences, social media, etc.) journal articles remain, for many, the end point of a research project; the ‘thing’ that gives it a sense of completion. And ‘good’ journals (when they work as intended) are still effective platforms for sharing science openly and widely.
So perhaps the more realistic conclusion is not that journals will disappear, but that their role needs to be renegotiated.
As one panellist, Ken Carslaw, put it, the focus should be on ‘collectively choosing to move away from systems that do not serve us’.
But doing so requires more than statements of intent. It requires individuals to choose positive actions over performative ones.
