In a manifesto published this week in Nature Communications, 21 scientists from universities across the United States, United Kingdom, Ireland, Italy, Germany, and Switzerland argue that science is losing valuable knowledge through unpublished negative results, inadequate documentation, and fragmented preservation practices. The concern is plausible, and in many ways overdue. Yet the paper's central weakness appears immediately: it treats the scale of the problem as self-evident rather than demonstrated. How often are failed experiments unknowingly repeated? What proportion of irreproducibility is actually caused by missing documentation, rather than underpowered studies, unstable reagents, or flawed experimental design? What are the financial, scientific, or human costs of such losses? The paper offers no empirical answers. Instead, it relies largely on other perspective pieces to justify major cultural and infrastructural reforms. Without quantification, it is difficult to judge whether the proposed solutions are proportionate to the problem being described. (https://www.nature.com/articles/s41467-026-72667-3)
The paper is equally reluctant to confront the political economy of scientific publishing—or its own place within it. Commercial publishers benefit from scarcity of access, novelty bias, and the prestige economy of high-impact journals. The authors acknowledge that publishers have “a role to play,” but avoid naming the structural conflict: many of the reforms they advocate run directly against the financial incentives of the institutions through which scientific prestige is currently distributed. This creates an uncomfortable tension. The paper is written by active researchers whose careers depend on the existing system, published in one of the journals that helps sustain that system, while calling for reforms that would require parts of it to be fundamentally restructured. Yet there is no acknowledgment of this positionality, and no reflection on what it means for the credibility—or political feasibility—of the recommendations. The most honest version of this manifesto would have confronted a harder question: are we, as authors, actually willing to do what we are asking others to do?
The paper is also strikingly optimistic about AI. Large language models are trained on the published scientific literature—the same positivity-biased and methodologically incomplete record the authors criticize. They cannot recover failed experiments that were never documented, nor reconstruct tacit laboratory knowledge that was never written down. What they can do is reorganize an already distorted archive and return it with extraordinary fluency. But there is an additional danger the paper never addresses: as AI-generated content increasingly enters the scientific and online knowledge ecosystem, future models may be trained not only on incomplete human records, but on synthetic outputs generated by previous models. As recent research on “model collapse” has shown, systems trained recursively on AI-generated data can progressively lose diversity, accuracy, and fidelity to the original human knowledge base. In that scenario, AI would not merely reproduce the biases of the scientific record—it could amplify and recursively entrench them. Trustworthy AI in science therefore depends not only on better data curation, but on preserving human authorship, human judgment, and human-generated knowledge as the foundation of scientific communication.
PS: Unfortunately, the paper never acknowledges that American science, still one of the institutional pillars of the global research enterprise it claims to protect, is facing an existential threat. The Trump administration has cancelled thousands of grants, gutted federal science agencies, subjected peer-reviewed journals to Department of Justice scrutiny, and defunded research areas on ideological grounds. These are not background conditions; they are direct attacks on the very infrastructure of knowledge production this paper is trying to reform and, in many cases, defend. A manifesto about the fragility of scientific knowledge that finds no space for that reality is not apolitical; it is politically evasive. It reads like a document preoccupied with ventilation standards while the building is already on fire.