
Below is part of my letter to the Editor of the Journal of Informetrics, commenting on Han et al. (2025):
Han et al. (2025) offer a detailed
quantitative examination of Chinese medical researchers with retracted
publications from 20 leading hospitals, analyzing author characteristics,
retraction drivers, and career consequences. Comparing retracted researchers to
matched non-retracted peers, the study uncovers distinct patterns across career
stages: early-career researchers with retractions underperform academically,
experience steep declines in citations, reduced collaboration, and limited
career mobility, whereas senior retracted researchers maintain high
productivity, influence, and expansive collaboration networks, reflecting
cultural and institutional leniency. Probit regression demonstrates that
output-driven incentives, such as rapid increases in publication and citation
rates, raise retraction risk, while broader collaboration networks mitigate it.
Institutional peer pressure exerts minimal influence. Strikingly, retractions
due to scientific errors result in harsher professional penalties than those caused
by misconduct, including larger citation and collaboration losses, challenging
prior assumptions that fraud is most damaging. Using a
difference-in-differences framework, the study quantifies these impacts,
highlighting early-career researchers’ disproportionate vulnerability.
However,
Han et al. could have advanced a stronger and more explicit critique of the
current retraction system. The prevailing practice of treating all retractions
as equivalent poses a serious challenge to both scientific integrity and
fairness within the academic ecosystem. In its present form, retraction
functions as a blunt instrument, failing to distinguish between fundamentally
different causes and intentions underlying a paper’s withdrawal. A single
undifferentiated category is used to encompass acts of deliberate
misconduct—such as fabrication, falsification, or plagiarism—as well as
inadvertent methodological errors, miscalculations, or oversights in data
interpretation.
This lack of differentiation collapses essential distinctions
between ethical violations and honest mistakes, often leading to
disproportionate and unjust outcomes that unnecessarily damage the careers and
reputations of researchers acting in good faith. Retractions can have profound
and lasting consequences for scientific careers, extending well beyond the
immediate withdrawal of the paper in question. Evidence shows that a retraction
can reduce citations not only to the retracted work but also to the authors'
prior publications, generating long-term negative effects on scholarly reputation,
particularly when authors do not proactively disclose or self-report the error
(Lu et al., 2013). In general, retractions are associated with significant
declines in citation counts, and even those resulting from honest mistakes can
lead to severe reputational penalties for scientists (Azoulay et al., 2017).
Although many retractions arise from genuine errors, public perception often
equates them with misconduct or fraud, creating a pervasive stigma (Behavior,
2021). This fear of reputational damage can discourage researchers from
correcting the scientific record, potentially driving talented and
conscientious scientists out of academia. Retractions thus function as a form
of stigma that can call into question both a researcher’s professional competence
and ethical integrity, irrespective of whether the underlying issue was
inadvertent. Studies have highlighted that retraction notices themselves can
communicate this stigma, shaping reputational outcomes within the scientific
community (Xu & Hu, 2022).
Moreover, recent empirical work demonstrates
that retractions attracting intense public attention can precipitate
departures from scientific careers, disproportionately affecting researchers
whose work is under greater scrutiny. In addition, the structure of
citation and collaboration networks is typically altered following a
retraction, further amplifying the long-term consequences for the affected
authors (Memon et al., 2025). The consequences of
this undifferentiated approach are manifold. At the individual level,
researchers who make genuine mistakes face severe professional penalties,
including sharp reductions in citations, collaborative opportunities, and
career mobility, regardless of the absence of any malicious intent.
Early-career scientists are particularly vulnerable, as their networks and
reputations are still nascent, making them less able to absorb the shock of a
retraction. At the systemic level, the conflation of misconduct with honest
error can discourage transparency and self-correction. If admitting an error is
effectively indistinguishable from being labeled dishonest, researchers may
avoid reporting mistakes, and journals may hesitate to issue corrective
notices. The resulting culture of fear inhibits the epistemic process: science
thrives on error detection and correction, yet this vital mechanism becomes
stigmatized, undermining the reliability of the scientific record.
Therefore, implementing a principled retraction typology could provide a practical and thoughtful solution to this problem. A
typology would categorize retractions according to the underlying cause,
differentiating between intentional misconduct, gross negligence, honest error,
and procedural or editorial issues. By making the reasons for retraction
explicit and transparent, this system would allow the scientific community,
funding bodies, and the public to interpret the significance of a retraction
accurately.
Intentional misconduct would remain clearly marked and
appropriately stigmatized, preserving accountability and maintaining trust in
scientific norms. Meanwhile, honest errors could be framed as part of the
self-correcting nature of science, with minimal damage to the researcher’s
reputation and career trajectory. Insights from legal and penal frameworks
can provide valuable guidance in designing a fair and effective system for
scientific retractions. Just as criminal law carefully distinguishes between
intentional crimes, negligence, and accidents, academic corrections should
likewise differentiate deliberate misconduct from honest mistakes.
In legal
systems, penalties are proportionally scaled according to factors such as
intent, the harm caused, and the foreseeability of the outcome, ensuring both
fairness and accountability. For example, deliberate fraud is met with severe
punishment, whereas acts of negligence or inadvertent error typically result in
milder sanctions or corrective measures. Translating these principles to
academic publishing suggests a structured retraction typology that
distinguishes the underlying causes of retraction and responds to each with
transparency and proportionality.
Level 1 – Malicious Intent: Fraudulent Retraction
- Fabrication: Making up data or results and presenting them as genuine.
- Falsification: Manipulating research materials, equipment, processes, or data such that the research record is misrepresented.
- Plagiarism: Appropriating another person’s ideas, processes, results, or words without proper attribution, including verbatim or near-verbatim copying that materially misleads regarding the author’s contributions. This excludes limited use of standard methodological phrasing, text recycling (self-plagiarism), or authorship/credit disputes.
Level 2 – Negligence: Negligent Retraction
- Errors resulting from inadequate methodology, oversight, or carelessness, without intent to deceive. Examples include flawed experimental design or mismanaged data that do not constitute deliberate misconduct.
Level 3 – Honest Mistake: Corrective Retraction
- Errors occurring despite reasonable care, such as miscalculations, data handling mistakes, or unforeseen methodological limitations.
- Self-plagiarism, depending on the extent of text overlap.
Regarding the
classification of self-plagiarism under Level 3 (honest mistake), several
clarifications are warranted. Although self-plagiarism is frequently framed as
an ethical violation, concerns surrounding it appear overstated.
Callahan (2014) argues that self-plagiarism has been inflated into a perceived
crisis despite limited evidence that it meaningfully harms the scholarly
record. Editorial approaches vary widely: some journals impose rigid similarity
thresholds (e.g., 10%), while others suggest that the reuse of even a sentence
or paragraph may be unethical. Such absolutist interpretations overlook
standard scholarly practices, including the iterative development of ideas, the
reuse of methodological descriptions, and the repetition of theoretical
frameworks for clarity and continuity. Crucially, unlike traditional
plagiarism, self-plagiarism involves the reuse of one’s own previously
published material and does not entail deception regarding authorship.
Moreover, major institutional frameworks—such as MIT’s Procedures for
Dealing with Misconduct in Research and Scholarship—do not classify
self-plagiarism as research misconduct, underscoring its secondary ethical
status. Treating self-plagiarism as a moral failing risks diverting attention
from genuinely serious violations, such as fabrication or falsification, while
penalizing routine academic practices. From an ethical standpoint,
self-plagiarism should merit retraction only when it reaches clearly excessive
levels—typically above 30% of textual overlap—and when such reuse undermines
the novelty of the work or materially distorts the scholarly record. Below this
threshold, concerns are more appropriately addressed through transparency,
citation, or editorial correction rather than punitive measures. Overemphasis
on minor textual overlap risks fostering compliance anxiety rather than
promoting substantive ethical integrity.