Tuesday, 17 June 2025

Quantifying Integrity: The Shortcomings of the Novel Research Integrity Risk Index

 

A recent article in Nature examines the Research Integrity Risk Index (RI2), proposed in a June 2025 preprint and designed to draw attention to institutions where the imperatives of publication volume and ranking advancement may compromise the rigor and reliability of scholarly output. The index quantifies the proportion of an institution’s publications that have been retracted or that appeared in journals subsequently delisted from major indexing services such as Scopus and the Web of Science, and on that basis assigns universities to risk categories ranging from minimal concern to a “red flag” designation indicative of potential systemic vulnerabilities. https://www.nature.com/articles/d41586-025-01727-3
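To make the mechanics concrete, here is a minimal sketch of how such a proportion-based index and its risk bands might be computed. The equal weighting of retractions and delisted-journal papers and the cut-off values are my own assumptions for illustration, not the formula or thresholds from the RI2 preprint.

```python
# Simplified, hypothetical illustration of a proportion-based risk index.
# The thresholds and the equal weighting of the two signals are assumptions
# for demonstration only; they are not the RI2 authors' formula.

def risk_index(retracted: int, in_delisted_journals: int, total_publications: int) -> float:
    """Return the share of an institution's output flagged by either signal."""
    if total_publications == 0:
        return 0.0
    flagged = retracted + in_delisted_journals
    return flagged / total_publications

def risk_category(index: float) -> str:
    """Map the index to a coarse label (cut-offs are invented for this sketch)."""
    if index < 0.005:
        return "minimal concern"
    if index < 0.02:
        return "elevated"
    return "red flag"

# Example: 40 retractions and 120 delisted-journal papers out of 10,000 publications.
score = risk_index(40, 120, 10_000)
print(score, risk_category(score))  # 0.016 -> "elevated"
```

Even this toy version makes the key design choice visible: every flagged paper counts the same, regardless of why it was flagged.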

However, as previously argued in a 2021 post, article retractions span a broad continuum of underlying causes, which differ markedly in their implications for research integrity. Some result from relatively minor issues, such as lacking formal permission to reproduce an image, failing to disclose a potential conflict of interest, or an author withdrawing during the publication process while remaining listed, whereas others stem from serious misconduct such as data fabrication or falsification. https://pacheco-torgal.blogspot.com/2021/01/a-universidade-portuguesa-campea-de.html

Therefore, an integrity index such as the RI2 that treats all retractions uniformly—without distinguishing among their underlying causes or the extent of their impact on the scholarly record—risks generating misleading assessments of institutional performance. By failing to account for the severity or context of each withdrawal, it may unfairly penalize institutions with numerous low-severity retractions while allowing those with few but serious cases of misconduct to escape appropriate scrutiny.
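A toy comparison illustrates the distortion. The severity weights below are invented for the sake of the example; the point is only that a uniform count and a severity-weighted count can rank the same two institutions in opposite orders.

```python
# Toy comparison of uniform vs. severity-weighted retraction counting.
# The severity weights are invented for illustration; any real scheme would
# need a vetted taxonomy of retraction reasons.

SEVERITY = {
    "image permission missing": 0.1,
    "undisclosed conflict of interest": 0.3,
    "authorship dispute": 0.2,
    "data fabrication": 1.0,
    "data falsification": 1.0,
}

def uniform_score(reasons: list[str]) -> float:
    """Every retraction counts the same, as in an undifferentiated index."""
    return float(len(reasons))

def weighted_score(reasons: list[str]) -> float:
    """Retractions weighted by assumed severity (unknown reasons get 0.5)."""
    return sum(SEVERITY.get(r, 0.5) for r in reasons)

# Institution A: many low-severity retractions; Institution B: two fabrication cases.
inst_a = ["image permission missing"] * 8 + ["authorship dispute"] * 2
inst_b = ["data fabrication", "data falsification"]

print(uniform_score(inst_a), uniform_score(inst_b))                  # 10.0 vs 2.0
print(round(weighted_score(inst_a), 2), weighted_score(inst_b))      # 1.2 vs 2.0
```

Under the uniform count, the institution with many minor retractions looks far worse; once severity is taken into account, the two fabrication cases dominate.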

PS - As I also noted in my 2021 post, MIT defines serious research misconduct as fabrication, falsification, plagiarism, or deliberate interference, explicitly excluding honest error, differences of opinion, authorship disputes, and self-plagiarism. Databases such as Scopus and Web of Science should therefore urgently implement a distinct retraction category for serious cases of misconduct, enabling better tracking of the institutions whose retractions genuinely undermine scientific integrity.
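In practice, such a category could amount to little more than a controlled vocabulary mapped onto retraction notices. The sketch below uses hypothetical reason labels and an MIT-style split; an indexer would of course work from its own taxonomy of stated retraction reasons.

```python
# Sketch of the distinct "serious misconduct" retraction category proposed above.
# The reason labels are hypothetical; a real indexer would classify the reasons
# stated in retraction notices using its own controlled vocabulary.

SERIOUS_MISCONDUCT = {"fabrication", "falsification", "plagiarism", "deliberate interference"}
EXCLUDED = {"honest error", "difference of opinion", "authorship dispute", "self-plagiarism"}

def retraction_category(reason: str) -> str:
    """Assign a retraction reason to a category, MIT-style."""
    if reason in SERIOUS_MISCONDUCT:
        return "serious misconduct"
    if reason in EXCLUDED:
        return "not misconduct"
    return "unclassified"

for r in ["fabrication", "authorship dispute", "image permission missing"]:
    print(r, "->", retraction_category(r))
```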