Wednesday, 13 May 2026

The Case of a Disease That Never Existed: How Fake Research Entered AI Systems


Almira Thunström, a physician at the University of Gothenburg, designed an experiment as elegant as it was unsettling. She invented an entirely fictitious medical condition, bixonimania, a supposed skin disorder affecting the eyelids and linked to blue-light exposure, and uploaded two fabricated preprints to an academic repository. Within weeks, major large language models were presenting the condition to users as established medical fact.

The experiment was deliberately constructed to be transparently fraudulent. The fictional lead author, Lazljiv Izgubljenovic, was listed as affiliated with the non-existent Asteria Horizon University in the equally imaginary Nova City, California. The acknowledgements thanked “Professor Maria Bohm at The Starfleet Academy” and cited funding from “the University of Fellowship of the Ring.” One paper explicitly stated that it had been entirely fabricated; another described its study population as “fifty made-up individuals.” Yet none of these obvious warning signs prevented the fabricated research from being absorbed into AI-generated medical knowledge. https://www.nature.com/articles/d41586-026-01100-y

Less than three weeks after the fabricated preprints appeared, major AI systems were already presenting bixonimania as a legitimate medical condition. Microsoft’s Copilot described it as “an intriguing and relatively rare condition.” Google’s Gemini advised users to consult an ophthalmologist. Perplexity AI even estimated its prevalence at one case per 90,000 people, while OpenAI’s ChatGPT offered symptom comparisons. The experiment suggests that the very architecture of academic publishing can become a vehicle for misinformation when AI systems are trained to treat these signals as proxies for credibility.

More troubling still, the false research did not remain confined to AI-generated outputs. One of the fabricated preprints was later cited in a peer-reviewed article published in Cureus, part of the Springer Nature group, which described bixonimania as “an emerging form of periorbital melanosis linked to blue light exposure.” After inquiries from Nature, the journal retracted the paper on 30 March 2026, demonstrating how quickly fabricated research can move from preprint repositories to AI systems and, ultimately, into the scientific record itself.

PS - Yet the most obvious failure in this case was not the language models. It was the near-total absence of upstream verification. These papers were not sophisticated forgeries. They cited funding from “the University of Fellowship of the Ring,” listed an author affiliated with a university that did not exist, and located that institution in an imaginary city. And still, they passed through the submission process. Preprint repositories such as bioRxiv and medRxiv continue to accept submissions tied to institutions that may not exist, often with less scrutiny than a commercial web form. A basic automated check against established research registries such as the Global Research Identifier Database would have flagged “Asteria Horizon University” and “Nova City, California” in seconds. This is not a technical limitation, nor a question of cost. It is a question of institutional priorities. If platforms feeding the scientific record cannot detect references to fictional universities, imaginary cities, or funding bodies borrowed from fantasy literature, they should not be surprised when fabricated research flows downstream into AI systems, citation networks, and eventually the scientific literature itself.
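The kind of automated check described above is genuinely trivial to build. The sketch below is a hypothetical illustration, not the pipeline of any real repository: it assumes a small locally cached set of registry names (a production check would query a live service such as ROR, which absorbed the GRID data) and flags any submitted affiliation with no close match.

```python
import difflib

# Hypothetical stand-in for a cached research-organization registry.
# A real implementation would query a service such as ROR (successor to GRID)
# rather than a hard-coded set.
KNOWN_INSTITUTIONS = {
    "university of gothenburg",
    "karolinska institutet",
    "stanford university",
}

def flag_affiliation(affiliation: str, cutoff: float = 0.8) -> bool:
    """Return True if the affiliation has no close match in the registry."""
    name = affiliation.lower().strip()
    if name in KNOWN_INSTITUTIONS:
        return False
    # Fuzzy matching tolerates minor spelling or formatting differences
    # without letting an entirely invented name slip through.
    matches = difflib.get_close_matches(name, KNOWN_INSTITUTIONS, n=1, cutoff=cutoff)
    return not matches

print(flag_affiliation("University of Gothenburg"))   # real institution: not flagged
print(flag_affiliation("Asteria Horizon University")) # fictional: flagged
```

Even this toy version would have caught “Asteria Horizon University” at submission time, which is the point: the barrier is not technical sophistication but the willingness to run the check at all.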