Bixonimania: How AI Turned a Joke Diagnosis into “Peer‑Reviewed” Medicine

From Legal Insurrection

Swedish researchers created a fake eye disease to see whether AI chatbots would repeat it as if it were real. The results were anything but funny.

Posted by Leslie Eastman 

Late last year, I warned about the staggering amount of unrestrained scientific fraud being published via paper mills and sham journals.

This trend is especially troubling, as adherence to scientific theory and rigorous, reproducible research allows humanity to make progress in critical fields essential to civilized living (e.g., medicine, energy, public health, and national security). If we can no longer trust the data, our ability to make improvements and innovations will be severely compromised.

Public trust in scientific research is already corroding, and false findings presented as “trustworthy” have already impacted policy-making in ways that are expensive and harmful.

Now, the rapid adoption of artificial intelligence is adding another disturbing dimension to the increasing distortion of “science”.

Back in 2024, researchers created a fake eye disease called “bixonimania” to see whether AI chatbots would repeat it as if it were real.

They wrote obviously bogus research papers about this made‑up condition and posted them online, including hints such as a fake author and notes saying the work was invented. Within weeks, major chatbots started describing bixonimania as a real diagnosis and even gave people advice about it when they asked about eye symptoms.

It’s the invention of a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, who dreamt up the condition and then uploaded two fake studies about it to a preprint server in early 2024. Osmanovic Thunström carried out this unusual experiment to test whether large language models (LLMs) would swallow the misinformation and then spit it out as reputable health advice. “I wanted to see if I can create a medical condition that did not exist in the database,” she says.

The problem was that the experiment worked too well. Within weeks of her uploading information about the condition, attributed to a fictional author, major artificial-intelligence systems began repeating the invented condition as if it were real.

Even more troublingly, other researchers say, the fake papers were then cited in peer-reviewed literature. Osmanovic Thunström says this suggests that some researchers are relying on AI-generated references without reading the underlying papers.

The preprints included a reference to the nonexistent Asteria Horizon University in “Nova City, California”. There was also a mention of “Starfleet Academy” (though an additional reference to Dr. Leonard McCoy would have been a nice touch).

The AI chatbots’ answers authoritatively described bixonimania as if it were real.

On 13 April 2024, Microsoft Bing’s Copilot was declaring that “Bixonimania is indeed an intriguing and relatively rare condition”, and on the same day, Google’s Gemini was informing users that “Bixonimania is a condition caused by excessive exposure to blue light” and advising people to visit an ophthalmologist.

On 27 April 2024, the Perplexity AI answer engine outlined its prevalence — one in 90,000 individuals were affected — and that same month, OpenAI’s ChatGPT was telling users whether their symptoms amounted to bixonimania. Some of those responses were prompted by asking about bixonimania, and others were in response to questions about hyperpigmentation on the eyelids from blue-light exposure.

A researcher invented a fake eye condition called bixonimania, uploaded two obviously fraudulent papers about it to an academic server, and watched major AI systems present it as real medicine within weeks.
The fake papers thanked Starfleet Academy, cited funding from the…

— Hedgie (@HedgieMarkets) April 10, 2026

Thunström’s experiment is a revelation of how little scrutiny goes into the “science” we are supposed to trust: her test submissions were loaded with red flags that should have been evident to anyone who actually read the text. Even so, references to the fake research ended up in a “peer-reviewed” publication.

  • Three researchers at the Maharishi Markandeshwar Institute of Medical Sciences and Research in India published a paper in Cureus, a peer-reviewed journal published by Springer Nature, that cited the bixonimania preprints as legitimate sources.
  • That paper was later retracted once the hoax was discovered.

The problem extends far beyond one fake disease. ECRI’s 2026 Health Technology Hazard Report found that chatbots have suggested incorrect diagnoses, recommended unnecessary testing, promoted substandard medical supplies, and even invented nonexistent anatomy when responding to medical questions. All of this is delivered in the confident, authoritative tone that makes AI responses so convincing.

The scale of the risk is enormous. More than 40 million people turn to ChatGPT daily for health information, according to an analysis from OpenAI. As rising healthcare costs and clinic closures reduce access to care, even more patients are likely to use chatbots as a substitute for professional medical advice.

When a joke diagnosis morphs into “peer-reviewed” research, it is clear that the crisis in scientific credibility is no longer confined to sloppy research or corrupted journals but now extends into the algorithms that many people are now relying on for answers to serious health issues.

False information and bad data can and will loop back through AI and provide the basis for useless and potentially harmful “science”. This situation is anything but funny.

I fear it’s going to be quite some time before we have a handle on scam research and AI use of fake information.

7 Comments
Scarecrow Repair
April 13, 2026 6:11 pm

How’s that go? First they laugh at you …

ricksanchez769
April 13, 2026 6:36 pm

From Gemini 13-Apr-2026 2136 EDT

Actually, I have some interesting news for you: Bixonimania is not a real medical condition.

It was a fake disease specifically invented by researchers (led by Almira Osmanovic Thunström) in 2024 to test whether AI models could be “poisoned” by misinformation. They uploaded fabricated papers to preprint servers to see if AI would treat the made-up data as fact.

Since the goal of the experiment was to see how well the AI could hallucinate, the “symptoms” were designed to sound plausible but were entirely made up:

The “Symptoms” of Bixonimania (The Hoax)

According to the fake research papers, the “symptoms” included:

  • Periorbital Hyperpigmentation: A fancy way of saying dark circles or discoloration around the eyes.
  • Distinctive Pinkish Hue: Specifically on the eyelids (palpebrae).
  • Sore or Irritated Eyes: Attributed to prolonged “blue light” exposure.
  • Skin Sensitivity: Supposedly affecting specific fake skin types (like “Fitzpatrick types Q and Z”; the real scale only goes from I to VI).
April 13, 2026 7:33 pm

As I questioned on an earlier thread: who corrects AI when it is wrong, lies, or gets hijacked by GHE & CAGW true believers, just like Wiki?
It’s obvious that the scientific method struggles even on WUWT.

migueldelrio
April 13, 2026 7:34 pm

Most AI interactions don’t yet incorporate user feedback, so they’re useless for those who care about accuracy.

mleskovarsocalrrcom
April 13, 2026 7:34 pm

So AI was just replicating how the MSM treats news ….. right?

Bryan A
Reply to  mleskovarsocalrrcom
April 13, 2026 7:59 pm

Perhaps the MSM gather their News from AI chatbots

Bryan A
April 13, 2026 7:56 pm

People who turn to AI Chat-Bots for medical advice often get the device they deserve.