Your doctor’s AI notetaker may be making things up, Ontario audit finds
May 14, 2026
An Ontario audit found that AI medical notetakers are generating hallucinated content, including fabricated therapy referrals and incorrect prescriptions. This is a critical safety finding for clinical NLP deployment, reinforcing that LLM hallucination rates remain unacceptably high for high-stakes medical documentation without robust human-in-the-loop validation.