Why Even 98% AI Accuracy Can Be Dangerous in Healthcare
- Waleed Mohsen
- Jul 27
- 2 min read
AI sometimes reminds me of a big, scary-looking dog at the park.
When it’s on a leash, I don’t worry. I pat its head. I let it lick me. It becomes cute. I feel safe, especially when the owner tells me it’s never bitten anyone.
But if I saw that same dog running free, chasing kids while the owner played on their phone, not paying attention, assuming everything will be fine? I’d get nervous.
Assure me the dog is harmless as much as you want.
But I know: it only takes once.
When I look at AI in my field, healthcare compliance, that same false sense of comfort strikes me as one of the biggest risks for healthcare orgs.
It’s easy to get so impressed with what AI can do, so quickly, that we assume it will always “do it right.” The power of AI and how quickly it’s advancing makes it easy to take quality and consistency for granted.
“If it can do that, surely it won’t make such a silly mistake.”
And most of the time, that’s true. Depending on the study, the accuracy of tools like AI scribes has been shown to approach 98–99%.
For example, in one study of AI transcription hallucination rates among speakers with aphasia, only 1.4% of transcriptions contained hallucinations.
The catch? Nearly 40% of those hallucinations were harmful to patient safety or otherwise dangerous.
Even with 98.6% accuracy, that 1.4% can mean life or death for a patient—or financial catastrophe for an organization.
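To make that concrete, here’s a rough back-of-the-envelope sketch in Python. The 1.4% and ~40% figures come from the study above; the 500-notes-per-day volume is purely an assumption for illustration, not a number from any source.

```python
# Back-of-the-envelope sketch (hypothetical volume, not from the study):
# at a 1.4% hallucination rate, with ~40% of hallucinations harmful,
# how many harmful transcription errors might one organization expect?

notes_per_day = 500          # assumed daily AI-scribed notes (illustrative only)
hallucination_rate = 0.014   # 1.4% of transcriptions contain a hallucination
harmful_share = 0.40         # ~40% of those hallucinations are harmful

harmful_per_day = notes_per_day * hallucination_rate * harmful_share
print(f"Expected harmful errors per day: {harmful_per_day:.1f}")        # ~2.8
print(f"Expected harmful errors per year: {harmful_per_day * 365:.0f}")  # ~1,022
```

Even under these modest assumptions, “98.6% accurate” still works out to roughly a thousand potentially harmful errors a year for a single organization.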
AI scribes streamline documentation beautifully. AI agents handle patient interactions smoothly.
But it only takes once. And in healthcare compliance, that “one time” could mean a missed medication allergy, a documentation error that triggers a lawsuit, or a billing violation that leads to an audit.
At Verbal, we couldn’t be more excited about the future of AI in healthcare. But we know “fingers crossed” isn’t a strategy.
Whether it’s manual compliance audits or AI-powered tools, quality assurance isn’t optional.
As I heard growing up in an Egyptian family:
“Trust in God… but tie your camel.”
Like a big, scary-looking dog, AI needs a leash.
PS: My dog, Roxy. Not so big or scary-looking.