A widely used AI transcription tool powered by OpenAI's Whisper model has been found to hallucinate in roughly 1% of transcriptions. Researchers from Cornell University and the University of Washington discovered that the tool invents entire sentences, sometimes containing violent or nonsensical phrases, during silences in recordings. The study stressed the urgency of fixing this flaw in high-stakes settings such as medical transcription, where fabricated text can influence critical decisions.

Source: https://www.theverge.com/2024/10/27/24281170/open-ai-whisper-hospitals-transcription-hallucinations-studies