Study suggests that even the best AI models hallucinate a bunch
AI models, from GPT-4 to Claude 3, are hailed as the ultimate sources of knowledge.
But sometimes, these 'know-it-alls' confidently deliver bizarre, made-up facts.
A recent study finds that even the best models still hallucinate, with some refusing to answer to avoid mistakes.
Can we trust them if they refuse to answer? Or is that better than getting it wrong?
All generative AI models hallucinate, from Google’s Gemini to Anthropic’s Claude to the latest stealth release of OpenAI’s GPT-4o. The models are, in other words, unreliable narrators — sometimes to hilarious effect, other times problematically so.