In Praise of AI Hallucinations: How Creativity Emerges from Uncertainty
When people hear about AI hallucinations, they often think it’s something bad. After all, hallucinations can cause AI tools like chatbots to give wrong or made-up answers, which makes them hard to trust. Businesses see this as a big problem. For example, in healthcare, mistakes can have serious consequences, and in finance, bad predictions can cost a lot of money. If someone tried to sell you an AI tool that gets things wrong 2% of the time, you probably wouldn’t buy it. That’s because most people link hallucinations with risks, mistakes, and a loss of trust.
But what many don’t realize is that AI hallucinations have also driven remarkable breakthroughs. In 2024, a hallucinating AI contributed to the Nobel Prize in Chemistry, a first for AI. Dr. David Baker and his team at the University of Washington’s Institute for Protein Design, alongside Google DeepMind’s Demis Hassabis and John Jumper, used the AI tool AlphaFold to “hallucinate” completely new protein structures. The AI generated designs that went beyond known scientific templates, letting researchers explore previously uncharted configurations. These discoveries, inspired by the AI’s creative outputs, were recognized as a revolutionary step in protein engineering. This shows that AI hallucinations, when used thoughtfully, can drive real innovation.
In fact, many scientists—especially in medicine and biochemistry—see hallucinations as a valuable tool, though they often avoid the word “hallucination.” The term can evoke negative associations, like someone imagining unreal things after taking drugs. However, in AI, hallucination is better understood as a form of imaginative computation. It’s when AI produces unexpected ideas or designs that, while not initially grounded in data, often lead to valuable outcomes. Dr. Eric Topol, a prominent voice in medical AI, likened these moments of AI creativity to human problem-solving breakthroughs that arise from thinking outside conventional boundaries.
AI hallucinations are also a lot like how humans create. Think about Einstein’s ideas about black holes or Van Gogh’s dreamlike paintings. These moments of creativity came from taking imaginative leaps. AI, too, can make similar leaps, producing new ideas and solutions that humans might not think of. While these hallucinations can be unpredictable, they show how AI can mimic some of the most exciting parts of human creativity.
So why do people dislike hallucinations? Because they remind us of uncertainty, and humans generally prefer things to be certain and reliable. But uncertainty is a natural part of life. The Upanishads, ancient Indian texts, say the only certainty in life is death. Science also accepts uncertainty: physics has the uncertainty principle and entropy, which embrace unpredictability, and mathematics has problems, like Turing’s halting problem, that are provably unsolvable. AI hallucinations fit this same idea: they’re unpredictable, but they can lead to new discoveries and opportunities.
That doesn’t mean we should accept AI hallucinations everywhere. In fields like healthcare, where lives are on the line, accuracy is essential. But we can manage hallucinations by improving AI training data, fine-tuning the models, using tools to fact-check results, educating users about AI’s limits, and building systems that explain their decisions clearly.
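To make the fact-checking idea concrete, here is a minimal sketch of one way such a guardrail could work. All names here (the function, the threshold, the sample texts) are hypothetical illustrations, not a real product: the idea is simply to flag sentences in an AI answer whose content words barely overlap with trusted source documents, so a human can review them.

```python
import re

def flag_unsupported_sentences(answer: str, sources: list[str],
                               threshold: float = 0.5) -> list[str]:
    """Toy grounding check: flag answer sentences whose content words
    are mostly absent from the reference sources."""
    # Collect every word that appears in any trusted source document.
    source_words: set[str] = set()
    for doc in sources:
        source_words.update(re.findall(r"[a-z']+", doc.lower()))

    flagged = []
    # Split the answer into sentences at ., !, or ? followed by whitespace.
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        # Keep only longer "content" words; skip stop-word-sized tokens.
        words = [w for w in re.findall(r"[a-z']+", sentence.lower()) if len(w) > 3]
        if not words:
            continue
        # Fraction of content words that also appear in the sources.
        support = sum(w in source_words for w in words) / len(words)
        if support < threshold:
            flagged.append(sentence)
    return flagged

sources = ["AlphaFold predicts protein structures from amino acid sequences."]
answer = "AlphaFold predicts protein structures. It also trades stocks profitably."
print(flag_unsupported_sentences(answer, sources))
# → ["It also trades stocks profitably."]
```

Real systems use far stronger signals (retrieval, entailment models, citation checking), but even this word-overlap heuristic illustrates the principle: let the model be creative, then verify its claims against sources before trusting them.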
Instead of being afraid of AI hallucinations, we should learn to work with them. If we can spot and correct these moments, we can unlock AI’s potential to create and innovate in ways that push the boundaries of what’s possible.
References:
* https://fortune.com/2024/12/24/ai-hallucinations-good-for-research-science-inventions-discoveries/
* https://www.nytimes.com/2024/12/23/science/ai-hallucinations-science.html
* https://nihrecord.nih.gov/2024/11/22/topol-discusses-potential-ai-transform-medicine