The upside of AI hallucinations
ChatGPT is weird. It’s so smart, and yet it’s capable of being incredibly dumb. It’s also perfectly willing to completely make things up. And when it does, it sounds just as confident as when it isn’t. None of this is fundamentally different from humans. Smart humans can be surprisingly dumb at times. People make stuff up. People can even make stuff up without realizing they’re doing so. But ChatGPT takes this to such an extreme that it feels qualitatively different from interacting with a human.
Ok, but aren’t hallucinations bad? What’s the upside?
I find myself thinking much more critically about the answers ChatGPT gives me than I would if I were reading a human-crafted response. And in today’s world, where information bubbles isolate people and create alternative realities filled with conflicting sets of “facts,” training people to think more critically about the information they consume could have a serious positive impact.
As an example, I often ask ChatGPT about math, and I find it incredibly useful. But when I read the responses, I go through each step of the logic with a fine-tooth comb, because I know that ChatGPT is perfectly willing to make something up at any moment. If I were interacting with a human with anywhere near as much math knowledge as ChatGPT, the chance that they would make an incredibly silly mistake (or make something up) halfway through an explanation would be close to zero. And since it’s so unlikely, I wouldn’t spend much of my own energy looking for those mistakes. But looking for them forces me to engage with and understand the material more deeply.
Similarly, I ask ChatGPT factual questions about the world. As you’d expect, it’s inhumanly good at answering them. But I treat each response as roughly 90% likely to be correct and 10% likely to be completely wrong. If I were asking a human world expert factual questions about their domain, they might not know as many answers, but when they did give an answer, I think it would be more likely to be correct. If they didn’t know the answer, they would probably say so. And even if they mistakenly gave me a wrong answer, I suspect it wouldn’t be completely wrong in the way that some of ChatGPT’s answers are just crazy.
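To make that 90/10 framing concrete, here’s a toy calculation. The 90% figure and the assumption that answers are independent are mine, purely for illustration, but they show why even a mostly-reliable oracle demands checking once you lean on several of its answers at once:

```python
# Toy illustration of the 90/10 framing above. The 0.9 reliability figure
# and the independence assumption are invented, purely for illustration.
p_correct = 0.9

for n in (1, 5, 10, 20):
    # Probability that all n independent answers are correct.
    print(f"{n:>2} answers: P(all correct) = {p_correct ** n:.0%}")
```

By ten answers, the chance that every one of them is right has already fallen to about 35%, which is exactly why I keep checking.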
This is first-order bad, but second-order good
It’s obviously bad that ChatGPT makes mistakes. It’s even worse (I think) that it doesn’t seem to be aware of whether the answer it’s giving is 99% or 60% likely to be correct. To first order, this is bad; it would be better if ChatGPT were perfect.
But on a second-order level, forcing people to take seriously the possibility that the answers they get are wrong is good! Uncritically accepting information from whatever biased sources happen to be in your particular information bubble is part of why the country seems so divided. I don’t claim to have any deep insight into this problem, but the fact that people operate with contradictory sets of facts certainly can’t help.
Relatedly, I find probabilistic thinking to be an incredibly useful skill, and yet it’s one most people tend to overlook. Interacting with an agent like ChatGPT essentially forces probabilistic thinking, and perhaps exercising this mental skill in one context will encourage using it in others as well.
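Here’s a minimal sketch of what I mean by probabilistic thinking, with every number invented for illustration: start with a prior (say, 90%) that an answer is correct, then revise it with Bayes’ rule after an independent spot check.

```python
# Minimal Bayes-rule sketch of the probabilistic thinking described above.
# All of the probabilities here are invented, purely for illustration.
def update(prior: float, p_obs_if_correct: float, p_obs_if_wrong: float) -> float:
    """Posterior P(answer is correct) after observing the check's outcome."""
    numerator = p_obs_if_correct * prior
    return numerator / (numerator + p_obs_if_wrong * (1 - prior))

prior = 0.90  # initial trust in a ChatGPT answer

# My spot check passes: likely if the answer is correct, rare if it's wrong.
print(f"after a passing check: {update(prior, 0.95, 0.20):.0%}")  # ~98%

# My spot check fails: rare if the answer is correct, likely if it's wrong.
print(f"after a failing check: {update(prior, 0.05, 0.80):.0%}")  # ~36%
```

Notice that even a failing check doesn’t drop the answer to zero: with a strong prior it lands around 36%, which is exactly the kind of graded, non-binary conclusion this habit of mind produces.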
Unfortunately, this is probably going away
The current state of AI agents, where they are dramatically smarter than humans in almost every way yet still dumb enough for humans to catch their mistakes, seems like a fragile balance that is unlikely to last as these systems continue to improve.
In a few years, it’s not that AI agents will never make mistakes; interacting with them will just feel more like interacting with an extremely smart human. You know they’re capable of making mistakes, but not the sort you could catch, so it’s probably not worth your energy to go looking for them.
That said, looking for mistakes is just one way to think critically and deeply engage with information. Even if this phase is temporary, maybe it will leave a lasting positive impact by reinforcing the importance of these skills.