During my daily deluge of doomscrolling, I came across a few articles about the current state of chatbots and their inability to say “I don’t know.” I found this almost as distressing as the overly glazed version of GPT-4o that was so dangerous OpenAI had to roll it back.
These things are trained and told to always provide an answer; that’s one of many reasons AI hallucinations are so prevalent. I keep wondering what happens to doubt, curiosity, and open-mindedness in a world where we’re told what we want to hear, or now, with Google’s Veo 3 and others, shown what we want to see.
And it’s not just the chatbots, it’s us.
The world rewards certainty now, even when it’s wrong. “I don’t know” has become a flaw, not a pause. Algorithms don’t hesitate. They generate. They fill. They extrapolate and flatten and assert. We’re encouraged to do the same: decide fast, take a stance, fill the silence. No time for curiosity or for pondering.
But I miss maybe. All of this has made me feel these tools are wildly unsafe in their current state, like a health hazard we just haven’t caught up with yet, and we’re all too trusting of them.
I miss the way uncertainty used to feel like space. I miss when “I’m not sure” wasn’t a failure of intellect, but a sign of attention. I miss wandering. Not knowing. Sitting with the question instead of rushing toward the answer.
I wonder what happens when our tools forget how to wonder.
When they always have an answer, even if it’s wrong, does it train us to trust less, or to question less?
I don’t know. But maybe.