What if the most dangerous thing about AI isn’t the super-intelligence? What if it’s the stories we tell ourselves to keep building it?
The AI 2027 paper lays out a slick forecast of how AI might evolve over the next few years: things like superintelligence, competitive pressure, and a sprint to capability. But what caught me wasn’t the speed. It was the comfort.
The paper is packed with what I’d call comfortable fictions: beliefs that let us feel okay about pushing forward, even when the risk is staring us in the face. It reminds me a lot of the climate crisis.
Fiction #1: The Arms Race Justifies Acceleration
“We can’t slow down. If we don’t build it, someone else will.”
That’s the core of the paper. Companies and countries are locked in, so pressing pause is off the table. But maybe the arms race isn’t inevitable. Maybe it’s just the story we tell to justify never hitting the brakes.
Fiction #2: Alignment Can Catch Up Later
“We’ll solve alignment along the way.”
The paper admits that alignment methods usually lag behind model capabilities. But it leans on the hope that alignment will magically scale fast enough later. It’s a comforting side quest: we can keep building now, because the hard part is a tomorrow problem.
Fiction #3: Gradual Progress Means Gradual Danger
“We’ll see the risks coming.”
There’s an implicit faith that AI capabilities will improve gradually and that we’ll have time to notice when things go sideways. But the paper itself hints that sudden jumps are possible. Why assume danger will show up on a polite timeline? Look at climate change: we’ve ignored the risks entirely, and the world we’ll all be living in in twenty years, if we’re even still here, is going to look markedly different than it does today. If we are here, will the majority of us still have a job? What will our quality of life look like?
Fiction #4: AI Safety Theater Will Save Us
“We have safety teams. We’re doing evaluations. Trust us.”
Safety measures are baked in, but they often lag behind by years; the paper shows this. Sometimes the safety layer is really a performance layer, a way to signal caution while still sprinting forward. It feels like control, but it’s a ritual. It’s very much head-in-the-sand thinking.
The Part That Stings
I read AI 2027 and I see the industry’s fictions. But I also see mine.
There is a lot wrong with this paper, just like the one I discussed in my previous post, but it’s not all wrong. I use AI. It’s woven into my work life, into how I decide what needs my attention. I double-check the results it produces because it makes me feel like I’m in control of the tool.
But isn’t that a kind of comforting fiction too?
Maybe it is. And maybe it’s easier to believe it because not believing, at this stage, feels too big.
Like pushing against a brick wall.