When the moderator described bringing Dario Amodei of Anthropic and Demis Hassabis of Google DeepMind together on stage as being like a conversation between “the Beatles and the Rolling Stones” of AI, it wasn’t hyperbole. It was a recognition of the profound significance of having the leaders of the world’s two most advanced AI labs in a room, speaking candidly about the future they are actively building.
While the world remains captivated by the current capabilities of AI—writing emails, generating images, coding simple applications—this conversation provided a rare, unfiltered look into the next few years. Their discussion moved beyond the present day to reveal several surprising, counter-intuitive, and profoundly impactful truths about the speed, scale, and stakes of the race to Artificial General Intelligence (AGI).
---
1. The Timeline to Superintelligence Isn’t Decades Away. It Might Be a Year or Two.
The conversation kicked off with a stark debate on timelines. Dario Amodei reaffirmed his aggressive prediction that an AI model capable of performing at the level of a Nobel laureate across multiple fields could arrive as early as 2026 or 2027. The core mechanism driving this incredible speed is the imminent closing of a “self-improvement loop,” where AI becomes so good at coding and AI research that it accelerates the development of its own successors. “I would guess that this goes faster than people imagine,” Amodei stated plainly.
But this is where the debate truly began. Demis Hassabis pumped the brakes, holding to his more conservative timeline of AGI by the end of the decade. His caution isn’t just a different guess; it’s rooted in a different philosophy of progress. He argued that some areas, like fundamental science, are much harder to automate because the output isn’t easily verifiable like a piece of code. More importantly, he pointed to a missing ingredient in today’s systems: the ability to generate a novel hypothesis in the first place—what he called “coming up with the question.”
This isn’t a simple disagreement. It’s a clash between two worldviews: one driven by the seemingly unstoppable exponential power of engineering (Amodei) and another tempered by the messy, unpredictable nature of pure scientific discovery (Hassabis).
2. The AI Gold Rush Is Real, and the Numbers Are Astronomical.
A common narrative suggests that AI labs are simply burning through billions in venture capital with no clear path to sustainable profit. Dario Amodei shattered this perception with specific figures charting Anthropic’s exponential revenue growth.
He stated that the company’s revenue grew from near zero to $100 million in 2023, is projected to hit $1 billion in 2024, and could reach $10 billion in 2025. The dynamic driving this growth is what he called an “exponential relationship” between a model’s cognitive capability and the revenue it can generate. But just as the numbers reached a dizzying peak, he added a crucial, self-aware qualifier: “I don’t know if that curve will literally continue—it would be crazy if it did.”
That touch of humility makes the underlying point even more powerful. These staggering financial figures demonstrate that the economic incentives for accelerating AI development are immense. This commercial momentum makes any calls to “slow down” incredibly difficult to implement in practice.
3. The Existential Risk Isn’t Skynet. It’s Surviving Our “Technological Adolescence.”
The “doomer” narrative of a single, all-powerful malevolent AI often feels simplistic. Amodei offered a more nuanced and powerful framing, referencing a scene from the movie Contact in which a character is asked what single question they would pose to an advanced alien civilization. That character’s answer frames Amodei’s view of our current predicament:
“I would ask how did you do it? How did you manage to get through this technological adolescence without destroying yourselves? How did you make it through?”
This is the central question for humanity. He then listed the concrete risks we must navigate: ensuring highly autonomous systems remain under human control, preventing misuse by individuals for catastrophic ends like bioterrorism, and managing misuse by authoritarian nation-states. But he didn’t stop there, adding the one that truly keeps ethicists up at night: “…and you know, what haven’t we thought of? Which in many cases… may be the hardest thing to deal with at all.” This reframes the problem from a fight against a fictional villain to a critical test of our collective wisdom against known, and unknown, perils.
4. The Most Powerful Lever in the Geopolitical AI Race Isn’t an Algorithm. It’s a Chip.
Amid the intense geopolitical competition in AI, particularly between the US and China, Amodei offered a direct and controversial policy recommendation. When confronted with the current US administration’s logic—that “we need to sell them chips because we need to bind them into US supply chains”—he dismantled it with a devastating analogy.
He argued that the single most effective measure to manage the race is to stop selling advanced chips to geopolitical adversaries. He then underscored the severity of his view:
“…I think of this more as like, you know, it’s a decision are we going to, you know, sell nuclear weapons to North Korea… because that produces some profit for Boeing…”
This stark comparison cuts through complex economic arguments and reframes the debate as a fundamental question of global security, not just commerce. It suggests that the most critical decisions about our AI future may be made in commerce departments, not just in research labs.
5. After AGI, the Hardest Problem Won’t Be Distributing Wealth. It Will Be Finding Meaning.
While concerns about AI-driven job displacement are widespread and valid, Demis Hassabis looked beyond the immediate economic disruption to a more profound, philosophical challenge.
He posited that humanity might find solving post-scarcity economics—figuring out how to distribute the immense wealth generated by AGI—to be “easier” than solving the subsequent crisis of the “human condition.” His core concern was deeply existential: What provides purpose for humanity in a world where our jobs are no longer the central organizing principle of our lives? He made the abstract concrete, suggesting we might turn to everything from “extreme sports” and “art” to “exploring the stars” in search of new sources of meaning.
This final point elevates the AI conversation from the technical and political to the spiritual. It forces us to confront not just what AI can do, but what we want humanity to be in a world where machines can do almost everything.
---
Conclusion: The Day Before the Day After
The discussion with Amodei and Hassabis was a sobering look over the horizon. The key themes were unmistakable: the shocking speed of progress, the explosive economics driving it, the true nature of the existential risks we face, and the deep, philosophical questions that loom at the finish line.
Both leaders agreed that the pivotal moment is when AI systems begin building AI systems. As we stand on that precipice, the moderator offered a poignant reflection: perhaps we should all be hoping that it takes a little bit longer, to give society time to prepare.
In the most stunning moment of the entire conversation, Dario Amodei—the man with the most aggressive timeline—quietly conceded.
“I would prefer that,” he said. “I think that would be better for the world.”
The person pushing the accelerator hardest openly wishes for a slower pace for the sake of humanity. That admission reveals the true nature of our challenge: it’s no longer a simple race to a technical finish line, but a profound ethical dilemma embodied by its very architects. The urgent question is not if we can achieve this, but whether we will have the wisdom to manage what happens the day after.