- By Prateek Levi
- Sun, 03 Aug 2025 06:22 PM (IST)
- Source: JND
As OpenAI gears up for the launch of GPT-5—its most powerful language model yet—CEO Sam Altman isn’t just talking about breakthroughs. He’s openly wrestling with what they mean.
Appearing on This Past Weekend with Theo Von, Altman spoke candidly about the experience of testing GPT-5, describing it as unsettling. “It feels very fast,” he said, recounting how the pace and potential of the model led to a moment of reckoning: “There are moments in the history of science where you have a group of scientists look at their creation and just say, you know, ‘What have we done?’”
This wasn't about performance benchmarks or compute speed. It was about the implications of creating something that may be evolving faster than humanity can manage.
“Maybe it’s great, maybe it’s bad—but what have we done?” he repeated, drawing a comparison between the development of GPT-5 and the Manhattan Project—a stark metaphor hinting at power without full control. The concern wasn’t whether GPT-5 works—it clearly does—but whether anyone’s thinking hard enough about what comes next.
“It feels like there are no adults in the room,” Altman said, acknowledging how regulation and oversight are struggling to keep pace with AI’s rapid acceleration.
GPT-5 vs GPT-4: A big leap
Details on GPT-5 are still being kept quiet, but internal chatter suggests a major upgrade over GPT-4. We’re talking about better multi-step reasoning, stronger memory, and sharper multimodal abilities.
Altman, never one to overhype without substance, didn’t mince words about GPT-4 either: “GPT-4 is the dumbest model any of you will ever have to use again, by a lot.”
That may sound extreme—after all, GPT-4 already reshaped workflows and creative output for millions—but it signals just how confident OpenAI is in what’s coming next.
In a separate discussion, Altman reflected on a moment when GPT-5 solved a problem he couldn’t crack. “I felt useless relative to the AI,” he admitted. “It was really hard, but the AI just did it like that.”
What does AGI really mean now?
OpenAI’s endgame has always been AGI—Artificial General Intelligence, or AI that can handle nearly any task a human can. But even that definition remains fuzzy.
While Altman once suggested AGI might “whoosh by with surprisingly little societal impact,” his tone now is far less relaxed. The worry isn’t whether AGI arrives—it’s what happens if it does and no one’s ready.
For some, AGI is a technical milestone. For others, it’s a market opportunity. Microsoft’s long-term partnership with OpenAI, rumoured to value the company near $100 billion, makes it clear there’s serious money in the mix too.
But as GPT-5 inches closer to what some might call general intelligence, it also exposes how far behind governance and global frameworks still are.
Tensions at the top
There’s also pressure inside OpenAI. Investors want results, and there’s a looming shift from the company’s unique capped-profit model toward a more traditional for-profit structure. Microsoft—now $13.5 billion deep into its partnership with OpenAI—reportedly wants more leverage.
Some reports suggest OpenAI could try to trigger an early AGI declaration to exit certain contractual limits with Microsoft. If that happens, it would mark a seismic shift in who holds power in the AI space.
Microsoft insiders have allegedly described their current strategy as a “nuclear option.” In turn, OpenAI is said to be preparing legal arguments against anti-competitive behaviour if it comes to that.
One potential flashpoint? The launch of an AI coding agent that might outperform a human developer—something GPT-5 could very well make possible.
Altman, meanwhile, is trying to set realistic expectations. “We have a tonne of stuff to launch over the next couple of months—new models, products, features, and more. Please bear with us through some probable hiccups and capacity crunches,” he posted on X.
The threat isn’t in the future: it’s already here
While executives and researchers debate the long-term risks of AGI, others are sounding alarms about what AI is already doing—especially in fraud.
Haywood Talcove, CEO of the Government Group at LexisNexis Risk Solutions, says AI-driven scams are already draining millions from public systems. “Every week, AI-generated fraud is syphoning millions from public benefit systems, disaster relief funds, and unemployment programs,” he warned.
It’s no longer a matter of possibility—it’s reality. Criminal groups are deploying deepfakes, synthetic identities, and AI-generated forms to file thousands of fraudulent claims daily, outpacing outdated safeguards.
Talcove believes this is just the beginning. Echoing Moore’s Law, he predicts, “We may soon recognise a similar principle for AI that I call ‘Altman’s Law’: every 180 days, AI capabilities double.”
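Taken at face value, a 180-day doubling rate compounds quickly. The sketch below is a hypothetical, back-of-envelope calculation only: the 180-day constant comes from Talcove’s quote, and the “capability” index is an abstract illustration, not any measured benchmark.

```python
# Back-of-envelope arithmetic for the "Altman's Law" claim quoted above:
# capabilities double every 180 days. "Capability" here is an abstract,
# unitless index; this illustrates the claim, it doesn't measure anything.

DOUBLING_PERIOD_DAYS = 180

def growth_factor(days: float) -> float:
    """Multiplier on the capability index after `days`, given exponential doubling."""
    return 2 ** (days / DOUBLING_PERIOD_DAYS)

for years in (1, 2, 5):
    factor = growth_factor(365 * years)
    print(f"after {years} year(s): ~{factor:.0f}x")

# Roughly 4x after one year, ~17x after two, and over 1,000x after five;
# compounding at that pace would far outrun annual policy cycles.
```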
His takeaway is clear: “Right now, criminals are using it better than we are. Until that changes, our most vulnerable systems and the people who depend on them will remain exposed.”
Not just hype
Some critics argue Altman’s dramatic warnings are part of a savvy marketing strategy—a way to raise stakes before a big launch. But those who’ve followed his career know he doesn’t play the doomsayer card lightly.
GPT-5 could be the most advanced model OpenAI has ever built. It could also be the moment the world is forced to reckon with how far AI has come—and how little control it might still have over where it’s headed.