- By Alex David
- Sun, 03 Aug 2025 06:06 PM (IST)
- Source: JND
The rivalry in AI just intensified. Anthropic has partially revoked OpenAI's access to its Claude models, citing violations of its terms of service. The move underscores the growing competition between the two labs, especially ahead of OpenAI's long-awaited GPT-5 release.
What Happened?
OpenAI had been granted developer access to Claude APIs for standard industry purposes—things like benchmarking model performance and conducting safety evaluations. This kind of cross-evaluation is common among AI labs and generally accepted as fair use.
But according to a report by Wired, Anthropic claims that OpenAI engineers went a step further. The company alleges they were interacting with Claude Code, Anthropic’s AI-powered coding assistant, in ways that violated its terms of service.
The Core Accusation
Anthropic’s terms explicitly prohibit using its models to build or train competing services, and they also ban reverse engineering. Christopher Nulty, Anthropic’s spokesperson, didn’t mince words.
“Claude Code has become the go-to choice for coders everywhere… so it was no surprise to learn OpenAI’s own technical staff were also using our coding tools ahead of the launch of GPT-5. Unfortunately, this is a direct violation of our terms of service,” he told Wired.
While Anthropic has cut off access to its Claude Code assistant for OpenAI, it will still allow limited API usage for benchmarking and safety testing—as long as it doesn’t stray into competitive territory.
The Competitive Backdrop
The timing of this conflict is telling: OpenAI has announced it is about to launch GPT-5, an AI model expected to make significant advances in areas like code generation. Meanwhile, Claude Code has enjoyed tremendous popularity among developers due to its accuracy, reliability, and security-minded design.
This is just the latest front in a larger turf war. Earlier this year, Anthropic also blocked Windsurf, a small AI coding startup, from accessing its models after reports emerged that OpenAI might acquire it. That deal never materialised—Google instead scooped up Windsurf’s co-founders and tech for $2.4 billion.
And it’s not a one-sided story. Earlier this year, OpenAI accused Chinese AI startup DeepSeek of scraping data from ChatGPT to distil or replicate its capabilities, another violation of terms that led to API restrictions.
What Both Sides Are Saying
OpenAI, for its part, says it was acting within industry norms. Hannah Wong, OpenAI’s chief communications officer, responded:
“It’s industry standard to evaluate other AI systems to benchmark progress and improve safety. While we respect Anthropic’s decision to cut off our API access, it’s disappointing considering our API remains available to them.”
Meanwhile, Anthropic CEO Dario Amodei spoke more broadly about values and intent in a recent podcast appearance:
“Trust is really important… Technically, if you’re working for someone whose motivations are not sincere, it’s not going to work. You’re just contributing to something bad.”
Amodei, a former OpenAI executive, co-founded Anthropic after splitting from the company over disagreements around safety, transparency, and leadership direction.
Claude Code Rate Limits and Access Control
Anthropic recently introduced weekly usage limits for Claude Code after some users ran it continuously in the background around the clock. The change is part of a broader trend toward tightening control over how AI tools are accessed and used, particularly by potential competitors.
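Anthropic hasn't detailed the enforcement mechanism, but usage caps like these typically surface to API clients as standard HTTP 429 (Too Many Requests) responses, sometimes with a Retry-After header. As a purely illustrative sketch (not Anthropic's documented behaviour), a well-behaved client might compute its retry delay like this:

```python
import random
from typing import Optional


def backoff_delay(attempt: int, retry_after: Optional[str] = None,
                  base: float = 1.0, cap: float = 60.0) -> float:
    """Seconds to wait before retrying a rate-limited (HTTP 429) request.

    Honors a server-supplied Retry-After header when present; otherwise
    falls back to capped exponential backoff with jitter.
    """
    if retry_after is not None:
        try:
            return float(retry_after)  # e.g. "30" -> wait 30 seconds
        except ValueError:
            pass  # unparseable header: fall through to exponential backoff
    delay = min(cap, base * (2 ** attempt))  # 1s, 2s, 4s, ... capped at 60s
    return delay * random.uniform(0.5, 1.0)  # jitter avoids synchronized retries
```

The jitter term is a common design choice: without it, many clients rate-limited at the same moment would all retry in lockstep and trip the limit again.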
Final Take
This isn’t simply a dispute between two businesses; it raises questions of transparency and trust in competition across a rapidly evolving AI landscape. Expect more disputes over where those boundaries lie as these tools drive automation in programming, content creation, and other creative work.
And with GPT-5 on the horizon, it’s clear the stakes have never been higher.