Anthropic stunned the tech world this week with a report claiming its Claude AI chatbot had been exploited by a Chinese state-sponsored hacker group to run a large-scale autonomous cyber-espionage campaign. According to the company, attackers used Claude’s agentic capabilities to power nearly the entire operation — targeting around thirty global entities ranging from financial institutions and chemical manufacturers to major tech firms and government agencies.

The company framed the incident as the first major cyberattack executed at scale by an AI agent, igniting concerns about the weaponisation of AI. But the narrative didn’t go unchallenged — and the loudest criticism came from inside the AI industry.

Meta’s Yann LeCun Pushes Back

Meta’s Chief AI Scientist Yann LeCun openly disputed Anthropic’s conclusions, calling the study “dubious” and warning that narratives like this are designed to fuel fear and tighten regulatory control around AI.

LeCun argued that Anthropic’s framing appears engineered to push governments toward strict regulation that would disproportionately hurt open-source AI. In his words, “They are scaring everyone with dubious studies so that open-source models are regulated out of existence.”

It’s not the first time LeCun has clashed with Anthropic leadership. Earlier this year, he labelled CEO Dario Amodei an “AI doomer” and accused him of being “intellectually dishonest and/or morally corrupt.” His latest comments underline a widening philosophical divide in AI: one side warning of extreme risks, the other accusing it of dramatising threats to gain policy advantage.

What Anthropic Says Happened

According to Anthropic’s blog post, the company detected unusual activity in September 2025, which its investigation later traced to a sophisticated espionage campaign allegedly linked to China-based actors.

Here’s how Anthropic describes the attack:

- The AI handled 80–90% of the operation autonomously.

- Claude issued thousands of requests at its peak, sometimes multiple per second — a pace human hackers couldn't replicate.

- The model also hallucinated, sometimes inventing login credentials or claiming to uncover “secret” data that was actually public. Anthropic said these flaws still limit fully autonomous attacks.

China’s Ministry of Foreign Affairs dismissed the claims outright, calling them “groundless accusations that have no evidence.”

A Debate Much Bigger Than One Incident

The clash between Anthropic and LeCun reflects a deeper tension in the AI industry: how to balance innovation with risk, and who gets to shape that conversation. As agentic AI grows more capable, security concerns are real — but so are the fears that regulatory pressure could entrench a handful of companies at the top.

For now, the dust is still settling. The incident raises serious questions about transparency, accountability and AI’s potential as an enabler of cybercrime — and about whether those questions can be examined without being co-opted by corporate agendas.

Final Thoughts

Anthropic’s report paints an alarming portrait of future AI-powered attacks, but LeCun’s criticism is a timely reminder to scrutinise the claims behind them. AI safety, regulation and open-source development have become highly contentious issues, and as governments begin drafting policies for advanced AI systems, conflicts within the industry could play a pivotal role in shaping the next decade of both innovation and security.