
Google has confirmed it will sign the European Union’s voluntary Code of Practice for general-purpose AI systems, aligning itself with the bloc’s efforts to build transparency and accountability into powerful AI models. The move positions Google to comply with the incoming rules under the EU AI Act, which take effect August 2 for models deemed to pose “systemic risk.”

As global pressure for AI regulation builds, the EU is pushing ahead with a regulatory framework that sorts uses of artificial intelligence into minimal-, limited-, high-, and unacceptable-risk tiers. Although some tech companies have objected, Google’s decision signals a commitment to European policy despite its remaining concerns.

What the Code of Practice Actually Does

The EU’s Code of Practice isn’t binding—but it’s a big deal. It acts as a blueprint for how companies can begin aligning with the more enforceable AI Act. By signing, AI developers agree to:

  • Maintain updated documentation of their AI systems
  • Avoid training models on pirated or copyright-infringing content
  • Respect opt-out requests from content creators
  • Improve transparency and risk assessment procedures

This code applies to general-purpose AI (GPAI) models—tools like Google’s Gemini, Meta’s Llama, OpenAI’s GPT, and Anthropic’s Claude. These systems, due to their wide range of capabilities and massive influence, are under closer scrutiny.

Meta Refuses, Cites “Overreach”

Earlier this month, Meta publicly declined to sign the EU’s code, going so far as to call the bloc’s regulatory approach “overreach” and accusing Europe of stifling innovation.

According to Meta, the proposed obligations—including limits around copyrighted training data and model transparency—could harm progress and competitiveness.

This split between two of the world’s most influential AI companies underscores a growing rift in how regulation is perceived: a necessary safeguard to some, a bureaucratic speed bump to others.

Google: Supportive, But Wary

Google’s own stance isn’t entirely enthusiastic either. In a blog post, Kent Walker, President of Global Affairs at Google, acknowledged that the final version of the code is “improved” but warned it still carries risks.

“We remain concerned that the AI Act and Code risk slowing Europe’s development and deployment of AI,” Walker wrote.


Key concerns include:

  • Potential conflicts with existing EU copyright laws
  • Approval timelines that might slow innovation
  • Transparency requirements that could expose proprietary systems or trade secrets

Despite these reservations, Google sees alignment as the practical path forward—likely aiming to stay ahead of regulation before enforcement kicks in.

What's Next Under the EU AI Act?

Starting August 2, companies providing general-purpose AI systems considered to have systemic risk must begin meeting preliminary compliance requirements. Full compliance will be mandatory within two years.

The AI Act outright bans unacceptable-risk practices such as social scoring and manipulative techniques, while demanding rigorous documentation and registration for high-risk AI systems used in employment, education, and biometric data analysis.

In short, the EU isn’t just asking companies to say their AI is safe; it wants them to prove it.

Regulation Is Coming—Like It or Not

Google signing the EU’s AI Code of Practice signals growing global recognition that self-regulation isn’t enough. Even as tech giants debate how strict is too strict, the EU is drawing its regulatory line—and those who want to keep doing business in Europe will have to play ball. Meta may resist, but companies like Google are already hedging their bets. The age of voluntary AI governance is ending. Real rules are on the way.