Meta Platforms has declined to sign the EU’s new Code of Practice for General Purpose AI (GPAI), calling the voluntary framework an “overreach” that could cripple innovation. The company’s Global Affairs Chief, Joel Kaplan, minced no words in a LinkedIn post on Friday, warning that “Europe is heading down the wrong path on AI.”
The Code of Practice is designed to guide developers of large-scale AI models, such as ChatGPT, Gemini, and Meta’s own LLaMA models, in aligning with the AI Act, Europe’s comprehensive legislation governing artificial intelligence according to its societal risk. While technically voluntary, signing the code offers companies a path to regulatory certainty and smoother compliance once enforcement begins. The first set of GPAI obligations takes effect on August 2, with full implementation of the AI Act anticipated over the next two years.
Meta, however, sees the code as more of a burden than a benefit. Kaplan argued that the document “introduces several legal uncertainties for model developers” and that its provisions “go far beyond the scope of the AI Act.” He warned that the code risks stifling growth not only for global firms like Meta but also for the very European startups and enterprises that EU policymakers claim to protect.
“This overreach will throttle the development and deployment of frontier AI models in Europe,” Kaplan wrote. “And it will stunt European companies looking to build businesses on top of them.”
Meta’s position aligns it with other critics of the EU’s AI trajectory. Earlier this month, ASML and Airbus called on the Commission to delay the code by two years, citing concerns about regulatory complexity. Yet not all tech giants are resisting: OpenAI, the company behind ChatGPT, has agreed to sign the code.
Innovation vs. Regulation
The AI Code of Practice covers transparency, respect for copyright, model safety, and system robustness. However, Meta contends that without clearer legal definitions and risk classifications, the framework places too much responsibility on developers while providing insufficient guardrails for implementation.
The EU, on the other hand, believes it is setting a global gold standard for safe and ethical AI, and views the code as a bridge to full compliance with the AI Act. Companies that sign the code will likely face fewer inspections and may benefit from a more predictable legal environment.
Joel Kaplan, who replaced Nick Clegg as Meta’s Global Affairs Chief earlier this year, is no stranger to policy clashes. A former senior staffer in President George W. Bush’s administration, Kaplan has taken a more assertive stance against what Meta views as regulatory overreach from the European Union.
As the August 2 deadline nears, the tech industry is again split: some companies are playing ball, hoping to shape the rules from within, while others, like Meta, are betting that the future of AI requires fewer constraints.