EU’s New AI Code of Conduct Set to Reshape AI Oversight

The European Commission recently introduced a Code of Conduct that could change how AI companies operate. It is not just another set of guidelines but rather a complete overhaul of AI oversight that even the biggest players cannot ignore. 

What makes this different? For the first time, we are seeing concrete rules that could force companies like OpenAI and Google to open their models for external testing, a fundamental shift in how AI systems could be developed and deployed in Europe.

The New Power Players in AI Oversight

The European Commission has created a framework that specifically targets what it calls AI systems with “systemic risk”: models trained with more than 10^25 floating-point operations (FLOPs) of total compute, a threshold that GPT-4 is widely estimated to have already crossed.
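To make that threshold concrete, the sketch below estimates total training compute using the widely cited 6 × parameters × tokens approximation for dense transformers. The heuristic comes from the scaling-law literature, not from the EU text, and the model sizes are purely illustrative.

```python
# Rough check of whether a training run crosses the EU's 10^25 FLOP
# threshold, using the common "6 * parameters * tokens" approximation
# for dense transformer training compute (a heuristic from scaling-law
# research, not part of the EU draft).

SYSTEMIC_RISK_THRESHOLD = 1e25  # total training FLOPs named in the draft

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6.0 * n_parameters * n_tokens

def crosses_threshold(n_parameters: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_parameters, n_tokens) >= SYSTEMIC_RISK_THRESHOLD

# Example: a hypothetical 1-trillion-parameter model trained on 2 trillion tokens
flops = estimated_training_flops(1e12, 2e12)
print(f"{flops:.2e} FLOPs -> systemic risk: {crosses_threshold(1e12, 2e12)}")
# 1.20e+25 FLOPs -> systemic risk: True
```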

Companies will need to report their AI training plans two weeks before they even start. 

At the center of this new system are two key documents: the Safety and Security Framework (SSF) and the Safety and Security Report (SSR). The SSF is a comprehensive roadmap for managing AI risks, covering everything from initial risk identification to ongoing security measures. Meanwhile, the SSR serves as a detailed documentation tool for each individual model.
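As a rough illustration of what per-model documentation along the lines of the SSR might look like in code, here is a hypothetical record structure. The field names are invented for this sketch and do not come from the draft text.

```python
# Hypothetical sketch of per-model documentation in the spirit of the
# Safety and Security Report (SSR). Fields are illustrative only.

from dataclasses import dataclass, field

@dataclass
class SafetyAndSecurityReport:
    model_name: str
    training_flops: float                     # total training compute
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    external_testing_completed: bool = False  # see testing section below

    def requires_systemic_risk_obligations(self) -> bool:
        """Flag models above the 10^25 FLOP threshold in the draft."""
        return self.training_flops >= 1e25
```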

External Testing for High-Risk AI Models

The Commission is demanding external testing for high-risk AI models. This is not your standard internal quality check – independent experts and the EU’s AI Office are getting under the hood of these systems.

The implications are big. If you are OpenAI or Google, you suddenly need to let outside experts examine your systems. The draft explicitly states that companies must “ensure sufficient independent expert testing before deployment.” That’s a huge shift from the current self-regulation approach.

That raises an obvious question: who is qualified to test these incredibly complex systems? The EU’s AI Office is stepping into uncharted territory. They will need experts who can understand and evaluate cutting-edge AI technology while maintaining strict confidentiality about what they discover.

This external testing requirement could become mandatory across the EU through a Commission implementing act. Companies can try to demonstrate compliance through “adequate alternative means,” but nobody’s quite sure what that means in practice.

Copyright Protection Gets Serious

The draft also takes a firm line on intellectual property, forcing AI providers to adopt clear policies on how they handle copyrighted material in their training data.

The Commission is backing the robots.txt standard, a simple file that tells web crawlers where they can and cannot go. If a website says “no” through robots.txt, AI companies cannot simply ignore it and train on that content anyway. Search engines, in turn, cannot penalize sites for using these exclusions. It’s a power move that puts content creators back in the driver’s seat.
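Honoring robots.txt is straightforward to implement. The sketch below uses Python’s standard-library parser to check whether a page may be fetched; “ExampleAIBot” is a placeholder user agent, not a real crawler name.

```python
# A minimal sketch of honoring robots.txt before collecting training
# data, using Python's standard-library parser.

from urllib.robotparser import RobotFileParser

def allowed_to_fetch(page_url: str, robots_url: str, user_agent: str) -> bool:
    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetches and parses the site's robots.txt
    return parser.can_fetch(user_agent, page_url)

if not allowed_to_fetch("https://example.com/articles/1",
                        "https://example.com/robots.txt",
                        "ExampleAIBot"):
    print("robots.txt disallows this page; skip it for training data")
```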

AI companies will also have to actively avoid piracy websites when gathering training data. The EU is even pointing them to its “Counterfeit and Piracy Watch List” as a starting point.
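In practice, that could look like a simple domain filter applied to candidate training URLs. This is a minimal sketch with placeholder domains; a real pipeline would source its blocklist from the watch list and handle mirrors and redirects far more carefully.

```python
# Illustrative filter that drops URLs whose hostnames appear on a piracy
# blocklist (e.g., one derived from the Commission's Counterfeit and
# Piracy Watch List). The domains below are placeholders.

from urllib.parse import urlparse

PIRACY_BLOCKLIST = {"pirated-books.example", "stream-rips.example"}

def filter_training_urls(urls: list[str]) -> list[str]:
    """Keep only URLs whose hostnames are not on (or under) the blocklist."""
    kept = []
    for url in urls:
        host = urlparse(url).hostname or ""
        blocked = host in PIRACY_BLOCKLIST or any(
            host.endswith("." + domain) for domain in PIRACY_BLOCKLIST
        )
        if not blocked:
            kept.append(url)
    return kept

print(filter_training_urls([
    "https://pirated-books.example/some-novel",
    "https://legitimate-news.example/article",
]))  # only the second URL survives
```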

What This Means for the Future

The EU is creating an entirely new playing field for AI development. These requirements are going to affect everything from how companies plan their AI projects to how they gather their training data.

Every major AI company is now facing a choice between three paths:

  • Open up their models for external testing
  • Figure out what those mysterious “alternative means” of compliance look like
  • Potentially limit their operations in the EU market

The timeline here matters too. This is not some far-off future regulation; the Commission is moving fast, having already brought together around 1,000 stakeholders, divided into four working groups, to hammer out the details of how this will work.

For companies building AI systems, the days of “move fast and figure out the rules later” could be coming to an end. They will need to start thinking about these requirements now, not when they become mandatory. That means:

  • Planning for external audits in their development timeline
  • Setting up robust copyright compliance systems
  • Building documentation frameworks that match the EU’s requirements

The real impact of these regulations will unfold over the coming months. While some companies may seek workarounds, others will integrate these requirements into their development processes. The EU’s framework could influence how AI development happens globally, especially if other regions follow with similar oversight measures. As these rules move from draft to implementation, the AI industry faces its biggest regulatory shift yet.