Impact of the EU’s new Artificial Intelligence Act

The legislation, which took five years to pass through the European Parliament and was approved with overwhelming support, introduces a risk-based approach to AI governance, aiming to ensure human control and societal benefit. Here is a detailed breakdown of the changes and their implications:

Risk-Based Classification:

  • Low-Risk AI Systems: These include applications such as recommendation algorithms, which will face minimal scrutiny.
  • High-Risk AI Systems: These include AI applications in healthcare, policing, and other critical areas. Such systems will be subject to rigorous oversight, including requirements for high-quality data and detailed documentation.

Transparency Requirements:

  • Companies must disclose when AI is used in their products or services.
  • Higher-risk AI applications must provide clear information to users about how the AI operates.

Banned Applications:

  • The use of AI-powered facial recognition by police is banned, except in a narrow set of serious cases, such as searching for victims of serious crimes or preventing terrorist threats.
  • Predictive policing systems and AI systems that track emotions in schools or workplaces are prohibited.
  • Deepfakes must be clearly labeled to prevent disinformation.

Compliance and Data Management:

  • AI developers, including major companies like Google and OpenAI, must adhere to EU copyright laws in their training data.
  • They must also provide detailed summaries of the data used to train their models.

Extra Scrutiny for Powerful AI Models:

  • Advanced AI systems, such as OpenAI’s GPT-4 (the model behind ChatGPT) and Google’s Gemini, will face additional scrutiny to prevent misuse and manage risks related to cyber attacks and other serious threats.

The EU’s AI Act is expected to influence global AI regulations, with other jurisdictions already looking to the EU’s framework for inspiration. This phenomenon, known as the “Brussels effect,” means that the EU’s stringent rules will likely set a global standard.

The UK, while having its own AI guidelines, is expected to be influenced by the EU’s regulations. At the global AI Safety Summit hosted by the UK at Bletchley Park, AI developers committed to collaborating with governments to test new models before public release to mitigate risks.

Industry Response

The tech industry, including companies like Meta and Google, is actively adapting to these regulations. Meta already requires AI-modified images to be labeled, and Google has restricted its chatbot’s discussions of elections to prevent disinformation. Companies are encouraged to apply these standards globally to keep their products consistent and to streamline compliance.

Future Implementation

The new rules will take effect in stages over roughly the next two years, giving companies time to adjust and ensure compliance with the EU’s pioneering AI governance framework. Although the industry generally supports better regulation of AI, OpenAI’s chief executive Sam Altman raised eyebrows last year when he suggested the ChatGPT maker could pull out of Europe if it could not comply with the AI Act. He subsequently backtracked, saying there were no plans to leave.

Overall, the EU’s Artificial Intelligence Act represents a significant step towards comprehensive AI regulation, aiming to balance innovation with safety and transparency. The global tech industry and governments worldwide are watching closely, indicating a shift towards more robust international AI oversight and regulation.

(Source: Sky News)