This week the European Parliament made a significant move in the regulation of Artificial Intelligence (AI), adopting its position on the technology by an overwhelming majority. In practice, this means the world’s first comprehensive law on AI, known as the AI Act, is expected to come into force in late 2024. The Act aims to manage the potential risks associated with the broad family of technologies grouped under the term AI, and sets guidelines for the ethical and responsible use of AI affecting European consumers.
For companies embedding AI into their tech stack, or building a monetization model with an AI element, here are the key takeaways you need to know:
1. Is the technology you are creating a prohibited or high-risk AI system?
The AI Act follows a risk-based approach, banning AI applications that threaten people’s safety or are discriminatory. Strictly prohibited systems include those that ‘deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities or are used for social scoring (classifying people based on their social behaviour, socio-economic status, personal characteristics)’. Furthermore, individuals will have stronger rights to protection from the negative risks of ‘high-risk’ systems, including AI that affects people’s health, safety, fundamental rights or the environment, AI used to influence voters in political campaigns, and AI in recommendation systems used by social media platforms.
By taking this approach, the European Parliament aims to safeguard individuals and organizations from the adverse effects of AI systems, and it should also provide certainty for businesses. It will, however, result in additional disclosure and documentation requirements for startups creating technology in specific verticals such as healthcare.
2. Does your technology use Remote Biometric Identification?
During the parliamentary committee-level discussions, one of the main points of contention was the real-time use of Remote Biometric Identification. While some wanted exceptions for specific circumstances like terrorist attacks or locating missing individuals, the ban on real-time use mostly prevailed.
3. Do you utilise Generative AI?
Perhaps the question with the greatest impact on everyday AI use: how does the AI Act affect generative AI? The European Parliament introduced a tiered approach for AI models, with a focus on foundation models and generative AI systems.
To address concerns regarding transparency and accountability, the European Parliament has proposed mandatory labeling of AI-generated content and disclosure of any copyrighted material used to train generative AI models. This requirement would apply to models such as ChatGPT and Bard.
4. Are you developing models?
Providers and developers of foundation models will be held to a higher standard and will need to assess whether their model falls within a high-risk category. Model builders should also be aware that they will need to register their models in the EU database before placing them on the EU market.
Next Steps and Challenges
The adoption of the AI Act by the European Parliament does not mean immediate enforcement; instead, it sets the stage for interinstitutional negotiations with the EU Council of Ministers and the European Commission. These discussions will define the high-risk categories, fundamental rights protections, and the regulation of foundation models. While certain technical aspects, governance, innovation, and the definition of AI may be resolved at the technical level, key issues will require careful deliberation. Practically, the aim is for the text to be translated into horizontal standards, borrowing language from ISO security certifications, with sector-specific approaches to follow.
While there is still time before the AI Act is enforced, the European Parliament’s adoption of its position marks a significant milestone in the regulation of the technology. As negotiations progress, Europe’s role in shaping global AI governance becomes increasingly influential, promising regulatory stability for AI-focused businesses and a model for other jurisdictions. Founders should therefore stay informed so they can build products that will operate under the new legal framework.
Keen to stay updated and learn more? Sign up for our newsletter at the bottom of the page for more updates on the topic.