Rosemary J. Thomas, Senior Technical Consultant at Version 1, shares her analysis of the evolving regulatory landscape surrounding artificial intelligence.

The European Parliament has officially approved the Artificial Intelligence Act, a regulation aiming to ensure safety and compliance in the use of AI while also boosting innovation. Expected to come into force in June 2024, the act introduces a set of standards designed to guide organisations in the creation and implementation of AI technology.

While AI has already been providing businesses with a wide array of new solutions and opportunities, it also poses several risks, particularly given the current lack of regulation around it. For organisations to adopt this advanced technology in a safe and responsible way, it is essential that they have a clear understanding of the regulatory measures being put in place.

The EU AI Act splits the applications of AI into four risk categories: unacceptable risk, high risk, limited risk, and minimal or no risk. Most of its provisions, however, won't become applicable until two years after entry into force, giving companies until 2026 to comply. The exceptions are provisions related to prohibited AI systems, which will apply after six months, and those related to general-purpose AI, which will apply after 12 months.
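To make the staggered timeline concrete, here is a minimal sketch in Python of how an organisation might work out when each group of provisions starts to apply. The figures come from the article above; the data structure, the assumed June 2024 start date, and the function name are illustrative assumptions, not an official schema.

```python
from datetime import date

# Assumed entry-into-force date of June 2024, as stated above.
ENTRY_INTO_FORCE = date(2024, 6, 1)

# Months after entry into force before each group of provisions applies:
# 6 for prohibited systems, 12 for general-purpose AI, 24 for the rest.
APPLICABILITY_MONTHS = {
    "prohibited AI systems": 6,
    "general-purpose AI": 12,
    "most remaining provisions": 24,
}

def applies_from(start: date, months: int) -> date:
    """Add a whole number of months to a date (day clamped to the 1st)."""
    total = start.year * 12 + (start.month - 1) + months
    return date(total // 12, total % 12 + 1, 1)

for provisions, months in APPLICABILITY_MONTHS.items():
    print(f"{provisions}: apply from {applies_from(ENTRY_INTO_FORCE, months)}")
```

Run against a June 2024 start, this gives December 2024 for prohibitions, June 2025 for general-purpose AI, and June 2026 for the remaining provisions, matching the 2026 compliance horizon above.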

Regulatory advances in AI safety: A look at the EU AI Act

The EU AI Act mandates that all AI systems seeking entry into the EU internal market comply with its requirements, and it requires member states to establish governance bodies to ensure that AI systems follow its guidelines. This mirrors the establishment of AI Safety Institutes in the UK and the US, a significant outcome of the AI Safety Summit hosted by the UK government in November 2023.

Admittedly, it's difficult to fully evaluate the strengths and weaknesses of the act at this point, as it has only recently been established. But with AI systems currently operating under minimal regulation, it will no doubt serve as a stepping stone towards improving the current environment.

These bodies will play a crucial role in researching, developing, and promoting the safe use of AI, and will help to address and mitigate the associated risks. That said, while the EU's regulations may be particularly stringent, the goal is to avoid hindering the progress of AI development: compliance typically applies to the end product rather than to the foundational models or the creation of the technology itself (with some exceptions).

Article 53 of the EU AI Act is particularly attention-grabbing, introducing AI regulatory sandboxes: supervised spaces designed to facilitate the development, testing, and validation of new AI systems before they are released onto the market. Their main goal is to promote innovation, simplify market entry, resolve legal issues, improve understanding of AI's advantages and disadvantages, ensure consistent compliance with regulations, and encourage the adoption of unified standards.

Navigating the implications of the EU’s AI Act: Balancing regulation and innovation

The implications of the EU's AI Act are widespread, with the potential to affect various stakeholders, including businesses, researchers, and the public. This underlines the importance of striking a balance between regulation and innovation, to prevent these new rules from hindering technological development or compromising ethical standards.

Businesses, especially startups and mid-sized enterprises, may encounter additional challenges, as these regulations can increase their compliance costs and make it difficult to deploy AI quickly. However, it is important to recognise the increased confidence the act will bring to AI technology and its ability to boost ethical innovation that aligns with collective and shared values.

The EU AI Act is particularly significant for any business wanting to enter the EU AI market, and it carries some important implications in relation to perceived risks. It is comforting to know that the act plans to ban AI-powered systems that pose 'unacceptable risks', such as those that manipulate human behaviour, exploit vulnerabilities, or implement social scoring. The EU has also mandated that companies register AI systems falling under the 'high-risk' category: those in eight critical areas where they could impact safety or fundamental rights.

What about AI chatbots?

Generative AI systems such as ChatGPT and other models are classed as limited risk, but they must obey transparency requirements. In practice, this means users must be made aware that they are interacting with an AI system, so they can make an informed choice about whether to continue using these technologies.
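Purely as an illustration (the act does not prescribe any particular implementation, and the function name and wording below are assumptions), a chatbot front end might meet this disclosure obligation with a simple gate before each session:

```python
def start_chat_session(user_accepts: bool) -> str:
    """Gate a chat session behind an explicit AI disclosure.

    The transparency requirement means users should know they are
    talking to an AI; this wording and flow are illustrative only.
    """
    disclosure = ("You are about to interact with an AI assistant. "
                  "Responses are machine-generated, not written by a human.")
    if not user_accepts:
        return disclosure + "\nSession not started."
    return disclosure + "\nSession started. How can I help?"

# Example: the user sees the disclosure, then decides to proceed.
print(start_chat_session(user_accepts=True))
```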

With users fully informed, this part of the regulation remains relatively open for businesses, as they can provide optimum service to their customers without being hindered by the more complicated parts of the law. No additional legal obligations apply to minimal-risk AI systems in the EU beyond those already in place, giving businesses and customers the freedom to collaborate and innovate faster while developing a compliance strategy.

Article 53 of the EU AI Act gives businesses, non-profits, and other organisations free access to sandboxes for a limited participation period of up to two years, extendable subject to eligibility criteria. Participants agree a specific plan with the authorities outlining the roles, details, issues, methods, risks, and exit milestones of their AI systems, which helps make entry into the EU market straightforward. It also provides equal opportunities for startups and mid-sized businesses to compete with well-established AI businesses, without worrying too much about the costs and complexities of compliance.
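As an illustration of what such an agreed plan might capture, a participant could track its sandbox commitments in a simple structure like the sketch below. The act requires a plan agreed with the authorities but does not define a format, so every field name and value here is an illustrative assumption.

```python
from dataclasses import dataclass

# A hypothetical record of a sandbox participation plan, mirroring the
# elements named in the article: roles, issues, methods, risks, and
# exit milestones. None of these field names come from the act itself.
@dataclass
class SandboxPlan:
    system_name: str
    roles: dict[str, str]            # participant/authority responsibilities
    known_issues: list[str]
    test_methods: list[str]
    identified_risks: list[str]
    exit_milestones: list[str]
    participation_months: int = 24   # up to two years, extendable

plan = SandboxPlan(
    system_name="example-recommender",
    roles={"provider": "Acme AI Ltd", "supervisor": "national authority"},
    known_issues=["bias in training data under review"],
    test_methods=["red-team evaluation", "shadow deployment"],
    identified_risks=["unfair ranking of protected groups"],
    exit_milestones=["risk assessment signed off", "conformity check passed"],
)
print(f"{plan.system_name}: {plan.participation_months} months in sandbox")
```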

Where do we go from here?

Regulating AI across different nations is a highly complex task, but we have a duty to develop a unified approach that promotes ethical AI practices worldwide. There is, however, a large divide between policy and technology. As technology becomes further ingrained within society, we need to bridge this divide by bringing policymakers and technologists together to address ethical and compliance issues. We need to create an ecosystem where technologists engage with public policy, to try and foster technology that serves the public interest.

AI regulations are still evolving and will require a balance between innovation and ethics, as well as between global and local perspectives. The aim is to ensure that AI systems are trustworthy, safe, and beneficial for society, while also respecting human rights and values. Before they can work to the best effect for all parties, however, many challenges must be overcome, including the lack of common standards and definitions and the need for coordination and cooperation among different stakeholders.

There is no one-size-fits-all solution for regulating AI; it necessitates a dynamic and adaptive process supported by continuous dialogue, learning, and improvement.
