The European Union's (EU) new Artificial Intelligence (AI) Act is the first major piece of AI regulation to affect the market. As part of its digital strategy, the EU has expressed a desire to regulate AI as the technology develops.
We spoke to Sasan Moaveni, Global Business Lead for AI & High-Performance Data Platforms at Hitachi Vantara, to learn more about the act and how it will affect AI in Europe, as well as the rest of the world.
1. The EU has now finalised its AI Act, and the legislation is officially in effect, four years after it was first proposed. As the world's first major AI law, does it set a precedent for global AI regulation?
The Act marks a turning point in the provision of a strong regulatory framework for AI, highlighting the growing awareness of the need for the safe and ethical development of AI technologies.
AI in general, and ethical AI in particular, are complex topics, so it is important that regulatory authorities such as the EU clearly define the legal frameworks that organisations should adhere to. This helps organisations avoid potential grey areas in their development and use of AI.
Since the EU is a frontrunner in introducing a comprehensive set of AI regulations, it is likely to have a significant global impact and set a precedent for other countries, becoming an international benchmark. In any case, the Act will have an impact on all companies operating in, selling in, or offering services consumed in the EU.
2. The Act introduces a risk-based approach to AI regulation, categorising AI systems into four levels: minimal risk, specific transparency risk, high risk, and unacceptable risk. Systems classified as high risk, which can include critical infrastructure, must meet requirements such as strong risk-mitigation strategies and high-quality data sets. Why is this so crucial, and how can organisations ensure they do this?
Broadly speaking, high-risk AI systems are those that may pose a significant risk to the public's health, safety, or fundamental rights. This explains why systems categorised as such must meet a much more stringent set of requirements.
The first step for organisations is to correctly identify whether a given system falls within this category. The Act itself provides guidelines here, and it is also advisable to seek expert legal, ethical, and technical advice. If a system is identified as high risk, then one of the key considerations is data quality and governance. To be clear, this consideration should apply to all AI systems, but in the case of high-risk systems it is even more important, given the potential consequences of something going wrong.
Crucially, organisations must ensure that the data sets used to train high-risk AI systems are accurate, complete, representative, and, most importantly, free from bias. In addition, ongoing policies are needed to maintain the data's integrity, for example policies around data protection and privacy. And as AI develops, so too do the challenges around data management, requiring increasingly intelligent risk-mitigation and data-protection strategies.
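To make the data-quality point concrete, a first-pass check might look something like the minimal Python sketch below. The file name, the column names ("gender", "approved") and the disparity threshold are hypothetical, chosen purely for illustration; a genuine bias audit for a high-risk system would go well beyond a simple imbalance check.

```python
import pandas as pd

# Hypothetical training data with a sensitive attribute ("gender")
# and a binary outcome label ("approved"); names are illustrative only.
df = pd.read_csv("training_data.csv")

# Completeness: flag columns with missing values.
missing = df.isna().mean()
incomplete = missing[missing > 0]
if not incomplete.empty:
    print("Columns with missing values:\n", incomplete)

# Representativeness: compare group sizes for the sensitive attribute.
group_share = df["gender"].value_counts(normalize=True)
print("Group shares:\n", group_share)

# A crude bias signal: outcome rates that differ sharply across groups
# warrant investigation (a starting point, not a full bias audit).
approval_by_group = df.groupby("gender")["approved"].mean()
print("Approval rate by group:\n", approval_by_group)
if approval_by_group.max() - approval_by_group.min() > 0.2:  # illustrative threshold
    print("Warning: large disparity in outcomes across groups")
```

The value of automating checks like these is that they can be re-run as the data evolves, which supports the kind of ongoing integrity policies described above.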
With an effective strategy in place, businesses can ensure that, should a data-threatening event occur, the Act's requirements are not breached and operations can resume quickly, with minimal downtime, cost, and interruption to critical services.
3. With AI developing at an exponential rate, many have expressed concerns that regulatory efforts will always be on the back foot and racing to catch up, with the EU AI Act itself going through extensive revisions before its launch. How can regulators tackle this challenge?
As the prevalence of AI continues to increase, considerations such as data privacy, which is regulated by GDPR in Europe, continue to gain importance.
The EU AI Act is another key legal framework in this vein. Moving forward, we will see more and more legal restrictions like this come into play; for example, we may see developments in areas such as intellectual property ownership. The areas that need to be tackled will evolve and mature as the AI market continues to develop.
However, it is also important to realise that no regulatory framework can anticipate all possible future developments in AI technology. It is for this reason that striking a balance between legislation and innovation is so important.
4. The Act will significantly impact big tech firms such as Microsoft, Google, Amazon, Apple, and Meta, which will face substantial fines for non-compliance. Could the Act also hinder innovation by creating red tape for start-ups and emerging industries?
We don't know yet whether the Act will help or hinder innovation. However, it's important to remember that it won't categorise all AI systems as high risk. There are different system designations within the EU AI Act, and the most stringent regulations only apply to systems designated as high risk.
We may see some teething pains as the industry adapts and strikes the right balance between innovation and regulation. Think back to when cloud computing hit the market: enterprises planned to move all their workloads to the cloud before recognising that public cloud was not suitable for every workload.
Over time, I think that we will reach a similar state of equilibrium with AI.
5. Overall, how can businesses ensure they remain compliant with the Act as they implement AI into their operations?
First and foremost, before implementing any AI projects, businesses need to ensure that they have a clear strategy, with well-defined goals and objectives for what they want to achieve.
Once that is in place, they should carefully select the right partner or partners, who can ensure not only delivery of the business objectives but also adherence to all relevant regulations, including the EU AI Act.
This approach will go a long way towards ensuring that they get the business benefits that they’re looking for, as well as remaining compliant with applicable regulations.