The EU AI Act Is Ready, But Are We?
Now that the EU AI Act has been passed, companies face substantial fines for non-compliance. Big tech can absorb these costs, while smaller companies and innovation hubs may struggle to adapt.
The European Union (EU) has officially approved the Artificial Intelligence Act. The regulation was endorsed by MEPs with 523 votes in favour, 46 against and 49 abstentions. The new law will ban certain AI applications, strictly regulate high-risk uses, and require transparency and stress-testing for advanced AI models. While the framework was first announced in 2021, more details were shared over the past year, and it has now been formally adopted. The Act is expected to become fully enforceable two years after its entry into force, likely in 2026. However, the timeline varies by category: prohibitions on banned AI practices apply just 6 months after entry into force, provisions for General Purpose AI (GPAI) apply after 12 months, and rules for AI systems embedded in regulated products apply after 36 months.
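The staggered deadlines above can be sketched with a few lines of date arithmetic. The entry-into-force date used here is purely illustrative (the real date depends on publication in the EU's Official Journal):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the same day-of-month a given number of months later."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return date(d.year + years, month_index + 1, d.day)

# Hypothetical entry-into-force date, for illustration only.
entry_into_force = date(2024, 8, 1)

# Months-after-entry-into-force offsets stated in the Act's rollout.
milestones = {
    "Prohibited AI practices": add_months(entry_into_force, 6),
    "General Purpose AI (GPAI) provisions": add_months(entry_into_force, 12),
    "Full applicability": add_months(entry_into_force, 24),
    "AI embedded in regulated products": add_months(entry_into_force, 36),
}

for rule, deadline in milestones.items():
    print(f"{rule}: applicable from {deadline}")
```

The point of the sketch is simply that each category counts forward from the same entry-into-force date, not from the date of full enforceability.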
This final version of the Act expands the scope of what constitutes "high-risk" AI systems to include applications in law enforcement, migration control, and the recommender systems of very large social media platforms. It prohibits AI systems designed to manipulate or exploit user vulnerabilities, acknowledging AI's potential to infringe upon personal freedoms and privacy. It also grants consumers the right to lodge complaints and demand explanations for decisions made by high-risk AI systems, promoting transparency and accountability.
The EU's move to regulate AI is a global first, contrasting with other countries that have mostly relied on voluntary guidelines. Key EU economies like Germany and France initially expressed concerns that the law might hinder local AI innovation. However, these concerns were addressed through diplomatic negotiations and the creation of the EU's Artificial Intelligence Office, which will enforce the Act.
The office will foster collaboration, innovation, and research in AI among various stakeholders, while also creating an environment where “AI technologies respect human dignity, rights, and trust”.
It will engage in international dialogue and cooperation on AI issues, acknowledging the need for global alignment on AI governance.
Through these efforts, the European AI Office strives to position Europe as a leader in the “ethical and sustainable development of AI technologies”.
What are the consequences for breaching the Act? The penalties below are expressed as a fixed amount or a percentage of the company's worldwide annual revenue, whichever is higher.
For infringements involving prohibited AI practices, fines can reach up to €35 million or 7% of worldwide annual revenue, whichever is greater. For breaches of other requirements under the Act, fines may be as high as €15 million or 3% of global annual turnover. Providing false information as required by the AI Act can result in fines of up to €7.5 million or 1.5% of the entity's worldwide annual revenue, again whichever amount is higher. To be clear, these amounts are not that worrying for big tech corporations, many of whom have already offset the cost of historical fines or lawsuits by virtue of their hefty revenue streams. Smaller companies, NGOs and other such organizations may be disproportionately affected by these fines.
Major tech companies have expressed public support for the legislation in principle, while privately raising concerns about specific provisions, such as limits on computing power for training AI models. Many computer scientists and tech experts, including Andrew Ng of DeepLearning.AI, warn that the EU's computing power limits could prompt European companies to relocate to avoid overly restrictive regulations. There is a concern not only of overreach in this Act, but perhaps of underreach too: the regulation covers workers' rights, carbon emissions, and energy usage in little detail, and makes no reference to restrictions on AI in military settings.
The careful integration of AI, as evidenced by its widespread adoption across business units and the incredible popularity of applications like ChatGPT and Stable Diffusion (considered GPAI under the AI Act), underscores the need for a clear and precise definition of AI systems. Too broad a definition risks unhelpfully sweeping in simpler methods and technologies. The proposed enforcement strategy is generally to recruit more experts, yet experts are scarce in such an emergent field full of new problems. For businesses, this may stifle investment in AI: they will either have to build their own machine learning operations or pay for third-party services to comply with potentially stringent rules.
“Laws to suppress tend to strengthen what they would prohibit.”
- Frank Herbert, author of Dune
This Act actually increases the demand for more data and AI experts, and more policymakers and legal professionals will be needed because of it. The Act focuses on ensuring that human oversight measures are appropriate, including technical measures to facilitate the interpretation of AI system outputs. Although the EU AI Act treats bias as a major priority, the actual problem of bias is extremely difficult and nuanced to tackle.
In parallel, the White House's Executive Order on AI is also on its way, although its enforcement dates are less clear. More Acts will likely have to pass through Congress before anything as substantial as the EU AI Act emerges in the US.
Since I feel strongly about this subject, I endorse the call for educating and upskilling people in data and AI. By the same token, data scientists and experts should self-teach and seek experience in responsible AI practices. It's not enough to simply quote a number and glance over a few graphs to gauge the potential risk or impact of a machine learning model. There are socio-economic, psychological and, now, legal factors that will change how we can use the technology. We need people with not only coding, logic and problem-solving abilities, but also creative, humanitarian and philosophical pursuits. The law cannot be effective if it's purely top-down; at the same time, low-level contributors must push back with educated and cautious intent. This is especially important given those currently influencing the government through lobbying.
It's not just ability we need but also communication. Software engineers and data scientists should all be shifting towards design, storytelling and anthropology. There is no value in a dataset and a model without a story behind them and a call to action. The solution does not exist in isolation from how you approach the problem. Real understanding flows from multiple perspectives held simultaneously. The generalist will thrive here, in an ever-shifting, wicked learning environment.
Given my audience are predominantly healthcare professionals, I decided to go into more detail about how the regulation may impact the industry. AI systems increasingly used in critical areas like patient triage, medical aid, patient diagnostics, clinical decision support, patient monitoring, drug discovery, and robotic surgery are classified as high-risk due to their profound impact on health and life decisions. These systems, particularly those used for risk assessment and pricing, significantly influence decisions about individuals' eligibility for health insurance, including evaluations for healthcare services or public assistance benefits. This categorization underlines both the growing potential of AI in healthcare and its substantial implications for patient outcomes. High-risk systems will require thorough testing, data quality documentation, and fundamental rights impact assessments, emphasizing the Act's focus on safety, transparency, and accountability.
HIMSS analysis suggests that the EU's efforts to regulate AI in healthcare are viewed positively for fostering trust and engagement with AI technologies, improving patient outcomes, and potentially reducing healthcare costs. The Act aims to standardize AI definitions and promote education to ensure a clear understanding of AI's applications in healthcare, thereby supporting the development of patient-centric and outcome-oriented medical technologies. It also recognizes the benefits of AI in disease detection, diagnosis, prevention, control, treatment, and overall healthcare system improvements, which aligns with broader industry interests and operations. The relevance of the Medical Device Regulation to AI applications in healthcare cannot be overstated. As Generative AI continues to be adopted by big pharma and healthcare, so too will the impact of these regulations grow.
To learn more about the EU AI Act, check out a deep dive I did, analyzing its strengths and limitations with a broader look at ethical and legal concerns when it comes to AI and data regulation.
Looking for something a bit different to read? Check out my other letters covering key topics in this space.