An Early Look at the EU AI Act
Taking a peek through the leaked files of the world's first comprehensive AI regulation.
On Friday, December 8, 2023, the European Union reached a unanimous agreement on the Artificial Intelligence Act, a pioneering regulation for AI technology. The new law will ban certain AI applications, strictly regulate high-risk uses, and require transparency and stress-testing for advanced AI models. While the framework was first proposed in 2021, more details were shared, and even leaked, over the past year.
The EU's move, a first globally, contrasts with other countries that have mostly relied on voluntary guidelines for AI regulation. Key EU economies like Germany and France initially expressed concerns that the law might hinder local AI innovation. However, these were addressed through diplomatic negotiations and the creation of the EU’s Artificial Intelligence Office, which will enforce the Act. The Act, still pending formal approval by the European Parliament, allows member countries to implement stricter rules for technologies like facial recognition. The AI Act represents a significant step in regulating emerging AI technologies, balancing innovation with necessary safeguards.
A German member of the European Parliament said the final text of the bloc's new rules on artificial intelligence, obtained by POLITICO, was "an attack on civil rights" and could enable "irresponsible and disproportionate use of biometric identification technology, as we otherwise only know from authoritarian states such as China."
Being the first major AI regulation means that it sets a precedent for future frameworks not just in Europe, but worldwide. This Act aims to ensure that AI technologies are used responsibly, transparently, and in a manner that respects human rights. That does not mean it covers all the broader aspects of AI and the tech industry at large.
Notably, the EU AI Act was introduced well before the White House's Executive Order on AI.
The Act is expected to become enforceable two years after its entry into force, likely in 2026, with exceptions for certain provisions. The prohibitions on banned AI systems, for instance, will apply just six months after entry into force, while the provisions covering General Purpose AI (GPAI) will apply after 12 months. To complement the regulation, the Commission is launching the AI Pact, seeking voluntary commitments from industry to start implementing the Act's requirements ahead of the legal deadlines.
The Commission's press release indicates that penalties for non-compliance with the AI Act can be substantial. Fines can reach up to €35 million or 7% of worldwide annual revenue, whichever is greater, for infringements involving prohibited AI practices. For breaches of other requirements under the Act, fines may be as high as €15 million or 3% of global annual turnover. Supplying false information required by the AI Act can result in fines of up to €7.5 million or 1.5% of worldwide annual revenue, again whichever is higher.
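The "whichever is greater" structure of these penalty caps is easy to make concrete. A minimal sketch, where the €2 billion turnover figure is a hypothetical chosen purely for illustration:

```python
def max_fine(fixed_cap_eur: int, turnover_pct: float, annual_turnover_eur: int) -> float:
    """Penalty cap under the Act's structure: the greater of a fixed
    amount or a percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# Hypothetical firm with 2 billion EUR worldwide annual turnover:
turnover = 2_000_000_000

# Prohibited-practice infringement: up to 35M EUR or 7% of turnover
print(max_fine(35_000_000, 0.07, turnover))   # 140000000.0 -- the 7% cap dominates

# Breaches of other requirements: up to 15M EUR or 3% of turnover
print(max_fine(15_000_000, 0.03, turnover))   # 60000000.0

# False information: up to 7.5M EUR or 1.5% of turnover
print(max_fine(7_500_000, 0.015, turnover))   # 30000000.0
```

For a small firm the fixed cap dominates instead; the structure ensures the penalty scales with the size of the violator.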
The regulation does not cover workers' rights, carbon emissions, or energy usage in much detail. Meanwhile, the rapid integration of AI across business units, and the enormous popularity of applications like ChatGPT and Stable Diffusion, underscores the need for a clear and precise definition of AI systems. In another letter I explore workers' rights and the protests of actors and writers affected by AI.
Too broad a definition of AI risks sweeping in simpler methods and technology. The government's proposed strategy is, broadly, to hire more experts, yet there are unlikely to be many in such an emergent field that presents new problems. For businesses, this may stifle investment in AI: they will either have to build their own machine learning operations or pay for third-party services to comply with potentially stringent rules. At the same time, it will increase demand for data and AI experts, and the Act will create work for more policymakers and legal professionals. The Act focuses on ensuring that human oversight measures are appropriate, including technical measures to facilitate the interpretation of AI system outputs. In another letter, I explore the need to monitor the text generations of chatbots.
Importance is given to human oversight in AI development and use, as described in the Act. This includes risks related to control, privacy, and the impact on democratic values and human rights. It discusses the role of human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity, non-discrimination and fairness, societal and environmental well-being, and accountability. These principles contribute to the design of trustworthy and human-centric AI.
Since I feel strongly about this subject, I endorse the call to educate and upskill people in data and AI. By the same token, data scientists and experts should self-teach and seek experience in responsible AI practices. It's not enough to simply quote a number and examine a few graphs to gauge the potential risk or impact of a machine learning model. There are socio-economic, psychological, and now especially legal factors that will change how we can use the technology. We need people with not only coding, logic, and problem-solving abilities, but also creative, humanitarian, and philosophical pursuits. The law cannot be effective if it's purely top-down; at the same time, low-level contributors must push back with educated and cautious intent. This matters especially when considering who is currently influencing and lobbying governments.
It's not just ability we need but also communication. Software engineers and data scientists should be shifting towards design, storytelling, and anthropology. There is no value in a dataset and a model without a story behind them and a call to action. The solution does not exist in isolation from how you approach the problem. Real understanding flows from multiple perspectives held simultaneously, and the generalist will thrive in such an ever-shifting, wicked learning environment.
Depending on the severity of the implementation, the skyrocketing growth of AI investment could slow down the way the automobile industry's did once cars were relatively cheap and widely available. The bubble could also pop: a failed acquisition involving OpenAI could kill a huge amount of AI investment, comparable to the fallout of the AOL-Time Warner merger in 2000. The automobile precedent is instructive: as environmental concerns grew, governments introduced emission standards, and manufacturers had to invest significantly in R&D to develop cleaner engines and exhaust systems. This raised production costs and final prices. In the long run, that didn't stop the growth in car ownership, something I explored in some detail in my piece about the journey from automobiles to self-driving vehicles.
The Act introduces a nuanced 'pyramid of criticality' to classify AI systems. This framework segments AI into four distinct risk categories. Each tier, from 'unacceptable' to 'limited risk,' is bound by its own set of regulations. This classification scheme, innovative in its approach, has not been without its challenges. The task of pigeonholing AI systems into these categories has proven to be a complex puzzle.
Significant amendments have broadened the scope of what constitutes high-risk AI. This expansion notably includes certain realms of law enforcement and migration control, and the largest social media recommender systems now find themselves labeled high-risk. The Act, however, fails to discuss the role of chips, rare earth minerals, or international workers' rights, and among its high-risk examples it does not mention or elaborate in any detail on the role of AI in the military.
This move signals a growing recognition of the profound impact these systems have on public discourse and individual behavior. A significant stride in the Act is the extension of the prohibition on AI-based social scoring. This move is not just symbolic; it has serious implications for industries like credit and insurance.
The Act establishes safeguards for the use of general-purpose artificial intelligence. It also delineates clear limitations on the deployment of biometric identification systems by law enforcement. Notably, it bans AI systems designed to manipulate or exploit user vulnerabilities. In doing so, the Act acknowledges the potential for AI to infringe upon personal freedoms and privacy. Additionally, the Act empowers consumers, granting them the right to lodge complaints and demand clear explanations for decisions made by high-risk AI systems. This is a step towards ensuring transparency and accountability in AI applications.
The Act is committed to combating bias and discrimination in AI systems. It mandates that AI systems be developed inclusively, promoting equal access, gender equality, and cultural diversity. This includes a requirement for providers to process special categories of personal data, specifically for the purpose of monitoring, detecting, and correcting biases in high-risk AI systems. The Act's focus on diversity and fairness aims to prevent discriminatory impacts and biases that are prohibited under Union or national law.
Fairness, equal treatment, and the provision of opportunities for all are fundamental considerations for real-world AI systems within the European Union. In this context, various laws, including EU Human Rights Acts and directives addressing non-discrimination, protect specific groups of individuals from discriminatory practices based on factors such as race, color, religion, gender, national origin, age, or disability. If you wish to dive more into this subject and some major concerns and stories about AI and bias, you can read one of my popular letters on the subject.
By crafting and enforcing policies that align AI systems with fairness and discrimination laws, such as Title VII, ADEA, and ADA, we can foster an environment where AI serves as a harmonious and supportive thread within the societal web, contributing to the well-being and fair treatment of all. This is not only a way to make society better and more inclusive but may be a legal regulation from various levels of government, especially from countries with EU influence. Individual states and government departments are drafting up proposals too, consulting with experts and building panels to ideate a possible regulatory roadmap.
The Act emphasizes the need to protect individuals from discrimination that might result from biases in AI systems. Providers should process special categories of personal data to ensure bias monitoring, detection, and correction in high-risk AI systems. This approach ensures that AI systems are developed and used in a manner that includes diverse actors, promotes equal access, gender equality, and cultural diversity, and avoids discriminatory impacts and biases prohibited by Union or national law.
In Recital 44, the Act discusses the need for high data quality for the performance of many AI systems, especially those employing model training techniques. The aim is to ensure that these high-risk AI systems function as intended, safely, and do not become sources of discrimination.
The Act emphasizes that high-quality training, validation, and testing data sets are pivotal for this purpose. These data sets need to be implemented with appropriate data governance and management practices. They should be relevant, representative, error-free, and complete, considering the intended purpose of the system. Additionally, these data sets must possess the appropriate statistical properties concerning the persons or groups on which the high-risk AI system will be used.
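What such data-governance checks might look like in practice can be sketched minimally. This covers only two of the criteria, completeness and representativeness; the record fields and group labels are hypothetical:

```python
from collections import Counter

def dataset_quality_report(rows: list[dict], group_key: str) -> dict:
    """Minimal data-governance sketch echoing the Act's language:
    completeness (fraction of missing values) and representativeness
    (share of each demographic group in the data)."""
    n = len(rows)
    total_cells = sum(len(r) for r in rows)
    missing = sum(1 for r in rows for v in r.values() if v is None)
    counts = Counter(r[group_key] for r in rows if r.get(group_key) is not None)
    return {
        "n_rows": n,
        "missing_fraction": missing / total_cells if total_cells else 0.0,
        "group_shares": {g: c / n for g, c in counts.items()},
    }

# Toy dataset with one missing value and two demographic groups:
sample = [
    {"age": 25, "group": "A", "label": 1},
    {"age": None, "group": "B", "label": 0},
    {"age": 40, "group": "A", "label": 1},
    {"age": 31, "group": "B", "label": 0},
]
report = dataset_quality_report(sample, "group")
```

A real pipeline would go much further, checking relevance, label error rates, and the statistical properties of each group, but even this level of reporting makes "representative and complete" auditable rather than aspirational.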
An important aspect noted in this recital is the consideration of specific features, characteristics, or elements unique to the particular geographical, behavioral, or functional settings or contexts in which the AI system will operate. This approach underlines the need for AI systems to be adaptable and sensitive to the diversity of environments and user groups they may encounter.
Special attention is required to mitigate possible biases in data sets that might lead to risks to fundamental rights or discriminatory outcomes. The Act acknowledges that biases can be inherent in datasets (especially historical data), introduced by algorithm developers, or generated when systems are implemented in real-world settings. AI system results are influenced by these inherent biases, which highlights the need for robust bias detection and correction mechanisms. To protect against discrimination from AI biases, providers should be allowed to process special categories of personal data. This brings to light another major problem the Act addresses: data privacy.
Although data privacy concerns are most associated with social media, there have been increasing cases involving law enforcement surveillance, airport security screenings, and decisions related to employment and housing. Law enforcement uses facial recognition to match suspects' images against mugshots and driver's license photos; it is estimated that nearly half of all American adults, over 117 million people as of 2016, are part of a facial recognition network used by police. Since this is personal data, the Act demands higher standards of privacy, though the gap between technical and legal standards of privacy leaves a grey area.
There is emphasis on the use of "metrics," the tools for measuring the performance and effectiveness of AI systems. The Act requires clear and transparent communication of these metrics. It's not merely a matter of technical compliance; it's about instilling confidence and understanding in users and deployers. The Act mandates detailed descriptions of the appropriateness of performance metrics for specific AI systems, encompassing input data and technical measures for interpreting outputs.
While facial recognition algorithms claim high classification accuracy (e.g. above 90%), the results are not consistent across demographics. Numerous studies reveal varying error rates among demographic groups, with the lowest accuracy consistently observed among female, Black, and 18-30-year-old subjects. Enforced transparency lets us observe whether a system produces bias or discrimination. Even with this regulation, however, the problem is not solved, because companies can influence which metrics get shared and how those metrics are created and interpreted.
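Spotting this kind of disparity is straightforward once predictions are disaggregated by group, which is precisely what transparency requirements make possible. A minimal sketch with made-up predictions (the group names and labels are purely illustrative):

```python
from collections import defaultdict

def error_rate_by_group(records):
    """Disaggregate classification error rate by demographic group.
    Each record is (group, true_label, predicted_label)."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy predictions: aggregate accuracy is 62.5%, but that single
# number hides a 2x error-rate gap between the two groups.
preds = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]
print(error_rate_by_group(preds))  # {'group_a': 0.25, 'group_b': 0.5}
```

This is exactly why the choice of reported metric matters: a provider disclosing only aggregate accuracy would look compliant while the disaggregated view tells a different story.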
The AI Act insists on rigorous testing of high-risk AI systems against predefined metrics and probabilistic thresholds. This testing, essential before market entry or service, ensures that systems perform as intended, without unforeseen risks or misuse. Furthermore, the Act requires high-risk AI systems to declare their accuracy metrics in accompanying instructions, emphasizing clarity and the avoidance of misleading statements. Validation and testing procedures outlined in the Act underscore its commitment to cybersecurity and compliance with relevant standards. These procedures are not just technical mandates but form part of a broader strategy to build trust in AI technologies.
The rise of deepfakes, including instances like the manipulated video of President Zelenskyy, supports the Act's stance on prohibiting AI applications that could manipulate or exploit users. The Act imposes minimum transparency obligations on users of AI systems that “generate or manipulate images, audio or video content that appreciably resembles existing persons” and may appear to be authentic or truthful to disclose that the content has been artificially generated or manipulated.
In healthcare, AI systems are increasingly utilized in critical areas like patient triage or medical aid, and are rightly classified as high-risk due to their profound impact on health and life decisions. These systems, particularly those used for risk assessment and pricing, have a significant influence on decisions about individuals' eligibility for health insurance, including evaluations for healthcare services or public assistance benefits. This categorization underlines the growing potential of AI in enhancing healthcare services. Additionally, the Act recognizes the benefits of AI in disease detection, diagnosis, prevention, control, treatment, and overall healthcare system improvements, which aligns with broader industry interests and operations. The relevance of the Medical Device Regulation to AI applications in healthcare cannot be overstated.
The European Commission's AI Act, introduced in 2021, is a pioneering regulation in the AI landscape, setting a precedent not only in Europe but globally. It addresses a broad spectrum of AI applications, from healthcare to law enforcement, with an emphasis on responsible, transparent usage that respects human rights. Significant amendments have expanded its scope to include high-risk AI, tackling issues of bias, discrimination, data privacy, and potential AI misuse. The Act's implementation, marked by a staggered timeline with varying enforceability for different provisions, reflects the complexity of AI regulation. The stringent fines for non-compliance underline the EU's commitment to ethical AI practices. Beyond setting technical and ethical standards, the Act aims to create an environment where AI is a beneficial societal force, balancing innovation with citizens' rights protection. Its comprehensive approach, from data quality and transparent metrics to stringent testing of high-risk systems, underscores a commitment to cybersecurity and compliance, especially in sensitive areas like healthcare and facial recognition. As AI increasingly integrates into various life aspects, the EU's AI Act emerges as a critical moment in technology regulation, poised to influence global AI policies and symbolize a significant step towards an ethically-guided, responsible AI future.
Please consider subscribing and sharing if you feel this topic is important. Rest assured, I will continue to share the latest and breaking news on this topic and other important developments in AI and law.