The European Union's long-awaited and much-debated AI Act, formally adopted by the European Parliament on 13 March 2024, addresses the rapid advancement and integration of artificial intelligence technologies across sectors including healthcare, finance, transportation, and law enforcement.
Reflecting the European Commission’s acknowledgement of both AI’s transformative potential and its risks and ethical implications, this comprehensive new framework seeks to foster innovation and competitiveness while also protecting individual rights and societal values.1
The ethical principles that guide the development and use of AI are central to the new Act.
These ethical principles are woven throughout the regulation, influencing its rules on transparency, data management, and accountability mechanisms.
A distinctive feature of the AI Act is its risk-based approach, which categorises AI systems according to the level of threat they may pose to rights and safety, ranging from prohibited unacceptable-risk systems through high-risk and limited-risk systems down to minimal-risk applications.
This framework allows for a nuanced regulatory approach, tailoring requirements to the potential harm an AI system might cause.
Special mention is made of providers of General Purpose AI (GPAI) models, who are required to put in place documentation and policies addressing systemic risk, which must also be made available to the EU AI Office and national competent authorities.2
High-risk AI systems. Due to their potential impact on individuals’ rights and safety, high-risk AI systems require particularly stringent oversight, and organisations should focus on areas such as risk management, data governance, and human oversight.
Limited risk AI systems. While posing less of a threat, limited risk AI systems still require certain safeguards, primarily focused on transparency and user information.
Fostering a culture of ethical AI use within the organisation is crucial across both categories. This includes training employees on AI ethics and legal requirements and establishing cross-functional teams to oversee AI governance, mitigate risks, and leverage AI's potential responsibly and ethically.
Figure 1: The risk-based approach of the EU AI Act
Adopting a risk-based approach across a broad spectrum of applications and industries, the Act will inevitably impact the financial services industry, particularly in respect of the ‘high-risk’ provisions outlined above. Specific use cases include creditworthiness assessment and credit scoring, as well as risk assessment and pricing in life and health insurance, which the Act designates as high-risk.
Provisions on transparency, data governance, human oversight, and risk management will be particularly relevant to these and other AI applications within financial services. Organisations in the sector must conduct thorough assessments to classify their AI systems according to the risk framework and comply with the corresponding regulatory requirements.
While the Act aims to be technology-neutral and flexible across different sectors, its impact on financial services highlights the importance of aligning AI applications with ethical standards and legal obligations to protect consumers and maintain financial stability. As the Act progresses towards implementation, further guidance and standards specific to high-risk applications, including those in the financial services industry, are expected to clarify compliance expectations.
Giving effect to the AI Act across EU member states involves several steps:
Adoption and Entry into Force. Following the EU Parliament vote, the Council of the EU is expected to adopt the AI Act by the end of June 2024, after which it will be published in the Official Journal of the European Union. The Act enters into force 20 days after publication, at which point the implementation phase begins.
Phased application. As a regulation, the AI Act applies directly in member states without the transposition a directive would require, but its obligations phase in over roughly two years. Prohibitions on AI systems posing unacceptable risks take effect six months after entry into force, and by mid 2026 most remaining provisions, including those governing high-risk systems, are expected to apply, with member states having designated their national competent authorities and aligned local rules.
Figure 2: The EU AI Act – selected milestones
While the EU AI Act has been hailed for its proactive stance on AI governance, concerns have been voiced about its potential impact on innovation. Critics argue that the Act’s prescriptive regulations may stifle technological advancements and burden startups with compliance costs, hindering the bloc’s competitiveness in the global AI race. There is apprehension that, by prioritising regulation, the EU might fall behind in nurturing an environment conducive to cutting-edge AI research and development.
In summary, the EU AI Act represents a bold step towards ethical and regulated AI deployment. By establishing clear rules and principles, it aims to protect citizens and uphold democratic values. However, as noted, the balance between regulation and innovation remains a contentious issue. As the Act moves towards full implementation over the next few years, its impact on the AI landscape in Europe – and beyond – will be closely watched.
__________________________________________________________
1 https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html
2 https://www.europarl.europa.eu/RegData/etudes/ATAG/2023/745708/EPRS_ATA(2023)745708_EN.pdf