THE EU AI ACT: A STRATEGIC FRAMEWORK FOR RESPONSIBLE DEVELOPMENT

  • Konstantinos Liakeas
  • Published: 12 April 2024


The European Union's long-awaited and much-debated AI Act, formally adopted by the European Parliament on 13 March 2024, addresses the rapid advancement and integration of artificial intelligence technologies across sectors including healthcare, finance, transportation, and law enforcement.

The new comprehensive framework reflects the European Commission’s acknowledgement of both AI’s transformative potential and its risks and ethical implications; it seeks to foster innovation and competitiveness while also protecting individual rights and societal values.1
 

Ethical AI principles

The ethical principles that guide the development and use of AI are central to the new Act. These principles emphasise:

  • Respect for human autonomy – AI systems should support human decision-making processes, not undermine them.
  • Prevention of harm – AI applications must prioritise safety and ensure that they do not harm people physically or psychologically.
  • Fairness – the Act calls for measures to prevent discrimination and ensure equity in AI outcomes.
  • Transparency and accountability – AI systems should be transparent and explainable, allowing for accountability in their operation and outcomes.
  • Privacy and data governance – protecting personal data and privacy is underscored, aligning with the General Data Protection Regulation (GDPR).

These ethical principles are woven throughout the regulation, influencing its rules on transparency, data management, and accountability mechanisms.
 

Risk categorisation

A distinctive feature of the AI Act is its risk-based approach, categorising AI systems according to the level of threat they may pose to rights and safety:

  • Unacceptable risk – AI practices that manipulate human behaviour, exploit vulnerable individuals, or enable social scoring are banned.
  • High risk – AI applications in critical sectors (e.g. healthcare, policing, and employment) must comply with strict requirements, including risk assessments, data quality controls, and transparency obligations.
  • Limited risk – AI systems like chatbots should disclose their non-human nature to users.
  • Minimal or no risk – many AI applications fall into this category, where the Act imposes minimal obligations, recognising their low threat level.

This framework allows for a nuanced regulatory approach, tailoring requirements to the potential harm an AI system might cause.
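
To make this tiering concrete, the sketch below shows one way an organisation might encode the four tiers in an internal AI inventory. It is a minimal illustration in Python; the names (RiskTier, classify_use_case) and the example mappings are hypothetical, not taken from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright, e.g. social scoring
    HIGH = "high"                  # strict obligations, e.g. credit scoring
    LIMITED = "limited"            # transparency duties, e.g. chatbots
    MINIMAL = "minimal"            # few or no obligations, e.g. spam filters

# Hypothetical internal inventory mapping use cases to tiers.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify_use_case(use_case: str) -> RiskTier:
    """Look up a use case's tier; unknown systems default to HIGH
    pending a proper legal assessment (a deliberately cautious choice)."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(classify_use_case("customer_chatbot"))  # RiskTier.LIMITED
```

Defaulting unknown systems to the high-risk tier keeps such an inventory conservative until a formal legal assessment has been made.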
 
Special mention is made of providers of general-purpose AI (GPAI) models, who are required to put in place documentation and policies addressing systemic risk; these must also be made available to the EU AI Office and national competent authorities.2
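
As a rough sketch of how a provider might keep such material organised internally, the hypothetical record below gathers model documentation and systemic-risk policies in one structure. The field names are illustrative assumptions; the Act and forthcoming guidance define the actual required content.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class GPAIModelDoc:
    """Minimal internal record of a GPAI model's documentation, kept
    ready to share with the EU AI Office and national authorities."""
    model_name: str
    provider: str
    training_data_summary: str
    known_systemic_risks: list[str] = field(default_factory=list)
    mitigation_policies: list[str] = field(default_factory=list)

doc = GPAIModelDoc(
    model_name="example-gpai-1",
    provider="ExampleCo",
    training_data_summary="Public web text and licensed corpora (summary only).",
    known_systemic_risks=["large-scale disinformation"],
    mitigation_policies=["pre-release red-teaming", "incident reporting"],
)
print(asdict(doc))
```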

Risk mitigation

High-risk AI systems. Due to their potential impact on individuals’ rights and safety, high-risk AI systems require particularly stringent oversight. Organisations should focus on:

  • Risk assessment and management – conduct thorough risk assessments to identify and evaluate risks associated with AI systems and develop a risk management plan detailing mitigation strategies and contingency plans.
  • Data governance – implement robust data governance practices to ensure the quality, accuracy, and integrity of data used by AI systems; establish data collection, storage, processing, and sharing procedures in compliance with privacy regulations like GDPR.
  • Transparency and documentation – maintain comprehensive documentation for AI systems, including their design, development, deployment processes, and decision-making mechanisms; the documentation should be accessible to relevant stakeholders to ensure transparency.
  • Ethical and legal compliance – develop AI systems per established ethical guidelines and legal requirements; ensure non-discrimination, fairness, and the protection of fundamental rights.
  • Human oversight – ensure meaningful human oversight throughout the AI system's lifecycle; set up processes for human intervention in decision-making and mechanisms for users to challenge AI decisions (see the sketch after this list).
  • Security and reliability – implement strong cybersecurity measures to protect AI systems from unauthorised access and attacks; regularly test and monitor AI systems for any vulnerabilities or failures.
  • Auditability – facilitate internal and external audits of AI systems to assess compliance with regulatory requirements and ethical standards; authorised auditors should have access to algorithms, data, and decision-making processes.
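
One way to operationalise the human-oversight and auditability points above is to wrap every automated decision in a review step and an append-only log. The sketch below is purely illustrative: the function names, the 0.6 threshold, and the JSON-lines log format are all assumptions, not requirements of the Act.

```python
import json
import time
from typing import Callable, Optional

def log_decision(record: dict, path: str = "audit_log.jsonl") -> None:
    """Append a decision record to a JSON-lines audit log for later review."""
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

def decide_with_oversight(
    model: Callable[[dict], float],
    case: dict,
    review_threshold: float = 0.6,
    human_review: Optional[Callable[[dict, float], bool]] = None,
) -> bool:
    """Approve or deny a case, escalating low-confidence scores to a
    human reviewer and logging every outcome for auditability."""
    score = model(case)
    if human_review is not None and score < review_threshold:
        decision = human_review(case, score)  # meaningful human intervention
        decided_by = "human"
    else:
        decision = score >= review_threshold
        decided_by = "model"
    log_decision({
        "timestamp": time.time(),
        "inputs": case,
        "score": score,
        "decision": decision,
        "decided_by": decided_by,  # supports user challenges and audits
    })
    return decision
```

Recording who (or what) made each decision, and on what inputs, is what later allows a user challenge or an external audit to reconstruct the outcome.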

Limited-risk AI systems. While posing less of a threat, limited-risk AI systems still require certain safeguards, primarily focused on transparency and user information:

  • Transparency to users – disclose the use of AI, particularly in cases where it might not be apparent (e.g. chatbots); users should be informed that they are interacting with an AI system (a minimal disclosure sketch follows this list).
  • User information and consent – provide users with information about the AI system's capabilities, limitations, and the nature of its decision-making processes; where applicable, obtain user consent in accordance with privacy laws.
  • Quality and safety standards – even if the AI system poses limited risk, maintaining high quality and safety standards is essential; the system should be regularly reviewed and updated to ensure it functions as intended without posing unforeseen risks.
  • Feedback mechanisms – implement mechanisms for users to provide feedback on the AI system's performance and any issues encountered; use this feedback to make necessary adjustments and improvements.
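
For limited-risk systems such as chatbots, the transparency duty can be met with an unambiguous disclosure at the start of every interaction. A minimal hypothetical sketch follows; the wording and function names are illustrative, not mandated text.

```python
AI_DISCLOSURE = (
    "You are chatting with an automated assistant, not a human. "
    "Responses are generated by an AI system and may contain errors."
)

def start_chat_session() -> list[dict]:
    """Open a chat session with the AI disclosure shown before any user
    input, and keep it in the transcript for later review."""
    print(AI_DISCLOSURE)
    return [{"role": "system_notice", "text": AI_DISCLOSURE}]
```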

Fostering a culture of ethical AI use within the organisation is crucial across both categories. This includes training employees on AI ethics and legal requirements and establishing cross-functional teams to oversee AI governance, mitigate risks, and leverage AI's potential responsibly and ethically.
 

Figure 1: The risk-based approach of the EU AI Act

The AI Act: an industry perspective

Adopting a risk-based approach across a broad spectrum of applications and industries, the Act will inevitably impact the financial services industry, particularly in respect of the ‘high-risk’ provisions outlined above. Specific use cases would include:

  • Credit scoring and lending decisions. AI systems used to assess creditworthiness or make lending decisions could significantly impact individuals’ financial health and opportunities. These applications might be considered high-risk due to their potential to affect personal and economic rights (a simple fairness-monitoring sketch follows this list).
  • Fraud detection systems. While crucial for identifying and preventing fraudulent activities, fraud detection systems must be designed to ensure accuracy, fairness, and transparency, given their potential impact on customers’ access to financial services.
  • Investment and trading algorithms. AI-driven algorithms that make decisions on investments or trades can have significant implications for market stability and individual financial outcomes. The governance of such systems would be crucial to mitigate risks of market manipulation or unfair practices.
  • Insurance underwriting and claims processing. AI applications assessing risk or processing claims in the insurance industry affect customers' premiums and claim settlements, necessitating careful oversight to prevent discrimination and ensure fairness.
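
To illustrate the kind of fairness monitoring a credit-scoring system might call for, the sketch below computes a simple disparate-impact ratio across applicant groups. The data, group labels, and the idea of flagging low ratios (0.8 is a common rule of thumb) are hypothetical assumptions, not prescribed by the Act.

```python
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Approval rate per group from (group, approved) pairs."""
    totals: dict = defaultdict(int)
    approved: dict = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions: list[tuple[str, bool]]) -> float:
    """Lowest group approval rate divided by the highest; values well
    below 1.0 flag the model for closer review."""
    rates = approval_rates(decisions)
    return min(rates.values()) / max(rates.values())

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(f"{disparate_impact_ratio(sample):.2f}")  # 0.50 -> investigate
```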

Provisions on transparency, data governance, human oversight, and risk management will be particularly relevant to these and other AI applications within financial services. Organisations in the sector must conduct thorough assessments to classify their AI systems according to the risk framework and comply with the corresponding regulatory requirements.

While the Act aims to be technology-neutral and flexible across different sectors, its impact on financial services highlights the importance of aligning AI applications with ethical standards and legal obligations to protect consumers and maintain financial stability. As the Act progresses towards implementation, further guidance and standards specific to high-risk applications, including those in the financial services industry, are expected to clarify compliance expectations.
 
 

Implementation and next steps

Bringing the AI Act into effect across EU member states involves several steps:

Adoption and entry into force. The Council of the EU is expected to adopt the AI Act, following the European Parliament vote, by the end of June 2024; the Act will then be published in the Official Journal of the EU. Twenty days after publication, the Act enters into force and the transposition phase begins.

Transposition. Member states then have a period, generally two years, during which the Act's obligations phase in. Prohibitions on AI systems posing unacceptable risks apply first, six months after entry into force, and by the end of this period (mid-2026) the bulk of the Act's provisions will apply in full and member states must have aligned their national frameworks accordingly.

 

Figure 2: The EU AI Act – selected milestones

Closing thoughts

While the EU AI Act has been hailed for its proactive stance on AI governance, concerns have been voiced about its potential impact on innovation. Critics argue that the Act’s prescriptive regulations may stifle technological advancements and burden startups with compliance costs, hindering the bloc’s competitiveness in the global AI race. There is apprehension that, by prioritising regulation, the EU might fall behind in nurturing an environment conducive to cutting-edge AI research and development.

In summary, the EU AI Act represents a bold step towards ethical and regulated AI deployment. By establishing clear rules and principles, it aims to protect citizens and uphold democratic values. However, as noted, the balance between regulation and innovation remains a contentious issue. As the Act moves towards full implementation over the next few years, its impact on the AI landscape in Europe – and beyond – will be closely watched.
 

__________________________________________________________

References

1 https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html
2 https://www.europarl.europa.eu/RegData/etudes/ATAG/2023/745708/EPRS_ATA(2023)745708_EN.pdf

© Capco 2024, A Wipro Company