The EU AI Act: an Impact Analysis (part 2) – Technologist

Legislative progress update – The AI Act has passed final parliamentary vote

Since part 1 of this series, the European Parliament’s committees on Internal Market and Consumer Protection (“IMCO”) and on Civil Liberties, Justice and Home Affairs (“LIBE”) endorsed the AI Act on 13 February 2024. On 13 March 2024, the European Parliament finally adopted the consolidated AI Act. Before the AI Act can be signed and published in the Official Journal of the European Union to become effective, the EU Council will have to formally adopt it at the ministerial level as well.

With this update in mind, and having provided an analysis of the AI Act’s core concepts in part 1, we will continue in this part with our analysis of the AI Act’s impact on businesses and how they can best prepare for compliance.

AI Act’s impact on your business

To ensure compliance and avoid regulatory risks, it is essential for businesses to swiftly and meticulously evaluate the impact of the AI Act – and to prepare for the transformation that AI will bring to every aspect of business operations. In this respect, preparations for the AI Act should be seen as part of the overall governance measures that a company must take to address the development and deployment of AI.

While the AI Act predominantly focuses on high-risk use cases, with supplementary rules for general purpose AI (“GPAI”), it would be wrong to focus solely on these types of systems. Companies should generally evaluate the use of any AI within their organization, and compliance also needs to be considered from the perspective of the existing legal framework, with privacy, IP, consumer protection and anti-discrimination laws applying to a broader range of systems.

Equally, the rapid development of AI technologies and their use in every organization underlines the need for robust governance structures in any company.

In this article, we will therefore examine how AI governance can be approached in practice.

AI Governance in any company

To ensure and be able to demonstrate compliance with the new requirements under the AI Act, the potential implications of the regulation will need to form part of every company’s overall AI governance program.

AI governance forms part of the digital governance that needs to be implemented within every business, which requires an interdisciplinary approach taking into account various factors, including legal, ethical, risk-management, strategic, practical and other considerations, and which overlaps, and is closely intertwined, with the organization’s data governance program.

We have outlined below a four step approach for building a robust AI governance program:

1. AI mapping & inventory

First of all, organizations should determine and document in a central AI inventory and repository which AI systems and models they have developed and/or deploy or use. The inventory should capture key information, such as the nature of the technology, intended purposes, types of outputs generated by the system, relevant data processed, and any third-party vendors involved.
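As an illustrative sketch only, one record in such an inventory might capture the fields mentioned above; all field names and the example entry are our own assumptions, not prescribed by the AI Act:

```python
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    """One record in a central AI inventory (illustrative field set)."""
    system_name: str
    technology: str               # e.g. "LLM-based chatbot", "image classifier"
    intended_purposes: list[str]
    output_types: list[str]       # e.g. "text", "risk score", "recommendation"
    data_processed: list[str]     # categories of data, incl. personal data
    third_party_vendors: list[str] = field(default_factory=list)
    responsible_owner: str = ""   # role accountable for lifecycle oversight

# Hypothetical example entry for demonstration purposes
inventory: list[AIInventoryEntry] = [
    AIInventoryEntry(
        system_name="CV screening assistant",
        technology="ML classifier",
        intended_purposes=["pre-selection of job applicants"],
        output_types=["ranking score"],
        data_processed=["applicant personal data"],
        third_party_vendors=["(hypothetical) HR-tech vendor"],
        responsible_owner="Head of HR Operations",
    )
]
```

A structured record like this also makes it straightforward to review and update entries programmatically as systems and use cases evolve.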

There should be a clear allocation of responsibilities for the oversight and management of the usage of each AI system during its lifecycle.

The documentation should also consider the territorial scope of application, the intended use cases and interfaces to other systems, and the relevant business relationships with AI suppliers or deployers (including documentation of contractual arrangements).

The necessary mapping and preparation also has a jurisdictional component, and requires identifying the relevant applicable legislation and regulatory guidance in the relevant territorial and material scope of application.

The AI mapping and inventory is a living repository, as AI systems, business relationships and use case scenarios constantly evolve over time. The AI Act, like other laws, classifies high-risk systems according to their intended use, so it is necessary to track the exact use of AI technology (which often, and not only in the case of GPAI, lends itself to a broad range of different uses within a company). Therefore, it is essential to implement appropriate procedures to review and update the inventory on a regular basis.

Even if a company does not use AI systems itself, it should be aware that its contract partners might do so. Accordingly, it should map out its business’s sensitivities and include appropriate safeguards within its vendor and business partner due diligence, to identify whether partners are using, or planning to use, AI and, if so, what guardrails they have implemented or are planning to implement.

2. Impact, compliance gap & risk analysis

The next steps include:

  • Applicability & Impact Analysis: Taking into account the AI mapping & inventory (step 1), this step requires assessing what laws and other relevant considerations are applicable to the products and services offered, deployed and/or received by your company, and how these laws and considerations impact business operations.
  • Compliance Gap & Risk Analysis: This step requires evaluating legal compliance gaps, identifying and rating relevant risks, and determining the compliance measures that will need to be implemented.

The Applicability & Impact Analysis involves assessing how the related legal and regulatory landscape for AI in the specific jurisdiction and industry (including the AI Act, sector-specific laws, and general laws, such as in the area of IP/trade secrets and data protection) applies to and impacts the specific business of your company.

Within the scope of the AI Act, this requires an appropriate classification of AI systems and scoping of intended use cases. To the extent the AI Act is applicable, businesses should assess which risk category and set of obligations their specific uses of AI fall into. As we laid out in part 1, the AI Act categorizes AI systems according to the risks of their capabilities and utilization, and allocates respective sets of obligations, with the risk categories being:

  • Unacceptable risk – use generally prohibited
  • High risk – set of extensive compliance obligations, including conformity assessment
  • Limited risk – limited obligations re transparency
  • Minimal risk – potential obligations under (voluntary) code of conduct

An additional set of obligations applies to providers of GPAI (in particular, where the AI triggers systemic risks).
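As an illustration only, the four risk tiers above could be modelled as a simple lookup table; the tier keys and obligation summaries below are our own shorthand, not language from the Act:

```python
# Illustrative shorthand mapping of AI Act risk tiers to obligation sets
RISK_TIERS = {
    "unacceptable": "use generally prohibited",
    "high": "extensive compliance obligations, incl. conformity assessment",
    "limited": "limited transparency obligations",
    "minimal": "potential obligations under voluntary codes of conduct",
}

def obligations_for(tier: str) -> str:
    """Return the shorthand obligation set for a given risk tier."""
    try:
        return RISK_TIERS[tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {tier!r}")
```

In practice the classification itself is the hard part and requires legal analysis of the system's intended use; a lookup like this only documents the outcome of that analysis.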

The Compliance Gap and Risk Analysis is an essential step in identifying and managing business risks.

  • The compliance gap analysis, as a systematic review process, enables the business to identify the difference between current practices and the compliance requirements of the relevant legal, regulatory and industry standards (and internal policies and compliance standards). It enables the organization to systematically identify the specific areas where it is not currently meeting requirements and take targeted actions  to ensure compliance (such as an action plan to revise internal policies, implement new controls or provide training).
  • The risk analysis enables the business to assess the likelihood and severity of the risks and consequences associated with any non-compliance identified in the gap analysis. It helps the business to prioritize the compliance gaps on a risk-adjusted basis and to start identifying appropriate mitigation measures efficiently. Potential compliance risks include, for example, regulatory enforcement (including fines and other financial penalties), reputational damage and potential disruption of business operations.
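The risk-based prioritization described above – scoring each gap by likelihood and severity, then addressing the highest-scoring gaps first – can be sketched as follows; the scoring scale and the example gaps are purely illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class ComplianceGap:
    description: str
    likelihood: int  # 1 (unlikely) .. 5 (almost certain)
    severity: int    # 1 (negligible) .. 5 (critical)

    @property
    def risk_score(self) -> int:
        # Simple likelihood-times-severity score for ranking purposes
        return self.likelihood * self.severity

# Hypothetical gaps from a compliance gap analysis
gaps = [
    ComplianceGap("missing transparency notice for chatbot", 4, 2),
    ComplianceGap("no conformity assessment for high-risk system", 3, 5),
    ComplianceGap("outdated vendor due-diligence questionnaire", 2, 2),
]

# Address the highest-scoring gaps first (risk-based prioritization)
prioritized = sorted(gaps, key=lambda g: g.risk_score, reverse=True)
for gap in prioritized:
    print(f"{gap.risk_score:2d}  {gap.description}")
```

Many organizations use a comparable likelihood/severity matrix in their wider enterprise risk management, which makes it easier to integrate AI compliance risks into existing reporting.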

3. Developing an AI strategy & governance program

Based on the findings from step 2, businesses should determine their overall business strategies on how to integrate the requirements and necessary compliance steps into their broader goals and values as well as already existing structures and procedures, and how to build an effective governance program that ensures compliance with legal requirements while supporting the business objectives.

Steps to consider are:

Determining organizational structure, roles and responsibilities. One of the current challenges for businesses is determining the appropriate organizational structure for building adequate AI governance within their organizations. The diverse nature of the various topics emerging when offering, deploying or using AI requires an interdisciplinary and cross-functional coordinated approach across the organization. This will involve developing the overall organizational framework, assigning new roles to certain positions or creating entirely new functions, assigning clear responsibilities to these roles, establishing reporting and coordination mechanisms, and setting up cooperation for each stage of the AI lifecycle and the different responsibilities connected to these stages.

Developing policy framework, standards and procedures. This includes determining the policies, standards, and procedures required within an organization not only to ensure, and be able to demonstrate, compliance with the legal requirements under applicable laws, including the AI Act, but also to achieve the relevant business objectives and to protect company interests and assets. This policy framework needs to be developed in light of various legal, ethical and other considerations, and be aligned with company policies and requirements in various other areas, such as data governance, data protection, intellectual property, protection of trade secrets, competition law, IT and cybersecurity, risk management, and various others.

Implementing technical measures and organizational procedures. This step requires establishing technical and organizational measures to ensure effective execution of the respective governance framework within the organization. In particular, the AI Act obliges, for example, providers of high-risk AI systems to implement technical measures concerning traceability, transparency, human oversight, data governance, cybersecurity and robustness. Organizational procedures must also be established. For high-risk AI systems, these procedures are required in particular in the context of risk management, technical documentation, quality management, conformity assessment, testing and monitoring, incident reporting and registration.

Furthermore, the governance program must be implemented internally in an effective, binding and enforceable manner. This requires, inter alia, management buy-in (tone from the top), mechanisms for ensuring the binding nature of policies, standards and procedures, appropriate training and human oversight, ensuring AI literacy, and regular monitoring and controls (including potential sanctioning of misconduct within the company).

In particular, appropriate training forms an essential component of an effective internal compliance program. The AI Act requires businesses to establish “AI literacy” among staff and other persons dealing with the operation and use of AI systems on their behalf. AI literacy refers to the skills, knowledge and understanding that allow providers, deployers and affected persons to make informed decisions relating to the development and deployment of AI systems, and to gain awareness of the opportunities and risks of AI and the possible harm it can cause.

Building and maintaining compliance documentation. To be able to demonstrate compliance with applicable legal requirements, international standards and internal policies, the AI governance program requires appropriate documentation of compliance measures and standards implemented within the organization. The AI Act, in particular, specifies certain documentation to be established and maintained by businesses falling under the scope of the Regulation.

In general, components of an appropriate documentation can entail:

  • AI policies (as appropriate to roles as provider or deployer), including the company’s general principles, standards and procedures for handling AI, and policies potentially covering various topics such as compliance guidelines for high-risk AI systems/GPAI, risk assessment, conformity assessment and quality management, AI developer guidelines, responsible use policies, content creation guidelines, rules for training and prompting AI, etc.;
  • AI risk assessments, which identify the likely risks arising from particular AI systems and outline how these risks will be appropriately mitigated;
  • AI asset inventory lists, including approved AI tools and use cases, model uses, and relevant data (sources), etc.;
  • AI standard notices and templates, including contract templates, transparency information, risk assessment/FRIA templates, AI playbooks, etc.;
  • AI repositories (general/high-risk AI/GPAI) and compliance measures documentation, including technical documentation, record-keeping/logs, risk assessments, and other documentation necessary to comply with the requirements for high-risk AI systems (such as conformity assessments and quality management systems) and GPAI, etc.;
  • AI vendor and business partner documentation, including third-party mapping, vendor due diligence questionnaires, vendor compliance audits, reviews, and certifications, third-party contracts, etc.;
  • AI awareness and training materials for employees, contractors and business partners, and relevant training records, etc.;
  • AI audit/review documentation, including the audit program and documentation on compliance audits, reviews and controls, third-party certificates, etc.

4. Audits, controls & monitoring

First, this means implementing regular audits and controls to review, update and improve the company’s governance program, including the effectiveness of policies, standards and procedures. 

Second, the regulatory landscape must be continuously monitored; new laws, jurisprudence, codes, regulatory guidance and good practice standards are emerging on all fronts and may bring new requirements and obligations which may require adapting and refining the company’s governance system.

5. Global context

Last but not least, businesses should be aware of the fact that at international level, the EU institutions will continue to work with multinational organizations, including the Council of Europe (Committee on Artificial Intelligence), the EU-US Trade and Technology Council (TTC), the G7 (Code of Conduct on Artificial Intelligence), the Organisation for Economic Co-operation and Development (“OECD”) (Recommendation on AI), the G20 (AI Principles), and the UN (AI Advisory Body), to promote the development and adoption of rules beyond the EU that will have to be aligned with the requirements of the AI Act.

Authored by Leopold von Gerlach, Martin Pflüger, Nicole Saurin, Stefan Schuppert, Jasper Siems, and Dan Whitehead.
