The world became a different place when OpenAI launched ChatGPT on November 30, 2022, making generative AI accessible to anyone in the world with an internet connection. More than a million people used it in the first five days. Although generative AI is not an overnight phenomenon, OpenAI has had a major impact on the industry and continues to do so to this day.

To date, numerous AI applications have been developed with multimodal capabilities, meaning the underlying models can process images, video, and text. ChatGPT is just one example of a very popular generative AI application. Enterprises are developing AI strategies and applying generative AI across a wide range of domains, including customer support, data consolidation and analysis, image generation, law, and medicine.

However, the use of AI in various contexts also poses significant risks to security and fundamental rights and raises questions about responsible use. The purpose of the European AI Act (EU AI Act) is to address and regulate the risks of specific uses of AI. The regulation aims to ensure that Europeans can trust the AI applications they use or interact with, while leaving room to harness their potential responsibly. The EU AI Act categorizes risk into four levels:

- Unacceptable risk
- High risk
- Limited risk
- Minimal risk

What the EU AI Act regulates and prohibits

Article 5 of the EU AI Act outlines the restrictions on the use of AI and prohibits certain applications. A definition is needed to distinguish AI from other, simpler software systems: an AI system is described as "a machine-based system that is designed to operate with varying degrees of autonomy and that may exhibit adaptability after deployment and that, for explicit or implicit purposes, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can affect physical or virtual environments" (OECD Recommendation on Artificial Intelligence, 2019).

The EU AI Act enumerates several prohibited AI systems, including biometric categorization systems (Art. 5(1ba) EU AI Act), and imposes strict limitations on the use of real-time remote biometric identification systems in publicly accessible spaces (Art. 5(1d) EU AI Act). In addition, the EU AI Act prohibits AI systems that exploit the vulnerabilities of a person or a specific group of persons due to their age, disability, or a specific social or economic situation (Art. 5(1)(b) EU AI Act).

 


Implications for M&A

M&A involves processing large amounts of data that must be analyzed and interpreted. Such data collection, consolidation, analysis, and interpretation is labor-intensive, even when supported by state-of-the-art technology. AI models are therefore well positioned to help M&A professionals and advisors work more efficiently and make better decisions.

Which AI systems can be used in the context of M&A? The EU AI Act lists several AI systems that "do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making" (Art. 6(2a) EU AI Act). According to White & Case, AI systems do not pose such risks if their intended use is limited to:

1. performing narrow procedural tasks;
2. improving the results of previously performed human activities;
3. identifying patterns of decision-making without replacing human judgment; or
4. merely preparing for a risk assessment.

 

Since early 2024, M&A teams can specifically benefit from so-called general-purpose AI (GPAI) models that can handle a variety of different tasks. These models require massive training datasets and are complex and costly to develop. Generative AI models such as those behind ChatGPT are general-purpose AI models. Other generative AI models that generate text and more include Meta's LLaMA, Baidu's Ernie, and Google's Bard.

Providers of general-purpose AI models are obliged to comply with certain requirements, including (i) making technical documentation available, including training and testing procedures, or providing this information to providers of AI systems that intend to build on the GPAI model; (ii) cooperating with the Commission and national competent authorities; and (iii) putting in place a policy to comply with EU copyright law (Art. 52c EU AI Act). Providers of GPAI models with systemic risk must additionally perform standardized model evaluations, assess and mitigate systemic risks, track and report incidents, and ensure cybersecurity protection (Art. 52d(1a), (1b), (2) and (1c) EU AI Act).

Conclusions

In our view, the EU AI Act strikes a fair balance by limiting certain applications and use cases while allowing the continued development and use of AI models. The EU AI Act prohibits the use of AI to exploit the vulnerabilities of individuals or groups, bans the use of biometric data to infer a person's race, sexual orientation, religion, or trade union membership, and restricts social scoring and the tracking of individual behavior.

However, while the EU AI Act provides a framework for the safe and responsible use of AI, the concerns around data confidentiality when AI models are used in M&A cannot be overstated. As these AI systems gain access to sensitive corporate data, there are legitimate questions about how this information is processed, stored, and protected. The risk of data breaches or unauthorized data sharing is pressing, as such incidents can jeopardize the integrity of the M&A process and the trust of all stakeholders involved.

In addition, there is the challenge of ensuring that AI algorithms, especially those based on large language models (LLMs), comply with the confidentiality agreements that govern M&A. While these models offer powerful data analysis capabilities and can significantly improve decision-making processes, their operations lack transparency, particularly in how they handle and use data. This opacity is especially concerning in M&A, where sensitive information, including proprietary contracts and strategic plans, is routinely processed.

The risk that these AI systems could inadvertently disclose confidential information during their analysis is a significant concern. Such disclosures could result from a number of factors, including the inherent complexity of AI models, which makes it difficult to fully predict or explain their data processing. This lack of clarity about the inner workings of AI models exacerbates the challenge of ensuring that sensitive data is not inadvertently disclosed or misappropriated.

In addition, monitoring and controlling the specific data that AI systems access and generate is particularly problematic. In M&A, where confidentiality is paramount, the inability to fully audit and control the flow of sensitive data through these systems poses a real danger: unintentional data leakage or misuse could violate the strict confidentiality clauses that underpin M&A transactions. Moreover, the lack of data deletion capabilities in many of these systems exacerbates these concerns, leaving stakeholders with doubts about the long-term security and privacy of their confidential information.
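To make the auditability gap concrete, consider the minimal sketch below of how a deal team could, in principle, redact and log what leaves its environment before any AI service sees it. This is an illustrative Python example, not a reference to any particular provider's tooling: the send_to_model function, the redaction patterns, and the log location are all hypothetical placeholders that a real deployment would replace with its own vetted equivalents.

```python
import hashlib
import json
import re
from datetime import datetime, timezone

# Hypothetical patterns a deal team might treat as sensitive.
# A real NDA-driven inventory would be far more complete.
SENSITIVE_PATTERNS = [
    re.compile(r"Project\s+\w+"),          # internal deal code names
    re.compile(r"\b\d{1,3}(,\d{3})+\b"),   # large monetary figures
]

def redact(text: str) -> str:
    """Replace sensitive spans with a neutral token before disclosure."""
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def audited_query(text: str, log_path: str = "ai_audit.log") -> str:
    """Redact, record a fingerprint of the outbound payload, then query the model."""
    payload = redact(text)
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Log a hash rather than the text itself, so the audit trail
        # does not become a second copy of the confidential material.
        "payload_sha256": hashlib.sha256(payload.encode()).hexdigest(),
        "chars_sent": len(payload),
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
    return send_to_model(payload)  # hypothetical client call

def send_to_model(payload: str) -> str:
    """Placeholder for whatever AI service the deal team has approved."""
    raise NotImplementedError("wire up your vetted provider here")
```

Even a thin layer like this gives counsel something the raw model does not: a verifiable record of what was disclosed, when, and in what form.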

The EU AI Act presents an opportunity for European and global M&A practitioners to establish a safety net for the responsible, fair, and sustainable use of AI in the industry. As M&A relies on trust in both people and data, safeguarding that trust is critical.

To discuss or explore M&A technology, including AI, with your M&A team, please contact me at mklawon@smartmerger.com.

Michael Klawon

CEO & Founder
