The AI Act: The EU’s serial digital overregulation

In the name of protecting consumers, the European Union has rushed to regulate the development of Artificial Intelligence, putting Europe’s competitiveness at risk. 

Brussels, Dec. 9, 2023: EU policymakers proudly announce an agreement intended to set a global benchmark for regulating AI. Concern over Europe’s competitiveness in innovation was notably absent. © Getty Images

In a nutshell

  • The AI Act has received much support and is being phased in across the EU
  • The regulation of nascent AI technology is arguably premature
  • Longer-term impacts of digital overregulation can impair Europe’s growth

The triad is complete: After the Digital Markets Act and the Digital Services Act, the European Union passed regulations on Artificial Intelligence (AI). Once again, the EU has proven to be a serial regulator. And once again, the world’s largest regulatory body has put innovation at risk.

Ironically, the EU’s newest attempt at regulating the digital sphere has a digital presence: its website allows readers to explore the AI Act, including by using AI. According to the EU, the act is a regulation that aims to harmonize AI rules across the union. It was passed by the European Parliament on March 13, 2024, approved by the EU Council on May 21, 2024, and entered into force on August 1, 2024. Its provisions, however, take effect in stages, with transition periods of varying length.

As usual, society at large, the business community and politicians in all EU member states were enthusiastic about another piece of regulation. They praised the act’s fine-grained approach to protecting consumers and democracy against the risks of AI. Little thought was given to the regulation’s long-term implications, especially for innovation and the welfare of society.

A summary of the AI Act

The regulation outlines four categories of risk that AI may pose. The first is “unacceptable risk,” which is prohibited outright; examples include social scoring systems and manipulative AI. The second is “high risk,” which is thoroughly regulated and takes up most of the act. The third, “limited risk,” covers applications such as chatbots and is subject to lighter transparency obligations. Finally, “minimal risk,” covering systems such as spam filters or video games, is left unregulated. General Purpose AI (GPAI) is dealt with separately, under its own set of obligations, as described below.
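
To make the tiered structure concrete, here is a minimal sketch in Python. The category assignments mirror the examples named above and are illustrative only, not an official classification; in practice, a system’s tier depends on its concrete use case.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "thoroughly regulated"
    LIMITED = "lighter transparency obligations"
    MINIMAL = "unregulated"

# Illustrative assignments based on the examples the act itself names.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "video game": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} ({tier.value})")
```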

Most obligations fall on providers and developers of high-risk AI systems. They must abide by EU regulations regardless of whether they are based in the EU or a third country. Third-country providers whose high-risk AI systems’ output is used in the EU must also comply with the act.

What do providers of General Purpose AI need to do? According to EU regulations, they must provide technical documentation and user instructions, comply with the Copyright Directive and publish a summary of the content used for training their models. Providers of free and open-license GPAI models are only required to comply with copyright regulations and publish a training data summary unless their model poses a systemic risk. All providers of GPAI models that present a systemic risk – whether open or closed – must also conduct model evaluations, perform adversarial testing, track and report serious incidents and ensure cybersecurity protections are in place.
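
The conditional logic of these obligations can be summarized in a few lines. The sketch below is one hedged reading of the text above, with hypothetical function and variable names; it is not an official compliance tool.

```python
def gpai_obligations(open_license: bool, systemic_risk: bool) -> list[str]:
    """One hypothetical reading of GPAI provider duties under the AI Act."""
    # Baseline duties that apply to every GPAI provider.
    duties = [
        "comply with the Copyright Directive",
        "publish a summary of training content",
    ]
    # Free and open-license models are exempt from the documentation
    # duties -- unless they pose a systemic risk, per the text above.
    if not open_license or systemic_risk:
        duties += [
            "provide technical documentation",
            "provide user instructions",
        ]
    # Systemic-risk models, open or closed, carry the heaviest duties.
    if systemic_risk:
        duties += [
            "conduct model evaluations",
            "perform adversarial testing",
            "track and report serious incidents",
            "ensure cybersecurity protections",
        ]
    return duties

# An open-license model without systemic risk owes only the baseline duties.
print(gpai_obligations(open_license=True, systemic_risk=False))
```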

Facts & figures

Types of systemic risk

Open systemic risk pertains to open-source AI models that are considered to present systemic risks due to their capabilities and the scale at which they operate. 

Closed systemic risk refers to proprietary or closed AI systems that also exhibit systemic risks but are not open-source.

The EU’s AI Act seems to be about product safety intertwined with copyright regulation. Many aspects of its implementation have not yet been developed. Moreover, its dynamic component, a feature meant to let the EU keep pace with AI’s technological development without having to amend the law, remains at this stage a declaration of principle that has yet to be operationalized.

Deficiencies in the AI Act

The EU’s regulation presents several challenges at both practical and conceptual levels. The practical issues are evident: the act’s incompleteness deprives firms developing, implementing or using AI of legal certainty. Furthermore, it imposes numerous obligations throughout the AI deployment process, significantly increasing costs at various stages.

These additional regulatory requirements are likely to extend the time-to-market for many generative applications and considerably delay the launch of innovations categorized as higher risk. Consequently, this creates a threefold competitive advantage for providers and developers operating in jurisdictions with lower or no regulatory standards. They benefit from reduced costs, shorter timeframes and the ability to pursue innovative and unregulated approaches.

However, this competitive advantage comes with significant drawbacks. Providers and developers from other jurisdictions face restrictions that prevent them from deploying their applications in the EU. This burden manifests in two ways. First, it stymies the entire AI innovation ecosystem by effectively cutting off access to a substantial market. Second, it results in a loss for the EU market itself, as both the EU and its citizens will be deprived of AI-based products and services, ultimately diminishing the welfare of people within the EU.

Facts & figures

[Chart: Artificial Intelligence global market size projection]

This dynamic is worth spelling out: Heavy regulatory burdens could stifle innovation by discouraging startups and small- and medium-sized enterprises from developing and deploying new AI technologies. The fear of non-compliance and the potential financial penalties may deter entrepreneurs from entering the AI sector, ultimately slowing the overall pace of innovation within the EU.

The stringent regulations proposed in the AI Act could hinder the EU’s ability to compete with global leaders in AI, particularly the United States and China. Both countries have adopted more flexible and adaptive regulatory approaches that foster innovation while addressing associated risks. In contrast, the EU’s approach may slow down AI development and adoption, causing European companies to fall behind their international counterparts.

Regulation against innovation

The deficiencies of the EU’s AI Act extend beyond the practical issues mentioned above. They also reveal EU regulators’ lack of trust in, and understanding of, the technology. For instance, the categorization of AI into four risk levels suggests that the regulators may overlook both the current state of AI and its future developments. The approach is particularly ambitious given that the act does not provide a clear definition of AI beyond listing existing applications.

Even if it were possible to oversee the entire field – which it is not – the EU would need an intensional definition of AI. This would involve defining the concept by its inherent meaning or characteristics, and developing a deeper theoretical understanding of AI itself rather than just listing examples or instances that fall under that concept. 

Instead, the act provides an extensional definition by categorizing applications into risk levels. That highlights the EU’s limited understanding of the technology at hand. The EU regulates AI as if it were a product, with much of its framework conceptualized as product safety regulation.
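
The difference between the two kinds of definition can be made concrete in code. A minimal sketch follows; the criteria in the intensional predicate are purely illustrative assumptions, since the act offers no such predicate.

```python
# Extensional "definition": enumerate instances that count as AI.
# It stays silent on anything not yet on the list.
KNOWN_AI = {"chatbot", "spam filter", "large language model"}

def is_ai_extensional(system: str) -> bool:
    return system in KNOWN_AI

# Intensional definition: a predicate stating defining characteristics.
# These criteria are assumed for illustration, not drawn from the act.
def is_ai_intensional(learns_from_data: bool, acts_with_autonomy: bool) -> bool:
    return learns_from_data and acts_with_autonomy

print(is_ai_extensional("recommendation engine"))  # False: not on the list
print(is_ai_intensional(True, True))               # True: meets the criteria
```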

Product safety regulations are effective for single-purpose products, such as spam filters, where the operation and associated risks can be assessed. This is why spam filters are classified as minimal risk from the regulator’s perspective. However, General Purpose AI systems, such as Large Language Models and Generative AI like OpenAI’s ChatGPT, Meta’s Llama or Google’s Gemini, can be used for countless applications. This versatility makes it challenging to assess all potential risks and to create regulations that cover every possible use.

The AI Act attempts to address this issue by imposing a general obligation to avoid harm to the fundamental rights of humans. However, this approach is not very convincing. Legally, it is problematic because it opens the door to ongoing litigation and grants significant interpretative freedom to the judiciary.

Imagine if the Continental Congress in North America had attempted to regulate all uses of electricity and its applications in the 18th century. Electricity was just beginning to be systematically explored and applied, and a regulatory endeavor at that time would have been foolish. The EU’s AI Act is attempting something similar: to regulate a multi-purpose, versatile technology right at its inception.

Scenarios

With the EU’s AI Act now entering into effect and AI technology continuously evolving, three possible scenarios can be envisioned.

Best case: Ineffectiveness of the regulation

Under this scenario, the regulation remains in place but has little to no impact. This could occur if the requirements for modeling, risk assessment and transparency are minimal and easily manageable or standardized. Additionally, providers and developers might identify workarounds and loopholes within the act, further diminishing its effectiveness. The rapid advancement of AI – particularly in its diverse applications – could render the act obsolete within a short time. The likelihood of this scenario is hard to assess. If it does occur, however, it would represent the best outcome, as it would allow the continued development of AI within the EU.

Most probable: Accommodation of the act

Here, the AI Act is implemented, and AI providers and developers find various ways to accommodate the regulation. This may involve raising the prices of AI applications or offering versions of AI and its applications developed specifically for the EU market, often with limited capabilities. In this case, startups and small companies may be forced to withdraw from the EU AI market, leaving it to larger players who can better absorb the costs and scope of the regulation.

As a result, the EU will have access to only partial or second-best solutions. Without a solid domestic base for innovative AI development, the EU risks further eroding its competitive edge in this field. The likelihood of this scenario is relatively high, as the experiences with the Digital Markets Act and the Digital Services Act suggest that the EU is more interested in regulation than innovation. The EU may be willing to accept pared-down products and technology, and to forgo the presence of innovative firms, if it can retain its regulatory authority.

Worst case: Inability to accommodate EU law

In the worst-case scenario, the AI Act is fully implemented, leading providers and developers to prioritize markets outside the EU and effectively halting the deployment of AI and its applications within the region. This would bring AI-related innovation in the EU to a complete standstill, causing a significant drain of capital, talent and productive capabilities to areas with less stringent or no AI regulation. Providers, developers and adopters of AI alike would be cut off from the technology and its applications, stifling innovation altogether.

However, the likelihood of this scenario is rather low. The EU’s size as a market is substantial enough to incentivize at least some stakeholders to adapt to the regulation. Additionally, there are often technological means available to circumvent regulatory challenges. The very nature of AI may facilitate these alternative paths, making it less probable that all providers will withdraw from the EU market entirely.
