13th May 2021

Artificial intelligence regulation will be the next Big Tech flashpoint


On 21 April, the European Commission published a hotly anticipated proposal for a regulation governing artificial intelligence (AI), which plays a central role in the Commission’s ambitious European Strategy for Data.

While the regulation has a long road ahead before it’s finalised, businesses in all industries should be prepared for significant oversight in this space.

Given the complexity of AI and the number of member states and stakeholders involved, the draft regulation will not be adopted for some time. It is, however, likely to set the benchmark for future ethical AI legislation around the world.

AI responsibility

Advances in AI have become commonplace in many day-to-day products. Cars brake automatically, Google predicts what you want to search for, and Netflix recommends what to watch next.

The benefits of AI are evident, but there are growing concerns about the dangers it can create. Issues around ethics, potential economic losses, and risks to physical safety are among those that prompted the European Union to work on a regulatory framework.

The European Parliament has acknowledged that the current legal system lacks specific rules on liability for AI systems. It believes the capabilities and autonomy of AI technologies make it difficult to trace harmful outcomes back to specific human decisions. As a result, a person who suffers damage caused by an AI system generally cannot be compensated without proof of the operator’s liability.

This is indeed a problem. Current AI systems have, in several cases, shown severe limitations. Take, for example, the Amazon recruiting tool that discriminated against women, or the semi-autonomous Tesla car involved in a crash that killed two people.

Several Tesla cars have crashed while in “Autopilot” mode, which allows the car to drive semi-autonomously.

Regulating risky AI

The EU’s proposed framework follows a risk-based approach and differentiates the uses of AI according to whether they create an unacceptable risk, a high risk, or a low risk.

The risk is unacceptable if an AI system poses a clear threat to people’s safety and fundamental rights; such systems are prohibited outright. The European Commission cites as examples AI that manipulates human behaviour and systems that enable social-credit scoring, such as the scheme operated in China.

The Commission defines as high risk AI systems used in sensitive areas that affect safety or fundamental rights, such as critical infrastructure (e.g., road traffic and water supply), education and vocational training (e.g., AI systems that mark exams), safety components of products (e.g., robot-assisted surgery), and employment and recruitment (e.g., software that screens job applications).

AI systems that pose a low risk must instead comply with transparency obligations: users need to be aware that they are interacting with a machine or with machine-generated content. In the case of a “deepfake”, for example, where a person’s image or video is manipulated to resemble someone else, whoever deploys the system must disclose that the content has been manipulated. Likewise, customer-service chatbots will need to tell users that they are talking to a machine.

What happens now?

The Commission’s proposal is an important step towards AI regulation. The European Parliament and the member states will now have to adopt it, a process that will take time. Once adopted, the new legal framework will be directly applicable throughout the European Union.

The framework will have a strong economic impact on many individuals, companies, and organisations, as its reach could extend beyond the EU’s borders, affecting foreign tech companies that operate within the bloc.

The jurisdiction of the regulation covers providers of AI systems in the EU irrespective of where the provider is located, as well as users of AI systems located within the EU, and providers and users located outside the EU “where the output produced by the system is used in the Union”. This potentially extends the law’s reach to companies without a market presence in the EU that use AI systems to process data about EU citizens.

Google’s response to the EU’s proposed AI legislation has been tepid.

The proposal’s limitations

The regulation is vague on the information that must be disclosed to the people affected by AI systems.

The regulation requires that people be informed when they “interact with” an AI system or when their emotions or gender, race, ethnicity, or sexual orientation are “recognised” by an AI system.

People must be told when “deepfake” systems artificially create or manipulate material, but not in other cases. For instance, people do not need to be told when they are algorithmically sorted to determine eligibility for public benefits, a loan from the bank, education, or employment.

Overall, Big Tech emerges virtually unscathed from the new AI legislation, despite being both the object of widespread and growing concern over AI-driven algorithms and the home of most cutting-edge applied AI research.

The regulation does not treat as high risk the algorithms used in social media, search, online retailing, app stores, mobile apps, or mobile operating systems. It is possible that some algorithms used in ad tracking or recommendation engines might be prohibited as manipulative or exploitative practices, but this would be a decision for individual regulators, such as Ireland’s Data Protection Commissioner.

Big Tech responds

Since the legislative process began in 2020, Big Tech’s response has been mixed. Google has criticised the proposal, citing the harm it could do to the sector. Tesla’s Elon Musk has called for strong regulation of all organisations developing advanced AI. Facebook offered matter-of-fact approval, noting that “Facebook is aligned with the Commission’s goal of limiting regulation to those highest-risk AI uses that require it”.

However, as is usually the case with Big Tech, it is what these companies do below the parapet that could have the most impact on the future of AI regulation. This proposal, if anything, shows the influence of the Big Tech lobby: Facebook’s vocal support for online regulation, for example, has been coupled with intense private lobbying to ensure that whatever rules officialdom settles on suit Facebook Inc.

These apparent shortcomings will almost certainly be discussed and revisited as the legislation moves through the complicated and lengthy European legislative process.

About the author

Karl specialises in public and corporate affairs for clients operating in highly regulated environments. Prior to joining 360, Karl worked in public affairs and public relations with other communication consultancies. He has also worked for high-profile politicians in both the European Parliament and the Houses of the Oireachtas.
