GDPR and AI Act: similarities and differences

Jure Globocnik

Guest author from activeMind AG

As digitalisation progresses and new technologies such as artificial intelligence (AI) emerge, the regulatory framework in the European Union (EU) is becoming increasingly important. Two of the most important regulations in this context are the General Data Protection Regulation (GDPR) and the AI Act, which regulates the use of AI in the EU.

There are both similarities and significant differences between the two pieces of legislation. This article ventures a comparison of their objectives, areas of application and regulatory approaches.

Similar objectives and principles

Both the GDPR and the AI Act aim to protect the fundamental rights and freedoms of individuals. While the GDPR is primarily concerned with the protection of privacy, the objectives pursued by the AI Act are somewhat broader: according to Art. 1 AI Act, it is intended to ensure a high level of protection of health, safety and the fundamental rights enshrined in the EU Charter, including democracy, the rule of law and environmental protection.

Territorial scope of application

Both the GDPR and the AI Act apply not only to companies established in the EU but also have a certain extraterritorial effect: both extend to non-European companies that operate on the EU market (market location principle).

Under the AI Act, this applies, for example, to providers that are not established in the EU if they place AI systems on the market or put them into service in the EU, as well as to providers and operators of AI systems that are established or located in a third country if the output produced by the AI system is used in the EU (Art. 2 para. 1 AI Act).

Material scope of application

While the GDPR exclusively governs the protection and processing of personal data, the scope of the AI Act covers the development, provision and use of AI systems and general-purpose AI models. The AI Act therefore applies even if no personal data is processed using AI.

Often, both the AI Act and the GDPR will apply to an AI system. However, there will also be cases in which only the GDPR or only the AI Act applies to an AI system.

  • The former will be the case, for example, where personal data is processed with an AI system that poses only minimal risk and therefore largely falls outside the requirements of the AI Act (e.g. a recommendation algorithm of a music streaming service or an AI-supported spam filter). Even in this case, however, companies must ensure that their employees have sufficient AI literacy in accordance with Art. 4 AI Act.
  • By contrast, only the AI Act applies to an AI-supported safety component in the water, gas, heat or electricity supply (as long as no personal data is processed).

Technology-neutral approach

Both legal acts pursue a technology-neutral approach. This means that they do not regulate a specific technology but are equally applicable to all technologies and methods, including those that have not yet been developed. For example, the provisions of the GDPR were drafted before the current rise of AI but apply equally to it.

This was not the case in the first draft of the AI Act: the European Commission had initially proposed that only certain AI techniques and approaches explicitly listed in the AI Act should be regulated. Under that approach, the AI Act would not have automatically applied to newly developed AI methods, although the Commission would have had the option of updating the list by means of delegated acts.

Risk-based approach of the AI Act

A key difference between the two sets of rules is how they deal with risk (the so-called risk-based approach). The GDPR takes risk into account only to a limited extent: it imposes increased requirements for the processing of sensitive data, and the technical and organisational measures under Art. 32 GDPR must be commensurate with the risk of the processing. Most of its requirements, however, apply regardless of the risk involved (e.g. legal bases, information obligations).

In contrast, the AI Act differentiates between risk levels and divides AI systems into four categories: prohibited, high-risk, limited-risk (AI systems posing a transparency risk) and minimal-risk systems. High-risk applications, such as AI systems used in human resources or law enforcement, are subject to strict requirements, while minimal-risk systems remain largely unregulated.

Transparency and accountability

Another principle common to both sets of rules is the requirement for transparency and accountability. In the GDPR, this is implemented through obligations such as the information obligations (Art. 13 GDPR) and the data subject's right of access (Art. 15 GDPR). Companies must also be able to demonstrate compliance with their obligations under the GDPR at any time (Art. 5 (2) GDPR).

The AI Act also emphasises transparency. For example, providers of high-risk AI systems must provide operators with instructions for use so that they can appropriately interpret and use the outputs of AI systems (Art. 13 AI Act). Further transparency requirements can be found in Art. 50 AI Act with regard to AI systems with a transparency risk.

Compliance with the requirements of the AI Act must also be demonstrable. The technical documentation under Art. 11 AI Act and the logs to be kept under Art. 12 AI Act serve this purpose.

Supervision of both legislations

Under the GDPR, the national supervisory authorities are responsible for enforcing the GDPR obligations. National supervisory authorities will also monitor compliance with the provisions relating to AI systems, although the EU member states have yet to decide whether this will be the relevant national data protection authority or another authority. While there are good arguments in favour of the data protection authorities, it is becoming apparent that in Germany the Federal Network Agency (Bundesnetzagentur) will take on this task.

The newly established AI Office, based at the European Commission, is responsible for enforcing the rules on general-purpose AI models.

Those who enjoy analysing the recommendations of the European Data Protection Board (EDPB) will be interested to learn that a similar body is being set up under the AI Act: the European Artificial Intelligence Board, in which all EU member states are represented, will act as a coordination platform and advisory body for the European Commission.

GDPR and AI Act fines

The structure of the maximum fines is similar in both sets of rules: a percentage of the company's global annual turnover in the preceding year and a fixed amount in euros are specified, with the higher of the two serving as the upper limit in the specific case. The possible sanctions under the AI Act are even higher than under the GDPR: up to 7% of annual turnover or EUR 35 million, compared with 4% or EUR 20 million under the GDPR.

The impact on innovation and the economy

Both the GDPR and the AI Act attempt to find an appropriate balance between the protection of fundamental rights and the promotion of innovation.

Even though this aspect is less pronounced in the GDPR, it does contain isolated provisions in this regard (such as the exemption from the obligation to keep records of processing activities for companies with fewer than 250 employees, which in practice is largely ineffective).

In contrast, the AI Act contains more provisions in this regard. For example, AI regulatory sandboxes are to be set up in which innovative AI systems can be developed, trained, validated and tested (Art. 57 et seq. AI Act). The AI Act also contains specific measures and exemptions for SMEs and start-ups (Art. 62 et seq. AI Act). SMEs and start-ups are also somewhat privileged when it comes to fines: for them, the lower of the two amounts (percentage of annual turnover or euro amount) serves as the upper limit for calculating fines.
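Purely as an illustration, the fine-cap mechanics described above can be sketched in a few lines of code. This is a deliberately simplified model (the function name and parameters are hypothetical, and the actual penalty tiers, conditions and turnover definitions are set out in Art. 83 GDPR and Art. 99 AI Act):

```python
def fine_cap(annual_turnover: int, pct: int, fixed_amount: int,
             is_sme: bool = False) -> int:
    """Maximum possible fine in euros for a given penalty tier.

    The cap is either a percentage of global annual turnover or a fixed
    euro amount. For most companies the higher amount applies; under the
    AI Act, SMEs and start-ups benefit from the lower one.
    """
    turnover_based = annual_turnover * pct // 100  # percentage share in euros
    if is_sme:
        return min(turnover_based, fixed_amount)
    return max(turnover_based, fixed_amount)


# Highest AI Act tier: up to 7% of global annual turnover or EUR 35 million
print(fine_cap(1_000_000_000, 7, 35_000_000))             # 70000000
print(fine_cap(100_000_000, 7, 35_000_000))               # 35000000
print(fine_cap(100_000_000, 7, 35_000_000, is_sme=True))  # 7000000
```

As the last two calls show, for a company with EUR 100 million turnover the fixed amount of EUR 35 million forms the cap, whereas an SME with the same turnover benefits from the lower, turnover-based amount.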

Conclusion

Both the GDPR and the AI Act aim to ensure the protection of fundamental rights. While the GDPR emphasises the protection of privacy, the AI Act takes a broader approach and also covers, for example, environmental protection and the protection of the rule of law.

The provisions of the AI Act often resemble those of the GDPR, and it is clear that the GDPR, alongside the EU's product liability legislation, served as a blueprint for the AI Act. However, while the GDPR was built on years of experience with national data protection law, the AI Act enters uncharted legislative territory, and it remains to be seen how the regulation of such a rapidly advancing technology can be implemented in practice.

The GDPR and the AI Act have many basic principles in common, such as transparency and accountability. However, they differ in their material scope of application and in the way they deal with risks. The risk-based approach is much more pronounced in the AI Act than in the GDPR.

Both sets of rules will be decisive for how Europe masters the digital transformation and the promotion of innovation in a digital future.
