Artificial intelligence (AI) can analyse large amounts of data in a very short time – and is therefore increasingly being used to support decision-making. This can be problematic in the case of automated decisions that have a legal effect on people, as well as when the training data is biased.
This raises questions of transparency, fairness and legal accountability in such AI systems.
Problem of automated decision-making (by AI)
In many areas, such as lending, healthcare and even legal processes, AI is used to support decision-making. Systems based on machine learning analyse large amounts of data to identify patterns and make predictions. These can then serve as the basis for important decisions, such as assessing a customer’s creditworthiness or diagnosing an illness based on medical image data.
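To make this concrete, the following minimal sketch (in Python with scikit-learn, using synthetic data and hypothetical features such as income, age and number of late payments) illustrates how such a scoring model might be trained on historical records and then output a probability value for new applicants. It is an illustration of the general technique, not the method of any particular provider.

```python
# Minimal sketch: training a scoring model on historical data
# (synthetic data, hypothetical features: income, age, late payments).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical historical records and outcomes (1 = repaid, 0 = defaulted).
X = rng.normal(size=(1000, 3))
y = (X @ np.array([0.8, 0.2, -1.5]) + rng.normal(size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The model outputs a probability value per applicant -- this is the "score"
# that may then serve as the basis for a decision.
scores = model.predict_proba(X_test)[:, 1]
print(scores[:5])
```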
AI offers the advantage of making decisions faster and more efficiently than humans alone could. However, this automation also leads to potential problems, especially when people are directly affected by the results of these automated processes.
A prominent example of this is the automated credit score calculation by Schufa, Germany’s largest private credit bureau. On 7 December 2023, the CJEU clarified that the automated calculation of a probability value (credit score) can be regarded as automated decision-making pursuant to Art. 22 (1) GDPR if this value constitutes a decisive basis for a credit institution’s decision on granting a loan.
Art. 22 GDPR stipulates that individuals have the right not to be subject to a decision based solely on automated processing that produces legal effects concerning them or similarly significantly affects them.
The CJEU’s judgment raises the question of the extent to which automated systems such as Schufa’s scoring model can continue to be used without violating the GDPR. As Schufa scores contribute significantly to the decisions of banks, telephone providers, online retailers and landlords in many cases, the judgment may pose a threat to Schufa’s current business model.
For an automated decision to be permissible by way of exception, it must be necessary for entering into or performing a contract, the data subject must have given their explicit consent, or there must be a legal basis in national law that is compatible with European law. There is currently a debate as to whether § 31 of the German Federal Data Protection Act (BDSG), which governs the use of probability values (scores), is compatible with the GDPR. The CJEU has expressed considerable doubt as to whether this provision fulfils the requirements of Art. 22 (2) lit. b) GDPR.
Decision assistance vs. automated decisions
An important distinction must be made at this point between fully automated decisions and so-called decision assistance systems. While fully automated systems make a decision without human intervention, assistance systems support people in making a decision. In the latter case, the human still has the final decision-making authority and can check the result of the system and correct it if necessary.
The difference is legally relevant, as the prohibition of automated decision-making under Art. 22 GDPR does not apply if a human being is significantly involved in the decision-making process. However, the CJEU judgment discussed above made it clear that there must be genuine human involvement in the decision. It is not sufficient for a human to merely confirm the machine-generated suggestions without critically scrutinising them. In the absence of such genuine human involvement, the process could still be categorised as automated decision-making.
Such human review and correction of statistically derived probabilities, which is exactly what (AI-supported) decision assistance systems provide, is already established in many areas.
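The distinction can be illustrated with a minimal sketch (plain Python, with a hypothetical approval threshold and review queue): in the fully automated variant the score alone determines the outcome, while in the assisted variant the score is merely a recommendation and the final decision remains with a human reviewer.

```python
# Minimal sketch of the distinction (hypothetical threshold and review queue).

AUTO_THRESHOLD = 0.5

def fully_automated_decision(score: float) -> str:
    # Fully automated: the probability value alone determines the outcome;
    # this is the constellation addressed by Art. 22 GDPR.
    return "approve" if score >= AUTO_THRESHOLD else "reject"

def assisted_decision(score: float, review_queue: list) -> str:
    # Decision assistance: the score is only a recommendation; every case is
    # routed to a human reviewer who can confirm or override it.
    recommendation = "approve" if score >= AUTO_THRESHOLD else "reject"
    review_queue.append({"score": score, "recommendation": recommendation})
    return "pending human review"

queue: list = []
print(fully_automated_decision(0.37))   # decided by the machine alone
print(assisted_decision(0.37, queue))   # final decision remains with a human
```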
Problems of bias and discrimination
Another key problem when using AI in decision-making processes is bias. AI systems often reflect the biases or distortions present in the data used to train them. This can lead to certain groups in society being disadvantaged when AI systems make decisions about them.
A well-known example is discrimination in lending, where people from certain ethnic groups or from a certain neighbourhood could be disadvantaged by the scoring system.
To avoid such biases, experts are increasingly calling for more transparency in the development and application of AI systems. It is important that the training data is carefully selected and monitored to ensure that it is representative and does not contain any discriminatory patterns.
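One simple and common form of such monitoring is to compare a system's approval rates across groups. The following sketch (assuming pandas and hypothetical group labels and decisions) computes this demographic-parity-style gap on a handful of example decisions; a large gap is a warning sign, not proof of discrimination.

```python
# Minimal sketch of a bias audit: compare approval rates across groups
# (hypothetical data and column names).
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   0,   0,   1,   0],
})

# Approval rate per group and the gap between the best- and worst-treated group.
rates = decisions.groupby("group")["approved"].mean()
print(rates)
print("parity gap:", rates.max() - rates.min())
```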
Tip: Read the guide on dealing with bias in AI systems.
Ethical challenges in the development and use of AI
With the emergence of AI systems in sensitive areas, there is also a growing call for ethical reflection on these technologies. The question of how neutral and fair an AI system can actually be is at the centre of the discussion. Transparency is one of the most frequent demands: Developers of AI systems should disclose how their algorithms work, what data is used and how decisions are made.
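What such disclosure could look like in practice is sketched below for a simple linear scoring model (hypothetical features and weights): for each applicant, the contribution of every feature to the score can be listed, which is one basic way of showing how a decision was reached.

```python
# Minimal sketch of per-applicant transparency for a linear scoring model
# (hypothetical features, weights and applicant data).
import numpy as np

feature_names = ["income", "age", "late_payments"]
weights = np.array([0.8, 0.2, -1.5])    # hypothetical learned coefficients
intercept = -0.1                        # hypothetical learned intercept

applicant = np.array([1.2, 0.4, 2.0])   # one (standardised) applicant record

# Contribution of each feature to this applicant's score (log-odds).
for name, contribution in zip(feature_names, weights * applicant):
    print(f"{name:>15}: {contribution:+.2f}")
print(f"{'baseline':>15}: {intercept:+.2f}")
```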
However, this demand for transparency often conflicts with the interests of developers and companies that want to protect their algorithms as trade secrets. A balance must be found between these interests to ensure that AI systems are used ethically and responsibly.
Conclusion and outlook
Artificial intelligence offers enormous potential to speed up and improve decision-making processes. At the same time, however, its use raises important questions about data protection, accountability and fairness. In the area of automated decision-making, the impact of the Schufa judgment mentioned above will be crucial. It remains to be seen what the case law of the coming years and further developments regarding § 31 BDSG will bring.