BLOGPOST / THE ADEQUACY OF THE GDPR REGARDING AUTOMATED DECISION-MAKING IN EMPLOYMENT RELATIONSHIPS

Mark Bauer and Katja Svetina are both students at the Faculty of Law of the University of Ljubljana; Katja was also an exchange student at the University of Groningen in 2023.

THE CONTEMPORARY NEED TO PROTECT EMPLOYEES’ PERSONAL DATA

The information society is constantly confronted with new technological and digital challenges that shape the modern legal landscape, making the protection of personal data a key legal issue. The collection, storage, organisation, dissemination, and erasure of personal data constitute a “necessary evil” of our modus operandi in many areas of modern life – perhaps most clearly and explicitly in the field of employment relations. The processing of personal data is a fundamental element of establishing and maintaining the employment relationship,[1] which places employers in the role of so-called data controllers[2] and employees in the position of data subjects.

The protection of personal data in the European Union (EU) is predominantly governed by the General Data Protection Regulation (GDPR),[3] whose adequacy and effectiveness are in places questionable in light of the specificities of the employment relationship. This relationship inherently involves a power imbalance and an information asymmetry, which exposes employees to certain risks in their position as data subjects – risks that are exacerbated by digitisation and the ever-evolving development of modern technology. Employment law differs in nature from, for example, consumer law, which raises the question of how data protection law could be better adapted to it. In this article, we will highlight those provisions of Article 22 of the GDPR that are too general or incomplete to ensure effective protection of employees’ personal data in the context of automated decision-making. In view of the introduction of new technologies into employment relationships, we will also try to identify the reasons for this incomplete regulation and present some possible solutions.

AUTOMATED DECISION-MAKING: IS THE EXISTING REGULATION OF ARTIFICIAL INTELLIGENCE UNDER ARTICLE 22 OF THE GDPR EFFECTIVE?

Automated decision-making is closely connected with the term algorithmic management, which, in the context of labour law, comprises decisions related to the allocation of duties and responsibilities, the instructions employees receive, the surveillance conducted by the employer, and various employee assessments.[4] As the name suggests, these decisions are made by algorithms powered by artificial intelligence (AI), which has become an essential and ever-present part of modern society; the field of labour is no exception. The introduction of AI into legal settings, such as the employer-employee relationship, holds great potential to make decision-making more rapid and efficient, even promising the elimination of human error, which must otherwise always be reckoned with – provided, of course, that the algorithms are implemented correctly.[5]

Despite its positive impact on the efficiency of decision-making, the use of AI in labour settings poses certain risks for the subject being managed in this way – the employee. It is widely recognised that algorithms can render (more or less) accurate decisions only when they can base them on extensive datasets (i.e., Big Data).[6] These datasets can also include personal data as defined by the GDPR. The GDPR pays particular attention to automated decision-making, with Article 22 attempting to regulate the issue. It states that “the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”,[7] which, on its face, grants the data subject an individual right not to be subject to such processing.

Article 22 of the GDPR is characterised by its vague phrasing.[8] There has been debate about whether the European legislator intended the provision to act as a general prohibition (which applies whether or not the data subject objects to such processing) or as a right that is activated only when invoked by the data subject. The Article 29 Data Protection Working Party (WP29)[9] concluded that, although the Article articulates its meaning through the term ‘right’, it should be read as a general prohibition. This interpretation provides automatic protection of the data subject and strengthens the overall control individuals have over their data.[10] Although the majority tends to follow this opinion, there are arguments in favour of reading the provision as an individual right to be invoked. For example, the use of the word ‘right’ does imply that the legislator did not intend a ban, especially considering that the (contextually similar) Law Enforcement Directive prohibits automated decision-making explicitly and unambiguously.[11] Interpreting the two provisions similarly would ensure a more coherent system of data protection in the European Union.[12] Nonetheless, the highest interpretative authority on European law, the Court of Justice of the European Union, put an end to the dilemma when it decided the SCHUFA case (C‑634/21) in December 2023.[13] The Court held that the provision “lays down a prohibition in principle, the infringement of which does not need to be invoked individually by such a person”, thereby clearly treating it as a general prohibition.

Furthermore, it is important to stress the issue of the applicability of the first paragraph of Article 22 and to question the range of situations it actually covers. Labour law, as a network of legal relationships, is specific in that the balance of power between the parties is a priori unequal, given the (primarily economic) dependence of the employee on the employer.[14] This makes it difficult to understand why the scope of the Article is so narrowly defined and subject to a series of exceptions that significantly weaken the original provision.

As Article 22 is articulated, the data subject has the right not to be subject to a decision based solely on automated processing.[15] The formulation uses the term ‘solely’, which problematically narrows the scope of protection: the data subject is protected only from automated processing that involves no human intervention whatsoever.[16]

Is it therefore possible to extend the material scope of protection to decisions that rely on automated processing but were, in the end, still made by a human (also referred to as augmented decision-making)?[17] It is easy to imagine a scenario in which an employer decides who receives a promotion on the basis of automatically generated performance data. The question then arises of how much human involvement is necessary to prevent a decision from being classified as automated. According to the interpretation of the European Data Protection Board (EDPB), any oversight by an individual with the authority to alter the decision counts as human intervention, provided it is not merely a token gesture.[18] Consequently, it is reasonable to conclude that augmented decision-making, in which AI systems merely supplement rather than fully supplant the employer’s decision-making, will generally suffice to preclude the application of Article 22(1) of the GDPR.[19]

Article 22 is subject to several exceptions, as are a number of other provisions of the GDPR. In the context of labour law, the most concerning is the clause set forth in point (c) of the second paragraph, which allows the first paragraph to be circumvented where the data subject has given explicit consent.[20] Consent is interpreted as “any freely-given, specific, and informed indication of a data subject’s wishes by which he or she signifies his or her agreement to personal data relating to them being processed”.[21] While consent is otherwise a commonly used legal basis for processing personal data under the GDPR, obtaining legitimate consent from employees is generally problematic. The inherent imbalance of power and the information asymmetries in the employer-employee relationship[22] often result in the employee’s dependence on the employer, raising concerns that withholding consent could have detrimental consequences for the employee.[23] The potential for coercion and peer pressure within the workplace further undermines the validity of such consent.[24] As a result, employee consent can rarely be considered freely given and therefore does not, as a rule, constitute a valid legal basis for data processing.[25] Although the use of consent as a legal basis is generally discouraged for employers, it is not entirely prohibited: employers may rely on it in exceptional circumstances, provided it can be clearly and definitively established that the employee’s consent was specific, informed, and freely given.[26]

Effective redress can also be problematic for workers affected by automated decision-making.[27] This can be attributed to the lack of transparency of the basis for the employer’s decision and to insufficient clarity of the decision itself, i.e., of the reasons and/or data on which the AI algorithm based it. Under Articles 13 and 14 of the GDPR, the employer is generally obliged to inform the employee that they are subject to automated decision-making and to provide all other relevant information regarding the processing of their personal data. Here, the Article 29 Working Party underlines that the employer should explain to the employee, in a concise and comprehensible way, how the AI system works and the rationale behind its decision, which the employee could then use in a possible objection.[28] However, this runs into the problem of the “explainability” of AI algorithms. Users (employers) often perceive their AI systems as a competitive advantage or a trade secret and are therefore reluctant to disclose clearly how they work.[29] Moreover, legal scholarship is divided on this issue: some advocate a full view into the inner workings of the “black box” of AI algorithms, while others favour an approach in which only the more intelligible reasons for the algorithm’s (and thus the employer’s) decision (e.g., level of education) are presented to the worker.[30]

ARTIFICIAL INTELLIGENCE ACT AS A “PANACEA”?

Much hope for better regulation and protection of workers’ personal data lies in the Artificial Intelligence Act (AI Act),[31] which aims, upon entering into force, to establish a unified and overarching framework for the so-called “European” approach to artificial intelligence.[32] The AI Act has garnered particularly close attention from legal experts specialising in labour law, as it, among other things, contains specific provisions for employment relationships. According to recital 57 of the AI Act, AI systems used in the context of employment relationships should be classified as high-risk, since they may have an appreciable impact on the future career prospects and livelihoods of the workers concerned. The proposed regime is based on the risk management system defined in Article 9 of the AI Act, which consists of “a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system”.[33] It establishes an exhaustive list of procedural requirements to which the providers of such systems must adhere, chief among them the obligation to carry out an internal assessment of the risks that may arise from the use of a high-risk AI system. However, in accordance with Article 43 of the AI Act, no participation of a notified body – i.e., an independent conformity assessment body[34] – is required when this assessment is carried out internally, which significantly reduces the effectiveness of the system in light of the protection of workers’ personal data.

The AI Act undoubtedly establishes some important protection mechanisms, such as increased transparency, human oversight, and internal control. It imposes strict standards and restrictions on providers and producers (i.e., developers) of AI to prevent the design of harmful AI tools that could be sold to employers (i.e., deployers).[35] In this sense, the primary focus of the AI Act is on regulating the developers of AI rather than its end users.[36] This focus is evident in the Act’s overarching aim to promote “trustworthy” AI, a principle deeply embedded in its conceptual design.[37]

On the other hand, however, the AI Act also imposes significant obligations on employers, addressing concerns related to the protection of fundamental rights.[38] Specifically, Article 27 of the AI Act requires certain employers (i.e., deployers, most notably public bodies and private entities providing public services) to carry out a Fundamental Rights Impact Assessment (FRIA) prior to deploying high-risk AI systems.[39] This assessment includes a detailed analysis of potential impacts on fundamental rights, ensuring that employers consider these impacts when integrating AI systems into their workplace environments and workflows, thereby protecting employees’ rights from the outset.[40]

Moreover, under Article 86 of the AI Act, employees or any other individuals subject to AI decisions “have the right to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure”.[41] This provision requires employers/deployers to explain the functioning of AI systems to those affected. For instance, if an AI system is used to monitor employee productivity or make hiring decisions, the employer must inform employees about how the system works, what data it collects, and how it impacts their roles. This ensures that individuals affected by AI decisions are adequately informed about the operation of these systems and their potential implications.

Legal scholarship nevertheless warns of a potential pitfall of the AI Act: it could end up acting as a kind of regulatory “ceiling”, preventing the application of stricter provisions of national law and thus effectively lowering the standard of protection.[42] This stems from the fact that the AI Act contains no clause comparable to Article 88 of the GDPR, which explicitly empowers Member States to regulate the protection of personal data in the employment relationship more thoroughly through national legislation.[43] Consequently, the Act could trigger deregulation in the field of employment across Europe.[44]

CLOSING REMARKS

Recent discourse has underscored the inevitability of radical changes arising from the implementation of algorithmic (AI) technologies in the workplace. While the GDPR offers a framework that could potentially be interpreted with the nuances of the employment relationship in mind, it falls short of providing adequate safeguards for the personal data of employees, particularly in the context of digital transformation.

The GDPR is well drafted in terms of the breadth of its purpose; its weakness, however, lies precisely in its generality – it is applicable to all areas but to none specifically. Hence, while generality can be a strength of the GDPR, it is ironically also its principal limitation, as the Regulation exhibits significant shortcomings in addressing the specificities of employment relationships. Article 22 of the GDPR, which governs automated decision-making, is particularly inadequate in this regard: its narrow scope and numerous exceptions fail to account for the power imbalances inherent in the employer-employee relationship. The use of AI in employment contexts, such as hiring, performance evaluation, and promotion, often involves complex and opaque algorithms that employees may not fully understand or be able to contest. Moreover, the notion of consent within the GDPR framework is problematic when applied to employment relationships: the inherent imbalance of power and the potential for coercion make it difficult to obtain genuine consent from employees. Consequently, reliance on employee consent as a legal basis for data processing is fundamentally flawed.

The forthcoming AI Act holds promise for addressing some of these deficiencies. It aims to establish a more robust framework for transparency, human oversight, and risk management in AI usage, particularly in high-risk contexts such as employment. Notably, it mandates that certain employers conduct a Fundamental Rights Impact Assessment (FRIA) before deploying high-risk AI systems and ensures that employees are informed about how these systems function and what they imply. Such measures could meaningfully enhance the protection of employees’ personal data. However, the AI Act is not without potential pitfalls. There is concern that it may act as a regulatory ceiling, preventing the application of stricter national laws and thus lowering the standard of protection. Unlike the GDPR, the AI Act does not explicitly empower Member States to enact more detailed national legislation for the protection of personal data in employment, potentially leading to a deregulatory effect.

In conclusion, while the GDPR provides a foundational framework for data protection, it is insufficiently tailored to the nuances of employment relationships. The AI Act promises to address some of these gaps, but its effectiveness will depend on its implementation and its interplay with national laws. A comprehensive and nuanced approach is necessary to ensure that employees’ rights are adequately protected in the age of AI and digital transformation.

BIBLIOGRAPHY

Adrienn Lukács and Szilvia Váradi, ‘GDPR-Compliant AI-Based Automated Decision-Making in the World of Work’ (2023) 50 Computer Law & Security Review.

Aislinn Kelly-Lyth, ‘Algorithmic Discrimination at Work’ (2023) 14 European Labour Law Journal.

Article 29 Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679. Adopted on 3 October 2017.

Article 29 Working Party, Opinion 2/2017 on Data processing at work. Adopted on 8 June 2017.

Claudia Ogriseg, ‘GDPR and Personal Data Protection in the Employment Context’ (2017) 3 Labour & Law Issues.

Halefom Abraha, ‘Regulating Algorithmic Employment Decisions through Data Protection Law’ (2023) 14 European Labour Law Journal.

Hana Šerbec and Aljoša Polajžar, ‘Predlog Akta o Umetni Inteligenci in Vpliv Na Delovna Razmerja’ (2022) 41 Pravna praksa: časopis za pravna vprašanja.

Heidi Waem, Jeanne Dauzier, and Muhammed Demircan, ‘Fundamental Rights Impact Assessments under the EU AI Act: Who, What and How?’ (Technology’s Legal Edge: A Global Technology Sector Blog, 2024) <https://www.technologyslegaledge.com/2024/03/fundamental-rights-impact-assessments-under-the-eu-ai-act-who-what-and-how/>.

Klemen Kraigher Mišič (ed) and others, Uvod v Varstvo Osebnih Podatkov (1st edition, Lexpera, GV založba 2024) <https://plus.cobiss.net/cobiss/si/sl/bib/prflj/176388867>.

Luca Tosoni, ‘The Right to Object to Automated Individual Decisions: Resolving the Ambiguity of Article 22(1) of the General Data Protection Regulation’ (2021) 11 International Data Privacy Law.

Nataša Pirc Musar (ed) and others, Komentar Splošne uredbe o varstvu podatkov (The Official Gazette of the Republic of Slovenia 2020).

Sandra Wachter, Brent Mittelstadt, and Chris Russell, ‘Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR’ (2018) 31 Harvard Journal of Law & Technology.

Sara Bagari, ‘Risks of Introducing Artificial Intelligence into Employment Relationships and Possible Legal Solutions’ (2023) 23 Employees & Employers.

The German Trade Union Confederation, ‘The German Trade Union Confederation’s Position on the EU Commission’s Draft of a European AI Regulation’ (2021) <https://www.dgb.de/downloadcenter/++co++9341cf1a-5107-11ec-9432-001a4a160123>.

LEGAL ACTS AND CASE LAW

Charter of Fundamental Rights of the European Union (2010/C 83/02).

European Parliament legislative resolution of 13 March 2024 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts (COM (2021)0206 – C9-0146/2021 – 2021/0106(COD)).

OQ v Land Hessen (intervener: Schufa Holding AG) [2023] ECLI:EU:C:2023:957, Case C-634/21.

Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance).


[1] Klemen Kraigher Mišič, in: Klemen Kraigher Mišič (ed) and others, Uvod v Varstvo Osebnih Podatkov (1st edition, Lexpera, GV založba 2024) <https://plus.cobiss.net/cobiss/si/sl/bib/prflj/176388867>, page 173.

[2] Article 4(7) of the GDPR only refers to the “controller”, but for the sake of clarity we will use the term “data controller” throughout the article.

[3] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance).

[4] Sara Bagari, ‘Risks of Introducing Artificial Intelligence into Employment Relationships and Possible Legal Solutions’ (2023) 23 Employees & Employers, page 222.

[5] Article 29 Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679. Adopted on 3 October 2017, page 13.

[6] Adrienn Lukács and Szilvia Váradi, ‘GDPR-Compliant AI-Based Automated Decision-Making in the World of Work’ (2023) 50 Computer Law & Security Review, page 5.

[7] Article 22(1) of the GDPR. The Article also regulates profiling, but that will not be the focus of this article.

[8] Klemen Kraigher Mišič, in: Nataša Pirc Musar (ed) and others, Komentar Splošne uredbe o varstvu podatkov (The Official Gazette of the Republic of Slovenia 2020), pages 411–419.

[9] This working party was established by Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data. When the GDPR became applicable on 25 May 2018, it was replaced by the European Data Protection Board (EDPB).

[10] Guidelines on Automated individual decision-making and Profiling (n 5), pages 19–20.

[11] Luca Tosoni, ‘The Right to Object to Automated Individual Decisions: Resolving the Ambiguity of Article 22(1) of the General Data Protection Regulation’ (2021) 11 International Data Privacy Law, pages 8–10.

[12] ibid, page 14.

[13] OQ v Land Hessen (intervener: Schufa Holding AG) [2023] ECLI:EU:C:2023:957, Case C-634/21.

[14] Article 29 Working Party, Opinion 2/2017 on Data processing at work. Adopted on 8 June 2017, page 4.

[15] See Article 22(1) of the GDPR.

[16] Klemen Kraigher Mišič, in: Pirc Musar (ed) and others (n 8), pages 411–419.

[17] Lukács and Váradi (n 6), pages 3–4.

[18] Klemen Kraigher Mišič, in: Pirc Musar (ed) and others (n 8), pages 411–419.

[19] Aislinn Kelly-Lyth, ‘Algorithmic Discrimination at Work’ (2023) 14 European Labour Law Journal, page 169; Halefom Abraha, ‘Regulating Algorithmic Employment Decisions through Data Protection Law’ (2023) 14 European Labour Law Journal, page 179.

[20] Article 22(2)(c) of the GDPR: paragraph 1 shall not apply if the decision “is based on the data subject’s explicit consent”.

[21] Opinion 2/2017 on Data processing at work (n 14), page 6.

[22] Claudia Ogriseg, ‘GDPR and Personal Data Protection in the Employment Context’ (2017) 3 Labour & Law Issues.

[23] Opinion 2/2017 on Data processing at work (n 14), pages 6–7.

[24] Kraigher Mišič, in: Kraigher Mišič (ed) and others (n 1), page 173.

[25] Opinion 2/2017 on Data processing at work (n 14), pages 6–7.

[26] ibid, page 7.

[27] Lukács and Váradi (n 6), page 8.

[28] Guidelines on Automated individual decision-making (n 5), page 26.

[29] Lukács and Váradi (n 6), page 9.

[30] Sandra Wachter, Brent Mittelstadt, and Chris Russell, ‘Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR’ (2018) 31 Harvard Journal of Law & Technology, pages 880–883.

[31] European Parliament legislative resolution of 13 March 2024 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts (COM (2021)0206 – C9-0146/2021 – 2021/0106(COD)).

[32] Hana Šerbec and Aljoša Polajžar, ‘Predlog Akta o Umetni Inteligenci in Vpliv Na Delovna Razmerja’ (2022) 41 Pravna praksa: časopis za pravna vprašanja, page 11.

[33] Article 9(2) of the Artificial Intelligence Act.

[34] This was especially pushed for by the German Trade Union Confederation (DGB). See The German Trade Union Confederation, ‘The German Trade Union Confederation’s Position on the EU Commission’s Draft of a European AI Regulation’ (2021) <https://www.dgb.de/downloadcenter/++co++9341cf1a-5107-11ec-9432-001a4a160123>.

[35] Šerbec and Polajžar (n 32), page 13.

[36] Lukács and Váradi (n 6), page 12.

[37] ibid.

[38] See mainly Articles 7 and 8 of the Charter of Fundamental Rights of the European Union (2010/C 83/02).

[39] Article 27 of the Artificial Intelligence Act.

[40] Heidi Waem, Jeanne Dauzier, and Muhammed Demircan, ‘Fundamental Rights Impact Assessments under the EU AI Act: Who, What and How?’ (Technology’s Legal Edge: A Global Technology Sector Blog, 2024) <https://www.technologyslegaledge.com/2024/03/fundamental-rights-impact-assessments-under-the-eu-ai-act-who-what-and-how/>.

[41] Article 86(1) of the Artificial Intelligence Act.

[42] Lukács and Váradi (n 6), page 12.

[43] Bagari (n 4), pages 236–237.

[44] ibid.
