Legal Implications of Automated Emotion Recognition Systems

Current legal regulations do not offer real protection to employees against companies that use automated emotion recognition systems, which combine biometrics, algorithms and Artificial Intelligence (AI) to deduce or detect moods or intentions.

This is one of the conclusions of a book recently published by a lecturer at the Universidad Carlos III de Madrid (UC3M).

Emotions – artistic interpretation. Image credit: Mark Daynes via Unsplash, free license

Facial structure, fingerprints, voiceprints, retina patterns, finger and hand vein patterns, and even heartbeats are personal data that, when processed by emotion recognition systems, can be used to detect fatigue, stress, lack of concentration, happiness or sadness.
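To make concrete how such systems turn raw biometric readings into inferences about a worker's state, here is a minimal, purely illustrative sketch in Python. It is not taken from the book or from any real product: every feature name, threshold and weight is a hypothetical assumption, and real systems would use trained statistical models rather than fixed rules.

```python
# Illustrative sketch only: a toy rule-based "fatigue" scorer over
# biometric signals. All feature names, thresholds and weights are
# hypothetical assumptions, not drawn from the book or any real system.

from dataclasses import dataclass


@dataclass
class BiometricSample:
    """One hypothetical reading aggregated from workplace sensors."""
    heart_rate_bpm: float        # e.g. from a wearable device
    blink_rate_per_min: float    # e.g. from a camera-based eye tracker
    voice_pitch_var: float       # e.g. pitch variance during a voice call


def fatigue_score(sample: BiometricSample) -> float:
    """Return a 0..1 score; higher means 'more likely fatigued'.

    A real product would use trained models; this fixed rule simply
    shows how disparate biometric signals can be fused into a single
    inference about a worker's state, which is what raises the legal
    concerns discussed in the article.
    """
    score = 0.0
    if sample.heart_rate_bpm < 55:        # low arousal (hypothetical cutoff)
        score += 0.4
    if sample.blink_rate_per_min > 25:    # frequent blinking (hypothetical)
        score += 0.4
    if sample.voice_pitch_var < 10.0:     # flat speech (hypothetical)
        score += 0.2
    return min(score, 1.0)


if __name__ == "__main__":
    reading = BiometricSample(heart_rate_bpm=52,
                              blink_rate_per_min=28,
                              voice_pitch_var=12.0)
    print(f"Inferred fatigue score: {fatigue_score(reading):.2f}")
```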

“It could be said that this technology is a modern version of the polygraph or lie detector,” says researcher Ana Belén Muñoz Ruiz, an associate professor in UC3M’s Department of Social Law and Private International Law, who recently published the book “Biometría y sistemas automatizados de reconocimiento de emociones: Implicaciones Jurídicos-Laborales” (Biometrics and Automated Emotion Recognition Systems: Legal and Labour Implications), in which she analyses the uses of this technology in the workplace and its employment implications for workers.

These systems increase companies’ capacity to analyse and exploit employees’ personal data. “Above all, I am concerned about whether, through this technology, companies could learn about the health data of their workers for non-preventive purposes,” says the researcher.

The European Union is currently working on the Artificial Intelligence Act, which is expected to come into force at the end of 2026 and will introduce limits and guarantees on the use of emotion recognition technology. At the same time, other technical standards are being prepared.

“However, the approval process is slow and, meanwhile, the current regulations do not offer real protection to employees,” concludes Ana Belén Muñoz Ruiz. These systems can affect a wide range of fundamental rights: the right to personal data protection, privacy, non-discrimination and, above all, the mental health of employees.

Her book analyses some of the first European rulings on emotion recognition systems, such as the Hungarian Data Protection Agency’s decision of 8 February 2022. That case involved AI-based software that analysed and evaluated the emotional states of a bank’s customers and employees in order to manage complaints, monitor call quality and increase employee efficiency.

“This example is just a precedent for the new realities in which we will face problems arising from the work-related use of algorithms and AI in companies,” says the expert.

In the United States and other countries such as China, there have been warnings that some companies are using these practices in the workplace. In the European Union, since July 2022, it has been mandatory to include a fatigue and drowsiness detector in newly registered cars.

Even emotion recognition systems that do not use biometric data, such as chatbots and RPA (robotic process automation) bots, can erode the fundamental right to privacy when monitoring takes place in the workplace, according to the expert, since the traditional limits and guarantees linked to the fundamental rights to privacy and personal data protection do not seem sufficient.

Are companies entitled to use this technology? The book examines the question from two angles. The first concerns the processing of employees’ biometric data to meet companies’ day-to-day obligations, such as time logging; in fact, time logging via fingerprint or other biometric data has been challenged in countries such as Austria, France, Italy and Norway.

The second concerns the legal grounds on which companies may rely to use automated emotion recognition systems.

“Based on my research, I conclude that improving customer satisfaction and employee performance, or integrating people with disabilities, do not seem to be sufficient grounds. While safety at work is a duty that the company must fulfil and can justify the processing of employee data, extending that justification to automated emotion recognition systems is highly questionable,” concludes Ana Belén Muñoz Ruiz.

Source: Universidad Carlos III de Madrid