What is an Emotion Recognition System under the EU’s Artificial Intelligence Act? Part 1: A machine that “understands” your Monday blues!

Written By

Nora Santalu

Associate
UK

I'm an associate in the privacy and data protection team in London. I advise on the GDPR, the EU AI Act as well as ePrivacy rules with a particular focus on the regulation of biometrics and fraud prevention.

This article explores what counts as an Emotion Recognition System (ERS) under the EU’s Artificial Intelligence Act (the AI Act) and delves into the ambiguities surrounding its definition. Since the AI Act prohibits the use of ERSs in certain contexts (a prohibition due to kick in on 2 February 2025), this article aims to help businesses be more alert to possible regulatory guidance that might bring them within scope of the AI Act at short notice.

Article 3(39) of the AI Act defines an ERS as an “AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data”.

There are three elements to this definition:

  1. an AI system for the purpose of identifying or inferring; 
  2. emotions or intentions of natural persons;
  3. on the basis of their biometric data.

Note that draft versions of the AI Act went further. EP amendment 191 also included identifying or inferring “thoughts” and “states of mind” in the definition of ERS and intended to cover inferences drawn from “groups” as well as “individuals”, while a separate version (Art 3(34) of the Commission Text) also included “psychological states”. Those references did not make it to the final text.

What is an AI System?

The AI Act defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".

This definition is intentionally broad to cover a wide range of machine-based systems. 

Emotions or intentions of natural persons

a) What is an emotion?

Recital 18 of the AI Act says: “[t]he notion refers to emotions or intentions such as happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction and amusement.”

Whilst the above is not an exhaustive list, the AI Act limits what constitutes an “emotion” or an “intention”: they do not include “physical states such as pain or fatigue”, nor do they include “basic facial expressions, such as a frown or a smile, or gestures such as the movement of hands, arms or head, or characteristics of a person’s voice, such as a raised voice or whispering”, unless they are used for identifying or inferring emotions.

Given that these physical states or basic expressions do not constitute emotions or intentions, the systems detecting them should logically not constitute ERSs either. For example, gaze-tracking systems or safety equipment measuring the alertness of employees in high-risk environments (such as around heavy machinery or large vehicles, especially where long working hours are involved) should not constitute ERSs in light of this distinction.

Recital 44 might give clues as to why the law draws a distinction between basic facial expressions and emotions. It questions the scientific basis and reliability of AI systems when inferring a complex construct (such as an emotion), which can differ between cultures, situations and even within the same person (Amendment 52). As such, it may be that the AI Act targets claims that certain ERSs can read people’s minds (as opposed to facial expressions, which are simpler, externally visible and easier to classify universally).

However, where such systems are used to infer the emotions or intentions of individuals (for example, inferring happiness from a smile), they fall back within the definition. The AI Act is not clear on who must do the inferring for a system to be an ERS. For example, if a human infers happiness based on an AI system detecting a smile, would this also make the system an ERS (even though it is not the AI system inferring the emotion)?
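
To make the distinction concrete, here is a minimal Python sketch. The function names, labels and mapping are hypothetical assumptions for illustration, not drawn from the AI Act or any real product. It contrasts a system that outputs only a basic facial-expression label with one that takes the additional step of mapping that expression to an emotion; on the analysis above, it is the second step that appears to bring a system within the ERS definition.

```python
# Hypothetical sketch only: names, labels and the mapping are illustrative
# assumptions, not taken from the AI Act or any real product.

def detect_expression(frame: bytes) -> str:
    """Stand-in for a computer-vision model that classifies basic facial
    expressions. Outputting only "smile" or "frown" arguably stays on the
    "basic facial expression" side of Recital 18."""
    return "smile"  # placeholder result; a real model would analyse the frame

# The additional inference step: mapping an expression to an emotion
# (e.g. happiness from a smile) is what appears to make the system an ERS.
EXPRESSION_TO_EMOTION = {"smile": "happiness", "frown": "sadness"}

def infer_emotion(expression: str) -> str:
    return EXPRESSION_TO_EMOTION.get(expression, "unknown")

if __name__ == "__main__":
    expression = detect_expression(b"...")  # expression detection only
    emotion = infer_emotion(expression)     # the inference step at issue
    print(expression, "->", emotion)        # smile -> happiness
```

Whether that second step must be performed by the AI system itself, or whether a human reading the “smile” output and drawing the inference is enough, is exactly the ambiguity noted above.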

Some examples of systems detecting or inferring emotions include:

  • wearable technologies (e.g. immersive headsets or smartwatches), which can infer excitement through heart rate, measure pupil dilation against visual prompts to infer attraction, or analyse moods by detecting movement of the body via motion recognition systems (see the illustrative sketch after this list).
  • retail sentiment analysis systems that analyse the facial expressions of in-store customers looking at products.
  • out-of-home advertising billboards assessing the facial expressions of passers-by to analyse whether they like the advertisement shown.
  • customer helplines integrating voice-based emotion analysis to determine customer satisfaction with products or services.
  • (potentially in the future, when consumer-grade electroencephalography (EEG) systems are deployed on a large scale) measuring a consumer’s sentiments towards products or brands based on brainwave analysis (i.e. neuromarketing). See the ICO’s neurotechnology Report, which says: “[i]n the future, non-invasive devices capable of reading responses may be used at home to tailor consumer preferences. This could include neurotechnology-enabled headphones that might target advertising and commercials of a variety of goods, similar to cookie-enabled tracking online.”
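
By way of illustration of the first bullet (wearables inferring excitement from heart rate), the inference can be as simple as the Python sketch below. The threshold, labels and function name are hypothetical assumptions, but the sketch shows how an emotional label can be derived purely from data generated by the body, which is what engages the “biometric data” element discussed later in this article.

```python
# Hypothetical sketch only: the threshold, labels and function name are
# illustrative assumptions, not taken from any real wearable product.

def infer_excitement(heart_rates_bpm: list[float], resting_bpm: float) -> str:
    """Infer an emotional label from heart-rate readings, i.e. physiological
    data captured from the wearer's body."""
    if not heart_rates_bpm:
        return "unknown"
    average = sum(heart_rates_bpm) / len(heart_rates_bpm)
    # Crude illustrative rule: a sustained rise well above the resting rate
    # is labelled "excited"; anything else "calm".
    return "excited" if average > resting_bpm * 1.3 else "calm"

if __name__ == "__main__":
    readings = [92, 98, 105, 110]  # illustrative smartwatch readings (bpm)
    print(infer_excitement(readings, resting_bpm=65))  # -> excited
```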

b) What is an intention?

ERSs are not limited to recognising emotions. Under the definition in the AI Act, they also cover the “intentions” of natural persons. However, the AI Act does not give much clarity on the “intention” element. None of the examples cited in Recital 18 would typically be classified as an “intention”. This is because an “intention” has a predictive quality about the future, whereas the examples in Recital 18 reflect present reactions to situations or environments.

Possible use cases could include AI systems detecting or inferring:

  • an intention to commit a crime (e.g. aggression detection systems inferring violent intent on the basis of facial expressions or body language, or gaze-tracking and social behaviour analysis systems predicting shoplifting intention).
  • employees’ intentions to resign from their jobs on the basis of how happy they seem on videocalls (through facial expression analysis). 
  • suicidal intentions in suicide hotspots such as railways, high-rise buildings or bridges, through CCTV and computer vision analysis (e.g. relying on behavioural indicators such as an individual’s repeated transitions between walking and standing, or leaning against a railway fence with their head facing down…), as illustrated in the sketch after this list.
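
As a purely illustrative example of the last bullet, the sketch below (with hypothetical labels, threshold and function name) shows how this kind of “intention” inference typically reduces to rules or models applied to behavioural data captured from the body via CCTV, which again raises the biometric-data question addressed in the next section.

```python
# Hypothetical sketch only: labels, threshold and function name are
# illustrative assumptions, not taken from any deployed system.

def flag_behavioural_risk(activity_labels: list[str], threshold: int = 4) -> bool:
    """Flag a person for human review when a computer-vision model's activity
    labels (e.g. "walking" / "standing") show repeated transitions, one of the
    behavioural indicators mentioned above."""
    transitions = sum(
        1
        for previous, current in zip(activity_labels, activity_labels[1:])
        if previous != current
    )
    return transitions >= threshold

if __name__ == "__main__":
    observed = ["walking", "standing", "walking", "standing", "standing", "walking"]
    print(flag_behavioural_risk(observed))  # -> True (4 transitions)
```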

Regulatory guidance is needed on the meaning of “intention” in the ERS definition to identify which AI systems would be covered because they detect or infer intentions.

On the basis of biometric data

This is perhaps the most important (and the least clear) part of the definition of an ERS. The definition only covers systems that rely on biometric data. As such, where emotions or intentions are detected or inferred through non-biometric means, the AI systems performing those detections or inferences would not be considered ERSs. To identify what counts as non-biometric means, we first need to identify what would make an ERS biometric.

The AI Act defines biometric data as: “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data”.

Whilst Recital 14 of the AI Act says “The notion of ‘biometric data’ used in this Regulation should be interpreted in light of the notion of biometric data as defined in [the GDPR]”, the scope of biometric data under the AI Act is broader than under the General Data Protection Regulation. This is because the definition in the AI Act, unlike the GDPR, does not require biometric data to confirm or allow the unique identification of a person. Otherwise, most ERSs would not be considered to be using biometric data, as they do not necessarily need, or have the ability, to identify the individual, despite using data generated from a person’s body or behaviour.

This then poses the question: would all physical, physiological or behavioural data acquired from a person and processed through “specific technical processing” be considered biometric data under the AI Act, or does there still need to be some immutable (hard-to-change) or subconscious quality that sets biometric data apart from other types of body- or mind-related data?

The term “biometric” traditionally denotes some form of immutable physical characteristic or subconscious/uncontrollable behaviour (Biometric Recognition and Behavioural Detection). However, there is no longer consensus on this division, and there is an ongoing debate as to whether wider use cases of body- or mind-related data should be brought into the scope of biometric data (When is A Biometric No Longer A Biometric).

Since the definition of ERS depends on the definition of biometric data, there is a need to clarify the scope of the definition of biometric data to determine which technologies count as ERSs. 

The use of “emotion recognition system” as a defined term in the AI Act

The AI Act does not use the defined term “emotion recognition system” consistently throughout its text. This raises the question: is the use of different terms intentional, or a mere oversight?

For example, Article 5(1)(f) (on prohibited AI practices) states that “the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions [is prohibited] except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons”. This wording does not refer to “emotion recognition system” (even though it is a defined term in the AI Act). There is also no reference to “intentions” or “detection”, and no reference to the prohibited system having to be biometric-based.

Recital 44 (regarding the “concerns about the scientific basis of AI systems aiming to identify or infer emotions”) further confuses the terminology. It states that “the placing on the market, the putting into service, or the use of AI systems intended to be used to detect the emotional state of individuals in situations related to the workplace and education should be prohibited.” Here, the wording refers to the “detection” of an “emotional state” rather than the “inferring” of “emotions”. However, Recital 44 does refer to biometric ERSs in the context of prohibited practices, which suggests that the wording in Article 5(1)(f) is an oversight and that it is intended to refer to the defined term ERS.

When we look at Annex III of the AI Act (on high-risk AI systems and use cases), the terminology is again not quite clear: it refers to “AI systems intended to be used for emotion recognition”. However, it is easier to assume here that the reference is to ERSs, given that the entry is included under the biometric AI systems heading and Recital 54 implies that high-risk emotion recognition systems are those emotion recognition systems that are not used in prohibited contexts, meaning the difference lies in the use case and not the technology.

Overall, our view is that the use of mixed terminology is not intentional. Instead, all variations of the terminology are references to the defined term: “emotion recognition system”.

Author’s Note: Currently, there is no regulatory guidance on many of the above points. This article aims to alert businesses that their systems (such as sentiment analysis systems) might be caught by the AI Act’s definition of ERS under the regulatory guidelines due to be published before 2 February 2025. If you think your systems might be considered ERSs under the AI Act and are wondering what the implications for your products or business might be, please do get in touch with us.
