On 7 June 2021, OHCHR B-Tech and ECNL jointly hosted a community lab to discuss the human rights impacts of emotion recognition technologies, which are increasingly being developed and deployed in the workplace. The topic is particularly relevant today, as the COVID-19 pandemic has heightened the human rights risks of emotion recognition. The session looked at major concerns related to this issue, such as:
- Operating environment: How is emotion recognition used in the workplace? What human rights risks are embedded in the design of this technology? Should this technology even be used in the workplace at all, or is it inherently incompatible with human rights?
- Managerial implications: How are executives and members of the leadership addressing such concerns at the governance level? How do they engage affected stakeholders and ensure that critical voices, especially those of marginalized and vulnerable groups, are heard?
- Grievance mechanisms: Do employees and third-party users have access to effective internal grievance mechanisms, should harm occur?
Key takeaways from speakers’ presentations:
Emotion recognition is a dangerous technology, and its effectiveness is widely disputed.
Vidushi Marda, Senior Programme Officer at Article 19, explained that in some countries, such as China, emotion recognition technologies are already widespread. Such deployment often trickles down to the workplace, enabling managers to surveil their employees and exacerbating existing power asymmetries between workers and employers. Vidushi stated that emotion recognition goes a step further than facial recognition because it aims to discern one’s inner state; unlike a facial recognition match, its claims can be neither proved nor disproved. The underlying algorithms are designed on the basis of pseudoscientific research that does not take cultural differences and biases into account. Critics point out that emotions are culturally constructed and cannot be objectively defined. According to Vidushi, the predictions made by emotion recognition technology are highly inaccurate. Given how faulty the technology is, one may wonder whether it makes sense to deploy it at all. Although this is a highly debated issue among experts, most attendees agreed that this kind of technology should be used very cautiously, if at all, with some arguing for a complete ban.
The use of emotion recognition in the workplace risks exacerbating existing social and economic inequality and poses a severe threat to human rights.
Juan Carlos Lara, Research and Policy Director at Derechos Digitales, pointed out that the acquisition of such technologies can affect people’s rights, including workers’ rights, since these tools are marketed as enhancing an employer’s capacity to discipline employees. For example, marginalization and exclusion can occur when companies offer to screen candidates through video-based assessments and personality profiles that purport to gauge emotional intelligence from facial indicators and micro-expressions, practices that are complicit in discrimination.
These intrusive technologies can be used for predictive analysis and for creating new indicators of performance, for instance to predict whether a worker will form social relationships with other workers or will need medical leave because of stress. This can lead employers to adopt measures to regulate employees, such as promotions or demotions, that are discriminatory and may result in unjustified dismissals. According to Juan Carlos, such surveillance may also have an impact on unionizing, as it can be used to target people who take part in collective bargaining.
Dr. Kebene Wodajo, Senior Research Fellow at the University of St. Gallen, emphasized that human rights violations and discrimination should be understood as historical and structural issues: they build up over a long period of time through emotive action and interaction by different actors within the digital structure and beyond. This implicates employers, States, regulators, and actors in both the public and private sectors, including individuals.
Marginalized communities are the most affected by the rise of emotion recognition technologies.
According to Kebene, emotion recognition technologies have a significant impact on the marginalization of communities that have no voice in defining the concepts of emotion these systems rely on. Given that emotion is interactionally and culturally constructed, there are currently no objective standards to define it. Thus, the dominant culture informing this form of technology has the upper hand in defining what emotions are and in using its own perspective to conduct surveillance in the workplace. This underpins the potential discrimination faced by marginalized groups. For example, neurodivergent people, LGBTQI+ people, and people who speak with accents or dialects might be discriminated against by voice-based or other emotion recognition technologies. Similarly, a non-binary person might be penalized by facial recognition technologies because they do not fall within the binary division of gender behavioral norms. Therefore, there is a need for a normative and legal perspective within tech companies that identifies how these technologies impact the most vulnerable communities.
Regulation is necessary to deal with emotion recognition technologies.
In general, there is a need for more transparency from governments about who is participating in the emotion recognition market. Mehtab Khan, Resident Fellow at Yale, also identified two possible approaches that could be taken to regulate emotion recognition technologies and provide rights to workers.
The first is the individual rights approach, which assumes that people have a right to an unbiased hiring process and to object to discriminatory actions taken by employers during hiring. Under this approach, however, the burden is on the victim to show evidence that discrimination has occurred, which would require defining certain aspects of how individuals can be affected by emotion recognition technologies.
The collective rights approach, by contrast, suggests that companies should publish a defined set of information and make it available to the public so that certain actions can be contested. To overcome the current opacity, laws should require companies to conduct audits covering, for example, how people are ranked, what the basis and standards for employment decisions are, and where those standards derive from. Periodic reports should then be published on these questions.