AI Programs in Japan Are Forcing Workers to Smile More
A Japanese supermarket chain is getting attention for implementing an AI tool called “Mr. Smile” that monitors workers for the quality and quantity of their smiles when interacting with customers, raising questions around the globe about how far to allow artificial intelligence into the workplace. Mr. Smile, introduced at eight Aeon locations earlier this year, initially monitored over 3,000 employees, using more than 450 elements to assess facial expressions, the length and sincerity of smiles, and the volume and tone of voice. Deeming the trial a success, Aeon just announced it will roll out the system to all 240 of its stores and monitor tens of thousands of workers across Japan “to standardize staff members’ smiles and satisfy customers to the maximum.” Could companies in the U.S. implement AI-driven emotional monitoring? Here’s what employers in Japan and the U.S. should consider when looking into AI technology that mandates specific emotions from workers.
Worker Advocates' Harassment Concerns
Some believe that Aeon’s nationwide rollout of Mr. Smile is well-intentioned. After all, the standards for customer service in Japan are famously high, and this program will help provide feedback to workers about changes to improve their skills and create a happier experience for customers.
But according to a recent report, worker advocates are worried about rising rates of kasuhara—customers harassing workers for not being friendly enough to them. They believe that a system like Mr. Smile will put even more pressure on workers to maintain a state of constant happiness when dealing with customers, and may embolden customers to complain whenever their experience falls short of these heightened expectations.
- In fact, several other large employers in Japan have recently adopted standards intended to protect workers from customers who exploit their customer status to commit illegal acts or make unreasonable demands; violators may be banned from stores or reported to the police. The new policies prohibit abusive language, raised voices, insults, threats, excessive demands, and other unreasonable behavior.
- This problem has caught the eye of government regulators, who want to take action to prevent the harassment. Tokyo will likely be the first of Japan’s 47 prefectural assemblies to pass an ordinance prohibiting customer harassment by March 2025, and the ruling Liberal Democratic Party will soon propose a law to be brought before the Diet (national legislative body). This follows the Ministry of Health, Labour and Welfare issuing guidelines on how to protect employees from customer harassment and recognizing trauma caused by kasuhara as a workplace accident.
AI to the Rescue?
A recent report described two new AI tools that aim to counteract the scourge of customer harassment:
- Masayuki Kiriu, dean and professor of social psychology at Toyo University, is developing an AI-driven training tool that can coach employees on how best to respond to abusive customers. It also assesses each worker’s threshold for harassment in an effort to educate companies developing policies on customer harassment.
- Another tech company is developing AI software that tunes out the anger in people’s voices on phone calls, making angry people sound calm and protecting workers from abuse.
Mr. Smile Might Not Be Welcomed in the U.S.
Any U.S. employer considering an AI system to monitor facial expressions at work and mandate more smiling and happier tones will need to overcome several potential legal barriers.
Disability Discrimination and Accommodations
Employees who are unable to conform to typical smiling standards due to physical, neurological, or mental conditions might not fare well under Mr. Smile’s watchful eye. That could cause employers problems under the Americans with Disabilities Act (ADA) and state disability laws. For example:
- Employees who have Bell’s palsy or other conditions involving facial paralysis, or who have experienced a stroke or facial nerve damage, may have a physical or neurological reason they cannot smile.
- Those with depression, anxiety disorders, post-traumatic stress disorder, bipolar disorder, or other mental health conditions might not meet the standards set by a mandatory smiling program.
- Neurodiverse workers, such as those on the autism spectrum, may have difficulty interpreting and expressing emotions in socially typical ways. This could include them smiling less frequently or at unexpected times, which may not align with conventional social cues.
Under the ADA, employers must make reasonable accommodations for employees with disabilities unless such accommodations would cause undue hardship to the business. In the context of facial expression monitoring, if a disability prevents an employee from meeting this smiling standard, an employer may need to consider whether there are alternative ways to achieve the desired customer service outcomes without discriminating against that employee.
Potential AI Bias
AI systems that track facial expressions can have biases, particularly in recognizing emotions across different racial or ethnic groups. These systems may inaccurately evaluate the facial expressions of non-white employees, leading to unfair treatment or discrimination claims based on race or ethnicity. The American Civil Liberties Union recently alleged that a company that uses a video tool to aid with interviewing prospective workers is likely to discriminate based on race and other protected characteristics because of the underlying AI data relied upon by the programs. A similar argument might be made against any tool like Mr. Smile.
Privacy and Biometric Concerns
While employers generally have the right to monitor employees performing their duties in the workplace, constantly tracking facial expressions could be seen as an invasion of employees’ privacy, especially when such data could be collected continuously throughout the workday. Any employer that uses a facial recognition system would also need to ensure that any information collected about the workers’ faces is not mishandled or disclosed without consent. Collecting, sharing, or using this data in ways that could compromise employee privacy could lead to legal concerns.
Also, implementing an AI system to monitor employees’ facial expressions could raise several legal concerns under state privacy laws. The Illinois Biometric Information Privacy Act (BIPA) is arguably the most stringent. If the AI system captures and analyzes employees’ facial geometry to monitor expressions, this could fall under the part of the act that regulates the treatment of biometric identifiers. To start, employers would need to obtain informed consent from workers before collecting this information and would also need to provide certain disclosures to workers, among other requirements.
Employee Morale and Stress
Aside from the legal hurdles, employers need to consider that employees who know they are being monitored for smiling could feel additional stress or pressure to conform to what they consider to be arbitrary and artificial behavioral standards, which might reduce morale and productivity. It could lead to high turnover, difficulty recruiting new workers, and a poor reputation in the marketplace. Employers could also face situations where workers feel the need to turn to a union to help address what they consider to be a troubling work environment.
Labor Relations
Even nonunionized employers are required to comply with federal labor law, and the National Labor Relations Board could have at least two potential concerns over a system like Mr. Smile in the workplace. First, the board’s general counsel warned employers several years ago that agency investigators would be targeting electronic workplace surveillance to ensure it didn’t interfere with employees’ protected workplace activity. A system that tracks employee smiles might very well be in its crosshairs. Separately, the board has been scrutinizing workplace civility policies under a relatively new standard, concluding that many otherwise common and seemingly benign rules might conceivably chill employees’ organizing rights. Given that at least one current board proceeding is challenging rules that require individuals to “be positive” and “smile and have fun,” it would not be a stretch to see the agency put a policy requiring workers to smile under the microscope.
Conclusion
While some businesses in Japan might be open to a mandatory-smile policy enforced by AI, there are hurdles to overcome if employers want to consider a similar program in the U.S.
Kate Dedenbach is an attorney with Fisher Phillips in Detroit. Joshua D. Nadreau is an attorney with Fisher Phillips in Boston. Karen L. Odash is an attorney with Fisher Phillips in Philadelphia and Minneapolis. Nan Sato is an attorney with Fisher Phillips in Philadelphia and New York City. © 2024 Fisher Phillips. All rights reserved. Reposted with permission.