#MeToo Jumpstarts Tech Push to Combat Harassment

 

By Dave Zielinski | March 26, 2019

When Sabrina Atienza watched a close friend experience the pain of being sexually harassed at work, she decided it was time for a new course of action. Atienza, a software engineer and entrepreneur, believed that a new kind of technology might help companies better anticipate and address harassment, bullying behavior and abusive language in their workplaces.

Atienza launched Valued, a San Francisco-based company that uses artificial intelligence (AI) and natural language processing to analyze communication on internal company platforms such as Slack and warn HR professionals about potential problems brewing.
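Valued's models are proprietary, so the Python sketch below is purely hypothetical, illustrating only the general pattern: scan each message and flag language that may deserve a closer look. The watch list, the Message structure and the is_flagged helper are illustrative stand-ins, not Valued's actual approach.

```python
# Hypothetical illustration only: commercial tools like Valued train NLP
# models on labeled data; this stand-in just scans for watch-listed phrases.
from dataclasses import dataclass

WATCH_LIST = {"worthless", "shut up", "nobody wants you here"}  # placeholder lexicon


@dataclass
class Message:
    team: str  # sender's team, kept for aggregate (not individual) reporting
    text: str


def is_flagged(msg: Message) -> bool:
    """Return True if the message contains any watch-listed phrase."""
    lowered = msg.text.lower()
    return any(phrase in lowered for phrase in WATCH_LIST)


print(is_flagged(Message("sales", "That idea is worthless.")))  # True
```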


Rise of AI-Driven Monitoring

Valued is one of a growing number of vendors offering tools that analyze internal company communications and provide anonymous, aggregated reports that help HR detect patterns over time.

"Our goal isn't to replace HR but to nudge it in the right direction where there seems to be smoke and it makes sense to investigate further," Atienza said.

The anonymous feedback protects employee privacy while giving HR clues about developing problems. Results are delivered to HR in the form of an organizational chart that identifies areas of troublesome communication. "We want to make it easy to visualize the impact that poor behavior might be having across functions or different work teams," Atienza said.
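The anonymization step can be sketched the same way. The minimum cell size below is an assumed privacy threshold, not a documented Valued setting; the point is that HR sees counts by team, never individual authors, and teams too small to report safely are suppressed.

```python
# Hypothetical sketch of anonymized, aggregated reporting.
from collections import Counter

MIN_CELL_SIZE = 5  # assumed threshold: suppress teams with too few flags


def team_report(flagged_teams: list[str]) -> dict[str, int]:
    """flagged_teams holds one team name per flagged message; returns
    per-team counts, omitting teams below the anonymity threshold."""
    counts = Counter(flagged_teams)
    return {team: n for team, n in counts.items() if n >= MIN_CELL_SIZE}


print(team_report(["sales"] * 7 + ["engineering"] * 2))
# {'sales': 7} -- engineering suppressed rather than risk identifying individuals
```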

Communicating the 'Why' Behind Monitoring

How the use of such monitoring tools is communicated to the workforce makes all the difference in their success, workplace experts say. Patti Perez, vice president of workplace strategy for Emtrain, a technology platform that provides compliance education, is the author of The Drama-Free Workplace (Wiley, April 2019). She said it's important that AI-driven monitoring create "forward-looking" accountability as well as retrospective documentation or disciplinary action.

"Flagging objectionable behavior or language provides an opportunity not just for discipline but for education to improve future actions," Perez said. "If the technology is applied only in a backward-looking way, there is greater chance of it being viewed simply as a Big Brother tool by employees."

Explaining to the workforce why such monitoring technology is being used is vital. "It should be communicated about, deployed and executed in a way that makes it clear you're not just monitoring to protect the company legally but are interested in creating a healthier and more transparent culture," she said.

Perez also said it's important that companies reach consensus about what constitutes objectionable behavior or language in online communications before deploying monitoring technology, since interpretations can vary. Atienza agreed that one of the biggest challenges for her clients is creating a mutually agreed-upon definition of what constitutes bullying, abusive language or sexual harassment. "Given the same situation, different people will label the behavior very different things," she said.

Legal Considerations

Employing AI-driven monitoring of internal communications can have legal implications, but employee privacy is typically not one of them, employment attorneys say. "As long as an employer properly notifies employees that any communication on platforms considered company property is subject to monitoring, there is typically little privacy risk," said Jennifer Betts, an employment attorney and shareholder with Ogletree Deakins in Pittsburgh.

The greater risk lies in how the data gathered from monitoring is stored and used, she said. "If a third-party vendor is storing information [that] it is gathering and filtering from a company's internal platforms, [then] it has to ensure that information is kept safe from data breaches."

If companies make any type of employment decision based on the data gathered, Betts said, there is also the potential for disparate treatment or disparate impact claims connected to discrimination. "I think plaintiffs' counsels are starting to gear up to file these sorts of claims. One problem from an employer standpoint is they don't always fully understand how their AI technology or algorithms really work."

Using anonymized data and continuously testing AI tools provide some legal protection, Betts said. "If you're using any AI technology to make employment decisions, you should have a mechanism to periodically test and validate the tool to ensure it is not having a disparate impact on any part of your workforce," she said. "That testing protects you to some degree from claims of discrimination."
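Betts doesn't name a specific test, but one widely used rule of thumb is the EEOC's "four-fifths" guideline: no group's selection rate should fall below 80 percent of the highest group's rate. A minimal sketch with made-up group names and numbers:

```python
# Illustrative only: the EEOC "four-fifths" rule of thumb for adverse impact.
def selection_rate(selected: int, total: int) -> float:
    return selected / total


def four_fifths_check(rates: dict[str, float]) -> dict[str, bool]:
    """Pass (True) if a group's rate is at least 80% of the highest rate."""
    top = max(rates.values())
    return {group: rate / top >= 0.8 for group, rate in rates.items()}


rates = {"group_a": selection_rate(45, 100), "group_b": selection_rate(30, 100)}
print(four_fifths_check(rates))
# {'group_a': True, 'group_b': False} -- group_b warrants a closer look
```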

Jay Glunt, a partner and employment attorney with the law firm of Reed Smith, said HR leaders also should consider provisions of the National Labor Relations Act (NLRA) when employing AI-driven monitoring. Employees have rights under the act to engage in "protected concerted" activity when discussing work-related issues on internal networks, Glunt said, protecting them from employer retaliation for such activities.

"Monitoring could potentially quell the opportunity for workers to engage in that protected concerted activity," Glunt said. "That component of the NLRA also applies to most employers and employees in the private sector regardless of whether the workforce is unionized or not."

Dave Zielinski is a freelance business writer and editor in Minneapolis.
