AI-Related Lawsuits Are Coming


As HR managers use artificial intelligence (AI) to make recruiting decisions, evaluate employee performance, and decide on promotions and firings, HR executives should know that several law firms are preparing for what they believe is inevitable: AI-related lawsuits.

In March of this year, law firm Paul Hastings launched an AI practice group to help clients deploying AI-driven services and products. The firm will help clients defend against class-action lawsuits and give legal advice in areas such as compliance with laws and regulations, data privacy, and AI governance and ethics.

"We saw a need in this space to form a group of experts to help advise clients on best practices, transparency, fairness and oversight in the use and deployment of AI, to help create corporate best practices and minimize the risks of what we believe will be litigation that I personally think will start and focus on the workplace," said Bradford Newman, chair of the Paul Hastings employee mobility and trade secret practice and a member of the law firm's AI practice group.

Newman said many companies already use AI in the applicant-selection process, and he expects AI tools to be widely used to assess worker performance as well.

"AI tools are going to drive decisions like who ought to be promoted and who should be fired," Newman said. "When you have algorithms making decisions that impact humans in one of their most essential life functions—which is their work—there are going to be issues of fairness and transparency and legal challenges, and I think we are going to see those legal challenges start very soon."

The Value of Legal Advice

In May, global law firm DLA Piper launched an AI practice that provides legal guidance and helps companies understand the legal risks of adopting AI systems.

According to Danny Tobey, partner at DLA Piper, companies were increasingly asking about his firm's AI capabilities, and it became clear that opening an AI practice to serve their needs made sense.

"Companies realized that they had all these sources of data and wanted to know how they could capture and extract value from that data in ways that make them have a competitive edge," Tobey said. "But they didn't know how to get started, from both a technological perspective and a legal perspective."

Tobey added that DLA Piper helps companies set up central data councils and best-practice policies that can clear technical hurdles, such as ensuring companies don't clean the same data twice. The firm also helps clear legal hurdles, such as aligning data rights policies across the company.

He said DLA Piper's AI team includes lawyers who ensure that AI systems comply with anti-discrimination and other laws, looking at questions of both intent and impact.

"When you talk about setting these systems up, you want to work with a team that is both thoughtful about building systems to comply with current regulations and also on the cutting edge of knowing what's coming next in law and technology," Tobey said.

Because so many of a company's business operations can raise AI-related legal questions, DLA Piper's legal team has found many openings to discuss the impact of AI with clients.

That legal advice includes contracting and licensing of AI services and technologies and understanding how AI technologies interact with regulations, Tobey said. Protecting AI through patents and other intellectual property is another hot area, and some clients are already embroiled in litigation.

"We are now seeing clients who are in disputes over responsibilities between AI providers and companies or even between companies and consumers when it comes to AI," Tobey said.

Other law firms, such as Littler Mendelson, Fisher Phillips and Proskauer Rose, are increasing their focus on AI and have partners who offer legal advice and expertise on AI and its risks.

Attracting Legislators' Attention

There is growing interest in legislation that will affect how companies use AI tools. The Artificial Intelligence Video Interview Act, which takes effect in Illinois on Jan. 1, 2020, illustrates the shifting legal landscape that AI will bring. The Illinois law is the first of its kind in the U.S. and is designed to regulate the increasing use of AI in the hiring process.

Under the law, employers must inform applicants that algorithms will analyze their interview videos, and they must explain how their AI program works and what characteristics the AI uses to evaluate applicants' suitability for the job. Applicants must also agree to be evaluated by the technology.
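
For illustration, here is a minimal sketch, in Python, of how an applicant-tracking workflow might gate AI analysis behind the law's three requirements of notice, explanation and consent. The class and field names are hypothetical, invented for this example, and none of this is legal advice.

    from dataclasses import dataclass

    @dataclass
    class Applicant:
        """Hypothetical applicant record; all field names are illustrative."""
        name: str
        notified_of_ai: bool = False     # told that AI will analyze the video
        explanation_given: bool = False  # told how the AI works and what it evaluates
        consent_given: bool = False      # agreed to be evaluated by the AI

    def may_run_ai_analysis(applicant: Applicant) -> bool:
        """Allow AI video analysis only when all three conditions described
        by the Illinois law are satisfied: notice, explanation and consent."""
        return (applicant.notified_of_ai
                and applicant.explanation_given
                and applicant.consent_given)

    candidate = Applicant(name="Jane Doe", notified_of_ai=True,
                          explanation_given=True, consent_given=False)

    if may_run_ai_analysis(candidate):
        print("AI analysis may proceed.")
    else:
        print("Do not run AI analysis: one or more requirements are unmet.")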

Another bill recently proposed by Sens. Cory Booker, D-N.J., and Ron Wyden, D-Ore., is the Algorithmic Accountability Act of 2019, which calls on the Federal Trade Commission to establish rules for evaluating "highly sensitive" automated systems.

If the bill passes, companies will have to assess whether their algorithms and the systems they support are biased or discriminatory and whether the information they contain presents a privacy or security risk to consumers.
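
The bill does not prescribe a specific test, but one long-standing yardstick in employment settings is the EEOC's informal four-fifths rule: if one group's selection rate falls below 80 percent of the highest group's rate, the outcome may indicate adverse impact. A minimal audit sketch, using invented numbers, might look like this:

    def selection_rate(selected: int, applicants: int) -> float:
        """Fraction of applicants in a group who were selected."""
        return selected / applicants

    # Hypothetical outcomes from an automated screening tool.
    women_rate = selection_rate(selected=30, applicants=100)  # 0.30
    men_rate = selection_rate(selected=50, applicants=100)    # 0.50

    # Four-fifths rule: compare each group's rate to the highest rate.
    ratio = women_rate / men_rate  # 0.60
    print(f"Adverse impact ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("Below the four-fifths threshold; review the tool for bias.")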

The possibility of perpetuating bias is a significant concern when using AI in the hiring process. Amazon's attempt to automate its recruiting platform highlighted the limitations of the technology: most of the resumes used to build the algorithm came from men, and the system learned to prefer male candidates.
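
To make the mechanism concrete, here is a toy sketch, not Amazon's actual system and with entirely invented data, showing how a model that learns from skewed historical decisions simply reproduces them:

    import random
    random.seed(0)

    # Invented historical hiring data: most applicants were men, and past
    # human decisions set a higher bar for women at the same skill level.
    def make_record():
        gender = "male" if random.random() < 0.8 else "female"
        skill = random.random()
        hired = skill > (0.5 if gender == "male" else 0.8)
        return gender, hired

    data = [make_record() for _ in range(10_000)]

    # A naive "model" that learns only the historical hire rate per gender
    # will score future candidates the same biased way.
    def learned_hire_rate(gender):
        outcomes = [hired for g, hired in data if g == gender]
        return sum(outcomes) / len(outcomes)

    print(f"Learned hire rate, men:   {learned_hire_rate('male'):.2f}")   # ~0.50
    print(f"Learned hire rate, women: {learned_hire_rate('female'):.2f}") # ~0.20

Any tool trained on records like these inherits the skew unless the bias is measured and corrected before deployment.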

Training machine learning algorithms on data that carries inherent bias, and deploying a system built by people with biases of their own, may produce evidence of discrimination that is hard for employers to fight in court, said Elliot Dinkin, chief executive officer of Cowden Associates, a Pittsburgh-based HR consulting firm.

"HR executives have to look at AI with an air of caution, because they'll have to defend their use of AI and defend their results, especially in the recruiting process," Dinkin said.

Understanding the Risks of AI

Researchers at University College London have developed a machine learning algorithm that can learn a person's handwriting and replicate it with frightening accuracy. This might be helpful to a person with an injury or disability who needs help writing a letter, but it could also be used fraudulently.

Tobey believes HR managers are going to need to understand not just that they are adopting AI technology, but what kind of AI they are adopting and its risks and benefits.

At that point, Tobey said, "they are going to need to be ready to explain to other people how decisions are being made using those technologies."

Nicole Lewis is a freelance journalist based in Miami. She covers business, technology and public policy.
