The California Supreme Court recently issued a ruling that expands the definition of employer under the state's main discrimination statute, the Fair Employment and Housing Act (FEHA). This expansion not only increases the number of defendants that can be swept into a FEHA action, but it may also have a significant impact on California's burgeoning efforts to regulate the use of artificial intelligence in employment decisions.
On March 16, 2022, the U.S. Court of Appeals for the Ninth Circuit certified to the Supreme Court of California the following question: "Does California's FEHA, which defines employer to include any person acting as an agent of an employer, permit a business entity acting as an agent of an employer to be held directly liable for employment discrimination?"
In Raines v. U.S. Healthworks Medical Group, the California Supreme Court answered yes, concluding that an employer's agents may themselves be considered "employers" for purposes of the statute. The court held that a third-party agent may be held directly liable for employment discrimination in violation of the FEHA when it has at least five employees and carries out FEHA-regulated activities on behalf of an employer. The court acknowledged that its ruling increases the number of defendants that might share liability when a plaintiff brings FEHA-related claims against their employer.
The court analyzed the language of FEHA Section 12926(d), stating that its "most natural reading" supports the determination that an employer's business-entity agent "is itself an employer for purposes of FEHA." The court further addressed the statute's legislative history, tracing the origins of the definition of "employer" to the Fair Employment Practices Act (FEPA), enacted in 1959, which adopted the National Labor Relations Act's (NLRA) "agent-inclusive language."
The court also looked to federal case law, finding support for the idea that "an employer's agent can, under certain circumstances, appropriately bear direct liability under the federal antidiscrimination laws." Significantly, the court found that its prior rulings in Reno v. Baird and Jones v. Lodge at Torrey Pines Partnership, which declined to extend personal liability for discrimination or retaliation claims to individual supervisors, did not dictate the result here.
The court also identified three policy considerations that support its reading of the statutory language:
- Imposing liability on an employer's agents broadens FEHA liability to the entity that is "most directly responsible for the FEHA violation" and "in the best position to implement industry-wide policies that will avoid FEHA violations."
- Imposing liability on an employer's agents "furthers the statutory mandate that the FEHA be construed liberally in furtherance of its remedial purposes."
- The court's reading of the statutory language will not impose liability on individuals who might face financial ruin for themselves and their families if held directly liable under the FEHA.
Equally important are the rulings the court did not make in Raines. The California Supreme Court noted that it was not deciding the significance, if any, of an employer's control over the agent's acts that gave rise to a FEHA violation, nor did it decide whether its conclusion extends to business-entity agents with fewer than five employees. Critically, it also did not address the scope of an agent's potential liability under FEHA's aiding-and-abetting provision.
Impact on California's Efforts to Regulate AI in Employment Decisions
Raines will likely have a significant impact on businesses that provide services or otherwise assist employers in the use of automated-decision systems for recruiting, screening, hiring, compensation, and other employment decisions. Coupled with proposed revisions to the state's FEHA regulations, this expansion of the statute's reach takes California one step closer to establishing joint and several liability across the AI tool supply chain.
Under the Fair Employment & Housing Council's proposed regulations addressing the use of artificial intelligence, machine learning, and other data-driven statistical processes to automate decision-making in the employment context, it is unlawful for an employer to use selection criteria—including automated-decision systems—that screen out, or tend to screen out, an applicant or employee on the basis of a protected characteristic, unless the criteria are demonstrably job-related and consistent with business necessity.
The draft regulations explicitly define "agent" broadly to include third-party providers of AI-driven services related to recruiting, screening, hiring, compensation, and other employment processes, and redefine "employment agency" to similarly cover these third-party entities.
One key proposal, under the aforementioned aiding-and-abetting provision, even extends liability to the "design, development, advertisement, sale, provision, and/or use of an automated-decision system." The high court's decision in Raines unquestionably supports the council's proposed revisions and reinforces the prospect of joint and several liability across the artificial intelligence tool supply chain, regardless of the final form of the council's regulations.
Michelle Barrett Falconer, Marko Mrkonich, Niloy Ray, Alice Wang, and Cristina Piechocki are attorneys with Littler in California and Minnesota.
© 2023. All rights reserved. Reprinted with permission.