"Big data" is taking over corporate America, and if it hasn't already affected your HR department, it likely soon will. It comes with the risk, though, of discriminating in violation of employment laws, so HR must proceed cautiously when using big data.
"Big data" refers to the collection and analysis of the digital history created when people shop, surf the Internet, drive their cars and otherwise go about their daily lives. It also includes the tests, measurements and other ways of documenting aptitudes, behavior and competencies that employers compile about their employees and applicants. The "secret sauce" is the algorithm—the computational or statistical device that transforms reams of data, much of it seemingly irrelevant, into recommendations such as hire versus don't hire or promote versus don't promote.
It is just a matter of time before senior management recognizes that the same methods that inform their pricing, production and marketing decisions can assist in employee selection, compensation, promotions and layoffs. But unlike other business decisions, poor HR decisions not only affect the bottom line but may also lead to lawsuits that tarnish the company's image and expose it to claims for compensatory and punitive damages. Here's how to help reduce those risks.
Wide Variety of Products and Approaches
An instruction from corporate higher-ups to integrate big data into the HR function is a lot like the accounting department being told to use arithmetic in balancing the corporate books.
The devil is in the details. A wide variety of big-data services and products have appeared in the last few years, each promising to streamline decision-making and deliver better decisions for less than your company presently pays for these HR functions.
Take hiring, for example. Some big-data companies claim they can judge the facial expressions of candidates during job interviews and correlate those with how the best employees present themselves, to help you make the best choice. Other companies promise to help you mine your own company's data and identify job applicants who most closely mirror your best employees. Still others monitor social media and other public data to construct profiles of your applicants to separate those who merely seem promising from those whose promise is assured.
What Are the Potential Risks?
Employment decisions can be challenged under many laws that govern the workplace. For example, the Equal Pay Act requires that men and women in jobs requiring equal skill, effort and responsibility be paid equally. Title VII of the Civil Rights Act prohibits discrimination on the basis of race, gender, religion, color and national origin. The Americans with Disabilities Act (ADA) prohibits discrimination against individuals with disabilities and requires reasonable accommodations.
How, you may wonder, can a computer algorithm, which is at the heart of big data, discriminate against anyone when its data processing and decision-making are preprogrammed and entirely objective? If a computer knows only what it is told, and we tell it nothing about demographics, doesn't that ensure that its output must be unbiased?
In a word, under current law the answer is "no." As with traditional forms of employment-related testing, the employer is liable for its discriminatory acts, whether or not they are based on big data, and must validate the results.
Many anti-discrimination laws prohibit practices that treat everyone the same yet disproportionately exclude or disadvantage members of protected groups with respect to employment, pay increases or other benefits. This is the disparate impact theory of employment discrimination.
For example, a company that is persuaded to hire only employees who drive cars with standard transmissions may find it is selecting a disproportionate number of males. If this criterion were challenged as discriminatory by a female job applicant, it would be the employer's burden to demonstrate that those who drove cars with standard transmissions were better employees in some objective sense. The employer still would not be exonerated if a plaintiff could prove there was a less discriminatory, but equally effective, way to select employees.
The ADA creates additional risks. Consider, again, standard transmissions. What if an applicant's disability prevents him or her from driving? The obvious solution might be simply to exclude these applicants from the algorithm and adopt a more appropriate hiring criterion. But what if neither the applicant nor the employer knows precisely how the algorithm works? It hardly benefits the developer of the algorithm to make its inner workings widely known—that's the secret sauce on which its business is based. The result is that algorithms that draw data from multiple sources might tap into datasets that omit applicants with disabilities and exclude them from consideration.
Risk Avoidance
An employer's legal exposure to the risks accompanying big data depends on the particular product and therefore on the company and the research that stand behind it. Some algorithms may be neutral, adversely affecting no protected group; others unintentionally rely on correlations that are not neutral in their effects.
There is no sure way to assess this other than to monitor outcomes among your own employees and applicants. A product with no adverse impact in test trials, or when used by other businesses, might still adversely impact your own workforce.
Perhaps your workforce differs educationally from the groups on which the algorithm was tested. Or perhaps the social and cultural norms where your business is located make predictions based on other populations unreliable. Moreover, an employer cannot rely in its defense on how the algorithm performed elsewhere. Therefore, we recommend tailoring an algorithm to the needs of your business and the specific skills and performance required of your workforce.
A necessary first step to protect the company is to determine whether the algorithm has a disproportionate impact on any segment of your workforce or applicant pool. This generally means comparing the representation of protected groups among those selected by the algorithm to the demographics of those who were rejected.
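A minimal sketch of that comparison appears below, using entirely hypothetical group names and counts. It computes each group's selection rate and flags any group whose rate falls below four-fifths (80 percent) of the highest group's rate, the rule of thumb drawn from the Uniform Guidelines; your own analysis should use real applicant-flow data and, ideally, expert statistical review.

```python
# Hypothetical illustration: compare selection rates across applicant groups
# and flag any group whose rate falls below four-fifths (80%) of the highest
# group's rate. All names and numbers are placeholders, not real data.

applicants = {
    # group: (number selected by the algorithm, number rejected)
    "Group A": (48, 52),
    "Group B": (30, 70),
}

# Selection rate = selected / (selected + rejected) for each group.
rates = {g: sel / (sel + rej) for g, (sel, rej) in applicants.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "potential adverse impact" if ratio < 0.8 else "no flag"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```

A ratio below 0.8 does not by itself prove discrimination, but it is the kind of red flag that warrants a closer look before the tool drives real decisions.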
Second, assess the reliability and validity claimed for the particular big-data application. This task is highly technical and likely beyond the training of most HR professionals. We recommend enlisting a qualified expert to advise on these issues before adopting the big-data solution. Although most vendors have performed their own studies and provide assurances that their methods comply with the Uniform Guidelines on Employee Selection Procedures, the resulting technical reports can be dense, and the most significant flaws may be omissions that escape an untrained eye. For the same reason you enlist an inspector when buying a house, we urge you to consider retaining a professional industrial/organizational psychologist or data scientist. This expert can look under the hood and explain in understandable terms exactly what you are buying and whether it does what it purports to do.
This is important because big data doesn't fit easily into the framework of the Uniform Guidelines, which are now nearly 40 years old. The cornerstone of the Uniform Guidelines is the job analysis, which is common to each of the three types of validity they recognize. The goal of the job analysis is to distill each position to its essential functions so that a selection device (e.g., a test) can assess the skills necessary to perform those functions. However, big data doesn't care what employees do, only how well they do it.
The cornerstone of big data is correlation. Big-data algorithms search for robust correlates of various measures of superior job performance. As long as the correlation is strong and proves its worth repeatedly, that's all big data asks. As a result, the Uniform Guidelines, which assess whether tests or other screens are closely related to job requirements, may not provide the last word on whether a big-data algorithm will pass legal muster. Rather, courts may come to regard the guidelines as an anachronism, developed when the quality, not the quantity, of data was at a premium.
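To make "searching for correlates" concrete, here is a rough sketch with made-up numbers: a candidate attribute scored at screening time is correlated with a later performance measure for the same employees, and the strength of that correlation is essentially all such an algorithm cares about. The attribute, the ratings and the scale are assumptions for illustration only.

```python
# Rough sketch: a big-data screen lives or dies by correlations like this one.
# Both lists are hypothetical -- a candidate attribute scored at hiring time
# and a later measure of job performance for the same employees.
from math import sqrt

screen_score = [62, 71, 55, 80, 68, 90, 47, 73]          # hypothetical screening scores
performance = [3.1, 3.8, 2.9, 4.2, 3.5, 4.6, 2.4, 3.9]    # later performance ratings

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(f"correlation between screen and performance: {pearson_r(screen_score, performance):.2f}")
```

Note what the sketch does not ask: whether the screened attribute has anything to do with the job's essential functions. That gap is precisely where the Uniform Guidelines and big-data practice part ways.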
Third, whether or not you hire a consultant to guide you, we recommend obtaining an indemnity agreement from the vendor. No matter how skilled and experienced your consultant, no one will know the product like the vendor who created it. The vendor has carefully assessed its strengths and weaknesses and is better positioned than anyone to know if the product does what it purports to do and what the associated risks are. A vendor that is not willing to stand behind its product in this way may know its product all too well.
Allan G. King is an attorney with Littler in Austin, Texas. Marko J. Mrkonich is an attorney with Littler in Minneapolis.