Using Artificial Intelligence for Employment Purposes

Overview

Artificial intelligence (AI) is the use of machines and software to perform tasks that have traditionally required human intelligence. The machines can range from a laptop or cellphone to computer-controlled robotics. The software directing a machine's behavior is specialized to mimic human intelligence and capabilities; the coupling of this hardware and software is what produces artificial intelligence.

A workplace run by AI is not a futuristic concept. Such technology is already a part of the workplace and will continue to shape the labor market. Nearly 1 in 4 organizations reported using automation or AI to support HR-related activities, according to a 2022 survey by SHRM. The report found that 85 percent of employers using automation or AI said it saves time or increases efficiency.

See:

The Promise and Peril of Artificial Intelligence

What is artificial intelligence and how is it used in the workplace?

SHRM Research: AI Use on the Rise, Ethics Questions Remain

Not All AI Is Really AI: What You Need to Know

How Is AI Used by HR?

AI is being used in the workplace to manage the full employee life cycle, from sourcing and recruitment to performance management and employee development. Recruitment and hiring are by far the most popular areas where AI is used for employment-related purposes. However, AI can be utilized in almost any human resource discipline.

See:

Bringing Artificial Intelligence into Pay Decisions

The Role of AI in Retaining Top Talent

Companies Turn to AI to Improve Workplace Safety

AI, however, won't replace the need for human involvement in most scenarios. As this toolkit discusses, human intervention is necessary to audit AI outcomes. And, in some cases, human interaction is preferred. While a chatbot may be sufficient to screen candidates for entry-level jobs in, say, retail or fast food, no organization is going to leave the selection of a C-suite executive to a robot. See Does AI Signal the End of HR?

AI in HR also presents an opportunity for new roles, such as developing the software programs that power humanlike intelligence or crafting effective prompts to guide AI programs. Though these roles may not seem like traditional HR, bringing HR expertise to these emerging fields will help shape the future of workplaces.

Generative AI

Employers are increasingly implementing acceptable AI use policies, particularly for generative AI. Generative AI, such as OpenAI's ChatGPT and Google's Bard, allows users to ask questions in a conversational manner to find answers or to create or edit written content. For example, a manager might ask the bot to write an employee recognition letter, or a recruiter might prompt it to draft a job description. While the output from generative AI programs can be impressive, human review and final editing are almost always necessary.
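
To illustrate, the snippet below shows one way a prompt like the recognition-letter example might be sent to a generative AI model programmatically. It is a minimal sketch that assumes the OpenAI Python SDK with an API key in the environment; the model name and prompt text are illustrative assumptions, not part of SHRM's guidance.

    # Minimal sketch: asking a generative AI model to draft HR content.
    # Assumes the OpenAI Python SDK with OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice, not a recommendation
        messages=[
            {"role": "system",
             "content": "You are an HR assistant drafting workplace documents."},
            {"role": "user",
             "content": "Draft a short employee recognition letter for a team "
                        "member who led a successful product launch."},
        ],
    )

    print(response.choices[0].message.content)  # draft still needs human review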

See: 

Internal Generative AI Models: More Value, Less Risk for HR

Using AI to Enhance Employee Communications

Crafting Policies to Address the Proliferation of Generative AI

Legal Issues

While technology can make processes better and faster, employers should be aware of some areas of concern when utilizing AI.

There are currently no federal laws specific to the use of AI in employment decisions; however, nondiscrimination laws such as Title VII of the Civil Rights Act, the Americans with Disabilities Act (ADA) and the Age Discrimination in Employment Act (ADEA) apply. The use of AI can also trigger compliance issues with other employment laws, such as the Fair Credit Reporting Act (FCRA) when using a third party to conduct a background check, or global requirements for employers with a multinational workplace. See Regulations Ahead on AI.

State Laws

Currently, there are a few state and local law requirements specific to AI. Illinois, Maryland and New York City are examples of jurisdictions with such laws, and it is likely that more states and municipalities will follow.

See:

Employers, Vendors Plan Ahead of NYC's AI Law Enforcement Date

Illinois Employers Must Comply with Artificial Intelligence Video Interview Act

Legislation Related to Artificial Intelligence

Title VII Discrimination

AI can be biased (or, in the case of machine learning applications, become biased), raising concerns about illegal discrimination depending on how the technology and data are used. Understanding how machine learning works and how data is used by AI tools is necessary to identify and correct any outcome that negatively impacts certain groups of people. See How to Avoid Discrimination When Using AI and EEOC Issues Guidance on Use of AI.

Amazon's automated applicant rating tool is a prime example of adverse impact when using AI. The algorithm was trained on resumes submitted to the company over the previous decade, most of which came from men. As a result, the tool learned to favor male candidates over female candidates, and Amazon abandoned it after discovering the discrimination.

Instead of avoiding AI altogether, employers can take measures to prevent bias and illegal discrimination. Understanding the algorithms that are used and how individuals are screened in, or out, is important when implementing AI tools. Regular review of this information and the subsequent results is necessary to ensure that the tool isn't learning bias or illegal selection criteria over time. See AI Bias Audits Are Coming. Are You Ready?

The Uniform Guidelines on Employee Selection Procedures (UGESP) are intended to provide a framework for determining the proper use of tests and other selection procedures in the workplace, to avoid adverse impact. These guidelines can also be applied to the use of AI tools. Employers can also refer to other professional guidelines, such as the Society for Industrial and Organizational Psychology's Principles for the Validation and Use of Personnel Selection Procedures.
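
To make the adverse-impact review concrete: the UGESP's well-known four-fifths (80 percent) rule treats a selection rate for any group that is less than four-fifths of the rate for the highest-selected group as evidence of adverse impact. The sketch below applies that rule in Python using hypothetical applicant counts; it is a simplified illustration, not a substitute for a formal validation study or legal review.

    # Minimal sketch of the UGESP four-fifths (80%) rule.
    # Applicant counts are hypothetical, for illustration only.

    def four_fifths_check(outcomes):
        """outcomes maps group -> (applicants, selected)."""
        rates = {g: sel / app for g, (app, sel) in outcomes.items()}
        top = max(rates.values())  # highest group selection rate
        # A ratio below 0.80 is generally regarded as evidence of adverse impact.
        return {g: (rate, rate / top, rate / top < 0.8)
                for g, rate in rates.items()}

    results = four_fifths_check({"men": (200, 60), "women": (180, 30)})
    for group, (rate, ratio, flagged) in results.items():
        print(f"{group}: selection rate {rate:.2f}, "
              f"impact ratio {ratio:.2f}, flagged: {flagged}")

With these hypothetical numbers, the check flags the "women" group (a selection rate of 0.17 against 0.30, an impact ratio of about 0.56), the kind of result that should trigger further investigation of the tool and its training data.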

Americans with Disabilities Act

There is also a significant concern that the use of AI may disadvantage job applicants and employees with disabilities. The federal Equal Employment Opportunity Commission (EEOC) has issued guidance on avoiding adverse impact under the ADA when using AI for employment purposes.

Reasonable accommodations must be provided to individuals with disabilities when a medical condition makes use of the technology difficult or skews the results against them. The EEOC provides the following examples of AI technology that may negatively impact an individual with a disability:

  • Video interviewing software that analyzes applicants' speech patterns to reach conclusions about their ability to solve problems can score an applicant unfairly if the applicant has a speech impediment that causes significant differences in speech patterns.
  • A chatbot that is programmed with an algorithm that rejects all applicants with significant gaps in their employment history could screen out an applicant with a disability if the applicant had a gap in employment caused by a disability (for example, if the individual needed to stop working to undergo treatment).
  • A gamified assessment of memory that has been shown to be an accurate measure of memory for most people in the general population could screen out individuals who have good memories but are blind, and who therefore cannot see the computer screen to play the games.

Employers should clearly communicate that reasonable accommodations, including alternative formats and alternative tests, are available to people with disabilities and provide clear instructions for requesting reasonable accommodations.

Age Discrimination

Age bias is another problem employers must watch for when using AI. Consider the example of Amazon's applicant rating tool that discriminated against women, and imagine instead that the machine learned to identify candidates who graduated from school after a particular date, or who had .edu e-mail addresses. Either of those possibilities could have an adverse impact on older workers and violate the ADEA. See EEOC Sues iTutorGroup for Age Discrimination.

Background Checks

Many background check vendors utilize AI to gather information on an individual's criminal history and other personal details, and employers may use AI to sort through the results. The same bias and discrimination issues found in other selection procedures can arise and run afoul of the FCRA or Title VII.

Complying with the FCRA requires avoiding bias and carefully vetting the technology in use. Another consideration is Title VII's requirement that employers make an "individualized assessment" of a candidate's criminal history when determining whether the information is job-related and consistent with business necessity.

Employers cannot leave this assessment to technology. It requires considering factors that an algorithm may not identify and often calls for a human conversation with the candidate.

Global Considerations

As in the United States, countries around the world are evaluating and regulating the use of AI in the workplace. While the benefits and concerns of using technology in employment decisions are often the same, the regulatory requirements are likely to vary greatly. Employers should understand and comply with the laws of every location where they have employees.

See:

Canada: Workplaces Should Consider Bias, Privacy in AI Policies

EU: Proposed Artificial Intelligence Law Could Affect Employers Globally

AI's Potential Role in Employee Discipline Draws Attention in Europe

The Impact of Artificial Intelligence on the Future of Workforces in the European Union and The United States of America

Transparency

Some state laws require an employer to disclose to individuals when AI is used in employment decisions, and the EEOC encourages this practice. The EEOC guidance indicates that employers should provide job applicants and employees who will undergo assessment by an AI tool with as much information about the tool as possible, including the following:

  • Which traits or characteristics the tool is designed to measure.
  • The methods by which those traits or characteristics are to be measured.
  • The disabilities, if any, that might lower the assessment results or cause an applicant to be screened out.

For example, an online assessment could include a what-to-expect screen for job candidates that includes this information, as well as instructions on how to request a reasonable accommodation.

Employers are advised not only to be transparent, but also to obtain individuals' consent before using AI technology in employment decisions.

See Report Recommends Transparency When Using AI in Hiring.

Vendor Selection

Employers must vet providers carefully when selecting an AI vendor; they can be liable for discrimination claims even when a third party developed the tool.

The following are examples of questions an employer might ask a vendor about how a tool was developed:

  • What kinds of statistical analysis do you perform to test your products, and how and why did you select those methods?
  • What were the results of your analysis? Can we have a copy?
  • Do you retest for disparate impact over time? How frequently?
  • Can you give references for companies that have used your services or tools?
  • Do you have diversity consultants or similar staff with whom you consult regarding your tools?
  • Are the materials presented to job applicants or employees in alternative formats? If so, which formats? Are there any kinds of disabilities for which you will not be able to provide accessible formats?

Mitigating Bias in AI

Employers considering AI-powered tools in their workplace should take the following actions:

  • Develop multidisciplinary innovation teams that include legal and human resource staff.
  • Continue human review of AI-assisted decision-making.
  • Implement disclosure and informed consent when necessary and appropriate.
  • Audit what is being measured before implementing the program, and on an ongoing basis.
  • Impose tight controls on data access.
  • Engage in careful external vendor contract reviews.
  • Work with vendors that take an inclusive approach to design. Consider whether the designers and programmers come from diverse backgrounds and have diverse points of view.
  • Insist on the right to review external validation studies.

Additional Resources

Generative Artificial Intelligence (AI) Chatbot Usage Policy

SHRM Resource Page: Artificial Intelligence in the Workplace

Data & Trust Alliance: Algorithmic Safety: Mitigating Bias in Workforce Decisions

EEO and DEI&A Considerations in the Use of Artificial Intelligence in Employment Decision Making

FTC: Using Artificial Intelligence and Algorithms