Machine Learning Can Help HR Overcome Human Failings

But the rise of robots also raises complicated questions

By Roy Maurer Jun 1, 2017

Jed Kolko, Indeed's chief economist, speaking at Indeed Interactive 2017.

AUSTIN, Texas—Computers are increasingly able—in theory, at least—to help reduce human bias and limitations.

One of the latest innovations to hit the world of work is machine learning, in which computers teach themselves rules through pattern recognition and learning by example, explained Jed Kolko, chief economist for job search site Indeed.
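To make that concrete, here is a minimal sketch in Python (not from Kolko's presentation; the data, features and labels are invented) of what "learning by example" means in practice: instead of a programmer writing a screening rule, the model infers one from labeled past decisions.

```python
# A minimal, invented illustration of "learning by example":
# the model infers a screening rule from labeled past decisions
# instead of a human writing the rule by hand.
from sklearn.tree import DecisionTreeClassifier

# Each row: [years_of_experience, certifications]; label 1 = was interviewed
X = [[1, 0], [2, 1], [5, 2], [7, 1], [10, 3], [0, 0]]
y = [0, 0, 1, 1, 1, 0]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The model has taught itself a rule from the examples above and can
# now apply it to a candidate it has never seen.
print(model.predict([[6, 1]]))  # -> [1]
```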

[SHRM members-only toolkit: Introduction to HR Technology]

"Humans are not as rational as economists would like them to be," Kolko told attendees of Indeed Interactive 2017, a conference for talent acquisition professionals. One way to improve human decisions at work—and especially to eliminate bias from decision-making—may be to rely more on machines, he said.

"We know that there is discrimination and bias when it comes to hiring, pay, performance reviews and promotions. There are debates over the magnitude of discrimination and bias and the reasons for it, but there is no debate over whether any of it exists in the first place," he said.

Age discrimination exists as well; older women, in particular, face it, Kolko said. And researchers have found bias in other, more subtle areas.

For example, "hiring managers are more likely to choose taller men to be project leaders."

Or they favor people with similar experiences. "This comes up during hiring when the conversation turns to hobbies or activities the candidate is engaged in," Kolko said. "Affinity bias is a trickier kind of bias to assess, and fits into the notion of cultural fit. Of course we want people who will thrive on our teams and in our organizations, but it can be hard to tell whether cultural fit is essential to performance or a flavor of affinity bias."

The human state of mind can also unfairly or arbitrarily affect decisions. Kolko cited a study from Israel that showed that prisoners who had a parole hearing first thing in the morning or right after lunch were more likely to be granted freedom, whereas those who had hearings right before lunch or at the end of the day were almost never released.

"Old-school depersonalization techniques to deal with bias and state of mind [can be used for] blind hiring … and conducting structured interviews with standardized questions and testing for skills, as opposed to open-ended interviews where you are looking for commonality," he said.

Robots Still 'Far from Perfect'

Computer algorithms may be coming to the rescue. Machines have the potential to help phase out bias, but there are some issues with using algorithms that must be addressed. "Human-robot interaction at this point is far from perfect," Kolko said. "They may be learning, but they aren't very smart yet. It is not simply that algorithms solve issues but [that they] raise new, complicated questions."

Machine learning relies on humans to input massive sets of data and examples that the computer uses to find patterns. "Algorithms can sometimes reinforce human biases," Kolko said. "They are ultimately subject to 'garbage in, garbage out.' They learn from how humans actually are, not from how humans would like to be."
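A toy sketch shows how this plays out (again, an invented illustration, not an example Kolko gave): if the historical hiring decisions used as training labels were biased against one group, a model fit to those labels learns the bias as if it were a legitimate pattern.

```python
# "Garbage in, garbage out": a model trained on biased hiring history
# reproduces the bias for identically qualified candidates.
from sklearn.linear_model import LogisticRegression

# Features: [skill_score, group]. In this invented history, equally
# skilled candidates from group 1 were rejected.
X = [[8, 0], [9, 0], [4, 0], [8, 1], [9, 1], [4, 1]]
y = [1, 1, 0, 0, 0, 0]  # past hiring decisions

model = LogisticRegression().fit(X, y)

# Two candidates with identical skill but different group membership:
print(model.predict([[9, 0], [9, 1]]))  # likely [1 0] -- the bias is learned
```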

One example is Microsoft's Tay, a chatbot designed to learn from and interact with users on Twitter. "Within a few hours of being released, the bot learned how to be as offensive, inflammatory, rude and obnoxious" as other Twitter users, and Microsoft took it down the same day.

Algorithms are also not very transparent, he said, which means that some biases may get baked in and would be difficult to ascertain.

A more disturbing risk is the possibility of moving to a two-track world, "where the privileged are processed more by people and the masses more by machines," Kolko said. "Those with elite status may get the human touch, but everyone else will be handled by an algorithm."

Like humans, machines have failings. They often can't explain why something is so, Kolko said. "A computer can tell us that on rainy days people eat less ice cream. But an algorithm would never understand why it doesn't make sense to say that eating ice cream prevents rain. Only humans can make sense of the data results."
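Kolko's ice cream example is easy to reproduce with made-up numbers: the correlation falls out of the arithmetic, but the causal story does not.

```python
# Illustrative numbers, not real data: the correlation is computable,
# but nothing in the math says which way causation runs -- or whether
# a third factor (the weather) drives both variables.
import numpy as np

rain_mm = np.array([0, 2, 5, 10, 20, 30])        # daily rainfall
cones_sold = np.array([90, 80, 60, 40, 20, 10])  # ice cream sold that day

r = np.corrcoef(rain_mm, cones_sold)[0, 1]
print(f"correlation: {r:.2f}")  # strongly negative

# The algorithm stops here. Concluding "eating ice cream prevents rain"
# versus "people skip ice cream in the rain" is a human judgment.
```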

Kolko warned that the rise of machines may lead to the rest of us losing our humanness. "Our handwriting has gotten worse now that we type more, and our sense of direction starts to disappear with more reliance on GPS. More seriously, there is a concern that pilots may be losing their reaction skills the more they rely on autopilot," he said.

Finally, a key difference between humans and robots is that only humans can weigh the positives and negatives of using machines. "It's the responsibility of humans to build machines ethically and [to decide] whether or not to rely on them."
