Reminding interviewers that applicants lie may help screen out fabrications and exaggerations.
People lie on their resumes. They lie on job applications. And they generally get away with it—despite HR’s best efforts to derail the dishonest. But, according to research that we conducted, simply reminding an interviewer about the prevalence of lies may make a notable difference in that person’s efforts to distinguish fiction from fact.
Although no one knows the exact proportion of job seekers who enhance their personal work histories, estimates range from 40 percent to 70 percent.
Certainly, hiring employees based on such misinformation presents clear potential risks for HR professionals and their companies. HR professionals who suspect false information on an applicant’s resume should ask appropriate follow-up questions during an interview. Doing so could be the key to uncovering lies and eliminating the applicant from further consideration.
However, the effectiveness of that approach could depend on the medium in which the interview is conducted. With the rise of the personal computer and the Internet, interviews increasingly are being held through videoconferencing, e-mail and even instant messaging, especially in the initial stages.
Says Nina Segal, an international career specialist with monster.ca, the Canadian portion of the global Monster employment web site: “More and more organizations are using technology to make interviewing less expensive while simultaneously casting a wide geographic net to attract a global candidate pool. ... An e-mail interview, or one through instant messaging, is sometimes used as an initial step in the hiring process.”
But electronic screening mechanisms such as e-mail and instant messaging provide little or no visual or other nonverbal communication, compared with more traditional phone or face-to-face encounters. A person’s voice or facial expressions are unavailable to the computer-using interviewer. Thus, lying applicants may be better able to get away with their ruse over electronic media.
A Controlled Experiment
We conducted research to test the notion that electronic media may work to the advantage of lying applicants. A total of 156 undergraduate students majoring in management information systems (MIS) in the business school at Florida State University in Tallahassee participated in the study.
In the study, students reported individually to a suite of interview rooms. The students were kept separated, since we hoped to mimic an environment in which interviewers and applicants cannot see each other.
Each student was paired with another student, but neither knew the other’s identity. One student became the interviewer and the other became the interviewee. By design, one student in each of the 78 pairs arrived at the experiment site 15 minutes before the other. The first student to arrive was placed in the role of the applicant; the student who arrived second served as the interviewer.
Each pair of students was randomly assigned to one of four different computer-based communication media: e-mail (via Hotmail, a web-based e-mail provider); text-based chat (enabled by Microsoft NetMeeting); an audio relay (essentially, a phone conversation via NetMeeting); and a combination of chat and audio (also through NetMeeting).
The “applicants” were first asked to help with a seemingly innocuous task: We presented them with a scenario in which the MIS department was developing a scholarship to be awarded to the top student in the department. To help us set the minimum requirements for potential applicants, we asked the students serving as applicants to fill out a sample application and, in doing so, to make themselves appear as competitive as possible. The students had been asked to bring a current resume with them.
With those simple instructions, applicants proceeded to falsify information on the application. (We knew what was falsified because we compared their applications with their resumes.) They falsified an average of 8.6 items on 19 separate application blanks—even though they had been given no specific instructions on what to change or on how to change it. Although we never said it explicitly, students inferred that falsifying their personal information was acceptable to us if that was what it took to make them appear competitive.
The more commonly altered items included grades (they improved; all became A’s regardless of whether they were B’s, C’s or worse), job experience in MIS-related capacities (students created summer internships with high-prestige companies), and activities in student organizations (everyone became a high-ranking officer). The items that were changed most frequently were also the items that were changed most dramatically, mainly the invention of internships that had never existed.
After the items they changed were pointed out to them, applicants were informed that they would now be interviewed on the basis of the applications, and they were asked to convince the interviewers that the applications were completely legitimate. They were told that the interviewers, who also were MIS students, would be located elsewhere and that the interviews would be conducted via computer.
To protect the applicants’ identities, their names and other data were omitted.
Enter the Interviewer
In the meantime, the later-arriving students were told they would serve as interviewers, each interviewing a student who was “applying for a scholarship that the MIS department is considering giving in the future to the ‘top MIS student.’” They knew they were part of a study, but they did not know the scholarship was not real. They were also told that, because they knew the MIS field and were familiar with the curriculum in which their classmates were enrolled, they would be effective in filtering out applicants. And since the interviewers believed the scholarship was real, they had no reason to be anything less than diligent in trying to spot lies.
Interviewers likewise were informed that the interviews would be occurring over computer-based media. The falsified application was transferred from the applicant’s PC for the interviewer’s perusal. Interviewers were instructed to ask the applicants about anything on the applications that “caught their eye.”
In addition, half of the interviewers were told: “Remember, 40 percent of all applicants have been found to have lied on their applications.” Of course, in this experiment, all of the 78 applicants had lied to one degree or another.
Following the interview, both subjects were given questionnaires to elicit their personal opinions and feelings about what had just taken place. Among other things, applicants were asked to say whether they felt the interviewer seemed suspicious or not. Interviewers not only were queried about their feelings about the media but also were asked if they felt they had been lied to about the applications and to state specifically what they thought was false.
A Discouraging Outcome
The results of this exercise are not encouraging from an HR perspective: Interviewers who were not warned about applicants’ penchant for lying detected only about 2 percent of the fabrications. Those who were warned spotted the lies at a much higher rate, 15 percent, but even that performance is hardly reassuring.
Interestingly, in comparing the items that interviewers marked as false during the interviews, we found no statistically significant differences across the four media. Interviewers correctly identified about the same proportion of false statements regardless of what medium was used for the interview, whether it was e-mail, chat, chat with audio or audio only.
To investigate further, we examined the transcripts of the interviews, looking specifically at the questions the interviewers had asked. We wanted to discover which items on the applications they questioned.
We found that the number of false items interviewers asked about differed according to the media used for the interview. Interviewers using audio alone or chat with audio asked questions about approximately 40 percent of the false items, while interviewers using e-mail asked questions concerning approximately 20 percent of the false items. Overall, interviewers spotted false items in equal numbers regardless of the media, but the media appear to have affected how many false items they asked about.
Compared with interviewers who operated in chat with audio or in audio only, interviewers who worked in e-mail or chat -- the interviewers who had to type questions -- asked fewer questions overall and also asked fewer questions about false items.
A similar split appeared between the tipped-off interviewers, who asked about 38 percent of the false items, and the otherwise naïve interviewers, who asked about only 21 percent.
Getting Away with It
The interviewers’ suspicions, however, did not translate into certainties. Across the board, interviewers who questioned items during the interviews often failed to identify those items as false on the post-session questionnaire. Even the most successful deception detectors, the interviewers who had been warned about false resumes, later recorded as false fewer than half of the items they had first asked about.
One possible explanation for interviewers’ behavior is simple failure to recall all of the suspicious items when they were completing the questionnaire. Another possible explanation is that applicants were able to convince interviewers that the questionable items were legitimate after all. When it came time to fill out the questionnaire, the interviewer was no longer concerned about items that the applicant had been able to explain convincingly.
Whether we looked at what was recorded on the post-session questionnaires or at the items interviewers focused on during the interviews, we found that most of the false items in the applications got past the interviewers.
Admittedly, the students who served as interviewers were not formally trained in interrogating others, but they possessed a more complete knowledge of the subject matter and of the academic environment than the average HR generalist would.
It is also worth noting that these student interviewers were more like hiring managers than HR professionals, so HR professionals who rely on hiring managers to detect lies about a specific skill or subject area might be disappointed. Even the interviewers’ subject-matter knowledge seemed to be of limited use in assessing the veracity of these applications.
Looking just at what was recalled as suspicious by the interviewers, the overall deception-detection rate in this experiment was an abysmal 8 percent. The overall deception-detection rate based on the questions asked about false items is only 30 percent—better than 8 percent but not impressive.
These low numbers are sobering, especially considering that, during the interviews, anything that appeared to be suspicious could be questioned, with the interviewer able to communicate directly with the source of the information.
Set against decades of research into deception in the academic field of communication, this finding is not unusual. Past studies involving professional interrogators, police officers and intelligence agents have typically produced detection rates of 40 percent to 60 percent, even in the best of circumstances.
A Little Help for Interviewers
The fact is that people are not very effective at lie detection. Yet our evidence shows that tipping off interviewers can help somewhat. In our test, a warning composed of one seemingly off-hand statement made a difference of 13 percentage points in detection success, measured by what the interviewers remembered as suspicious at the end of the interview.
HR professionals may want to develop more elaborate ways of keeping HR staff and hiring-manager interviewers vigilant, perhaps by warning them about the items most commonly altered on resumes and applications.
The main concern about warning interviewers could be an increase in “false alarms,” where applicants are wrongly accused of lying. However, in our study, there were very few false alarms. If anything, interviewers erred on the side of being naïve, accepting most of what they read and heard as true.
Obviously, organizations have a great interest in discovering the facts about job applicants and their work histories. Unfortunately, those facts are often misreported, and resume enhancements may not always be uncovered during an initial interview, or even during a series of interviews.
While our research indicates that interviews conducted using lean media such as e-mail may not be very helpful in detecting false information, even e-mail interviewers are better at finding deception when they are simply reminded that many people distort their work histories and job qualifications. Such simple warnings seem to be a cost-effective way to help even untrained and naïve interviewers become better deception detectors.
Joey George is a professor in the College of Business at Florida State University. He has been a business professor for 17 years, and has published five books and more than 80 papers in journals, books and conference proceedings. Kent Marett is a doctoral student in management information systems at Florida State.