




Evaluating Evaluations

HR Magazine, June 2002

Asking the right questions is only the first step in creating a good training evaluation.

Human resources often overlooks a key component of an effective training program: designing good training surveys. But if you don’t collect accurate data and feedback on the effectiveness of your training programs, you just may be throwing good money after bad. Nothing is more demoralizing for staff than sitting in a classroom or in front of a computer taking a course that provides little or no benefit to their work.

“Doing a good job of designing training evaluations is far more important than people are inclined to believe,” says Palmer Morrel-Samuels, Ph.D., president of Employee Motivation Performance Assessment, an Ann Arbor, Mich., company that conducts research and designs customized corporate evaluations.

“Too many HR professionals are using survey design principles formulated 50 years ago. They’ve fallen behind in the advances that have been made.” He explains that part of the problem is getting the information into HR professionals’ hands: “There is no one place where research on survey design is published.”

Even without completely revamping their evaluations, companies can do simple things to yield more accurate feedback. “A couple of years ago we put together an evaluation team to standardize our surveys,” says Herb Bivens, director of e-learning development at NCR Corp., a technology solutions provider in Dayton, Ohio. Before that, “people were using their own surveys. Some were two and three pages long. There were too many, and they were too different.” Bivens says the company wanted to be able to compare courses delivered internally with those from a third-party vendor. “We came up with eight questions that would be asked of any course, regardless of delivery method, classroom or web-based. It simplified things.”

Begin with the End in Mind

Saul Carliner, Ph.D., an e-learning consultant in Brookline, Mass., cites the four levels of post-training evaluation developed by retired university professor Donald Kirkpatrick. Carliner explains: “Level one is reaction, a ‘smiley face’ survey. Level two tests learning (i.e., pre- and post-tests). Level three is evaluating a change in behavior after a certain lapse of time, usually a minimum of six weeks. Level four evaluates the impact of the training on the company, the financial return on investment.” Carliner is author of An Overview of Online Learning (HRD Press, 1999).

At NCR, “certain programs are earmarked for level three surveys, based on whether it is a rigorous or high-priority course,” says Bivens. “With level three, we tend to create customized surveys because we need to ask about specific skills. We have one to three comment questions which ask the [supervisor] to tell a story about how the skills have been applied.”

Morrel-Samuels says: “Most assessments seem to be designed with an inappropriate purpose in mind. They seem to be part of an ad campaign designed to make the [training] program look good and for internal marketing. Their purpose should be to gather objective measurements of effectiveness and impact.”

Roger Addison, director of performance technology at the International Society for Performance Improvement, a professional association in Silver Spring, Md., agrees. “What is the purpose of your survey? Once you have a purpose in mind, it helps drive how you construct it, the questions and who you’re going to send it to.”

Morrel-Samuels adds, “In most cases, the timing and responding groups are inappropriate.” He argues that the only important information to gather is the bottom-line impact of the training: Did it improve the employee’s performance? And one cannot possibly know the answer five minutes after the conclusion of the course. Moreover, it’s often better to ask the supervisor rather than the employee. Carliner adds: “Most people think they learned more than they did. It’s good to ask both the employee and the supervisor about the amount of learning that occurred.”

The Format

The format of the questionnaire is crucial in eliciting accurate responses. Experts give the following advice:

  • Place easy questions first. “A good rule of thumb is to start with simple, uncontroversial questions and move to more complex questions that may have a more controversial nature,” says Morrel-Samuels.

  • Keep the evaluation short. “Five minutes” or less, says Carliner. “They’re doing you a favor. There’s no benefit to them, only to future learners.” Bivens agrees: “When you deliver an evaluation in the classroom, you can get them to fill out any length survey you want. They’re a captive audience. But when you start delivering training via the web,” it is difficult to get people to complete evaluations. And long evaluations will reduce the response rate. Online evaluations should fit on one screen with no need to scroll.

  • Avoid unnecessary numbers. “Putting a number on the outside of your assessment can strongly impact the answers you get on the first few questions,” says Morrel-Samuels. “For example, if the questionnaire has Form #62917 on it, you will get inflated numbers on the first question. Respondents start to think of big numbers. Any stray numbers can serve as an anchor point.”

  • If there are summary questions, always put them at the end of a block of related questions. Answers to a summary question at the start of a section can be more favorable than warranted and can influence answers to subsequent questions. Answers to a summary question at the end are more accurate. Why? If the summary question is at the beginning and the respondent gives a rating of “excellent,” then he feels required to justify himself on following questions by replying to each with “excellent.” But if the summary question is asked at the end, the employee has more time to consider his answer in light of issues raised in the previous questions, and his answer will likely be more accurate.

  • Avoid mixing rating and ranking questions. Rating questions are those such as “At what speed did the course download? No. 1, fast, to No. 5, slow.” A ranking question is: “Rank the following statements in the order of the frequency that you run into this problem: My Internet connection is slow; the course takes too long to download; my computer is too old to take online multimedia courses; the e-learning courses do not pertain to my training needs.”

“Mixing rating and ranking questions will distort the answers you get on subsequent ratings,” says Morrel-Samuels. “Ranking questions require you to focus on one problem area for a long time.” Using the previous example, you begin to believe your problems with e-learning are larger than you originally thought, which skews the answers to any subsequent e-learning questions. Or, if the subsequent questions are unrelated to online learning, you won’t concentrate on them adequately because you are still pondering the e-learning question. Says Morrel-Samuels: “It’s better to avoid ranking questions altogether.”

  • Use a rating scale instead of words. Rating scales such as 1 to 10 are better than phrases such as strongly agree, somewhat agree, neutral, somewhat disagree and strongly disagree. “It is better to have two poles labeled with extremes and just numbers in between—always or never,” says Morrel-Samuels. This way respondents don’t need to analyze the meanings of the words. Besides, most surveys are tallied numerically, so you eliminate having to equate words with numbers.

How many numbers should you have on the scale? Experts disagree. Some believe in an even number of choices because it forces a person to have an opinion. For example, if the scale is 1 to 10, there is no middle number, so you must choose a side. Others believe in an odd number of choices; many use a five-point Likert scale, which runs 1 to 5. “I’m a big fan of the odd number. I like the option of not knowing or not having an opinion,” says Carliner. Whatever you choose, he recommends that you “be consistent within the organization,” so that courses can be fairly compared. Also, Bivens warns not to have “too many choices for rating. It breaks it down so much that it is confusing for students. Do you need that much detail?”
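
Because most surveys are tallied numerically, a consistent scale is what makes cross-course comparison a matter of simple arithmetic. A minimal sketch of that tally, assuming every course uses the same 1-to-5 numeric scale; the course names, ratings and the 4-or-5 “favorable” cutoff below are invented for illustration, not drawn from NCR’s actual eight-question survey:

```python
from statistics import mean

# Hypothetical responses on a shared 1-to-5 scale (5 = best).
# Course names and ratings are invented for illustration.
responses = {
    "Classroom: Negotiation Skills": [4, 5, 3, 4, 4, 5],
    "Web-based: Negotiation Skills": [3, 4, 4, 2, 5, 3],
}

for course, ratings in responses.items():
    avg = mean(ratings)
    # Share of respondents at the top of the scale (4 or 5).
    favorable = sum(r >= 4 for r in ratings) / len(ratings)
    print(f"{course}: mean {avg:.2f}, favorable {favorable:.0%}")
```

If one course used word labels and another used numbers, this comparison would first require an extra, judgment-laden step of mapping words onto a numeric scale.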

The Questions to Ask—Or Not to Ask

First, it is important that the language of the survey “fits the audience. You have to be careful that your language is not too technical,” says Addison. Says Morrel-Samuels: “Use clear language without any ambiguities and without any jargon.” Also, in about one-third of the questions, a negative answer should be the desired one. “That prevents a response set—yes, yes, yes, yes. It prevents them from giving an automatic answer.”

“I like to use extreme language on my surveys,” says Carliner. He says that he tries to keep the question neutral but uses “something really strong” for the response scale. For example, “‘How was this class? Disgusting (1) or a delight (5).’ You’re trying to force people to have an opinion. Make it fun for them to do [and] make them feel like you will listen to their opinion.”

One of the most important things is to structure your questions so that they “ask people to make a judgment call about a directly observable event,” says Morrel-Samuels. Too often, questions require respondents to make predictions. For example, the question “Did the trainer understand your needs?” requires the respondent to guess at the trainer’s mind-set. A better question is “Did the trainer tailor the examples to situations you experience in your work area?”

Another common mistake is to “ask two things in the same question. ‘Did you like the course and was it useful to you?’ are two questions,” says Carliner. “You need to separate them.”

One of the most startling advances in survey design research is the discovery that asking about a behavior can change the subsequent frequency of the behavior. “For instance, if I ask you, ‘Do you think you’re going to buy a car next year?’ just asking will increase the likelihood by a small percentage,” says Morrel-Samuels. HR professionals should keep that in mind when they construct a question so that they increase the frequency of desirable behaviors instead of undesirable ones.

Finally, “one of the most overlooked qualities of a good survey is business utility. Does it matter to the business?” says Morrel-Samuels. Too often people ask questions that have no relevance to the bottom line. For instance, “‘Does your trainer have a good sense of humor?’ Of course, it’s pleasant to work with people who do, but what are you going to do with the information? Fire someone? Hire a stand-up comic? If you’re not going to do anything with the information, don’t even ask the question.”

Carliner agrees: “We ask a lot of trivial questions on questionnaires.”

A Good Survey

“A good survey has brevity, clarity, objectivity and focus,” says Morrel-Samuels. And to create a survey with those qualities takes time. Carliner says, “Five percent of the total effort should be spent on evaluation.” Morrel-Samuels says he knows he’s finished designing a survey when he has “designed it [in such a way] that I would be willing to take it.”

Finally, Bivens advises, “Make an effort to plan level three and level four evaluations, whether upper management is asking for that kind of data or not. Sooner or later someone will come back and [ask you to] justify whether the training organization should exist or not. If you can prove your training is effective [and crucial to the bottom line], it puts you in a very good position.”

Kathryn Tyler, M.A., is a freelance writer and former HR generalist and trainer in Wixom, Mich.

