A training evaluation, like any assessment, is only as good as the data it captures. So, what are the most effective questions to ask when evaluating training programs? How do you truly evaluate training effectiveness?
Donald L. Kirkpatrick has been writing about evaluating training programs for close to 60 years. The ideas presented in this article are from his work and that of James D. Kirkpatrick and Wendy Kayser Kirkpatrick. The Kirkpatricks are specialists in helping trainers understand how to evaluate a program’s effectiveness.
A four-level model of training evaluation
Level 1: How was it for you?
This level measures trainees’ reactions after completing their training program. Questions should investigate three key areas: how useful the trainees felt the training was, how they felt personally while doing the training, and suggestions they have for making the training even more effective.
Sample questions include:
- Were the benefits you gained worth the time and effort you spent on the training activities? Why or why not?
- What are your three biggest takeaways from this training?
- Did you feel that this training suited your personal learning style? Why or why not? If not, what suggestions do you have to improve the situation in the future?
- How engaging and motivating were the training materials? If they fell short, how could they be improved?
Suggestions for “how to”
You can use a Google Form to automatically capture and analyze your trainee reaction data. Personal interviews are an equally valid, though more labor-intensive, method, with the added benefit of letting you observe the trainee’s body language for further insight.
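If you export your Google Form responses as a CSV, a short script can summarize the reaction data automatically. This is a minimal sketch, not part of any Google product: the file path and the `engagement` column (assumed to be a 1–5 rating) are hypothetical, so adjust them to match your own form.

```python
import csv
from collections import Counter
from statistics import mean

def summarize_reactions(path):
    """Summarize Level 1 reaction data from a CSV export of a survey form.

    Assumes a column named "engagement" holding 1-5 ratings; this column
    name is an illustrative assumption, not a Google Forms convention.
    """
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    ratings = [int(r["engagement"]) for r in rows]
    return {
        "responses": len(rows),
        "mean_engagement": mean(ratings),
        "distribution": dict(Counter(ratings)),  # how many trainees gave each rating
    }
```

The distribution matters as much as the mean: a 3.0 average made of 1s and 5s signals a very different problem than one made of straight 3s.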
Level 2: So did they learn anything?
Evaluating training effectiveness means measuring the extent to which your company’s training program goals were achieved. To do that at this level, you need TWO measurements: one at the start (pre-training) and one at the end (post-training).
Pre-test and post-test
This is essentially the same test administered twice. Because the test material stays constant, the only relevant variable is the training itself, which gives a more accurate measure of its value.
Your test should clearly reflect the goals of the training. If there is knowledge to be learned, your test should assess that knowledge via questions and tasks which demonstrate knowledge acquisition. In cases where real-life, interpersonal interactions are the aim, your test must include those opportunities. In other words, your test should accurately reflect what you want your trainees to know and do at the end of their training.
Current Kirkpatrick additions to their training evaluation model include a post-test component of student reflection. Trainees are asked to assess what differences the training will make to their work performance in the future, their level of confidence in being able to put these differences into practice, and how much motivation they have to actually do this.
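Because the pre- and post-tests are paired per trainee, the comparison is a simple per-person difference. This sketch (the score lists and thresholds are illustrative assumptions, not part of the Kirkpatrick model) shows one way to compute the gains:

```python
from statistics import mean

def score_gains(pre, post):
    """Compare paired pre- and post-test scores for the same trainees.

    `pre` and `post` must list the scores in the same trainee order;
    the pairing is what makes the comparison meaningful.
    """
    if len(pre) != len(post):
        raise ValueError("pre and post must cover the same trainees")
    gains = [b - a for a, b in zip(pre, post)]
    return {
        "average_gain": mean(gains),
        "improved": sum(g > 0 for g in gains),  # trainees who scored higher after
        "trainees": len(gains),
    }
```

For example, `score_gains([40, 55, 60], [70, 65, 58])` reports that two of three trainees improved. With larger groups you would typically add a paired significance test before drawing conclusions.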
Level 3: OK, did anything change…really?
Situation 1: Your trainees reported that the training was great, and their post-tests showed vast improvements over their pre-tests.
Situation 2: Your trainees reported that the training was boring and irrelevant. The insignificant differences in their pre- and post-tests further support this idea.
In which situation will the training most likely be applied to real-life, work situations?
The answer is that we have no way of knowing unless we observe our trainees on the job…and over time. In other words, are our trainees really putting their training into practice consistently?
So, another part of evaluating training programs is building in a series of structured observations. Employees should be made aware that they may be observed and evaluated as they go about their usual duties, rather than on pre-arranged dates and times. Although this can be uncomfortable at first, people get used to it, and this approach encourages them to apply their training on a continuous basis.
Make sure that employees are given a copy of their observation results. In addition, the results should show both the employee’s strengths and suggested points for improvement.
Level 4: Putting it all together
Looking at the big picture, what is the ROI of your company training programs?
For example, has customer service improved by the percentage you aimed for? Are factory-floor accidents down by your target figure? Company-wide (or department-wide), are more projects meeting their deadlines in a statistically significant way?
There is no cookie-cutter, “one size fits all” method for calculating your organization’s training ROI. The best way to make an accurate calculation is to create S.M.A.R.T. (Specific, Measurable, Achievable, Relevant, Time-bound) training goals as the first step in developing your training program.
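Whatever goals you set, the arithmetic at the end usually reduces to a common ROI formula: ROI (%) = (monetary benefit − training cost) / training cost × 100. The sketch below assumes you have already translated your outcomes into a monetary estimate; the figures in the example are purely illustrative.

```python
def training_roi(benefit, cost):
    """Return training ROI as a percentage.

    ROI (%) = (benefit - cost) / cost * 100

    `benefit` is your monetary estimate of the training's impact
    (e.g. reduced accidents, improved retention); `cost` is the
    total spent on the program. Both are estimates you derive
    from your S.M.A.R.T. goals, not values this function can know.
    """
    if cost <= 0:
        raise ValueError("cost must be positive")
    return (benefit - cost) / cost * 100
```

For instance, a program costing $40,000 that you estimate produced $60,000 in benefits yields `training_roi(60000, 40000)`, i.e. 50% ROI. The hard part is never the division; it is defending the benefit estimate, which is exactly why the S.M.A.R.T. goals need to be set before the training begins.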
To improve your future training assessment ROI, you will also need to ask tough questions about what can be improved. Even if your ROI is relatively good, it can always be better. Remember: it is not the numbers themselves which are your goal but what they mean in terms of the continued prosperity of your organization.
Worldwide data (sample 1, sample 2, sample 3) consistently show that effective training leads to improved employee performance, and that employee performance is directly related to organizational success.
If you would like to check out an EdApp course completely for free, click here to take our Communication in Project Management course and learn about the basic communication skills you need in Project Management to work effectively with others.
For future-forward leaders who are ready to take the next step towards improving the viability of their organizations: click here.