Improving Training with the Kirkpatrick Model

This is part five of our Kirkpatrick Model series, by Hannah Brenner, in which we explore how to measure the results of training. If you haven’t yet, check out part one, part two, part three and part four.

If you are just tuning in, stop. Go back to the beginning (or technically the end) and read from post one. Because even though Level 1 sounds like the beginning, this series is nearing its end.

Throughout our series, we have held true to the New World Kirkpatrick Model where “the end is the beginning.” We first discussed Level 4 and how to align training with strategic goals; we then identified behaviors needed to reach these goals, followed by the learning that needs to take place. Now, we are ready to spend a little time on Level 1.

Level 1: Reaction is defined as “the degree to which participants find the training favorable, engaging and relevant to their jobs.” This level is likely familiar to training professionals and educators alike. To evaluate efficiently and effectively, it is important to understand both the timing and the components of Level 1.

Timing

When it comes to Level 1 data, the sooner you can obtain it, the better. Stretching this out over too much time is not ideal: the longer you wait, the less data you will receive. Measuring immediately after training and again a couple of weeks later is typically sufficient.

In addition to the timing of when to evaluate, trainers should also be cognizant of how much time and energy are being spent on these results – we’ll get to this later in the post.

Methods

It is important to measure all three components of Level 1: engagement, satisfaction and relevance. There are various ways to quickly collect this data, both throughout the training program and shortly after it concludes. Evaluating throughout training can provide meaningful insights.

For example, if engagement appears to be low, the trainer can immediately adjust his or her approach. Additionally, if low satisfaction is caused by an external factor (the room, technology, seating, etc.), it can be addressed before the session concludes.

Some ways to measure the three components include:

1. Surveys: Post-training surveys are the most common way to measure Level 1. But beware of over-surveying participants: too many questions, or surveying too frequently, can lower engagement and produce unreliable results.

2. Observation: Instructors can use their own observation skills to evaluate engagement and satisfaction by reading participants’ reactions and body language.

3. Pulse checks: The same way trainers can do a “check for understanding,” they can also do a general pulse check to gather feedback on how things are going. These can be added in as needed or planned from the start.

4. Program observer: If this is an essential program, having another member of the team dedicated to observing is a great way to collect information. If this is done, make sure to introduce this person and his or her role at the start of training.

A couple of weeks after training takes place, it may be a good idea to conduct another quick evaluation by survey, focus group or interview. This follow-up can help assess the relevance of the program once employees have had the chance to apply what they’ve learned.

Now What?

Let’s say you have a new employee on your team. He has been with the company about six months now, and your supervisor calls you in to discuss his performance. So far, he has not hit any goals, he is still not working at a basic proficiency level, and he has made considerable errors. Overall, you think he isn’t cut out for the position. But instead of giving this information to your supervisor, you simply say, “He is very nice and enjoys working here.”

While this may be true, it doesn’t help your supervisor in any way. This is your Level 1 data – it may make you feel good that people enjoyed the session, but there are no actual results here. Reporting it to leadership will hold little meaning unless it is accompanied by real results.

Level 1 is currently the most frequently evaluated Kirkpatrick level, with approximately 80% of classroom training and 58% of online training being evaluated at this level.

Yet this is the least reliable information and, on its own, does not provide meaningful data. In fact, “Level 1 is often overthought and overdone; it’s a waste of time and resources that could be put to more benefit at other levels.” While it might be important to the trainer that people enjoy training, time is better spent collecting and preparing data that demonstrates return on investment to leadership.

If you’re a BizLibrary client, work with your Client Success Consultant on ways to quickly obtain Level 1 data without wasting resources. Your consultant can provide examples and best practices for gathering and evaluating this information in the shortest amount of time, so you can focus on higher-level impact and results.

For more information on training evaluation, as well as examples of surveys and other tools, check out the Kirkpatrick Partners website at www.kirkpatrickpartners.com or the book Kirkpatrick’s Four Levels of Training Evaluation.

Finally, be on the lookout next week for the conclusion to this series, where we will discuss making data work for you, showing ROI and ROE, and why it all matters.

Read more: Using the Kirkpatrick Model to Evaluate Training: Part Six

Learn more about how different training styles affect employee buy-in and engagement in our free webinar, presented by BizLibrary CEO Dean Pichee.

Hannah Brenner is a Client Success Consultant with BizLibrary. She discusses training strategies and works with her clients to continually improve their training programs and see a positive return on investment.