

Did it work? Four steps to evaluate your training sessions

Many of you offer training sessions to various groups in order to move the needle and make communities better. These training sessions are as varied as your areas of expertise, covering topics like financial literacy, self-defense, managing chronic disease, cultivating healthy relationships, and so much more. You put time and effort into the training session, and you truly do want it to make a difference in people’s lives. So how do you evaluate your training sessions to find out if you made a difference?

I took an entire course in graduate school on training—how to design a good training, how to implement it, and how to evaluate it. Fortunately for you, I won’t spend an entire semester on the topic, but I will hopefully convey the important parts!

The Theory Behind It All
My favorite theoretical model to evaluate training is by Kirkpatrick (1959, 1975, 1992). That’s probably not a sentence that makes you super interested in reading the rest of this post, but stay with me.

This model says there are four levels to evaluating your training. Sometimes you are fortunate enough to have the time and resources to measure all four, while other times you never get past the first level or two (and that’s OK). The levels are:

  • Reaction: Basically, did they like the training?
  • Learning: Did they learn something new?
  • Behavior: Based on that new knowledge, did they actually go home and change their behaviors?
  • Results: What happened as a result of that behavior change?

Example Evaluation: Diabetes Prevention and Management
In honor of the fact that November is National Diabetes Month, let’s use the example of diabetes. Nationally, an estimated 29.1 million Americans have diabetes—roughly 9.3% of the population. The majority of these cases (more than 90%) are type 2 diabetes, which is typically brought on by unhealthy behaviors—eating unhealthy foods, not getting enough exercise, being obese, etc. (Note: there are notable exceptions, such as veterans exposed to Agent Orange who develop diabetes as a side effect, but for the most part, type 2 diabetes is preventable.) You can learn more about diabetes at www.diabetes.org.

So, let’s say you did a needs assessment in your community, which found that diabetes rates are elevated. Since diabetes is a serious condition that can result in blindness, amputation, and death (it’s the 7th leading cause of death in America), you decide that it passes the high-prevalence-high-severity test, and that you need to do something about it.

Based on previous experience and some research, you find that many people don’t understand diabetes well. They don’t understand how to prevent it, how to manage it once they have it, or how serious it is. So, you decide to offer educational opportunities for people to learn more about the disease. The course material highlights understanding type 2 diabetes, healthy eating, and active living.

Now, how would you evaluate this training, according to the four levels in Kirkpatrick’s theory? Well, here are some possible metrics (a quick sketch after the list shows one way to check them against the data you collect):

  • Reaction: At least 85% of people in your class found it to be informative and interesting
  • Learning: At least 85% of people in your class learned something new about diabetes that they did not know before today
  • Behavior: At least 70% of people who took your class changed their eating and/or exercising habits
  • Results: At least 20% of people who changed their eating and/or exercising habits were able to lower their weight and blood sugar to normal, healthy levels
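
To see whether you hit these targets, all you really need is a tally of “yes” responses against the number of people you actually heard back from at each level. Here is a minimal Python sketch of that arithmetic; every count, participant number, and threshold below is made up for illustration, not real program data.

```python
# Hypothetical example: checking the four Kirkpatrick-level targets
# against tallied responses. All numbers below are made up.

targets = {
    "reaction": 0.85,   # found the class informative and interesting
    "learning": 0.85,   # learned something new about diabetes
    "behavior": 0.70,   # changed eating and/or exercise habits
    "results":  0.20,   # reached healthy weight and blood sugar levels
}

# Counts you might tally from surveys and follow-ups (made-up numbers).
collected = {
    "reaction": {"yes": 44, "total": 50},
    "learning": {"yes": 46, "total": 50},
    "behavior": {"yes": 28, "total": 38},   # note the drop-off at follow-up
    "results":  {"yes": 7,  "total": 28},   # measured among those who changed behavior
}

for level, target in targets.items():
    rate = collected[level]["yes"] / collected[level]["total"]
    status = "met" if rate >= target else "not met"
    print(f"{level:>8}: {rate:.0%} (target {target:.0%}) -> {status}")
```

Notice that the denominators shrink at the later levels—that’s the drop-off you should expect once follow-up is required, as discussed below.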

Evaluation Timing
By looking at the four levels, you can see that the earlier levels can be measured immediately, while the later levels take time to measure.

Reaction: This level is typically something you can measure immediately after the training session, perhaps with a brief pencil-and-paper survey for participants. You can assess their affect (e.g., “Did you enjoy the training?”) and/or the training’s utility (e.g., “Did you find this training useful?”).

Learning: Measuring this level may involve a pre- and post-training test (e.g., “We asked our participants to take a pre- and post-test that included 10 true/false questions about diabetes. Before the training session, the average score was 6 out of 10. By the end of the training session, the average score was 9 out of 10!”). Or, it may just be a simple follow-up survey that you can combine with your assessment of reaction (e.g., “List two new things you learned about diabetes in today’s session”).
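
If you go the pre-/post-test route, the arithmetic is just a pair of averages. Here is a small Python sketch using made-up scores on a hypothetical 10-question quiz:

```python
# Hypothetical pre- and post-test scores (out of 10) for the same participants.
pre_scores  = [6, 5, 7, 6, 6, 7, 5, 6]
post_scores = [9, 8, 9, 10, 9, 8, 9, 10]

pre_avg = sum(pre_scores) / len(pre_scores)
post_avg = sum(post_scores) / len(post_scores)

print(f"Average before training: {pre_avg:.1f} / 10")
print(f"Average after training:  {post_avg:.1f} / 10")
print(f"Average gain:            {post_avg - pre_avg:.1f} points")
```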

Behavior: This level requires follow-up after participants leave your class (so be sure to collect contact information if you want to measure it!). By its very nature, that means you’ll likely get some drop-off; some people will simply disappear into thin air. In our diabetes example, I’d recommend asking participants to keep a log of what they eat (and when, because timing has a big impact with diabetes) and how often/how much they exercise. Having access to these logs will help you track behaviors over time. It will likely take some incentives to get this data from your group—perhaps offer people $10 for every week that they turn in their logs.
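
If you do collect logs, even a very simple tally can show you who is logging consistently and who is drifting away. The sketch below assumes a hypothetical weekly log of exercise minutes and a made-up 150-minute weekly goal; the log format and goal are illustrative, not part of any particular curriculum.

```python
# Hypothetical weekly behavior logs: exercise minutes reported each week per participant.
# In practice you'd tally these from the paper or online logs participants turn in.
weekly_exercise_minutes = {
    "participant_01": [60, 90, 120, 150],
    "participant_02": [0, 30, 45, 60],
    "participant_03": [30, 30, 0, None],   # None = no log turned in that week (drop-off)
}

TARGET_MINUTES = 150  # e.g., a weekly activity goal you set with the class (assumed here)

for person, weeks in weekly_exercise_minutes.items():
    reported = [m for m in weeks if m is not None]
    weeks_logged = len(reported)
    weeks_at_goal = sum(1 for m in reported if m >= TARGET_MINUTES)
    print(f"{person}: logged {weeks_logged}/{len(weeks)} weeks, "
          f"met the {TARGET_MINUTES}-minute goal in {weeks_at_goal} of them")
```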

Results: This level requires long-term follow-up, and because of that, it’s the most difficult data to get. If you’ve ever tried to lose weight, you probably know that it takes months to “move the needle”. So in the diabetes example, I’d recommend asking participants to come back one year later to have their blood sugar tested and their weight checked. Now, that doesn’t sound very appealing, so you’d need to incentivize them with something—maybe everyone who comes back after a year gets a free gym membership, a new set of cooking pans, or some other reward that will encourage healthy behavior.


Now go forth and evaluate your work!