Have you ever wondered whether conducting Level 1 evaluations is worth the effort? Or whether you should stop using them altogether? If you’ve had these thoughts, you’re not alone. According to a 2019 Association for Talent Development (ATD) research study, 83% of organizations evaluate some learning programs at Level 1. Yet only 35% view the data they collect as having high or very high value.
So, is there something you can do to start getting more valuable results from your Level 1 evaluations? The answer is a resounding "Yes!" Start by including predictive questions in your Level 1s. Predictive questions forecast the results a learning program is likely to achieve. They also begin to answer the question business executives and L&D professionals both want answered: "Is this program delivering value?" These predictions aren't proof that specific program outcomes are inevitable, but rather a forecast that certain results are likely.
In this highly informative, thought-provoking session, participants will learn how to create three predictive measures: a Level 2 learning gain score predictive metric, a Level 3 training transfer predictive metric, and a Level 4 business results predictive metric.
After attending this session, participants should be able to:
- Use facts from a recent research study to benchmark the use of Level 1 evaluations in an organization.
- Create predictive questions that forecast participant learning, the likely transfer of participant learning back on the job, and the likely improvement of business results if participants apply what they learned.
- Calculate a learning gain score predictive metric, a training transfer likelihood score predictive metric, and an improved business results likelihood predictive metric (a rough sketch of such calculations follows below).
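As a rough illustration of what these calculations can look like, here is a minimal Python sketch. It assumes a Level 1 survey that asks participants to self-rate their knowledge before and after the program on a 1-5 scale, and to rate (also 1-5) how likely they are to apply the learning on the job and how likely that application is to improve a business result. The question wording, scales, thresholds, and formulas are illustrative assumptions, not the presenter's actual method.

```python
# Illustrative sketch only: the scales, thresholds, and formulas below are
# assumptions, not the presenter's actual metrics.

def learning_gain_score(pre_ratings, post_ratings, scale_max=5):
    """Predicted learning gain: average normalized gain across respondents.

    Each respondent self-rates knowledge before and after the program
    (1..scale_max). Normalized gain = (post - pre) / (scale_max - pre).
    """
    gains = []
    for pre, post in zip(pre_ratings, post_ratings):
        room_to_grow = scale_max - pre
        if room_to_grow > 0:
            gains.append((post - pre) / room_to_grow)
    return sum(gains) / len(gains) if gains else 0.0


def likelihood_score(ratings, threshold=4):
    """Predicted likelihood metric: share of respondents rating at or above
    the threshold (e.g., 4 or 5 on a 5-point "How likely are you to ..." item).
    """
    favorable = sum(1 for r in ratings if r >= threshold)
    return favorable / len(ratings) if ratings else 0.0


# Hypothetical Level 1 responses from five participants.
pre = [2, 3, 2, 4, 3]        # "Rate your knowledge BEFORE the program"
post = [4, 4, 3, 5, 5]       # "Rate your knowledge AFTER the program"
transfer = [5, 4, 3, 5, 4]   # "How likely are you to apply this on the job?"
business = [4, 4, 3, 5, 3]   # "How likely is applying it to improve results?"

print(f"Learning gain score:          {learning_gain_score(pre, post):.0%}")
print(f"Training transfer likelihood: {likelihood_score(transfer):.0%}")
print(f"Business results likelihood:  {likelihood_score(business):.0%}")
```

With the hypothetical data above, the sketch forecasts a 70% learning gain, an 80% likelihood of transfer, and a 60% likelihood of improved business results; the session itself covers how to construct and interpret the actual metrics.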