Upper management was unconvinced. "Why spend so much on training when we can't see the results?" they asked Jane and her colleagues in the L&D department. It was a question they couldn't answer with certainty.
Their training budgets were in jeopardy.
Despite all the good classroom experiences, training results are often suspect. It has always been difficult to measure the impact of training programmes, and whether they lead to positive behaviour change and the intended business outcomes.
With only one or two days in a room, do we really expect training to create behaviour change? Our behaviours – good and bad – are the result of years of reinforcement. If we are to create change in people and in organisations, don’t we need more than a couple of days of training?
But the problem is that more time means significantly more money. Could there instead be a tech-based method to reinforce learning, track it as it happens, regularly encourage the intended behaviours, and produce actionable data that leads to timely learning interventions?
"If you can't prove that your training is actually making a difference, we should spend less on training", a senior manager suggested to Jane. Jane clenched her teeth. Anger, frustration, and confusion clouded her mind. "There must be a way", Jane thought.
Then there’s the problem of observing and tracking change within a large organisation. It’s easy enough to track our own personal change – if we’re getting fitter or fatter, or if we’re getting better or battered. But when it comes to looking after hundreds or thousands of people within an organisation, tracking change is significantly harder.
Jane tried to track training outcomes the old way: surveys, forms, and personal interviews. But soon, an avalanche of half-heartedly completed forms threatened to bury her training department, and many of them offered little insight into how people were actually doing. Jane was frustrated. It was demoralising to work so hard to collect data, only to find it so difficult to turn that data into insights.
It didn't help that training evaluations are often stuck at the lowest level, Level 1: the good old 'training evaluation' form handed out after every session in the hope of gaining insights into the effectiveness of a training programme.
Well-designed Level 1 forms are great indicators of classroom experience, but they're poor predictors of whether your participants will grow, improve, and change their behaviours. To effectively track learning and behaviour change, you have to move up the Kirkpatrick evaluation model to Levels 2 and 3.
According to Donald Kirkpatrick, PhD, training evaluations are split into four levels:
Level 1 – Reaction: These are the ubiquitous training evaluation forms.
Level 2 – Learning: This evaluates whether knowledge has been retained.
Level 3 – Behaviour: This tracks behaviour change.
Level 4 – Results: This tracks business ROI.
The higher you go up the Kirkpatrick evaluation levels, the harder the evaluation becomes. If Jane hadn't taken action, her L&D department would now be swimming in paperwork. Thankfully, she found automated tools that helped her to:
- Reinforce knowledge and new behaviours, and
- Track Level 1, 2, and 3 evaluations.
After a lot of searching, Jane chose an app called Mindmarker. Here's what happened when she implemented it in her organisation:
Once Jane decided to use Mindmarker, we ran post-training reinforcement design workshops for her L&D team. The team then designed post-training reinforcements and rolled them out across 20 of their existing training programmes, encompassing 850 participants. The results? 90% of participants completed their 12- to 16-week post-training reinforcement programme. On average, they retained 80% of the knowledge taught during training. And 75% of participants are applying the intended new behaviours.
That’s huge.
Having gone through this journey, Jane and her L&D team can now implement effective post-training reinforcement, helping their people perform better at work. They can also track the effectiveness of their training, giving the L&D department the oh-so-helpful feedback it needs to design better interventions in the future. Equally important, Jane now has data on the effectiveness of her training programmes to show senior management. That's huge, too.
It has been more than a year now, and their training budgets have not been cut – instead, they have been increased. Jane and her colleagues have bravely averted disaster and are now empowering even more people than before.
If you’d like to learn more about how we can help you to do the same, email us at [email protected] – and ask about our Mindmarker post-training reinforcement.
Watch our interview with Joshua Ng, a behaviour change consultant, about post-training reinforcement and how it impacted an entire L&D department:
Footnote: The results shown are real and the story told here is based on a real-life case, but the names have been changed to protect the privacy of our client.