Made to Measure: A Customised Approach to Learning Evaluation

Nick Davies
Head of Sales

L&D is now facing greater pressure from business leaders to use demonstrable data to measure the impact of learning programs.

So why, in a survey of over 500 senior learning professionals, were only 28% measuring training against business KPIs? The same survey revealed that the most common form of training evaluation is the simple learner evaluation form – or ‘happy sheet’ – and this, perhaps, is our clearest indication of where the problem lies. As we noted in a previous blog post, most decisions are based on cost and learner feedback. The reason is that the return on investment (ROI) of training is rarely calculated, and there are no processes in place to measure its effectiveness.

To help overcome this challenge and start proving results in a way that’s clear and evidence-based, learning evaluation models can provide a structured approach to evaluating the impact of your learning strategy. Which model – or, as you will find out later, which aspects of various models – you decide to use will depend entirely on your unique business aims and objectives.

Let’s explore four common models.

1. Kirkpatrick’s Four Levels

You’re probably familiar with the old Kirkpatrick model involving the four levels of learning evaluation:

  • Level 1: Satisfaction – This describes the learner’s immediate reaction to the learning program.
  • Level 2: Learning – This involves measuring the learning outcome – has the learning been retained and become embedded?
  • Level 3: Impact – This involves measuring the behaviour change of the learner to ensure that they can apply what they’ve learned in the workplace.
  • Level 4: Results – This involves measuring the impact of the learner’s behaviour on the organisation.

The Kirkpatrick model was revolutionary when Dr Donald Kirkpatrick originally defined it in the 1950s but, over time, it became noted for its limitations. For example, while it provides a logical structure and process to measure learning, it neither establishes business aims nor does it take ROI into account.

Key takeaways: Level 1 might not be a great indicator of learning success – a learner may not enthuse about a particular learning program, but that doesn’t mean they haven’t learned anything from it. It does, however, provide an early warning of what’s not working: the content may not have been engaging enough, the delivery style may have been poor, or the resources may not have been up to scratch.

Levels 3 and 4, on the other hand, are critical to your evaluation. Knowing the impacts of behavioural change on both the learner and the organisation is key to measuring the success of your learning program. However, in order for this to work, you have to be doing this continuously and not just as the one-off event suggested by Kirkpatrick’s model.

If you want to use some of the methods outlined in the Kirkpatrick model, it’s worth remembering that xAPI can be used to collect meaningful data at all four levels.
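As a rough illustration of what that data capture looks like, here is a minimal xAPI statement sketched in Python. The field names (actor, verb, object, result) follow the xAPI specification; the learner email, activity URL, and pass threshold are invented for the example.

```python
# A minimal xAPI statement recording a Level 2 (learning) outcome.
# Field names follow the xAPI specification; all values here are illustrative.
def make_statement(email, activity_id, scaled_score):
    """Build an xAPI statement dict for an assessment result."""
    return {
        "actor": {"mbox": f"mailto:{email}", "objectType": "Agent"},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/passed",
            "display": {"en-GB": "passed"},
        },
        "object": {"id": activity_id, "objectType": "Activity"},
        # Level 2 evidence: how well the learning was retained,
        # with a hypothetical 70% pass threshold.
        "result": {"score": {"scaled": scaled_score}, "success": scaled_score >= 0.7},
    }

statement = make_statement(
    "learner@example.com",
    "https://example.com/courses/fire-safety/final-assessment",
    0.85,
)
print(statement["verb"]["display"]["en-GB"], statement["result"]["success"])
```

Similar statements with different verbs can capture Level 1 reactions (e.g. rating a course) or Level 3 behaviour (e.g. performing a task on the job), which is what makes xAPI useful across all four levels.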

2. The Kirkpatrick-Phillips Model

It’s been established that most organisations don’t measure ROI – and that when they do, they measure simple things like cost savings, compliance or user satisfaction. What other recognised criteria for the effectiveness of L&D are there, and how can we measure them?

There are several models out there, but the most popular is the Kirkpatrick-Phillips model. Kirkpatrick defined the levels from satisfaction up to results; Jack Phillips then added the ROI cherry on top. For an in-depth explanation, it’s worth taking a look at our whitepaper, ‘How to Measure and Maximise Return on Investment from Learning & Development’.

While ROI is often seen as a necessity for proving the business case of L&D to leaders, we need to bear in mind that ROI tends to be applied only after the learning intervention has taken place. The drawback is that if the ROI calculation shows a higher resulting cost than overall value, it is by then too late to make changes.

Another drawback is that when a low-cost learning intervention is credited with the returns of a much larger project, the calculation can create a falsely positive impression.
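A short worked example makes the distortion concrete. Phillips’s ROI formula expresses net programme benefits as a percentage of programme costs; the figures below are invented purely to show how a cheap intervention can appear to outperform a realistically costed one when both are credited with the same benefit.

```python
def roi_percent(benefits, costs):
    """Phillips-style ROI: net programme benefits as a percentage of costs."""
    return (benefits - costs) / costs * 100

# A cheap intervention credited with a large benefit looks spectacular...
print(roi_percent(benefits=50_000, costs=1_000))   # 4900.0
# ...while a realistically costed programme against the same benefit looks modest.
print(roi_percent(benefits=50_000, costs=40_000))  # 25.0
```

The formula itself is sound; the falsely positive impression comes from attributing benefits to the intervention that it did not actually produce, which is why isolating the learning effect matters before calculating ROI.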

Key takeaways: Phillips himself recommends that ROI should only be calculated when the learning intervention is:

  • Targeted towards a population
  • Important to the strategy
  • Expensive in terms of cost or time
  • A long-term project
  • High profile and of interest to senior management

Providing the intervention meets these criteria, you can then follow the steps below to help maximise your returns:

  • Align learning outcomes to the business challenge
  • Make sure your content is designed effectively
  • Make sure your assessment is appropriate to the cognition level expected from your learning outcomes

These three points are explored in more depth in our expert guide, ‘How to Measure and Maximise Return on Investment from Learning and Development’, but the key takeaways here are alignment, relevance and continual monitoring. These enable you to track your costs and successes against the business aims to make sure you stay on track.

3. Anderson’s Value of Learning Model

Anderson’s Value of Learning Model is a more recent development, published by CIPD in 2006. It is a high-level, three-stage evaluation model which aims to address the two main business challenges: the evaluation challenge and the value challenge.

The three stages are as follows:

  • Stage 1: Determine current alignment against strategic priorities – Is the training in line with the business goals?
  • Stage 2: Use a range of methods to assess and evaluate the contribution of learning – This includes four key measures: return on expectation, return on investment, learning function, and benchmark and capacity.
  • Stage 3: Establish the most relevant approaches for your organisation – This is the final, decision-making stage.

According to a report by CIPD, 91% of high-performing learning organisations have L&D that is fully aligned with the strategic goals of the organisation. The Value of Learning model therefore seeks to be of benefit at an organisational level rather than for specific learning interventions. By using this model, you can find the right measures to suit your organisation’s needs.

The model is not foolproof, however: it only provides insight into the effectiveness of learning across the organisation as a whole. For single learning interventions, you’ll need to look beyond the Value of Learning model.

Key takeaways: The Value of Learning model is perfect for ensuring the learning strategy is in alignment with the organisation’s overall priorities and providing evidence that the resources are being used in the most effective way. The beauty of this model is that it allows the organisation to measure the right metrics based on their strategic aims.

However, like the other models mentioned in this piece, it needs to be supported by other models too for a more detailed evaluation.

4. Brinkerhoff’s Success Case Method

Rob Brinkerhoff defines his Success Case Method (SCM) as a “low-cost, high-yield evaluation” which draws a comparison between the most successful and least successful cases whenever a change is implemented. To discover why a particular method worked and how it could be improved, the following questions are asked:

  • “How have you used the training?”
  • “What benefits can be attributed to the training?”
  • “What problems did you encounter?”
  • “What were the negative consequences?”
  • “What criteria did you use to decide if you were using the training correctly or incorrectly?”

This should provide you with a set of qualitative data based on collected responses.

It’s important to remember that SCM is not limited to learning interventions alone – it also recognises that a number of other variables could be responsible for results (for example, new technology or a change in processes).

Key takeaways: By identifying the most successful and least successful examples, SCM is a great way to identify exactly what worked (and what didn’t). This can help you to get specific about what needs to change and gives you something to publicise in the case of successes – something which can really help to boost the profile of L&D to your leaders.

However, as the results are based on qualitative data, it should only be used as a one-time insight – SCM alone just isn’t going to provide you with the full picture. This is why we suggest you combine SCM with other, more tangible methods of evaluation for ongoing analysis.

In Summary…

Evaluation is critical to your learning interventions and it’s all too easy to start panicking about which evaluation model is best. Should you be sticking with Kirkpatrick’s good old four levels of learning evaluation? Does the answer lie within Anderson’s high-level approach? Or is Brinkerhoff’s qualitative method the way forward?

At CDSM Thinqi, we don’t believe there is any one right answer when it comes to which method you choose. A blanket approach to learning never works, nor is any evaluation method ‘one-size-fits-all’ – it all depends on taking the approach most relevant to the strategic aims of your business.

This is exactly why we encourage L&D practitioners to adopt a more customised approach to learning evaluation. By taking the most useful and relevant parts from a range of models, you can present a more robust set of evidence to your business leaders – and equip yourself with an essential tool for raising the business value of L&D.

If you would like to learn more about how our cutting-edge blended learning ecosystem can help you evaluate effectively, we’ve got the tools and expertise to help you succeed. Request a demo to speak to one of our experts.

We’re always exploring key trends in the learning and development world, so keep an eye on our blog and social media channels to see when new insights are published.
