What is the best way to approach training evaluation?

by Brett Henebery, 18 Aug 2017
Training evaluation has always been a hot topic in L&D, but it can often mean different things to different people, and as such, measuring its impact can be difficult.

Mark McPherson is a speaker, trainer and coach specialising in helping CEOs, managers, business owners, consultants and their staff get the best behaviour, communication and performance from people.

McPherson told L&D Professional that some people use the term ‘evaluation’ to mean what value people – particularly those attending the program – put on the training.

“Others mean it to be an evaluation of the intended learning outcomes, behaviours and skills in both the short-term and long-term,” he said.

“Many people use the word ‘evaluation’ to mean the value we put on the program, and use the word ‘assessment’ to mean the assessment of the actual competencies of the participants at the end of the program compared to the beginning of the program.”

However, McPherson said those in the accredited training arena would have a lot to say about the differentiation between the words ‘evaluation’ and ‘assessment’.

“I think it's safe to say that very little is done to determine if the skills to be learned because of the training program are actually learned to start with, and then secondly, if they are used appropriately or properly later when the participants are back in what can be called ‘the real world’,” he said.

“To be fair, I am guilty of this problem. I run plenty of programs without properly assessing people's skills at the end of the program and don't assess them three months down the track – and I think this is incredibly common.” 

McPherson said he has been in plenty of other training programs over the years but has never been asked how he was going or whether he’d use the skills or not.

“One of the major problems here though is the lack of appropriate and documented ‘intended learning outcomes’,” he pointed out.

“So in many ways it actually starts right here in being clear about what it is you're actually trying to achieve anyway.”

McPherson said that if all an L&D professional is trying to achieve is an increase in someone's knowledge then it is easy to do a simple test at the end to see if they've gained the knowledge their trainer wanted them to gain.

“However, when it comes to skills, it's harder of course. And I get that,” he said.

“Without evidence to the contrary, evaluation of the change that training programs make to people's behaviour and performance 'down the track' is scarce.

“There will of course be people who will tell you what a great job they're doing – me included – but we have very little evidence to back it up.

“And there will be those who tell you that they do the evaluation I'm talking about, but on close inspection it doesn't measure up.”

Andres Jonmundsson, head of learning and development at Fuji Xerox Australia, told L&D Professional that some argue for a ‘return on investment’ approach, while others prefer ‘return on expectations’.

“And then of course there are those who advocate both,” he said.

“I don’t think we measure ROI particularly well, mostly because of the complexity and effort involved.”

Jonmundsson said a more recent concept, ‘return on impact’, has been gaining momentum and seems to offer an alternative perspective.

“The thinking around this is to identify the desired impact of the training and then see whether that impact was achieved,” he said.
