

Measuring Training Effectiveness: Let's Not Overdo It


Talk about measuring training effectiveness and most people think of the post-course "happy sheet", the feedback form. It has become the standard, safe game plan. But it falls short as a meaningful business performance indicator: liking the training, the trainer, the environment, or one's overall training experience does not necessarily mean any learning took place.

Most organizations do nothing to assess the true effects of their training and development programs, even though almost every one of them professes an intent to measure training effectiveness. Amid continuing economic turmoil, companies spend billions on formal training and development while investing next to nothing in measuring its efficacy. Hardly any other area of business carries such a large investment with so little meaningful follow-up.

We have an ingrained infatuation with return on investment, which has produced complex mathematical formulas that largely fail to address the inherent uniqueness of the training world, one that is better measured against 'return on expectations'.

So what do we do?

Simplify, and measure only what absolutely needs to be measured. Don't measure a metric merely because the data happens to be available.

What are the various aspects to be evaluated?

  • Training project
  • Training program
  • Training delivery
  • Training curriculum

For a 360-degree understanding of the impact of training, we have to evaluate each of these aspects in isolation to appreciate their inherent differences.

Generally, researchers develop their own scales for measuring the different attributes. These scales should be reliable, valid, sensitive, generalizable and relevant. Let's focus on relevance before we embark on the effectiveness study. No learning is bad in itself; however, a lot of learning at the organization's expense can be unnecessary. If the study first weeds these redundant elements out of the training offerings, the training automatically becomes more effective.

An independent review of the training repository may help assess what is relevant for 80% of the population. It may also be worthwhile to identify training programs that have gone unused over a period of time even as resources continue to be spent updating and maintaining the four training aspects mentioned above.
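A minimal sketch of what such a repository review might look like, assuming a simple catalogue that records when each offering was last delivered; the offering names, dates and one-year staleness threshold below are hypothetical placeholders, not details from any real repository:

```python
# Hypothetical repository review: flag offerings that have not been delivered recently.
from datetime import date

# Hypothetical repository: offering name -> date it was last delivered
repository = {
    "New-hire process induction": date(2012, 11, 5),
    "Advanced spreadsheet skills": date(2011, 3, 18),
    "Client-domain refresher": date(2010, 7, 2),
}

REVIEW_DATE = date(2013, 1, 15)
STALE_AFTER_DAYS = 365  # assumption: a year without a single delivery flags the offering

flagged = [name for name, last_run in repository.items()
           if (REVIEW_DATE - last_run).days > STALE_AFTER_DAYS]

print("Offerings flagged for a relevance review:", flagged)
```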

Training effectiveness has an inherent latency; hence the need to observe the effectiveness pattern over a period of time, the periodicity being contingent upon the training curriculum.

It can be an instantaneous measure if there is a specific knowledge point to be checked, or a 30/60/90-day observation period to map the progressive change exhibited. A fast-feedback approach will not always be clinical in its analysis.

The traditional way of measuring learning often overlooks the softer, intangible aspects of effectiveness.

Hard benefits are quantifiable; for example, an increase in the quality of work can be captured by measuring the error percentage. However, the overall behavioral change that results in greater ownership of the job often goes unmeasured. There is a need to tap these softer aspects and articulate their inherent business value.
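A purely illustrative sketch of quantifying one such hard benefit, assuming error percentage is the chosen quality indicator; the counts are made-up placeholders, not figures from any real engagement:

```python
# Hypothetical before/after comparison of error percentage for one trainee cohort.

def error_rate(errors: int, transactions: int) -> float:
    """Errors expressed as a percentage of transactions processed."""
    return 100.0 * errors / transactions

before = error_rate(errors=48, transactions=1200)   # quality level before the training
after = error_rate(errors=18, transactions=1150)    # quality level 60 days after it

print(f"Error rate before training: {before:.2f}%")
print(f"Error rate after training:  {after:.2f}%")
print(f"Improvement: {before - after:.2f} percentage points")
```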

One way of ensuring this is to make sure the trainee is not deserted after the training and the post-course feedback session. In fact, closely monitoring the trainee from a discreet distance, while ensuring that the pressure to perform does not skew the outcome, is a better measure of training effectiveness.

I have a game plan to propose. It probably will not find takers among my own training fraternity, but I feel it is still worth a try. Let's sanitize the training offering and observe the withdrawal symptoms one at a time. If removing a training offering does not result in any adverse indicators over a period of 7/15/30/60 and maybe 90 days, then it is probably time to say adieu to that particular training. At the very least it makes a case for dissecting the return on expectations from that training against the actual offering, and it sets the ball rolling in the right direction for measuring effectiveness. If the outcome is favorable, we can move to more complex terrain, study the withdrawal pattern of correlated trainings, and then make the study progressively more complex but more meaningful.
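An illustrative sketch of how that withdrawal study could be tracked, assuming error percentage as the business indicator and checkpoints at 7/15/30/60/90 days; the baseline, drift threshold and observations are hypothetical values chosen only to show the mechanics:

```python
# Hypothetical withdrawal study: pause one training offering and compare an indicator
# at the 7/15/30/60/90-day checkpoints against its pre-withdrawal baseline.

BASELINE_ERROR_PCT = 2.1        # indicator level while the training was still offered
ADVERSE_DRIFT = 0.5             # worsening beyond this suggests the training mattered

# Hypothetical post-withdrawal observations: day -> error percentage
observations = {7: 2.0, 15: 2.2, 30: 2.3, 60: 2.2, 90: 2.4}

adverse_days = [day for day, pct in sorted(observations.items())
                if pct - BASELINE_ERROR_PCT > ADVERSE_DRIFT]

if adverse_days:
    print(f"Adverse drift from day {adverse_days[0]}: reinstate the offering and re-examine it.")
else:
    print("No adverse indicators across 7/15/30/60/90 days: a candidate for retirement, "
          "or at least for a deeper return-on-expectations review.")
```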

 
