Traps to avoid in designing performance measures

by Stacey Barr, 20 Jul 2016
When we try to observe our business performance by walking around, our observation is limited by how far our eye can see.

Not to mention restrictions such as the snapshot in time when we are there, what our eye happens to notice, what our ear happens to hear, and what is in plain sight.

We miss what is happening somewhere else, what we don’t happen to notice, what people are not saying, and what isn’t in plain sight.

In assessing business performance, our past experience and our intuition are not illuminating lights. They are filters.

Performance measures, or KPIs, don’t have these filters. They give us a more objective picture of how our business is really performing.

But the measures need to be well-designed. And most are not, because of a few common traps we inadvertently fall into.

Trap #1: Not recognising an immeasurable goal.

An immeasurable goal is an outcome or result that we want to measure, but it’s worded so broadly or vaguely that we struggle to anchor it in the tangible world in which it’s supposed to happen. We won’t succeed in measuring a goal until we’ve made sure it’s measurable. To be measurable, the words that articulate it must make it observable.

Trap #2: Letting the weasel in.

When we produce a list of draft performance measures for our goal, it is very easy to use weasel words to describe those measures. Efficiency Ratio. Staff Productivity. Employee Engagement. Workforce Capability. Customer Loyalty. They are all 'weaselly', because they can mean different things to different people. We’ll get better measures when we write them in plain English that avoids ambiguity and the possibility of multiple interpretations.

Trap #3: Not writing quantitative measures.

Performance measures are quantitative things. They must be specifically articulated in quantitative terms. This means that when we write a measure, we need to follow a two-part quantification recipe. Part one is the statistic, such as percentage, average, sum, or count. Part two is the data item our measure is built from, such as customer satisfaction rating, employee injuries, hours of rework, or delivery cycle time.
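To make the two-part recipe concrete, here is a minimal sketch in code: part one is the statistic, part two is the data item the statistic is applied to. The function name and the sample data are illustrative assumptions, not from the article.

```python
# Two-part quantification recipe: measure = statistic + data item.
# Sample data items (values are made up for illustration).
data_items = {
    "customer satisfaction rating": [7, 9, 8, 6, 9],  # one value per survey response
    "employee injuries": [0, 1, 0, 0, 2],             # one value per month
}

def apply_statistic(statistic, values):
    """Apply the chosen statistic (part one) to a data item's values (part two)."""
    if statistic == "average":
        return sum(values) / len(values)
    if statistic == "sum":
        return sum(values)
    if statistic == "count":
        return len(values)
    raise ValueError(f"unknown statistic: {statistic}")

# "Average customer satisfaction rating" names both parts explicitly:
print(apply_statistic("average", data_items["customer satisfaction rating"]))  # 7.8
# "Sum of employee injuries" does the same:
print(apply_statistic("sum", data_items["employee injuries"]))  # 3
```

A measure written this way leaves no doubt about what is being counted and how it is being summarised.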

Trap #4: Prioritising feasibility over relevance.

“We don’t have any data for that.” That’s a common reason for not choosing measures. But often these measures are very powerful evidence of the result to be measured. If we limit our measures to the data we have, we’ll never have the data we need. Yes, we do need to take feasibility of data collection into account, but relevance trumps feasibility.

Trap #5: Writing vague measure names and descriptions.

Vague measure names and limp descriptions (or no descriptions at all) are a terrible starting point for implementing new measures. The ambiguity wastes time and effort later on, when no-one has any clue what exactly to report. Customer Satisfaction might be a good name for a measure, but it’s not a measure without a clear description: The average rating that customers, who were active in the last month, gave us on a scale of 1 to 10 for how satisfied they were with our overall service delivery to them in the past month.
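The Customer Satisfaction description above is precise enough to turn directly into a calculation. A minimal sketch, with made-up dates and ratings (the data and the 30-day reading of "last month" are assumptions for illustration):

```python
from datetime import date, timedelta

# Reporting date and the "active in the last month" window (assumed 30 days).
today = date(2016, 7, 20)
window_start = today - timedelta(days=30)

# (customer, last active date, satisfaction rating on a 1-10 scale)
responses = [
    ("A", date(2016, 7, 1), 8),
    ("B", date(2016, 6, 25), 6),
    ("C", date(2016, 5, 1), 10),  # not active in the last month: excluded
]

# Only customers active in the last month count toward the measure.
eligible = [rating for (_, active, rating) in responses if active >= window_start]
customer_satisfaction = sum(eligible) / len(eligible)
print(customer_satisfaction)  # 7.0
```

Because the description pins down the population, the scale, and the time window, two different analysts given the same data would report the same number.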

Better measures mean better decisions, and that’s how business performance improves. And better measures will only come from a deliberate measure design process that makes sure our measures are the best evidence of our business’ actual performance.

Stacey Barr is a specialist in business performance measurement and KPIs, and the author of the book ‘Practical Performance Measurement: Using the PuMP Blueprint for Fast, Easy, and Engaging KPIs’. For more, visit