Five years ago, when I was CTO at Genesys, I wrote a blog post titled “KPIs, MBOs, and Doing the Right Thing”. As I re-read that post, I realize how little has really changed. “Pay for performance” remains the dominant paradigm in business, and the main change has been shifting fashions in which metrics matter most. Ten years ago “service level” was dominant; then came newer metrics such as “net promoter score”, “first call resolution”, and assorted “customer sat index” variants built from some set of lower-level metrics and survey or third-party satisfaction scores.
To make it clear how dangerous a game this can be, consider a real-world example from another domain—software development. A development executive I knew was asked to identify “three KPIs we can use to measure your performance, and that of your department and its members”. This executive, knowing that a significant percentage of his compensation would be determined by the metrics he selected, suggested what seemed at first glance to be highly relevant and useful metrics. Specifically, he volunteered to be measured by the quality of his development process, as determined by schedule adherence, scope adherence, and defect density.
Now, leaving aside for a moment the ever-troubling challenge of how to measure business performance in anything approaching a scientific way, let’s consider what results would be driven by these metrics. One might think that what would be measured would indeed be the “quality of the development process and its resulting products”, but there are significant gaps. You see, this executive’s department would be measured against schedules and feature scopes it agreed to—and therein lies the problem. The way for this executive to maximize his payout would be to push back on as many features requested by product managers as possible, and to make very conservative schedule estimates. By taking this cautious approach, it would be simple for him to deliver products with low defect density on the schedule he committed to, with all committed features included.
While it may seem that the executive would be doing what was desired (and certainly he was doing what he was paid to do), when measured against what customers want, his output would be dismal. This is because customers don’t want minimal features on a leisurely schedule—they want more features, faster, and they want them with high overall product quality (which is about a lot more than defect density: usability, ease of installation and configuration, scalability, and so forth). So the metrics the development executive committed to sounded virtuous, but they virtually guaranteed mediocre products: few defects, little customer value, delivered on a very slow schedule. This is certainly not what was desired from the perspective of shareholders either, given the competitive nature of the software industry. Minimal feature sets delivered on a multiyear schedule are hardly the way to stay in front of hungry competitors!
The same kind of dysfunction is common in contact centers as well. When agents and supervisors are compensated based on easy-to-measure and politically safe metrics, you can be assured that human nature will deliver on its perennial promise, and high scores will be had by all motivated employees. But the results will often not correlate in any useful way with service quality as it is actually perceived by customers, and it is unlikely that corporate goals such as improving brand engagement will be enhanced either.
For example, many common contact center metrics remain tied to the notion of measuring service quality, which seems a laudable goal. But there are serious issues with using quality as a key compensation driver. First of all, it is extremely difficult to measure perceived quality. Secondly, even if you could measure quality perfectly, it is not a sensible primary compensation metric.
This counterintuitive point—that quality is not a good metric to drive compensation even if it could be measured well—deserves some explanation. Consider what would happen if everyone in the organization really acted the way you pay them to in this case. If everyone does whatever it takes to make the customer happy—no matter who the customer is, no matter what she wants, and no matter what else is going on—then quality will be perceived as outstanding by customers, but mayhem will usually follow.
One good example of this is insurance claims. I worked with a large insurer’s contact centers, where they paid everyone based on service level. For this company, if there was a surge of claims early in the month due to an unforeseen (and possibly unforeseeable) calamity, the service level for the claims department would predictably drop well below targets. This was true even when steps were taken ahead of time to surge staffing to meet the demand; common sense says some degradation in times of disaster is understandable and inevitable (imagine the cost of achieving 80/20 in a hurricane situation!).
Now, a week later, imagine the problem facing the VP of Claims. In order for her and her people to achieve their MBOs, they need to get Service Level back to 80/20 by the end of the month. But the monthly number is a cumulative average over all of the month’s calls, so the only way for them to achieve this is to massively overstaff: they will need to answer nearly every remaining call almost instantly just to claw their way back to a respectable value. This sounds extreme, but I have seen it in the real world. The result was poor service quality during times of high stress, and poor budget control during times of low stress—basically exactly the opposite of what management would expect to achieve.
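The arithmetic behind the VP’s predicament is easy to sketch. A minimal illustration, using hypothetical numbers (the insurer’s actual figures are not in this post): if the monthly service level is the volume-weighted average across all calls, a bad early week forces near-perfect performance on everything that follows.

```python
def required_remaining_sl(target, done_fraction, done_sl):
    """Service level needed on the remaining calls to hit the monthly target.

    target        -- monthly goal, e.g. 0.80 for "80/20"
    done_fraction -- share of the month's calls already handled
    done_sl       -- service level achieved on those calls so far
    """
    remaining_fraction = 1.0 - done_fraction
    # Monthly SL = done_fraction * done_sl + remaining_fraction * remaining_sl,
    # so solve for the remaining_sl that makes the weighted average hit target.
    return (target - done_fraction * done_sl) / remaining_fraction

# Hypothetical: week one carried 25% of the month's calls at only 40% SL
# because of the disaster surge.
needed = required_remaining_sl(target=0.80, done_fraction=0.25, done_sl=0.40)
print(f"Remaining calls must be answered at {needed:.1%} within threshold")
# -> 93.3%, far above the 80% target, hence the massive overstaffing
```

The worse the surge week, the closer the required remainder gets to 100%, which is exactly the overstaffing spiral described above.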
Another challenge when KPIs are used for compensation without being carefully thought through is that a strong motivation emerges to take a legalistic view of the metrics, since pay is tied to them. At the same insurance company, a cross-organizational team was working to standardize how Service Level was measured. I suggested that instead of polishing the wrong metric, the team should focus on figuring out what the right metric (or set of metrics) should be. Here “rightness” means the probability that working to improve the metric will actually drive the desired top-level business outcomes (usually a mix of customer experience, revenue, and costs). What was the response? “Hey, if we can’t even measure this metric right, why should we focus on anything else?” And the team soldiered on for the better part of a year...