It is somewhat surprising, but contact centers are by their nature among the more complex business processes any enterprise is likely to have to manage. So much is going on in the modern contact center, especially those that service many types of customer needs, that it is very difficult to get a grasp on what is really happening. The need to do so is pressing, and many technology solutions have been proposed over the last dozen or more years, in what has been a flood of innovative attempts to solve the "business intelligence" problem.
As is often the case when things are so demanding that we do our best just to keep our heads above water, there are some common implicit assumptions that tend to push us back down under, struggling for each breath.
One of these is the implied notion that the right way to understand what is really going on with center performance is to ask structured questions that are driven by the design of the center (that is, the design of its call flows, its IVR applications, its business rules, and so forth). It seems self-evident—of course we would want to ask structured questions to measure whether our design is achieving what we intended.
But as is often the case with complex systems, it turns out that self-evident common sense often leads us astray. In fact, there are many important questions we would like to ask about our contact centers but typically don't, because the infrastructure and its associated data design steer us away from asking them.
To clearly see why this is so, consider call flows. In most contact centers, considerable time is spent designing call flows to handle all the differing customer needs in an efficient fashion. Often very large call flow diagrams are developed, and then these designs are coded/configured into routing strategies, ACD vectors, voice/IVR applications and so forth.
It gets even messier when you add in email, chat, instant messaging, social chat, and other newer channels to the mix.
To measure performance, detailed reports are designed for each of the call (or interaction) flows. These reports are built from highly structured data that enables counting the calls that took each flow, the time spent in each leg of the flow, and often the business outcome of the call.
So far, so good, except that this approach rarely measures things that one did not plan for or expect to see. Unfortunately, unplanned flows are commonplace in contact centers; how could it be otherwise? Customers do not care (nor should they have to) how we define our call flows. They do what they want to do, and the more freedom we give them, the more they will like the experience we provide. The situation is even worse on web sites, where users are quite proficient at using navigation tools (back buttons, favorites, history lists, etc.) to traverse sites as they see fit.
These realities call for a new approach to contact center analytics. This approach is based on discovering and measuring all of the patterns that occur, rather than measuring only the patterns that are designed in. In web terms, this means studying the paths people actually take as they traverse a web site, rather than analyzing only expected or designed paths.
This is somewhat familiar in the web world, but it is far less so in the "call flow" world. And the difference in results can be startling. In one real-world contact center where I did some consulting a few years ago, there were two dozen designed call flows. Reports existed for each of them, and many people reviewed those reports routinely. But when I asked the question "What are all the call flows that actually occurred, regardless of whether you designed them in, and how often did each occur?", no one could answer it.
In fact, new tools and a new approach were needed to discover all the flows that occurred.
Again, for emphasis, this is quite distinct from measuring all of the flows that were designed.
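The discovery step itself can be surprisingly simple. As a minimal sketch (the event log schema here is hypothetical; real data would come from ACD, IVR, or routing event records), reassemble each call's ordered sequence of events and count every distinct sequence, rather than counting only against a predefined list of designed flows:

```python
from collections import Counter, defaultdict

# Hypothetical event log: (call_id, step) pairs in time order.
# In practice these would be extracted from ACD/IVR event records.
events = [
    ("c1", "IVR"), ("c1", "Billing Queue"), ("c1", "Agent"),
    ("c2", "IVR"), ("c2", "Billing Queue"), ("c2", "Agent"),
    ("c3", "IVR"), ("c3", "Billing Queue"), ("c3", "Abandon"),
]

# Reassemble each call's actual path from its ordered events.
paths = defaultdict(list)
for call_id, step in events:
    paths[call_id].append(step)

# Count every distinct flow that actually occurred, designed or not.
flow_counts = Counter(tuple(path) for path in paths.values())

for flow, count in flow_counts.most_common():
    print(" -> ".join(flow), count)
```

Nothing in this approach presumes a list of expected flows: whatever paths customers actually took fall out of the counting, which is exactly the point.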
In this real-world contact center, there were over 500 distinct call flows that actually occurred! Not surprisingly, the most-used flows accounted for most of the calls (the 80/20 rule held, more or less). However, one of the top ten flows was completely undesigned, unmeasured, and undesirable. A meaningful fraction of callers to this company were ending up in a "Hotel California" queue: you could check out any time (by abandoning), but you could never leave (that is, reach an agent)! Because the reporting was designed around planned flows, this unplanned flow remained undetected, and these customers remained unserved (whether they remained customers could not be detected either...).
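Once all actually-occurring flows have been discovered and counted, flagging this kind of trap is a simple filter. A sketch, with made-up flow counts and an assumed 5% volume threshold, looking for high-volume flows whose final step is an abandonment:

```python
from collections import Counter

# Hypothetical discovered-flow counts (flow tuple -> number of calls),
# as produced by a flow-discovery pass over the event log.
flow_counts = Counter({
    ("IVR", "Sales Queue", "Agent"): 4200,
    ("IVR", "Billing Queue", "Agent"): 3100,
    ("IVR", "Support Queue", "Abandon"): 900,   # undesigned and unmeasured
    ("IVR", "Sales Queue", "Abandon"): 150,
})

total = sum(flow_counts.values())

# Flag high-volume flows that end in abandonment: candidate
# "Hotel California" queues that designed-in reports would miss.
suspect = [
    (flow, count) for flow, count in flow_counts.most_common()
    if flow[-1] == "Abandon" and count / total >= 0.05
]

for flow, count in suspect:
    print(" -> ".join(flow), f"{count / total:.1%}")
```

In this toy data, the Support Queue abandon path carries roughly a tenth of all calls, yet no designed-in report would ever show it.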
A similar situation likely exists with agent work patterns. Many centers rely heavily on hard scripting or rigid rules about how to handle calls, and these centers usually also maintain significant investments in highly structured reporting to measure compliance with these designed work patterns.
Analyzing performance using only these "as designed" reports is bound to miss many important features of agent work patterns. Some of these will be bad ones, ways in which agents mistakenly or otherwise fail to accomplish what is desired, but many will likely be good patterns that go undetected and unamplified (by teaching them to other agents). Too often we rely on chance to detect these unplanned patterns: someone following up on a complaint (or a compliment) discovers that things are not, after all, going as planned.
The good news? There are techniques for performing unstructured data analysis. Many techniques are well-established today, some of which I helped pioneer when I was at Genesys. And today, at NVM Labs, we are working on an entire cloud-based architecture designed around the need for advanced analytics in modern, multichannel contact centers.
We plan to embed our years of experience in our cloud-based platform so each of our clients will be able to begin the process of developing a deeper level of understanding into the complex dynamics of their customer care processes.
I look forward to providing more details as things develop. In the meantime, each organization should be able to make real improvements by adopting, with the tools at hand, a more exploratory, data-driven (one might even say experiment-driven) approach to its current operational challenges. If you are facing an operational challenge that defies "easy solutions", let us know and we'll be happy to share ideas about new ways to work with your current data, regardless of what platform you are using.