Response Design Corporation: Creating the Uncommon Call Center
Kathryn's Uncommon Call Center Blog
October 30, 2006 12:34 AM
Categories: Measurement 
Correlation versus cause and effect

Mark Twain once said, “There are three kinds of lies: lies, damn lies, and statistics.”

Let me tell you another one of my pet peeves. Statistics are an important component of becoming excellent. But watch out. Watch the correlations that are made between different contact center data elements. When some people compare data and find a correlation, they automatically assume that the two elements have a “cause” and “effect” relationship.

Finding the wrong meaning in correlations can be dangerous. Let’s say I find a statistical correlation between the number of drowning victims at a beach and the number of ice cream cones sold at the same beach. I would be in error to claim, “because of the correlation, we believe that ice cream cones cause drowning.” That’s probably not the story; a better explanation is that on hot days more people come to the beach, so more ice cream is purchased and more people are in the water, exposed to drowning risk.

The second problem with correlations is that people seldom address what is causing what. A correlation doesn’t indicate which factor is the “cause” and which is the “effect.” One of our studies showed a positive correlation between marketing a contact center to the rest of the enterprise and lower agent turnover. But, we didn’t do additional work to identify whether “marketing” was the cause and “lower turnover” the effect or the other way around. Sometimes you can “guess” at the direction of the “cause / effect” based on insight, experience, and logic but you never really know for sure without further research.

We should watch the data source and understand how the researchers gathered the data and drew their conclusions. And researchers need to understand the complexity of the call center environment. Without that understanding, they may draw the wrong conclusions.

Once, when I was managing a contact center, a customer complained to my boss about receiving a busy signal when she called. This boss saw a positive correlation between busy signals and customer complaints. He called me in and strongly suggested that I add more lines so no customer would ever get a busy signal again. I told him that I would be glad to, but we would probably need to add more agents. If we did not, we would be trading one problem for another -- the customer would have to wait on hold longer. My boss saw the correlation, but didn’t understand all the contact center linkages and complexities. What seemed to him like a quick fix was much more complex.
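The linkage my boss missed can be sketched with a textbook M/M/c/K queueing model — my illustration, not anything we actually ran for him. Hold the agent count fixed, keep adding lines, and busy signals turn into longer waits:

```python
from math import factorial

def mmck(lam, mu, c, K):
    """Stationary metrics for an M/M/c/K queue:
    c agents, K total lines (in service plus on hold)."""
    a = lam / mu  # offered load in erlangs
    # Unnormalized stationary probabilities for n calls in the system.
    p = [a**n / factorial(n) if n <= c
         else a**n / (factorial(c) * c**(n - c))
         for n in range(K + 1)]
    total = sum(p)
    p = [x / total for x in p]
    blocked = p[K]  # probability a caller hits a busy signal
    lq = sum((n - c) * p[n] for n in range(c + 1, K + 1))
    wait = lq / (lam * (1 - blocked))  # avg wait of admitted calls (hours)
    return blocked, wait

# 100 calls/hr, 4-minute handle time (15 calls/hr per agent), 8 agents.
for lines in (10, 16, 30):
    b, w = mmck(100, 15, 8, lines)
    print(f"{lines:2d} lines: {b:.1%} busy signals, {w * 60:.1f} min avg wait")
```

With agents held constant, more lines means fewer busy signals but longer holds — exactly the trade I warned him about.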

Entry logged at 12:34 AM
October 23, 2006 12:31 AM
Categories: Measurement 
Gaps versus waste

All of us want to be fiscally responsible. That means we have to invest our resources where they count. Sometimes, we invest in addressing problems that are “easy” to solve, but don’t gain much value in the process. Other times, we invest in areas that we think are important, but we can’t really back up our thoughts with hard data.

I was in a meeting with an executive who said he didn’t want any customer to wait longer than 20 seconds in queue. He thought that no caller on any day or during any time of day should wait longer than 20 seconds for an agent. So we told him we would put together a financial model to help him understand his initial investment. Guess what. When we ran the model for his 50-seat call center, we found that he needed an initial investment of more than $5 million. We told him, “The difference between your peak and average-hour call volume indicates we would need to triple capacity to come close to making this happen.” After he recovered, we asked him why he wanted to do this. He said he thought it would look good in a marketing brochure.
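You can reproduce the flavor of that model with the classic Erlang C staffing formula. All the numbers below are invented for illustration — they are not the figures from his center:

```python
from math import exp, factorial

def erlang_c(a, c):
    """Erlang C: probability an arriving call must wait
    (offered load a erlangs, c agents, requires c > a)."""
    num = a**c / factorial(c) * c / (c - a)
    den = sum(a**n / factorial(n) for n in range(c)) + num
    return num / den

def agents_needed(lam, mu, t, target):
    """Smallest agent count whose service level
    (fraction of calls answered within t hours) meets the target."""
    c = int(lam / mu) + 1  # staffing must exceed the offered load
    while True:
        pw = erlang_c(lam / mu, c)
        service_level = 1 - pw * exp(-(c * mu - lam) * t)
        if service_level >= target:
            return c
        c += 1

twenty_sec = 20 / 3600  # 20 seconds in hours
avg = agents_needed(300, 10, twenty_sec, 0.80)    # average hour, 80/20 goal
peak = agents_needed(900, 10, twenty_sec, 0.999)  # peak hour, "no one waits"
print(f"average-hour staffing: {avg}, near-zero-wait peak staffing: {peak}")
```

Even with made-up numbers, chasing a near-perfect speed of answer at peak multiplies the staff — and the budget — that an ordinary service-level goal would require.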

We talked about what was important to his customers. He didn’t know whether the current wait time was an issue. We conducted a customer survey and discovered that 20 percent of his orders were shipped with missing or incorrect items. The customers really wanted him to invest in fixing the shipping problems!

When he did remedy the shipping issues, guess what happened to his call volume and his wait times?

Right. They fell because irate customers weren’t calling about incorrect orders anymore.

This is a perfect example of gaps and waste. It would have been a waste of money to invest in the 20-second wait time. The real gap, in the customers’ minds, was getting the orders right. Investing in fixing this gap would give the executive far greater return.

Be careful about gaps and waste. Gaps occur when we aren’t meeting a need and waste occurs when we are providing something no one values. Investing in what is of value is a smart use of our precious resources.

October 16, 2006 12:29 AM
Categories: Measurement 
Best-practice benchmarking versus surveys - Part 2

Surveys can be problematic. You know the old story about the lemmings that follow each other over a cliff to their death? A survey may tell you that everyone is heading a certain way, but only if you look hard enough can you discern where they are all going.

Let me illustrate with a story from my recent past. I was in Philadelphia driving to a meeting and could not find a parking place anywhere. Finally, I found a lot that wasn’t full. So, I pulled in. When I did, the parking lot attendant ran up to my car yelling hysterically. He kept pointing to the “entrance” to the lot, and yelling at me that I had come in the wrong way and had just run over the spikes. (You know the ones; they are the in-ground “teeth” that keep people from stealing a car.)

Needless to say, all I could think was, “Great, not only am I late, but I’ll also have four flat tires when I return from the meeting.” I glanced in the rearview mirror and noticed that another car had followed me in the same “wrong” way. The parking lot attendant ran to the driver of the other car to alert him of the danger. The other driver pointed at me and yelled, “But I followed her!” To which the parking lot attendant yelled right back, “Yes, and she ran over the spikes as well!”

To me, this is a great story about the difference between following someone you know has mastered a best practice and someone who is simply “feeling” his or her way through trial and error. (By the way, I had no damage to my tires; the man who pulled in behind me had four flats.)

So, be careful. Know when you need to benchmark (and how) and when a survey will serve you just fine. Don’t drive four hours for nothing, and please don’t fall over the edge of a cliff!

October 9, 2006 12:27 AM
Categories: Measurement 
Best-practice benchmarking versus surveys - Part 1

Let me tell you one of my pet peeves. Many people don’t know the difference between best-practice benchmarking and surveying, and they apply the results of each in all the wrong ways.

Best practice benchmarking is conducted to find best practices. (That’s an eye opener, I’ll bet!) Surveys, on the other hand, simply tell you what other call centers are doing. It may or may not be a best practice.

A best practice is a double-edged sword. What seems to be a best practice for one call center may not be for the next. Not all best practices make fiscal sense to a call center in its quest for excellence. Companies can no longer pursue “best-in-class” without demonstrating fiscal responsibility.

When benchmarking first came on the scene, people were excited and wanted to benchmark everything. Benchmarking nay-sayers claimed that the benchmarking teams were engaged in “industrial tourism.” People were visiting contact centers just to get out of the office -- and maybe get a few good ideas. Strict preparation was seldom done and agendas were seldom followed.

I spoke with Sally just after she had been promoted to contact center director. She wanted, in the worst way, to visit another center. She finally found another center manager who was willing to let her visit. Sally tried to sell the trip to her management as a benchmarking opportunity.

They didn’t buy it.

Well, Sally decided to go on her own. She drove four hours to the other center. When she arrived, the other director graciously ushered Sally into his office. As they settled in, the center director politely asked Sally what her questions were. Sally hadn’t thought to write them down. She was able to think of some on the spur of the moment. Sally continued to emphasize that she really wanted to “see” the call center.

After 60 agonizing minutes, the center director finally said, “Well, you want to go take a look?” Sally was elated. They both stood up to walk out the office door. Immediately after crossing the threshold, the director stopped, made a sweeping gesture with his right hand and asked, “Well, how do you like it?” Sally was sure there would be more, but after a couple of very quiet moments during which the director didn’t move from the doorway, she answered, “It’s very nice.” The director commented, “Yes, we think so.” The director then returned to his desk to wrap things up. Sally thought, “This is it? This is what I took a day off work for and drove four hours to experience?”

No, Sally. There is so much more – especially in the preparation!

October 2, 2006 12:26 AM
Categories: Measurement 
Indicator versus diagnostic metrics

Most people measure the wrong things in the wrong timeframe. Sometimes, we measure tactical actions when we should be looking at results. How many actions do your agents take to produce the desired result? Wouldn’t it be easier to simply measure the result, and, if the result is not as expected, measure the actions?

Our indicator metrics need to keep us focused on the right things. If an indicator metric performance is weak, then we can dig deeper into what I call “diagnostic metrics.” These diagnostic metrics are usually a pitfall for most organizations (like the company measuring more than 100 metrics per agent per day). If you focus on diagnostic metrics, you never have time to change. You spend all your time looking at the numbers trying to figure out what they are telling you.
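One way to operationalize this is to look at diagnostic metrics only when an indicator misses its target. The metric names and thresholds below are hypothetical — a sketch of the idea, not the author’s toolset:

```python
# Hypothetical indicator targets; in practice each center picks its own.
INDICATOR_TARGETS = {"service_level": 0.80, "first_call_resolution": 0.70}

def review(indicators, diagnostics):
    """Return drill-down findings only for indicators below target."""
    findings = []
    for name, target in INDICATOR_TARGETS.items():
        value = indicators[name]
        if value < target:
            # Indicator is weak: now (and only now) dig into diagnostics.
            detail = diagnostics.get(name, {})
            findings.append((name, value, detail))
    return findings

day = {"service_level": 0.72, "first_call_resolution": 0.75}
detail = {"service_level": {"avg_handle_time_min": 7.2, "occupancy": 0.94}}
print(review(day, detail))
```

Only the weak indicator triggers a look at its diagnostics; the healthy one generates no reports to wade through.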

The concept of indicator and diagnostic metrics holds true for every aspect of the contact center – agents, operations, service level, etc. All contact centers I have worked with (no matter how unique or complex) have indicator metrics. Figuring out what these indicators are takes time, but it is time well spent.

People look at micro-level (diagnostic) metrics when they don't need to. If you are examining a mountain of reports all day long, you'll never have time for strategic thinking nor will you have time to invest in your people. The answer lies in knowing when to look at certain data and when to ignore it.
